
Calyx Institute: A Case Study in Grassroots Innovation

By: Rory Mir
April 3, 2025 at 09:36

Technologists play a huge role in building alternative tools and resources when our rights to privacy and security are undermined by governments and major corporations. This direct resistance ensures that even in the face of powerful adversaries, communities can find some safety and autonomy through community-built tools.

One of the most renowned names in this work is the Calyx Institute, a New York-based 501(c)(3) nonprofit founded by Nicholas Merrill after a successful and influential constitutional challenge to the National Security Letter (NSL) statute in the USA Patriot Act. Today, Calyx’s mission is to defend digital privacy, advance connectivity, and strive for a future where everyone has access to the resources and tools they need to remain securely connected. Their work is made possible thanks to the generous donations of more than 12,000 grassroots members.

More recently, Calyx joined EFF’s network of grassroots organizations across the US, the Electronic Frontier Alliance (EFA). Members of the alliance are not-for-profit local organizations dedicated to EFA’s five guiding principles: privacy, free expression, access to knowledge, creativity, and security. Calyx has since been an exceptional ally, lifting up and collaborating with fellow members.

If you’re inspired by Calyx to start making a difference in your community, you can get started with our organizer toolkits. Once you’re ready, we hope you consider applying to join the alliance.

We corresponded with Calyx over email to discuss the group's ambitious work and what the future holds. Here are excerpts from our conversation:

Thanks for chatting with us, to get started could you tell us a bit about Calyx’s current work?

Calyx focuses on three areas: (1) developing a privacy-respecting software ecosystem, (2) bridging the digital divide with affordable internet access, and (3) sustaining our community through grants, research, and educational initiatives.

We build and maintain a digital ecosystem of free and open-source software (FOSS) centering on CalyxOS, an Android operating system that encrypts communications, combats invasive metadata collection, and protects users from geolocation tracking. The Calyx Internet Membership Program offers mobile hotspots so people have a way to stay connected despite limited resources or a lack of viable alternatives. Finally, Calyx actively engages with diverse stakeholder groups to build a shared understanding of privacy and expand digital-security literacy, and provides grants to directly support aligned organizations. By partnering with our peers, funders, and service providers, we hope to drive collective action toward a privacy-and-rights-respecting future of technology.

Calyx projects work with a wide range of technologies. What are some barriers Calyx runs into in this work?

Our biggest challenge is one shared by many tech communities, particularly FOSS advocates: it is difficult to balance privacy and security with usability in tool development. On the one hand, the current data-mining business model of the tech sector makes it extremely hard to provide FOSS solutions to proprietary tech while keeping the tool intuitive and easy to use. On the other, there is a general lack of momentum for funding and growing an alternative digital ecosystem.

As a result, many digital rights enthusiasts are left with scarce resources and a narrow space within which to work on technical solutions. We need more people to work together and collectively advocate for a privacy-respecting tech ecosystem that cares about all communities and does not marginalize anyone.

Take CalyxOS, for example. Before it became a tangible project, our founder Nick spent years thinking about an alternative mobile operating system that put privacy first. Back in 2012, Nick spoke to Moxie Marlinspike, the creator of the Signal messaging app, about his idea. Moxie shared several valid concerns that almost led Nick to stop working on it. Fortunately, these warnings, which came from Moxie’s experience and success with Signal, made Nick even more determined, and he recruited an expert global team to help realize his idea.

What do you see as the role of technologists in defending civil liberties with local communities?

Technologists are enablers—they build tools and technical infrastructures, fundamental parts of the digital ecosystem within which people exercise their rights and enjoy their lives. A healthy digital ecosystem consists of technologies that liberate people. It is an arena where people willingly and actively connect and share their expertise, confident in the shared protocols that protect everyone’s rights and dignity. That is why Calyx builds and advocates for people-centered, privacy-focused FOSS tools.

How has Calyx supported folks in NYC? What have you learned from it?

It’s a real privilege to be part of the NYC tech community, which has such a wealth of technologists, policy experts, human rights watchdogs, and grassroots activists. In recent years, we joined efforts led by multiple networks and organizations to mobilize against unjustifiable mass surveillance and other digital threats faced by millions of people of color, immigrants, and other underrepresented groups.

We’re particularly proud of the support we provided to another EFA member, the Surveillance Technology Oversight Project, on its Ban the Scan campaign to ban facial recognition in NYC, and to CryptoHarlem to sustain its work bringing digital privacy and cybersecurity education to communities in Harlem and beyond. Most recently, we funded Sunset Spark—a small nonprofit offering free education in science and technology in the heart of Brooklyn—to develop a multipurpose curriculum focused on privacy, internet infrastructure, and the roles of the public and private sectors in our digital world.

These experiences deeply inspired us to shape a funding philosophy that centers the needs of organizations and groups with limited resources, helps local communities break barriers and build capacity, and grows reciprocal relationships between each member of the community.

You mentioned a grantmaking program, which is a really unique project for an EFA member. Could you tell us a bit about your theory of change for the program?

Since 2020, the Calyx Institute has been funding the development of digital privacy and security tools, research on mass surveillance systems, and training efforts to equip people with the knowledge and tools they need to protect their right to privacy and connectivity. In 2022, Calyx launched the Fusion Center Research Fund to aid investigations into law enforcement harvesting of personal data through intelligence-sharing centers. This effort, with nearly $200,000 disbursed to grantees, helped reveal the deleterious impact of surveillance technology on privacy and freedom of expression.

These efforts have led to the Sepal Fund, Calyx’s pilot program to offer small groups unrestricted and holistic grants. This program will provide five organizations, collectives, or projects a yearly grant of up to $50,000 for a total of three years. In addition, we will provide our grantees opportunities for professional development, as well as other resources. Through this program, we hope to sustain and elevate research, tool development, and education that will support digital privacy and defend internet freedom.


Could you tell us a bit about how people can get involved?

All our projects are, at their core, community projects, and we welcome insights and involvement from anyone to whom our work is relevant. CalyxOS offers a variety of ways to connect, including a CalyxOS Matrix room and GitLab repository where users and programmers interact in real time to troubleshoot and discuss improvements. Part of making CalyxOS accessible is ensuring that it’s as widely available as possible, so anyone who would like to be part of that translation and localization effort should visit our Weblate site.

What does the future look like for Calyx?

We are hoping that the future holds big things for us, like CalyxOS builds for more affordable and globally available mobile devices, so that people in different locations and with varied resources can equally enjoy the right to privacy. We are also looking forward to updating our visual communication—we have been “substance over style” for so long that it will be exciting to see how a refreshed look will help us reach new audiences.

Finally, what’s your “moonshot”? What’s the ideal future Calyx wants to build?

The Calyx dream is accessible digital privacy, security, and connectivity for all, regardless of budget or tech background, centering communities that are most in need.

We want a future where everyone has access to the resources and tools they need to remain securely connected. To get there, we’ll need to work on building a lot of capacity, both technological and informational. Great tools can only fulfill their purpose if people know why and how to use them. Creating those tools and spreading the word about them requires collaboration, and we are proud to be working toward that goal alongside all the organizations that make up the EFA.

Our thanks to the Calyx Institute for their continued efforts to build private and secure tools for targeted groups, in New York City and across the globe. You can find and support other Electronic Frontier Alliance-affiliated groups near you by visiting eff.org/fight.

Site-Blocking Legislation Is Back. It’s Still a Terrible Idea.

By: Joe Mullin
April 2, 2025 at 11:53

More than a decade ago, Congress tried to pass SOPA and PIPA—two sweeping bills that would have allowed the government and copyright holders to quickly shut down entire websites based on allegations of piracy. The backlash was immediate and massive. Internet users, free speech advocates, and tech companies flooded lawmakers with protests, culminating in an “Internet Blackout” on January 18, 2012. Turns out, Americans don’t like government-run internet blacklists. The bills were ultimately shelved. 

Thirteen years later, as institutional memory fades and appetite for opposition wanes, members of Congress in both parties are ready to try this again. 

The Foreign Anti-Digital Piracy Act (FADPA), along with at least one other bill still in draft form, would revive this reckless strategy. These new proposals would let rights holders get federal court orders forcing ISPs and DNS providers to block entire websites based on accusations of infringing copyright. Lawmakers claim they’re targeting “pirate” sites—but what they’re really doing is building an internet kill switch.

These bills are an unequivocal and serious threat to a free and open internet. EFF and our supporters are going to fight back against them. 

Site-Blocking Doesn’t Work and Never Will

Today, many websites are hosted on cloud infrastructure or use shared IP addresses. Blocking one target can mean blocking thousands of unrelated sites. That kind of digital collateral damage has already happened in Austria, Russia, and the US.

Site-blocking is both dangerously blunt and trivially easy to evade. Determined evaders can create the same content on a new domain within hours. Users who want to see blocked content can fire up a VPN or change a single DNS setting to get back online. 

These workarounds aren’t just popular—they’re essential tools in countries that suppress dissent. It’s shocking that Congress is on the verge of forcing Americans to rely on the same workarounds that internet users in authoritarian regimes depend on just to reach wrongly blocked content. It will force Americans to rely on riskier, less trustworthy online services.

Site-Blocking Silences Speech Without a Defense

The First Amendment should not take a back seat because giant media companies want the ability to shut down websites faster. But these bills wrongly treat broad takedowns as a routine legal process. Most cases would be decided in ex parte proceedings, with no one there to defend the site being blocked. This is more than a shortcut–it skips due process entirely. 

Users affected by a block often have no idea what happened. A blocked site may just look broken, like a glitch or an outage. Law-abiding publishers and users lose access, and diagnosing the problem is difficult. Site-blocking techniques are the bluntest of instruments, and they almost always punish innocent bystanders. 

The copyright industries pushing these bills know that site-blocking is not a narrowly tailored fix for a piracy epidemic. The entertainment industry is booming right now, blowing past its pre-COVID projections. Site-blocking legislation is an attempt to build a new American censorship system by letting private actors get dangerous infrastructure-level control over internet access. 

EFF and the Public Will Push Back

FADPA is already on the table. More bills are coming. The question is whether lawmakers remember what happened the last time they tried to mess with the foundations of the open web. 

If they don’t, they’re going to find out the hard way. Again. 

Site-blocking laws are dangerous, unnecessary, and ineffective. Lawmakers need to hear—loud and clear—that Americans don’t support government-mandated internet censorship. Not for copyright enforcement. Not for anything.

New USPTO Memo Makes Fighting Patent Trolls Even Harder

By: Joe Mullin
March 21, 2025 at 14:49

The U.S. Patent and Trademark Office (USPTO) just made a move that will protect bad patents at the expense of everyone else. In a memo released February 28, the USPTO further restricted access to inter partes review, or IPR—the process Congress created to let the public challenge invalid patents without having to wage million-dollar court battles.

If left unchecked, this decision will shield bad patents from scrutiny, embolden patent trolls, and make it even easier for hedge funds and large corporations to weaponize weak patents against small businesses and developers.

IPR Exists Because the Patent Office Makes Mistakes

The USPTO grants over 300,000 patents a year, but many of them should not have been issued in the first place. Patent examiners spend, on average, around 20 hours per patent, often missing key prior art or granting patents that are overly broad or vague. That’s how bogus patents on basic ideas—like podcasting, online shopping carts, or watching ads online—have ended up in court.

Congress created IPR in 2012 to fix this problem. IPR allows anyone to challenge a patent’s validity based on prior art, and it’s done before specialized judges at the USPTO, where experts can re-evaluate whether a patent was properly granted. It’s faster, cheaper, and often fairer than fighting it out in federal court.

The USPTO Is Blocking Patent Challenges—Again

Instead of defending IPR, the USPTO is working to sabotage it. The February 28 memo reinstates a rule that allows for widespread use of “discretionary denials.” That’s when the Patent Trial and Appeal Board (PTAB) refuses to hear an IPR case for procedural reasons—even if the patent is likely invalid. 

The February 28 memo reinstates widespread use of the Apple v. Fintiv rule, under which the USPTO often rejected IPR petitions whenever there was an ongoing district court case involving the same patent. This is backwards. If anything, an active lawsuit is proof that a patent’s validity needs to be reviewed—not an excuse to dodge the issue.

In 2022, former USPTO Director Kathi Vidal issued a memo making clear that the PTAB should hear patent challenges when “a petition presents compelling evidence of unpatentability,” even if there is parallel court litigation. 

That 2022 guidance essentially saved the IPR system. Once PTAB judges were told to consider all petitions that showed “compelling evidence,” the procedural denials dropped to almost nothing. This February 28 memo signals that the USPTO will once again use discretionary denials to sharply limit access to IPR—effectively making patent challenges harder across the board.  

Discretionary Denials Let Patent Trolls Rig the System

The top beneficiary of this decision will be patent trolls, shell companies formed expressly for the purpose of filing patent lawsuits. Often patent trolls seek to extract a quick settlement before a patent can be challenged. With IPR becoming increasingly unavailable, that will be easier than ever. 

Patent owners know that discretionary denials will block IPRs if they file a lawsuit first. That’s why trolls flock to specific courts, like the Western District of Texas, where judges move cases quickly and rarely rule against patent owners.

By filing lawsuits in these troll-friendly courts, patent owners can game the system—forcing companies to pay up rather than risk millions in litigation costs.

The recent USPTO memo makes this problem even worse. Instead of stopping the abuse of discretionary denials, the USPTO is doubling down—undermining one of the most effective ways businesses, developers, and consumers can fight back against bad patents.

Congress Created IPR to Protect the Public—Not Just Patent Owners

The USPTO doesn’t get to rewrite the law. Congress passed IPR to ensure that weak patents don’t become weapons for extortionary lawsuits. By reinforcing discretionary denials with minimal restrictions, and, as a result, blocking access to IPRs, the USPTO is directly undermining what Congress intended.

Leaders at the USPTO should immediately revoke the February 28 memo. If they refuse, then, as we pointed out the last time IPR denials spiraled out of control, it’s time for Congress to step in and fix this. Lawmakers must ensure that IPR remains a fast, affordable way to challenge bad patents—not just a tool for the largest corporations. Patent quality matters—because when bad patents stand, we all pay the price.

California’s A.B. 412: A Bill That Could Crush Startups and Cement A Big Tech AI Monopoly

By: Joe Mullin
March 17, 2025 at 18:55

California legislators have begun debating a bill (A.B. 412) that would require AI developers to track and disclose every registered copyrighted work used in AI training. At first glance, this might sound like a reasonable step toward transparency. But it’s an impossible standard that could crush small AI startups and developers while giving big tech firms even more power.

A Burden That Small Developers Can’t Bear

The AI landscape is in danger of being dominated by large companies with deep pockets. These big names are in the news almost daily. But they’re far from the only ones – there are dozens of AI companies with fewer than 10 employees trying to build something new in a particular niche. 

This bill demands that creators of any AI model, even a two-person company or a hobbyist tinkering with a small software build, identify copyrighted materials used in training. That requirement will be incredibly onerous, even if limited just to works registered with the U.S. Copyright Office. The registration system is a cumbersome beast at best: neither machine-readable nor accessible, it’s more like a card catalog than a database, and it doesn’t offer enough information to identify all the authors of a work, much less help developers reliably match works in a training set to works in the system.

Even for major tech companies, meeting these new obligations would be a daunting task. For a small startup, such an impossible requirement could be a death sentence. If A.B. 412 becomes law, these smaller players will be forced to devote scarce resources to an unworkable compliance regime instead of focusing on development and innovation. The risk of lawsuits—potentially from copyright trolls—would discourage new startups from even attempting to enter the field.

A.I. Training Is Like Reading, and It’s Very Likely Fair Use

A.B. 412 starts from a premise that’s both untrue and harmful to the public interest: that reading, scraping or searching of open web content shouldn’t be allowed without payment. In reality, courts should, and we believe will, find that the great majority of this activity is fair use. 

It’s now a bedrock principle of internet law that some forms of copying content online are transformative, and thus legal fair use. That includes reproducing thumbnail images for image search, or snippets of text to search books.

The U.S. copyright system is meant to balance innovation with creator rights, and courts are still working through how copyright applies to AI training. In most of the AI cases, courts have yet to consider—let alone decide—how fair use applies. A.B. 412 jumps the gun, preempting this process and imposing a vague, overly broad standard that will do more harm than good.

Importantly, those key court cases are all federal. The U.S. Constitution makes it clear that copyright is governed by federal law, and A.B. 412 improperly attempts to impose state-level copyright regulations on an issue still in flux. 

A.B. 412 Is A Gift to Big Tech

The irony of A.B. 412 is that it won’t stop AI development—it will simply consolidate it in the hands of the largest corporations. Big tech firms already have the resources to navigate complex legal and regulatory environments, and they can afford to comply (or at least appear to comply) with A.B. 412’s burdensome requirements. Small developers, on the other hand, will either be forced out of the market or driven into partnerships where they lose their independence. The result will be less competition, fewer innovations, and a tech landscape even more dominated by a handful of massive companies.

If lawmakers are able to iron out some of the practical problems with A.B. 412 and pass some version of it, they may be able to force programmers to research, and effectively pay off, copyright owners before they even write a line of code. If that’s the outcome in California, Big Tech will not despair. They’ll celebrate. Only a few companies own large content libraries or can afford to license enough material to build a deep learning model. The possibilities for startups and small programmers will be so meager, and competition will be so limited, that profits for big incumbent companies will be locked in for a generation.

If you are a California resident and want to speak out about A.B. 412, you can find and contact your legislators through this website.

EFF to NSF: AI Action Plan Must Put People First

By: Rory Mir
March 13, 2025 at 18:53

This past January, the new administration issued an executive order on Artificial Intelligence (AI), taking the place of the now-rescinded Biden-era order, calling for a new AI Action Plan tasked with “unburdening” the current AI industry to stoke innovation and removing “engineered social agendas” from the industry. The president’s new action plan is currently being developed, with public comments open through the National Science Foundation (NSF).

EFF answered with a few clear points: First, government procurement of automated decision-making (ADM) technologies must be done with transparency and public accountability—no secret and untested algorithms should decide who keeps their job or who is denied safe haven in the United States. Second, generative AI policy rules must be narrowly focused and proportionate to actual harms, with an eye on protecting other public interests. And finally, we shouldn't entrench the biggest companies and gatekeepers with AI licensing schemes.

Government Automated Decision Making

US procurement of AI has moved with remarkable speed and an alarming lack of transparency. By wasting money on systems with no proven track record, this procurement not only entrenches the largest AI companies, but risks infringing the civil liberties of all people subject to these automated decisions.

These harms aren’t theoretical: we have already seen a move to adopt experimental AI tools in policing and national security, including immigration enforcement. Recent reports also indicate the Department of Government Efficiency (DOGE) intends to apply AI to evaluate federal workers, and use the results to make decisions about their continued employment.

Automating important decisions about people is reckless and dangerous. At best, these new AI tools are ineffective nonsense machines that require more labor to correct their inaccuracies; at worst, they produce irrational and discriminatory outcomes obscured by the black-box nature of the technology.

Instead, the adoption of such tools must be done with a robust public notice-and-comment practice as required by the Administrative Procedure Act. This process helps weed out wasteful spending on AI snake oil, and identifies when the use of such AI tools is inappropriate or harmful.

Additionally, the AI action plan should favor tools developed under the principles of free and open-source software. These principles are essential for evaluating the efficacy of these models and for ensuring they uphold a fairer and more scientific development process. Furthermore, more open development stokes innovation and ensures public spending ultimately benefits the public—not just the most established companies.

Don’t Enable Powerful Gatekeepers

Spurred by the general anxiety about Generative AI, lawmakers have drafted sweeping regulations based on speculation, and with little regard for the multiple public interests at stake. Though there are legitimate concerns, this reactionary approach to policy is exactly what we warned against back in 2023.

For example, bills like NO FAKES and NO AI Fraud expand copyright laws to favor corporate giants over everyone else’s expression. NO FAKES even includes a scheme for a DMCA-like notice-and-takedown process, long bemoaned by creatives online for encouraging broader and automated online censorship. Other policymakers propose technical requirements like watermarking that are riddled with practical points of failure.

Among these dubious solutions is the growing prominence of AI licensing schemes which limit the potential of AI development to the highest bidders. This intrusion on fair use creates a paywall protecting only the biggest tech and media publishing companies—cutting out the actual creators these licenses nominally protect. It’s like helping a bullied kid by giving them more lunch money to give their bully.

This is the wrong approach. Reaching for easy solutions like expanded copyright hurts everyone, particularly smaller artists, researchers, and businesses who cannot compete with the big gatekeepers of the industry. AI has threatened the fair pay and treatment of creative labor, but sacrificing secondary use doesn’t remedy the underlying imbalance of power between labor and oligopolies.

People have a right to engage with culture and express themselves unburdened by private cartels. Policymakers should focus on narrowly crafted policies to preserve these rights, and keep rulemaking constrained to tested solutions addressing actual harms.

You can read our comments here.

Anti-Surveillance Mapmaker Refuses Flock Safety's Cease and Desist Demand

Flock Safety loves to crow about the thousands of local law enforcement agencies around the United States that have adopted its avian-themed automated license plate readers (ALPRs). But when a privacy activist launched a website to map out the exact locations of these pole-mounted devices, the company tried to clip his wings.  

The company sent DeFlock.me and its creator Will Freeman a cease-and-desist letter, claiming that the project dilutes its trademark. Suffice it to say, and to lean into ornithological wordplay, the letter is birdcage liner.  

Representing Freeman, EFF sent Flock Safety a letter rejecting the demand, pointing out that the grassroots project is well within its First Amendment rights.  

Flock Safety’s car-tracking cameras have been spreading across the United States like an invasive species, preying on public safety fears and gobbling up massive amounts of sensitive driver data. The technology not only tracks vehicles by their license plates, but also creates “fingerprints” of each vehicle, including the make, model, color and other distinguishing features. This is a mass surveillance technology that collects information on everyone, regardless of whether they are connected to a crime. It has been misused by police to spy on their ex-partners and could be used to target people engaged in First Amendment activities or seeking medical care.  

Through crowdsourcing and open-source research, DeFlock.me aims to “shine a light on the widespread use of ALPR technology, raise awareness about the threats it poses to personal privacy and civil liberties, and empower the public to take action.”  While EFF’s Atlas of Surveillance project has identified more than 1,700 agencies using ALPRs, DeFlock has mapped out more than 16,000 individual camera locations, more than a third of which are Flock Safety devices.  

Flock Safety is so integrated into law enforcement that it’s not uncommon to see law enforcement agencies actually promoting the company by name on their websites. The Sussex County Sheriff’s website in Virginia has only two items in its menu bar: Accident Reports and Flock Safety. The name “DeFlock,” EFF told the vendor, represents the project’s goal of “ending ALPR usage and Flock’s status as one of the most widely used ALPR providers.” It’s accurate, appropriate, effective, and most importantly, legally protected.

 We wrote:  

Your claims of dilution by blurring and/or tarnishment fail at the threshold, without even needing to address why dilution is unlikely. Federal anti-dilution law includes express carve-outs for any noncommercial use of a mark and for any use in connection with criticizing or commenting on the mark owner or its products. Mr. Freeman’s use of the name “DeFlock” is both.

Flock Safety’s cease-and-desist letter is just the latest example of a group turning to bogus intellectual property claims to silence its critics. Frequently, these have no legal basis and are designed to frighten under-resourced activists and advocacy groups with high-powered law firm letterheads. EFF is here to stand up against these trademark bullies, and in the case of Flock Safety, flip them the bird.

It's Copyright Week 2025: Join Us in the Fight for Better Copyright Law and Policy

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation.

One of the unintended consequences of the internet is that more of us than ever are aware of how much of our lives is affected by copyright. People see their favorite YouTuber’s video get removed or re-edited due to copyright. People know they can’t tinker with or fix their devices. And people have realized, and are angry about, the fact that they don’t own much of the media they have paid for.  

All of this is to say that copyright is no longer—if it ever was—a niche concern of certain industries. As corporations have pushed to expand copyright, they have made it everyone’s problem. And that means they don’t get to make the law in secret anymore. 

Twelve years ago, a diverse coalition of Internet users, non-profit groups, and Internet companies defeated the Stop Online Piracy Act (SOPA) and the PROTECT IP Act (PIPA), bills that would have forced Internet companies to blacklist and block websites accused of hosting copyright infringing content. These were bills that would have made censorship very easy, all in the name of copyright protection. 

As people raise more and more concerns about the major technology companies that control our online lives, it’s important not to fall into the trap of thinking that copyright will save us. As SOPA/PIPA reminds us: expanding copyright serves the gatekeepers, not the users.  

We continue to fight for a version of copyright that does what it is supposed to. And so, every year, EFF and a number of diverse organizations participate in Copyright Week. Each year, we pick five copyright issues to highlight and advocate a set of principles of copyright law. This year’s issues are: 

  • Monday: Copyright Policy Should Be Made in the Open With Input From Everyone: Copyright is not a niche concern. It affects everyone’s experience online, therefore laws and policy should be made in the open and with users’ concerns represented and taken into account. 
  • Tuesday: Copyright Enforcement as a Tool of Censorship: Freedom of expression is a fundamental human right essential to a functioning democracy. Copyright should encourage more speech, not act as a legal cudgel to silence it.  
  • Wednesday: Device and Digital Ownership: As the things we buy increasingly exist either in digital form or as devices with software, we also find ourselves subject to onerous licensing agreements and technological restrictions. If you buy something, you should be able to truly own it – meaning you can learn how it works, repair it, remove unwanted features, or tinker with it to make it work in a new way.  
  • Thursday: The Preservation and Sharing of Information and Culture: Copyright often blocks the preservation and sharing of information and culture, traditionally in the public interest. Copyright law and policy should encourage and not discourage the saving and sharing of information. 
  • Friday: Free Expression and Fair Use: Copyright policy should encourage creativity, not hamper it. Fair use makes it possible for us to comment, criticize, and rework our common culture.  

Every day this week, we’ll be sharing links to blog posts on these topics at https://www.eff.org/copyrightweek. 

Platforms Systematically Removed a User Because He Made "Most Wanted CEO" Playing Cards

By: Jason Kelley
January 14, 2025 at 12:33

On December 14, James Harr, the owner of an online store called ComradeWorkwear, announced on social media that he planned to sell a deck of “Most Wanted CEO” playing cards, satirizing the infamous “Most-wanted Iraqi playing cards” introduced by the U.S. Defense Intelligence Agency in 2003. Per the ComradeWorkwear website, the Most Wanted CEO cards would offer “a critique of the capitalist machine that sacrifices people and planet for profit,” and “Unmask the oligarchs, CEOs, and profiteers who rule our world...From real estate moguls to weapons manufacturers.”  

But within a day of posting his plans for the card deck to his combined 100,000 followers on Instagram and TikTok, the New York Post ran a front page story on Harr, calling the cards “disturbing.” Less than 5 hours later, officers from the New York City Police Department came to Harr's door to interview him. They gave no indication he had done anything illegal or would receive any further scrutiny, but the next day the New York police commissioner held the New York Post story up during a press conference after announcing charges against Luigi Mangione, the alleged assassin of UnitedHealth Group CEO Brian Thompson. Shortly thereafter, platforms from TikTok to Shopify disabled both the company’s accounts and Harr’s personal accounts, simply because he used the moment to highlight what he saw as the harms that large corporations and their CEOs cause.

Harr was not alone. After the assassination, thousands of people took to social media to express their negative experiences with the healthcare industry, speculate about who was behind the murder, and show their sympathy for either the victim or the shooter—if social media platforms allowed them to do so. Many users reported having their accounts banned and content removed after sharing comments about Luigi Mangione, Thompson's alleged assassin. TikTok, for example, reportedly removed comments that simply said, "Free Luigi." Even seemingly benign content, such as a post about Mangione’s astrological sign or a video montage of him set to music, was deleted from Threads, according to users.

The Most Wanted CEO playing cards did not reference Mangione, and the cards—which have not been released—would not include personal information about any CEO. In his initial posts about the cards, Harr said he planned to include QR codes with more information about each company and, in his view, what dangers the companies present. Each suit would represent a different industry, and the back of each card would include a generic shooting-range-style silhouette. As Harr put it in his now-removed video, the cards would include “the person, what they’re a part of, and a QR code that goes to dedicated pages that explain why they’re evil. So you could be like, 'Why is the CEO of Walmart evil? Why is the CEO of Northrop Grumman evil?’”

A design for the Most Wanted CEO playing cards

Many have riffed on the military’s tradition of using playing cards to help troops learn about the enemy. You can currently find “Gaza’s Most Wanted” playing cards on Instagram, purportedly depicting “leaders and commanders of various groups such as the IRGC, Hezbollah, Hamas, Houthis, and numerous leaders within Iran-backed militias.” A Shopify store selling “Covid’s Most Wanted” playing cards, displaying figures like Bill Gates and Anthony Fauci, and including QR codes linking to a website “where all the crimes and evidence are listed,” is available as of this writing. Hero Decks, which sells novelty playing cards generally showing sports figures, even produced a deck of “Wall Street Most Wanted” cards in 2003 (popular enough to have a second edition). 

As we’ve said many times, content moderation at scale, whether human or automated, is impossible to do perfectly and nearly impossible to do well. Companies often get it wrong and remove content or whole accounts that those affected by the content would agree do not violate the platform’s terms of service or community guidelines. Conversely, they allow speech that could arguably be seen to violate those terms and guidelines. That has been especially true for speech related to divisive topics and during heated national discussions. These mistakes often remove important voices, perspectives, and context, regularly impacting not just everyday users but journalists, human rights defenders, artists, sex worker advocacy groups, LGBTQ+ advocates, pro-Palestinian activists, and political groups. In some instances, this even harms people's livelihoods. 

Instagram disabled the ComradeWorkwear account for “not following community standards,” with no further information provided. Harr’s personal account was also banned. Meta has a policy against the "glorification" of dangerous organizations and people, which it defines as "legitimizing or defending the violent or hateful acts of a designated entity by claiming that those acts have a moral, political, logical or other justification that makes them acceptable or reasonable.” Meta’s Oversight Board has overturned multiple moderation decisions by the company regarding its application of this policy. While Harr had posted to Instagram that “the CEO must die” after Thompson’s assassination, he included an explanation that, "When we say the ceo must die, we mean the structure of capitalism must be broken.” (Compare this to a series of Instagram story posts from musician Ethel Cain, whose account is still available, which used the hashtag #KillMoreCEOs, for one of many examples of how moderation affects some people and not others.) 

TikTok reported that Harr violated the platform’s community guidelines, with no additional information. The platform has a policy against "promoting (including any praise, celebration, or sharing of manifestos) or providing material support" to violent extremists or people who cause serial or mass violence. TikTok gave Harr no opportunity for appeal, and continued to remove additional accounts Harr created only to update his followers on his life. TikTok did not point to any specific piece of content that violated its guidelines.

On December 20, PayPal informed Harr it could no longer continue processing payments for ComradeWorkwear, with no explanation of why. Shopify informed Harr that his store was selling “offensive content,” and that his Shopify and Apple Pay accounts would both be disabled. In a follow-up email, Shopify told Harr the decision to close his account “was made by our banking partners who power the payment gateway.”

Harr’s situation is not unique. Financial and social media platforms have an enormous amount of control over our online expression, and we’ve long been critical of their over-moderation,  uneven enforcement, lack of transparency, and failure to offer reasonable appeals. This is why EFF co-created The Santa Clara Principles on transparency and accountability in content moderation, along with a broad coalition of organizations, advocates, and academic experts. These platforms have the resources to set the standard for content moderation, but clearly don’t apply their moderation evenly, and in many instances, aren’t even doing the basics—like offering clear notices and opportunities for appeal.  

Harr was one of many who expressed frustration online with the growing power of corporations. These voices shouldn’t be silenced into submission simply for drawing attention to the influence that they have. These are exactly the kinds of actions that Harr intended to highlight. If the Most Wanted CEO deck is ever released, it shouldn’t be a surprise for the CEOs of these platforms to find themselves in the lineup.  

While the Court Fights Over AI and Copyright Continue, Congress and States Focus On Digital Replicas: 2024 in Review

December 27, 2024 at 13:29

The phrase “move fast and break things” carries pretty negative connotations in these days of (Big) techlash. So it’s surprising that state and federal policymakers are doing just that with the latest big issue in tech and the public consciousness: generative AI, or more specifically its uses to generate deepfakes.

Creators of all kinds are expressing a lot of anxiety around the use of generative artificial intelligence, some of it justified. The anxiety, combined with some people’s understandable sense of frustration that their works were used to develop a technology that they fear could displace them, has led to multiple lawsuits.

But while the courts sort it out, legislators are responding to heavy pressure to do something. And it seems their highest priority is to give new or expanded rights to protect celebrity personas–living or dead–and the many people and corporations that profit from them.

The broadest “fix” would be a federal law, and we’ve seen several proposals this year. The two most prominent are NO AI FRAUD (in the House of Representatives) and NO FAKES (in the Senate). The first, introduced in January 2024, purports to target abuse of generative AI to misappropriate a person’s image or voice, but the right it creates applies to an incredibly broad amount of digital content: any “likeness” and/or “voice replica” that is created or altered using digital technology, software, an algorithm, etc. There’s not much that wouldn’t fall into that category—from pictures of your kid, to recordings of political events, to docudramas, parodies, political cartoons, and more. It also characterizes the new right as a form of federal intellectual property. This linguistic flourish has the practical effect of putting intermediaries that host AI-generated content squarely in the litigation crosshairs, because Section 230 immunity does not apply to federal IP claims. NO FAKES, introduced in April, is not significantly different.

There’s a host of problems with these bills, and you can read more about them here and here. 

A core problem is that these bills are modeled on the broadest state laws recognizing a right of publicity. A limited version of this right makes sense—you should be able to prevent a company from running an advertisement that falsely claims that you endorse its products—but the right of publicity has expanded well beyond its original boundaries, to potentially cover just about any speech that “evokes” a person’s identity, such as a phrase associated with a celebrity (like “Here’s Johnny”) or even a cartoonish robot dressed like a celebrity. It’s become a money-making machine that can be used to shut down all kinds of activities and expressive speech. Public figures have brought cases targeting songs, magazine features, and even computer games.

And states are taking swift action to further expand publicity rights. Take this year’s digital replica law in Tennessee, called the ELVIS Act, because of course it is. Tennessee already gave celebrities (and their heirs) a property right in their name, photograph, or likeness. The new law extends that right to voices, expands the risk of liability to include anyone who distributes a likeness without permission, and limits some speech-protective exceptions.

Across the country, California couldn’t let Tennessee win the race for the most restrictive/protective rules for famous people (and their heirs). So it passed AB 1836, creating liability for any person who uses a deceased personality’s name, voice, signature, photograph, or likeness, in any manner, without consent. There are a number of exceptions, which is better than nothing, but those exceptions are pretty confusing for people who don’t have lawyers to help sort them out.

These state laws are a done deal, so we’ll just have to see how they play out. At the federal level, however, we still have a chance to steer policymakers in the right direction. 

We get it–everyone should be able to prevent unfair and deceptive commercial exploitation of their personas. But expanded property rights are not the way to do it. If Congress really wants to protect performers and ordinary people from deceptive or exploitative uses of their images and voice, it should take a precise, careful and practical approach that avoids potential collateral damage to free expression, competition, and innovation. 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

EFF in the Press: 2024 in Review

By: Josh Richman
December 23, 2024 at 11:08

EFF’s attorneys, activists, and technologists were media rockstars in 2024, informing the public about important issues that affect privacy, free speech, and innovation for people around the world. 

Perhaps the single most exciting media hit for EFF in 2024 was “Secrets in Your Data,” the NOVA PBS documentary episode exploring “what happens to all the data we’re shedding and explores the latest efforts to maximize benefits – without compromising personal privacy.” EFFers Hayley Tsukayama, Eva Galperin, and Cory Doctorow were among those interviewed.

One big-splash story in January demonstrated just how in-demand EFF can be when news breaks. Amazon’s Ring home doorbell unit announced that it would disable its Request For Assistance tool, the program that had let police seek footage from users on a voluntary basis – an issue on which EFF, and Matthew Guariglia in particular, have done extensive work. Matthew was quoted in Bloomberg, the Associated Press, CNN, The Washington Post, The Verge, The Guardian, TechCrunch, WIRED, Ars Technica, The Register, TechSpot, The Focus, American Wire News, and the Los Angeles Business Journal. The Bloomberg, AP, and CNN stories in turn were picked up by scores of media outlets across the country and around the world. Matthew also did interviews with local television stations in New York City, Oklahoma City, Allentown, PA, San Antonio, TX and Norfolk, VA. Matthew and Jason Kelley were quoted in Reason, and EFF was cited in reports by the New York Times, Engadget, The Messenger, the Washington Examiner, Silicon UK, Inc., the Daily Mail (UK), AfroTech, and KFSN ABC30 in Fresno, CA, as well as in an editorial in the Times Union of Albany, NY.

Other big stories for us this year – with similar numbers of EFF media mentions – included congressional debates over banning TikTok and censoring the internet in the name of protecting children, state age verification laws, Google’s backpedaling on its Privacy Sandbox promises, the Supreme Court’s Netchoice and Murthy rulings, the arrest of Telegram’s CEO, and X’s tangles with Australia and Brazil.

EFF is often cited in tech-oriented media, with 34 mentions this year in Ars Technica, 32 mentions in The Register, 23 mentions in WIRED, 23 mentions in The Verge, 20 mentions in TechCrunch, 10 mentions in The Record from Recorded Future, nine mentions in 404 Media, and six mentions in Gizmodo. We’re also all over the legal media, with 29 mentions in Law360 and 15 mentions in Bloomberg Law. 

But we’re also a big presence in major U.S. mainstream outlets, cited 38 times this year in the Washington Post, 11 times in the New York Times, 11 times in NBC News, 10 times in the Associated Press, 10 times in Reuters, 10 times in USA Today, and nine times in CNN. And we’re being heard by international audiences, with mentions in outlets including Germany’s Heise and Deutsche Welle, Canada’s Globe & Mail and Canadian Broadcasting Corp., Australia’s Sydney Morning Herald and Australian Broadcasting Corp., the United Kingdom’s Telegraph and Silicon UK, and many more. 

We’re being heard in local communities too. For example, we talked about the rapid encroachment of police surveillance with media outlets in Sarasota, FL; the San Francisco Bay Area; Baton Rouge, LA; Columbus, OH; Grand Rapids, MI; San Diego, CA; Wichita, KS; Buffalo, NY; Seattle, WA; Chicago, IL; Nashville, TN; and Sacramento, CA, among other localities.

EFFers also spoke their minds directly in op-eds placed far and wide, including: 

And if you’re seeking some informative listening during the holidays, EFFers joined a slew of podcasts in 2024, including: 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Saving the Internet in Europe: How EFF Works in Europe

December 16, 2024 at 11:32

This post is part one in a series of posts about EFF’s work in Europe.

EFF’s mission is to ensure that technology supports freedom, justice, and innovation for all people of the world. While our work has taken us to far corners of the globe, in recent years we have worked to expand our efforts in Europe, building up a policy team with key expertise in the region, and bringing our experience in advocacy and technology to the European fight for digital rights.

In this blog post series, we will introduce you to the various players involved in that fight, share how we work in Europe, and how what happens in Europe can affect digital rights across the globe.

Why EFF Works in Europe

European lawmakers have been highly active in proposing laws to regulate online services and emerging technologies. And these laws have the potential to impact the whole world. As such, we have long recognized the importance of engaging with organizations and lawmakers across Europe. In 2007, EFF became a member of the European Digital Rights Initiative (EDRi), a collective of NGOs, experts, advocates and academics that have for two decades worked to advance digital rights throughout Europe. From the early days of the movement, we fought back against legislation threatening user privacy in Germany, free expression in the UK, and the right to innovation across the continent.

Over the years, we have continued collaborations with EDRi as well as other coalitions including IFEX, the international freedom of expression network, Reclaim Your Face, and Protect Not Surveil. In our EU policy work, we have advocated for fundamental principles like transparency, openness, and information self-determination. We emphasized that legislative acts should never come at the expense of protections that have served the internet well: Preserve what works. Fix what is broken. And EFF has made a real difference: We have ensured that recent internet regulation bills don’t turn social networks into censorship tools and safeguarded users’ right to private conversations. We also helped guide new fairness rules in digital markets to focus on what is really important: breaking the chokehold of major platforms over the internet.

Recognizing the internet’s global reach, we have also stressed that lawmakers must consider the global impact of regulation and enforcement, particularly effects on vulnerable groups and underserved communities. As part of this work, we facilitate a global alliance of civil society organizations representing diverse communities across the world to ensure that non-European voices are heard in Brussels’ policy debates.

Our Teams

Today, we have a robust policy team that works to influence policymakers in Europe. Led by International Policy Director Christoph Schmon and supported by Assistant Director of EU Policy Svea Windwehr, both of whom are based in Europe, the team brings a set of unique expertise in European digital policy making and fundamental rights online. They engage with lawmakers, provide policy expertise and coordinate EFF’s work in Europe.

But legislative work is only one piece of the puzzle, and as a collaborative organization, EFF pulls expertise from various teams to shape policy, build capacity, and campaign for a better digital future. Our teams engage with the press and the public through comprehensive analysis of digital rights issues, educational guides, activist workshops, press briefings, and more. They are active in broad coalitions across the EU and the UK, as well as in East and Southeastern Europe.

Our work is not limited to EU digital policy issues. We have been active in the UK advocating for user rights in the context of the Online Safety Act, and we also work on issues facing users in the Balkans and in accession countries. For instance, we recently collaborated with Digital Security Lab Ukraine on a workshop on content moderation held in Warsaw, and participated in the Bosnia and Herzegovina Internet Governance Forum. We are also an active member of the High-Level Group of Experts for Resilience Building in Eastern Europe, tasked to advise on online regulation in Georgia, Moldova and Ukraine.

EFF on Stage

In addition to all of the behind-the-scenes work that we do, EFF regularly showcases our work on European stages to share our mission and message. You can find us at conferences like re:publica, CPDP, Chaos Communication Congress, or Freedom not Fear, and at local events like regional Internet Governance Forums. For instance, last year Director for International Freedom of Expression Jillian C. York gave a talk with Svea Windwehr at Berlin’s re:publica about transparency reporting. More recently, Senior Speech and Privacy Activist Paige Collings facilitated a session on queer justice in the digital age at a workshop held in Bosnia and Herzegovina.

There is so much more work to be done. In the next posts in this series, you will learn more about what EFF will be doing in Europe in 2025 and beyond, as well as some of our lessons and successes from past struggles.

Introducing EFF’s New Video Series: Gate Crashing

December 10, 2024 at 14:56

The promise of the internet—at least in the early days—was that it would lower the barriers to entry for any number of careers. Traditionally, the spheres of novel writing, culture criticism, and journalism were populated by well-off straight white men, with anyone not meeting one of those criteria being an outlier. Add in giant corporations acting as gatekeepers to those spheres and it was a very homogenous culture. The internet has changed that. 

There is a lot about the internet that needs fixing, but the one thing we should preserve and nurture is the nontraditional paths to success it creates. In this series of interviews, called “Gate Crashing,” we look to highlight those people and learn from their examples. In an ideal world, lawmakers will be guided by lived experiences like these when thinking about new internet legislation or policy. 

In our first video, we look at creators who honed their media criticism skills in fandom spaces. Please join Gavia Baker-Whitelaw and Elizabeth Minkel, co-creators of the Rec Center newsletter, in a wide-ranging discussion about how they got started, where it has led them, and what they’ve learned about internet culture and policy along the way. 

[Embedded video: served from youtube.com]

Looking for the Answer to the Question, "Do I Really Own the Digital Media I Paid For?"

26 November 2024 at 12:58

Sure, buying your favorite video game, movie, or album online is super convenient. I personally love being able to pre-order a game and play it the night of release, without needing to go to a store. 

But something you may not have thought about before making your purchase is the difference between owning a physical copy and a digital copy of that media. Unfortunately, there are quite a few rights you give up by purchasing a digital copy of your favorite game, movie, or album! On our new site, Digital Rights Bytes, we outline the differences between owning physical and digital media, and why we need to break down that barrier.

Digital Rights Bytes answers this and other common questions about technology that may be getting on your nerves, with short videos featuring adorable animals. You can also read up on what EFF is doing to ensure you actually own the digital media you pay for, and how you can take action, too.

Got other questions you’d like us to answer in the future? Let us know on your favorite social platform using the hashtag #DigitalRightsBytes. 

The 2024 U.S. Election is Over. EFF is Ready for What's Next.

By: Cindy Cohn
6 November 2024 at 11:56

The dust of the U.S. election is settling, and we want you to know that EFF is ready for whatever’s next. Our mission to ensure that technology serves you—rather than silencing, tracking, or oppressing you—does not change. Some of what’s to come will be in uncharted territory. But we have been preparing for whatever this future brings for a long time. EFF is at its best when the stakes are high. 

No matter what, EFF will take every opportunity to stand with users. We’ll continue to advance our mission of user privacy, free expression, and innovation, regardless of the obstacles. We will hit the ground running. 

During the previous Trump administration, EFF didn’t just hold the line. We pushed digital rights forward in significant ways, both nationally and locally. We supported those protesting in the streets with expanded Surveillance Self-Defense guides and our Security Education Companion. The first offers information on how to protect yourself while you exercise your First Amendment rights, and the second gives tips on how to help your friends and colleagues stay safer.

Along with our allies, we fought government use of face surveillance, passing municipal bans on the dangerous technology. We urged the Supreme Court to expand protections for your cell phone data, and in Carpenter v. United States, it did so—recognizing that location information collected by cell providers creates a “detailed chronicle of a person’s physical presence compiled every day, every moment over years.” Now, police must get a warrant before obtaining a significant amount of this data.


But we also stood our ground when governments and companies tried to take away the hard-fought protections we’d won in previous years. We stopped government attempts to backdoor private messaging with “ghost” and “client-side scanning” measures that obscured their intentions to undermine end-to-end encryption. We defended Section 230, the common-sense law that protects Americans’ freedom of expression online by protecting the intermediaries we all rely on. And when the COVID pandemic hit, we carefully analyzed and pushed back on measures that would have gone beyond what was necessary to keep people safe and healthy by invading our privacy and inhibiting our free speech.

Every time policymakers or private companies tried to undermine your rights online during the last Trump administration (2017-2021), we were there—just as we continued to be under President Biden. In preparation for the next four years, here’s just some of the groundwork we’ve already laid: 

  • Border Surveillance: For a decade we’ve been revealing how the hundreds of millions of dollars pumped into surveillance technology along the border impact the privacy of those who live, work, or seek refuge there, and thousands of others transiting through our border communities each day. We’ve defended the rights of people whose devices have been searched or seized upon entering the country. We’ve mapped out the network of automated license plate readers installed at checkpoints and land entry points, and the more than 465 surveillance towers along the U.S.-Mexico border. And we’ve advocated for sanctuary data policies restricting how ICE can access criminal justice and surveillance data.  
  • Surveillance Self-Defense: Protecting your private communications will only become more critical, so we’ve been expanding both the content and the translations of our Surveillance Self-Defense guides. We’ve written clear guidance for staying secure that applies to everyone, but is particularly important for journalists, protesters, activists, LGBTQ+ youths, and other vulnerable populations.
  • Reproductive Rights: Long before Roe v. Wade was overturned, EFF was working to minimize the ways that law enforcement can obtain data from tech companies and data brokers. After the Dobbs decision was handed down, we supported multiple laws in California that shield both reproductive and transgender health data privacy, even for people outside of California. But there’s more to do, and we’re working closely with those involved in the reproductive justice movement to make more progress. 
  • Transition Memo: When the next administration takes over, we’ll be sending a lengthy, detailed policy analysis to the incoming administration on everything from competition to AI to intellectual property to surveillance and privacy. We provided a similarly thoughtful set of recommendations on digital rights issues after the last presidential election, helping to guide critical policy discussions. 

We’ve prepared much more too. The road ahead will not be easy, and some of it is not yet mapped out, but one of the reasons EFF is so effective is that we play the long game. We’ll be here when this administration ends and the next one takes over, and we’ll continue to push. Our nonpartisan approach to tech policy works because we work for the user. 

We’re not merely fighting against individual companies or elected officials or even specific administrations.  We are fighting for you. That won’t stop no matter who’s in office. 

DONATE TODAY

Sorry, Gas Companies - Parody Isn't Infringement (Even If It Creeps You Out)

30 October 2024 at 17:09

Activism comes in many forms. You might hold a rally, write to Congress, or fly a blimp over the NSA. Or you might use a darkly hilarious parody to make your point, like our client Modest Proposals recently did.

Modest Proposals is an activist collective that uses parody and culture jamming to advance environmental justice and other social causes. As part of a campaign shining a spotlight on the environmental damage and human toll caused by the liquefied natural gas (LNG) industry, Modest Proposals invented a company called Repaer. The fake company’s website offers energy companies the opportunity to purchase “life offsets” that balance the human deaths their activities cause by extending the lives of individuals deemed economically valuable. The website also advertises a “Plasma Pals” program that encourages parents to donate their child’s plasma to wealthy recipients. Scroll down on the homepage a bit, and you’ll see the logos for three (real) LNG companies—Repaer’s “Featured Partners.” 

Believe it or not, the companies didn’t like this. (Shocking!) Two of them—TotalEnergies and Equinor—sent our client stern emails threatening legal action if their names and logos weren’t removed from the website. TotalEnergies also sent a demand to the website’s hosting service, Netlify, that got repaer.earth taken offline. That was our cue to get involved.

We sent letters to both companies, explaining what should be obvious: the website was a noncommercial work of activism, unlikely to confuse any reasonable viewer. Trademark law is about protecting consumers; it’s not a tool for businesses to shut down criticism. We also sent a counternotice to Netlify denying TotalEnergies’ allegations and demanding that repaer.earth be restored. 

We wish this were the first time we’ve had to send letters like that, but EFF regularly helps activists and critics push back on bogus trademark and copyright claims. This incident is also part of a broader, long-standing pattern of the energy industry weaponizing the law to quash dissent by environmental activists; those are just the examples EFF has written about. We’ve been fighting these tactics for a long time, both by representing individual activist groups and by supporting legislative efforts like a federal anti-SLAPP bill. 

Frustratingly, Netlify made us go through the full DMCA counternotice process—including a 10-business-day waiting period to have the site restored—even though this was never a DMCA claim. (The DMCA is copyright law, not trademark, and TotalEnergies didn’t even meet the notice requirements that Netlify claims to follow.) Rather than wait around for Netlify to act, Modest Proposals eventually moved the website to a different hosting service. 

Equinor and TotalEnergies, on the other hand, have remained silent. This is a pretty common result when we help push back against bad trademark and copyright claims: the rights owners slink away once they realize their bullying tactics won’t work, without actually admitting they were wrong. We’re glad these companies seem to have backed off regardless, but victims of bogus claims deserve more certainty than this.

Courts Agree That No One Should Have a Monopoly Over the Law. Congress Shouldn’t Change That

16 October 2024 at 15:29

Some people just don’t know how to take a hint. For more than a decade, giant standards development organizations (SDOs) have been fighting in courts around the country, trying to use copyright law to control access to other laws. They claim that they own the copyright in the text of some of the most important regulations in the country—the codes that protect product, building, and environmental safety—and that they have the right to control access to those laws. And they keep losing because, it turns out, from New York to Missouri to the District of Columbia, judges understand that this is an absurd and undemocratic proposition. 

They suffered their latest defeat in Pennsylvania, where a district court held that UpCodes, a company that has created a database of building codes—like the National Electrical Code—can include codes incorporated by reference into law. ASTM, a private organization that coordinated the development of some of those codes, insists that it retains copyright in them even after they have been adopted into law. Some courts, including the Fifth Circuit Court of Appeals, have rejected that theory outright, holding that standards lose copyright protection when they are incorporated into law. Others, like the DC Circuit Court of Appeals in a case EFF defended on behalf of Public.Resource.Org, have held that whether or not the legal status of the standards changes once they are incorporated into law, posting them online is a lawful fair use. 

In this case, ASTM v. UpCodes, the court followed the latter path. Relying in large part on the DC Circuit’s decision, as well as an amicus brief EFF filed in support of UpCodes, the court held that providing access to the law (for free or subject to a subscription for “premium” access) was a lawful fair use. A key theme to the ruling is the public interest in accessing law: 

incorporation by reference creates serious notice and accountability problems when the law is only accessible to those who can afford to pay for it. … And there is significant evidence of the practical value of providing unfettered access to technical standards that have been incorporated into law. For example, journalists have explained that this access is essential to inform their news coverage; union members have explained that this access helps them advocate and negotiate for safe working conditions; and the NAACP has explained that this access helps citizens assert their legal rights and advocate for legal reforms.

We’ve seen similar rulings around the country, from California to New York to Missouri. Combined with two appellate rulings, these amount to a clear judicial consensus. And it turns out the sky has not fallen; SDOs continue to profit from their work, thanks in part to the volunteer labor of the experts who actually draft the standards and don’t do it for the royalties.  You would think the SDOs would learn their lesson, and turn their focus back to developing standards, not lawsuits.

Instead, SDOs are asking Congress to rewrite the Constitution and affirm that SDOs retain copyright in their standards no matter what a federal regulator does, as long as they make them available online. We know what that means because the SDOs have already created “reading rooms” for some of their standards, and they show us that the SDOs’ idea of “making available” is “making available as if it were 1999.” The texts are not searchable; cannot be printed, downloaded, highlighted, or bookmarked for later viewing; and cannot be magnified without becoming blurry. Cross-referencing and comparison are virtually impossible. Often, a reader can view only a portion of each page at a time and, upon zooming in, must scroll from right to left to read a single line of text. As if that weren’t bad enough, these reading rooms are inaccessible to print-disabled people altogether.

It’s a bad bargain that would trade our fundamental due process rights in exchange for a pinky promise of highly restricted access to the law. But if Congress takes that step, it’s a comfort to know that we can take the fight back to the courts and trust that judges, if not legislators, understand why laws are facts, not property, and should be free for all to access, read, and share. 

NextNav’s Callous Land-Grab to Privatize 900 MHz

By: Rory Mir
13 September 2024 at 10:52

The 900 MHz band, a frequency range serving as a commons for all, is now at risk due to NextNav’s brazen attempt to privatize this shared resource. 

Set aside by the FCC for use by amateur radio operators, unlicensed consumer devices, and industrial, scientific, and medical equipment, this spectrum has become a hotbed for new technologies and community-driven projects. Millions of consumer devices also rely on the range, including baby monitors, cordless phones, IoT devices, and garage door openers. But NextNav would rather claim these frequencies, fence them off, and lease them out to mobile service providers. This is just another land-grab by a corporate rent-seeker dressed up as innovation. 

EFF and hundreds of others have called on the FCC to decisively reject this proposal and protect the open spectrum as a commons that serves all.

NextNav’s Proposed 'Band-Grab'

NextNav wants the FCC to reconfigure the 902-928 MHz band to grant them exclusive rights to the majority of the spectrum. The country’s airwaves are separated into different sections for different devices to communicate, like dedicated lanes on a highway. This proposal would give NextNav not only their own lane, but also an expanded operating region, increased broadcasting power, and more leeway for radio interference emanating from their portions of the band. All of this points to more power for NextNav at everyone else’s expense.

This land-grab is purportedly to implement a Positioning, Navigation and Timing (PNT) network to serve as a US-specific backup to the Global Positioning System (GPS). This plan raises red flags right off the bat. 

Dropping the “global” from GPS makes it far less useful for any alleged national security purposes, especially as it is likely susceptible to the same jamming and spoofing attacks as GPS.

NextNav itself admits there is also little commercial demand for PNT. GPS works, is free, and is widely supported by manufacturers. If NextNav has a grand plan to implement a new and improved standard, it was left out of their FCC proposal. 

What NextNav did include, however, is its intent to resell their exclusive bandwidth access to mobile 5G networks. This isn’t about national security or innovation; it’s about a rent-seeker monopolizing access to a public resource. If NextNav truly believes in their GPS-backup vision, they should look to parts of the spectrum already allocated for 5G.

Stifling the Future of Open Communication

The open sections of the 900 MHz spectrum are vital for technologies that foster experimentation and grassroots innovation. Amateur radio operators, developers of new IoT devices, and small-scale operators rely on this band.

One such project is Meshtastic, a decentralized communication tool that allows users to send messages across a network without a central server. This new approach to networking offers resilient communication that can endure emergencies where current networks fail.
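
To make that concrete, here is a minimal sketch of how a hobbyist can participate in this commons today, using the open-source meshtastic Python library; the auto-detected device and the message text are illustrative assumptions, and you would need a Meshtastic-flashed radio (which in the US transmits in this unlicensed 902-928 MHz band) attached over USB:

    # Minimal sketch: broadcasting a text message over a Meshtastic mesh.
    # Assumes the open-source `meshtastic` package (pip install meshtastic)
    # and a Meshtastic-flashed radio attached over USB; in the US these
    # radios operate in the unlicensed 902-928 MHz band discussed above.
    import meshtastic.serial_interface

    # Auto-detect and open the attached radio.
    interface = meshtastic.serial_interface.SerialInterface()

    # Broadcast to every node in radio range: no carrier, no central
    # server, and no exclusive licensee in the middle.
    interface.sendText("Checking in from the trailhead, no cell service needed")

    interface.close()

Networks like this work precisely because the band stays open to everyone; raising one licensee’s permitted power and interference levels puts links like these at risk.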

This is the type of innovation that actually addresses the crises raised by NextNav, and it’s happening in the part of the spectrum allocated for unlicensed devices, empowering communities instead of a powerful intermediary. Yet this proposal threatens to crush such grassroots projects, leaving them without a commons in which to grow and improve.

This isn’t just about a set of frequencies. We need an ecosystem that fosters grassroots collaboration, experimentation, and knowledge building. Not only do these commons empower communities, they also guard against a technology monoculture unable to adapt to new threats and changing needs as technology progresses.

Invention belongs to the public, not just to those with the deepest pockets. The FCC should ensure it remains that way.

FCC Must Protect the Commons

NextNav’s proposal is a direct threat to innovation, public safety, and community empowerment. While FCC comments on the proposal have closed, replies remain open to the public until September 20th. 

The FCC must reject this corporate land-grab and uphold the integrity of the 900 MHz band as a commons. Our future communication infrastructure—and the innovation it supports—depends on it.

You can read our FCC comments here.

NO FAKES – A Dream for Lawyers, a Nightmare for Everyone Else

Performers and ordinary humans are increasingly concerned that they may be replaced or defamed by AI-generated imitations. We’re seeing a host of bills designed to address that concern – but every one just generates new problems. Case in point: the NO FAKES Act. We flagged numerous flaws in a “discussion draft” back in April, to no avail: the final text has been released, and it’s even worse.  


Under NO FAKES, any human being has the right to sue anyone who has either made, or made available, their “digital replica.” A replica is broadly defined as “a newly-created, computer generated, electronic representation of the image, voice or visual likeness” of a person. The right applies to the person themselves; to anyone who has a license to use their image, voice, or likeness; and to their heirs for up to 70 years after the person dies. Because it is a federal intellectual property right, Section 230 protections—a crucial liability shield for platforms and anyone else that hosts or shares user-generated content—will not apply. And that legal risk begins the moment a person gets a notice claiming the content is unlawful, even if they didn’t create the replica and have no way to confirm whether it was authorized or to verify the claim. NO FAKES thereby creates a classic “heckler’s veto”: anyone can use a specious accusation to get speech they don’t like taken down.  

The bill proposes a variety of exclusions for news, satire, biopics, criticism, etc. to limit the impact on free expression, but their application is uncertain at best. For example, there’s an exemption for use of a replica for a “bona fide” news broadcast, provided that the replica is “materially relevant” to the subject of the broadcast. Will citizen journalism qualify as “bona fide”? And who decides whether the replica is “materially relevant”?  

These are just some of the many open questions, all of which will lead to full employment for lawyers, but likely no one else, particularly not those whose livelihood depends on the freedom to create journalism or art about famous people. 

The bill also includes a safe harbor scheme modeled on the DMCA notice-and-takedown process. To stay within the NO FAKES safe harbors, a platform that receives a notice of illegality must remove “all instances” of the allegedly unlawful content—a broad requirement that will encourage platforms to adopt “replica filters” similar to deeply flawed copyright filters like YouTube’s Content ID. Platforms that ignore such a notice can be on the hook just for linking to unauthorized replicas. And every single copy made, transmitted, or displayed is a separate violation incurring a $5,000 penalty—which will add up fast. The bill does throw platforms a not-very-helpful bone: if they can show they had an objectively reasonable belief that the content was lawful, they only have to cough up $1 million if they guess wrong.  
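
For a rough sense of how fast that adds up, here is a back-of-the-envelope sketch in Python; the $5,000 figure is the per-violation penalty described above, and the copy counts are purely hypothetical:

    # Rough illustration of per-copy penalties compounding under NO FAKES.
    # The $5,000-per-violation figure is from the bill as described above;
    # the copy counts below are hypothetical, not from any real case.
    PENALTY_PER_COPY = 5_000  # dollars per copy made, transmitted, or displayed

    for copies in (100, 10_000, 1_000_000):
        print(f"{copies:>9,} copies -> ${copies * PENALTY_PER_COPY:,} in potential liability")

A modestly shared clip, served a few thousand times, quickly produces potential liability that no small platform or individual creator could absorb, which is exactly the pressure that drives over-removal.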

All of this is a recipe for private censorship. For decades, the DMCA process has been regularly abused to target lawful speech, and there’s every reason to suppose NO FAKES will lead to the same result.  


What is worse, NO FAKES offers even fewer safeguards for lawful speech than the DMCA. For example, the DMCA includes a relatively simple counter-notice process that a speaker can use to get their work restored. NO FAKES does not. Instead, NO FAKES puts the burden on the speaker to run to court within 14 days to defend their rights. The powerful have lawyers on retainer who can do that, but most creators, activists, and citizen journalists do not.  

NO FAKES does include a provision that, in theory, would allow improperly targeted speakers to hold notice senders accountable. But they must prove that the lie was “knowing,” which can be interpreted to mean that the sender gets off scot-free as long as they subjectively believe the lie to be true, no matter how unreasonable that belief. Given the multiple open questions about how to interpret the various exemptions (not to mention the common confusions about the limits of IP protection that we’ve already seen), that’s pretty cold comfort. 

These significant flaws should doom the bill, and that’s a shame. Deceptive AI-generated replicas can cause real harms, and performers have a right to fair compensation for the use of their likenesses, should they choose to allow that use. Existing laws can address most of this, but Congress should be considering narrowly-targeted and proportionate proposals to fill in the gaps.  

The NO FAKES Act is neither targeted nor proportionate. It’s also a significant Congressional overreach—the Constitution forbids granting a property right in (and therefore a monopoly over) facts, including a person’s name or likeness.  

The best we can say about NO FAKES is that it has provisions protecting individuals with unequal bargaining power in negotiations around use of their likeness. For example, the new right can’t be completely transferred to someone else (like a film studio or advertising agency) while the person is alive, so a person can’t be pressured or tricked into handing over total control of their public identity (their heirs still can, but the dead celebrity presumably won’t care). And minors have some additional protections, such as a limit on how long their rights can be licensed before they are adults.   

TAKE ACTION

Throw Out the NO FAKES Act and Start Over

But the costs of the bill far outweigh the benefits. NO FAKES creates an expansive and confusing new intellectual property right that lasts far longer than is reasonable or prudent, and has far too few safeguards for lawful speech. The Senate should throw it out and start over. 

EFF Honored as DEF CON 32 Uber Contributor

By: Rory Mir
15 August 2024 at 15:23

At DEF CON 32 this year, the Electronic Frontier Foundation became the first organization to be given the Uber Contributor award. This award recognizes EFF’s work in education and litigation, naming us “Defenders of the Hacker Spirit.”

[Image: The DEF CON Uber Contributor Award, a silver brick with the DEF CON logo]

[Image: EFF Staff Attorney Hannah Zhao and Staff Technologist Cooper Quintin accepting the Uber Contributor Award on stage from DEF CON founder Jeff Moss]

The Uber Contributor Award is an honor created three years ago to recognize people and groups who have made exceptional contributions to the infosec and hacker community at DEF CON. Our connection with DEF CON runs deep, dating back over 20 years. The conference has become a vital part of keeping EFF’s work grounded in the ongoing issues faced by the creative builders and experimenters keeping tech secure (and fun).

[Image: EFF Staff Attorney Hannah Zhao (left) and Staff Technologist Cooper Quintin (right) posing with the Uber Contributor Award (center)]

Every year, attendees and organizers show immense support and generosity in return, but this year exceeded all expectations. EFF raised more funds than in any previous year at hacker summer camp—the three annual Las Vegas hacker conferences, BSidesLV, Black Hat USA, and DEF CON. We also gained over 1,000 new and renewing members who support us year-round. This community’s generosity fuels our work to protect encrypted messaging, fight back against illegal surveillance, and defend your right to hack and experiment. We’re honored to be welcomed so warmly year after year. 

Just this year, we saw another last-minute cease-and-desist order sent to a security researcher about their DEF CON talk. EFF attorneys from our Coders’ Rights Project attend every year, and they were able to jump into action to protect the speaker. While the team puts out fires at DEF CON for one week in August, their year-round work for coders is possible thanks to the continued support of the wider community. Anyone facing intimidation and spurious legal threats can always reach out for support at info@eff.org.

We are deeply grateful for this honor and the unwavering support from DEF CON. Thank you to everyone who supported EFF at the membership booth, participated in our Poker Tournament and Tech Trivia, or checked out our talks. 

We remain committed to meeting the needs of coders and will continue to live up to this award, ensuring the hacker spirit thrives despite an increasingly hostile landscape. We look forward to seeing you again next year!

EFF Tells Yet Another Court to Ensure Everyone Has Access to the Law and Reject Private Gatekeepers

7 August 2024 at 13:09

Our laws belong to all of us, and we should be able to find, read, and comment on them free of registration requirements, fees, and other roadblocks. That means private organizations shouldn’t be able to control who can read and share the law, or where and how we can do those things. But that’s exactly what some industry groups are trying to do.

EFF has been fighting for years to stop them. The most recent instance is ASTM v. UpCodes. ASTM, an organization that develops technical standards, claims it retains copyright in those standards even when they’ve become binding law through “incorporation by reference.” When a standard is incorporated “by reference,” its text is not actually reprinted in the body of the government’s published regulations. Instead, the regulations include a citation to the standard, which means you have to track down a copy somewhere else if you want to know what the law requires.

Incorporation by reference is common for a wide variety of laws governing the safety of buildings, pipelines, consumer products, and so on. Often, these are laws that affect us directly in our everyday lives—but they can also be the most inaccessible. ASTM makes some of those laws available for free, but not all of them, and only via “reading rooms” that are hard to navigate and full of restrictions. Services like UpCodes have emerged to try to bridge the gap by making mandatory standards more easily available online. Among other things, UpCodes has created a searchable online library of some of the thousands of ASTM standards that have been incorporated by reference around the country. According to ASTM, that’s copyright infringement.

EFF litigated a pair of cases on this issue for our client Public.Resource.Org (or “Public Resource”). We argued there that incorporated standards are the law, and no one can own copyright in the law. And in any event, it’s a fair use to republish incorporated standards in a centralized repository that makes them easier to access and use. In December 2023, the D.C. Circuit Court of Appeals ruled in Public Resource’s favor on fair use grounds.

Based on our experience, we filed an amicus brief supporting UpCodes, joined by Public Knowledge and iFixit, Inc., and with essential support from local counsel Sam Silver and Abigail Burton at Welsh & Recker. Unlike our cases for Public Resource, in UpCodes the standards at issue haven’t been directly incorporated into any laws. Instead, they’re incorporated by reference into other standards, which in turn have been incorporated into law. As we explain in our brief, this extra degree of separation shouldn’t make a difference in the legal analysis. If the government tells you, “Do what Document A says,” and Document A says, “Do what Document B says,” you’re going to need to read Document B to know what the government is telling you to do.

TAKE ACTION

Tell Congress: Access To Laws Should Be Fully Open

At the same time that we’re fighting this battle in the courts, we’re fighting a similar one in Congress. The Pro Codes Act would effectively endorse the claim that organizations like ASTM can “retain” copyright in codes, even after they are made law, as long as they make the codes available through a “publicly accessible” website—which means read-only, and subject to licensing limits. The Pro Codes Act recently fell short of the necessary votes to pass through the House, but it’s still being pushed by some lawmakers.

Whether it’s in courts or in Congress, we’ll keep fighting for your right to read and share the laws that we all must live by. A nation governed by the rule of law should not tolerate private control of that law. We hope the court in UpCodes comes to the same conclusion.
