
Courts Agree That No One Should Have a Monopoly Over the Law. Congress Shouldn’t Change That

October 16, 2024 at 15:29

Some people just don’t know how to take a hint. For more than a decade, giant standards development organizations (SDOs) have been fighting in courts around the country, trying to use copyright law to control access to other laws. They claim that they own the copyright in the text of some of the most important regulations in the country (the codes that protect product, building, and environmental safety) and that they have the right to control access to those laws. And they keep losing because, it turns out, from New York to Missouri to the District of Columbia, judges understand that this is an absurd and undemocratic proposition.

They suffered their latest defeat in Pennsylvania, where a district court held that UpCodes, a company that has created a database of building codes (like the National Electrical Code), can include codes incorporated by reference into law. ASTM, a private organization that coordinated the development of some of those codes, insists that it retains copyright in them even after they have been adopted into law. Some courts, including the Fifth Circuit Court of Appeals, have rejected that theory outright, holding that standards lose copyright protection when they are incorporated into law. Others, like the DC Circuit Court of Appeals in a case EFF defended on behalf of Public.Resource.Org, have held that, whether or not the legal status of the standards changes once they are incorporated into law, posting them online is a lawful fair use.

In this case, ASTM v. UpCodes, the court followed the latter path. Relying in large part on the DC Circuit’s decision, as well as an amicus brief EFF filed in support of UpCodes, the court held that providing access to the law (for free or subject to a subscription for “premium” access) was a lawful fair use. A key theme of the ruling is the public interest in accessing the law:

incorporation by reference creates serious notice and accountability problems when the law is only accessible to those who can afford to pay for it. … And there is significant evidence of the practical value of providing unfettered access to technical standards that have been incorporated into law. For example, journalists have explained that this access is essential to inform their news coverage; union members have explained that this access helps them advocate and negotiate for safe working conditions; and the NAACP has explained that this access helps citizens assert their legal rights and advocate for legal reforms.

We’ve seen similar rulings around the country, from California to New York to Missouri. Combined with two appellate rulings, these amount to a clear judicial consensus. And it turns out the sky has not fallen; SDOs continue to profit from their work, thanks in part to the volunteer labor of the experts who actually draft the standards and don’t do it for the royalties.  You would think the SDOs would learn their lesson, and turn their focus back to developing standards, not lawsuits.

Instead, SDOs are asking Congress to rewrite the Constitution and affirm that SDOs retain copyright in their standards no matter what a federal regulator does, as long as they make them available online. We know what that means because the SDOs have already created “reading rooms” for some of their standards, and they show us that the SDOs’ idea of “making available” is “making available as if it were 1999.” The texts are not searchable, cannot be printed, downloaded, highlighted, or bookmarked for later viewing, and cannot be magnified without becoming blurry. Cross-referencing and comparison are virtually impossible. Often, a reader can view only a portion of each page at a time and, upon zooming in, must scroll from right to left to read a single line of text. As if that weren’t bad enough, these reading rooms are inaccessible to print-disabled people altogether.

It’s a bad bargain that would trade our fundamental due process rights in exchange for a pinky promise of highly restricted access to the law. But if Congress takes that step, it’s a comfort to know that we can take the fight back to the courts and trust that judges, if not legislators, understand why laws are facts, not property, and should be free for all to access, read, and share. 

The X Corp. Shutdown in Brazil: What We Can Learn

October 8, 2024 at 12:39

Update (10/8/2024): Brazil lifted a ban on the X Corp. social media platform today after the country's Supreme Court said the company had complied with all of its orders. Regulators have 24 hours to reinstate the platform, though it could take longer for it to come back online.

The feud between X Corp. and Brazil’s Supreme Court continues to drag on: After a month-long standoff, X Corp. folded and complied with court orders to suspend several accounts, name a legal representative in Brazil, and pay 28.6 million reais ($5.24 million) in fines. That hasn’t cleared the matter up, though.

The Court says X paid the wrong bank, which X denies. Justice Alexandre de Moraes has asked that the funds be redirected to the correct bank and for Brazil’s prosecutor general to weigh in on X’s requests to be reinstated in Brazil.

So the drama continues, as does the collateral damage to millions of Brazilian users who rely on X Corp. to share information and expression. While we watch it unfold, it’s not too early to draw some important lessons for the future.

Let’s break it down.

How We Got Here

The Players

Unlike courts in many countries, the Brazilian Supreme Court has the power to conduct its own investigations in limited circumstances, and issue orders based on its findings. Justice Moraes has drawn on this power frequently in the past few years to target what he called “digital militias,” anti-democratic acts, and fake news. Many in Brazil believe that these investigations, combined with other police work, have helped rein in genuinely dangerous online activities and protect the survival of Brazil’s democratic processes, particularly in the aftermath of the January 2023 riots.

At the same time, Moraes’ actions have raised concerns about judicial overreach. For instance, his work is less than transparent. And the resulting content blocking orders more often than not demand suspension of entire accounts, rather than specific posts. Other leaked orders include broad requests for subscriber information of people who used a specific hashtag.

X Corp.’s controversial owner, Elon Musk, has publicly criticized the blocking orders. And while he may be motivated by concern for online expression, it is difficult to untangle that motivation from his personal support for the far-right causes Moraes and others believe threaten democracy in Brazil.

The Standoff

In August, as part of an investigation into coordinated actions to spread disinformation and destabilize Brazilian democracy, Moraes ordered X Corp. to suspend accounts that were allegedly used to intimidate and expose law enforcement officers. Musk refused, directly contradicting his past statements that X Corp. “can’t go beyond the laws of a country”—a stance that supposedly justified complying with controversial orders to block accounts and posts in Turkey and India.

After Moraes gave X Corp. 24 hours to fulfill the order or face fines and the arrest of one of its lawyers, Musk closed down the company’s operations in Brazil altogether. Moraes then ordered Brazilian ISPs to block the platform until Musk designated a legal representative. And people who use tools such as VPNs to circumvent the block can be fined 50,000 reais (approximately $9,000 USD) per day.

These orders remain in place unless or until pending legal challenges succeed. Justice Moraes has also authorized Brazil’s Federal Police to monitor “extreme cases” of X Corp. use. It’s unclear what qualifies as an “extreme case,” or how far the police may take that monitoring authority. Flagged users must be notified that X Corp. has been blocked in Brazil; if they continue to use it via VPNs or other means, they are on the hook for substantial daily fines.

A Bridge Too Far

Moraes’ ISP blocking order, combined with the user fines, has been understandably controversial. International freedom of expression standards treat these kinds of orders as extreme measures, permissible only in exceptional circumstances where provided by law and in accordance with necessary and proportionate principles. Justice Moraes said the blocking was necessary given upcoming elections and the risk that X Corp. would ignore future orders and allow the spread of disinformation.

But it has also meant that millions of Brazilians cannot access a platform that, for them, is a valuable source of information. Indeed, restrictions on accessing X Corp. ended up creating hurdles to understanding and countering electoral disinformation. The Brazilian Association of Newspapers has argued the restrictions adversely impact journalism. At the same time, online electoral disinformation holds steady on other platforms (though possibly at a slower pace).

Moreover, now that X Corp. has bowed to his demands, Moraes’ concerns that the company cannot be trusted to comply with Brazilian law are harder to justify. In any event, there are far more balanced options now to deal with the remaining fines that don’t create collateral damage to millions of users.

What Comes Next: Concerns and Open Questions

There are several structural issues that have helped fuel the conflict and exacerbated its negative effects. First, the mechanisms for legal review of Moraes’ orders are unclear and/or ineffective. The Supreme Court has previously held that X Corp. itself cannot challenge suspension of user accounts, thwarting a legal avenue for platforms to defend their users’ speech—even where they may be the only entities that know about the order before accounts are shut down.

A Brazilian political party and the Federal Council of the Brazilian Bar Association filed legal challenges to the blocking order and user fines, respectively, but it is likely that courts will find these challenges procedurally improper as well.

Back in 2016, a single Supreme Court Justice held back a wave of blocking orders targeting WhatsApp. Eight years later, a single Justice may have created a new precedent in the opposite direction—with little or no means to appeal it.

Second, this case highlights what can happen when too much power is held by just a few people or institutions. On the one hand, in Brazil as elsewhere, a handful of wealthy corporations wield enormous power over online expression. Here, that problem is exacerbated by Elon Musk’s control of Starlink, an important satellite internet provider in Brazil.

On the other hand, the Supreme Court also has tremendous power. Although the court’s actions may have played an important role in preserving Brazilian democracy in recent years, powers that are not properly subject to public oversight or meaningful challenge invite overreach.

All of which speaks to a need for better transparency (in both the public and private sectors) and real checks and balances. Independent observers note that, despite challenges, Brazil has already improved its democratic processes. Strengthening this path includes preventing judicial overreach.

As for social media platforms, the best way to stave off future threats to online expression may be to promote more alternatives, so no single powerful person, whether a judge, a billionaire, or even a president, can dramatically restrict online expression with the stroke of a pen.

The Climate Has a Posse – And So Does Political Satire

September 16, 2024 at 11:36

Greenwashing is a well-worn strategy to try to convince the public that environmentally damaging activities aren’t so damaging after all. It can be very successful precisely because most of us don’t realize it’s happening.

Enter the Yes Men, skilled activists who specialize in elaborate pranks that call attention to corporate tricks and hypocrisy. This time, they’ve created a website, wired-magazine.com, that looks remarkably like Wired.com and includes, front and center, an op-ed from writer (and EFF Special Adviser) Cory Doctorow. The op-ed, titled “Climate change has a posse,” discussed the “power and peril” of a new “greenwashing” emoji designed by renowned artist Shepard Fairey:

First, we have to ask why in hell Unicode—formerly the Switzerland of tech standards—decided to plant its flag in the greasy battlefield of eco-politics now. After rejecting three previous bids for a climate change emoji, in 2017 and 2022, this one slipped rather suspiciously through the iron gates.

Either the wildfire smoke around Unicode’s headquarters in Silicon Valley finally choked a sense of ecological urgency into them, or more likely, the corporate interests that comprise the consortium finally found a way to appease public contempt that was agreeable to their bottom line.

Notified of the spoof, Doctorow immediately tweeted his joy at being included in a Yes Men hoax.

Wired.com was less pleased. An attorney for its corporate parent, Condé Nast (CDN), demanded the Yes Men take the site down and transfer the domain name to CDN, claiming trademark infringement and misappropriation of Doctorow’s identity, with a vague reference to copyright infringement thrown in for good measure.

As we explained in our response on the Yes Men’s behalf, Wired’s heavy-handed reaction was both misguided and disappointing. Their legal claims are baseless given the satirical, noncommercial nature of the site (not to mention Doctorow’s implicit celebration of it after the fact). And frankly, a publication of Wired’s caliber should be celebrating this form of political speech, not trying to shut it down.

Hopefully Wired and CDN will recognize this is not a battle they want or need to fight. If not, EFF stands ready to defend the Yes Men and their critical work.

NO FAKES – A Dream for Lawyers, a Nightmare for Everyone Else

Performers and ordinary humans are increasingly concerned that they may be replaced or defamed by AI-generated imitations. We’re seeing a host of bills designed to address that concern – but every one just generates new problems. Case in point: the NO FAKES Act. We flagged numerous flaws in a “discussion draft” back in April, to no avail: the final text has been released, and it’s even worse.  

NO FAKES creates a classic “hecklers’ veto”: anyone can use a specious accusation to get speech they don’t like taken down.

Under NO FAKES, any person has the right to sue anyone who has either made, or made available, their “digital replica.” A replica is broadly defined as “a newly-created, computer generated, electronic representation of the image, voice or visual likeness” of a person. The right applies to the person themselves; anyone who has a license to use their image, voice, or likeness; and their heirs for up to 70 years after the person dies. Because it is a federal intellectual property right, Section 230 protections – a crucial liability shield for platforms and anyone else that hosts or shares user-generated content – will not apply. And that legal risk begins the moment a person gets a notice that the content is unlawful, even if they didn’t create the replica and have no way to verify whether or not it was authorized. NO FAKES thereby creates a classic “hecklers’ veto”: anyone can use a specious accusation to get speech they don’t like taken down.

The bill proposes a variety of exclusions for news, satire, biopics, criticism, etc. to limit the impact on free expression, but their application is uncertain at best. For example, there’s an exemption for use of a replica for a “bona fide” news broadcast, provided that the replica is “materially relevant” to the subject of the broadcast. Will citizen journalism qualify as “bona fide”? And who decides whether the replica is “materially relevant”?  

These are just some of the many open questions, all of which will lead to full employment for lawyers, but likely no one else, particularly not those whose livelihood depends on the freedom to create journalism or art about famous people. 

The bill also includes a safe harbor scheme modeled on the DMCA notice and takedown process. To stay within the NO FAKES safe harbors, a platform that receives a notice of illegality must remove “all instances” of the allegedly unlawful content—a broad requirement that will encourage platforms to adopt “replica filters” similar to deeply flawed copyright filters like YouTube’s Content ID. Platforms that ignore such a notice can be on the hook just for linking to unauthorized replicas. And every single copy made, transmitted, or displayed is a separate violation incurring a $5,000 penalty – which will add up fast. The bill does throw platforms a not-very-helpful bone: if they can show they had an objectively reasonable belief that the content was lawful, they only have to cough up $1 million if they guess wrong.
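
To put hypothetical numbers on that (the figures are ours, purely illustrative): a single allegedly unlawful clip copied or re-shared 10,000 times would, at $5,000 per violation, add up to $50 million in potential liability. Arithmetic like that alone pushes platforms toward removing anything that draws a notice.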

All of this is a recipe for private censorship. For decades, the DMCA process has been regularly abused to target lawful speech, and there’s every reason to suppose NO FAKES will lead to the same result.  

All of this is a recipe for private censorship. 

What is worse, NO FAKES offers even fewer safeguards for lawful speech than the DMCA. For example, the DMCA includes a relatively simple counter-notice process that a speaker can use to get their work restored. NO FAKES does not. Instead, NO FAKES puts the burden on the speaker to run to court within 14 days to defend their rights. The powerful have lawyers on retainer who can do that, but most creators, activists, and citizen journalists do not.  

NO FAKES does include a provision that, in theory, would allow improperly targeted speakers to hold notice senders accountable. But they must prove that the lie was “knowing,” which can be interpreted to mean that the sender gets off scot-free as long as they subjectively believe the lie to be true, no matter how unreasonable that belief. Given the multiple open questions about how to interpret the various exemptions (not to mention the common confusion about the limits of IP protection that we’ve already seen), that’s pretty cold comfort.

These significant flaws should doom the bill, and that’s a shame. Deceptive AI-generated replicas can cause real harms, and performers have a right to fair compensation for the use of their likenesses, should they choose to allow that use. Existing laws can address most of this, but Congress should be considering narrowly-targeted and proportionate proposals to fill in the gaps.  

The NO FAKES Act is neither targeted nor proportionate. It’s also a significant Congressional overreach—the Constitution forbids granting a property right in (and therefore a monopoly over) facts, including a person’s name or likeness.  

The best we can say about NO FAKES is that it has provisions protecting individuals with unequal bargaining power in negotiations around use of their likeness. For example, the new right can’t be completely transferred to someone else (like a film studio or advertising agency) while the person is alive, so a person can’t be pressured or tricked into handing over total control of their public identity (their heirs still can, but the dead celebrity presumably won’t care). And minors have some additional protections, such as a limit on how long their rights can be licensed before they are adults.   

TAKE ACTION

Throw Out the NO FAKES Act and Start Over

But the costs of the bill far outweigh the benefits. NO FAKES creates an expansive and confusing new intellectual property right that lasts far longer than is reasonable or prudent, and has far too few safeguards for lawful speech. The Senate should throw it out and start over. 

Security, Surveillance, and Government Overreach – the United States Set the Path but Canada Shouldn’t Follow It

The Canadian House of Commons is currently considering Bill C-26, which would make sweeping amendments to the country’s Telecommunications Act that would expand its Minister of Industry’s power over telecommunication service providers. It’s designed to accomplish a laudable and challenging goal: ensure that government and industry partners efficiently and effectively work together to strengthen Canada’s network security in the face of repeated hacking attacks.

C-26 is not identical to US national security laws. But without adequate safeguards, it could open the door to similar practices and orders.

As researchers and civil society organizations have noted, however, the legislation contains vague and overbroad language that may invite abuse and pressure on ISPs to do the government’s bidding at the expense of Canadian privacy rights. It would vest substantial authority in Canadian executive branch officials to (in the words of C-26’s summary) “direct telecommunications service providers to do anything, or refrain from doing anything, that is necessary to secure the Canadian telecommunications system.” That could include ordering telecommunications companies to install backdoors inside encrypted elements in Canada’s networks. Safeguards to protect privacy and civil rights are few; C-26’s only express limit is that Canadian officials cannot order service providers to intercept private or radio-based telephone communications.

Unfortunately, we in the United States know all too well what can happen when government officials assert broad discretionary power over telecommunications networks. For over 20 years, the U.S. government has deputized internet service providers and systems to surveil Americans and their correspondents, without meaningful judicial oversight. These legal authorities and details of the surveillance have varied, but, in essence, national security law has allowed the U.S. government to vacuum up digital communications so long as the surveillance is directed at foreigners currently located outside the United States and doesn’t intentionally target Americans. Once collected, the FBI can search through this massive database of information by “querying” the communications of specific individuals. In 2021 alone, the FBI conducted up to 3.4 million warrantless searches to find Americans’ communications.

Congress has attempted to add in additional safeguards over the years, to little avail. In 2023, for example, the Federal Bureau of Investigation (FBI) released internal documents used to guide agency personnel on how to search the massive databases of information they collect. Despite reassurances from the intelligence community about its “culture of compliance,” these documents reflect little interest in protecting privacy or civil liberties. At the same time, the NSA and domestic law enforcement authorities have been seeking to undermine the encryption tools and processes on which we all rely to protect our privacy and security.

C-26 is not identical to U.S. national security laws. But without adequate safeguards, it could open the door to similar practices and orders. What is worse, some of those orders could be secret, at the government’s discretion. In the U.S., that kind of secrecy has made it impossible for Americans to challenge mass surveillance in court. We’ve also seen companies presented with gag orders in connection with “national security letters” compelling them to hand over information. C-26 does allow for judicial review of non-secret orders, e.g. an order requiring an ISP to cut off an account-holder or website, if the subject of those orders believes they are unreasonable or ungrounded. But that review may include secret evidence that is kept from applicants and their counsel.

Canadian courts will decide whether a law authorizing secret orders and evidence is consistent with Canada’s legal tradition. But either way, the U.S. experience offers a cautionary tale of what can happen when a government grants itself broad powers to monitor and direct telecommunications networks, absent corresponding protections for human rights. In effect, the U.S. government has created, in the name of national security, a broad exception to the Constitution that allows the government to spy on all Americans and denies them any viable means of challenging that spying. We hope Canadians will refuse to allow their government to do the same in the name of “cybersecurity.”

No Country Should be Making Speech Rules for the World

It’s a simple proposition: no single country should be able to restrict speech across the entire internet. Any other approach invites a swift race to the bottom for online expression, giving governments and courts in countries with the weakest speech protections carte blanche to edit the internet.

Unfortunately, governments, including democracies that care about the rule of law, too often lose sight of this simple proposition. That’s why EFF, represented by Johnson Winter Slattery, has moved to intervene in support of X (formerly known as Twitter) in its legal challenge to a global takedown order from Australia’s eSafety Commissioner. The Commissioner ordered X and Meta to take down a post with a video of a stabbing in a church. X complied by geo-blocking the post so Australian users couldn’t access it, but it declined to block it elsewhere. The Commissioner asked an Australian court to order a global takedown.
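
For context on the mechanics: geo-blocking is typically implemented by mapping each visitor’s IP address to a country and withholding the content only in the jurisdictions an order covers. The sketch below (in Python) is ours and purely illustrative; the lookup rule is a toy stand-in for the commercial GeoIP databases real systems use, and real deployments must also contend with VPNs, proxies, and forwarded addresses.

    def lookup_country(ip: str) -> str:
        # Hypothetical stand-in for a real GeoIP lookup (e.g., a MaxMind
        # GeoIP2 database query); returns an ISO 3166-1 country code.
        return "AU" if ip.startswith("1.128.") else "US"  # toy rule for the demo

    BLOCKED_JURISDICTIONS = {"AU"}  # the disputed post is withheld only in Australia

    def can_view(visitor_ip: str) -> bool:
        # Country-scoped takedown: deny access only where a local order applies.
        return lookup_country(visitor_ip) not in BLOCKED_JURISDICTIONS

    print(can_view("1.128.0.1"))  # False: a visitor geolocated to Australia is blocked
    print(can_view("8.8.8.8"))    # True: visitors everywhere else still see the post

The global order at issue would be the equivalent of putting every country in BLOCKED_JURISDICTIONS at once, regardless of what local law says.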

Our intervention calls the court’s attention to the important public interests at stake in this litigation, particularly for internet users who are not parties to the case but will nonetheless be affected by the precedent it sets. A ruling against X is effectively a declaration that an Australian court (or its eSafety Commissioner) can prevent internet users around the world from accessing something online, even if the law in their own country is quite different. In the United States, for example, the First Amendment guarantees that platforms generally have the right to decide what content they will host, and their users have a corollary right to receive it. 

We’ve seen this movie before. In Google v. Equustek, a company used a trade secret claim to persuade a Canadian court to order Google to delete, from Google.ca and all other Google domains (including Google.com and Google.co.uk), search results linking to sites selling allegedly infringing goods. Google appealed, but both the British Columbia Court of Appeal and the Supreme Court of Canada upheld the order. The following year, a U.S. court held the ruling couldn’t be enforced against Google in the United States.

The Australian takedown order also ignores international human rights standards, restricting global access to information without considering less speech-intrusive alternatives. In other words: the Commissioner used a sledgehammer to crack a nut. 

If one court can impose speech-restrictive rules on the entire Internet—despite direct conflicts with the laws of other jurisdictions as well as international human rights principles—the norms and expectations of all internet users are at risk. We’re glad X is fighting back, and we hope the judge will recognize the eSafety regulator’s demand for what it is—a big step toward unchecked global censorship—and refuse to let Australia set another dangerous precedent.

Congress Should Just Say No to NO FAKES

There is a lot of anxiety around the use of generative artificial intelligence, some of it justified. But it seems like Congress thinks the highest priority is to protect celebrities – living or dead. Never fear, ghosts of the famous and infamous, the U.S. Senate is on it.

We’ve already explained the problems with the House’s approach, No AI FRAUD. The Senate’s version, the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act, isn’t much better.

Under NO FAKES, any person has the right to sue anyone who has either made, or made available, their “digital replica.” A replica is broadly defined as “a newly-created, computer generated, electronic representation of the image, voice or visual likeness” of a person. The right applies to the person themselves; anyone who has a license to use their image, voice, or likeness; and their heirs for 70 years after the person dies. It’s retroactive, meaning the post-mortem right would apply immediately to the heirs of, say, Prince, Tom Petty, or Michael Jackson, not to mention your grandmother.

Boosters talk a good game about protecting performers and fans from AI scams, but NO FAKES seems more concerned about protecting their bottom line. It expressly describes the new right as a “property right,” which matters because federal intellectual property rights are excluded from Section 230 protections. If courts decide the replica right is a form of intellectual property, NO FAKES will give people the ability to threaten platforms and companies that host allegedly unlawful content, which tend to have deeper pockets than the actual users who create that content. This will incentivize platforms that host our expression to be proactive in removing anything that might be a “digital replica,” whether its use is legal expression or not. While the bill proposes a variety of exclusions for news, satire, biopics, criticism, etc. to limit the impact on free expression, interpreting and applying those exceptions is even more likely to make a lot of lawyers rich.

This “digital replica” right effectively federalizes—but does not preempt—state laws recognizing the right of publicity. Publicity rights are an offshoot of state privacy law that give a person the right to limit the public use of her name, likeness, or identity for commercial purposes, and a limited version of it makes sense. For example, if Frito-Lay uses AI to deliberately generate a voiceover for an advertisement that sounds like Taylor Swift, she should be able to challenge that use. The same should be true for you or me.

Trouble is, in several states the right of publicity has already expanded well beyond its original boundaries. It was once understood to be limited to a person’s name and likeness, but now it can mean just about anything that “evokes” a person’s identity, such as a phrase associated with a celebrity (like “Here’s Johnny”) or even a cartoonish robot dressed like a celebrity. In some states, your heirs can invoke the right long after you are dead and presumably in no position to be embarrassed by any sordid commercial associations, or to care whether anyone believes you actually endorsed a product from beyond the grave.

In other words, it’s become a money-making machine that can be used to shut down all kinds of activities and expressive speech. Public figures have brought cases targeting songs, magazine features, and even computer games. As a result, the right of publicity reaches far beyond the realm of misleading advertisements and courts have struggled to develop appropriate limits.

NO FAKES leaves all of that in place and adds a new national layer on top, one that lasts for decades after the person replicated has died. It is entirely divorced from the incentive structure behind intellectual property rights like copyright and patents—presumably no one needs a replica right, much less a post-mortem one, to invest in their own image, voice, or likeness. Instead, it effectively creates a windfall for people with a commercially valuable recent ancestor, even if that value emerges long after they died.

What is worse, NO FAKES doesn’t offer much protection for those who need it most. People who don’t have much bargaining power may agree to broad licenses, not realizing the long-term risks. For example, as Jennifer Rothman has noted, NO FAKES could actually allow a music publisher who had licensed a performer’s “replica right” to sue that performer for using her own image. Savvy commercial players will build licenses into standard contracts, taking advantage of workers who lack bargaining power and leaving the right to linger as a trap only for unwary or small-time creators.

Although NO FAKES leaves the question of Section 230 protection open, it’s been expressly eliminated in the House version, and platforms for user-generated content are likely to over-censor any content that is, or might be, flagged as containing an unauthorized digital replica. At the very least, we expect to see the expansion of fundamentally flawed systems like Content ID that regularly flag lawful content as potentially illegal and chill new creativity that depends on major platforms to reach audiences. The various exceptions in the bill won’t mean much if you have to pay a lawyer to figure out if they apply to you, and then try to persuade a rightsholder to agree.

Performers and others are raising serious concerns. As policymakers look to address them, they must take care to be precise, careful, and practical. NO FAKES doesn’t reflect that care, and its sponsors should go back to the drawing board. 

EFF to Ninth Circuit: There’s No Software Exception to Traditional Copyright Limits

Copyright’s reach is already far too broad, and courts have no business expanding it any further, particularly where that expansion will undermine adversarial interoperability. Unfortunately, a federal district court did just that in the latest iteration of Oracle v. Rimini, concluding that software Rimini developed was a “derivative work” because it was intended to interoperate with Oracle’s software, even though the update didn’t use any of Oracle’s copyrightable code.

That’s a dangerous precedent. If a work is derivative, it may infringe the copyright in the preexisting work from which it, well, derives. For decades, software developers have relied, correctly, on the settled view that a work is not derivative under copyright law unless it is “substantially similar” to a preexisting work in both ideas and expression. Thanks to that rule, software developers can build innovative new tools that interact with preexisting works, including tools that improve privacy and security, without fear that the companies that hold rights in those preexisting works would have an automatic copyright claim to those innovations.

That’s why EFF, along with a diverse group of stakeholders representing consumers, small businesses, software developers, security researchers, and the independent repair community, filed an amicus brief in the Ninth Circuit Court of Appeals explaining that the district court ruling is not just bad policy, it’s also bad law.  Court after court has confronted the challenging problem of applying copyright to functional software, and until now none have found that the copyright monopoly extends to interoperable software absent substantial similarity. In other words, there is no “software exception” to the definition of derivative works, and the Ninth Circuit should reject any effort to create one.

The district court’s holding relied heavily on an erroneous interpretation of a 1998 case, Micro Star v. FormGen. In that case, the plaintiff, FormGen, published a video game following the adventures of action hero Duke Nukem. The game included a software tool that allowed players themselves to build new levels to the game and share them with others. Micro Star downloaded hundreds of those user-created files and sold them as a collection. When FormGen sued for copyright infringement, Micro Star argued that because the user files didn’t contain art or code from the FormGen game, they were not derivative works.

The Ninth Circuit Court of Appeals ruled against Micro Star, explaining that:

[t]he work that Micro Star infringes is the [Duke Nukem] story itself—a beefy commando type named Duke who wanders around post-Apocalypse Los Angeles, shooting Pig Cops with a gun, lobbing hand grenades, searching for medkits and steroids, using a jetpack to leap over obstacles, blowing up gas tanks, avoiding radioactive slime. A copyright owner holds the right to create sequels and the stories told in the [user files] are surely sequels, telling new (though somewhat repetitive) tales of Duke’s fabulous adventures.

Thus, the user files were “substantially similar” because they functioned as sequels to the video game itself—specifically the story and principal character of the game. If the user files had told a different story, with different characters, they would not be derivative works. For example, a company offering a Lord of the Rings game might include tools allowing a user to create their own character from scratch. If the user used the tool to create a hobbit, that character might be considered a derivative work. A unique character that was simply a 21st century human in jeans and a t-shirt, not so much.

Still, even confined to its facts, Micro Star stretched the definition of derivative work. By misapplying Micro Star to purely functional works that do not incorporate any protectable expression, however, the district court rewrote the definition altogether. If the court’s analysis were correct, rightsholders would suddenly have a new default veto right in all kinds of works that are intended to “interact and be useable with” their software. Unfortunately, they are all too likely to use that right to threaten add-on innovation, security, and repair.

Defenders of the district court’s approach might argue that interoperable software will often be protected by fair use. As copyrightable software is found in everything from phones to refrigerators, fair use is an essential safeguard for the development of interoperable tools, where those tools might indeed qualify as derivative works. But many developers cannot afford to litigate the question, and they should not have to just because one federal court misread a decades-old case.

What Home Videotaping Can Tell Us About Generative AI

January 24, 2024 at 16:04

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation.


It’s 1975. Earth, Wind & Fire rule the airwaves, Jaws is on every theater screen, All in the Family is must-see TV, and Bill Gates and Paul Allen are selling software for the first personal computer, the Altair 8800.

But for copyright lawyers, and eventually the public, something even more significant is about to happen: Sony starts selling the first videotape recorder, or VTR. Suddenly, people had the power to store TV programs and watch them later. Does work get in the way of watching your daytime soap operas? No problem, record them and watch when you get home. Want to watch the game but hate to miss your favorite show? No problem. Or, as an ad Sony sent to Universal Studios put it, “Now you don’t have to miss Kojak because you’re watching Columbo (or vice versa).”

What does all of this have to do with Generative AI? For one thing, the reaction to the VTR was very similar to today’s AI anxieties. Copyright industry associations ran to Congress, claiming that the VTR "is to the American film producer and the American public as the Boston strangler is to the woman home alone" – rhetoric that isn’t far from some of what we’ve heard in Congress on AI lately. And then, as now, rightsholders also ran to court, claiming Sony was facilitating mass copyright infringement. The crux of the argument was a new legal theory: that a machine manufacturer could be held liable under copyright law (and thus potentially subject to ruinous statutory damages) for how others used that machine.

The case eventually worked its way up to the Supreme Court, and in 1984 the Court rejected the copyright industry’s rhetoric and ruled in Sony’s favor. Forty years later, at least two aspects of that ruling are likely to get special attention.

First, the Court observed that where copyright law has not kept up with technological innovation, courts should be careful not to expand copyright protections on their own. As the decision reads:

Congress has the constitutional authority and the institutional ability to accommodate fully the varied permutations of competing interests that are inevitably implicated by such new technology. In a case like this, in which Congress has not plainly marked our course, we must be circumspect in construing the scope of rights created by a legislative enactment which never contemplated such a calculus of interests.

Second, the Court borrowed from patent law the concept of “substantial noninfringing uses.” In order to hold Sony liable for how its customers used their VTRs, rightsholders had to show that the VTR was simply a tool for infringement. If, instead, the VTR was “capable of substantial noninfringing uses,” then Sony was off the hook. The Court held that the VTR fell in the latter category because it was used for private, noncommercial time-shifting, and that time-shifting was a lawful fair use. The Court even quoted Fred Rogers, who testified that home-taping of children’s programs served an important function for many families.

That rule helped unleash decades of technological innovation. If Sony had lost, Hollywood would have been able to legally veto any tool that could be used for infringing as well as non-infringing purposes. With Congress’ help, it has found ways to effectively do so anyway, such as Section 1201 of the DMCA. Nonetheless, Sony remains a crucial judicial protection for new creativity.

Generative AI may test the enduring strength of that protection. Rightsholders argue that generative AI toolmakers directly infringe when they use copyrighted works as training data. That use is very likely to be found lawful. The more interesting question is whether toolmakers are liable if customers use the tools to generate infringing works. To be clear, the users themselves may well be liable – but they are less likely to have the kind of deep pockets that make litigation worthwhile. Under Sony, however, the key question for the toolmakers will be whether their tools are capable of substantial non-infringing uses. The answer to that question is surely yes, which should preclude most of the copyright claims.

But there’s risk here as well – if any of these cases reaches its doors, the Supreme Court could overturn Sony. Hollywood certainly hoped it would do so when it considered the legality of peer-to-peer file-sharing in MGM v. Grokster. EFF and many others argued hard for the opposite result. Instead, the Court side-stepped Sony altogether in favor of creating a new form of secondary liability for “inducement.”

The current spate of litigation may end with multiple settlements, or Congress may decide to step in. If not, the Supreme Court (and a lot of lawyers) may get to party like it’s 1975. Let’s hope the justices choose once again to ensure that copyright maximalists don’t get to control our technological future.

The No AI Fraud Act Creates Way More Problems Than It Solves

January 19, 2024 at 18:27

Creators have reason to be wary of the generative AI future. For one thing, while GenAI can be a valuable tool for creativity, it may also be used to deceive the public and disrupt existing markets for creative labor. Performers, in particular, worry that AI-generated images and music will become deceptive substitutes for human models, actors, or musicians.

Existing laws offer multiple ways for performers to address this issue. In the U.S., a majority of states recognize a “right of publicity,” meaning the right to control if and how your likeness is used for commercial purposes. A limited version of this right makes sense: you should be able to prevent a company from running an advertisement that falsely claims that you endorse its products. But the right of publicity has expanded well beyond its original boundaries, to potentially cover just about any speech that “evokes” a person’s identity.

In addition, every state prohibits defamation, harmful false representations, and unfair competition, though the parameters may vary. These laws provide time-tested methods to mitigate economic and emotional harms from identity misuse while protecting online expression rights.

But some performers want more. They argue that your right to control use of your image shouldn’t vary depending on what state you live in. They’d also like to be able to go after the companies that offer generative AI tools and/or host AI-generated “deceptive” content. Ordinary liability rules, including copyright, can’t be used against a company that has simply provided a tool for others’ expression. After all, we don’t hold Adobe liable when someone uses Photoshop to suggest that a president can’t read or even for more serious deceptions. And Section 230 immunizes intermediaries from liability for defamatory content posted by users and, in some parts of the country, publicity rights violations as well. Again, that’s a feature, not a bug; immunity means it’s easier to stick up for users’ speech, rather than taking down or preemptively blocking any user-generated content that might lead to litigation. It’s a crucial protection not just for big players like Facebook and YouTube, but also for small sites, news outlets, email hosts, libraries, and many others.

Balancing these competing interests won’t be easy. Sadly, so far Congress isn’t trying very hard. Instead, it’s proposing “fixes” that will only create new problems.

Last fall, several Senators circulated a “discussion draft” bill, the NO FAKES Act. Professor Jennifer Rothman has an excellent analysis of the bill, including its most dangerous aspect: creating a new, and transferable, federal publicity right that would extend for 70 years past the death of the person whose image is purportedly replicated. As Rothman notes, under the law:

record companies get (and can enforce) rights to performers’ digital replicas, not just the performers themselves. This opens the door for record labels to cheaply create AI-generated performances, including by dead celebrities, and exploit this lucrative option over more costly performances by living humans, as discussed above.

In other words, a bill that is supposed to protect performers in the long run would just make it easier for record labels (for example) to acquire voice rights that they can use to avoid paying human performers for decades to come.

NO FAKES hasn’t gotten much traction so far, in part because the Motion Picture Association hasn’t supported it. But now there’s a new proposal: the “No AI FRAUD Act.” Unfortunately, Congress is still getting it wrong.

First, the Act purports to target abuse of generative AI to misappropriate a person’s image or voice, but the right it creates applies to an incredibly broad amount of digital content: any “likeness” and/or “voice replica” that is created or altered using digital technology, software, an algorithm, etc. There’s not much that wouldn’t fall into that category: from pictures of your kid, to recordings of political events, to docudramas, parodies, political cartoons, and more. If it involved recording or portraying a human, it’s probably covered. Even more absurdly, it characterizes any tool that has a primary purpose of producing digital depictions of particular people as a “personalized cloning service.” Our iPhones are many things, but even Tim Cook would likely be surprised to know he’s selling a “cloning service.”

Second, it characterizes the new right as a form of federal intellectual property. This linguistic flourish has the practical effect of putting intermediaries that host AI-generated content squarely in the litigation crosshairs. Section 230 immunity does not apply to federal IP claims, so performers (and anyone else who falls under the statute) will have free rein to sue anyone that hosts or transmits AI-generated content.

That, in turn, is bad news for almost everyone, including performers. If this law were enacted, all kinds of platforms and services could very well fear reprisal simply for hosting images or depictions of people—or any of the rest of the broad types of “likenesses” this law covers. Keep in mind that many of these services won’t be in a good position to know whether AI was involved in the generation of a video clip, song, etc., nor will they have the resources to pay lawyers to fight back against improper claims. The best way for them to avoid that liability would be to aggressively filter user-generated content, or refuse to support it at all.

Third, while the term of the new right is limited to ten years after death (still quite a long time), it’s combined with very confusing language suggesting that the right could extend well beyond that date if the heirs so choose. Notably, the legislation doesn’t preempt existing state publicity rights laws, so the terms could vary even more wildly depending on where the individual (or their heirs) reside.

Lastly, while the defenders of the bill incorrectly claim it will protect free expression, the text of the bill suggests otherwise. True, the bill recognizes a “First Amendment defense.” But every law that affects speech is limited by the First Amendment; that’s how the Constitution works. And the bill actually tries to limit those important First Amendment protections by requiring courts to balance any First Amendment interests “against the intellectual property interest in the voice or likeness.” That balancing test must consider whether the use is commercial, necessary for a “primary expressive purpose,” and harms the individual’s licensing market. This seems to be an effort to import a cramped version of copyright’s fair use doctrine as a substitute for the rigorous scrutiny and analysis the First Amendment (and even the Copyright Act) requires.

We could go on, and we will if Congress decides to take this bill seriously. But it shouldn’t. If Congress really wants to protect performers and ordinary people from deceptive or exploitative uses of their images and voice, it should take a precise, careful, and practical approach that avoids potential collateral damage to free expression, competition, and innovation. The No AI FRAUD Act comes nowhere near the mark.

To Address Online Harms, We Must Consider Privacy First

Every year, we encounter new, often ill-conceived, bills written by state, federal, and international regulators to tackle a broad set of digital topics ranging from child safety to artificial intelligence. These scattershot proposals to correct online harm are often based on censorship and news cycles. Instead of this chaotic approach that rarely leads to the passage of good laws, we propose another solution in a new report: Privacy First: A Better Way to Address Online Harms.

In this report, we outline how many of the internet's ills have one thing in common: they're based on the business model of widespread corporate surveillance online. Dismantling this system would not only be a huge step forward for our digital privacy, it would raise the floor for serious discussions about the internet's future.

What would this comprehensive privacy law look like? We believe it must include these components:

  • No online behavioral ads.
  • Data minimization.
  • Opt-in consent.
  • User rights to access, port, correct, and delete information.
  • No preemption of state laws.
  • Strong enforcement with a private right of action.
  • No pay-for-privacy schemes.
  • No deceptive design.

A strong comprehensive data privacy law promotes privacy, free expression, and security. It can also help protect children, support journalism, protect access to health care, foster digital justice, limit private data collection to train generative AI, limit foreign government surveillance, and strengthen competition. These are all issues on which lawmakers are actively pushing legislation—both good and bad.

Comprehensive privacy legislation won’t fix everything. Children may still see things that they shouldn’t. New businesses will still have to struggle against the deep pockets of their established tech giant competitors. Governments will still have tools to surveil people directly. But with this one big step in favor of privacy, we can take a bite out of many of those problems, and foster a more humane, user-friendly technological future for everyone.

Access to Law Should Be Fully Open: Tell Congress Not to Be Fooled by the Pro Codes Act

October 25, 2023 at 13:07

It’s Open Access Week in the United States, which means it’s a chance to celebrate the accomplishments of the Open Access movement, and reinforce the need to keep fighting. We’ve come a long way, with governments, universities, and research funders all successfully pressuring publishers to improve access to knowledge and finding ways to do it themselves.

TAKE ACTION

Tell Congress: Access To Laws Should Be Fully Open

At EFF, we are especially proud of the work we have done helping our client, Public.Resource.Org (PRO), improve public access to the law. Public Resource’s mission is to make all government information available to the governed. As part of that mission, it posts safety codes, such as the National Electrical Code, on its website, for free, in a fully accessible format, where those codes have been adopted into law by reference.

You didn’t learn about incorporation by reference from Schoolhouse Rock, but it’s one of the key ways policymakers create law. A huge portion of the regulations we all live by (such as fire safety codes, or the National Electrical Code) are initially written, by industry experts, government officials, and other volunteers, under the auspices of standards development organizations (SDOs). Federal, state, or municipal policymakers then review the codes and decide whether the standard is a good broad rule. If so, it is adopted into law “by reference.” In other words, the regulation cites the code by name but doesn’t copy and paste the entire thing into law (useful when the code is long and detailed). For example, if a regulation requires compliance with the National Fire Safety Code, it might simply refer to specific provisions or the code as a whole, rather than copying it in directly. But that doesn’t make compliance any less mandatory.

When a pipeline bursts, journalists might want to investigate whether the pipeline complied with federal regulations, or compare federal, state, and local rules. When a toy is recalled, parents want to know whether its maker followed child safety rules. When a fire breaks out, homeowners and communities want to know whether the building complied with fire safety regulations. Online access to safety regulations helps make that review, and accountability, possible.

The rub: the SDOs claim to own copyright in these rules, even after they become law, and that they are therefore allowed to sell and otherwise control access to them. Based on that claim, they sued Public Resource for copyright infringement. 

But court after court has recognized that no one can own the law. The Supreme Court held as much in its very first copyright case, and recently reaffirmed it: if “every citizen is presumed to know the law,” the Court observed, “it needs no argument to show . . . that all should have free access to its contents.” And in September 2023, after a decade of litigation, a federal appeals court held that Public Resource’s database was a lawful fair use.

Which brings us to the latest threat. Having lost in court, the SDOs are now looking to Congress to shore up their copyright claim, via the Pro Codes Act. It’s a tricky bit of legislation that seems innocuous if you don’t know the context. 

Pro Codes’ main provision requires that: 

An original work of authorship otherwise subject to protection under this title that has been adopted or incorporated by reference, in full or in part, into any Federal, State, or municipal law or regulation, shall retain such protection only if the owner of the copyright makes the work available at no monetary cost for viewing by the public in electronic form on a publicly accessible website in a location on the website that is readily accessible to the public.

Sounds good, right? In fact, it sounds obvious: mandatory regulations should be made available online, for free, so people can more easily know, share, and comment on them. Here’s the trick: this language would effectively endorse the claim that SDOs can “retain” copyright in the law, as long as they let the public read it online.  

There are many problems with this approach. First and foremost, “access” here means read-only, and subject to licensing limits.  We already know what that looks like: currently the SDOs that make their codes available to the public online do so through clunky, disorganized, siloed websites, largely inaccessible to the print-disabled, and subject to onerous contractual terms (like a requirement to give up your personal information). The public can’t copy, print, or even link to specific portions of the codes. In other words, you can look at the law (as long as you aren’t print-disabled and you know what to look for), but you can’t share it, compare it, or comment on it. As multiple amici who filed briefs in support of Public Resource explained, the public needs more.  

Second, it doesn’t really make sense. The many volunteers who develop these codes neither need nor want a copyright incentive. The SDOs don’t need it either—they don’t do anything creative (convening volunteers is important work, but not creative work), and they make plenty of profit through trainings, membership fees, and selling standards that haven’t been incorporated into law.

Third, it’s unconstitutional under the First, Fifth, and Fourteenth Amendments, which guarantee the public’s right to read, share, and discuss the law.  

Finally, there is no need for this bill. It simply mandates that SDOs do badly what Public Resource is already doing, better, for free.  

The Pro Codes Act is a deceptive power grab that will help giant industry associations ration access to huge swaths of U.S. law. Tell Congress not to fall for it. 

 EFF is proud to celebrate Open Access Week. 

TAKE ACTION

Tell Congress: Access To Laws Should Be Fully Open

Internet Access Shouldn't Be a Bargaining Chip In Geopolitical Battles

We at EFF are horrified by the events transpiring in the Middle East: Hamas’ deadly attack on southern Israel last weekend and Israel’s ongoing retributive military attack and siege on Gaza. While we are not experts in military strategy or international diplomacy, we do have expertise with how human rights and civil liberties should be protected on the internet—even in times of conflict and war. 

That is why we are deeply concerned that a key part of Israel’s response has been to target telecommunications infrastructure in Gaza, including effectively shutting down the internet.

Here are a few reasons why:  

Shutting down telecommunications deprives civilians of a life-saving tool for sharing information when they need it the most. 

In wartime, being able to communicate directly with the people you trust is instrumental to personal safety and protection, and may ultimately mean the difference between life and death. But right now, the millions of people in Gaza, who are already facing a dire humanitarian crisis, are experiencing oppressive limitations on their access to the internet—stifling their ability to find out where their families are, obtain basic information about resources and any promised humanitarian aid, learn of safer border crossings, and share other crucial information.

The internet was built, in part, to make sure that communications like this are possible. And despite its use for spreading harmful content and misinformation, the internet is particularly imperative in moments of war and conflict, when sharing and receiving real-time and up-to-date information is critical for survival. For example, what was previously a safe escape route may no longer be safe even a few hours later, and news printed in a broadsheet may no longer be reliable or relevant the following day.

The internet enables this flow of information to remain active and alert to new realities. Shutting down access to internet services creates impossible obstacles for the millions of people trapped in Gaza. It is eroding access to the lifeline that millions of civilians need to stay alive.  

Shutting down telecommunications will not silence Hamas.  

We also understand the impulse to respond to Hamas’ shocking use of the internet to terrorize Israelis, including by taking over the Facebook pages of people they have taken hostage to livestream and post horrific footage. We urge social media and other platforms to act quickly when such abuses occur, which typically they can already do under their respective terms of use. But the Israeli government’s reaction of shutting down all internet communications in Gaza is a wrongheaded response and one that will impact exactly the wrong people.

Hamas is sufficiently well-resourced to maneuver through any infrastructural barriers, including any internet shutdowns imposed by the Israeli government. Further, since Israel isn’t able to limit the voice of Hamas, the internet shutdown effectively allows Hamas to dominate the Palestinian narrative in the public vernacular—eliminating the voices of activists, journalists, and ordinary people documenting their realities and sharing facts in real-time.  

Shutting down telecommunications sets a dangerous precedent.  

Given the proliferation of the internet and its use in pivotal social and political moments, governments are very aware of their power to cut off that access. Shutdowns have become a blunt instrument that aids state violence and suppresses free speech, and they are routinely deployed by authoritarian governments that do not care about the rule of law or human rights. For example, limiting access to the internet was a vital component of the Syrian government’s repressive strategy in 2013, and Egyptian President Hosni Mubarak shut down all internet access for five days in 2011 in an effort to impair Egyptians’ ability to coordinate and communicate. As we’ve said before, access to the Internet shouldn’t be a bargaining chip in geopolitical battles. Instead of protecting the human rights of civilians, Israel has adopted a disproportionate tactic often used by the authoritarian governments of Iran, Russia, and Myanmar.

Israel is a party to the International Covenant on Civil and Political Rights and has long claimed to be committed to upholding and protecting human rights. But shutting off access to telecommunications for millions of ordinary Palestinians is grossly inconsistent with that claim, and instead sends the message that the Israeli government is actively working to ensure that ordinary Palestinians are placed at an even greater risk of harm than they already are. It also sends the unmistakable message that the Israeli government is preventing people around the world from learning the truth about its actions in Gaza, something that is affirmed by Israel’s other actions, like approving new regulations to temporarily shut down news channels that “damage national security.”

We call on Israel to stop interfering with the telecommunications infrastructure in Gaza, and to ensure Palestinians from Gaza to the West Bank immediately have unrestricted access to the internet.    
