NO FAKES – A Dream for Lawyers, a Nightmare for Everyone Else

Performers and ordinary humans are increasingly concerned that they may be replaced or defamed by AI-generated imitations. We’re seeing a host of bills designed to address that concern – but every one just generates new problems. Case in point: the NO FAKES Act. We flagged numerous flaws in a “discussion draft” back in April, to no avail: the final text has been released, and it’s even worse.  

Under NO FAKES, any individual has the right to sue anyone who has either made, or made available, their “digital replica.” A replica is broadly defined as “a newly-created, computer generated, electronic representation of the image, voice or visual likeness” of a person. The right applies to the person themselves; to anyone who has a license to use their image, voice, or likeness; and to their heirs for up to 70 years after the person dies. Because it is a federal intellectual property right, Section 230 protections – a crucial liability shield for platforms and anyone else that hosts or shares user-generated content – will not apply. And the legal risk begins the moment a person gets a notice claiming the content is unlawful, even if they didn’t create the replica and have no way to verify whether it was authorized. NO FAKES thereby creates a classic “heckler’s veto”: anyone can use a specious accusation to get speech they don’t like taken down.

The bill proposes a variety of exclusions – for news, satire, biopics, criticism, and the like – meant to limit the impact on free expression, but their application is uncertain at best. For example, there’s an exemption for use of a replica in a “bona fide” news broadcast, provided that the replica is “materially relevant” to the subject of the broadcast. Will citizen journalism qualify as “bona fide”? And who decides whether the replica is “materially relevant”?

These are just some of the many open questions, all of which promise full employment for lawyers – but likely for no one else, particularly not those whose livelihoods depend on the freedom to create journalism or art about famous people.

The bill also includes a safe harbor scheme modeled on the DMCA notice-and-takedown process. To stay within the NO FAKES safe harbors, a platform that receives a notice of illegality must remove “all instances” of the allegedly unlawful content – a broad requirement that will encourage platforms to adopt “replica filters” similar to deeply flawed copyright filters like YouTube’s Content ID. Platforms that ignore such a notice can be on the hook just for linking to unauthorized replicas. And every single copy made, transmitted, or displayed is a separate violation incurring a $5,000 penalty – which adds up fast: a replica copied or streamed just 10,000 times would, on that arithmetic, mean $50 million in potential liability. The bill does throw platforms a not-very-helpful bone: if they can show they had an objectively reasonable belief that the content was lawful, their liability is capped at $1 million if they guess wrong.

All of this is a recipe for private censorship. For decades, the DMCA process has been regularly abused to target lawful speech, and there’s every reason to suppose NO FAKES will lead to the same result.  

Worse, NO FAKES offers even fewer safeguards for lawful speech than the DMCA does. For example, the DMCA includes a relatively simple counter-notice process that a speaker can use to get their work restored. NO FAKES does not. Instead, NO FAKES puts the burden on the speaker to run to court within 14 days to defend their rights. The powerful have lawyers on retainer who can do that; most creators, activists, and citizen journalists do not.

NO FAKES does include a provision that, in theory, would allow improperly targeted speakers to hold notice senders accountable. But they must prove that the lie was “knowing,” which can be interpreted to mean that the sender gets off scot-free as long as they subjectively believe the lie to be true, no matter how unreasonable that belief. Given the multiple open questions about how to interpret the various exemptions (not to mention the common confusion about the limits of IP protection that we’ve already seen), that’s pretty cold comfort.

These significant flaws should doom the bill, and that’s a shame, because the underlying concerns are real. Deceptive AI-generated replicas can cause real harms, and performers have a right to fair compensation for the use of their likenesses, should they choose to allow that use. Existing laws can address most of this; where gaps remain, Congress should be considering narrowly targeted and proportionate proposals to fill them.

The NO FAKES Act is neither targeted nor proportionate. It’s also a significant congressional overreach – the Constitution forbids granting a property right in (and therefore a monopoly over) facts, including a person’s name or likeness.

The best we can say about NO FAKES is that it includes provisions protecting individuals with unequal bargaining power in negotiations over the use of their likeness. For example, the new right can’t be completely transferred to someone else (like a film studio or advertising agency) while the person is alive, so a person can’t be pressured or tricked into handing over total control of their public identity (their heirs still can, but the dead celebrity presumably won’t care). And minors get some additional protections, such as a limit on how long their rights can be licensed while they are underage.

But the costs of the bill far outweigh the benefits. NO FAKES creates an expansive and confusing new intellectual property right that lasts far longer than is reasonable or prudent, and has far too few safeguards for lawful speech. The Senate should throw it out and start over. 

EFF Reminds the Supreme Court That Copyright Trolls Are Still a Problem

At EFF, we spend a lot of time calling out the harm caused by copyright trolls and protecting internet users from their abuses. Copyright trolls are serial plaintiffs who use search tools to identify technical, often low-value infringements on the internet, and then seek nuisance settlements from many defendants. These trolls take advantage of some of copyright law’s worst features—especially the threat of massive, unpredictable statutory damages—to impose a troublesome tax on many uses of the internet.

On Monday, EFF continued the fight against copyright trolls by filing an amicus brief in Warner Chappell Music v. Nealy, a case pending in the U.S. Supreme Court. The case doesn’t deal with copyright trolls directly. Rather, it involves the interpretation of the statute of limitations in copyright cases. Statutes of limitations are laws that limit the time after an event within which legal proceedings may be initiated. The purpose is to encourage plaintiffs to file their claims promptly, and to avoid stale claims and unfairness to defendants when time has passed and evidence might be lost. For example, in California, the statute of limitations for a breach of contract claim is generally four years.

U.S. copyright law contains a statute of limitations of three years “after the claim accrued.” Warner Chappell Music v. Nealy deals with the question of exactly what this means. Warner Chappell Music, the defendant in the case, argued that the claim accrued when the alleged infringement occurred, giving a plaintiff three years after that to recover damages. Plaintiff Nealy argued that his claim didn’t “accrue” until he discovered the alleged infringement, or reasonably should have discovered it. This “discovery rule” would permit Nealy to recover damages for acts that occurred long ago – much longer than three years – as long as he filed suit within three years of that “discovery.” Suppose, for example, that an infringement occurred in 2010 but the plaintiff only discovered it in 2023: under Warner Chappell’s reading, damages would be unavailable; under the discovery rule, a suit filed in 2023 could reach all the way back to 2010.

How does all this affect copyright trolls? The “discovery rule” lets trolls reach far, far back in time to find alleged infringements (such as a photo posted on a website), and plausibly threaten their targets with years of accumulated damages. All they have to do is argue that they couldn’t reasonably have discovered the infringement until recently. The trolls’ targets would have trouble defending against ancient claims, and be more likely to have to pay nuisance settlements.

EFF’s amicus brief provided the court with an overview of the copyright trolling problem and gave examples of types of trolls. The brief then showed how an unlimited look-back period for damages under the discovery rule adds risk and uncertainty for the targets of copyright trolls and would encourage abuse of the legal system.

EFF’s brief in this case is a little unusual – the case doesn’t directly involve technology or technology companies (except to the extent that they could be targets of copyright trolls), and the party we’re supporting is a leading music publishing company. Other amici on the same side include the RIAA, the U.S. Chamber of Commerce, and the Association of American Publishers. But because statutes of limitations are fundamental to the justice system, this rare coalition perhaps isn’t so surprising.

In many previous copyright troll cases, the courts have caught on to the trolls’ abuse of the judicial system and taken steps to shut down the trolling. EFF filed its brief in this case to ask the Supreme Court to extend those judicial safeguards by holding that copyright infringement damages can be recovered only for acts occurring in the three years before the filing of the complaint. An unlimited look-back for damages would throw gasoline on the copyright troll fire and risk encouraging new trolls to come out from under the figurative bridge.
