Electronic Frontier Foundation

It's Copyright Week 2025: Join Us in the Fight for Better Copyright Law and Policy

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation.

One of the unintended consequences of the internet is that more of us than ever are aware of how much of our lives is affected by copyright. People see their favorite YouTuber’s video get removed or re-edited due to copyright. People know they can’t tinker with or fix their devices. And people have realized, and are angry about, the fact that they don’t own much of the media they have paid for.  

All of this is to say that copyright is no longer—if it ever was—a niche concern of certain industries. As corporations have pushed to expand copyright, they have made it everyone’s problem. And that means they don’t get to make the law in secret anymore. 

Thirteen years ago, a diverse coalition of Internet users, non-profit groups, and Internet companies defeated the Stop Online Piracy Act (SOPA) and the PROTECT IP Act (PIPA), bills that would have forced Internet companies to blacklist and block websites accused of hosting copyright-infringing content. These were bills that would have made censorship very easy, all in the name of copyright protection. 

As people raise more and more concerns about the major technology companies that control our online lives, it’s important not to fall into the trap of thinking that copyright will save us. As SOPA/PIPA reminds us: expanding copyright serves the gatekeepers, not the users.  

We continue to fight for a version of copyright that does what it is supposed to. And so, every year, EFF and a number of diverse organizations participate in Copyright Week. Each year, we pick five copyright issues to highlight and advocate a set of principles of copyright law. This year’s issues are: 

  • Monday: Copyright Policy Should Be Made in the Open With Input From Everyone: Copyright is not a niche concern. It affects everyone’s experience online, so laws and policy should be made in the open and with users’ concerns represented and taken into account. 
  • Tuesday: Copyright Enforcement as a Tool of Censorship: Freedom of expression is a fundamental human right essential to a functioning democracy. Copyright should encourage more speech, not act as a legal cudgel to silence it.  
  • Wednesday: Device and Digital Ownership: As the things we buy increasingly exist either in digital form or as devices with software, we also find ourselves subject to onerous licensing agreements and technological restrictions. If you buy something, you should be able to truly own it – meaning you can learn how it works, repair it, remove unwanted features, or tinker with it to make it work in a new way.  
  • Thursday: The Preservation and Sharing of Information and Culture: Copyright often blocks the preservation and sharing of information and culture, traditionally in the public interest. Copyright law and policy should encourage and not discourage the saving and sharing of information. 
  • Friday: Free Expression and Fair Use: Copyright policy should encourage creativity, not hamper it. Fair use makes it possible for us to comment, criticize, and rework our common culture.  

Every day this week, we’ll be sharing links to blog posts on these topics at https://www.eff.org/copyrightweek. 

EFF to Michigan Supreme Court: Cell Phone Search Warrants Must Strictly Follow The Fourth Amendment’s Particularity and Probable Cause Requirements

By Hannah Zhao
January 24, 2025, 19:03

Last week, EFF, along with the Criminal Defense Attorneys of Michigan, ACLU, and ACLU of Michigan, filed an amicus brief in People v. Carson in the Supreme Court of Michigan, challenging the constitutionality of the search warrant for Mr. Carson's smart phone.

In this case, Mr. Carson was arrested for stealing money from his neighbor's safe with a co-conspirator. A few months later, law enforcement applied for a search warrant for Mr. Carson's cell phone. The search warrant enumerated the claims that formed the basis for Mr. Carson's arrest, but the only mention of a cell phone was a law enforcement officer's general assertion that phones are communication devices often used in the commission of crimes. A warrant was issued which allowed the search of the entirety of Mr. Carson's smart phone, with no temporal or category limits on the data to be searched. Evidence found on the phone was then used to convict Mr. Carson.

On appeal, the Court of Appeals made a number of rulings in favor of Mr. Carson, including that evidence from the phone should not have been admitted because the search warrant lacked particularity and was unconstitutional. The Michigan Supreme Court accepted the government's appeal, and we filed an amicus brief.

In our brief, we argued that the warrant was constitutionally deficient and overbroad: there was no probable cause to search the cell phone, and the warrant was insufficiently particular because it failed to limit the search to a time frame or to certain categories of information.

As the U.S. Supreme Court recognized in Riley v. California, electronic devices such as smart phones “differ in both a quantitative and a qualitative sense” from other objects. The devices contain immense storage capacities and are filled with sensitive and revealing data, including apps for everything from banking to therapy to religious practices to personal health. As the refrain goes, whatever the need, there's an app for that. This special nature of digital devices requires courts to review warrants to search digital devices with heightened attention to the Fourth Amendment’s probable cause and particularity requirements.

In this case, the warrant fell far short. In order for there to be probable cause to search an item, the warrant application must establish a “nexus” between the incident being investigated and the place to be searched. But the application in this case gave no reason why evidence of the theft would be found on Mr. Carson's phone. Instead, it only stated the allegations leading to Mr. Carson's arrest and boilerplate language about cell phone use among criminals. While those facts may establish probable cause to arrest Mr. Carson, they did not establish probable cause to search Mr. Carson's phone. If it were otherwise, the government would always be able to search the cell phone of someone they had probable cause to arrest, thereby eradicating the independent determination of whether probable cause exists to search something. Without a nexus between the crime and Mr. Carson’s phone, there was no probable cause.

Moreover, the warrant allowed for the search of “any and all data” contained on the cell phone, with no limits whatsoever. These "all content" warrants are the exact type of general warrants against which the Fourth Amendment and its state corollaries were meant to protect. Cell phone search warrants that have been upheld have contained temporal constraints and limits on the categories of data to be searched. Neither of those limitations, nor any other, appeared in the issued search warrant. The police should have included date limitations when applying for the search warrant, as they did in their warrant applications for other searches in the same investigation. Additionally, the warrant allowed the search of all the information on the phone, the vast majority of which did not—and could not—contain evidence related to the investigation.

As smart phones become more capacious and take on more functions, it is imperative that courts construe warrants for the search of electronic devices narrowly, in keeping with the basic purpose of the Fourth Amendment: to safeguard the privacy and security of individuals against arbitrary invasions by governmental officials.

Face Scans to Estimate Our Age: Harmful and Creepy AF

January 23, 2025, 18:56

Government must stop restricting website access with laws requiring age verification.

Some advocates of these censorship schemes argue we can nerd our way out of the many harms they cause to speech, equity, privacy, and infosec. Their silver bullet? “Age estimation” technology that scans our faces, applies an algorithm, and guesses how old we are – before letting us access online content and opportunities to communicate with others. But when confronted with age estimation face scans, many people will refrain from accessing restricted websites, even when they have a legal right to use them. Why?

Because quite simply, age estimation face scans are creepy AF – and harmful. First, age estimation is inaccurate and discriminatory. Second, its underlying technology can be used to try to estimate our other demographics, like ethnicity and gender, as well as our names. Third, law enforcement wants to use its underlying technology to guess our emotions and honesty, which in the hands of jumpy officers is likely to endanger innocent people. Fourth, age estimation face scans create privacy and infosec threats for the people scanned. In short, government should be restraining this hazardous technology, not normalizing it through age verification mandates.

Error and discrimination

Age estimation is often inaccurate. It’s in the name: age estimation. That means these face scans will regularly mistake adults for adolescents, and wrongfully deny them access to restricted websites. By the way, it will also sometimes mistake adolescents for adults.

Age estimation also is discriminatory. Studies show face scans are more likely to err in estimating the age of people of color and women. Which means that as a tool of age verification, these face scans will have an unfair disparate impact.

Estimating our identity and demographics

Age estimation is a tech sibling of face identification and the estimation of other demographics. To users, all face scans look the same and we shouldn’t allow them to become a normal part of the internet. When we submit to a face scan to estimate our age, a less scrupulous company could flip a switch and use the same face scan, plus a slightly different algorithm, to guess our name or other demographics.

Some companies are in both the age estimation business and the face identification business.

Other developers claim they can use age estimation’s underlying technology – application of an algorithm to a face scan – to estimate our gender (like these vendors) and our ethnicity (like these vendors). But these scans are likely to misidentify the many people whose faces do not conform to gender and ethnic averages (such as transgender people). Worse, powerful institutions can harm people with this technology. China uses face scans to identify ethnic Uyghurs. Transphobic legislators may try to use them to enforce bathroom bans. For this reason, advocates have sought to prohibit gender estimation face scans.

Estimating our emotions and honesty

Developers claim they can use age estimation’s underlying technology to estimate our emotions (like these vendors). But this will always have a high error rate, because people express emotions differently, based on culture, temperament, and neurodivergence. Worse, researchers are trying to use face scans to estimate deception, and even criminality. Mind-reading technologies have a long and dubious history, from phrenology to polygraphs.

Unfortunately, powerful institutions may believe the hype. In 2008, the U.S. Department of Homeland Security disclosed its efforts to use “image analysis” of “facial features” (among other biometrics) to identify “malintent” of people being screened. Other policing agencies are using algorithms to analyze emotions and deception.

When police technology erroneously identifies a civilian as a threat, many officers overreact. For example, ALPR errors recurringly prompt police officers to draw guns on innocent drivers. Some government agencies now advise drivers to keep their hands on the steering wheel during a traffic stop, to reduce the risk that the driver’s movements will frighten the officer. Soon such agencies may be advising drivers not to roll their eyes, because the officer’s smart glasses could misinterpret that facial expression as anger or deception.

Privacy and infosec

The government should not be forcing tech companies to collect even more personal data from users. Companies already collect too much data and have proved they cannot be trusted to protect it.

Age verification face scans create new threats to our privacy and information security. These systems collect a scan of our face and guess our age. A poorly designed system might store this personal data, and even correlate it to the online content that we look at. In the hands of an adversary, and cross-referenced to other readily available information, this information can expose intimate details about us. Our faces are unique, immutable, and constantly on display – creating risk of biometric tracking across innumerable virtual and IRL contexts. Last year, hackers breached an age verification company (among many other companies).

Of course, there are better and worse ways to design a technology. Some privacy and infosec risks might be reduced, for example, by conducting face scans on-device instead of in-cloud, or by deleting everything immediately after a visitor passes the age test. But lower-risk does not mean zero-risk. Clever hackers might find ways to breach even well-designed systems, companies might suddenly change their systems to make them less privacy-protective (perhaps at the urging of government), and employees and contractors might abuse their special access. Numerous states are mandating age verification with varying rules for how to do so; numerous websites are subject to these mandates; and numerous vendors are selling face scanning services. Inevitably, many of these websites and services will fail to maintain the most privacy-preserving systems, because of carelessness or greed.
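
To make that design tradeoff concrete, here is a minimal sketch of the "on-device, delete immediately" pattern described above. It is purely illustrative: estimate_age() is a hypothetical placeholder rather than any real vendor API, and even a faithful implementation of this pattern only reduces, not eliminates, the risks discussed here.

```python
# Hypothetical sketch of a privacy-preserving, on-device age check.
# estimate_age() is a placeholder, not a real vendor API.

def estimate_age(image_bytes: bytes) -> float:
    """Stand-in for a locally run age-estimation model."""
    raise NotImplementedError("a vendor's on-device model would go here")

def passes_age_gate(image_bytes: bytes, threshold: float = 18.0) -> bool:
    """Run the scan locally and keep only a yes/no answer.

    In this design, the face image and the numeric estimate stay on the
    device, are never written to disk or sent to a server, and are
    discarded as soon as the check completes; only the boolean result
    leaves this function.
    """
    try:
        return estimate_age(image_bytes) >= threshold
    finally:
        del image_bytes  # drop our reference to the scan immediately
```

Even then, a visitor at the checkpoint has no way to confirm that this is what the code actually does, which is the problem the following paragraphs describe.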

Also, face scanning algorithms are often trained on data that was collected using questionable privacy methods—whether from users who gave murky consent or from non-users. The government data sets used to test biometric algorithms sometimes come from prisoners and immigrants.

Most significantly, when most people arrive at most age verification checkpoints, they will have no idea whether the face scan system has minimized the privacy and infosec risks. So many visitors will turn away, and forgo the content and conversations available on restricted websites.

Next steps

Algorithmic face scans are dangerous, whether used to estimate our age, our other demographics, our name, our emotions, or our honesty. Thus, EFF supports a ban on government use of this technology, and strict regulation (including consent and minimization) for corporate use.

At a minimum, government must stop coercing websites into using face scans, as a means of complying with censorious age verification mandates. Age estimation does not eliminate the privacy and security issues that plague all age verification systems. And these face scans cause many people to refrain from accessing websites they have a legal right to access. Because face scans are creepy AF.

Second Circuit Rejects Record Labels’ Attempt to Rewrite the DMCA

By Tori Noble
January 23, 2025, 18:22

In a major win for creator communities, the U.S. Court of Appeals for the Second Circuit has once again handed video streaming site Vimeo a solid win in its long-running legal battle with Capitol Records and a host of other record labels.

The labels claimed that Vimeo was liable for copyright infringement on its site, and specifically that it couldn’t rely on the Digital Millennium Copyright Act’s safe harbor because Vimeo employees “interacted” with user-uploaded videos that included infringing recordings of musical performances owned by the labels. Those interactions included commenting on, liking, promoting, demoting, or posting them elsewhere on the site. The record labels contended that these videos contained popular songs, and that it would’ve been obvious to Vimeo employees that this music was unlicensed.

But as EFF explained in an amicus brief filed in support of Vimeo, even rightsholders themselves mistakenly demand takedowns. Labels often request takedowns of music they don’t own or control, and even request takedowns of their own content. They also regularly target fair uses. When rightsholders themselves cannot accurately identify infringement, courts cannot presume that a service provider can do so, much less a blanket presumption as to hundreds of videos.

In an earlier ruling, the court held that the labels had to show that it would be apparent to a person without specialized knowledge of copyright law that the particular use of the music was unlawful, or prove that the Vimeo workers had expertise in copyright law. The labels argued that Vimeo’s own efforts to educate its employees and users about copyright, among other circumstantial evidence, were enough to meet that burden. The Second Circuit disagreed, finding that:

Vimeo’s exercise of prudence in instructing employees not to use copyrighted music and advising users that use of copyrighted music “generally (but not always) constitutes copyright infringement” did not educate its employees about how to distinguish between infringing uses and fair use.

The Second Circuit also rejected another equally dangerous argument: that Vimeo lost safe harbor protection by receiving a “financial benefit” from infringing activity, such as user-uploaded videos, that the platform had a “right and ability to control.” The labels contended that any website that exercises editorial judgment—for example, by removing, curating, or organizing content—would necessarily have the “right and ability to control” that content. If they were correct, ordinary content moderation would put a platform at risk of crushing copyright liability.

As the Second Circuit put it, the labels’ argument:

would substantially undermine what has generally been understood to be one of Congress’s major objectives in passing the DMCA: encouraging entrepreneurs to establish websites that can offer the public rapid, efficient, and inexpensive means of communication by shielding service providers from liability for infringements placed on the sites by users.

Fortunately, the Second Circuit’s decisions in this case help preserve the safe harbors and the expression and innovation that they make possible. But it should not have taken well over a decade of litigation—and likely several millions of dollars in legal fees—to get there.

Speaking Freely: Lina Attalah

January 23, 2025, 14:51

This interview has been edited for length and clarity.

Jillian York: Welcome, let’s start here. What does free speech or free expression mean to you personally?

Lina Attalah: Being able to think without too many calculations and without fear.

York: What are the qualities that make you passionate about the work that you do, and also about telling stories and utilizing your free expression in that way? 

Well, it ties in with your first question. Free speech is basically being able to express oneself without fear and without too many calculations. These are things that are not granted, especially in the context I work in. I know that it does not exist in any absolute way anywhere, and increasingly so now, but even more so in our context, and historically it hasn't existed in our context. So this has also drawn me to try to unearth what is not being said, what is not being known, what is not being shared. I guess the passion came from that lack more than anything else. Perhaps, if I lived in a democracy, maybe I wouldn't have wanted to be a journalist. 

York: I’d like to ask you about Syria, since you just traveled there. I know that you're familiar with the context there in terms of censorship and the Internet in particular. What do you see in terms of people's hopes for more expression in Syria in the future?

I think even though we share an environment where freedom of expression has been historically stifled, there is an exception to Syria when it comes to the kind of controls there have been on people's ability to express, let alone to organize and mobilize. I think there's also a state of exception when it comes to the price that had to be paid in Syrian prisons for acts of free expression and free speech. This is extremely exceptional to the fabric of Syrian society. So going there and seeing that this condition was gone, after so much struggle, after so much loss, is a situation that is extremely palpable. From the few days I spent there, what was clear to me is that everybody is pretty much uncertain about the future, but there is an undoubted relief that this condition is gone for now, this fear. It literally felt like it's a lower sky, sort of repressing people's chests somehow, and it's just gone. This burden was just gone. It's not all flowery, it's not all rosy. Everybody is uncertain. But the very fact that this fear is gone is very palpable and cannot be taken away from the experience we're living through now in Syria.

York: I love that. Thank you. Okay, let’s go to Egypt a little bit. What can you tell us about the situation for free speech in the context of Egypt? We're coming up on fourteen years since the uprising in 2011 and eleven years since Sisi came to power. And I mean, I guess, contextualize that for our readers who don't know what's happened in Egypt in the past decade or so.

For a quick summary, the genealogy goes as follows. There was a very tight margin through which we managed to operate as journalists, as activists, as people trying to sort of enlarge the space through which we can express ourselves on matters of public concerns in the last years of Mubarak's rule. And this is the time that coincided with the opening up of the internet—back in the time when the internet was also more of a public space, before the overt privatization that we experience in that virtual space as well. Then the Egyptian revolution happened in 2011 and that space further exploded in expression and diversity of voices and people speaking to different issues that had previously been reserved to the hideouts of activist circles. 

Then you had a complete reversal of all of this with the takeover of a military-appointed government. Subsequently, with the election of President Sisi in 2014, it became clear that it was a government that believed that the media's role—this is just one example focusing on the media—is to basically support the government in a very sort of 1960s Nasserite understanding that there is a national project, that he's leading it, and we are his soldiers. We should basically endorse, support, not criticize, not weaken, basically not say anything differently from him. And you know this, of course, transcends the media. Everybody should be a soldier in a way and also the price of doing otherwise has been hefty, in the sense that a lot of people ended up under prosecution, serving prolonged jail sentences, or even spending prolonged times in pre-trial detention without even getting prosecuted.

So you have this total reversal from an unfolding moment of free speech that sort of exploded for a couple of years starting in 2011, and then everything closing up, closing up, closing up to the point where that margin that I started off talking about at the beginning is almost no longer even there. And, on a personal note, I always ask myself if the margin has really tightened or if one just becomes more scared as they grow older? But the margin has indeed tightened quite extensively. Personally, I'm aging and getting more scared. But another objective indicator is that almost all of my friends and comrades who have been with me on this path are no longer around because they are either in prison or in exile or have just opted out from the whole political apparatus. So that says that there isn't the kind of margin through which we managed to maneuver before the revolution.

 York: Earlier you touched on the privatization of online spaces. Having watched the way tech companies have behaved over the past decade, what do you think that these companies fail to understand about the Egyptian and the regional context?

It goes back to how we understand this ecosystem, politically, from the onset. I am someone who thinks of governments and markets, or governments and corporations, as the main actors in a market, as dialectically interchangeable. Let's say they are here to control, they are here to make gains, and we are here to contest them even though we need them. We need the state, we need the companies. But there is no reason on earth to believe that either of them want our best. I'm putting governments and companies in the same bucket, because I think it's important not to fall for the liberals’ way of thinking that the state has certain politics, but the companies are freer or are just after gains. I do think of them as formidable political edifices that are self-serving. For us, the political game is always how to preserve the space that we've created for ourselves, using some of the leverage from these edifices without being pushed over and over. 

For me, this is a very broad political thing, and I think about them as a duality, because, operating as a media organization in a country like Egypt, I have to deal with the dual repression of those two edifices. To give you a very concrete example, in 2017 the Egyptian government blocked my website, Mada Masr, alongside a few other media websites, shortly before going on and blocking hundreds of websites. All independent media websites, without exception, have been blocked in Egypt alongside sites through which you can download VPN services in order to be able to also access these blocked websites. And that's done by the government, right? So one of the things we started doing when this happened in 2017 is we started saying, “Okay, we should invest in Meta. Or back then it was still Facebook, so we should invest in Facebook more. Because the government monitors you.” And this goes back to the relation, the interchangeability of states and companies. The government would block Mada Masr, but would never block Facebook, because it's bad for business. They care about keeping Facebook up and running. 

It's not Syria back in the time of Assad. It's not Tunisia back in the time of Ben Ali. They still want some degree of openness, so they would keep social media open. So we let go of our poetic triumphalism when we said, we will try to invest in more personalized, communitarian dissemination mechanisms when building our audiences, and we'll just go on Facebook. Because what option do we have? But then what happens is that there is another track of censorship, in a different way, that still blocks my content from being able to reach its audiences through all the algorithmic developments that happened and basically the fact that—and this is not specific to Egypt—they just want to think of themselves as the publishers. They started off by treating us as the publishers and themselves as the platforms, but at this point, they want to be everything. And what would we expect from a big company, a profitable company, besides them wanting to be everything? 

York: I don't disagree at this point. I think that there was a point in time where I would have disagreed. When you work closely with companies, it’s easy to fall into the trap of believing that change is possible because you know good people who work there, people who really are trying their best. But those people are rarely capable of shifting the direction of the company, and are often the ones to leave first.

Let’s shift to talking about our friend, Egyptian political prisoner Alaa Abd El-Fattah. You mentioned the impact that the past 11 years, really the past 14 years, have had on people in Egypt. And, of course, there are many political prisoners, but one of the prisoners that EFF readers will be familiar with is Alaa. You recently accepted the English PEN Award on his behalf. Can you tell us more about what he has meant to you?

One way to start talking about Alaa is that I really hope that 2025 is the year when he will get released. It's just ridiculous to keep making that one single demand over and over without seeing any change there. So Alaa has been imprisoned on account of his free speech, his attempt to speak freely. And he attempted to speak, you know, extremely freely in the sense that a lot of his expression is his witty sort of engagement with surrounding political events that came through his personal accounts on social media, in addition to the writing that he's been doing for different media platforms, including ours and yours and so on. And in that sense, he's so unmediated, he’s just free. A truly free spot. He has become the icon of the Egyptian revolution, the symbol of revolutionary spirit who you know is fighting for people's right to free speech and, more broadly, their dignity. I guess I'm trying to make a comment, a very basic comment, on abolition and, basically, the lack of utility of prisons, and specifically political prisons. Because the idea is to mute that voice. But what has happened throughout all these years of Alaa’s incarceration is that his voice has only gotten amplified by this very lack, by this very absence, right? I always lament about the fact that I do not know if I would have otherwise become very close to Alaa. Perhaps if he was free and up and running, we wouldn't have gotten this close. I have no idea. Maybe he would have just gone working on his tech projects and me on my journalism projects. Maybe we would have tried to intersect, and we had tried to intersect, but maybe we would have gone on without interacting much. But then his imprisonment created this tethering where I learned so much through his absence. 

Somehow I've become much more who I am in terms of the journalism, in terms of the thinking, in terms of the politics, through his absence, through that lack. So there is something that gets created with this aggressive muting of a voice that should be taken note of. That being said, I don't mean to romanticize absence, because he needs to be free. You know it's, it's becoming ridiculous at this point. His incarceration is becoming ridiculous at this point. 

York: I guess I also have to ask, what would your message be to the UK Government at this point?

Again, it's a test case for what so-called democratic governments can still do to their citizens. There needs to be something more forceful when it comes to demanding Alaa’s release, especially in view of the condition of his mother, who has been on a hunger strike for over 105 days as of the day of this interview. So I can't accept that this cannot be a forceful demand, or this has to go through other considerations pertaining to more abstract bilateral relations and whatnot. You know, just free the man. He's your citizen. You know, this is what's left of what it means to be a democratic government.

York: Who is your free speech hero? 

It’s Alaa. He always warns us of over-symbolizing him or the others. Because he always says, when we over symbolize heroes, they become abstract. And we stop being able to concretize the fights and the resistance. We stop being able to see that this is a universal battle where there are so many others fighting it, albeit a lot more invisible, but at the same time. Alaa, in his person and in what he represents, reminds me of so much courage. A lot of times I am ashamed of my fear. I'm ashamed of not wanting to pay the price, and I still don't want to pay the price. I don't want to be in prison. But at the same time, I look up at someone like Alaa, fearlessly saying what he wants to say, and I’m just always in awe of him. 

The Impact of Age Verification Measures Goes Beyond Porn Sites

As age verification bills pass across the world under the guise of “keeping children safe online,” governments are increasingly giving themselves the authority to decide what topics are deemed “safe” for young people to access, and forcing online services to remove and block anything that may be deemed “unsafe.” This growing legislative trend has sparked significant concerns and numerous First Amendment challenges, including a case currently pending before the Supreme Court–Free Speech Coalition v. Paxton. The Court is now considering how government-mandated age verification impacts adults’ free speech rights online.

These challenges keep arising because this isn’t just about safety—it’s censorship. Age verification laws target a slew of broadly-defined topics. Some block access to websites that contain some "sexual material harmful to minors," but define the term so loosely that “sexual material” could encompass anything from sex education to R-rated movies; others simply list a variety of vaguely-defined harms. In either instance, lawmakers and regulators could use the laws to target LGBTQ+ content online.

This risk is especially clear given what we already know about platform content policies. These policies, which claim to "protect children" or keep sites “family-friendly,” often label LGBTQ+ content as “adult” or “harmful,” while similar content that doesn't involve the LGBTQ+ community is left untouched. Sometimes, this impact—the censorship of LGBTQ+ content—is implicit, and only becomes clear when the policies (and/or laws) are actually implemented. Other times, this intended impact is explicitly spelled out in the text of the policies and bills.

In either case, it is critical to recognize that age verification bills could block far more than just pornography.

Take Oklahoma’s bill, SB 1959, for example. This state age verification law aims to prevent young people from accessing content that is “harmful to minors” and went into effect last November 1st. It incorporates definitions from another Oklahoma statute, Statute 21-1040, which defines material “harmful to minors” as any description or exhibition, in whatever form, of nudity and “sexual conduct.” That same statute then defines “sexual conduct” as including acts of “homosexuality.” Explicitly, then, SB 1959 requires a site to verify someone’s age before showing them content about homosexuality—a vague enough term that it could potentially apply to content from organizations like GLAAD and Planned Parenthood.

This vague definition will undoubtedly cause platforms to over-censor content relating to LGBTQ+ life, health, or rights out of fear of liability. Separately, bills such as SB 1959 might also cause users to self-police their own speech for the same reasons, fearing de-platforming. The law leaves platforms unsure and unable to precisely exclude the minimum amount of content that fits the bill's definition, leading them to over-censor content that may well include this very blog post.

Beyond Individual States: Kids Online Safety Act (KOSA)

Laws like the proposed federal Kids Online Safety Act (KOSA) make government officials the arbiters of what young people can see online and will lead platforms to implement invasive age verification measures to avoid the threat of liability. If KOSA passes, it will lead to people who make online content about sex education, and LGBTQ+ identity and health, being persecuted and shut down as well. All it will take is one member of the Federal Trade Commission seeking to score political points, or a state attorney general seeking to ensure re-election, to start going after the online speech they don’t like. These speech burdens will also affect regular users as platforms mass-delete content in the name of avoiding lawsuits and investigations under KOSA. 

Senator Marsha Blackburn, co-sponsor of KOSA, has expressed a priority in “protecting minor children from the transgender [sic] in this culture and that influence.” KOSA, to Senator Blackburn, would address this problem by limiting content in the places “where children are being indoctrinated.” Yet these efforts all fail to protect children from the actual harms of the online world, and instead deny vulnerable young people a crucial avenue of communication and access to information. 

LGBTQ+ Platform Censorship by Design

While the censorship of LGBTQ+ content through age verification laws can be represented as an “unintended consequence” in certain instances, barring access to LGBTQ+ content is part of the platforms' design. One of the more pervasive examples is Meta suppressing LGBTQ+ content across its platforms under the guise of protecting younger users from "sexually suggestive content.” According to a recent report, Meta has been hiding posts that reference LGBTQ+ hashtags like #lesbian, #bisexual, #gay, #trans, and #queer for users that turned the sensitive content filter on, as well as showing users a blank page when they attempt to search for LGBTQ+ terms. This leaves teenage users with no choice in what content they see, since the sensitive content filter is turned on for them by default. 

This policy change came on the back of a protracted effort by Meta to allegedly protect teens online. In January last year, the corporation announced a new set of “sensitive content” restrictions across its platforms (Instagram, Facebook, and Threads), including hiding content which the platform no longer considered age-appropriate. This was followed later by the introduction of Instagram For Teens to further limit the content users under the age of 18 could see. This feature sets minors’ accounts to the most restrictive levels by default, and teens under 16 can only reverse those settings through a parent or guardian. 

Meta has apparently now reversed the restrictions on LGBTQ+ content after calling the issue a “mistake.” This is not good enough. In allowing pro-LGBTQ+ content to be integrated into the sensitive content filter, Meta has aligned itself with those that are actively facilitating a violent and harmful removal of rights for LGBTQ+ people—all under the guise of keeping children and teens safe. Not only is this a deeply flawed strategy, it harms everyone who wishes to express themselves on the internet. These policies are written and enforced discriminatorily and at the expense of transgender, gender-fluid, and nonbinary speakers. They also often convince or require platforms to implement tools that, using the laws' vague and subjective definitions, end up blocking access to LGBTQ+ and reproductive health content.

The censorship of this content prevents individuals from being able to engage with such material online to explore their identities, advocate for broader societal acceptance and against hate, build communities, and discover new interests. With corporations like Meta intervening to decide how people create, speak, and connect, a crucial form of engagement for all kinds of users has been removed and the voices of people with less power are regularly shut down. 

And at a time when LGBTQ+ individuals are already under vast pressure from violent homophobic threats offline, these online restrictions have an amplified impact. 

LGBTQ+ youth are at a higher risk of experiencing bullying and rejection, often turning to online spaces as outlets for self-expression. For those without family support or who face the threat of physical or emotional abuse at home because of their sexual orientation or gender identity, the internet becomes an essential resource. A report from the Gay, Lesbian & Straight Education Network (GLSEN) highlights that LGBTQ+ youth engage with the internet at higher rates than their peers, often showing greater levels of civic engagement online compared to offline. Access to digital communities and resources is critical for LGBTQ+ youth, and restricting access to them poses unique dangers.

Call to Action: Digital Rights Are LGBTQ+ Rights

These laws have the potential to harm us all—including the children they are designed to protect. 

As more U.S. states and countries pass age verification laws, it is crucial to recognize the broader implications these measures have on privacy, free speech, and access to information. This conglomeration of laws poses significant challenges for users trying to maintain anonymity online and access critical content—whether it’s LGBTQ+ resources, reproductive health information, or otherwise. These policies threaten the very freedoms they purport to protect, stifling conversations about identity, health, and social justice, and creating an environment of fear and repression. 

The fight against these laws is not just about defending online spaces; it’s about safeguarding the fundamental rights of all individuals to express themselves and access life-saving information.

We need to stand up against these age verification laws—not only to protect users’ free expression rights, but also to safeguard the free flow of information that is vital to a democratic society. Reach out to your state and federal legislators, raise awareness about the consequences of these policies, and support organizations like LGBT Tech, the ACLU, the Woodhull Freedom Foundation, and others that are fighting for the digital rights of young people alongside EFF.

The fight for the safety and rights of LGBTQ+ youth is not just a fight for visibility—it’s a fight for their very survival. Now more than ever, it’s essential for allies, advocates, and marginalized communities to push back against these dangerous laws and ensure that the internet remains a space where all voices can be heard, free from discrimination and censorship.

Texas Is Enforcing Its State Data Privacy Law. So Should Other States.

January 22, 2025, 17:31

States need to have and use data privacy laws to bring privacy violations to light and hold companies accountable for them. So, we were glad to see that the Texas Attorney General’s Office has filed its first lawsuit under the Texas Data Privacy and Security Act (TDPSA) to take the Allstate Corporation to task for sharing driver location and other driving data without telling customers.

In its complaint, the attorney general’s office alleges that Allstate and a number of its subsidiaries (some of which go by the name “Arity”) “conspired to secretly collect and sell ‘trillions of miles’ of consumers’ ‘driving behavior’ data from mobile devices, in-car devices, and vehicles.” (The defendant companies are also accused of violating Texas’ data broker law and its insurance law prohibiting unfair and deceptive practices.)

On the privacy front, the complaint says the defendant companies created a software development kit (SDK), which is basically a set of tools that developers can use to integrate functions into an app. In this case, the Texas Attorney General says that Allstate and Arity specifically designed this toolkit to scrape location data. They then allegedly paid third parties, such as the app Life360, to embed it in their apps. The complaint also alleges that Allstate and Arity chose to promote their SDK to third-party apps that already required the use of location data, specifically so that people wouldn’t be alerted to the additional collection.

That’s a dirty trick. Data that you can pull from cars is often highly sensitive, as we have raised repeatedly. Everyone should know when that information's being collected and where it's going.

The Texas Attorney General’s office estimates that 45 million Americans, including those in Texas, unwittingly downloaded this software that collected their information, including location information, without notice or consent. This violates Texas’ privacy law, which went into effect in July 2024 and requires companies to provide a reasonably accessible privacy notice, give conspicuous notice that they’re selling or processing sensitive data for targeted advertising, and obtain consumer consent to process sensitive data.

This is a low bar, and the companies named in this complaint still allegedly failed to clear it. As law firm Husch Blackwell pointed out in its write-up of the case, all Arity had to do, for example, to fulfill one of the notice obligations under the TDPSA was to put up a line on their website saying, “NOTICE: We may sell your sensitive personal data.”

In fact, Texas’ privacy law does not meet the minimum of what we’d consider a strong privacy law. For example, the Texas Attorney General is the only one who can file a lawsuit under the state’s privacy law. But we advocate for provisions that let everyone, not only state attorneys general, file suit to make sure that all companies respect our privacy.

Texas’ privacy law also has a “right to cure”—essentially a 30-day period in which a company can “fix” a privacy violation and duck a Texas enforcement action. EFF opposes rights to cure, because they essentially give companies a “get-out-of-jail-free” card when caught violating privacy law. In this case, Arity was notified and given the chance to show it had cured the violation. It just didn’t.

According to the complaint, Arity apparently failed to take even basic steps that would have spared it from this enforcement action. Other companies violating our privacy may be more adept at getting out of trouble, but they should be found and taken to task too. That’s why we advocate for strong privacy laws that do even more to protect consumers.

Nineteen states now have some version of a data privacy law. Enforcement has been a bit slower. California has brought a few enforcement actions since its privacy law went into effect in 2020; Texas and New Hampshire are two states that have created dedicated data privacy units in their Attorney General offices, signaling they’re staffing up to enforce their laws. More state regulators should follow suit and use the privacy laws on their books. And more state legislators should enact and strengthen their laws to make sure companies are truly respecting our privacy.

The FTC’s Ban on GM and OnStar Selling Driver Data Is a Good First Step

January 22, 2025, 16:30

The Federal Trade Commission announced a proposed settlement under which General Motors and its subsidiary, OnStar, will be banned from selling geolocation and driver behavior data to credit agencies for five years. That’s good news for G.M. owners. Every car owner and driver deserves to be protected.

Last year, a New York Times investigation highlighted how G.M. was sharing information with insurance companies without clear knowledge from the driver. This resulted in people’s insurance premiums increasing, sometimes without them realizing why that was happening. This data sharing problem was common amongst many carmakers, not just G.M., but figuring out what your car was sharing was often a Sisyphean task, somehow managing to be more complicated than trying to learn similar details about apps or websites.

The FTC complaint zeroed in on how G.M. enrolled people in its OnStar connected vehicle service with a misleading process. OnStar was initially designed to help drivers in an emergency, but over time the service collected and shared more data that had nothing to do with emergency services. The result was people signing up for the service without realizing they were agreeing to share their location and driver behavior data with third parties, including insurance companies and consumer reporting agencies. The FTC also alleged that G.M. didn’t disclose who the data was shared with (insurance companies) and for what purposes (to deny or set rates). Asking car owners to choose between safety and privacy is a nasty tactic, and one that deserves to be stopped.

For the next five years, the settlement bans G.M. and OnStar from these sorts of privacy-invasive practices, making it so they cannot share driver data or geolocation to consumer reporting agencies, which gather and sell consumers’ credit and other information. They must also obtain opt-in consent to collect data, allow consumers to obtain and delete their data, and give car owners an option to disable the collection of location data and driving information.

These are all important, solid steps, and these sorts of rules should apply to all carmakers. With privacy-related options buried away in websites, apps, and infotainment systems, it is currently far too difficult to see what sort of data your car collects, and it is not always possible to opt out of data collection or sharing. In reality, no consumer knowingly agrees to let their carmaker sell their driving data to other companies.

All carmakers should be forced to protect their customers’ privacy, and they should have to do so for longer than just five years. The best way to ensure that would be through comprehensive consumer data privacy legislation with strong data minimization rules and requirements for clear, opt-in consent. With a strong privacy law, all carmakers—not just G.M.—would only have authority to collect, maintain, use, and disclose our data to provide a service that we asked for.

VICTORY! Federal Court (Finally) Rules Backdoor Searches of 702 Data Unconstitutional

Better late than never: last night a federal district court held that backdoor searches of databases full of Americans’ private communications collected under Section 702 ordinarily require a warrant. The landmark ruling comes in a criminal case, United States v. Hasbajrami, after more than a decade of litigation, and over four years since the Second Circuit Court of Appeals found that backdoor searches constitute “separate Fourth Amendment events” and directed the district court to determine whether a warrant was required. Now, that has been officially decreed.

In the intervening years, Congress has reauthorized Section 702 multiple times, each time ignoring overwhelming evidence that the FBI and the intelligence community abuse their access to databases of warrantlessly collected messages and other data. The Foreign Intelligence Surveillance Court (FISC), which Congress tasked with the primary role of judicial oversight of Section 702, has also repeatedly dismissed arguments that the backdoor searches violate the Fourth Amendment, giving the intelligence community endless do-overs despite its repeated transgressions of even lax safeguards on these searches.

This decision sheds light on the government’s liberal use of what is essentially a “finders keepers” rule regarding your communication data. As a legal authority, FISA Section 702 allows the intelligence community to collect a massive amount of communications data from overseas in the name of “national security.” But, in cases where one side of that conversation is a person on U.S. soil, that data is still collected and retained in large databases searchable by federal law enforcement. Because the U.S. side of these communications is already collected and just sitting there, the government has claimed that law enforcement agencies do not need a warrant to sift through them. EFF argued for over a decade that this is unconstitutional, and now a federal court agrees with us.

Hasbajrami involves a U.S. resident who was arrested at New York’s JFK airport in 2011 on his way to Pakistan and charged with providing material support to terrorists. Only after his original conviction did the government explain that its case was premised in part on emails between Mr. Hasbajrami and an unnamed foreigner associated with terrorist groups, emails collected without a warrant using Section 702 programs, placed in a database, then searched, again without a warrant, using terms related to Mr. Hasbajrami himself.

The district court found that, regardless of whether the government can lawfully collect communications between foreigners and Americans under Section 702 without a warrant, it cannot ordinarily rely on a “foreign intelligence exception” to the Fourth Amendment’s warrant clause when searching these communications, as is the FBI’s routine practice. And even if such an exception did apply, the court found that the intrusion on privacy caused by reading our most sensitive communications rendered these searches “unreasonable” under the meaning of the Fourth Amendment. In 2021 alone, the FBI conducted 3.4 million warrantless searches of U.S. persons’ 702 data.

In light of this ruling, we ask Congress to uphold its responsibility to protect civil rights and civil liberties by refusing to renew Section 702 absent a number of necessary reforms, including an official warrant requirement for querying U.S. persons’ data and increased transparency. On April 15, 2026, Section 702 is set to expire. We expect any lawmaker worthy of that title to listen to what this federal court is saying and create a legislative warrant requirement so that the intelligence community does not continue to trample on the constitutionally protected rights to private communications. More immediately, the FISC should amend its rules for backdoor searches and require the FBI to seek a warrant before conducting them.

Protecting “Free Speech” Can’t Just Be About Targeting Political Opponents

By Joe Mullin
January 22, 2025, 10:40

The White House executive order “restoring freedom of speech and ending federal censorship,” published Monday, misses the mark on truly protecting Americans’ First Amendment rights. 

The order calls for an investigation of efforts under the Biden administration to “moderate, deplatform, or otherwise suppress speech,” especially on social media companies. It goes on to order an Attorney General investigation of any government activities “over the last 4 years” that are inconsistent with the First Amendment. The order states in part: 

Under the guise of combatting “misinformation,” “disinformation,” and “malinformation,” the Federal Government infringed on the constitutionally protected speech rights of American citizens across the United States in a manner that advanced the Government’s preferred narrative about significant matters of public debate.

But noticeably absent from the Executive Order is any commitment to government transparency. In the Santa Clara Principles, a guideline for online content moderation authored by EFF and other civil society groups, we state that “governments and other state actors should themselves report their involvement in content moderation decisions, including data on demands or requests for content to be actioned or an account suspended, broken down by the legal basis for the request." This Executive Order doesn’t come close to embracing such a principle. 

The order is also misguided in its time-limited targeting. Informal government efforts to persuade, cajole, or strong-arm private media platforms, also called “jawboning,” have been an aspect of every U.S. government since at least 2011. Any good-faith inquiry into such pressures would not be limited to a single administration. It’s misleading to suggest the previous administration was the only, or even the primary, source of such pressures. This time limit reeks of political vindictiveness, not a true effort to limit improper government actions. 

To be clear, a look back at past government involvement in online content moderation is a good thing. But an honest inquiry would not be time-limited to the actions of a political opponent, nor limited to only past actions. The public would also be better served by a report that had a clear deadline, and a requirement that the results be made public, rather than sent only to the President’s office. Finally, the investigation would be better placed with an inspector general, not the U.S. Attorney General, which implies possible prosecutions. 

As we have written before, the First Amendment forbids the government from coercing private entities to censor speech. This principle has countered efforts to pressure intermediaries like bookstores and credit card processors to limit others’ speech. But not every communication about user speech is unconstitutional; some are beneficial, like when platforms reach out to government agencies as authoritative sources of information. 

For anyone who may have been excited to see a first-day executive order truly focused on free expression, President Trump’s Jan. 20 order is a disappointment, at best. 

EFF Sends Transition Memo on Digital Policy Priorities to New Administration and Congress

By Josh Richman
January 21, 2025 at 10:30
Topics Include National Security Surveillance, Consumer Privacy, AI, Cybersecurity, and Many More

SAN FRANCISCO—Standing up for technology users in 2025 and beyond requires careful thinking about government surveillance, consumer privacy, artificial intelligence, and encryption, among other topics. To help incoming federal policymakers think through these key issues, the Electronic Frontier Foundation (EFF) has shared a transition memo with the Trump Administration and the 119th U.S. Congress. 

“We routinely work with officials and staff in the White House and Congress on a wide range of policies that will affect digital rights in the coming years,” said EFF Director of Federal Affairs India McKinney. “As the oldest, largest, and most trusted nonpartisan digital rights organization, EFF’s litigators, technologists, and activists have a depth of knowledge and experience that remains unmatched. This memo focuses on how Congress and the Trump Administration can prioritize helping ordinary Americans protect their digital freedom.”  

The 64-page memo covers topics such as surveillance, including warrantless digital dragnets, national security surveillance, face recognition technology, border surveillance, and reproductive justice; encryption and cybersecurity; consumer privacy, including vehicle data, age verification, and digital identification; artificial intelligence, including algorithmic decision-making, transparency, and copyright concerns; broadband access and net neutrality; Section 230’s protections of free speech online; competition; copyright; the Computer Fraud and Abuse Act; and patents. 

EFF also shared a transition memo with the incoming Biden Administration and Congress in 2020. 

“The new Congress and the Trump Administration have an opportunity to make the internet a much better place for users. This memo should serve as a blueprint for how they can do so,” said EFF Executive Director Cindy Cohn. “We’ll be here when this administration ends and the next one takes over, and we’ll continue to push. Our nonpartisan approach to tech policy works because we always work for technology users.” 

For the 2025 transition memo: https://eff.org/document/eff-transition-memo-trump-administration-2025 

For the 2020 transition memo: https://www.eff.org/document/eff-transition-memo-incoming-biden-administration-november-2020

Contact:
India McKinney, Director of Federal Affairs
Maddie Daly, Assistant Director of Federal Affairs

VPNs Are Not a Solution to Age Verification Laws

VPNs are having a moment. 

On January 1st, Florida joined 18 other states in implementing an age verification law that burdens Floridians' access to sites that host adult content, including pornography websites like Pornhub. In protest of these laws, Pornhub blocked access to users in Florida. Residents of the “Free State of Florida” have now lost access to the world's most popular adult entertainment website, which is also the 16th-most-visited website of any kind in the world.

At the same time, Google Trends data showed a spike in searches for VPNs across Florida, presumably because users are trying to access the site via VPNs.

How Did This Happen?

Nearly two years ago, Louisiana enacted a law that started a wave across neighboring states in the U.S. South: Act 440. This wave of legislation has significantly impacted how residents in these states access “adult” or “sexual” content online. Florida, Tennessee, and South Carolina are now among the nearly half of U.S. states where, because of restrictive laws touted as child protection measures, users either cannot access many major adult websites at all or must verify their age to do so. These laws introduce surveillance systems that threaten everyone’s rights to speech and privacy, and they introduce more harm than the harms they seek to combat.

Despite experts from across civil society flagging concerns about the impact of these laws on both adults’ and children’s rights, politicians in Florida pushed ahead and enacted one of the most contentious age verification mandates, HB 3, in 2024.

HB 3 is a part of the state’s ongoing efforts to regulate online content, and requires websites that host “adult material” to implement a method of verifying the age of users before they can access the site. Specifically, it mandates that adult websites require users to submit a form of government-issued identification, or use a third-party age verification system approved by the state. The law also bans anyone under 14 from accessing or creating a social media account. Websites that fail to comply with the law's age verification requirements face civil penalties and could be subject to lawsuits from the state. 

Pornhub, to its credit, understands these risks. In response to the implementation of age verification laws in various states, the company has taken a firm stand by blocking access to users in regions where such laws are enforced. Before the law’s implementation date, Florida users were greeted with this message: “You will lose access to PornHub in 12 days. Did you know that your government wants you to give your driver’s license before you can access PORNHUB?” 

Pornhub then restricted access to Florida residents on January 1st, 2025—right when HB 3 was set to take effect. The platform expressed concerns that the age verification requirements would compromise user privacy, pointing out that these laws would force platforms to collect sensitive personal data, such as government-issued identification, which could lead to potential breaches and misuse of that information. In a statement to local news, Aylo, Pornhub’s parent company, said that they have “publicly supported age verification for years” but they believe this law puts users’ privacy at risk:

Unfortunately, the way many jurisdictions worldwide, including Florida, have chosen to implement age verification is ineffective, haphazard, and dangerous. Any regulations that require hundreds of thousands of adult sites to collect significant amounts of highly sensitive personal information is putting user safety in jeopardy. Moreover, as experience has demonstrated, unless properly enforced, users will simply access non-compliant sites or find other methods of evading these laws.

This is not speculation. We have seen how this scenario plays out in the United States. In Louisiana last year, Pornhub was one of the few sites to comply with the new law. Since then, our traffic in Louisiana dropped approximately 80 percent. These people did not stop looking for porn. They just migrated to darker corners of the internet that don't ask users to verify age, that don't follow the law, that don't take user safety seriously, and that often don't even moderate content. In practice, the laws have just made the internet more dangerous for adults and children.

The company’s response reflects broader concerns over privacy and digital rights, as many fear that these measures are a step toward increased government surveillance online. 

How Do VPNs Play a Role? 

Within this context, it is no surprise that Google searches for VPNs in Florida have skyrocketed. But as more states and countries pass age verification laws, it is crucial to recognize the broader implications these measures have on privacy, free speech, and access to information. While VPNs may be able to disguise the source of your internet activity, they are not foolproof—nor should they be necessary to access legally protected speech. 

A VPN routes all your network traffic through an "encrypted tunnel" between your devices and the VPN server. The traffic then leaves the VPN to its ultimate destination, masking your original IP address. From a website's point of view, it appears your location is wherever the VPN server is. A VPN should not be seen as a tool for anonymity. While it can protect your location from some companies, a disreputable VPN service might deliberately collect personal information or other valuable data. There are many other ways companies may track you while you use a VPN, including GPS, web cookies, mobile ad IDs, tracking pixels, or fingerprinting.
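To make the IP-masking point concrete, here is a minimal, hypothetical sketch in Python. It assumes the third-party `requests` library (with SOCKS support installed) and a VPN client that happens to expose a local SOCKS5 proxy on port 1080; https://httpbin.org/ip is used only because it echoes back whatever IP address a request appears to come from. This is an illustration of the concept, not a recommendation of any particular tool.

```python
# Minimal sketch: compare the IP address a website "sees" with and without
# a VPN-style tunnel. Assumes `requests` (plus SOCKS support) is installed
# and that a VPN client exposes a SOCKS5 proxy at 127.0.0.1:1080.
import requests

def apparent_ip(proxies=None):
    # httpbin.org/ip simply reports the source IP address it received.
    resp = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10)
    return resp.json()["origin"]

print("Direct connection:", apparent_ip())

# Routed through the tunnel, the site sees the VPN server's address instead
# of yours -- but the VPN operator still sees both ends of the connection.
vpn_proxies = {"https": "socks5h://127.0.0.1:1080"}
print("Through the tunnel:", apparent_ip(proxies=vpn_proxies))
```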

With varying mandates across different regions, it will become increasingly difficult for VPNs to effectively circumvent these age verification requirements because each state or country may have different methods of enforcement and different types of identification checks, such as government-issued IDs, third-party verification systems, or biometric data. As a result, VPN providers will struggle to keep up with these constantly changing laws and ensure users can bypass the restrictions, especially as more sophisticated detection systems are introduced to identify and block VPN traffic. 

The ever-growing conglomeration of age verification laws poses significant challenges for users trying to maintain anonymity online and has the potential to harm us all—including the young people these laws are designed to protect. 

What Can You Do?

If you are working to protect your privacy or want to learn more about VPNs, EFF provides a comprehensive guide on using VPNs and protecting digital privacy–a valuable resource for anyone looking to use these tools.

No one should have to hand over their driver’s license just to access free websites. EFF has long fought against mandatory age verification laws, from the U.S. to Canada and Australia. In a climate of weakening rights for already vulnerable communities online, politicians around the globe must acknowledge these shortcomings and explore less invasive approaches to protecting people from online harms.

Dozens of bills currently being debated by state and federal lawmakers could result in dangerous age verification mandates. We will resist them. We must stand up against these types of laws, not just for the sake of free expression, but to protect the free flow of information that is essential to a free society. Contact your state and federal legislators, raise awareness about the unintended consequences of these laws, and support organizations that are fighting for digital rights and privacy protections alongside EFF, such as the ACLU, Woodhull Freedom Foundation, and others.

Mad at Meta? Don't Let Them Collect and Monetize Your Personal Data

By Lena Cohen
January 17, 2025 at 10:59

If you’re fed up with Meta right now, you’re not alone. Google searches for deleting Facebook and Instagram spiked last week after Meta announced its latest policy changes. These changes, seemingly designed to appease the incoming Trump administration, included loosening Meta’s hate speech policy to allow for the targeting of LGBTQ+ people and immigrants. 

If these changes—or Meta’s long history of anti-competitive, censorial, and invasive practices—make you want to cut ties with the company, it’s sadly not as simple as deleting your Facebook account or spending less time on Instagram. Meta tracks your activity across millions of websites and apps, regardless of whether you use its platforms, and it profits from that data through targeted ads. If you want to limit Meta’s ability to collect and profit from your personal data, here’s what you need to know.

Meta’s Business Model Relies on Your Personal Data

You might think of Meta as a social media company, but its primary business is surveillance advertising. Meta’s business model relies on collecting as much information as possible about people in order to sell highly-targeted ads. That’s why Meta is one of the main companies tracking you across the internet—monitoring your activity far beyond its own platforms. When Apple introduced changes to make tracking harder on iPhones, Meta lost billions in revenue, demonstrating just how valuable your personal data is to its business. 

How Meta Harvests Your Personal Data

Meta’s tracking tools are embedded in millions of websites and apps, so you can’t escape the company’s surveillance just by avoiding or deleting Facebook and Instagram. Meta’s tracking pixel, found on 30% of the world’s most popular websites, monitors people’s behavior across the web and can expose sensitive information, including financial and mental health data. A 2022 investigation by The Markup found that a third of the top U.S. hospitals had sent sensitive patient information to Meta through its tracking pixel. 
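To see why a single pixel can reveal so much, consider what a tracking beacon actually transmits. The sketch below is purely illustrative: the endpoint and parameter names are invented (tracker.example.com is not a real tracker, and this is not Meta's actual schema), but the mechanism is the same, a tiny request that carries a site identifier, an event name, and the URL of the page you were viewing back to the ad company.

```python
# Illustrative sketch of the kind of request a tracking pixel fires when a
# page loads. Endpoint and parameter names are hypothetical, not Meta's.
from urllib.parse import urlencode

def pixel_beacon_url(pixel_id: str, page_url: str, event: str = "PageView") -> str:
    params = {
        "id": pixel_id,   # identifies which site operator's pixel fired
        "ev": event,      # the event being reported (page view, purchase, ...)
        "dl": page_url,   # the page you were on -- this is what ties your
                          # browsing on unrelated sites back to one profile
    }
    return "https://tracker.example.com/tr?" + urlencode(params)

print(pixel_beacon_url("1234567890", "https://hospital.example.org/appointments"))
```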

Meta’s surveillance isn’t limited to your online activity. The company also encourages businesses to send it data about your offline purchases and interactions. Even deleting your Facebook and Instagram accounts won’t stop Meta from harvesting your personal data: in 2018, Meta admitted to collecting information about non-users, including their contact details and browsing history.

Take These Steps to Limit How Meta Profits From Your Personal Data

Although Meta’s surveillance systems are pervasive, there are ways to limit how Meta collects and uses your personal data. 

Update Your Meta Account Settings

Open your Instagram or Facebook app and navigate to the Accounts Center page. 

[Screenshot: the Meta Accounts Center page]

If your Facebook and Instagram accounts are linked on your Accounts Center page, you only have to update the following settings once. If not, you’ll have to update them separately for Facebook and Instagram. Once you find your way to the Accounts Center, the directions below are the same for both platforms.

Meta makes it harder than it should be to find and update these settings. The following steps are accurate at the time of publication, but Meta often changes their settings and adds additional steps. The exact language below may not match what Meta displays in your region, but you should have a setting controlling each of the following permissions.

Once you’re on the “Accounts Center” page, make the following changes:

1) Stop Meta from targeting ads based on data it collects about you on other apps and websites: 

Click the Ad preferences option under Accounts Center, then select the Manage Info tab (this tab may be called Ad settings depending on your location). Click the Activity information from ad partners option, then Review Setting. Select the option for No, don’t make my ads more relevant by using this information and click the “Confirm” button when prompted.

[Screenshot: the "Activity information from ad partners" setting with the "No" option selected]

2) Stop Meta from using your data (from Facebook and Instagram) to help advertisers target you on other apps. Meta’s ad network connects advertisers with other apps through privacy-invasive ad auctions—generating more money and data for Meta in the process.

Back on the Ad preferences page, click the Manage info tab again (called Ad settings depending on your location), then select the Ads shown outside of Meta setting, select Not allowed and then click the “X” button to close the pop-up.

Depending on your location, this setting will be called Ads from ad partners on the Manage info tab.

[Screenshot: the "Ads outside Meta" setting with the "Not allowed" option selected]

3) Disconnect the data that other companies share with Meta about you from your account:

From the Accounts Center screen, click the Your information and permissions option, followed by Your activity off Meta technologies, then Manage future activity. On this screen, choose the option to Disconnect future activity, followed by the Continue button, then confirm one more time by clicking the Disconnect future activity button. Note: This may take up to 48 hours to take effect.

Note: This will also clear previous activity, which might log you out of apps and websites you’ve signed into through Facebook.

[Screenshot: the "Manage future activity" setting with the "Disconnect future activity" option selected]

While these settings limit how Meta uses your data, they won’t necessarily stop the company from collecting it and potentially using it for other purposes. 

Install Privacy Badger to Block Meta’s Trackers

Privacy Badger is a free browser extension by EFF that blocks trackers—like Meta’s pixel—from loading on websites you visit. It also replaces embedded Facebook posts, Like buttons, and Share buttons with click-to-activate placeholders, blocking another way that Meta tracks you. The next version of Privacy Badger (coming next week) will extend this protection to embedded Instagram and Threads posts, which also send your data to Meta.
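For a rough sense of what "blocking trackers" means in practice, here is a toy sketch of the decision a blocker makes for each outgoing request: is this a third-party request to a known tracking domain? Privacy Badger itself learns which domains track you rather than shipping a fixed list, so the hard-coded domains and the simplistic third-party check below are assumptions made only for illustration.

```python
# Toy sketch of tracker blocking: flag third-party requests to domains on a
# (hypothetical, hard-coded) tracker list. Privacy Badger's real heuristics
# are learned, not hard-coded.
from urllib.parse import urlparse

TRACKER_DOMAINS = {"facebook.com", "facebook.net"}

def should_block(request_url: str, page_url: str) -> bool:
    req_host = urlparse(request_url).hostname or ""
    page_host = urlparse(page_url).hostname or ""
    third_party = not req_host.endswith(page_host)  # crude first-party check
    on_list = any(req_host == d or req_host.endswith("." + d) for d in TRACKER_DOMAINS)
    return third_party and on_list

print(should_block("https://connect.facebook.net/en_US/fbevents.js",
                   "https://news.example.com/article"))   # True: third-party tracker
print(should_block("https://news.example.com/style.css",
                   "https://news.example.com/article"))   # False: first-party asset
```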

Visit privacybadger.org to install Privacy Badger on your web browser. Currently, Firefox on Android is the only mobile browser that supports Privacy Badger. 

Limit Meta’s Tracking on Your Phone

Take these additional steps on your mobile device:

  • Disable your phone’s advertising ID to make it harder for Meta to track what you do across apps. Follow EFF’s instructions for doing this on your iPhone or Android device.
  • Turn off location access for Meta’s apps. Meta doesn’t need to know where you are all the time to function, and you can safely disable location access without affecting how the Facebook and Instagram apps work. Review this setting using EFF’s guides for your iPhone or Android device.

The Real Solution: Strong Privacy Legislation

Stopping a company you distrust from profiting off your personal data shouldn’t require tinkering with hidden settings and installing browser extensions. Instead, your data should be private by default. That’s why we need strong federal privacy legislation that puts you—not Meta—in control of your information. 

Without strong privacy legislation, Meta will keep finding ways to bypass your privacy protections and monetize your personal data. Privacy is about more than safeguarding your sensitive information—it’s about having the power to prevent companies like Meta from exploiting your personal data for profit.

EFF Statement on U.S. Supreme Court's Decision to Uphold TikTok Ban

By David Greene
January 17, 2025 at 10:49

We are deeply disappointed that the Court failed to require the strict First Amendment scrutiny required in a case like this, which would’ve led to the inescapable conclusion that the government's desire to prevent potential future harm had to be rejected as infringing millions of Americans’ constitutionally protected free speech. We are disappointed to see the Court sweep past the undisputed content-based justification for the law – to control what speech Americans see and share with each other – and rule only based on the shaky data privacy concerns.

The United States’ foreign foes easily can steal, scrape, or buy Americans’ data by countless other means. The ban or forced sale of one social media app will do virtually nothing to protect Americans' data privacy – only comprehensive consumer privacy legislation can achieve that goal. Shutting down communications platforms or forcing their reorganization based on concerns of foreign propaganda and anti-national manipulation is an eminently anti-democratic tactic, one that the US has previously condemned globally.

Systemic Risk Reporting: A System in Crisis?

January 16, 2025 at 12:45

The first batch of reports assessing the so-called “systemic risks” posed by the largest online platforms is in. These reports are a result of the Digital Services Act (DSA), Europe’s new law regulating platforms like Google, Meta, Amazon, and X, and have been eagerly awaited by civil society groups across the globe. In their reports, companies are supposed to assess whether their services contribute to a wide range of barely defined risks. These go beyond the dissemination of illegal content and include vaguely defined categories such as negative effects on the integrity of elections, impediments to the exercise of fundamental rights, or the undermining of civic discourse. We have previously warned that the subjectivity of these categories invites a politicization of the DSA.  

In view of a new DSA investigation into TikTok’s potential role in Romania’s presidential election, we take a look at the reports and the framework that has produced them to understand their value and limitations.  

A Short DSA Explainer  

The DSA covers a lot of different services. It regulates online markets like Amazon or Shein, social networks like Instagram and TikTok, search engines like Google and Bing, and even app stores like those run by Apple and Google. Different obligations apply to different services, depending on their type and size. Generally, the lower the degree of control a service provider has over content shared via its product, the fewer obligations it needs to comply with.   

For example, hosting services like cloud computing must provide points of contact for government authorities and users, and must meet basic transparency reporting requirements. Online platforms, meaning any service that makes user-generated content available to the public, must meet additional requirements, like providing users with detailed information about content moderation decisions and the right to appeal. They must also comply with additional transparency obligations.  

While the DSA is a necessary update to the EU’s liability rules and improves users’ rights, we have plenty of concerns with the route that it takes:  

  • We worry about the powers it gives to authorities to request user data and the obligation on providers to proactively share user data with law enforcement.  
  • We are also concerned about the ways in which trusted flaggers could lead to the over-removal of speech, and  
  • We caution against the misuse of the DSA’s mechanism to deal with emergencies like a pandemic. 

Introducing Systemic Risks 

The most stringent DSA obligations apply to large online platforms and search engines that have more than 45 million users in the EU. The European Commission has so far designated more than 20 services to constitute such “very large online platforms” (VLOPs) or “very large online search engines” (VLOSEs). These companies, which include X, TikTok, Amazon, Google Search, Maps and Play, YouTube, and several porn platforms, must proactively assess and mitigate “systemic risks” related to the design, operation, and use of their services. The DSA’s non-exhaustive list of risks includes four broad categories: 1) the dissemination of illegal content, 2) negative effects on the exercise of fundamental rights, 3) threats to elections, civic discourse and public safety, and 4) negative effects and consequences in relation to gender-based violence, protection of minors and public health, and on a person’s physical and mental wellbeing.  

The DSA does not provide much guidance on how VLOPs and VLOSEs are supposed to analyze whether they contribute to this somewhat arbitrary-seeming list of risks. Nor does the law offer clear definitions of how these risks should be understood, leading to concerns that they could be interpreted widely and lead to the extensive removal of lawful but awful content. There is equally little guidance on risk mitigation, as the DSA merely names a few measures that platforms can choose to employ. Some of these recommendations are incredibly broad, such as adapting the design, features, or functioning of a service, or “reinforcing internal processes.” Others, like introducing age verification measures, are much more specific but come with a host of issues and can undermine fundamental rights themselves.   

Risk Management Through the Lens of the Romanian Election 

Per the DSA, platforms must annually publish reports detailing how they have analyzed and managed risks. These reports are complemented by separate reports compiled by external auditors, tasked with assessing platforms’ compliance with their obligations to manage risks and other obligations put forward by the DSA.  

To better understand the merits and limitations of these reports, let’s examine the example of the recent Romanian election. In late November 2024, an ultranationalist and pro-Russian candidate, Calin Georgescu, unexpectedly won the first round of Romania’s presidential election. After reports by local civil society groups accusing TikTok of amplifying pro-Georgescu content, and a declassified brief published by Romania’s intelligence services that alleges cyberattacks and influence operations, the Romanian constitutional court annulled the results of the election. Shortly after, the European Commission opened formal proceedings against TikTok for insufficiently managing systemic risks related to the integrity of the Romanian election. Specifically, the Commission’s investigation focuses on “TikTok's recommender systems, notably the risks linked to the coordinated inauthentic manipulation or automated exploitation of the service and TikTok's policies on political advertisements and paid-for political content.” 

TikTok’s own risk assessment report dedicates eight pages to potential negative effects on elections and civic discourse. Curiously, TikTok’s definition of this particular category of risk focuses on the spread of election misinformation but makes no mention of coordinated inauthentic behavior or the manipulation of its recommender systems. This illustrates the wide margin platforms have to define systemic risks and implement their own mitigation strategies. Leaving it up to platforms to define relevant risks not only makes it impossible to compare the approaches taken by different companies, it can also lead to overly broad or overly narrow approaches, potentially undermining fundamental rights or running counter to the obligation to effectively deal with risks, as in this example. It should also be noted that mis- and disinformation are terms not defined by international human rights law and are therefore not well suited as a robust basis on which freedom of expression may be restricted.  

In its report, TikTok describes the measures taken to mitigate potential risks to elections and civic discourse. This overview broadly describes some election-specific interventions, like labels for content that has not been fact-checked but might contain misinformation, and describes TikTok’s policies, like its ban on political ads, which is notoriously easy to circumvent. It gives no indication that the robustness and utility of these measures have been documented or tested, nor any benchmark for when TikTok considers a risk successfully mitigated. It does not, for example, contain figures on how many pieces of content receive certain labels, or how these labels influence users’ interactions with the content in question.  

Similarly, the report does not contain any data regarding the efficacy of TikTok’s enforcement of its political ads ban. TikTok’s “methodology” for risk assessments, also included in the report, does not help in answering any of these questions, either. And looking at the report compiled by the external auditor, in this case KPMG, we are once again left disappointed: KPMG concluded that it was impossible to assess TikTok’s systemic risk compliance because of two earlier, pending investigations by the European Commission due to potential non-compliance with the systemic risk mitigation obligations. 

Limitations of the DSA’s Risk Governance Approach 

What then, is the value of the risk and audit reports, published roughly a year after their finalization? The answer may be very little.  

As explained above, companies have a lot of flexibility in how to assess and deal with risks. On the one hand, some degree of flexibility is necessary: every VLOP and VLOSE differs significantly in terms of product logics, policies, user base and design choices. On the other hand, the high degree of flexibility in determining what exactly a systemic risk is can lead to significant inconsistencies and render risk analysis unreliable. It also allows regulators to put forward their own definitions, thereby potentially expanding risk categories as they see fit to deal with emerging or politically salient issues.  

Rather than making sense of diverse and possibly conflicting definitions of risks, companies and regulators should put forward joint benchmarks, and include civil society experts in the process. 

Speaking of benchmarks: There is a critical lack of standardized processes, assessment methodologies, and reporting templates. Most assessment reports contain very little information on how the actual assessments are carried out, and the auditors’ reports distinguish themselves through an almost complete lack of insight into the auditing process itself. This information is crucial: without knowing whether auditors were provided the necessary information, whether they ran into roadblocks when looking at specific issues, and how evidence was produced and documented, it is nearly impossible to adequately scrutinize the reports themselves. And without methodologies that are applicable across the board, it will remain very challenging, if not impossible, to compare the approaches taken by different companies.  

The TikTok example shows that the risk and audit reports do not contain the “smoking gun” some might have hoped for. Besides the shortcomings explained above, this is due to the inherent limitations of the DSA itself. Although the DSA attempts to take a holistic approach to complex societal risks that cut across different but interconnected challenges, its reporting system is forced to only consider the obligations put forward by the DSA itself. Any legal assessment framework will struggle to capture complex societal challenges like the integrity of elections or public safety. In addition, phenomena as complex as electoral processes and civic discourse are shaped by a range of different legal instruments, including European rules on political ads, data protection, cybersecurity and media pluralism, not to mention countless national laws. Expecting a definitive answer on the potential implications of large online services on complex societal processes from a risk report will therefore always fall short.  

The Way Forward  

The reports do present a slight improvement in terms of companies’ accountability and transparency. Even if the reports may not include the hard evidence of non-compliance some might have expected, they are a starting point to understanding how platforms attempt to grapple with complex issues taking place on their services. As such, they are, at best, the basis for an iterative approach to compliance. But many of the risks described by the DSA as systemic and their relationships with online services are still poorly understood.  

Instead of relying on platforms or regulators to define how risks should be conceptualized and mitigated, a joint approach is needed, one that builds on the expertise of civil society, academics, and activists, and emphasizes best practices. A collaborative approach would help make sense of these complex challenges and how they can be addressed in ways that strengthen users’ rights and protect fundamental rights.  

Digital Rights and the New Administration | EFFector 37.1

January 15, 2025 at 13:06

It's a new year and EFF is here to help you keep up with your New Year's resolution to stay up-to-date on the latest digital rights news with our EFFector newsletter!

This edition of the newsletter covers our tongue-in-cheek "awards" for some of the worst data breaches in 2024, The Breachies; an explanation of "real-time bidding," the most privacy-invasive surveillance system you may have never heard of; and our notes to Meta on how to empower freedom of expression on their platforms. 

You can read the full newsletter here, and even get future editions directly to your inbox when you subscribe! Additionally, we've got an audio edition of EFFector on the Internet Archive, or you can view it by clicking the button below:

Listen on YouTube: EFFector 37.1 - Digital Rights and the New Administration

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Police Use of Face Recognition Continues to Rack Up Real-World Harms

January 15, 2025 at 11:22

Police have shown, time and time again, that they cannot be trusted with face recognition technology (FRT). It is too dangerous, invasive, and in the hands of law enforcement, a perpetual liability. EFF has long argued that face recognition, whether it is fully accurate or not, is too dangerous for police use,  and such use ought to be banned.

Now, The Washington Post has proved one more reason for this ban: police claim to use FRT just as an investigatory lead, but in practice officers routinely ignore protocol and immediately arrest the most likely match spit out by the computer without first doing their own investigation.

The report also tells the stories of two men who were unknown to the public until now: Christopher Galtin and Jason Vernau. They were wrongfully arrested in St. Louis and Miami, respectively, after being misidentified by face recognition. In both cases, the men were jailed despite readily available evidence that would have shown that, apparent computer match notwithstanding, they were not the right men.

This is infuriating. Just last year, the Assistant Chief of Police for the Miami Police Department, the department that wrongfully arrested Jason Vernau, testified before Congress that his department does not arrest people based solely on face recognition and without proper followup investigations. “Matches are treated like an anonymous tip,” he said during the hearing.

Apparently not all officers got the memo.

We’ve seen this before. Many times. Galtin and Vernau join a growing list of those known to have been wrongfully arrested around the United States based on police use of face recognition. They include Michael Oliver, Nijeer Parks, Randal Reid, Alonzo Sawyer, Robert Williams, and Porcha Woodruff. It is no coincidence that these six people, and now Christopher Galtin, are Black. Scholars and activists have been raising the alarm for years that, in addition to a huge amount of police surveillance generally being directed at Black communities, face recognition specifically has a long history of having a lower rate of accuracy when it comes to identifying people with darker complexions. The case of Robert Williams in Detroit resulted in a lawsuit that ended with the Detroit Police Department, which had used FRT to justify a number of wrongful arrests, instituting strict new guidelines on the use of face recognition technology.

Cities across the United States have decided to join the growing movement to ban police use of face recognition because this technology is simply too dangerous in the hands of police.

Even in a world where the technology is 100% accurate, police still should not be trusted with it. The temptation for police to fly a drone over a protest and use face recognition to identify the crowd would be too great and the risks to civil liberties too high. After all, we already see that police are cutting corners and using their technology in ways that violate their own departmental policies.


We continue to urge cities, states, and Congress to ban police use of face recognition technology. We stand ready to assist. As intrepid tech journalists and researchers continue to do their jobs, increased evidence of these harms will only increase the urgency of our movement. 

EFFecting Change: Digital Rights & the New Administration

January 14, 2025 at 20:18

Please join EFF for the next segment of EFFecting Change, our livestream series covering digital privacy and free speech. 

EFFecting Change Livestream Series:
Digital Rights & the New Administration
Thursday, January 16th
10:00 AM - 11:00 AM Pacific - Check Local Time
This event is LIVE and FREE!

RSVP Today

What direction will your digital rights take under Trump and the 119th Congress? Find out about the topics EFF is watching and the effect they might have on you.

Join our panel of experts as they discuss surveillance, age verification, and consumer privacy. Learn how you can advocate for your digital rights and the resources available to you with our panel featuring EFF Senior Investigative Researcher Beryl Lipton, EFF Senior Staff Technologist Bill Budington, EFF Legislative Director Lee Tien, and EFF Senior Policy Analyst Joe Mullin.

We hope you and your friends can join us live! Be sure to spread the word, and share our past livestreams. Please note that all events will be recorded for later viewing on our YouTube page.

Want to make sure you don’t miss our next livestream? Here’s a link to sign up for updates about this series: eff.org/ECUpdates.

Platforms Systematically Removed a User Because He Made "Most Wanted CEO" Playing Cards

By Jason Kelley
January 14, 2025 at 12:33

On December 14, James Harr, the owner of an online store called ComradeWorkwear, announced on social media that he planned to sell a deck of “Most Wanted CEO” playing cards, satirizing the infamous “Most-wanted Iraqi playing cards” introduced by the U.S. Defense Intelligence Agency in 2003. Per the ComradeWorkwear website, the Most Wanted CEO cards would offer “a critique of the capitalist machine that sacrifices people and planet for profit,” and “Unmask the oligarchs, CEOs, and profiteers who rule our world...From real estate moguls to weapons manufacturers.”  

But within a day of posting his plans for the card deck to his combined 100,000 followers on Instagram and TikTok, the New York Post ran a front page story on Harr, calling the cards “disturbing.” Less than 5 hours later, officers from the New York City Police Department came to Harr's door to interview him. They gave no indication he had done anything illegal or would receive any further scrutiny, but the next day the New York police commissioner held the New York Post story up during a press conference after announcing charges against Luigi Mangione, the alleged assassin of UnitedHealth Group CEO Brian Thompson. Shortly thereafter, platforms from TikTok to Shopify disabled both the company’s accounts and Harr’s personal accounts, simply because he used the moment to highlight what he saw as the harms that large corporations and their CEOs cause.

Harr was not alone. After the assassination, thousands of people took to social media to express their negative experiences with the healthcare industry, speculate about who was behind the murder, and show their sympathy for either the victim or the shooter—if social media platforms allowed them to do so. Many users reported having their accounts banned and content removed after sharing comments about Luigi Mangione, Thompson's alleged assassin. TikTok, for example, reportedly removed comments that simply said, "Free Luigi." Even seemingly benign content, such as a post about Mangione’s astrological sign or a video montage of him set to music, was deleted from Threads, according to users. 

The Most Wanted CEO playing cards did not reference Mangione, and the cards—which have not been released—would not include personal information about any CEO. In his initial posts about the cards, Harr said he planned to include QR codes with more information about each company and, in his view, what dangers the companies present. Each suit would represent a different industry, and the back of each card would include a generic shooting-range style silhouette. As Harr put it in his now-removed video, the cards would include “the person, what they’re a part of, and a QR code that goes to dedicated pages that explain why they’re evil. So you could be like, 'Why is the CEO of Walmart evil? Why is the CEO of Northrop Grumman evil?’” 

[Image: a design for the Most Wanted CEO playing cards]

Many have riffed on the military’s tradition of using playing cards to help troops learn about the enemy. You can currently find “Gaza’s Most Wanted” playing cards on Instagram, purportedly depicting “leaders and commanders of various groups such as the IRGC, Hezbollah, Hamas, Houthis, and numerous leaders within Iran-backed militias.” A Shopify store selling “Covid’s Most Wanted” playing cards, displaying figures like Bill Gates and Anthony Fauci, and including QR codes linking to a website “where all the crimes and evidence are listed,” is available as of this writing. Hero Decks, which sells novelty playing cards generally showing sports figures, even produced a deck of “Wall Street Most Wanted” cards in 2003 (popular enough to have a second edition). 

As we’ve said many times, content moderation at scale, whether human or automated, is impossible to do perfectly and nearly impossible to do well. Companies often get it wrong and remove content or whole accounts that those affected by the content would agree do not violate the platform’s terms of service or community guidelines. Conversely, they allow speech that could arguably be seen to violate those terms and guidelines. That has been especially true for speech related to divisive topics and during heated national discussions. These mistakes often remove important voices, perspectives, and context, regularly impacting not just everyday users but journalists, human rights defenders, artists, sex worker advocacy groups, LGBTQ+ advocates, pro-Palestinian activists, and political groups. In some instances, this even harms people's livelihoods. 

Instagram disabled the ComradeWorkwear account for “not following community standards,” with no further information provided. Harr’s personal account was also banned. Meta has a policy against the "glorification" of dangerous organizations and people, which it defines as "legitimizing or defending the violent or hateful acts of a designated entity by claiming that those acts have a moral, political, logical or other justification that makes them acceptable or reasonable.” Meta’s Oversight Board has overturned multiple moderation decisions by the company regarding its application of this policy. While Harr had posted to Instagram that “the CEO must die” after Thompson’s assassination, he included an explanation that, "When we say the ceo must die, we mean the structure of capitalism must be broken.” (Compare this to a series of Instagram story posts from musician Ethel Cain, whose account is still available, which used the hashtag #KillMoreCEOs, for one of many examples of how moderation affects some people and not others.) 

TikTok told Harr he had violated the platform’s community guidelines, with no additional information. The platform has a policy against "promoting (including any praise, celebration, or sharing of manifestos) or providing material support" to violent extremists or people who cause serial or mass violence. TikTok gave Harr no opportunity to appeal, and it continued to remove additional accounts that Harr created solely to update his followers on his life. TikTok did not point to any specific piece of content that violated its guidelines. 

On December 20, PayPal informed Harr it could no longer continue processing payments for ComradeWorkwear, with no information about why. Shopify informed Harr that his store was selling “offensive content,” and his Shopify and Apple Pay accounts would both be disabled. In a follow-up email, Shopify told Harr the decision to close his account “was made by our banking partners who power the payment gateway.”  

Harr’s situation is not unique. Financial and social media platforms have an enormous amount of control over our online expression, and we’ve long been critical of their over-moderation,  uneven enforcement, lack of transparency, and failure to offer reasonable appeals. This is why EFF co-created The Santa Clara Principles on transparency and accountability in content moderation, along with a broad coalition of organizations, advocates, and academic experts. These platforms have the resources to set the standard for content moderation, but clearly don’t apply their moderation evenly, and in many instances, aren’t even doing the basics—like offering clear notices and opportunities for appeal.  

Harr was one of many who expressed frustration online with the growing power of corporations. These voices shouldn’t be silenced into submission simply for drawing attention to the influence that they have. These are exactly the kinds of actions that Harr intended to highlight. If the Most Wanted CEO deck is ever released, it shouldn’t be a surprise for the CEOs of these platforms to find themselves in the lineup.  

Five Things to Know about the Supreme Court Case on Texas’ Age Verification Law, Free Speech Coalition v. Paxton

By Jason Kelley
January 13, 2025 at 16:02

The Supreme Court will hear arguments on Wednesday in a case that will determine whether states can violate adults’ First Amendment rights to access sexual content online by requiring them to verify their age.  

The case, Free Speech Coalition v. Paxton, could have far-reaching effects for every internet user’s free speech, anonymity, and privacy rights. The Supreme Court will decide whether a Texas law, HB 1181, is constitutional. HB 1181 requires a huge swath of websites—many that would likely not consider themselves adult content websites—to implement age verification.  

The plaintiff in this case is the Free Speech Coalition, the nonprofit, non-partisan trade association for the adult industry, and the defendant is Texas, represented by Ken Paxton, the state’s Attorney General. But this case is about much more than adult content or the adult content industry. State and federal lawmakers across the country have recently turned to ill-conceived, unconstitutional, and dangerous censorship legislation that would force websites to determine the identity of users before allowing them access to protected speech—in some cases, social media. If the Supreme Court were to side with Texas, it would open the door to a slew of state laws that frustrate internet users’ First Amendment rights and make them less secure online. Here's what you need to know about the upcoming arguments, and why it’s critical for the Supreme Court to get this case right.

1. Adult Content is Protected Speech, and It Violates the First Amendment for a State to Require Age-Verification to Access It.  

Under U.S. law, adult content is protected speech. Under the Constitution and a history of legal precedent, a legal restriction on access to protected speech must pass a very high bar. Requiring invasive age verification to access protected speech online simply does not pass that test. Here’s why: 

While other laws prohibit the sale of adult content to minors and result in age verification via a government ID or other proof-of-age in physical spaces, there are practical differences that make those disclosures less burdensome or even nonexistent compared to online prohibitions. Because of the sheer scale of the internet, regulations affecting online content sweep in millions of people who are obviously adults, not just those who visit physical bookstores or other places to access adult materials, and not just those who might perhaps be seventeen or under.  

First, under HB 1181, any website that Texas decides is composed of “one-third” or more of “sexual material harmful to minors” is forced to collect age-verifying personal information from all visitors—even to access the other two-thirds of material that is not adult content.  

Second, while there are a variety of methods for verifying age online, the Texas law generally forces adults to submit personal information over the internet to access entire websites, not just specific sexual materials. This is the most common method of online age verification today, and the law doesn't set out a specific method for websites to verify ages. But fifteen million adult U.S. citizens do not have a driver’s license, and over two million have no form of photo ID. Other methods of age verification, such as using online transactional data, would also exclude a large number of people who, for example, don’t have a mortgage.  

Less accurate methods, such as “age estimation,” which are usually based solely on an image or video of a person’s face, have their own privacy concerns. These methods are unable to determine with any accuracy whether a large number of people—for example, those over seventeen but under twenty-five years old—are the age they claim to be. These technologies are unlikely to satisfy the requirements of HB 1181 anyway. 

Third, even for people who are able to verify their age, the law still deters adult users from speaking and accessing lawful content by undermining anonymous internet browsing. Courts have consistently ruled that anonymity is an aspect of the freedom of speech protected by the First Amendment.  

Lastly, compliance with the law will require websites to retain this information, exposing their users to a variety of anonymity, privacy, and security risks not present when briefly flashing an ID card to a cashier.  

2. HB1181 Requires Every Adult in Texas to Verify Their Age to See Legally Protected Content, Creating a Privacy and Data Security Nightmare. 

Once information is shared to verify a user’s age, there’s no real way for a website visitor to be certain that the data they’re handing over is not going to be retained and used by the website, or further shared or even sold. Age verification systems are surveillance systems. Users must trust that the website they visit, or its third-party verification service, both of which could be fly-by-night companies with no published privacy standards, are following these rules. While many users will simply not access the content as a result—see the above point—others may accept the risk, at their peril.  

There is real risk that website employees will misuse the data, or that thieves will steal it. Data breaches affect nearly everyone in the U.S. Last year, age verification company AU10TIX encountered a breach, and there’s no reason to suspect this issue won’t grow if more websites are required, by law, to use age verification. The more information a website collects, the more chances there are for it to get into the hands of a marketing company, a bad actor, or someone who has filed a subpoena for it.  

The personal data disclosed via age verification is extremely sensitive, and unlike a password, often cannot easily (or ever) be changed. The law amplifies the security risks because it applies to such sensitive websites, potentially allowing a website or bad actor to link this personal information with the website at issue, or even with the specific types of adult content that a person views. This sets up a dangerous regime that would reasonably frighten many users away from viewing the site in the first place. Given the regularity of data breaches of less sensitive information, HB 1181 creates a perfect storm for data privacy. 

3. This Decision Could Have a Huge Impact on Other States with Similar Laws, as Well as Future Laws Requiring Online Age Verification.  

More than a third of U.S. states have introduced or enacted laws similar to Texas’ HB1181. This ruling could have major consequences for those laws and for the freedom of adults across the country to safely and anonymously access protected speech online, because the precedent the Court sets here could apply to both those and future laws. A bad decision in this case could be seen as a green light for federal lawmakers who are interested in a broader national age verification requirement on online pornography. 

It’s also not just adult content that’s at risk. A ruling from the Court on HB 1181 that allows Texas to violate the First Amendment here could make it harder to fight state and federal laws like the Kids Online Safety Act, which would force users to verify their ages before accessing social media. 

4. The Supreme Court Has Rightly Struck Down Similar Laws Before.  

In 1997, the Supreme Court struck down, in a 7-2 decision, a federal online age-verification law in Reno v. American Civil Liberties Union. In that landmark free speech case the court ruled that many elements of the Communications Decency Act violated the First Amendment, including part of the law making it a crime for anyone to engage in online speech that is "indecent" or "patently offensive" if the speech could be viewed by a minor. Like HB1181, that law would have resulted in many users being unable to view constitutionally protected speech, as many websites would have had to implement age verification, while others would have been forced to shut down.  

The CDA fight was one of the first big rallying points for online freedom, and EFF participated as both a plaintiff and as co-counsel. When the law first passed, thousands of websites turned their backgrounds black in protest. EFF launched its "blue ribbon" campaign and millions of websites around the world joined in support of free speech online. Even today, you can find the blue ribbon throughout the Web. 

Since that time, both the Supreme Court and many other federal courts have correctly recognized that online identification mandates—no matter what method they use or form they take—more significantly burden First Amendment rights than restrictions on in-person access to adult materials. Because courts have consistently held that similar age verification laws are unconstitutional, the precedent is clear. 

5. There is No Safe, Privacy-Protecting Age-Verification Technology. 

The same constitutional problems that the Supreme Court identified in Reno back in 1997 have only metastasized. Since then, courts have found that “[t]he risks of compelled digital verification are just as large, if not greater” than they were nearly 30 years ago. Think about it: no matter what method someone uses to verify your age, to do so accurately, they must know who you are, and they must retain that information in some way or verify it again and again. Different age verification methods don’t each fit somewhere on a spectrum of 'more safe' and 'less safe,' or 'more accurate' and 'less accurate.' Rather, they each fall on a spectrum of dangerous in one way to dangerous in a different way. For more information about the dangers of various methods, you can read our comments to the New York State Attorney General regarding the implementation of the SAFE for Kids Act. 

* * *

 

The Supreme Court Should Uphold Online First Amendment Rights and Strike Down This Unconstitutional Law 

Texas’ age verification law robs internet users of anonymity, exposes them to privacy and security risks, and blocks some adults entirely from accessing sexual content that’s protected under the First Amendment. Age-verification laws like this one reach into virtually every U.S. adult’s household. We look forward to the court striking down this unconstitutional law and once again affirming these important online free speech rights. 

For more information on this case, view our amicus brief filed with the Supreme Court. For a one-pager on the problems with age verification, see here. For more information on recent state laws dealing with age verification, see Fighting Online ID Mandates: 2024 In Review. For more information on how age verification laws are playing out around the world, see Global Age Verification Measures: 2024 in Review. 

 
