- A stuff-a-thon is happening at the FSF, Jan. 24 and 28
- Announcing the winner of the FSF 40th Anniversary Logo Contest
- We surpassed our year-end goal of $400,000 USD thanks to you!
- FSD meeting recap 2025-01-17
- Electronic Frontier Foundation
It's Copyright Week 2025: Join Us in the Fight for Better Copyright Law and Policy
We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation.
One of the unintended consequences of the internet is that more of us than ever are aware of how much of our lives is affected by copyright. People see their favorite YouTuber’s video get removed or re-edited due to copyright. People know they can’t tinker with or fix their devices. And people have realized, and are angry about, the fact that they don’t own much of the media they have paid for.
All of this is to say that copyright is no longer—if it ever was—a niche concern of certain industries. As corporations have pushed to expand copyright, they have made it everyone’s problem. And that means they don’t get to make the law in secret anymore.
Twelve years ago, a diverse coalition of Internet users, non-profit groups, and Internet companies defeated the Stop Online Piracy Act (SOPA) and the PROTECT IP Act (PIPA), bills that would have forced Internet companies to blacklist and block websites accused of hosting copyright infringing content. These were bills that would have made censorship very easy, all in the name of copyright protection.
As people raise more and more concerns about the major technology companies that control our online lives, it’s important not to fall into the trap of thinking that copyright will save us. As SOPA/PIPA reminds us: expanding copyright serves the gatekeepers, not the users.
We continue to fight for a version of copyright that does what it is supposed to. And so, every year, EFF and a number of diverse organizations participate in Copyright Week. Each year, we pick five copyright issues to highlight and advocate for a set of principles that should guide copyright law. This year’s issues are:
- Monday: Copyright Policy Should Be Made in the Open With Input From Everyone: Copyright is not a niche concern. It affects everyone’s experience online; therefore, laws and policy should be made in the open, with users’ concerns represented and taken into account.
- Tuesday: Copyright Enforcement as a Tool of Censorship: Freedom of expression is a fundamental human right essential to a functioning democracy. Copyright should encourage more speech, not act as a legal cudgel to silence it.
- Wednesday: Device and Digital Ownership: As the things we buy increasingly exist either in digital form or as devices with software, we also find ourselves subject to onerous licensing agreements and technological restrictions. If you buy something, you should be able to truly own it – meaning you can learn how it works, repair it, remove unwanted features, or tinker with it to make it work in a new way.
- Thursday: The Preservation and Sharing of Information and Culture: Copyright often blocks the preservation and sharing of information and culture, activities traditionally in the public interest. Copyright law and policy should encourage, not discourage, the saving and sharing of information.
- Friday: Free Expression and Fair Use: Copyright policy should encourage creativity, not hamper it. Fair use makes it possible for us to comment, criticize, and rework our common culture.
Every day this week, we’ll be sharing links to blog posts on these topics at https://www.eff.org/copyrightweek.
EFF to Michigan Supreme Court: Cell Phone Search Warrants Must Strictly Follow The Fourth Amendment’s Particularity and Probable Cause Requirements
Last week, EFF, along with the Criminal Defense Attorneys of Michigan, ACLU, and ACLU of Michigan, filed an amicus brief in People v. Carson in the Supreme Court of Michigan, challenging the constitutionality of the search warrant of Mr. Carson's smart phone.
In this case, Mr. Carson was arrested for stealing money from his neighbor's safe with a co-conspirator. A few months later, law enforcement applied for a search warrant for Mr. Carson's cell phone. The search warrant enumerated the claims that formed the basis for Mr. Carson's arrest, but the only mention of a cell phone was a law enforcement officer's general assertion that phones are communication devices often used in the commission of crimes. A warrant was issued which allowed the search of the entirety of Mr. Carson's smart phone, with no temporal or category limits on the data to be searched. Evidence found on the phone was then used to convict Mr. Carson.
On appeal, the Court of Appeals made a number of rulings in favor of Mr. Carson, including that evidence from the phone should not have been admitted because the search warrant lacked particularity and was unconstitutional. The government's appeal to the Michigan Supreme Court was accepted and we filed an amicus brief.
In our brief, we argued that the warrant was constitutionally deficient because there was no probable cause for searching the cell phone, and overbroad because it was insufficiently particular, failing to limit the search to a relevant time frame or to specific categories of information.
As the U.S. Supreme Court recognized in Riley v. California, electronic devices such as smart phones “differ in both a quantitative and a qualitative sense” from other objects. The devices contain immense storage capacities and are filled with sensitive and revealing data, including apps for everything from banking to therapy to religious practices to personal health. As the refrain goes, whatever the need, “there's an app for that.” This special nature of digital devices requires courts to review warrants to search digital devices with heightened attention to the Fourth Amendment’s probable cause and particularity requirements.
In this case, the warrant fell far short. In order for there to be probable cause to search an item, the warrant application must establish a “nexus” between the incident being investigated and the place to be searched. But the application in this case gave no reason why evidence of the theft would be found on Mr. Carson's phone. Instead, it only stated the allegations leading to Mr. Carson's arrest and boilerplate language about cell phone use among criminals. While those facts may establish probable cause to arrest Mr. Carson, they did not establish probable cause to search Mr. Carson's phone. If it were otherwise, the government would always be able to search the cell phone of someone they had probable cause to arrest, thereby eradicating the independent determination of whether probable cause exists to search something. Without a nexus between the crime and Mr. Carson’s phone, there was no probable cause.
Moreover, the warrant allowed for the search of “any and all data” contained on the cell phone, with no limits whatsoever. These "all content" warrants are the exact type of general warrant against which the Fourth Amendment and its state corollaries were meant to protect. Cell phone search warrants that have been upheld have contained temporal constraints and limits on the categories of data to be searched. Neither limitation, nor any other, appeared in the issued search warrant. The police should have used date limitations in applying for the search warrant, as they did in their warrant applications for other searches in the same investigation. Additionally, the warrant allowed the search of all the information on the phone, the vast majority of which did not, and could not, contain evidence related to the investigation.
As smart phones grow more capacious and take on more functions, it is imperative that courts construe warrants for searches of electronic devices narrowly, in keeping with the basic purpose of the Fourth Amendment: safeguarding the privacy and security of individuals against arbitrary invasions by government officials.
Face Scans to Estimate Our Age: Harmful and Creepy AF
Government must stop restricting website access with laws requiring age verification.
Some advocates of these censorship schemes argue we can nerd our way out of the many harms they cause to speech, equity, privacy, and infosec. Their silver bullet? “Age estimation” technology that scans our faces, applies an algorithm, and guesses how old we are – before letting us access online content and opportunities to communicate with others. But when confronted with age estimation face scans, many people will refrain from accessing restricted websites, even when they have a legal right to use them. Why?
Because quite simply, age estimation face scans are creepy AF – and harmful. First, age estimation is inaccurate and discriminatory. Second, its underlying technology can be used to try to estimate our other demographics, like ethnicity and gender, as well as our names. Third, law enforcement wants to use its underlying technology to guess our emotions and honesty, which in the hands of jumpy officers is likely to endanger innocent people. Fourth, age estimation face scans create privacy and infosec threats for the people scanned. In short, government should be restraining this hazardous technology, not normalizing it through age verification mandates.
Error and discrimination
Age estimation is often inaccurate. It’s in the name: age estimation. That means these face scans will regularly mistake adults for adolescents, and wrongfully deny them access to restricted websites. By the way, it will also sometimes mistake adolescents for adults.
Age estimation also is discriminatory. Studies show face scans are more likely to err in estimating the age of people of color and women. Which means that as a tool of age verification, these face scans will have an unfair disparate impact.
Estimating our identity and demographics
Age estimation is a tech sibling of face identification and the estimation of other demographics. To users, all face scans look the same and we shouldn’t allow them to become a normal part of the internet. When we submit to a face scan to estimate our age, a less scrupulous company could flip a switch and use the same face scan, plus a slightly different algorithm, to guess our name or other demographics.
Some companies are in both the age estimation business and the face identification business.
Other developers claim they can use age estimation’s underlying technology – application of an algorithm to a face scan – to estimate our gender (like these vendors) and our ethnicity (like these vendors). But these scans are likely to misidentify the many people whose faces do not conform to gender and ethnic averages (such as transgender people). Worse, powerful institutions can harm people with this technology. China uses face scans to identify ethnic Uyghurs. Transphobic legislators may try to use them to enforce bathroom bans. For this reason, advocates have sought to prohibit gender estimation face scans.
Estimating our emotions and honesty
Developers claim they can use age estimation’s underlying technology to estimate our emotions (like these vendors). But this will always have a high error rate, because people express emotions differently, based on culture, temperament, and neurodivergence. Worse, researchers are trying to use face scans to estimate deception, and even criminality. Mind-reading technologies have a long and dubious history, from phrenology to polygraphs.
Unfortunately, powerful institutions may believe the hype. In 2008, the U.S. Department of Homeland Security disclosed its efforts to use “image analysis” of “facial features” (among other biometrics) to identify “malintent” of people being screened. Other policing agencies are using algorithms to analyze emotions and deception.
When police technology erroneously identifies a civilian as a threat, many officers overreact. For example, ALPR errors repeatedly prompt police officers to draw guns on innocent drivers. Some government agencies now advise drivers to keep their hands on the steering wheel during a traffic stop, to reduce the risk that the driver’s movements will frighten the officer. Soon such agencies may be advising drivers not to roll their eyes, because the officer’s smart glasses could misinterpret that facial expression as anger or deception.
Privacy and infosec
The government should not be forcing tech companies to collect even more personal data from users. Companies already collect too much data and have proved they cannot be trusted to protect it.
Age verification face scans create new threats to our privacy and information security. These systems collect a scan of our face and guess our age. A poorly designed system might store this personal data, and even correlate it to the online content that we look at. In the hands of an adversary, and cross-referenced to other readily available information, this information can expose intimate details about us. Our faces are unique, immutable, and constantly on display – creating risk of biometric tracking across innumerable virtual and IRL contexts. Last year, hackers breached an age verification company (among many other companies).
Of course, there are better and worse ways to design a technology. Some privacy and infosec risks might be reduced, for example, by conducting face scans on-device instead of in-cloud, or by deleting everything immediately after a visitor passes the age test. But lower-risk does not mean zero-risk. Clever hackers might find ways to breach even well-designed systems, companies might suddenly change their systems to make them less privacy-protective (perhaps at the urging of government), and employees and contractors might abuse their special access. Numerous states are mandating age verification with varying rules for how to do so; numerous websites are subject to these mandates; and numerous vendors are selling face scanning services. Inevitably, many of these websites and services will fail to maintain the most privacy-preserving systems, because of carelessness or greed.
Also, face scanning algorithms are often trained on data that was collected using questionable privacy methods—whether from users whose consent was murky, or from non-users. The government data sets used to test biometric algorithms sometimes come from prisoners and immigrants.
Most significant here, when most people arrive at most age verification checkpoints, they will have no idea whether the face scan system has minimized the privacy and infosec risks. So many visitors will turn away, and forego the content and conversations available on restricted websites.
Next steps
Algorithmic face scans are dangerous, whether used to estimate our age, our other demographics, our name, our emotions, or our honesty. Thus, EFF supports a ban on government use of this technology, and strict regulation (including consent and minimization) for corporate use.
At a minimum, government must stop coercing websites into using face scans, as a means of complying with censorious age verification mandates. Age estimation does not eliminate the privacy and security issues that plague all age verification systems. And these face scans cause many people to refrain from accessing websites they have a legal right to access. Because face scans are creepy AF.
Second Circuit Rejects Record Labels’ Attempt to Rewrite the DMCA
In a major win for creator communities, the U.S. Court of Appeals for the Second Circuit has once again handed video streaming site Vimeo a solid win in its long-running legal battle with Capitol Records and a host of other record labels.
The labels claimed that Vimeo was liable for copyright infringement on its site, and specifically that it couldn’t rely on the Digital Millennium Copyright Act’s safe harbor because Vimeo employees “interacted” with user-uploaded videos that included infringing recordings of musical performances owned by the labels. Those interactions included commenting on, liking, promoting, demoting, or posting them elsewhere on the site. The record labels contended that these videos contained popular songs, and that it would’ve been obvious to Vimeo employees that the music was unlicensed.
But as EFF explained in an amicus brief filed in support of Vimeo, even rightsholders themselves mistakenly demand takedowns. Labels often request takedowns of music they don’t own or control, and even request takedowns of their own content. They also regularly target fair uses. When rightsholders themselves cannot accurately identify infringement, courts cannot presume that a service provider can do so, much less a blanket presumption as to hundreds of videos.
In an earlier ruling, the court held that the labels had to show that it would be apparent to a person without specialized knowledge of copyright law that the particular use of the music was unlawful, or prove that the Vimeo workers had expertise in copyright law. The labels argued that Vimeo’s own efforts to educate its employees and users about copyright, among other circumstantial evidence, were enough to meet that burden. The Second Circuit disagreed, finding that:
Vimeo’s exercise of prudence in instructing employees not to use copyrighted music and advising users that use of copyrighted music “generally (but not always) constitutes copyright infringement” did not educate its employees about how to distinguish between infringing uses and fair use.
The Second Circuit also rejected another equally dangerous argument: that Vimeo lost safe harbor protection by receiving a “financial benefit” from infringing activity, such as user-uploaded videos, that the platform had a “right and ability to control.” The labels contended that any website that exercises editorial judgment—for example, by removing, curating, or organizing content—would necessarily have the “right and ability to control” that content. If they were correct, ordinary content moderation would put a platform at risk of crushing copyright liability.
As the Second Circuit put it, the labels’ argument:
would substantially undermine what has generally been understood to be one of Congress’s major objectives in passing the DMCA: encouraging entrepreneurs to establish websites that can offer the public rapid, efficient, and inexpensive means of communication by shielding service providers from liability for infringements placed on the sites by users.
Fortunately, the Second Circuit’s decisions in this case help preserve the safe harbors and the expression and innovation that they make possible. But it should not have taken well over a decade of litigation—and likely several millions of dollars in legal fees—to get there.
Speaking Freely: Lina Attalah
This interview has been edited for length and clarity.
Jillian York: Welcome, let’s start here. What does free speech or free expression mean to you personally?
Lina Attalah: Being able to think without too many calculations and without fear.
York: What are the qualities that make you passionate about the work that you do, and also about telling stories and utilizing your free expression in that way?
Well, it ties in with your first question. Free speech is basically being able to express oneself without fear and without too many calculations. These are things that are not granted, especially in the context I work in. I know that it does not exist in any absolute way anywhere, and increasingly so now, but even more so in our context, and historically it hasn't existed in our context. So this has also drawn me to try to unearth what is not being said, what is not being known, what is not being shared. I guess the passion came from that lack more than anything else. Perhaps, if I lived in a democracy, maybe I wouldn't have wanted to be a journalist.
York: I’d like to ask you about Syria, since you just traveled there. I know that you're familiar with the context there in terms of censorship and the Internet in particular. What do you see in terms of people's hopes for more expression in Syria in the future?
I think even though we share an environment where freedom of expression has been historically stifled, there is an exception to Syria when it comes to the kind of controls there have been on people's ability to express, let alone to organize and mobilize. I think there's also a state of exception when it comes to the price that had to be paid in Syrian prisons for acts of free expression and free speech. This is extremely exceptional to the fabric of Syrian society. So going there and seeing that this condition was gone, after so much struggle, after so much loss, is a situation that is extremely palpable. From the few days I spent there, what was clear to me is that everybody is pretty much uncertain about the future, but there is an undoubted relief that this condition is gone for now, this fear. It literally felt like it's a lower sky, sort of repressing people's chests somehow, and it's just gone. This burden was just gone. It's not all flowery, it's not all rosy. Everybody is uncertain. But the very fact that this fear is gone is very palpable and cannot be taken away from the experience we're living through now in Syria.
York: I love that. Thank you. Okay, let’s go to Egypt a little bit. What can you tell us about the situation for free speech in the context of Egypt? We're coming up on fourteen years since the uprising in 2011 and eleven years since Sisi came to power. And I mean, I guess, contextualize that for our readers who don't know what's happened in Egypt in the past decade or so.
For a quick summary, the genealogy goes as follows. There was a very tight margin through which we managed to operate as journalists, as activists, as people trying to sort of enlarge the space through which we can express ourselves on matters of public concern in the last years of Mubarak's rule. And this is the time that coincided with the opening up of the internet—back when the internet was also more of a public space, before the overt privatization that we experience in that virtual space as well. Then the Egyptian revolution happened in 2011 and that space further exploded in expression and diversity of voices and people speaking to different issues that had previously been reserved to the hideouts of activist circles.
Then you had a complete reversal of all of this with the takeover of a military-appointed government. Subsequently, with the election of President Sisi in 2014, it became clear that it was a government that believed that the media's role—this is just one example focusing on the media—is to basically support the government, in a very sort of 1960s Nasserite understanding that there is a national project, that he's leading it, and we are his soldiers. We should basically endorse, support, not criticize, not weaken, basically not say anything differently from him. And this, of course, transcends the media. Everybody should be a soldier in a way, and the price of doing otherwise has been hefty, in the sense that a lot of people ended up under prosecution, serving prolonged jail sentences, or even spending prolonged times in pre-trial detention without ever being prosecuted.
So you have this total reversal from an unfolding moment of free speech that sort of exploded for a couple of years starting in 2011, and then everything closing up, closing up, closing up to the point where that margin that I started off talking about at the beginning is almost no longer even there. And, on a personal note, I always ask myself if the margin has really tightened or if one just becomes more scared as they grow older? But the margin has indeed tightened quite extensively. Personally, I'm aging and getting more scared. But another objective indicator is that almost all of my friends and comrades who have been with me on this path are no longer around because they are either in prison or in exile or have just opted out from the whole political apparatus. So that says that there isn't the kind of margin through which we managed to maneuver before the revolution.
York: Earlier you touched on the privatization of online spaces. Having watched the way tech companies have behaved over the past decade, what do you think that these companies fail to understand about the Egyptian and the regional context?
It goes back to how we understand this ecosystem, politically, from the onset. I am someone who thinks of governments and markets, or governments and corporations, as the main actors in a market, as dialectically interchangeable. Let's say they are here to control, they are here to make gains, and we are here to contest them even though we need them. We need the state, we need the companies. But there is no reason on earth to believe that either of them want our best. I'm putting governments and companies in the same bucket, because I think it's important not to fall for the liberals’ way of thinking that the state has certain politics, but the companies are freer or are just after gains. I do think of them as formidable political edifices that are self-serving. For us, the political game is always how to preserve the space that we've created for ourselves, using some of the leverage from these edifices without being pushed over and over.
For me, this is a very broad political thing, and I think about them as a duality, because, operating as a media organization in a country like Egypt, I have to deal with the dual repression of those two edifices. To give you a very concrete example, in 2017 the Egyptian government blocked my website, Mada Masr, alongside a few other media websites, shortly before going on and blocking hundreds of websites. All independent media websites, without exception, have been blocked in Egypt alongside sites through which you can download VPN services in order to be able to also access these blocked websites. And that's done by the government, right? So one of the things we started doing when this happened in 2017 is we started saying, “Okay, we should invest in Meta. Or back then it was still Facebook, so we should invest in Facebook more. Because the government monitors you.” And this goes back to the relation, the interchangeability of states and companies. The government would block Mada Masr, but would never block Facebook, because it's bad for business. They care about keeping Facebook up and running.
It's not Syria back in the time of Assad. It's not Tunisia back in the time of Ben Ali. They still want some degree of openness, so they would keep social media open. So we let go of our poetic triumphalism when we said, we will try to invest in more personalized, communitarian dissemination mechanisms when building our audiences, and we'll just go on Facebook. Because what option do we have? But then what happens is another track of censorship, in a different way, that still blocks my content from reaching its audiences through all the algorithmic developments that happened—and basically the fact that, and this is not specific to Egypt, they just want to think of themselves as the publishers. They started off by treating us as the publishers and themselves as the platforms, but at this point, they want to be everything. And what would we expect from a big, profitable company, besides them wanting to be everything?
York: I don't disagree at this point. I think that there was a point in time where I would have disagreed. When you work closely with companies, it’s easy to fall into the trap of believing that change is possible because you know good people who work there, people who really are trying their best. But those people are rarely capable of shifting the direction of the company, and are often the ones to leave first.
Let’s shift to talking about our friend, Egyptian political prisoner Alaa Abd El-Fattah. You mentioned the impact that the past 11 years, really the past 14 years, have had on people in Egypt. And, of course, there are many political prisoners, but one of the prisoners that that EFF readers will be familiar with is Alaa. You recently accepted the English PEN Award on his behalf. Can you tell us more about what he has meant to you?
One way to start talking about Alaa is that I really hope that 2025 is the year when he will get released. It's just ridiculous to keep making that one single demand over and over without seeing any change there. So Alaa has been imprisoned on account of his free speech, his attempt to speak freely. And he attempted to speak, you know, extremely freely, in the sense that a lot of his expression is his witty sort of engagement with surrounding political events that came through his personal accounts on social media, in addition to the writing that he's been doing for different media platforms, including ours and yours and so on. And in that sense, he's so unmediated, he’s just free. A truly free spot. He has become the icon of the Egyptian revolution, the symbol of revolutionary spirit who, you know, is fighting for people's right to free speech and, more broadly, their dignity. I guess I'm trying to make a comment, a very basic comment, on abolition and, basically, the lack of utility of prisons, and specifically political prisons. Because the idea is to mute that voice. But what has happened throughout all these years of Alaa’s incarceration is that his voice has only gotten amplified by this very lack, by this very absence, right? I always lament the fact that I do not know if I would have otherwise become very close to Alaa. Perhaps if he was free and up and running, we wouldn't have gotten this close. I have no idea. Maybe he would have just gone on working on his tech projects and me on my journalism projects. Maybe we would have tried to intersect, and we had tried to intersect, but maybe we would have gone on without interacting much. But then his imprisonment created this tethering where I learned so much through his absence.
Somehow I've become much more who I am in terms of the journalism, in terms of the thinking, in terms of the politics, through his absence, through that lack. So there is something that gets created with this aggressive muting of a voice that should be taken note of. That being said, I don't mean to romanticize absence, because he needs to be free. You know it's, it's becoming ridiculous at this point. His incarceration is becoming ridiculous at this point.
York: I guess I also have to ask, what would your message be to the UK Government at this point?
Again, it's a test case for what so-called democratic governments can still do to their citizens. There needs to be something more forceful when it comes to demanding Alaa’s release, especially in view of the condition of his mother, who has been on a hunger strike for over 105 days as of the day of this interview. So I can't accept that this cannot be a forceful demand, or this has to go through other considerations pertaining to more abstract bilateral relations and whatnot. You know, just free the man. He's your citizen. You know, this is what's left of what it means to be a democratic government.
York: Who is your free speech hero?
It’s Alaa. He always warns us of over-symbolizing him or the others. Because he always says, when we over-symbolize heroes, they become abstract. And we stop being able to concretize the fights and the resistance. We stop being able to see that this is a universal battle where there are so many others fighting it, albeit a lot more invisibly, at the same time. Alaa, in his person and in what he represents, reminds me of so much courage. A lot of times I am ashamed of my fear. I'm ashamed of not wanting to pay the price, and I still don't want to pay the price. I don't want to be in prison. But at the same time, I look up at someone like Alaa, fearlessly saying what he wants to say, and I’m just always in awe of him.
The Impact of Age Verification Measures Goes Beyond Porn Sites
As age verification bills pass across the world under the guise of “keeping children safe online,” governments are increasingly giving themselves the authority to decide what topics are deemed “safe” for young people to access, and forcing online services to remove and block anything that may be deemed “unsafe.” This growing legislative trend has sparked significant concerns and numerous First Amendment challenges, including a case currently pending before the Supreme Court, Free Speech Coalition v. Paxton. The Court is now considering how government-mandated age verification impacts adults’ free speech rights online.
These challenges keep arising because this isn’t just about safety—it’s censorship. Age verification laws target a slew of broadly-defined topics. Some block access to websites that contain some "sexual material harmful to minors," but define the term so loosely that “sexual material” could encompass anything from sex education to R-rated movies; others simply list a variety of vaguely-defined harms. In either instance, lawmakers and regulators could use the laws to target LGBTQ+ content online.
This risk is especially clear given what we already know about platform content policies. These policies, which claim to "protect children" or keep sites “family-friendly,” often label LGBTQ+ content as “adult” or “harmful,” while similar content that doesn't involve the LGBTQ+ community is left untouched. Sometimes, this impact—the censorship of LGBTQ+ content—is implicit, and only becomes clear when the policies (and/or laws) are actually implemented. Other times, this intended impact is explicitly spelled out in the text of the policies and bills.
In either case, it is critical to recognize that age verification bills could block far more than just pornography.
Take Oklahoma’s bill, SB 1959, for example. This state age verification law aims to prevent young people from accessing content that is “harmful to minors” and went into effect last November 1st. It incorporates definitions from another Oklahoma statute, Statute 21-1040, which defines material “harmful to minors” as any description or exhibition, in whatever form, of nudity and “sexual conduct.” That same statute then defines “sexual conduct” as including acts of “homosexuality.” Explicitly, then, SB 1959 requires a site to verify someone’s age before showing them content about homosexuality—a vague enough term that it could potentially apply to content from organizations like GLAAD and Planned Parenthood.
This vague definition will undoubtedly cause platforms to over-censor content relating to LGBTQ+ life, health, or rights out of fear of liability. Separately, bills such as SB 1959 might also cause users to self-police their own speech for the same reasons, fearing de-platforming. The law leaves platforms unable to determine precisely which content fits its definition, so they will err on the side of removal; that over-censorship could easily sweep in content like this very blog post.
Beyond Individual States: Kids Online Safety Act (KOSA)
Laws like the proposed federal Kids Online Safety Act (KOSA) make government officials the arbiters of what young people can see online and will lead platforms to implement invasive age verification measures to avoid the threat of liability. If KOSA passes, people who make online content about sex education and LGBTQ+ identity and health will be persecuted and shut down as well. All it will take is one member of the Federal Trade Commission seeking to score political points, or a state attorney general seeking to ensure re-election, to start going after the online speech they don’t like. These speech burdens will also affect regular users as platforms mass-delete content in the name of avoiding lawsuits and investigations under KOSA.
Senator Marsha Blackburn, co-sponsor of KOSA, has expressed a priority in “protecting minor children from the transgender [sic] in this culture and that influence.” KOSA, to Senator Blackburn, would address this problem by limiting content in the places “where children are being indoctrinated.” Yet these efforts all fail to protect children from the actual harms of the online world, and instead deny vulnerable young people a crucial avenue of communication and access to information.
LGBTQ+ Platform Censorship by Design
While the censorship of LGBTQ+ content through age verification laws can be represented as an “unintended consequence” in certain instances, in other cases barring access to LGBTQ+ content is part of the platforms' design. One of the more pervasive examples is Meta suppressing LGBTQ+ content across its platforms under the guise of protecting younger users from "sexually suggestive content.” According to a recent report, Meta has been hiding posts that reference LGBTQ+ hashtags like #lesbian, #bisexual, #gay, #trans, and #queer for users who turned the sensitive content filter on, as well as showing users a blank page when they attempt to search for LGBTQ+ terms. This leaves teenage users with no choice in what content they see, since the sensitive content filter is turned on for them by default.
This policy change came on the back of a protracted effort by Meta to allegedly protect teens online. In January last year, the corporation announced a new set of “sensitive content” restrictions across its platforms (Instagram, Facebook, and Threads), including hiding content which the platform no longer considered age-appropriate. This was followed later by the introduction of Instagram For Teens to further limit the content users under the age of 18 could see. This feature sets minors’ accounts to the most restrictive levels by default, and teens under 16 can only reverse those settings through a parent or guardian.
Meta has apparently now reversed the restrictions on LGBTQ+ content after calling the issue a “mistake.” This is not good enough. In allowing pro-LGBTQ+ content to be integrated into the sensitive content filter, Meta has aligned itself with those that are actively facilitating a violent and harmful removal of rights for LGBTQ+ people—all under the guise of keeping children and teens safe. Not only is this a deeply flawed strategy, it harms everyone who wishes to express themselves on the internet. These policies are written and enforced discriminatorily and at the expense of transgender, gender-fluid, and nonbinary speakers. They also often convince or require platforms to implement tools that, using the laws' vague and subjective definitions, end up blocking access to LGBTQ+ and reproductive health content.
The censorship of this content prevents individuals from being able to engage with such material online to explore their identities, advocate for broader societal acceptance and against hate, build communities, and discover new interests. With corporations like Meta intervening to decide how people create, speak, and connect, a crucial form of engagement for all kinds of users has been removed and the voices of people with less power are regularly shut down.
And at a time when LGBTQ+ individuals are already under vast pressure from violent homophobic threats offline, these online restrictions have an amplified impact.
LGBTQ+ youth are at a higher risk of experiencing bullying and rejection, often turning to online spaces as outlets for self-expression. For those without family support or who face the threat of physical or emotional abuse at home because of their sexual orientation or gender identity, the internet becomes an essential resource. A report from the Gay, Lesbian & Straight Education Network (GLSEN) highlights that LGBTQ+ youth engage with the internet at higher rates than their peers, often showing greater levels of civic engagement online compared to offline. Access to digital communities and resources is critical for LGBTQ+ youth, and restricting access to them poses unique dangers.
Call to Action: Digital Rights Are LGBTQ+ Rights
These laws have the potential to harm us all—including the children they are designed to protect.
As more U.S. states and countries pass age verification laws, it is crucial to recognize the broader implications these measures have on privacy, free speech, and access to information. This conglomeration of laws poses significant challenges for users trying to maintain anonymity online and access critical content—whether it’s LGBTQ+ resources, reproductive health information, or otherwise. These policies threaten the very freedoms they purport to protect, stifling conversations about identity, health, and social justice, and creating an environment of fear and repression.
The fight against these laws is not just about defending online spaces; it’s about safeguarding the fundamental rights of all individuals to express themselves and access life-saving information.
We need to stand up against these age verification laws—not only to protect users’ free expression rights, but also to safeguard the free flow of information that is vital to a democratic society. Reach out to your state and federal legislators, raise awareness about the consequences of these policies, and support organizations like the LGBT Tech, ACLU, the Woodhull Freedom Foundation, and others that are fighting for digital rights of young people alongside EFF.
The fight for the safety and rights of LGBTQ+ youth is not just a fight for visibility—it’s a fight for their very survival. Now more than ever, it’s essential for allies, advocates, and marginalized communities to push back against these dangerous laws and ensure that the internet remains a space where all voices can be heard, free from discrimination and censorship.
- Electronic Frontier Foundation
- Texas Is Enforcing Its State Data Privacy Law. So Should Other States.
Texas Is Enforcing Its State Data Privacy Law. So Should Other States.
States need to have and use data privacy laws to bring privacy violations to light and hold companies accountable for them. So, we were glad to see that the Texas Attorney General’s Office has filed its first lawsuit under the Texas Data Privacy and Security Act (TDPSA) to take the Allstate Corporation to task for sharing driver location and other driving data without telling customers.
In its complaint, the attorney general’s office alleges that Allstate and a number of its subsidiaries (some of which go by the name “Arity”) “conspired to secretly collect and sell ‘trillions of miles’ of consumers’ ‘driving behavior’ data from mobile devices, in-car devices, and vehicles.” (The defendant companies are also accused of violating Texas’ data broker law and its insurance law prohibiting unfair and deceptive practices.)
On the privacy front, the complaint says the defendant companies created a software development kit (SDK), which is basically a set of tools that developers can use to integrate functions into an app. In this case, the Texas Attorney General says that Allstate and Arity specifically designed this toolkit to scrape location data. They then allegedly paid third parties, such as the app Life360, to embed it in their apps. The complaint also alleges that Allstate and Arity chose to promote their SDK to third-party apps that already required the use of location data, specifically so that people wouldn’t be alerted to the additional collection.
That’s a dirty trick. Data that you can pull from cars is often highly sensitive, as we have raised repeatedly. Everyone should know when that information's being collected and where it's going.
The Texas Attorney General’s office estimates that 45 million Americans, including those in Texas, unwittingly downloaded this software that collected their information, including location information, without notice or consent. This violates Texas’ privacy law, which went into effect in July 2024 and requires companies to provide a reasonably accessible privacy notice, give conspicuous notice if they sell or process sensitive data for targeted advertising, and obtain consumer consent to process sensitive data.
This is a low bar, and the companies named in this complaint still allegedly failed to clear it. As law firm Husch Blackwell pointed out in its write-up of the case, all Arity had to do, for example, to fulfill one of the notice obligations under the TDPSA was to put up a line on their website saying, “NOTICE: We may sell your sensitive personal data.”
In fact, Texas’s privacy law does not meet the minimum of what we’d consider a strong privacy law. For example, only the Texas Attorney General can file a lawsuit under the state's privacy law. We advocate instead for provisions that let everyone, not only state attorneys general, file suit to ensure that all companies respect our privacy.
Texas’ privacy law also has a “right to cure”—essentially a 30-day period in which a company can “fix” a privacy violation and duck a Texas enforcement action. EFF opposes rights to cure, because they essentially give companies a “get-out-of-jail-free” card when caught violating privacy law. In this case, Arity was notified and given the chance to show it had cured the violation. It just didn’t.
According to the complaint, Arity apparently failed to take even basic steps that would have spared it from this enforcement action. Other companies violating our privacy may be more adept at getting out of trouble, but they should be found and taken to task too. That’s why we advocate for strong privacy laws that do even more to protect consumers.
Nineteen states now have some version of a data privacy law. Enforcement has been a bit slower. California has brought a few enforcement actions since its privacy law went into effect in 2020; Texas and New Hampshire are two states that have created dedicated data privacy units in their Attorney General offices, signaling they’re staffing up to enforce their laws. More state regulators should follow suit and use the privacy laws on their books. And more state legislators should enact and strengthen their laws to make sure companies are truly respecting our privacy.
- Electronic Frontier Foundation
- The FTC’s Ban on GM and OnStar Selling Driver Data Is a Good First Step
The FTC’s Ban on GM and OnStar Selling Driver Data Is a Good First Step
The Federal Trade Commission announced a proposed settlement agreeing that General Motors and its subsidiary, OnStar, will be banned from selling geolocation and driver behavior data to credit agencies for five years. That’s good news for G.M. owners. Every car owner and driver deserves to be protected.
Last year, a New York Times investigation highlighted how G.M. was sharing information with insurance companies without clear knowledge from the driver. This resulted in people’s insurance premiums increasing, sometimes without them realizing why that was happening. This data sharing problem was common amongst many carmakers, not just G.M., but figuring out what your car was sharing was often a Sisyphean task, somehow managing to be more complicated than trying to learn similar details about apps or websites.
The FTC complaint zeroed in on how G.M. enrolled people in its OnStar connected vehicle service with a misleading process. OnStar was initially designed to help drivers in an emergency, but over time the service collected and shared more data that had nothing to do with emergency services. The result was people signing up for the service without realizing they were agreeing to share their location and driver behavior data with third parties, including insurance companies and consumer reporting agencies. The FTC also alleged that G.M. didn’t disclose who the data was shared with (insurance companies) and for what purposes (to deny or set rates). Asking car owners to choose between safety and privacy is a nasty tactic, and one that deserves to be stopped.
For the next five years, the settlement bans G.M. and OnStar from these sorts of privacy-invasive practices, making it so they cannot share driver data or geolocation to consumer reporting agencies, which gather and sell consumers’ credit and other information. They must also obtain opt-in consent to collect data, allow consumers to obtain and delete their data, and give car owners an option to disable the collection of location data and driving information.
These are all important, solid steps, and these sorts of rules should apply to all carmakers. With privacy-related options buried away in websites, apps, and infotainment systems, it is currently far too difficult to see what sort of data your car collects, and it is not always possible to opt out of data collection or sharing. In reality, no consumer knowingly agrees to let their carmaker sell their driving data to other companies.
All carmakers should be forced to protect their customers’ privacy, and they should have to do so for longer than just five years. The best way to ensure that would be through comprehensive consumer data privacy legislation with strong data minimization rules and requirements for clear, opt-in consent. With a strong privacy law, all carmakers—not just G.M.—would only have authority to collect, maintain, use, and disclose our data to provide a service that we asked for.
- Electronic Frontier Foundation
- VICTORY! Federal Court (Finally) Rules Backdoor Searches of 702 Data Unconstitutional
VICTORY! Federal Court (Finally) Rules Backdoor Searches of 702 Data Unconstitutional
Better late than never: last night a federal district court held that backdoor searches of databases full of Americans’ private communications collected under Section 702 ordinarily require a warrant. The landmark ruling comes in a criminal case, United States v. Hasbajrami, after more than a decade of litigation, and over four years since the Second Circuit Court of Appeals found that backdoor searches constitute “separate Fourth Amendment events” and directed the district court to determine whether a warrant was required. Now, that has been officially decreed.
In the intervening years, Congress has reauthorized Section 702 multiple times, each time ignoring overwhelming evidence that the FBI and the intelligence community abuse their access to databases of warrantlessly collected messages and other data. The Foreign Intelligence Surveillance Court (FISC), which Congress assigned with the primary role of judicial oversight of Section 702, has also repeatedly dismissed arguments that the backdoor searches violate the Fourth Amendment, giving the intelligence community endless do-overs despite its repeated transgressions of even lax safeguards on these searches.
This decision sheds light on the government’s liberal use of what is essentially a “finders keepers” rule regarding your communication data. As a legal authority, FISA Section 702 allows the intelligence community to collect a massive amount of communications data from overseas in the name of “national security.” But, in cases where one side of that conversation is a person on U.S. soil, that data is still collected and retained in large databases searchable by federal law enforcement. Because the U.S. side of these communications is already collected and just sitting there, the government has claimed that law enforcement agencies do not need a warrant to sift through them. EFF argued for over a decade that this is unconstitutional, and now a federal court agrees with us.
Hasbajrami involves a U.S. resident who was arrested at New York JFK airport in 2011 on his way to Pakistan and charged with providing material support to terrorists. Only after his original conviction did the government explain that its case was premised in part on emails between Mr. Hasbajrami and an unnamed foreigner associated with terrorist groups, emails collected warrantlessly using Section 702 programs, placed in a database, then searched, again without a warrant, using terms related to Mr. Hasbajrami himself.
The district court found that regardless of whether the government can lawfully warrantlessly collect communications between foreigners and Americans using Section 702, it cannot ordinarily rely on a “foreign intelligence exception” to the Fourth Amendment’s warrant clause when searching these communications, as is the FBI’s routine practice. And, even if such an exception did apply, the court found that the intrusion on privacy caused by reading our most sensitive communications rendered these searches “unreasonable” under the meaning of the Fourth Amendment. In 2021 alone, the FBI conducted 3.4 million warrantless searches of U.S. persons’ Section 702 data.
In light of this ruling, we ask Congress to uphold its responsibility to protect civil rights and civil liberties by refusing to renew Section 702 absent a number of necessary reforms, including an official warrant requirement for querying U.S. persons’ data and increased transparency. On April 15, 2026, Section 702 is set to expire. We expect any lawmaker worthy of that title to listen to what this federal court is saying and create a legislative warrant requirement so that the intelligence community does not continue to trample on the constitutionally protected rights to private communications. More immediately, the FISC should amend its rules for backdoor searches and require the FBI to seek a warrant before conducting them.
- Electronic Frontier Foundation
- Protecting “Free Speech” Can’t Just Be About Targeting Political Opponents
Protecting “Free Speech” Can’t Just Be About Targeting Political Opponents
The White House executive order “restoring freedom of speech and ending federal censorship,” published Monday, misses the mark on truly protecting Americans’ First Amendment rights.
The order calls for an investigation of efforts under the Biden administration to “moderate, deplatform, or otherwise suppress speech,” especially on social media companies. It goes on to order an Attorney General investigation of any government activities “over the last 4 years” that are inconsistent with the First Amendment. The order states in part:
Under the guise of combatting “misinformation,” “disinformation,” and “malinformation,” the Federal Government infringed on the constitutionally protected speech rights of American citizens across the United States in a manner that advanced the Government’s preferred narrative about significant matters of public debate.
But noticeably absent from the Executive Order is any commitment to government transparency. In the Santa Clara Principles, a guideline for online content moderation authored by EFF and other civil society groups, we state that “governments and other state actors should themselves report their involvement in content moderation decisions, including data on demands or requests for content to be actioned or an account suspended, broken down by the legal basis for the request.” This Executive Order doesn’t come close to embracing such a principle.
The order is also misguided in its time-limited targeting. Informal government efforts to persuade, cajole, or strong-arm private media platforms, also called “jawboning,” have been an aspect of every U.S. government since at least 2011. Any good-faith inquiry into such pressures would not be limited to a single administration. It’s misleading to suggest the previous administration was the only, or even the primary, source of such pressures. This time limit reeks of political vindictiveness, not a true effort to limit improper government actions.
To be clear, a look back at past government involvement in online content moderation is a good thing. But an honest inquiry would not be time-limited to the actions of a political opponent, nor limited to only past actions. The public would also be better served by a report that had a clear deadline, and a requirement that the results be made public, rather than sent only to the President’s office. Finally, the investigation would be better placed with an inspector general, not the U.S. Attorney General, which implies possible prosecutions.
As we have written before, the First Amendment forbids the government from coercing private entities to censor speech. This principle has countered efforts to pressure intermediaries like bookstores and credit card processors to limit others’ speech. But not every communication about user speech is unconstitutional; some are beneficial, like when platforms reach out to government agencies as authoritative sources of information.
For anyone who may have been excited to see a first-day executive order truly focused on free expression, President Trump’s Jan. 20 order is a disappointment, at best.
- Electronic Frontier Foundation
- EFF Sends Transition Memo on Digital Policy Priorities to New Administration and Congress
EFF Sends Transition Memo on Digital Policy Priorities to New Administration and Congress
SAN FRANCISCO—Standing up for technology users in 2025 and beyond requires careful thinking about government surveillance, consumer privacy, artificial intelligence, and encryption, among other topics. To help incoming federal policymakers think through these key issues, the Electronic Frontier Foundation (EFF) has shared a transition memo with the Trump Administration and the 119th U.S. Congress.
“We routinely work with officials and staff in the White House and Congress on a wide range of policies that will affect digital rights in the coming years,” said EFF Director of Federal Affairs India McKinney. “As the oldest, largest, and most trusted nonpartisan digital rights organization, EFF’s litigators, technologists, and activists have a depth of knowledge and experience that remains unmatched. This memo focuses on how Congress and the Trump Administration can prioritize helping ordinary Americans protect their digital freedom.”
The 64-page memo covers topics such as surveillance, including warrantless digital dragnets, national security surveillance, face recognition technology, border surveillance, and reproductive justice; encryption and cybersecurity; consumer privacy, including vehicle data, age verification, and digital identification; artificial intelligence, including algorithmic decision-making, transparency, and copyright concerns; broadband access and net neutrality; Section 230’s protections of free speech online; competition; copyright; the Computer Fraud and Abuse Act; and patents.
EFF also shared a transition memo with the incoming Biden Administration and Congress in 2020.
“The new Congress and the Trump Administration have an opportunity to make the internet a much better place for users. This memo should serve as a blueprint for how they can do so,” said EFF Executive Director Cindy Cohn. “We’ll be here when this administration ends and the next one takes over, and we’ll continue to push. Our nonpartisan approach to tech policy works because we always work for technology users.”
For the 2025 transition memo: https://eff.org/document/eff-transition-memo-trump-administration-2025
For the 2020 transition memo: https://www.eff.org/document/eff-transition-memo-incoming-biden-administration-november-2020
Everyone is asking the wrong questions about TikTok
Whoever owns ByteDance, the fundamental problem remains the same: users never really know what data is collected about them, and they don't know how the software manipulates that data when deciding what they are shown next. The problem can only be solved if users can learn, verify, and understand how that software works.
VPNs Are Not a Solution to Age Verification Laws
VPNs are having a moment.
On January 1st, Florida joined 18 other states in implementing an age verification law that burdens Floridians' access to sites that host adult content, including pornography websites like Pornhub. In protest of these laws, Pornhub blocked access to users in Florida. Residents in the “Free State of Florida” have now lost access to the world's most popular adult entertainment website and 16th-most-visited site of any kind in the world.
At the same time, Google Trends data showed a spike in searches for VPN access across Florida, presumably because users are trying to access the site via VPNs.
How Did This Happen?
Nearly two years ago, Louisiana enacted a law that started a wave across neighboring states in the U.S. South: Act 440. This wave of legislation has significantly impacted how residents in these states access “adult” or “sexual” content online. Florida, Tennessee, and South Carolina have now joined the nearly half of U.S. states where, under restrictive laws touted as child protection measures, users either cannot access many major adult websites at all or must first verify their age. These laws introduce surveillance systems that threaten everyone’s rights to speech and privacy, and cause more harm than they seek to combat.
Despite experts from across civil society flagging concerns about the impact of these laws on both adults’ and children’s rights, politicians in Florida decided to push ahead and enact one of the most contentious age verification mandates earlier this year in HB 3.
HB 3 is a part of the state’s ongoing efforts to regulate online content, and requires websites that host “adult material” to implement a method of verifying the age of users before they can access the site. Specifically, it mandates that adult websites require users to submit a form of government-issued identification, or use a third-party age verification system approved by the state. The law also bans anyone under 14 from accessing or creating a social media account. Websites that fail to comply with the law's age verification requirements face civil penalties and could be subject to lawsuits from the state.
Pornhub, to its credit, understands these risks. In response to the implementation of age verification laws in various states, the company has taken a firm stand by blocking access to users in regions where such laws are enforced. Before the law’s implementation date, Florida users were greeted with this message: “You will lose access to PornHub in 12 days. Did you know that your government wants you to give your driver’s license before you can access PORNHUB?”
Pornhub then restricted access to Florida residents on January 1st, 2025—right when HB 3 was set to take effect. The platform expressed concerns that the age verification requirements would compromise user privacy, pointing out that these laws would force platforms to collect sensitive personal data, such as government-issued identification, which could lead to potential breaches and misuse of that information. In a statement to local news, Aylo, Pornhub’s parent company, said that they have “publicly supported age verification for years” but they believe this law puts users’ privacy at risk:
Unfortunately, the way many jurisdictions worldwide, including Florida, have chosen to implement age verification is ineffective, haphazard, and dangerous. Any regulations that require hundreds of thousands of adult sites to collect significant amounts of highly sensitive personal information is putting user safety in jeopardy. Moreover, as experience has demonstrated, unless properly enforced, users will simply access non-compliant sites or find other methods of evading these laws.
This is not speculation. We have seen how this scenario plays out in the United States. In Louisiana last year, Pornhub was one of the few sites to comply with the new law. Since then, our traffic in Louisiana dropped approximately 80 percent. These people did not stop looking for porn. They just migrated to darker corners of the internet that don't ask users to verify age, that don't follow the law, that don't take user safety seriously, and that often don't even moderate content. In practice, the laws have just made the internet more dangerous for adults and children.
The company’s response reflects broader concerns over privacy and digital rights, as many fear that these measures are a step toward increased government surveillance online.
How Do VPNs Play a Role?
Within this context, it is no surprise that Google searches for VPNs in Florida have skyrocketed. But as more states and countries pass age verification laws, it is crucial to recognize the broader implications these measures have on privacy, free speech, and access to information. While VPNs may be able to disguise the source of your internet activity, they are not foolproof—nor should they be necessary to access legally protected speech.
A VPN routes all your network traffic through an "encrypted tunnel" between your devices and the VPN server. The traffic then leaves the VPN to its ultimate destination, masking your original IP address. From a website's point of view, it appears your location is wherever the VPN server is. A VPN should not be seen as a tool for anonymity. While it can protect your location from some companies, a disreputable VPN service might deliberately collect personal information or other valuable data. There are many other ways companies may track you while you use a VPN, including GPS, web cookies, mobile ad IDs, tracking pixels, or fingerprinting.
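The IP-masking behavior described above can be sketched in a few lines. This is a conceptual illustration only, not a real VPN client; all addresses and function names are hypothetical:

```python
# Conceptual sketch: why a website sees the VPN server's address
# instead of yours. All IPs and names here are illustrative.

def website_sees(source_ip: str) -> str:
    """What a geolocating website observes about an incoming request."""
    return f"request from {source_ip}"

def direct_request(your_ip: str) -> str:
    # Without a VPN, packets arrive carrying your own public IP address.
    return website_sees(your_ip)

def vpn_request(your_ip: str, vpn_server_ip: str) -> str:
    # With a VPN, traffic exits the encrypted tunnel at the VPN server,
    # so the website sees the server's IP and infers its location.
    # (Your real IP never appears in the request the site receives.)
    return website_sees(vpn_server_ip)

print(direct_request("203.0.113.7"))                 # request from 203.0.113.7
print(vpn_request("203.0.113.7", "198.51.100.42"))   # request from 198.51.100.42
```

Note that this only models the IP address; as the paragraph above explains, cookies, ad IDs, and fingerprinting can still identify you even when your IP is masked.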
With varying mandates across different regions, it will become increasingly difficult for VPNs to circumvent age verification requirements. Each state or country may adopt different enforcement methods and different types of identification checks, such as government-issued IDs, third-party verification systems, or biometric data. As a result, VPN providers will struggle to keep up with these constantly changing laws and ensure users can bypass the restrictions, especially as more sophisticated detection systems are introduced to identify and block VPN traffic.
The ever-growing patchwork of age verification laws poses significant challenges for users trying to maintain anonymity online, and it has the potential to harm us all—including the young people these laws are designed to protect.
What Can You Do?
If you are working to protect your privacy or want to learn more about VPNs, EFF provides a comprehensive guide on using VPNs and protecting digital privacy—a valuable resource for anyone looking to use these tools.
No one should have to hand over their driver’s license just to access free websites. EFF has long fought against mandatory age verification laws, from the U.S. to Canada and Australia. And in a context of weakening rights for already vulnerable communities online, politicians around the globe must acknowledge these shortcomings and explore less invasive approaches to protect all people from online harms.
Dozens of bills currently being debated by state and federal lawmakers could result in dangerous age verification mandates. We will resist them. We must stand up against these types of laws, not just for the sake of free expression, but to protect the free flow of information that is essential to a free society. Contact your state and federal legislators, raise awareness about the unintended consequences of these laws, and support organizations that are fighting for digital rights and privacy protections alongside EFF, such as the ACLU, Woodhull Freedom Foundation, and others.
Mad at Meta? Don't Let Them Collect and Monetize Your Personal Data
If you’re fed up with Meta right now, you’re not alone. Google searches for deleting Facebook and Instagram spiked last week after Meta announced its latest policy changes. These changes, seemingly designed to appease the incoming Trump administration, included loosening Meta’s hate speech policy to allow for the targeting of LGBTQ+ people and immigrants.
If these changes—or Meta’s long history of anti-competitive, censorial, and invasive practices—make you want to cut ties with the company, it’s sadly not as simple as deleting your Facebook account or spending less time on Instagram. Meta tracks your activity across millions of websites and apps, regardless of whether you use its platforms, and it profits from that data through targeted ads. If you want to limit Meta’s ability to collect and profit from your personal data, here’s what you need to know.
Meta’s Business Model Relies on Your Personal Data
You might think of Meta as a social media company, but its primary business is surveillance advertising. Meta’s business model relies on collecting as much information as possible about people in order to sell highly-targeted ads. That’s why Meta is one of the main companies tracking you across the internet—monitoring your activity far beyond its own platforms. When Apple introduced changes to make tracking harder on iPhones, Meta lost billions in revenue, demonstrating just how valuable your personal data is to its business.
How Meta Harvests Your Personal Data
Meta’s tracking tools are embedded in millions of websites and apps, so you can’t escape the company’s surveillance just by avoiding or deleting Facebook and Instagram. Meta’s tracking pixel, found on 30% of the world’s most popular websites, monitors people’s behavior across the web and can expose sensitive information, including financial and mental health data. A 2022 investigation by The Markup found that a third of the top U.S. hospitals had sent sensitive patient information to Meta through its tracking pixel.
Meta’s surveillance isn’t limited to your online activity. The company also encourages businesses to send it data about your offline purchases and interactions. Even deleting your Facebook and Instagram accounts won’t stop Meta from harvesting your personal data. Meta admitted in 2018 to collecting information about non-users, including their contact details and browsing history.
Take These Steps to Limit How Meta Profits From Your Personal Data
Although Meta’s surveillance systems are pervasive, there are ways to limit how Meta collects and uses your personal data.
Update Your Meta Account Settings
Open your Instagram or Facebook app and navigate to the Accounts Center page.
- You’ll find a link to Accounts Center on the Settings pages of both apps. If you have trouble finding Accounts Center, check Meta’s help pages for Facebook and Instagram.
- If you use a web browser instead of Meta’s apps, visit accountscenter.facebook.com or accountscenter.instagram.com.
If your Facebook and Instagram accounts are linked on your Accounts Center page, you only have to update the following settings once. If not, you’ll have to update them separately for Facebook and Instagram. Once you find your way to the Accounts Center, the directions below are the same for both platforms.
Meta makes it harder than it should be to find and update these settings. The following steps are accurate at the time of publication, but Meta often changes their settings and adds additional steps. The exact language below may not match what Meta displays in your region, but you should have a setting controlling each of the following permissions.
Once you’re on the “Accounts Center” page, make the following changes:
1) Stop Meta from targeting ads based on data it collects about you on other apps and websites:
Click the Ad preferences option under Accounts Center, then select the Manage Info tab (this tab may be called Ad settings depending on your location). Click the Activity information from ad partners option, then Review Setting. Select the option for No, don’t make my ads more relevant by using this information and click the “Confirm” button when prompted.
2) Stop Meta from using your data (from Facebook and Instagram) to help advertisers target you on other apps. Meta’s ad network connects advertisers with other apps through privacy-invasive ad auctions—generating more money and data for Meta in the process.
Back on the Ad preferences page, click the Manage info tab again (called Ad settings depending on your location), then select the Ads shown outside of Meta setting, choose Not allowed, and click the “X” button to close the pop-up.
Depending on your location, this setting will be called Ads from ad partners on the Manage info tab.
3) Disconnect the data that other companies share with Meta about you from your account:
From the Accounts Center screen, click the Your information and permissions option, followed by Your activity off Meta technologies, then Manage future activity. On this screen, choose the option to Disconnect future activity, followed by the Continue button, then confirm one more time by clicking the Disconnect future activity button. Note: This may take up to 48 hours to take effect.
Note: This will also clear previous activity, which might log you out of apps and websites you’ve signed into through Facebook.
While these settings limit how Meta uses your data, they won’t necessarily stop the company from collecting it and potentially using it for other purposes.
Install Privacy Badger to Block Meta’s Trackers
Privacy Badger is a free browser extension by EFF that blocks trackers—like Meta’s pixel—from loading on websites you visit. It also replaces embedded Facebook posts, Like buttons, and Share buttons with click-to-activate placeholders, blocking another way that Meta tracks you. The next version of Privacy Badger (coming next week) will extend this protection to embedded Instagram and Threads posts, which also send your data to Meta.
Visit privacybadger.org to install Privacy Badger on your web browser. Currently, Firefox on Android is the only mobile browser that supports Privacy Badger.
Limit Meta’s Tracking on Your Phone
Take these additional steps on your mobile device:
- Disable your phone’s advertising ID to make it harder for Meta to track what you do across apps. Follow EFF’s instructions for doing this on your iPhone or Android device.
- Turn off location access for Meta’s apps. Meta doesn’t need to know where you are all the time to function, and you can safely disable location access without affecting how the Facebook and Instagram apps work. Review this setting using EFF’s guides for your iPhone or Android device.
The Real Solution: Strong Privacy Legislation
Stopping a company you distrust from profiting off your personal data shouldn’t require tinkering with hidden settings and installing browser extensions. Instead, your data should be private by default. That’s why we need strong federal privacy legislation that puts you—not Meta—in control of your information.
Without strong privacy legislation, Meta will keep finding ways to bypass your privacy protections and monetize your personal data. Privacy is about more than safeguarding your sensitive information—it’s about having the power to prevent companies like Meta from exploiting your personal data for profit.
EFF Statement on U.S. Supreme Court's Decision to Uphold TikTok Ban
We are deeply disappointed that the Court failed to require the strict First Amendment scrutiny required in a case like this, which would’ve led to the inescapable conclusion that the government's desire to prevent potential future harm had to be rejected as infringing millions of Americans’ constitutionally protected free speech. We are disappointed to see the Court sweep past the undisputed content-based justification for the law – to control what speech Americans see and share with each other – and rule only based on the shaky data privacy concerns.
The United States’ foreign foes easily can steal, scrape, or buy Americans’ data by countless other means. The ban or forced sale of one social media app will do virtually nothing to protect Americans' data privacy – only comprehensive consumer privacy legislation can achieve that goal. Shutting down communications platforms or forcing their reorganization based on concerns of foreign propaganda and anti-national manipulation is an eminently anti-democratic tactic, one that the US has previously condemned globally.