VPNs Are Not a Solution to Age Verification Laws

VPNs are having a moment. 

On January 1st, Florida joined 18 other states in implementing an age verification law that burdens Floridians’ access to sites that host adult content, including pornography websites like Pornhub. In protest of these laws, Pornhub blocked access to users in Florida. Residents of the “Free State of Florida” have now lost access to the world’s most popular adult entertainment website and the 16th-most-visited site of any kind in the world.

At the same time, Google Trends data showed a spike in searches for VPN access across Florida, presumably because users are trying to access the site via VPNs.

How Did This Happen?

Nearly two years ago, Louisiana enacted a law that started a wave across neighboring states in the U.S. South: Act 440. This wave of legislation has significantly impacted how residents in these states access “adult” or “sexual” content online. Florida, Tennessee, and South Carolina are now among the nearly half of U.S. states where, thanks to restrictive laws touted as child protection measures, users either can no longer access many major adult websites at all or must verify their age to do so. These laws introduce surveillance systems that threaten everyone’s rights to speech and privacy, and create more harm than they seek to combat.

Despite experts from across civil society flagging concerns about the impact of these laws on both adults’ and children’s rights, politicians in Florida decided to push ahead and enact HB 3, one of the most contentious age verification mandates.

HB 3 is a part of the state’s ongoing efforts to regulate online content, and requires websites that host “adult material” to implement a method of verifying the age of users before they can access the site. Specifically, it mandates that adult websites require users to submit a form of government-issued identification, or use a third-party age verification system approved by the state. The law also bans anyone under 14 from accessing or creating a social media account. Websites that fail to comply with the law's age verification requirements face civil penalties and could be subject to lawsuits from the state. 

Pornhub, to its credit, understands these risks. In response to the implementation of age verification laws in various states, the company has taken a firm stand by blocking access to users in regions where such laws are enforced. Before the law’s implementation date, Florida users were greeted with this message: “You will lose access to PornHub in 12 days. Did you know that your government wants you to give your driver’s license before you can access PORNHUB?”

Pornhub then restricted access to Florida residents on January 1st, 2025—right when HB 3 was set to take effect. The platform expressed concerns that the age verification requirements would compromise user privacy, pointing out that these laws would force platforms to collect sensitive personal data, such as government-issued identification, which could lead to potential breaches and misuse of that information. In a statement to local news, Aylo, Pornhub’s parent company, said that they have “publicly supported age verification for years” but they believe this law puts users’ privacy at risk:

Unfortunately, the way many jurisdictions worldwide, including Florida, have chosen to implement age verification is ineffective, haphazard, and dangerous. Any regulations that require hundreds of thousands of adult sites to collect significant amounts of highly sensitive personal information is putting user safety in jeopardy. Moreover, as experience has demonstrated, unless properly enforced, users will simply access non-compliant sites or find other methods of evading these laws.

This is not speculation. We have seen how this scenario plays out in the United States. In Louisiana last year, Pornhub was one of the few sites to comply with the new law. Since then, our traffic in Louisiana dropped approximately 80 percent. These people did not stop looking for porn. They just migrated to darker corners of the internet that don't ask users to verify age, that don't follow the law, that don't take user safety seriously, and that often don't even moderate content. In practice, the laws have just made the internet more dangerous for adults and children.

The company’s response reflects broader concerns over privacy and digital rights, as many fear that these measures are a step toward increased government surveillance online. 

How Do VPNs Play a Role? 

Within this context, it is no surprise that Google searches for VPNs in Florida have skyrocketed. But as more states and countries pass age verification laws, it is crucial to recognize the broader implications these measures have on privacy, free speech, and access to information. While VPNs may be able to disguise the source of your internet activity, they are not foolproof—nor should they be necessary to access legally protected speech. 

A VPN routes all your network traffic through an "encrypted tunnel" between your devices and the VPN server. The traffic then leaves the VPN to its ultimate destination, masking your original IP address. From a website's point of view, it appears your location is wherever the VPN server is. A VPN should not be seen as a tool for anonymity. While it can protect your location from some companies, a disreputable VPN service might deliberately collect personal information or other valuable data. There are many other ways companies may track you while you use a VPN, including GPS, web cookies, mobile ad IDs, tracking pixels, or fingerprinting.
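To make the routing mechanics above concrete, here is a minimal sketch (not from the original post) using Python’s requests library: it asks a public IP-echo service, api.ipify.org, what address it sees, first over a direct connection and then through a proxy endpoint of the kind a VPN or proxy provider would supply. The proxy address below is a placeholder, not a real service.

    # Minimal sketch of IP masking: what a remote site sees with and without a VPN/proxy.
    # The proxy endpoint below is a placeholder; substitute your provider's details.
    import requests

    IP_ECHO = "https://api.ipify.org"  # public service that replies with the caller's IP address

    def visible_ip(proxies=None):
        """Return the IP address the remote server observes for this request."""
        return requests.get(IP_ECHO, proxies=proxies, timeout=10).text

    if __name__ == "__main__":
        print("Direct connection, the site sees:", visible_ip())

        # Routing the same request through a (hypothetical) VPN/proxy endpoint:
        # the site now sees the proxy's address instead of yours, but the proxy
        # operator sees everything, which is why trust in the provider matters.
        placeholder_proxy = {"https": "http://vpn.example.net:8080"}
        print("Via the proxy, the site sees:", visible_ip(proxies=placeholder_proxy))

Note that this masks only the IP address: the cookies, ad IDs, and fingerprinting mentioned above still identify you to the sites you visit.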

With varying mandates across different regions, it will become increasingly difficult for VPNs to effectively circumvent these age verification requirements because each state or country may have different methods of enforcement and different types of identification checks, such as government-issued IDs, third-party verification systems, or biometric data. As a result, VPN providers will struggle to keep up with these constantly changing laws and ensure users can bypass the restrictions, especially as more sophisticated detection systems are introduced to identify and block VPN traffic. 

The ever-growing conglomeration of age verification laws poses significant challenges for users trying to maintain anonymity online, and has the potential to harm us all—including the young people they are designed to protect. 

What Can You Do?

If you are working to protect your privacy or want to learn more about VPNs, EFF provides a comprehensive guide on using VPNs and protecting digital privacy, a valuable resource for anyone looking to use these tools.

No one should have to hand over their driver’s license just to access free websites. EFF has long fought against mandatory age verification laws, from the U.S. to Canada and Australia. And in a context of weakening rights for already vulnerable communities online, politicians around the globe must acknowledge these shortcomings and explore less invasive approaches to protect all people from online harms.

Dozens of bills currently being debated by state and federal lawmakers could result in dangerous age verification mandates. We will resist them. We must stand up against these types of laws, not just for the sake of free expression, but to protect the free flow of information that is essential to a free society. Contact your state and federal legislators, raise awareness about the unintended consequences of these laws, and support organizations that are fighting for digital rights and privacy protections alongside EFF, such as the ACLU, Woodhull Freedom Foundation, and others.

Mad at Meta? Don't Let Them Collect and Monetize Your Personal Data

By: Lena Cohen
January 17, 2025 at 10:59

If you’re fed up with Meta right now, you’re not alone. Google searches for deleting Facebook and Instagram spiked last week after Meta announced its latest policy changes. These changes, seemingly designed to appease the incoming Trump administration, included loosening Meta’s hate speech policy to allow for the targeting of LGBTQ+ people and immigrants. 

If these changes—or Meta’s long history of anti-competitive, censorial, and invasive practices—make you want to cut ties with the company, it’s sadly not as simple as deleting your Facebook account or spending less time on Instagram. Meta tracks your activity across millions of websites and apps, regardless of whether you use its platforms, and it profits from that data through targeted ads. If you want to limit Meta’s ability to collect and profit from your personal data, here’s what you need to know.

Meta’s Business Model Relies on Your Personal Data

You might think of Meta as a social media company, but its primary business is surveillance advertising. Meta’s business model relies on collecting as much information as possible about people in order to sell highly-targeted ads. That’s why Meta is one of the main companies tracking you across the internet—monitoring your activity far beyond its own platforms. When Apple introduced changes to make tracking harder on iPhones, Meta lost billions in revenue, demonstrating just how valuable your personal data is to its business. 

How Meta Harvests Your Personal Data

Meta’s tracking tools are embedded in millions of websites and apps, so you can’t escape the company’s surveillance just by avoiding or deleting Facebook and Instagram. Meta’s tracking pixel, found on 30% of the world’s most popular websites, monitors people’s behavior across the web and can expose sensitive information, including financial and mental health data. A 2022 investigation by The Markup found that a third of the top U.S. hospitals had sent sensitive patient information to Meta through its tracking pixel. 
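To illustrate the mechanism in the paragraph above, here is a rough, hypothetical sketch (not Meta’s actual pixel code) of the kind of HTTP request a third-party tracking pixel fires when a page loads. The tracker domain, parameter names, and cookie value are invented for illustration; the point is what such a request carries: the page you are on, what you did there, and an identifier that ties the visit to a profile of you.

    # Rough illustration of the data a third-party tracking pixel transmits on page load.
    # The tracker endpoint, parameter names, and cookie value are hypothetical.
    import requests

    def build_pixel_request(page_url: str, event: str, tracker_cookie: str):
        """Prepare (but do not send) the request a pixel embedded on page_url would fire."""
        req = requests.Request(
            method="GET",
            url="https://tracker.example/collect",        # the tracker's own domain
            params={"event": event, "page": page_url},    # what happened, and on which page
            headers={
                "Referer": page_url,                      # browsers also reveal the page here
                "Cookie": f"uid={tracker_cookie}",        # links this visit to an existing profile
            },
        )
        return req.prepare()

    if __name__ == "__main__":
        prepared = build_pixel_request(
            page_url="https://hospital.example/appointments/oncology",
            event="PageView",
            tracker_cookie="abc123",
        )
        # Everything below leaves the visitor's browser for the tracker's servers.
        print(prepared.url)
        print(prepared.headers)

Because the page URL itself often encodes sensitive context, such as the appointment page in this made-up example, a pixel embedded on a hospital or mental-health site can reveal far more than the fact that an image loaded.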

Meta’s surveillance isn’t limited to your online activity. The company also encourages businesses to send it data about your offline purchases and interactions. Even deleting your Facebook and Instagram accounts won’t stop Meta from harvesting your personal data: in 2018, Meta admitted to collecting information about non-users, including their contact details and browsing history.

Take These Steps to Limit How Meta Profits From Your Personal Data

Although Meta’s surveillance systems are pervasive, there are ways to limit how Meta collects and uses your personal data. 

Update Your Meta Account Settings

Open your Instagram or Facebook app and navigate to the Accounts Center page. 

A screenshot of the Meta Accounts Center page.

If your Facebook and Instagram accounts are linked on your Accounts Center page, you only have to update the following settings once. If not, you’ll have to update them separately for Facebook and Instagram. Once you find your way to the Accounts Center, the directions below are the same for both platforms.

Meta makes it harder than it should be to find and update these settings. The following steps are accurate at the time of publication, but Meta often changes their settings and adds additional steps. The exact language below may not match what Meta displays in your region, but you should have a setting controlling each of the following permissions.

Once you’re on the “Accounts Center” page, make the following changes:

1) Stop Meta from targeting ads based on data it collects about you on other apps and websites: 

Click the Ad preferences option under Accounts Center, then select the Manage Info tab (this tab may be called Ad settings depending on your location). Click the Activity information from ad partners option, then Review Setting. Select the option for No, don’t make my ads more relevant by using this information and click the “Confirm” button when prompted.

A screenshot of the "Activity information from ad partners" setting with the "No" option selected

2) Stop Meta from using your data (from Facebook and Instagram) to help advertisers target you on other apps. Meta’s ad network connects advertisers with other apps through privacy-invasive ad auctions—generating more money and data for Meta in the process.

Back on the Ad preferences page, click the Manage info tab again (called Ad settings depending on your location), then click Activity information from ad partners. Select the Ads shown outside of Meta option, choose Not allowed, and then click the “X” button to close the pop-up.

Depending on your location, this setting will be called Ads from ad partners on the Manage info tab.

A screenshot of the "Ads outside Meta" setting with the "Not allowed" option selected

3) Disconnect the data that other companies share with Meta about you from your account:

From the Accounts Center screen, click the Your information and permissions option, followed by Your activity off Meta technologies, then Manage future activity. On this screen, choose the option to Disconnect future activity, followed by the Continue button, then confirm one more time by clicking the Disconnect future activity button. Note: This may take up to 48 hours to take effect.

Note: This will also clear previous activity, which might log you out of apps and websites you’ve signed into through Facebook.

A screenshot of the "Manage future activity" setting with the "Disconnect future activity" option selected

While these settings limit how Meta uses your data, they won’t necessarily stop the company from collecting it and potentially using it for other purposes. 

Install Privacy Badger to Block Meta’s Trackers

Privacy Badger is a free browser extension by EFF that blocks trackers—like Meta’s pixel—from loading on websites you visit. It also replaces embedded Facebook posts, Like buttons, and Share buttons with click-to-activate placeholders, blocking another way that Meta tracks you. The next version of Privacy Badger (coming next week) will extend this protection to embedded Instagram and Threads posts, which also send your data to Meta.

Visit privacybadger.org to install Privacy Badger on your web browser. Currently, Firefox on Android is the only mobile browser that supports Privacy Badger. 

Limit Meta’s Tracking on Your Phone

Take these additional steps on your mobile device:

  • Disable your phone’s advertising ID to make it harder for Meta to track what you do across apps. Follow EFF’s instructions for doing this on your iPhone or Android device.
  • Turn off location access for Meta’s apps. Meta doesn’t need to know where you are all the time to function, and you can safely disable location access without affecting how the Facebook and Instagram apps work. Review this setting using EFF’s guides for your iPhone or Android device.

The Real Solution: Strong Privacy Legislation

Stopping a company you distrust from profiting off your personal data shouldn’t require tinkering with hidden settings and installing browser extensions. Instead, your data should be private by default. That’s why we need strong federal privacy legislation that puts you—not Meta—in control of your information. 

Without strong privacy legislation, Meta will keep finding ways to bypass your privacy protections and monetize your personal data. Privacy is about more than safeguarding your sensitive information—it’s about having the power to prevent companies like Meta from exploiting your personal data for profit.

EFF Statement on U.S. Supreme Court's Decision to Uphold TikTok Ban

By: David Greene
January 17, 2025 at 10:49

We are deeply disappointed that the Court failed to require the strict First Amendment scrutiny required in a case like this, which would’ve led to the inescapable conclusion that the government's desire to prevent potential future harm had to be rejected as infringing millions of Americans’ constitutionally protected free speech. We are disappointed to see the Court sweep past the undisputed content-based justification for the law – to control what speech Americans see and share with each other – and rule only based on the shaky data privacy concerns.

The United States’ foreign foes easily can steal, scrape, or buy Americans’ data by countless other means. The ban or forced sale of one social media app will do virtually nothing to protect Americans' data privacy – only comprehensive consumer privacy legislation can achieve that goal. Shutting down communications platforms or forcing their reorganization based on concerns of foreign propaganda and anti-national manipulation is an eminently anti-democratic tactic, one that the US has previously condemned globally.

Systemic Risk Reporting: A System in Crisis?

January 16, 2025 at 12:45

The first batch of reports assessing the so-called “systemic risks” posed by the largest online platforms are in. These reports are a result of the Digital Services Act (DSA), Europe’s new law regulating platforms like Google, Meta, Amazon, or X, and have been eagerly awaited by civil society groups across the globe. In their reports, companies are supposed to assess whether their services contribute to a wide range of barely defined risks. These go beyond the dissemination of illegal content and include vaguely defined categories such as negative effects on the integrity of elections, impediments to the exercise of fundamental rights, or undermining of civic discourse. We have previously warned that the subjectivity of these categories invites a politicization of the DSA.  

In view of a new DSA investigation into TikTok’s potential role in Romania’s presidential election, we take a look at the reports and the framework that has produced them to understand their value and limitations.  

A Short DSA Explainer  

The DSA covers a lot of different services. It regulates online markets like Amazon or Shein, social networks like Instagram and TikTok, search engines like Google and Bing, and even app stores like those run by Apple and Google. Different obligations apply to different services, depending on their type and size. Generally, the lower the degree of control a service provider has over content shared via its product, the fewer obligations it needs to comply with.   

For example, hosting services like cloud computing providers must offer points of contact for government authorities and users and meet basic transparency reporting requirements. Online platforms, meaning any service that makes user-generated content available to the public, must meet additional requirements, like providing users with detailed information about content moderation decisions and the right to appeal. They must also comply with additional transparency obligations.  

While the DSA is a necessary update to the EU’s liability rules and improves users’ rights, we have plenty of concerns with the route that it takes:  

  • We worry about the powers it gives to authorities to request user data and the obligation on providers to proactively share user data with law enforcement.  
  • We are also concerned about the ways in which trusted flaggers could lead to the over-removal of speech, and  
  • We caution against the misuse of the DSA’s mechanism to deal with emergencies like a pandemic. 

Introducing Systemic Risks 

The most stringent DSA obligations apply to large online platforms and search engines that have more than 45 million users in the EU. The European Commission has so far designated more than 20 services as such “very large online platforms” (VLOPs) or “very large online search engines” (VLOSEs). These companies, which include X, TikTok, Amazon, Google Search, Maps and Play, YouTube, and several porn platforms, must proactively assess and mitigate “systemic risks” related to the design, operation, and use of their services. The DSA’s non-exhaustive list of risks includes four broad categories: 1) the dissemination of illegal content, 2) negative effects on the exercise of fundamental rights, 3) threats to elections, civic discourse, and public safety, and 4) negative effects and consequences in relation to gender-based violence, the protection of minors, public health, and a person’s physical and mental wellbeing.  

The DSA does not provide much guidance on how VLOPs and VLOSEs are supposed to analyze whether they contribute to this somewhat arbitrary-seeming list of risks. Nor does the law offer clear definitions of how these risks should be understood, leading to concerns that they could be interpreted widely and lead to the extensive removal of lawful but awful content. There is equally little guidance on risk mitigation, as the DSA merely names a few measures that platforms can choose to employ. Some of these recommendations are incredibly broad, such as adapting the design, features, or functioning of a service, or “reinforcing internal processes.” Others, like introducing age verification measures, are much more specific but come with a host of issues and can undermine fundamental rights themselves.   

Risk Management Through the Lens of the Romanian Election 

Per the DSA, platforms must annually publish reports detailing how they have analyzed and managed risks. These reports are complemented by separate reports compiled by external auditors, tasked with assessing platforms’ compliance with their obligations to manage risks and other obligations put forward by the DSA.  

To better understand the merits and limitations of these reports, let’s examine the example of the recent Romanian election. In late November 2024, an ultranationalist and pro-Russian candidate, Calin Georgescu, unexpectedly won the first round of Romania’s presidential election. After reports by local civil society groups accusing TikTok of amplifying pro-Georgescu content, and a declassified brief published by Romania’s intelligence services that alleges cyberattacks and influence operations, the Romanian constitutional court annulled the results of the election. Shortly after, the European Commission opened formal proceedings against TikTok for insufficiently managing systemic risks related to the integrity of the Romanian election. Specifically, the Commission’s investigation focuses on “TikTok's recommender systems, notably the risks linked to the coordinated inauthentic manipulation or automated exploitation of the service and TikTok's policies on political advertisements and paid-for political content.” 

TikTok’s own risk assessment report dedicates eight pages to potential negative effects on elections and civic discourse. Curiously, TikTok’s definition of this particular category of risk focuses on the spread of election misinformation but makes no mention of coordinated inauthentic behavior or the manipulation of its recommender systems. This illustrates the wide margin platforms have to define systemic risks and implement their own mitigation strategies. Leaving it up to platforms to define relevant risks not only makes comparing the approaches taken by different companies impossible, it can also lead to overly broad or narrow approaches, potentially undermining fundamental rights or running counter to the obligation to effectively deal with risks, as in this example. It should also be noted that mis- and disinformation are terms not defined by international human rights law and are therefore not well suited as a robust basis on which freedom of expression may be restricted.  

In its report, TikTok describes the measures taken to mitigate potential risks to elections and civic discourse. This overview broadly outlines some election-specific interventions, like labels for content that has not been fact-checked but might contain misinformation, and describes TikTok’s policies, like its ban on political ads, which is notoriously easy to circumvent. It gives no indication that the robustness and utility of the measures employed are documented or have been tested, nor any benchmarks for when TikTok considers a risk successfully mitigated. It does not, for example, contain figures on how many pieces of content receive certain labels, or how these labels influence users’ interactions with the content in question.  

Similarly, the report does not contain any data regarding the efficacy of TikTok’s enforcement of its political ads ban. TikTok’s “methodology” for risk assessments, also included in the report, does not help in answering any of these questions, either. And looking at the report compiled by the external auditor, in this case KPMG, we are once again left disappointed: KPMG concluded that it was impossible to assess TikTok’s systemic risk compliance because of two earlier, pending investigations by the European Commission due to potential non-compliance with the systemic risk mitigation obligations. 

Limitations of the DSA’s Risk Governance Approach 

What then, is the value of the risk and audit reports, published roughly a year after their finalization? The answer may be very little.  

As explained above, companies have a lot of flexibility in how to assess and deal with risks. On the one hand, some degree of flexibility is necessary: every VLOP and VLOSE differs significantly in terms of product logics, policies, user base and design choices. On the other hand, the high degree of flexibility in determining what exactly a systemic risk is can lead to significant inconsistencies and render risk analysis unreliable. It also allows regulators to put forward their own definitions, thereby potentially expanding risk categories as they see fit to deal with emerging or politically salient issues.  

Rather than making sense of diverse and possibly conflicting definitions of risks, companies and regulators should put forward joint benchmarks, and include civil society experts in the process. 

Speaking of benchmarks: there is a critical lack of standardized processes, assessment methodologies, and reporting templates. Most assessment reports contain very little information on how the actual assessments are carried out, and the auditors’ reports distinguish themselves through an almost complete lack of insight into the auditing process itself. This information is crucial: it is nearly impossible to adequately scrutinize the reports without understanding whether auditors were provided the necessary information, whether they ran into any roadblocks looking at specific issues, and how evidence was produced and documented. And without methodologies that are applicable across the board, it will remain very challenging, if not impossible, to compare the approaches taken by different companies.  

The TikTok example shows that the risk and audit reports do not contain the “smoking gun” some might have hoped for. Besides the shortcomings explained above, this is due to the inherent limitations of the DSA itself. Although the DSA attempts to take a holistic approach to complex societal risks that cut across different but interconnected challenges, its reporting system is forced to only consider the obligations put forward by the DSA itself. Any legal assessment framework will struggle to capture complex societal challenges like the integrity of elections or public safety. In addition, phenomena as complex as electoral processes and civic discourse are shaped by a range of different legal instruments, including European rules on political ads, data protection, cybersecurity and media pluralism, not to mention countless national laws. Expecting a definitive answer on the potential implications of large online services on complex societal processes from a risk report will therefore always fall short.  

The Way Forward  

The reports do present a slight improvement in terms of companies’ accountability and transparency. Even if the reports may not include the hard evidence of non-compliance some might have expected, they are a starting point to understanding how platforms attempt to grapple with complex issues taking place on their services. As such, they are, at best, the basis for an iterative approach to compliance. But many of the risks described by the DSA as systemic and their relationships with online services are still poorly understood.  

Instead of relying on platforms or regulators to define how risks should be conceptualized and mitigated, a joint approach is needed, one that builds on the expertise of civil society, academics, and activists, and emphasizes best practices. A collaborative approach would help make sense of these complex challenges and how they can be addressed in ways that strengthen users’ rights and protect fundamental rights.  

Digital Rights and the New Administration | EFFector 37.1

January 15, 2025 at 13:06

It's a new year and EFF is here to help you keep up with your New Year's resolution to stay up-to-date on the latest digital rights news with our EFFector newsletter!

This edition of the newsletter covers our tongue-in-cheek "awards" for some of the worst data breaches in 2024, The Breachies; an explanation of "real-time bidding," the most privacy-invasive surveillance system you may have never heard of; and our notes to Meta on how to empower freedom of expression on their platforms. 

You can read the full newsletter here, and even get future editions directly to your inbox when you subscribe! Additionally, we've got an audio edition of EFFector on the Internet Archive, or you can view it by clicking the button below:

Listen on YouTube

EFFector 37.1 - Digital Rights and the New Administration

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Police Use of Face Recognition Continues to Rack Up Real-World Harms

January 15, 2025 at 11:22

Police have shown, time and time again, that they cannot be trusted with face recognition technology (FRT). It is too dangerous and invasive, and in the hands of law enforcement, a perpetual liability. EFF has long argued that face recognition, whether it is fully accurate or not, is too dangerous for police use, and that such use ought to be banned.

Now, The Washington Post has provided one more reason for this ban: police claim to use FRT just as an investigatory lead, but in practice officers routinely ignore protocol and immediately arrest the most likely match spit out by the computer without first doing their own investigation.

The report also tells the stories of two men who were unknown to the public until now: Christopher Galtin and Jason Vernau. They were wrongfully arrested in St. Louis and Miami, respectively, after being misidentified by face recognition. In both cases, the men were jailed despite readily available evidence showing that, the computer’s apparent match notwithstanding, they were not the men investigators were actually looking for.

This is infuriating. Just last year, the Assistant Chief of Police for the Miami Police Department, the department that wrongfully arrested Jason Vernau, testified before Congress that his department does not arrest people based solely on face recognition and without proper follow-up investigations. “Matches are treated like an anonymous tip,” he said during the hearing.

Apparently not all officers got the memo.

We’ve seen this before. Many times. Galtin and Vernau join a growing list of people known to have been wrongfully arrested around the United States based on police use of face recognition, including Michael Oliver, Nijeer Parks, Randal Reid, Alonzo Sawyer, Robert Williams, and Porcha Woodruff. It is no coincidence that all six of those people, and now Christopher Galtin as well, are Black. Scholars and activists have been raising the alarm for years that, in addition to a huge amount of police surveillance generally being directed at Black communities, face recognition specifically has a long history of lower accuracy when identifying people with darker complexions. The case of Robert Williams in Detroit resulted in a lawsuit that ended with the Detroit Police Department, which had used FRT to justify a number of wrongful arrests, adopting strict new guidelines on the use of face recognition technology.

Cities across the United States have decided to join the growing movement to ban police use of face recognition because this technology is simply too dangerous in the hands of police.

Even in a world where the technology is 100% accurate, police still should not be trusted with it. The temptation for police to fly a drone over a protest and use face recognition to identify the crowd would be too great and the risks to civil liberties too high. After all, we already see that police are cutting corners and using their technology in ways that violate their own departmental policies.


We continue to urge cities, states, and Congress to ban police use of face recognition technology. We stand ready to assist. As intrepid tech journalists and researchers continue to do their jobs, increased evidence of these harms will only increase the urgency of our movement. 

EFFecting Change: Digital Rights & the New Administration

January 14, 2025 at 20:18

Please join EFF for the next segment of EFFecting Change, our livestream series covering digital privacy and free speech. 

EFFecting Change Livestream Series:
Digital Rights & the New Administration
Thursday, January 16th
10:00 AM - 11:00 AM Pacific - Check Local Time
This event is LIVE and FREE!

RSVP Today

What direction will your digital rights take under Trump and the 119th Congress? Find out about the topics EFF is watching and the effect they might have on you.

Join our panel of experts as they discuss surveillance, age verification, and consumer privacy. Learn how you can advocate for your digital rights and the resources available to you with our panel featuring EFF Senior Investigative Researcher Beryl Lipton, EFF Senior Staff Technologist Bill Budington, EFF Legislative Director Lee Tien, and EFF Senior Policy Analyst Joe Mullin.

We hope you and your friends can join us live! Be sure to spread the word, and share our past livestreams. Please note that all events will be recorded for later viewing on our YouTube page.

Want to make sure you don’t miss our next livestream? Here’s a link to sign up for updates about this series: eff.org/ECUpdates.

Platforms Systematically Removed a User Because He Made "Most Wanted CEO" Playing Cards

By: Jason Kelley
January 14, 2025 at 12:33

On December 14, James Harr, the owner of an online store called ComradeWorkwear, announced on social media that he planned to sell a deck of “Most Wanted CEO” playing cards, satirizing the infamous “Most-wanted Iraqi playing cards” introduced by the U.S. Defense Intelligence Agency in 2003. Per the ComradeWorkwear website, the Most Wanted CEO cards would offer “a critique of the capitalist machine that sacrifices people and planet for profit,” and “Unmask the oligarchs, CEOs, and profiteers who rule our world...From real estate moguls to weapons manufacturers.”  

But within a day of posting his plans for the card deck to his combined 100,000 followers on Instagram and TikTok, the New York Post ran a front page story on Harr, calling the cards “disturbing.” Less than 5 hours later, officers from the New York City Police Department came to Harr's door to interview him. They gave no indication he had done anything illegal or would receive any further scrutiny, but the next day the New York police commissioner held the New York Post story up during a press conference after announcing charges against Luigi Mangione, the alleged assassin of UnitedHealth Group CEO Brian Thompson. Shortly thereafter, platforms from TikTok to Shopify disabled both the company’s accounts and Harr’s personal accounts, simply because he used the moment to highlight what he saw as the harms that large corporations and their CEOs cause.

Even benign posts, such as one about Mangione’s astrological sign, were deleted from Threads.

Harr was not alone. After the assassination, thousands of people took to social media to express their negative experiences with the healthcare industry, speculate about who was behind the murder, and show their sympathy for either the victim or the shooter—if social media platforms allowed them to do so. Many users reported having their accounts banned and content removed after sharing comments about Luigi Mangione, Thompson's alleged assassin. TikTok, for example, reportedly removed comments that simply said, "Free Luigi." Even seemingly benign content, such as a post about Mangione’s astrological sign or a video montage of him set to music, was deleted from Threads, according to users. 

The Most Wanted CEO playing cards did not reference Mangione, and the cards—which have not been released—would not include personal information about any CEO. In his initial posts about the cards, Harr said he planned to include QR codes with more information about each company and, in his view, what dangers the companies present. Each suit would represent a different industry, and the back of each card would include a generic shooting-range-style silhouette. As Harr put it in his now-removed video, the cards would include “the person, what they’re a part of, and a QR code that goes to dedicated pages that explain why they’re evil. So you could be like, 'Why is the CEO of Walmart evil? Why is the CEO of Northrop Grumman evil?’” 

A design for the Most Wanted CEO playing cards

Many have riffed on the military’s tradition of using playing cards to help troops learn about the enemy. You can currently find “Gaza’s Most Wanted” playing cards on Instagram, purportedly depicting “leaders and commanders of various groups such as the IRGC, Hezbollah, Hamas, Houthis, and numerous leaders within Iran-backed militias.” A Shopify store selling “Covid’s Most Wanted” playing cards, displaying figures like Bill Gates and Anthony Fauci, and including QR codes linking to a website “where all the crimes and evidence are listed,” is available as of this writing. Hero Decks, which sells novelty playing cards generally showing sports figures, even produced a deck of “Wall Street Most Wanted” cards in 2003 (popular enough to have a second edition). 

A Shopify store selling “Covid’s Most Wanted” playing cards is available as of this writing.

As we’ve said many times, content moderation at scale, whether human or automated, is impossible to do perfectly and nearly impossible to do well. Companies often get it wrong and remove content or whole accounts that those affected by the content would agree do not violate the platform’s terms of service or community guidelines. Conversely, they allow speech that could arguably be seen to violate those terms and guidelines. That has been especially true for speech related to divisive topics and during heated national discussions. These mistakes often remove important voices, perspectives, and context, regularly impacting not just everyday users but journalists, human rights defenders, artists, sex worker advocacy groups, LGBTQ+ advocates, pro-Palestinian activists, and political groups. In some instances, this even harms people's livelihoods. 

Instagram disabled the ComradeWorkwear account for “not following community standards,” with no further information provided. Harr’s personal account was also banned. Meta has a policy against the "glorification" of dangerous organizations and people, which it defines as "legitimizing or defending the violent or hateful acts of a designated entity by claiming that those acts have a moral, political, logical or other justification that makes them acceptable or reasonable.” Meta’s Oversight Board has overturned multiple moderation decisions by the company regarding its application of this policy. While Harr had posted to Instagram that “the CEO must die” after Thompson’s assassination, he included an explanation that, "When we say the ceo must die, we mean the structure of capitalism must be broken.” (Compare this to a series of Instagram story posts from musician Ethel Cain, whose account is still available, which used the hashtag #KillMoreCEOs, for one of many examples of how moderation affects some people and not others.) 

TikTok reported that Harr violated the platform’s community guidelines, with no additional information. The platform has a policy against "promoting (including any praise, celebration, or sharing of manifestos) or providing material support" to violent extremists or people who cause serial or mass violence. TikTok gave Harr no opportunity for appeal, and continued to remove additional accounts that Harr created only to update his followers on his life. TikTok did not point to any specific piece of content that violated its guidelines. 

These voices shouldn’t be silenced into submission simply for drawing attention to the influence that platforms have.

On December 20, PayPal informed Harr it could no longer continue processing payments for ComradeWorkwear, with no information about why. Shopify informed Harr that his store was selling “offensive content,” and his Shopify and Apple Pay accounts would both be disabled. In a follow-up email, Shopify told Harr the decision to close his account “was made by our banking partners who power the payment gateway.”  

Harr’s situation is not unique. Financial and social media platforms have an enormous amount of control over our online expression, and we’ve long been critical of their over-moderation,  uneven enforcement, lack of transparency, and failure to offer reasonable appeals. This is why EFF co-created The Santa Clara Principles on transparency and accountability in content moderation, along with a broad coalition of organizations, advocates, and academic experts. These platforms have the resources to set the standard for content moderation, but clearly don’t apply their moderation evenly, and in many instances, aren’t even doing the basics—like offering clear notices and opportunities for appeal.  

Harr was one of many who expressed frustration online with the growing power of corporations. These voices shouldn’t be silenced into submission simply for drawing attention to the influence that they have. These are exactly the kinds of actions that Harr intended to highlight. If the Most Wanted CEO deck is ever released, it shouldn’t be a surprise for the CEOs of these platforms to find themselves in the lineup.  

Five Things to Know about the Supreme Court Case on Texas’ Age Verification Law, Free Speech Coalition v. Paxton

By: Jason Kelley
January 13, 2025 at 16:02

The Supreme Court will hear arguments on Wednesday in a case that will determine whether states can violate adults’ First Amendment rights to access sexual content online by requiring them to verify their age.  

The case, Free Speech Coalition v. Paxton, could have far-reaching effects for every internet user’s free speech, anonymity, and privacy rights. The Supreme Court will decide whether a Texas law, HB 1181, is constitutional. HB 1181 requires a huge swath of websites—many that would likely not consider themselves adult content websites—to implement age verification.  

The plaintiff in this case is the Free Speech Coalition, the nonprofit, non-partisan trade association for the adult industry, and the defendant is Texas, represented by Ken Paxton, the state’s Attorney General. But this case is about much more than adult content or the adult content industry. State and federal lawmakers across the country have recently turned to ill-conceived, unconstitutional, and dangerous censorship legislation that would force websites to determine the identity of users before allowing them access to protected speech—in some cases, social media. If the Supreme Court were to side with Texas, it would open the door to a slew of state laws that frustrate internet users’ First Amendment rights and make them less secure online. Here's what you need to know about the upcoming arguments, and why it’s critical for the Supreme Court to get this case right.

1. Adult Content is Protected Speech, and It Violates the First Amendment for a State to Require Age-Verification to Access It.  

Under U.S. law, adult content is protected speech. Under the Constitution and a history of legal precedent, a legal restriction on access to protected speech must pass a very high bar. Requiring invasive age verification to access protected speech online simply does not pass that test. Here’s why: 

While other laws prohibit the sale of adult content to minors and result in age verification via a government ID or other proof of age in physical spaces, there are practical differences that make those disclosures less burdensome, or even nonexistent, compared to online prohibitions. Because of the sheer scale of the internet, regulations affecting online content sweep in millions of people who are obviously adults, not just those who visit physical bookstores or other places to access adult materials, and not just those who might be seventeen or under.  

First, under HB 1181, any website that Texas decides is composed of “one-third” or more of “sexual material harmful to minors” is forced to collect age-verifying personal information from all visitors—even to access the other two-thirds of material that is not adult content.  

Second, while there are a variety of methods for verifying age online, the Texas law generally forces adults to submit personal information over the internet to access entire websites, not just specific sexual materials. This is the most common method of online age verification today, and the law doesn't set out a specific method for websites to verify ages. But fifteen million adult U.S. citizens do not have a driver’s license, and over two million have no form of photo ID. Other methods of age verification, such as using online transactional data, would also exclude a large number of people who, for example, don’t have a mortgage.  

The personal data disclosed via age verification is extremely sensitive, and unlike a password, often cannot easily (or ever) be changed.

Less accurate methods, such as “age estimation,” which are usually based on an image or video of the user’s face alone, have their own privacy concerns. These methods are unable to determine with any accuracy whether a large number of people—for example, those over seventeen but under twenty-five years old—are the age they claim to be. These technologies are unlikely to satisfy the requirements of HB 1181 anyway. 

Third, even for people who are able to verify their age, the law still deters adult users from speaking and accessing lawful content by undermining anonymous internet browsing. Courts have consistently ruled that anonymity is an aspect of the freedom of speech protected by the First Amendment.  

Lastly, compliance with the law will require websites to retain this information, exposing their users to a variety of anonymity, privacy, and security risks not present when briefly flashing an ID card to a cashier.  

2. HB 1181 Requires Every Adult in Texas to Verify Their Age to See Legally Protected Content, Creating a Privacy and Data Security Nightmare. 

Once information is shared to verify a user’s age, there’s no real way for a website visitor to be certain that the data they’re handing over is not going to be retained and used by the website, or further shared or even sold. Age verification systems are surveillance systems. Users must trust that the website they visit, or its third-party verification service, both of which could be fly-by-night companies with no published privacy standards, are following these rules. While many users will simply not access the content as a result—see the above point—others may accept the risk, at their peril.  

There is real risk that website employees will misuse the data, or that thieves will steal it. Data breaches affect nearly everyone in the U.S. Last year, age verification company AU10TIX encountered a breach, and there’s no reason to suspect this issue won’t grow if more websites are required, by law, to use age verification. The more information a website collects, the more chances there are for it to get into the hands of a marketing company, a bad actor, or someone who has filed a subpoena for it.  

The personal data disclosed via age verification is extremely sensitive, and unlike a password, often cannot easily (or ever) be changed. The law amplifies the security risks because it applies to such sensitive websites, potentially allowing a website or bad actor to link this personal information with the website at issue, or even with the specific types of adult content that a person views. This sets up a dangerous regime that would reasonably frighten many users away from viewing the site in the first place. Given the regularity of data breaches of less sensitive information, HB 1181 creates a perfect storm for data privacy. 

3. This Decision Could Have a Huge Impact on Other States with Similar Laws, as Well as Future Laws Requiring Online Age Verification.  

More than a third of U.S. states have introduced or enacted laws similar to Texas’ HB 1181. This ruling could have major consequences for those laws and for the freedom of adults across the country to safely and anonymously access protected speech online, because the precedent the Court sets here could apply to both those and future laws. A bad decision in this case could be seen as a green light for federal lawmakers who are interested in a broader national age verification requirement on online pornography. 

It’s also not just adult content that’s at risk. A ruling from the Court on HB 1181 that allows Texas to violate the First Amendment here could make it harder to fight state and federal laws like the Kids Online Safety Act, which would force users to verify their ages before accessing social media. 

4. The Supreme Court Has Rightly Struck Down Similar Laws Before.  

In 1997, the Supreme Court struck down, in a 7-2 decision, a federal online age-verification law in Reno v. American Civil Liberties Union. In that landmark free speech case, the court ruled that many elements of the Communications Decency Act violated the First Amendment, including the part of the law making it a crime for anyone to engage in online speech that is "indecent" or "patently offensive" if the speech could be viewed by a minor. Like HB 1181, that law would have resulted in many users being unable to view constitutionally protected speech, as many websites would have had to implement age verification, while others would have been forced to shut down.  

Because courts have consistently held that similar age verification laws are unconstitutional, the precedent is clear. 

The CDA fight was one of the first big rallying points for online freedom, and EFF participated as both a plaintiff and as co-counsel. When the law first passed, thousands of websites turned their backgrounds black in protest. EFF launched its "blue ribbon" campaign and millions of websites around the world joined in support of free speech online. Even today, you can find the blue ribbon throughout the Web. 

Since that time, both the Supreme Court and many other federal courts have correctly recognized that online identification mandates—no matter what method they use or form they take—more significantly burden First Amendment rights than restrictions on in-person access to adult materials. Because courts have consistently held that similar age verification laws are unconstitutional, the precedent is clear. 

5. There is No Safe, Privacy Protecting Age-Verification Technology. 

The same constitutional problems that the Supreme Court identified in Reno back in 1997 have only metastasized. Since then, courts have found that “[t]he risks of compelled digital verification are just as large, if not greater” than they were nearly 30 years ago. Think about it: no matter what method someone uses to verify your age, to do so accurately, they must know who you are, and they must retain that information in some way or verify it again and again. Different age verification methods don’t each fit somewhere on a spectrum of 'more safe' and 'less safe,' or 'more accurate' and 'less accurate.' Rather, they each fall on a spectrum of dangerous in one way to dangerous in a different way. For more information about the dangers of various methods, you can read our comments to the New York State Attorney General regarding the implementation of the SAFE for Kids Act. 

* * *

 

The Supreme Court Should Uphold Online First Amendment Rights and Strike Down This Unconstitutional Law 

Texas’ age verification law robs internet users of anonymity, exposes them to privacy and security risks, and blocks some adults entirely from accessing sexual content that’s protected under the First Amendment. Age-verification laws like this one reach fully into every U.S. adult household. We look forward to the court striking down this unconstitutional law and once again affirming these important online free speech rights. 

For more information on this case, view our amicus brief filed with the Supreme Court. For a one-pager on the problems with age verification, see here. For more information on recent state laws dealing with age verification, see Fighting Online ID Mandates: 2024 In Review. For more information on how age verification laws are playing out around the world, see Global Age Verification Measures: 2024 in Review. 

 

Meta’s New Content Policy Will Harm Vulnerable Users. If It Really Valued Free Speech, It Would Make These Changes

Earlier this week, when Meta announced changes to their content moderation processes, we were hopeful that some of those changes—which we will address in more detail in this post—would enable greater freedom of expression on the company’s platforms, something for which we have advocated for many years. While Meta’s initial announcement primarily addressed changes to its misinformation policies and included rolling back over-enforcement and automated tools that we have long criticized, we expressed hope that “Meta will also look closely at its content moderation practices with regards to other commonly censored topics such as LGBTQ+ speech, political dissidence, and sex work.”

Facebook has a clear and disturbing track record of silencing and further marginalizing already oppressed peoples, and then being less than forthright about their content moderation policy.

However, shortly after our initial statement was published, we became aware that rather than addressing those historically over-moderated subjects, Meta was taking the opposite tack and—as reported by the Independent—making targeted changes to its hateful conduct policy that would allow dehumanizing statements to be made about certain vulnerable groups. 

It was our mistake to formulate our responses and expectations on what is essentially a marketing video for upcoming policy changes before any of those changes were reflected in their documentation. We prefer to focus on the actual impacts of online censorship felt by people, which tends to be further removed from the stated policies outlined in community guidelines and terms of service documents. Facebook has a clear and disturbing track record of silencing and further marginalizing already oppressed peoples, and then being less than forthright about their content moderation policy. These first changes to actually surface in Facebook's community standards document seem to be in the same vein.

Specifically, Meta’s hateful conduct policy now contains the following:

  • People sometimes use sex- or gender-exclusive language when discussing access to spaces often limited by sex or gender, such as access to bathrooms, specific schools, specific military, law enforcement, or teaching roles, and health or support groups. Other times, they call for exclusion or use insulting language in the context of discussing political or religious topics, such as when discussing transgender rights, immigration, or homosexuality. Finally, sometimes people curse at a gender in the context of a romantic break-up. Our policies are designed to allow room for these types of speech. 

But the implementation of this policy shows that it is focused on allowing more hateful speech against specific groups, with a noticeable and particular focus on enabling more speech challenging the legitimacy of LGBTQ+ rights. For example: 

  • While allegations of mental illness against people based on their protected characteristics remain a tier 2 violation, the revised policy now allows “allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism [sic] and homosexuality.”
  • The revised policy now specifies that Meta allows speech advocating gender-based and sexual orientation-based exclusion from military, law enforcement, and teaching jobs, and from sports leagues and bathrooms.
  • The revised policy also removed previous prohibitions on comparing people to inanimate objects, feces, and filth based on their protected characteristics.

These changes reveal that Meta seems less interested in freedom of expression as a principle and more focused on appeasing the incoming U.S. administration, a concern we mentioned in our initial statement with respect to the announced move of the content policy team from California to Texas to address “appearances of bias.” Meta said it would be making some changes to reflect that these topics are “the subject of frequent political discourse and debate” and can be said “on TV or the floor of Congress.” But if that is truly Meta’s new standard, we are struck by how selectively it is being rolled out, particularly in how it allows more anti-LGBTQ+ speech.

We continue to stand firmly against hateful anti-trans content remaining on Meta’s platforms, and strongly condemn any policy change directly aimed at enabling hate toward vulnerable communities—both in the U.S. and internationally.

Real and Sincere Reforms to Content Moderation Can Both Promote Freedom of Expression and Protect Marginalized Users

In its initial announcement, Meta also said it would change how policies are enforced to reduce mistakes, stop relying on automated systems to flag every piece of content, and add staff to review appeals. We believe that, in theory, these are positive measures that should result in less of the over-censorship for which Meta has long been criticized by the global digital rights community, as well as by artists, sex worker advocacy groups, LGBTQ+ advocates, Palestine advocates, and political groups, among others.

But we are aware that these problems, at a corporation with a history of biased and harmful moderation like Meta, need a careful, well-thought-out, and sincere fix that will not undermine broader freedom of expression goals.

For more than a decade, EFF has been critical of the impact that content moderation at scale—and automated content moderation in particular—has on various groups. If Meta is truly interested in promoting freedom of expression across its platforms, we renew our calls to prioritize the following much-needed improvements instead of allowing more hateful speech.

Meta Must Invest in Its Global User Base and Cover More Languages 

Meta has long failed to invest in cultural and linguistic competence in its moderation practices, often leading to inaccurate removal of content as well as a greater reliance on (faulty) automation tools. This has been apparent to us for a long time. In the wake of the 2011 Arab uprisings, we documented our concerns with Facebook’s reporting processes and their effect on activists in the Middle East and North Africa. More recently, the need for cultural competence in the industry generally was emphasized in the revised Santa Clara Principles.

Over the years, Meta’s global shortcomings became even more apparent as its platforms were used to promote hate and extremism in a number of locales. One key example is the platform’s failure to moderate anti-Rohingya sentiment in Myanmar—the direct result of having far too few Burmese-speaking moderators (in 2015, as extreme violence and violent sentiment toward the Rohingya was well underway, there were just two such moderators).

If Meta is indeed going to roll back the use of automation to flag and act on most content and ensure that appeals systems work effectively, which would solve some of these problems, it must also invest globally in qualified content moderation personnel to make sure that content from countries outside of the United States and in languages other than English is fairly moderated.

Reliance on Automation to Flag Extremist Content Allows for Flawed Moderation

We have long been critical of Meta’s over-enforcement of terrorist and extremist speech, specifically of the impact it has on human rights content. Part of the problem is Meta’s over-reliance on automated moderation to flag extremist content. A 2020 document reviewing moderation across the Middle East and North Africa claimed that the algorithms used to detect terrorist content in Arabic incorrectly flag posts 77 percent of the time.

More recently, we have seen this with Meta’s automated moderation to remove the phrase “from the river to the sea.” As we argued in a submission to the Oversight Board—with which the Board also agreed—moderation decisions must be made on an individualized basis because the phrase has a significant historical usage that is not hateful or otherwise in violation of Meta’s community standards.

Another example of this problem that has overlapped with Meta’s shortcomings with respect to linguistic competence is in relation to the term “shaheed,” which translates most closely to “martyr” and is used by Arabic speakers and many non-Arabic-speaking Muslims elsewhere in the world to refer primarily (though not exclusively) to individuals who have died in the pursuit of ideological causes. As we argued in our joint submission with ECNL to the Meta Oversight Board, use of the term is context-dependent, but Meta has used automated moderation to indiscriminately remove instances of the word. In their policy advisory opinion, the Oversight Board noted that any restrictions on freedom of expression that seek to prevent violence must be necessary and proportionate, “given that undue removal of content may be ineffective and even counterproductive.”

Marginalized communities that experience persecution offline often face disproportionate censorship online. It is imperative that Meta recognize the responsibilities it has to its global user base in upholding free expression, particularly of communities that may otherwise face censorship in their home countries.

Sexually-Themed Content Remains Subject to Discriminatory Over-censorship

Our critique of Meta’s removal of sexually-themed content goes back more than a decade. The company’s policies on adult sexual activity and nudity affect a wide range of people and communities, but most acutely impact LGBTQ+ individuals and sex workers. Typically aimed at keeping sites “family friendly” or “protecting the children,” these policies are unevenly enforced, often classifying LGBTQ+ content as “adult” or “harmful” when similar heterosexual content isn’t. These policies have often been written and enforced discriminatorily, at the expense of gender-fluid and nonbinary speakers; we joined the We the Nipple campaign aimed at remedying this discrimination.

In the midst of ongoing political divisions, issues like this have a serious impact on social media users. 

Most nude content is legal, and engaging with such material online provides individuals with a safe and open framework to explore their identities, advocate for broader societal acceptance and against hate, build communities, and discover new interests. With Meta intervening to become the arbiter of how people create and engage with nudity and sexuality, both offline and in the digital space, a crucial form of engagement for all kinds of users has been removed, and the voices of people with less power have regularly been shut down.

Over-removal of Abortion Content Stifles User Access to Essential Information 

The removal of abortion-related posts on Meta platforms containing the word ‘kill’ has failed to meet the criteria for restricting users’ right to freedom of expression. Meta has regularly over-removed abortion-related content, hamstringing its users’ ability to voice their political beliefs. The use of automated tools for content moderation leads to the biased removal of this language, as well as of essential information. In 2022, Vice reported that a Facebook post stating "abortion pills can be mailed" was flagged within seconds of being posted.

At a time when bills are being introduced across the U.S. to restrict the exchange of abortion-related information online, reproductive justice and safe access to abortion, like so many other aspects of managing our healthcare, are fundamentally tied to our digital lives. And with corporations deciding what content is hosted online, the impact of this removal is exacerbated.

What was once benign online data is now potentially criminal evidence. This expanded threat to digital rights is especially dangerous for BIPOC, lower-income, immigrant, LGBTQ+ people and other traditionally marginalized communities, and the healthcare providers serving these communities. Meta must adhere to its responsibility to respect international human rights law, and ensure that any removal of abortion-related content is both necessary and proportionate.

Meta’s symbolic move of its content team from California to Texas, a state that is aiming to make the distribution of abortion information illegal, also raises serious concerns that Meta will backslide on this issue, in line with Texas state law banning abortion, rather than make improvements.

Meta Must Do Better to Provide Users With Transparency 

EFF has been critical of Facebook’s lack of transparency for a long time. When it comes to content moderation, the company’s transparency reports lack many of the basics: How many human moderators are there, and how many cover each language? How are moderators trained? The company’s community standards enforcement report includes rough estimates of how many pieces of content are removed in each category, but it does not tell us why or how those decisions are made.

Meta makes billions from its exploitation of our data, too often choosing its profits over our privacy and opting to collect as much as possible while denying users intuitive control over their data. In many ways this problem underlies the rest of the corporation’s harms: its core business model depends on collecting as much information about users as possible, then using that data to target ads, as well as to target competitors.

That’s why EFF, with others, launched the Santa Clara Principles on how corporations like Meta can best provide meaningful transparency and accountability around the increasingly aggressive moderation of user-generated content. And as platforms like Facebook, Instagram, and X continue to occupy an even bigger role in arbitrating our speech and controlling our data, there is increased urgency to ensure that their reach is not only checked, but reduced.

Flawed Approach to Moderating Misinformation with Censorship 

Misinformation has been thriving on social media platforms, including Meta’s. As we said in our initial statement, and have written before, Meta and other platforms should use the variety of fact-checking and verification tools available to them, including both community notes and professional fact-checkers, and should have robust systems in place to check any flagging that results.

Meta and other platforms should also employ media literacy tools, such as encouraging users to read articles before sharing them and providing resources to help users assess the reliability of information on the site. We have also called for Meta and others to stop privileging government officials by providing them with greater opportunities to lie than other users.

While we expressed some hope on Tuesday, the cynicism voiced by others now seems warranted. Over the years, EFF and many others have worked to push Meta to make improvements. We've had some success with its "Real Names" policy, for example, which disproportionately affected the LGBTQ community and political dissidents. We also fought for, and won, improvements to Meta's policy so that images of breastfeeding are allowed rather than marked as "sexual content." If Meta truly values freedom of expression, we urge it to redirect its focus to empowering historically marginalized speakers, rather than empowering only their detractors.

EFF Statement on Meta's Announcement of Revisions to Its Content Moderation Processes

Update: After this blog post was published (addressing Meta's blog post here), we learned Meta also revised its public "Hateful Conduct" policy in ways EFF finds concerning. We address these changes in this blog post, published January 9, 2025.

In general, EFF supports moves that bring more freedom of expression and transparency to platforms—regardless of their political motivation. We’re encouraged by Meta's recognition that automated flagging and responses to flagged content have caused all sorts of mistakes in moderation. Just this week, it was reported that some of those "mistakes" were heavily censoring LGBTQ+ content. We sincerely hope that the lightened restrictions announced by Meta will apply uniformly, and not just to hot-button U.S. political topics. 

Censorship, broadly, is not the answer to misinformation. We encourage social media companies to employ a variety of non-censorship tools to address problematic speech on their platforms, and fact-checking can be one of those tools. Community notes, essentially crowd-sourced fact-checking, can be a very valuable tool for addressing misinformation and can give users greater control. But fact-checking by professional organizations with ready access to subject-matter expertise can be another. This has proved especially true in international contexts, where professional fact-checkers have been instrumental in refuting, for example, genocide denial.

So, even if Meta is changing how it uses and prioritizes fact-checking entities, we hope that it will continue to look to them as an available tool. Meta does not have to, and should not, choose one system to the exclusion of the other.

Importantly, misinformation is only one of many content moderation challenges facing Meta and other social media companies. We hope Meta will also look closely at its content moderation practices with regards to other commonly censored topics such as LGBTQ speech, political dissidence, and sex work.  

Meta’s decision to move its content teams from California to “help reduce the concern that biased employees are overly censoring content” seems more political than practical. There is, of course, no population that is inherently free from bias, and by moving to Texas the “concern” will likely not be reduced, merely relocated from perceived “California bias” to perceived “Texas bias.”

Content moderation at scale, whether human or automated, is impossible to do perfectly and nearly impossible to do well, involving millions of difficult decisions. On the one hand, Meta has been over-moderating some content for years, resulting in the suppression of valuable political speech. On the other hand, Meta's previous rules have offered protection from certain types of hateful speech, harassment, and harmful disinformation that isn't illegal in the United States. We applaud Meta’s efforts to try to fix its over-censorship problem, but we will watch closely to make sure it is a good-faith effort rolled out fairly, and not merely a political maneuver to accommodate the upcoming U.S. administration change.

Sixth Circuit Rules Against Net Neutrality; EFF Will Continue to Fight

Last week, the Sixth U.S. Circuit Court of Appeals ruled against the FCC, rejecting its authority to classify broadband as a Title II “telecommunications service.” In doing so, the court removed net neutrality protections for all Americans and took away the FCC’s ability to meaningfully regulate internet service providers.

This ruling fundamentally gets wrong the reality of internet service we all live with every day. Nearly 80% of Americans view broadband access as being as important as water and electricity. It is no longer an extra, non-necessary “information service,” as it was seen 40 years ago; it is a vital medium of communication in everyday life. Business, health services, education, entertainment, our social lives, and more have increasingly moved online. By ruling that broadband is an “information service” and not a “telecommunications service,” the court is saying that the ISPs that control your broadband access will continue to face little to no oversight for their actions.

This is intolerable.

Net neutrality is the principle that ISPs treat all data that travels over their networks equally, without improper discrimination in favor of particular apps, sites, or services. At its core, net neutrality is a principle of equity and a protector of innovation: at least online, large monopolistic ISPs don’t get to determine winners and losers. Net neutrality ensures that users, not ISPs, determine their online experience. As such, it is fundamental to user choice, access to information, and free expression online.

By removing protections against actions like blocking, throttling, and paid prioritization, the court gives those willing and able to pay ISPs an advantage over those who are not. It privileges large legacy corporations that have partnerships with the big ISPs, and it means that newer, smaller, or niche services will have trouble competing, even if they offer a superior service. It means that ISPs can throttle your service–or that of, say, a fire department fighting the largest wildfire in state history. They can block a service they don’t like. In addition to charging you for access to the internet, they can charge services and websites for access to you, artificially driving up costs. And where most Americans have little choice in home broadband providers, it means these ISPs will be able to exercise their monopoly power not just on the price you pay for access, but how you access and engage with information as well.

Moving forward, it is more important than ever for individual states to pass their own net neutrality laws, or to defend the ones they already have on the books. California passed a gold-standard net neutrality law in 2018 that has survived judicial scrutiny. It is up to us to ensure it remains in place.

Congress can also end this endless whiplash of reclassification once and for all by passing a law that classifies broadband internet service firmly under Title II. Such proposals have been introduced before; they ought to be introduced again.

This is a bad ruling for Team Internet, but we are resilient. EFF, standing with users, innovators, creators, public interest advocates, librarians, educators, and everyone else who relies on the open internet, will continue to champion the principles of net neutrality and work toward an equitable and open internet for all.

2024 End-of-Year Fundraiser Succeeds: over $480k to support software freedom

We thank both the donors who offered this historic $204,877 match and those who gave to help exceed the challenge

In late November, SFC, with the help of a group of generous individuals who pledged match gifts large and small, posted a huge challenge to our donors. We were so thankful for the donors who came together to offer a match challenge of $204,877, substantially larger than any of our match challenges in history.

Last Call: The Combined Federal Campaign Pledge Period Closes on January 15!

January 7, 2025 at 11:38

The pledge period for the Combined Federal Campaign (CFC) closes on Wednesday, January 15! If you're a U.S. federal employee or retiree, now is the time to make your pledge and support EFF’s work to protect your rights online. 

If you haven’t given before, donating to EFF through the CFC is quick and easy! Just head over to GiveCFC.org and click “DONATE.” Then search for EFF using our CFC ID 10437 and make a pledge via payroll deduction, credit/debit card, or e-check. If you have a renewing pledge, you can also choose to increase your support there.

The CFC is the world’s largest and most successful annual charity campaign for U.S. federal employees and retirees. Last year, members of this community raised nearly $34,000 to support EFF’s initiatives advocating for privacy and free expression online. That support has helped power our work.

Federal employees and retirees have a tremendous impact on our democracy and the future of civil liberties and human rights online. By making a pledge through the CFC, you can shape a future where your privacy and free speech rights are protected. Make your pledge today using EFF’s CFC ID 10437.

EFF Goes to Court to Uncover Police Surveillance Tech in California

Which surveillance technologies are California police using? Are they buying access to your location data? If so, how much are they paying? These are basic questions the Electronic Frontier Foundation is trying to answer in a new lawsuit called Pen-Link v. County of San Joaquin Sheriff’s Office.

EFF filed a motion in California Superior Court to join—or intervene in—an existing lawsuit to get access to documents we requested. The private company Pen-Link sued the San Joaquin Sheriff’s Office to block the agency from disclosing to EFF the unredacted contracts between them, claiming the information is a trade secret. We are going to court to make sure the public gets access to these records.

The public has a right to know the technology that law enforcement buys with taxpayer money. This information is not a trade secret, despite what private companies try to claim.

How did this case start?

As part of EFF’s transparency mission, we sent public records requests to California law enforcement agencies—including the San Joaquin Sheriff’s Office—seeking information about law enforcement’s use of technology sold by two companies: Pen-Link and its subsidiary, Cobwebs Technologies.

The Sheriff’s Office gave us 40 pages of redacted documents. But at the request of Pen-Link, the Sheriff’s Office redacted the descriptions and prices of the products, services, and subscriptions offered by Pen-Link and Cobwebs.

Pen-Link then filed a lawsuit to permanently block the Sheriff’s Office from making the information public, claiming its prices and descriptions are trade secrets. Among other things, Pen-Link requires its law enforcement customers to sign non-disclosure agreements to not reveal use of the technology without the company’s consent. In addition to thwarting transparency, this raises serious questions about defendants’ rights to obtain discovery in criminal cases.

“Customer and End Users are prohibited from disclosing use of the Deliverables, names of Cobwebs' tools and technologies, the existence of this agreement or the relationship between Customers and End Users and Cobwebs to any third party, without the prior written consent of Cobwebs,” according to Cobwebs’ Terms.

Unfortunately, these kinds of terms are not new.

EFF is entering the lawsuit to make sure the records get released to the public. Pen-Link’s lawsuit is known as a “reverse” public records lawsuit because it seeks to block, rather than grant, access to public records. It is a rare tool traditionally used only to protect a person’s constitutional right to privacy—not a business’ purported trade secrets. In addition to defending against the “reverse” public records lawsuit, we are asking the court to require the Sheriff’s Office to give us the un-redacted records.

Who are Pen-Link and Cobwebs Technologies?

Pen-Link and its subsidiary Cobwebs Technologies are private companies that sell products and services to law enforcement. Pen-Link has been around for years and may be best known as a company that helps law enforcement execute wiretaps after a court grants approval. In 2023, Pen-Link acquired the company Cobwebs Technologies.

The redacted documents indicate that San Joaquin County was interested in Cobwebs’ “Web Intelligence Investigation Platform.” In other cases, this platform has included separate products like WebLoc, Tangles, or a “face processing subscription.” WebLoc is a platform that provides law enforcement with a vast amount of location data sourced from large data sets. Tangles uses AI to glean intelligence from the “open, deep and dark web.” Journalists at multiple news outlets have chronicled this technology and have published Cobwebs training manuals that demonstrate that its product can be used to target activists and independent journalists. The company has also provided proxy social media accounts for undercover investigations, which led Meta to name it a surveillance-for-hire company and to delete hundreds of accounts associated with the platform. Cobwebs has had multiple high-value contracts with federal agencies like Immigration and Customs Enforcement (ICE) and the Internal Revenue Service (IRS), as well as state entities like the Texas Department of Public Safety and the West Virginia Fusion Center. EFF classifies this type of product as a “Third Party Investigative Platform,” a category that we began documenting in the Atlas of Surveillance project earlier this year.

What’s next?

Before EFF can officially join the case, the court must grant our motion; then we can file our petition and brief the case. A favorable ruling would grant the public access to these documents and show law enforcement contractors that they can’t hide their surveillance tech behind claims of trade secrets.

For communities to have informed conversations and make reasonable decisions about powerful surveillance tools being used by their governments, our right to information under public records laws must be honored. The costs and descriptions of government purchases are common data points, regularly subject to disclosure under public records laws.

Allowing Pen-Link to keep this information secret would dangerously diminish the public’s right to government transparency and help facilitate surveillance of U.S. residents. In the past, our public records work has exposed similar surveillance technology. In 2022, EFF produced a large exposé on Fog Data Science, the secretive company selling mass surveillance to local police.

The case number is STK-CV-UWM-0016425. Read more here: 

EFF's Motion to Intervene
EFF's Points and Authorities
Trujillo Declaration & EFF's Cross-Petition
Pen-Link's Original Complaint
Redacted documents produced by County of San Joaquin Sheriff’s Office

Karen Sandler interviews Cory Doctorow

Our Executive Director Karen Sandler recently sat down with Cory Doctorow to talk about the software right to repair, the utility and history of DMCA exemptions, and some of the differences in how laws take effect in different places around the world. Doctorow is widely known for his speculative fiction touching on issues of technology, activism, and post-scarcity economics. We were so excited for this conversation; many on SFC staff are fans and had a great time preparing for it.

Embroidery and resilient software freedom in 2025

Sage Sharp's in-progress embroidery of the SFC tree logo

CC-BY-NA 4.0 Sage Sharp

I spent most of 2024 recovering from a spine injury after a car accident. I’d love to share my new insights into free software accessibility, and how both free software and embroidery helped me build resiliency. I’ve been working on a special embroidery that I’ll send to a donor who gives to Software Freedom Conservancy on January 8. We hope that, if you are able to give, you’ll consider donating!
