Electronic Frontier Foundation

New Privacy Badger Prevents Google From Mangling More of Your Links and Invading Your Privacy

We released a new version of Privacy Badger [1] that updates how we fight “link tracking” across a number of Google products. With this update, Privacy Badger removes tracking from links in Google Docs, Gmail, Google Maps, and Google Images results. Privacy Badger now also removes tracking from links added after scrolling through Google Search results.

Link tracking is a creepy surveillance tactic that allows a company to follow you whenever you click on a link to leave its website. As we wrote in our original announcement of Google link tracking protection, Google uses different techniques in different browsers. The techniques also vary across Google products. One common link tracking approach surreptitiously redirects the outgoing request through the tracker’s own servers. There is virtually no benefit [2] for you when this happens. The added complexity mostly just helps Google learn more about your browsing.

It's been a few years since our original release of Google link tracking protection. Things have changed in the meantime. For example, Google Search now dynamically adds results as you scroll the page ("infinite scroll" has mostly replaced distinct pages of results). Google Hangouts no longer exists! This made it a good time for us to update Privacy Badger’s first party tracking protections.
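
Handling that infinite scroll means watching the page for results injected after load. Here is a minimal sketch of the idea in TypeScript (not Privacy Badger’s actual source; the cleanLink helper is a hypothetical stand-in):

```ts
// Hypothetical helper: drop hyperlink-auditing "ping" beacons from a link.
// (Illustrative only; real link cleaning does more, as described below.)
function cleanLink(link: HTMLAnchorElement): void {
  link.removeAttribute("ping");
}

// Watch for search results injected by infinite scroll so that newly
// added links get cleaned just like the ones present at page load.
const observer = new MutationObserver((mutations) => {
  for (const mutation of mutations) {
    for (const node of mutation.addedNodes) {
      if (node instanceof HTMLElement) {
        node.querySelectorAll<HTMLAnchorElement>("a[href]").forEach(cleanLink);
      }
    }
  }
});
observer.observe(document.body, { childList: true, subtree: true });
```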

Privacy Badger’s extension popup window showing that link tracking protection is active for the currently visited site.

You can always check to see what Privacy Badger has done on the site you’re currently on by clicking on Privacy Badger’s icon in your browser toolbar. Whenever link tracking protection is active, you will see that reflected in Privacy Badger’s popup window.

We'll get into the technical explanation about how this all works below, but the TL;DR is that this is just one way that Privacy Badger continues to create a less tracking- and tracker-riddled internet experience.

More Details

This update is an overhaul of how Google link tracking removal works. Trying to get it all done inside a “content script” (a script we inject into Google pages) was becoming increasingly untenable. Privacy Badger wasn’t catching all cases of tracking and was breaking page functionality. Patching to catch the missed tracking with the content script was becoming unreasonably complex and likely to break more functionality.

Going forward, Privacy Badger will still attempt to replace tracking URLs on pages with the content script, but will no longer try to prevent links from triggering tracking beacon requests. Instead, it will block all such requests in the network layer.
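
As a rough illustration, network-layer blocking with the blocking webRequest API looks something like the sketch below. This is not Privacy Badger’s actual code, and the match pattern is only an example of a tracking-ping endpoint:

```ts
// Cancel link-click tracking "beacon" requests before they leave the
// browser. Returning { cancel: true } drops the request entirely.
chrome.webRequest.onBeforeRequest.addListener(
  () => {
    return { cancel: true };
  },
  // Illustrative match pattern for a tracking-ping endpoint.
  { urls: ["*://www.google.com/gen_204*"] },
  ["blocking"]
);
```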

Often the link destination is replaced with a redirect URL in response to interaction with the link. Sometimes Privacy Badger catches this mutation in the content script and fixes the link in time. Sometimes the page uses a more complicated approach to covertly open a redirect URL at the last moment, which isn’t caught in the content script. Privacy Badger works around these cases by redirecting the redirect to where you actually want to go in the network layer.
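
Conceptually, “redirecting the redirect” can be sketched with the same blocking webRequest API. Again, this is an illustration under our own assumptions rather than Privacy Badger’s actual implementation:

```ts
// When a request is about to bounce through Google's /url redirect
// endpoint, send the browser straight to the real destination instead.
chrome.webRequest.onBeforeRequest.addListener(
  (details) => {
    const url = new URL(details.url);
    const destination =
      url.searchParams.get("q") ?? url.searchParams.get("url");
    if (destination && destination.startsWith("http")) {
      return { redirectUrl: destination };
    }
    return {};
  },
  { urls: ["*://www.google.com/url*"] },
  ["blocking"]
);
```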

Google’s Manifest V3 (MV3) removes the ability to redirect requests using the flexible webRequest API that Privacy Badger uses now. MV3 replaces blocking webRequest with the limited-by-design Declarative Net Request (DNR) API. Unfortunately, this means that MV3 extensions are not able to properly fix redirects at the network layer at this time. We would like to see this important functionality gap resolved before MV3 becomes mandatory for all extensions.

Privacy Badger still attempts to remove tracking URLs with the content script so that you can always see and copy to clipboard the links you actually want, as opposed to mangled links you don’t. For example, without this feature, you may expect to copy “https://example.com”, but you will instead get something like “https://www.google.com/url?q=https://example.com/&sa=D&source=editors&ust=1692976254645783&usg=AOvVaw1LT4QOoXXIaYDB0ntz57cf”.
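
A simplified content-script sketch of that cleanup, assuming the q parameter carries the real destination as in the example above (again, illustrative rather than Privacy Badger’s actual code):

```ts
// Recover the real destination from a wrapped link such as
// https://www.google.com/url?q=https://example.com/&sa=D&...
function unwrapRedirect(href: string): string {
  const url = new URL(href);
  if (url.hostname === "www.google.com" && url.pathname === "/url") {
    return url.searchParams.get("q") ?? url.searchParams.get("url") ?? href;
  }
  return href;
}

// Rewrite every matching link so that copying a link copies the address
// you actually want.
for (const link of document.querySelectorAll<HTMLAnchorElement>("a[href]")) {
  link.href = unwrapRedirect(link.href);
}
```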

To learn more about this update, and to see a breakdown of the different kinds of Google link tracking, visit the pull request on GitHub.

Let us know if you have any feedback through email, or, if you have a GitHub account, through our GitHub issue tracker.

To install Privacy Badger, visit privacybadger.org. Thank you for using Privacy Badger!

[1] Privacy Badger version 2023.9.12
[2] No benefit outside of removing the referrer information, which can be accomplished without resorting to obnoxious redirects.

We Want YOU (U.S. Federal Employees) to Stand for Digital Freedoms

September 19, 2023 at 15:02

It's that time of the year again! U.S. federal employees and retirees can support the digital freedom movement through the Combined Federal Campaign (CFC).

The Combined Federal Campaign is the world's largest and most successful annual charity campaign for U.S. federal employees and retirees. Last year, 175 members of the CFC community raised over $34,000 for EFF's lawyers, activists, and technologists fighting for digital freedoms online. But in a year when new threats to our rights online keep popping up, we need your support now more than ever.

Giving to EFF through the CFC is easy! Just head over to GiveCFC.org and use our ID 10437. Once there, click DONATE to give via payroll deduction, credit/debit, or an e-check. If you have a renewing pledge, you can increase your support as well! Scan the QR code below to easily make a pledge or go to GiveCFC.org!

CFC logo with "GIVE HAPPY" text and QR code to GiveCFC.org.

This year's campaign theme—GIVE HAPPY—shows that when U.S. federal employees and retirees give together, they make a meaningful difference to countless individuals throughout the world. They ensure that organizations like EFF can continue working towards our goals even during challenging times.

With support from those who pledged through the CFC last year, EFF has:

  • Authored amicus briefs in multiple court cases, leading a federal judge to find that device searches at the U.S. border require a warrant.
  • Forced the San Francisco Board of Supervisors to reverse a decision and stop police from equipping robots with deadly weapons.
  • Made great strides in passing protections for the right to repair your tech, with the combined strength of innovation advocates around the country.
  • Convinced Apple to finally abandon its device-scanning plan and encrypt iCloud storage for the good of all its customers.

Federal employees and retirees have a tremendous impact on the shape of our democracy and the future of civil liberties and human rights online. Support EFF’s work by using our CFC ID 10437 when you make a pledge today!

Today The UK Parliament Undermined The Privacy, Security, And Freedom Of All Internet Users 

By: Joe Mullin
September 19, 2023 at 15:50

The U.K. Parliament has passed the Online Safety Bill (OSB), which says it will make the U.K. “the safest place” in the world to be online. In reality, the OSB will lead to a much more censored, locked-down internet for British users. The bill could empower the government to undermine not just the privacy and security of U.K. residents, but of internet users worldwide.

A Backdoor That Undermines Encryption

A clause of the bill allows Ofcom, the British telecom regulator, to serve a notice requiring tech companies to scan their users–all of them–for child abuse content. This would affect even messages and files that are end-to-end encrypted to protect user privacy. As enacted, the OSB allows the government to force companies to build technology that can scan regardless of encryption–in other words, build a backdoor.

These types of client-side scanning systems amount to “Bugs in Our Pockets,” and a group of leading computer security experts has reached the same conclusion as EFF–they undermine privacy and security for everyone. That’s why EFF has strongly opposed the OSB for years.

It’s a basic human right to have a private conversation. This right is even more important for the most vulnerable people. If the U.K. uses its new powers to scan people’s data, lawmakers will damage the security people need to protect themselves from harassers, data thieves, authoritarian governments, and others. Paradoxically, U.K. lawmakers have created these new risks in the name of online safety. 

The U.K. government has made some recent statements indicating that it actually realizes that getting around end-to-end encryption isn’t compatible with protecting user privacy. But given the text of the law, neither the government’s private statements to tech companies, nor its weak public assurances, are enough to protect the human rights of British people or internet users around the world. 

Censorship and Age-Gating

Online platforms will be expected to remove content that the U.K. government views as inappropriate for children. If they don’t, they’ll face heavy penalties. The problem is, in the U.K. as in the U.S., people do not agree about what type of content is harmful for kids. Putting that decision in the hands of government regulators will lead to politicized censorship decisions. 

The OSB will also lead to harmful age-verification systems. This violates fundamental principles of anonymous and simple access that have existed since the beginning of the Internet. You shouldn’t have to show your ID to get online. Age-gating systems meant to keep out kids invariably lead to adults losing their rights to private speech, and anonymous speech, which is sometimes necessary.

In the coming months, we’ll be watching what type of regulations the U.K. government publishes describing how it will use these new powers to regulate the internet. If the regulators claim their right to require the creation of dangerous backdoors in encrypted services, we expect encrypted messaging services to keep their promises and withdraw from the U.K. if that nation’s government compromises their ability to protect other users. 

This Bill Would Revive The Worst Patents On Software—And Human Genes  

By: Joe Mullin
September 21, 2023 at 13:34

The majority of high-tech patent lawsuits are brought by patent trolls—companies that exist not to provide products or services, but primarily to use patents to threaten others’ work.

Some politicians are proposing to make that bad situation worse. Rather than taking the problem of patent trolling seriously, they want to encourage more bad patents, and make life easier—and more profitable—for the worst patent abusers. 

Take Action

Congress Must Not Bring Back The Worst Computer Patents

The Patent Eligibility Restoration Act (PERA), S. 2140, sponsored by Senators Thom Tillis (R-NC) and Chris Coons (D-DE), would be a huge gift to patent trolls, a few tech firms that aggressively license patents, and patent lawyers. For everyone else, it would be a huge loss. That’s why we’re opposing it, and asking our supporters to speak out as well.

Patent trolling is still a huge, multi-billion dollar problem that’s especially painful for small businesses and everyday internet users. But, in the last decade, we’ve made modest progress placing limits on patent trolling. The Supreme Court’s 2014 decision in Alice v. CLS Bank barred patents that were nothing more than abstract ideas with computer jargon added in. Using the Alice test, federal courts have kicked out a rogue’s gallery of hundreds of the worst patents. 

Under Alice’s clear rules, courts threw out ridiculous patents on “matchmaking,” online picture menus, scavenger hunts, and online photo contests. The nation’s top patent court, the Federal Circuit, actually approved a patent on watching an ad online twice before the Alice rules finally made it clear that patents like that cannot be allowed. Patents on “bingo on a computer”? Gone under Alice. Patents on loyalty programs (on a computer)? Gone. Patents on upselling (with a computer)? All gone.

Alice isn’t perfect, but it has done a good job saving internet users from some of the worst patent claims. At EFF, we have collected stories of people whose careers, hobbies, or small companies were “Saved by Alice.” It’s hard to believe that anyone would want to invite such awful patents back into our legal system—but that’s exactly what PERA does.

PERA’s attempt to roll back progress goes beyond computer technology. For almost 30 years, some biotech and pharmaceutical companies actually applied for, and were granted, patents on naturally occurring human genes. As a consequence, companies were able to monopolize diagnostic tests that relied on naturally occurring genes in order to help predict diseases such as breast cancer, making such testing far more expensive. The ACLU teamed up with doctors to confront this horrific practice, and sued. That lawsuit led to a historic victory in 2013 when the Supreme Court disallowed patents on human genes found in nature.

If PERA passes, it will explicitly overturn that ruling, allowing human genes to be patented once again. 

That’s why we’re going to fight against this bill, just as we fought off a very similar one last year. Put simply: it’s wrong to let anyone patent basic internet use. It hurts innovation, and it hurts free speech. Nor will we stand idly by when threatened with patents on the building blocks of human life—a nightmarish concept that should be relegated to sci-fi shows.

Take Action

Some Things Shouldn't Be Patented

This Bill Destroys The Best Legal Defense Against Bad Patents 

It’s critical that Alice allows patents to be thrown out under Section 101 of the patent law before patent trolls can force their opponents into expensive discovery. This is the most efficient and correct way for courts to throw out patents that never should have been issued in the first place. If a patent can’t pass the Alice test, it’s really not much of an “invention” at all.

But the effectiveness of the Alice test has meant that some patent trolls and IP lawyers aren’t making as much money. That’s why they want to insist that other areas of law should be used to knock out bad patents, like the ones requiring patents to be novel and non-obvious. 

This position is willfully blind to the true business model of patent trolling. The patent trolls know their patents are terrible—that’s why they often don’t want them tested in court at all. Many of the worst patent holders, such as Landmark Technology or some Leigh Rothschild entities, make it a point to never get very far in litigation. The cases rarely get to claim construction (an early step in patent litigation, where a judge decides what the patent claims mean), much less to a full jury trial. Instead, they simply leverage the high cost of litigation. When it’s hard and expensive for defendants to file a motion challenging a patent, the patents often don’t even get properly tested. Trolling companies thus get to use the judicial system for harassment, making their settlement demands cheaper than fighting back. And when a rare defendant does fight back, the troll can simply drop the case.

This Bill Has No Serious Safeguards

The bill eliminates the Alice test and every other judicial limitation on abstract patents that courts have developed over the decades. After ripping down this somewhat effective gate against the worst patents, it replaces it with a safeguard that’s nearly useless.

Page 4 of the bill states that:

“performing dance moves, offering marriage proposals, and the like shall not be eligible for patent coverage, and adding a non-essential reference to a computer by merely stating, for example, ‘do it on a computer’ shall not establish such eligibility.”

The addition of “do it on a computer” patents is an interesting change from last year’s version of the same bill, since that’s a specific phrase we used to critique the bill in our blog post last year.

After Alice, EFF and others rightly celebrated courts’ ability to knock out most “do it on a computer” patents. But “do it on a computer” isn’t language that actually gets used in patents; it’s a description of a whole style of patent. And this bill specifically allows for such patents. It states that any process that “cannot be practically performed without the use of a machine (including a computer)” will be eligible for a patent. 

This language would mean that many of the most ridiculous patents knocked out under Alice in the past decade would survive. They all describe the use of processors, “communications modules,” and other jargon that requires computers. That means patents on an online photo contest, displaying an object online, tracking packages, or making an online menu could once again become part of patent troll portfolios. All would again be available for extorting everyday people and real innovators making actual products.

“To See Your Own Blood, Your Own Genes”

From the 1980s until the 2013 Myriad decision, the U.S. Patent and Trademark Office granted patents on human genomic sequences. If researchers “isolated” the gene—a necessary part of analysis—they could get a patent by describing the isolation, or purification, as a human process, and insist they weren’t getting a patent on the natural world itself.

But this concept of patenting an “isolated” gene was simply a word game, and a distinction without a difference. With the genetic patent in hand, the patent-holder could demand royalty payments from any kind of test or treatment involving that gene. And that’s exactly what Myriad Genetics did when it patented the BRCA1 and BRCA2 gene sequences, which are important indicators of breast and ovarian cancer risk.

Myriad’s patents significantly increased the cost of those tests to U.S. patients. The company even sent some doctors cease and desist letters, saying the doctors could not perform simple tests on their own patients—even looking at the gene sequences without Myriad’s permission would constitute patent infringement. 

This behavior caused pathologists, scientists, and patients to band together with ACLU lawyers and challenge Myriad’s patents. They litigated all the way to the Supreme Court, and won. “A naturally occurring DNA segment is a product of nature and not patent eligible merely because it has been isolated,” the Supreme Court stated in Association for Molecular Pathology v. Myriad Genetics.

A practice like granting and enforcing patents on human genes should truly be left in the dustbin of history. It’s shocking that pro-patent lobbyists have convinced these Senators to introduce legislation seeking to reinstate such patents. Last month, the President of the College of American Pathologists published an op-ed reminding lawmakers and the public about the danger of patenting the human genome, calling gene patents “dangerous to the public welfare.”  

As Lisbeth Ceriani, a breast cancer survivor and a plaintiff in the Myriad case said, “It’s a basic human right to see your own blood, your own genes.”

We can’t allow patents that let internet users be extorted for using the internet to express themselves, or do business. And we won’t allow our bodies to be patented. Tell Congress this bill is going nowhere.

Take Action

Reject Human Gene Patents

Don’t Fall for the Intelligence Community’s Monster of the Week Justifications

September 22, 2023 at 17:37

In the beloved episodic television shows of yesteryear, the antagonists were often “monsters of the week”: villains who would show up for one episode and get vanquished by the heroes just in time for them to fight the new monster in the following episode. Keeping up with the Intelligence Community and law enforcement’s justifications for invasive, secretive, and uncontrollable surveillance powers and authorities is a bit like watching one of these shows. This week, they could say they need it to fight drugs or other cross-border contraband. Next week, they might need it to fight international polluters or revert to the tried-and-true national security justifications. The fight over the December 31, 2023 expiration of Section 702 of the Foreign Intelligence Surveillance Act is no exception to the Monster of the Week phenomenon.

Section 702 is a surveillance authority that allows the National Security Agency to collect communications from all over the world. Although the authority supposedly prohibits targeting people on U.S. soil, people in the United States communicate with people overseas all the time and routinely have their communications collected and stored under this program. This results in a huge pool of “incidentally” collected communications from Americans which the Federal Bureau of Investigation eagerly exploits by searching through without a warrant. These unconstitutional “backdoor” searches have happened millions of times and have continued despite a number of attempts by courts and Congress to rein in the illegal practice.

Take action

TELL congress: End 702 Absent serious reforms

Now, Section 702 is set to expire at the end of December. The Biden administration and intelligence community, eager to renew their embattled and unpopular surveillance powers, are searching for whatever sufficiently important policy concern in the news—no matter how disconnected from Section 702’s original purpose—might convince lawmakers to let them keep all their invasive tools. Justifying the continuation of Section 702 could take the form of vetting immigrants, stopping drug trafficking, or the original and most tried-and-true justification: national security. As National Security Advisor Jake Sullivan wrote in July 2023, “Thanks to intelligence obtained under this authority, the United States has been able to understand and respond to threats posed by the People’s Republic of China, rally the world against Russian atrocities in Ukraine, locate and eliminate terrorists intent on causing harm to America, enable the disruption of fentanyl trafficking, mitigate the Colonial Pipeline ransomware attack, and much more.” Searching for the monster-du-jour that will scare the public into once again ceding their constitutional right to private communications is what the Intelligence Community does, and has done, for decades.

Fentanyl may be the IC’s current nemesis, but the argumentation behind it is weak. As one recent op-ed in The Hill noted, “Commonsense reforms to protect Americans’ privacy would not make the law less effective in addressing international drug trafficking or other foreign threats. To the contrary, it is the administration’s own intransigence on such reforms that has put reauthorization at risk.”

Since even before 2001, citing the need for new surveillance powers in order to secure the homeland has been a nearly foolproof way of silencing dissenters and creating hard-to-counter arguments for enhanced authorities. These surveillance programs are then so shrouded in secrecy that it becomes impossible to know how they’re being used, if they’re effective, or whether they’ve been abused.

With the December deadline looming, we know the White House is feeling the pressure of our campaign to restore the privacy of our communications. No matter what bogeyman they present to justify a clean renewal of Section 702, we have to keep the pressure up. You can use this easy tool to contact your members of Congress and tell them: absent major reforms, let 702 expire!

Take action

TELL congress: End 702 Absent serious reforms

The U.S. Government’s Database of Immigrant DNA Has Hit Scary, Astronomical Proportions

The FBI recently released its proposed budget for 2024, and its request for a massive increase in funding for its DNA database should concern us all. The FBI is asking for an additional $53 million in funding to aid in the collection, organization, and maintenance of its Combined DNA Index System (CODIS) database in the wake of a 2020 Trump Administration rule that requires the Department of Homeland Security to collect DNA from anyone in immigration detention. The database houses the genetic information of over 21 million people, adding an average of 92,000 DNA samples a month in the last year alone–over 10 times the historical sample volume. The FBI’s increased budget request demonstrates that the federal government has, in fact, made good on its projection of collecting over 750,000 new samples annually from immigrant detainees for CODIS. This type of forcible DNA collection and long-term hoarding of genetic identifiers not only erodes civil liberties by exposing individuals to unnecessary and unwarranted government scrutiny, but it also demonstrates the government’s willingness to weaponize biometrics in order to surveil vulnerable communities.

After the Supreme Court’s decision in Maryland v. King (2013), which upheld a Maryland statute to collect DNA from individuals arrested for a violent felony offense, states have rapidly expanded DNA collection to encompass more and more offenses—even when DNA is not implicated in the nature of the offense. For example, in Virginia, the ACLU and other advocates fought against a bill that would have added obstruction of justice and shoplifting as offenses for which DNA could be collected. The federal government’s expansion of DNA collection from all immigrant detainees is the most drastic effort to vacuum up as much genetic information as possible, based on false assumptions linking crime to immigration status despite ample evidence to the contrary.

As we’ve previously cautioned, this DNA collection has serious consequences. Studies have shown that increasing the number of profiles in DNA databases doesn’t solve more crimes. A 2010 RAND report instead stated that the ability of police to solve crimes using DNA is “more strongly related to the number of crime-scene samples than to the number of offender profiles in the database.” Moreover, inclusion in a DNA database increases the likelihood that an innocent person will be implicated in a crime. 

Lastly, this increased DNA collection exacerbates the existing racial disparities in our criminal justice system by disproportionately impacting communities of color. Black and Latino men are already overrepresented in DNA databases. Adding nearly a million new profiles of immigrant detainees annually—who are almost entirely people of color, and the vast majority of whom are Latine—will further skew the 21 million profiles already in CODIS.

We are all at risk when the government increases its infrastructure and capacity for collecting and storing vast quantities of invasive data. With the resources to increase the volume of samples collected, and an ever-broadening scope of when and how law enforcement can collect genetic material from people, we are one step closer to a future in which we all are vulnerable to mass biometric surveillance. 

Digital Rights Updates with EFFector 35.12

September 25, 2023 at 14:25

With so much happening in the digital rights movement, it can be difficult to keep up. But EFF has you covered with our EFFector newsletter, containing a collection of the latest headlines! The latest issue is out now and covers a new update to our Privacy Badger browser extension, the fight to require law enforcement to get a warrant before using a drone to spy on a home, and EFF's victory helping free the law with Public Resource.

Learn more about all of the latest news by reading the full newsletter here, or you can even listen to an audio version of the newsletter below!

Listen on YouTube

EFFector 35.12 | Freeing the Law with Public Resource

Make sure you never miss an issue by signing up by email to receive EFFector as soon as it's posted! Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

EFF at FIFAfrica 2023

September 25, 2023 at 15:42

EFF is excited to be in Dar es Salaam, Tanzania for this year's iteration of the Forum on Internet Freedom in Africa (FIFAfrica), organized by CIPESA (Collaboration on International ICT Policy for East and Southern Africa) and taking place 27-29 September 2023.

FIFAfrica is a landmark event in the region that convenes an array of stakeholders from across internet governance and online rights to discuss and collaborate on opportunities for advancing privacy, protecting free expression, and enhancing the free flow of information online. FIFAfrica also offers a space to identify new and important digital rights issues, as well as to explore avenues to engage with these debates across national, regional, and global spaces.

We hope you have an opportunity to connect with us at the panels listed below. In addition to these, EFF will be attending many other events at FIFAfrica. We look forward to meeting you there!

THURSDAY 28 SEPTEMBER 

Combatting Disinformation for Democracy 

2pm to 3:30pm local time 
Location: Hyatt Hotel - Kibo 

Hosted by: CIPESA

Speakers

  • Paige Collings, Senior Speech and Privacy Activist, Electronic Frontier Foundation 
  • Nompilo Simanje, Africa Advocacy and Partnerships Lead, International Press Institute 
  • Obioma Okonkwo, Head, Legal Department, Media Rights Agenda
  • Daniel O’Maley, Senior Digital Governance Specialist, Center for International Media Assistance 

In an age of falsehoods, facts, and freedoms marked by the rapid spread of information and the proliferation of digital platforms, the battle against disinformation has never been more critical. This session brings together experts and practitioners at the forefront of this fight, exploring the pivotal roles that media, fact checkers, and technology play in upholding truth and combating the spread of false narratives. 

This panel will delve into the multifaceted challenges posed by disinformation campaigns, examining their impact on societies, politics, and public discourse. Through an engaging discussion, the session will spotlight innovative strategies, cutting-edge technologies, and collaborative initiatives employed by media organizations, tech companies, and civil society to safeguard the integrity of information.

FRIDAY 29 SEPTEMBER

Platform Accountability in Africa: Content Moderation and Political Transitions

11am to 12:30pm local time
Location: Hyatt Hotel - Kibo 

Hosted by: Meta Oversight Board, CIPESA, Open Society Foundations 

Speakers

  • Paige Collings, Senior Speech and Privacy Activist, Electronic Frontier Foundation 
  • Nerima Wako, Executive Director, SIASA PLACE
  • Abigail Bridgman, Deputy Vice President, Content Review and Policy, Meta Oversight Board 
  • Afia Asantewaa Asare-Kyei, Member, Meta Oversight Board

Social media platforms are often criticized for failing to address significant and seemingly preventable harms stemming from online content. This is especially true during volatile political transitions, where disinformation, incitement to violence, and hate speech on the basis of gender, religion, ethnicity, and other characteristics are highly associated with increased real-life harms.

This session will discuss best practices for combating harmful online content through the lens of the most urgent and credible threats to political transitions on the African continent. With critical general, presidential, and legislative elections fast approaching, as well as the looming threat of violent political transitions, the panelists will highlight current trends of online content, the impact of harmful content, and chart a path forward for the different stakeholders. The session will also assess the various roles that different institutions, stakeholders, and experts can play to strike the balance between addressing harms and respecting the human rights of users under such a context.

EFF, ACLU and 59 Other Organizations Demand Congress Protect Digital Privacy and Free Speech

September 26, 2023 at 16:50

Earlier this week, EFF joined the ACLU and 59 partner organizations to send a letter to Senate Majority Leader Chuck Schumer urging the Senate to reject the STOP CSAM Act. This bill threatens encrypted communications and free speech online, and would actively harm LGBTQ+ people, people seeking reproductive care, and many others. EFF has consistently opposed this legislation. This bill has unacceptable consequences for free speech, privacy, and security that will affect how we connect, communicate, and organize.

TAKE ACTION

TELL CONGRESS NOT TO OUTLAW ENCRYPTED APPS

The STOP CSAM Act, as amended, would lead to censorship of First Amendment protected speech, including speech about reproductive health, sexual orientation and gender identity, and personal experiences related to gender, sex, and sexuality. Even today, without this bill, platforms regularly remove content that has vague ties to sex or sexuality for fear of liability. This would only increase if STOP CSAM incentivized apps and websites to exercise a heavier hand at content moderation.

If enacted, the STOP CSAM Act will also make it more difficult to communicate using end-to-end encryption. End-to-end encrypted communications cannot be read by anyone but the sender or recipient — that means authoritarian governments, malicious third parties, and the platforms themselves can’t read user messages. Offering encrypted services could open apps and websites up to liability, because a court could find that end-to-end encryption services are likely to be used for CSAM, and that merely offering them is reckless.

Congress should not pass this law, which will undermine security and free speech online. Existing law already requires online service providers who have actual knowledge of CSAM on their platforms to report that content to the National Center for Missing and Exploited Children (NCMEC), a quasi-government entity that works closely with law enforcement agencies. Congress and the FTC have many tools already at their disposal to tackle CSAM, some of which are not used.

EFF's Comment to the Meta Oversight Board on Polish Anti-Trans Facebook Post 

September 27, 2023 at 11:33

EFF recently submitted comments in response to the Meta Oversight Board’s request for input on a Facebook post in Polish from April 2023 that targeted trans people. The Oversight Board was created by Meta in 2020 as an appellate body and has 22 members from around the world who review contested content moderation decisions made by the platform.  

Our comments address how Facebook’s automated systems failed to prioritize content for human review. From our observations—and the research of many within the digital rights community—this is a common deficiency made worse during the pandemic, when Meta decreased the number of workers moderating content on its platforms. In this instance, the content was eventually sent for human review and was still assessed to be non-violating and therefore not escalated further. Facebook kept the content online despite 11 different users reporting the content 12 times and only removed the content once the Oversight Board decided to take the case for review. 

As EFF has demonstrated, Meta has at times over-removed legal LGBTQ+ related content whilst simultaneously keeping content online that depicts hate speech toward the LGBTQ+ community. This is often because the content—as in this specific case—is not an explicit depiction of such hate speech, but rather a message that is embedded in a wider context that automated content moderation tools and inadequately trained human moderators are simply not equipped to consider. These tools do not have the ability to recognize nuance or the context of statements, and human reviewers are not provided the training to remove content that depicts hate speech beyond a basic slur. 

This incident serves as part of the growing body of evidence that Facebook’s systems are inadequate in detecting seriously harmful content, particularly that which targets marginalized and vulnerable communities. Our submission looks at the various reasons for these shortcomings and makes the case that Facebook should have removed the content—and should keep it offline.

Read the full submission in the PDF below.

How To Turn Off Google’s “Privacy Sandbox” Ad Tracking—and Why You Should

September 28, 2023 at 13:42

Google has rolled out "Privacy Sandbox," a Chrome feature first announced back in 2019 that, among other things, exchanges third-party cookies—the most common form of tracking technology—for what the company is now calling "Topics." Topics is a response to pushback against Google’s proposed Federated Learning of Cohorts (FLoC), which we called "a terrible idea" because it gave Google even more control over advertising in its browser while not truly protecting user privacy. While there have been some changes to how this works since 2019, Topics is still tracking your internet use for Google’s behavioral advertising.

If you use Chrome, you can disable this feature through three confusing settings.

With the version of the Chrome browser released in September 2023, Google tracks your web browsing history and generates a list of advertising "topics" based on the websites you visit. This works as you might expect. At launch there are almost 500 advertising categories—like "Student Loans & College Financing," "Parenting," or "Undergarments"—that you get dumped into based on whatever you're reading about online. A site that supports Privacy Sandbox will ask Chrome what sorts of things you're supposedly into, and then display an ad accordingly.
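
That query happens in JavaScript. Below is a sketch of how a participating site might ask Chrome for your topics; the response shape is abbreviated here, and the call only exists in browsers that ship the Topics API:

```ts
// Ask the browser which advertising topics it has assigned to this user.
// Each entry's `topic` field is a numeric ID into Google's public taxonomy.
async function fetchAdTopics(): Promise<void> {
  if (!("browsingTopics" in document)) {
    return; // Firefox, Safari, and opted-out Chrome profiles land here.
  }
  const topics = await (document as any).browsingTopics();
  for (const t of topics) {
    console.log(`topic ${t.topic} (taxonomy v${t.taxonomyVersion})`);
  }
}

fetchAdTopics();
```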

The idea is that instead of the dozens of third-party cookies placed on websites by different advertisers and tracking companies, Google itself will track your interests in the browser itself, controlling even more of the advertising ecosystem than it already does. Google calls this “enhanced ad privacy,” perhaps leaning into the idea that starting in 2024 they plan to “phase out” the third-party cookies that many advertisers currently use to track people. But the company will still gobble up your browsing habits to serve you ads, preserving its bottom line in a world where competition on privacy is pushing it to phase out third-party cookies. 

Google plans to test Privacy Sandbox throughout 2024, which means that for the next year or so, third-party cookies will continue to collect and share your data in Chrome.

The new Topics improves somewhat over the 2019 FLoC. It does not use the FLoC ID, a number that many worried would be used to fingerprint you. The ad-targeting topics are all public on GitHub, hopefully avoiding any clearly sensitive categories such as race, religion, or sexual orientation. Chrome's ad privacy controls, which we detail below, allow you to see what sorts of interest categories Chrome puts you in, and remove any topics you don't want to see ads for. There's also a simple means to opt out, which FLoC never really had during testing.

Other browsers, like Firefox and Safari, baked in privacy protections from third-party cookies in 2019 and 2020, respectively. Neither of those browsers has anything like Privacy Sandbox, which makes them better options if you'd prefer more privacy. 

Google referring to any of this as “privacy” is misleading. Even if it's better than third-party cookies, the Privacy Sandbox is still tracking; it's just done by one company instead of dozens. Instead of waffling between different tracking methods, even with mild improvements, we should work towards a world without behavioral ads.

But if you're sticking to Chrome, you can at least turn these features off.

How to Disable Privacy Sandbox

Screenshot of Chrome browser with “Enhanced ad privacy in Chrome” page.

Depending on when you last updated Chrome, you may have already received a pop-up asking you to agree to “Enhanced ad privacy in Chrome.” If you just clicked the big blue button that said “Got it” to make the pop-up go away, you opted yourself in. But you can still get back to the opt-out page easily enough by clicking the three-dot icon (⋮) > Settings > Privacy & Security > Ad Privacy. Here you'll find three different settings:

  • Ad topics: This is the fundamental component of Privacy Sandbox that generates a list of your interests based on the websites you visit. If you leave this enabled, you'll eventually get a list of all your interests, which are used for ads, as well as the ability to block individual topics. The topics roll over every four weeks (up from weekly in the FLoC proposal) and random ones will be thrown in for good measure. You can disable this entirely by setting the toggle to "Off."
  • Site-suggested ads: This confusingly named toggle is what allows advertisers to do what’s called "remarketing" or "retargeting," also known as “after I buy a sofa, every website on the internet advertises that same sofa to me.” With this feature, site one gives information to your Chrome instance (like “this person loves sofas”) and site two, which runs ads, can interact with Chrome such that a sofa ad will be shown, even without site two learning that you love sofas. Disable this by setting the toggle to "Off."
  • Ad measurement: This allows advertisers to track ad performance by storing data in your browser that's then shared with other sites. For example, if you see an ad for a pair of shoes, the site would get information about the time of day, whether the ad was clicked, and where it was displayed. Disable this by setting the toggle to "Off."

If you're on Chrome, Firefox, Edge, or Opera, you should also take your privacy protections a step further with our own Privacy Badger, a browser extension that blocks third-party trackers that use cookies, fingerprinting, and other sneaky methods. On Chrome, Privacy Badger also disables the Topics API by default.

EFF to D.C. Circuit: Animal Rights Activists Shouldn’t Be Censored on Government Social Media Pages Because Agency Disagrees With Their Viewpoint

By: Sophia Cope
September 28, 2023 at 16:16

Intern Muhammad Essa contributed to this post.

EFF, along with the Foundation for Individual Rights and Expression (FIRE), filed a brief in the U.S. Court of Appeals for the D.C. Circuit urging the court to reverse a lower court ruling that upheld the censorship of public comments on a government agency’s social media pages. The district court’s decision is problematic because it undermines our right to freely express opinions on issues of public importance using a modern and accessible way to communicate with government representatives.

People for the Ethical Treatment of Animals (PETA) sued the National Institutes of Health (NIH), arguing that NIH blocks their comments against animal testing in scientific research on the agency’s Facebook and Instagram pages, thus violating the First Amendment. NIH provides funding for research that involves testing on animals, from rodents to primates.

NIH claims to apply a general rule prohibiting public comments that are “off topic” to the agency’s social media posts—yet the agency implements this rule by employing keyword filters that include words such as cruelty, revolting, tormenting, torture, hurt, kill, and stop. These words are commonly found in comments that express a viewpoint that is against animal testing and sympathetic to animal rights.
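
To see why a list like that skews against one viewpoint, consider this toy recreation of a context-blind keyword filter. This is our illustration, not NIH’s actual system:

```ts
// A context-blind blocklist in the style described in the lawsuit.
const blockedWords = [
  "cruelty", "revolting", "tormenting", "torture", "hurt", "kill", "stop",
];

function isHidden(comment: string): boolean {
  const text = comment.toLowerCase();
  return blockedWords.some((word) => text.includes(word));
}

// Comments critical of animal testing trip the filter...
console.log(isHidden("Please stop funding animal cruelty.")); // true
// ...while praise, equally "off topic," sails through untouched.
console.log(isHidden("Great agency, keep up the good work!")); // false
```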

First Amendment law makes it clear that when a government agency opens a forum for public participation, such as the interactive spaces of the agency’s social media pages, it is prohibited from censoring a particular viewpoint in that forum. Any speech restrictions that it may apply must be viewpoint-neutral, meaning that the restrictions should apply equally to all viewpoints related to a topic, not just to the viewpoint that the agency disagrees with.

EFF’s brief argues that courts must approach with skepticism a government agency’s claim that its “off topic” speech restriction is viewpoint-neutral and only intended to exclude irrelevant comments. How such a rule is implemented can reveal that it is in fact a guise for unconstitutional viewpoint discrimination. That is the case here, and the district court erred in ruling for the government.

For example, EFF’s brief argues that NIH’s automated keyword filters are imprecise—they are incapable of accurately implementing an “off topic” rule because they are incapable of understanding context and nuance, which is necessary when comparing a comment to a post. Also, NIH’s keyword filters and the agency’s manual enforcement of the “off topic” rule are highly underinclusive—that is, other people's comments that are “off topic” to a post are often allowed to remain on the agency’s social media pages. Yet PETA’s comments against animal testing are reliably censored.

Imprecise and underinclusive enforcement of the “off topic” rule suggests that NIH’s rule is not viewpoint-neutral but is really a means to block PETA activists from engaging with the agency online.

EFF’s brief urges the D.C. Circuit to reject the district court’s erroneous holding and rule in favor of the plaintiffs. This would protect everyone’s right to express their opinions freely online. The free exchange of opinions informs public policy and is a crucial characteristic of a democratic society. A genuine representative government must not be afraid of public criticism.

The Federal Government’s Privacy Watchdog Concedes: 702 Must Change

September 28, 2023 at 17:41

The Privacy and Civil Liberties Oversight Board (PCLOB) has released its much-anticipated report on Section 702, a legal authority that allows the government to collect a massive amount of digital communications around the world and in the U.S. The PCLOB agreed with EFF and organizations across the political spectrum that the program requires significant reforms if it is to be renewed before its December 31, 2023 expiration. Of course, EFF believes that Congress should go further–including letting the program expire–in order to restore the privacy being denied to anyone whose communications cross international boundaries. 

PCLOB is an organization within the federal government appointed to monitor the impact of national security and law enforcement programs and techniques on civil liberties and privacy. Despite this mandate, the board has a history of tipping the scales in favor of the privacy-annihilating status quo. This history is exactly why the recommendations in their new report are such a big deal: the report says Congress should require individualized authorization from the Foreign Intelligence Surveillance Court (FISC) for any searches of 702 databases for U.S. persons. Oversight, even by the secretive FISC, would be a departure from the current system, in which the Federal Bureau of Investigation can, without warrant or oversight, search for communications to or from anyone of the millions of people in the United States whose communications have been vacuumed up by the mass surveillance program.

The report also recommends a permanent end to the legal authority that allows “abouts” collection, a practice that lets the government look at digital communications between two “non-targets”–people who are not the subject of the investigation–as long as they are talking “about” a specific individual. The Intelligence Community voluntarily ceased this collection after increasing skepticism about its legality from the FISC. We agree with the PCLOB that it’s time to put the final nail in the coffin of this unconstitutional mass collection.

Section 702 allows the National Security Agency to collect communications from all over the world. Although the authority supposedly prohibits targeting people on U.S. soil, people in the United States communicate with people overseas all the time and routinely have their communications collected and stored under this program. This results in a huge pool of what the government calls “incidentally” collected communications from Americans which the FBI and other federal law enforcement organizations eagerly exploit by searching without a warrant. These unconstitutional “backdoor” searches have happened millions of times and have continued despite a number of attempts by courts and Congress to rein in the illegal practice.

Along with over a dozen organizations, including the ACLU, Center for Democracy & Technology, Demand Progress, Freedom of the Press Foundation, Project on Government Oversight, and the Brennan Center, EFF lent its voice to the request that the following reforms be the bare minimum precondition for any reauthorization of Section 702:

  • Requiring the government to obtain a warrant before searching the content of Americans’ communications collected under intelligence authorities;
  • Establishing legislative safeguards for surveillance affecting Americans that is conducted overseas under Executive Order 12333–an authority that raises many of the same concerns as Section 702, as previously noted by PCLOB members;
  • Closing the data broker loophole, through which intelligence and law enforcement agencies purchase Americans’ sensitive location, internet, and other data without any legal process or accountability;
  • Bolstering judicial review in FISA-related proceedings, including by shoring up the government’s obligation to give notice when information derived from FISA is used against a person accused of a crime; and
  • Codifying reasonable limits on the scope of intelligence surveillance.

Use this handy tool to tell your elected officials: No reauthorization of 702 without drastic reform:

Take action

TELL congress: End 702 Absent serious reforms

Get Real, Congress: Censoring Search Results or Recommendations Is Still Censorship

By: Jason Kelley
September 28, 2023 at 18:29

Updated October 20, 2023: Removed two sentences for clarity. 

Are you a young person fighting back against bad bills like KOSA? Become an EFF member at a new, discounted Neon membership level specifically for you--stickers included! 

For the past two years, Congress has been trying to revise the Kids Online Safety Act (KOSA) to address criticisms from EFF, human and digital rights organizations, LGBTQ groups, and others, that the core provisions of the bill will censor the internet for everyone and harm young people. All of those changes fail to solve KOSA’s inherent censorship problem: As long as the “duty of care” remains in the bill, it will still force platforms to censor perfectly legal content. (You can read our analyses here and here.)

Despite never addressing this central problem, some members of Congress are convinced that a new change will avoid censoring the internet: KOSA’s liability is now theoretically triggered only for content that is recommended to users under 18, rather than content that they specifically search for. But that’s still censorship—and it fundamentally misunderstands how search works online. 

Congress should be smart enough to recognize this bait-and-switch fails to solve KOSA’s many faults

As a reminder, under KOSA, a platform would be liable for not “acting in the best interests of a [minor] user.” To do this, a platform would need to “tak[e] reasonable measures in its design and operation of products and services to prevent and mitigate” a long list of societal ills, including anxiety, depression, eating disorders, substance use disorders, physical violence, online bullying and harassment, sexual exploitation and abuse, and suicidal behaviors. As we have said, this will be used to censor what young people and adults can see on these platforms. The bill’s coauthors agree, writing that KOSA “will make platforms legally responsible for preventing and mitigating harms to young people online, such as content promoting suicide, eating disorders, substance abuse, bullying, and sexual exploitation.”

Our concern, and the concern of others, is that this bill will be used to censor legal information and restrict minors’ ability to access it, while adding age verification requirements that will push adults off the platforms as well. Additionally, enforcement provisions in KOSA give power to state attorneys general to decide what is harmful to minors, a recipe for disaster that will exacerbate efforts already underway to restrict access to information online (and offline). The result is that platforms will likely feel pressured to remove enormous amounts of information to protect themselves from KOSA’s crushing liability—even if that information is not harmful.

The ‘Limitation’ section of the bill is intended to clarify that KOSA creates liability only for content that the platform recommends. In our reading, this is meant to refer to the content that a platform shows a user that doesn’t come from an account the user follows, is not content the user searches for, and is not content that the user deliberately visits (such as by clicking a URL). In full, the ‘Limitation’ section states that the law is not meant to prevent or preclude “any minor from deliberately and independently searching for, or specifically requesting, content,” nor should it prevent the “platform or individuals on the platform from providing resources for the prevention or mitigation of suicidal behaviors, substance use, and other harms, including evidence-informed information and clinical resources.” 

In layman’s terms, minors will supposedly still have the freedom to follow accounts, search for, and request any type of content, but platforms won’t have the freedom to show some types of content to them. Again, that fundamentally misunderstands how social media works—and it’s still censorship.

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

Courts Have Agreed: Recommendations are Protected

If, as the bill’s authors write, they want to hold platforms accountable for “knowingly driving toxic, addicting, and dangerous content” to young people, why exempt search—which can also surface toxic, addicting, or dangerous content? We think this section was added for two reasons.

First, members of Congress have attacked social media platforms’ use of automated tools to present content for years, claiming that it causes any number of issues ranging from political strife to mental health problems. The evidence supporting those claims is unclear (and the reverse may be true). 

Second, and perhaps more importantly, the authors of the bill likely believe pinning liability on recommendations will allow them to square a circle and get away with censorship while complying with the First Amendment. It will not.

Platforms’ ability to “filter, screen, allow, or disallow content;” “pick [and] choose” content; and make decisions about how to “display,” “organize,” or “reorganize” content is protected by 47 U.S.C. § 230 (“Section 230”), and the First Amendment. (We have written about this in various briefs, including this one.) This “Limitation” in KOSA doesn’t make the bill any less censorious. 

Search Results Are Recommendations

Practically speaking, there is also no clear distinction between “recommendations” and “search results.” The coauthors of KOSA seem to think that content which is shown as a result of a search is not a recommendation by the platform. But of course it is. Accuracy and relevance in search results are algorithmically generated, and any modern search method uses an automated process to determine the search results and the order in which they are presented, which it then recommends to the user. 
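
A toy ranking function makes the point concrete: the platform’s own code decides which results to surface and in what order, which is a recommendation in all but name. (This sketch is ours; real ranking systems are far more complex.)

```ts
interface Post {
  id: number;
  text: string;
}

// Score posts by how many query terms they contain, then order by score.
// Both the scoring and the ordering are the platform's own choices.
function search(query: string, posts: Post[]): Post[] {
  const terms = query.toLowerCase().split(/\s+/);
  return posts
    .map((post) => ({
      post,
      score: terms.filter((t) => post.text.toLowerCase().includes(t)).length,
    }))
    .filter((result) => result.score > 0)
    .sort((a, b) => b.score - a.score)
    .map((result) => result.post);
}
```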

KOSA’s authors also assume, incorrectly, that content on social media can easily be organized, tagged, or described in the first place, such that it can be shown when someone searches for it, but not otherwise. But content moderation at infinite scale will always fail, in part because whether content fits into a specific bucket is often subjective in the first place.

The coauthors of KOSA seem to think that content which is shown as a result of a search is not a recommendation by the platform. But of course it is.

For example: let’s assume that using KOSA, an attorney general in a state has made it clear that a platform that recommends information related to transgender healthcare will be sued for increasing the risk of suicide in young people. (Because trans people are at a higher risk of suicide, this is one of many ways that we expect an attorney general could torture the facts to censor content—by claiming that correlation is causation.) 

If a young person in that state searches social media for “transgender healthcare,” does this mean that the platform can or cannot show them any content about “transgender healthcare” as a result? How can a platform know which content is about transgender healthcare, much less whether the content matches the attorney general’s views on the subject, or whether it has to abide by that interpretation in search results? What if the user searches for “banned healthcare”? What if they search for “trans controversy”? (Most people don’t search for the exact name of the piece of content they want to find, and most pieces of content on social media aren’t “named” at all.)

In this example, and in an enormous number of other cases, platforms can’t know in advance what content a person is searching for. Rather than risk showing something controversial that the person did not intend to find, they will remove that content entirely, from recommendations as well as search results. If liability exists for showing it, platforms will remove users’ ability to access all content related to a dangerous topic rather than risk showing it in the occasional instance when they can determine, for certain, that it is what the user is looking for. This blunt response will not only harm children who need access to information, but also adults who may seek the same content online.

“Nerd Harder” to Remove Content Will Never Work

Finally, as we have written before, it is impossible for platforms to know what types of content they would be liable for recommending (or showing in search results) in the first place. Because there is no definition of harmful or depressing content that doesn’t include a vast amount of protected expression, almost any content could fit into the categories that platforms would have to censor. This would include truthful news about what’s going on in the world, such as wars, gun violence, and climate change.

This Limitation section will have no meaningful effect on the censorial nature of the law. If KOSA passes, the only real option for platforms would be to institute age verification and ban minors entirely, or to remove any ‘recommendations’ and ‘search’ functions almost entirely for minors. As we’ve said repeatedly, these efforts will also impact adult users who either lack the ability to prove they are not minors or are deterred from doing so. Most smaller platforms would be pressured to ban minors entirely, while larger ones, with more money for content moderation and development, would likely block them from finding enormous swathes of content unless they have the exact URL to locate it. In that way, KOSA’s censorship would further entrench the dominant social media platforms.

Congress should be smart enough to recognize that this bait-and-switch fails to solve KOSA’s many faults. We urge anyone who cares about free speech and privacy online to send a message to Congress voicing your opposition.

TAKE ACTION

TELL CONGRESS YOU WON'T ACCEPT INTERNET CENSORSHIP

Are you a young person fighting back against bad bills like KOSA? Become an EFF member at a new, discounted Neon membership level specifically for you, stickers included!

Watch EFF's Talks from DEF CON 31

28 September 2023 at 19:16

EFF had a blast at DEF CON 31! Thank you to everyone who came and supported EFF at the membership booth, participated in our contests, and checked out our various talks. We had a lot of things going on this year, and it was great to see so many new and familiar faces.

This year was our biggest DEF CON yet, with over 900 attendees starting or renewing an EFF membership at the conference. Thank you! Your support is the reason EFF can push for initiatives like protecting encrypted messaging, fighting back against illegal surveillance, and defending your right to tinker and hack the devices you own. Of course if you missed us at DEF CON, you can still become an EFF member and grab some new gear when you make a donation today!

Now you can catch up on the EFF talks from DEF CON 31! Below is a playlist of the talks EFF participated in, covering topics from digital surveillance to the world's dumbest cyber mercenaries to the UN Cybercrime Treaty, and more. Check them out here:

Watch EFF Talks from DEF CON 31

Thank you to everyone in the infosec community who supports our work. DEF CON 32 will come sooner than we all expect, so hopefully we'll see you there next year!

The Growing Threat of Cybercrime Law Abuse: LGBTQ+ Rights in MENA and the UN Cybercrime Draft Convention

This is Part II of a series examining the proposed UN Cybercrime Treaty in the context of LGBTQ+ communities. Part I looks at the draft Convention’s potential implications for LGBTQ+ rights. Part II provides a closer look at how cybercrime laws might specifically impact the LGBTQ+ community and activists in the Middle East and North Africa (MENA) region.

In the digital age, the rights of the LGBTQ+ community in the Middle East and North Africa (MENA) are gravely threatened by expansive cybercrime and surveillance legislation. This reality leads to systemic suppression of LGBTQ+ identities, compelling individuals to censor themselves for fear of severe reprisal. This looming threat becomes even more pronounced in countries like Iran, where same-sex conduct is punishable by death, and Egypt, where merely raising a rainbow flag can lead to being arrested and tortured.

Enter the proposed UN Cybercrime Convention. If ratified in its present state, the convention might not only bolster certain countries' domestic surveillance powers to probe actions that some nations mislabel as crimes, but it could also strengthen and validate international collaboration grounded in these powers. Such a UN endorsement could establish a perilous precedent, authorizing surveillance measures for acts that are in stark contradiction with international human rights law. Even more concerning, it might tempt certain countries to formulate or increase their restrictive criminal laws, eager to tap into the broader pool of cross-border surveillance cooperation that the proposed convention offers. 

The draft convention, in Article 35, permits each country to define its own crimes under domestic laws when requesting assistance from other nations in cross-border policing and evidence collection. In certain countries, many of these criminal laws might be based on subjective moral judgments that suppress what is considered free expression in other nations, rather than adhering to universally accepted standards.

Indeed, international cooperation is permissible for crimes that carry a penalty of four years of imprisonment or more; there's a concerning move afoot to suggest reducing this threshold to merely three years. This is applicable whether the alleged offense is cyber or not. Such provisions could result in heightened cross-border monitoring and potential repercussions for individuals, leading to torture or even the death penalty in some jurisdictions. 
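
To see how low this bar is, consider a simplified model of the eligibility check, sketched in Python below. This is our schematic reading of the draft, not its actual legal machinery, and the offense shown is hypothetical. The point is what the check consults: the penalty a state chose to attach, never the conduct itself.

    # A schematic model of the draft convention's cooperation threshold
    # (our reading of the draft, not its actual legal machinery).
    def cooperation_available(offense: dict, threshold_years: int = 4) -> bool:
        # Eligibility turns only on the penalty the requesting state
        # attached, not on whether the conduct is protected expression.
        return offense["max_penalty_years"] >= threshold_years

    # A hypothetical offense: conduct protected under international
    # human rights law, criminalized domestically by a requesting state.
    offense = {"conduct": "sharing LGBTQ+ content online", "max_penalty_years": 3}

    print(cooperation_available(offense))                     # False under the 4-year rule
    print(cooperation_available(offense, threshold_years=3))  # True if lowered to 3 years

Lowering the threshold from four years to three would quietly sweep a whole additional tier of domestic crimes, including many of the morality offenses described below, into the pool eligible for cross-border assistance.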

While some countries may believe they can sidestep these pitfalls by not collaborating with countries that have controversial laws, this confidence may be misplaced. The draft treaty allows a country to refuse a request if the activity in question is not a crime in its domestic regime (the principle of "dual criminality"). However, given the current strain on the mutual legal assistance treaty (MLAT) system, there's an increasing likelihood that requests, even from countries with contentious laws, could slip through the checks. This opens the door for nations to inadvertently assist in operations that might contradict global human rights norms. And where countries do share the same subjective values and problematically criminalize the same conduct, this draft treaty seemingly provides a justification for their cooperation.

One of the more recently introduced pieces of legislation that exemplifies these issues is the Cybercrime Law of 2023 in Jordan. Introduced as part of King Abdullah II’s modernization reforms to increase political participation across Jordan, this law was issued hastily and without sufficient examination of its legal aspects, social implications, and impact on human rights. In addition to this new law, the pre-existing cybercrime law in Jordan has already been used against LGBTQ+ people, and this new law expands its capacity to do so. This law, with its overly broad and vaguely defined terms, will severely restrict individual human rights across that country and will become a tool for prosecuting innocent individuals for their online speech. 

Article 13 of the Jordan law expansively criminalizes a wide set of actions tied to online content branded as “pornographic,” from its creation to distribution. The ambiguity in defining what is pornographic could inadvertently suppress content that merely expresses various sexualities, mistakenly deeming it inappropriate. This goes beyond regulating explicit material; it can suppress genuine expressions of identity. The penalty for such actions is no less than six months of imprisonment.

Meanwhile, the nebulous wording in Article 14 of Jordan's law—terms like “expose public morals,” “debauchery,” and “seduction”—is equally concerning. Such vague language is ripe for misuse, potentially curbing LGBTQ+ content by erroneously associating diverse sexual orientations with immorality. Both articles, in their current form, cast shadows on free expression and are stark reminders that such provisions can lead to over-policing of online content that is not harmful at all. During debates on the bill in the Jordanian Parliament, some MPs claimed that the new cybercrime law could be used to criminalize LGBTQ+ individuals and content online. Deputy Leader of the Opposition Saleh al Armouti went further and claimed that “Jordan will become a big jail.”

Additionally, the law imposes restrictions on encryption and anonymity in digital communications, preventing individuals from safeguarding their rights to freedom of expression and privacy. Article 12 of the Cybercrime Law prohibits the use of Virtual Private Networks (VPNs) and other proxies, with at least six months of imprisonment or a fine for violations.

This will force people in Jordan to choose between engaging in free online expression or keeping their personal identity private. More specifically, this will negatively impact LGBTQ+ people and human rights defenders in Jordan who particularly rely on VPNs and anonymity to protect themselves online. The impact of Article 12 is exacerbated by the fact that there is no comprehensive data privacy legislation in Jordan to protect people’s rights during cyber attacks and data breaches.  

This is not the first time Jordan has limited access to information and content online. In December 2022, Jordanian authorities blocked TikTok to prevent the dissemination of live updates and information during the workers’ protests in the country's south, and authorities there had previously blocked Clubhouse as well.

This crackdown on free speech has particularly impacted journalists, as seen in the recent arrest of Jordanian journalist Heba Abu Taha for criticizing Jordan’s King over his connections with Israel. Given that online platforms like TikTok and Twitter are essential for activists, organizers, journalists, and everyday people around the world to speak truth to power and fight for social justice, the restrictions placed on free speech by Jordan’s new Cybercrime Law will have a detrimental impact on political activism and community building across Jordan.

People across Jordan have protested the law and the European Union has expressed concern about how the law could limit freedom of expression online and offline. In August, EFF and 18 other civil society organizations wrote to the King of Jordan, calling for the rejection of the country’s draft cybercrime legislation. With the law now in effect, we urge Jordan to repeal the Cybercrime Law 2023.

Jordan’s Cybercrime Law has been said to be a “true copy” of the United Arab Emirates (UAE) Federal Decree Law No. 34 of 2021 on Combatting Rumors and Cybercrimes. This law replaced its predecessor, which had been used to stifle expression critical of the government or its policies—and was used to sentence human rights defender Ahmed Mansoor to 10 years in prison. 

The UAE’s new cybercrime law further restricts the already heavily-monitored online space and makes it harder for ordinary citizens, as well as journalists and activists, to share information online. More specifically, Article 22 mandates prison sentences of between three and 15 years for those who use the internet to share “information not authorized for publishing or circulating liable to harm state interests or damage its reputation, stature, or status.” 

In September 2022, Tunisia passed its new cybercrime law in Decree-Law No. 54 on “combating offenses relating to information and communication systems.” The wide-ranging decree has been used to stifle opposition free speech, and mandates a five-year prison sentence and a fine for the dissemination of “false news” or information that harms “public security.” In the year since Decree-Law 54 was enacted, authorities in Tunisia have prosecuted media outlets and individuals for their opposition to government policies or officials. 

The first criminal investigation under Decree-Law 54 saw the arrest of student Ahmed Hamada in October 2022 for operating a Facebook page that reported on clashes between law enforcement and residents of a neighborhood in Tunisia. 

Similar tactics are being used in Egypt, where the 2018 cybercrime law, Law No. 175/2018, contains broad and vague provisions to silence dissent, restrict privacy rights, and target LGBTQ+ individuals. More specifically, Articles 25 and 26 have been used by the authorities to crack down on content that allegedly violates “family values.”

Since its enactment, these provisions have also been used to target LGBTQ+ individuals across Egypt, particularly regarding the publication or sending of pornography under Article 8, as well as illegal access to an information network under Article 3. For example, in March 2022 a court in Egypt charged singers Omar Kamal and Hamo Beeka with “violating family values” for dancing and singing in a video uploaded to YouTube. In another example, police have used cybercrime laws to prosecute LGBTQ+ individuals for using dating apps such as Grindr.

And in Saudi Arabia, national authorities have used cybercrime regulations and counterterrorism legislation to prosecute online activism and stifle dissenting opinions. Between 2011 and 2015, at least 39 individuals were jailed under the pretense of counterterrorism for expressing themselves online—for composing a tweet, liking a Facebook post, or writing a blog post. And while Saudi Arabia has no specific law concerning gender identity and sexual orientation, authorities have used the 2007 Anti-Cyber Crime Law to criminalize online content and activity that is considered to impinge on “public order, religious values, public morals, and privacy.” 

These provisions have been used to prosecute individuals for peaceful actions, particularly since the Arab Spring in 2011. More recently, in August 2022, Salma al-Shehab was sentenced to 34 years in prison with a subsequent 34-year travel ban for her alleged “crime” of sharing content in support of prisoners of conscience and women human rights defenders.

These cybercrime laws demonstrate that if the proposed UN Cybercrime Convention is ratified in its current form with its broad scope, it would authorize domestic surveillance for the investigation of any offenses, such as those in Articles 12, 13, and 14 of Jordan's law. Additionally, the convention could authorize international cooperation for investigation of crimes penalized with three or four years of imprisonment, as seen in countries such as the UAE, Tunisia, Egypt, and Saudi Arabia.

As Canada warned (at minute 01:56) at the recent negotiation session, these expansive provisions in the Convention permit states to unilaterally define and broaden the scope of criminal conduct, potentially paving the way for abuse and transnational repression. While the Convention may incorporate some procedural safeguards, its far-reaching scope raises profound questions about its compatibility with the key tenets of human rights law and the principles enshrined in the UN Charter.

The root problem lies not in the severity of penalties, but in the fact that some countries criminalize behaviors and expression that are protected under international human rights law and the UN Charter. This is alarming, given that numerous laws affecting the LGBTQ+ community carry penalties within these ranges, making the potential for misuse of such cooperation profound.

In a nutshell, the proposed UN treaty amplifies the existing threats to the LGBTQ+ community. It endorses a framework where nations can surveil benign activities such as sharing LGBTQ+ content, potentially intensifying the already-precarious situation for this community in many regions.

Online, the lack of legal protection of subscriber data threatens the anonymity of the community, making them vulnerable to identification and subsequent persecution. The mere act of engaging in virtual communities, sharing personal anecdotes, or openly expressing relationships could lead to their identities being disclosed, putting them at significant risk.

Offline, the implications intensify with amplified hesitancy to participate in public events, showcase LGBTQ+ symbols, or even undertake daily routines that risk revealing their identity. The draft convention's potential to bolster digital surveillance capabilities means that even private communications, like discussions about same-sex relationships or plans for LGBTQ+ gatherings, could be intercepted and turned against them. 

To all member states: This is a pivotal moment. This is our opportunity to ensure the digital future is one where rights are championed, not compromised. Pledge to protect the rights of all, especially those communities like the LGBTQ+ that are most vulnerable. The international community must unite in its commitment to ensure that the proposed convention serves as an instrument of protection, not persecution.



Cities Should Act NOW to Ban Predictive Policing...and Stop Using ShotSpotter, Too

Sound Thinking, the company behind ShotSpotter—an acoustic gunshot detection technology that is rife with problems—is reportedly buying Geolitica, the company behind PredPol, a predictive policing technology known to exacerbate inequalities by directing police to already massively surveilled communities. Sound Thinking acquired the other major predictive policing technology—Hunchlab—in 2018. This consolidation of harmful and flawed technologies means it’s even more critical for cities to move swiftly to ban the harmful tactics of both of these technologies.

ShotSpotter is currently linked to over 100 law enforcement agencies in the U.S. PredPol, on the other hand, was used in around 38 cities in 2021 (this number may be much higher now). ShotSpotter's acquisition of Hunchlab had already led the company to claim that the tools work “hand in hand;” a 2018 press release made clear that predictive policing would be offered as an add-on product, and claimed that the integration of the two would “enable it to update predictive models and patrol missions in real time.” When companies like Sound Thinking and Geolitica merge and bundle their products, it becomes much easier for cities that purchase one harmful technology to end up deploying a suite of them without meaningful oversight, transparency, or control by elected officials or the public. Axon, for instance, was criticized by academics, attorneys, activists, and its own ethics board for its intention to put tasers on indoor drones. Now the company has announced its acquisition of Sky-Hero, which makes small tactical UAVs, a sign that it may be willing to restart the drone taser program that led a good portion of its ethics board to resign. Mergers can be a sign of future ambitions.

In some ways, these tools do belong together. Both predictive policing and gunshot recognition are severely flawed and dangerous to marginalized groups. Hopefully, this bundling will make resisting them easier as well.

As we have written, studies have found that ShotSpotter's technology is inaccurate. Its alerts sometimes send armed police, primed to expect armed resistance, to locations where there is none, and where innocent residents can become targets of suspicion as a result.

PredPol’s claim is that algorithms can predict crime. This is blatantly false. But that myth has helped propel the predictive policing industry to massive profits; it's projected to be worth over $5 billion by the end of 2023. This false promise creates the illusion that police departments who buy predictive policing tech are being proactive about tackling crime. But the truth is, predictive policing just perpetuates centuries of inequalities in policing and exacerbates racial violence against Black, Latine, and other communities of color.

Predictive policing is a self-fulfilling prophecy. If police focus their efforts in one neighborhood, most of their arrests are likely to be in that neighborhood, leading the data to reflect that area as a hotbed of criminal activity, which can be used to justify even more police surveillance. Predictive policing systems are often designed to incorporate only reported crimes, which means that neighborhoods and communities where the police are called more often might see a higher likelihood of having predictive policing technology concentrate resources there. This cycle results in further victimization of communities that are already mass policed—namely, communities of color, unhoused individuals, and immigrants—by using the cloak of scientific legitimacy and the supposedly unbiased nature of data.
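
The feedback loop is easy to demonstrate. Below is a toy simulation in Python; it is not any vendor's actual model, and the neighborhoods, rates, and starting numbers are invented. Two neighborhoods have identical underlying crime rates, but one begins with more recorded arrests. Because patrols follow past arrest data and new arrests happen where the patrols are, the initial disparity reproduces itself indefinitely and the data appears to confirm it.

    # A toy feedback-loop simulation (not any vendor's actual model).
    # Both neighborhoods have the SAME underlying crime rate, but
    # neighborhood A starts with twice the recorded arrests.
    TRUE_CRIME_RATE = [0.05, 0.05]   # identical by construction
    arrests = [10, 5]                # historical over-policing of A

    for year in range(1, 6):
        total = sum(arrests)
        # "Predictive" allocation: patrol hours follow past arrest data.
        patrols = [1000 * a / total for a in arrests]
        # New arrests scale with patrol presence, not underlying crime.
        new = [round(p * r) for p, r in zip(patrols, TRUE_CRIME_RATE)]
        arrests = [a + n for a, n in zip(arrests, new)]
        print(f"year {year}: patrols={[round(p) for p in patrols]}, "
              f"cumulative arrests={arrests}")

Run it and neighborhood A keeps roughly twice the patrols and twice the arrests every year, even though the two neighborhoods were defined to be identical. The model never receives the information that would correct it, because it only looks where it already believes crime to be.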

Some cities have already banned predictive policing to protect their residents. The EU is also considering a ban, and federal elected officials have raised concerns about the dangers of the technology. Sen. Ron Wyden penned a probing letter to Attorney General Merrick Garland asking how the technology is being used. Big cities that were major customers of ShotSpotter have been canceling their contracts as well, and now the U.S. Justice Department has been asked to investigate how cities use the technology, because there is “substantial evidence” it is deployed disproportionately in majority-minority neighborhoods.

Skepticism about the efficacy and ethics of both of these technologies is on the rise, and as these companies consolidate, we must engage in more robust organizing to counter them. At the moment of this alarming merger we say: ban predictive policing! And stop using dangerous, inaccurate gunshot detection technology! The fact that these flawed tools reside in just one company is all the more reason to act swiftly.

GAO Report Shows the Government Uses Face Recognition with No Accountability, Transparency, or Training

Federal agents are using face recognition software without training, policies, or oversight, according to the Government Accountability Office (GAO).

The government watchdog issued yet another report this month about the dangerously inadequate and nonexistent rules for how federal agencies use face recognition, underlining what we’ve already known: the government cannot be trusted with this flawed and dangerous technology.

The GAO review covered seven agencies within the Department of Homeland Security (DHS) and Department of Justice (DOJ), which together account for more than 80 percent of all federal officers and a majority of face recognition searches conducted by federal agents.

Across each of the agencies, GAO found that most law enforcement officers using face recognition have no training before being given access to the powerful surveillance tool. No federal laws or regulations mandate specific face recognition training for DHS or DOJ employees, and Homeland Security Investigations (HSI) and the Marshals Service were the only agencies reviewed that now require training specific to face recognition. Though each agency has its own general policies on handling personally identifiable information (PII), like the facial images used for face recognition, none of the seven agencies included in the GAO review fully complied with them.

Thousands of face recognition searches have been conducted by federal agents without training or policies. In the period GAO studied, at least 63,000 searches had happened, but this number is a known undercount. A complete count of face recognition use is not possible: the number of federal agents with access to face recognition, the number of searches conducted, and the reasons for those searches are unknown, because some systems used by the Federal Bureau of Investigation (FBI) and Customs and Border Protection (CBP) don't track these numbers.

Our faces are unique and mostly permanent (people don't usually just get a new one), and face recognition technology, particularly when used by law enforcement and government, puts many of our important rights in jeopardy. Privacy, free expression, information security, and social justice are all at risk. The technology facilitates covert mass surveillance of the places we frequent and the people we know. It can be used to make judgments about how we feel and behave. Mass adoption of face recognition means being able to track people automatically as they go about their day visiting doctors, lawyers, houses of worship, as well as friends and family. It also means that law enforcement could, for example, fly a drone over a protest against police violence and walk away with a list of everyone in attendance. Either instance would create a chilling effect wherein people would be hesitant to attend protests or visit certain friends or romantic partners knowing there would be a permanent record of it.

GAO has issued multiple reports on federal agencies’ use of face recognition and, in each, they have found that agencies don’t track system access or reliably train their agents. The office has repeatedly outlined recommendations for how federal agencies should develop guidance for face recognition use that takes into account the civil rights and privacy issues created by the technology. GAO’s latest report makes clear that law enforcement agencies continue to fail to heed these warnings.

Face recognition is intended to facilitate tracking and indexing individuals for future and real-time reference, a system that can be easily abused. Even if it were 100% accurate — and it isn't — face recognition would still be too invasive and threatening to our civil rights and civil liberties to use. The federal government should immediately put guardrails around who can use this technology and for what, and ultimately cease its use altogether.

The State of Chihuahua Is Building a 20-Story Tower in Ciudad Juarez to Surveil 13 Cities–and Texas Will Also Be Watching

EFF Special Advisor Paul Tepper and EFF intern Michael Rubio contributed research to this report.

Chihuahua state officials and a notorious Mexican security contractor broke ground last summer on the Torre Centinela (Sentinel Tower), an ominous, 20-story high-rise in downtown Ciudad Juarez that will serve as the central node of a new AI-enhanced surveillance regime. With tentacles reaching into 13 Mexican cities and a data pipeline that will channel intelligence all the way to Austin, Texas, the monstrous project will be unlike anything seen before along the U.S.-Mexico border.

And that's saying a lot, considering the last 30-plus years of surging technology on the U.S. side of the border.

The Torre Centinela will stand in a former parking lot next to the city's famous bullring, a mere half-mile south of where migrants and asylum seekers have camped and protested at the Paso del Norte International Bridge leading to El Paso. But its reach goes much further: the Torre Centinela is just one piece of the Plataforma Centinela (Sentinel Platform), an aggressive new technology strategy developed by Chihuahua's Secretaría de Seguridad Pública Estatal (State Secretariat of Public Security, or SSPE) in collaboration with the company Seguritech.

With its sprawling infrastructure, the Plataforma Centinela will create an atmosphere of surveillance and data-streams blanketing the entire region. The plan calls for nearly every cutting-edge technology system marketed at law enforcement: 10,000 surveillance cameras, face recognition, automated license plate recognition, real-time crime analytics, a fleet of mobile surveillance vehicles, drone teams and counter-drone teams, and more.

If the project comes together as advertised in the Avengers-style trailer that SSPE released to influence public opinion, law enforcement personnel on site will be surrounded by wall-to-wall monitors (140 meters of screens per floor), while 2,000 officers in the field will be able to access live intelligence through handheld tablets.

Texas law enforcement will also have "eyes on this side of the border" via the Plataforma Centinela, Chihuahua Governor Maru Campos publicly stated last year. Texas Governor Greg Abbott signed a memorandum of understanding confirming the partnership.

The Plataforma Centinela will transform public life and threaten human rights in the borderlands in ways that aren't easy to assess. Regional newspapers and local advocates, especially Norte Digital and Frente Político Ciudadano para la Defensa de los Derechos Humanos (FPCDDH), have raised significant concerns about the project, pointing to a low likelihood of success and high potential for waste and abuse.

"It is a myopic approach to security; the full emphasis is placed on situational prevention, while the social causes of crime and violence are not addressed," FPCDDH member and analyst Victor M. Quintana tells EFF, noting that the Plataforma Centinela's budget is significantly higher than what the state devotes to social services. "There are no strategies for the prevention of addiction, neither for rebuilding the fabric of society nor attending to dropouts from school or young people at risk, which are social causes of insecurity."

Instead of providing access to unfiltered information about the project, the State of Chihuahua has launched a public relations blitz. In addition to press conferences and the highly-produced cinematic trailer, SSPE recently hosted a "Pabellón Centinela" (Sentinel Pavilion), a family-friendly carnival where the public was invited to check out a camera wall and drones, while children played with paintball guns, drove a toy ATV patrol vehicle around a model city, and colored in illustrations of a data center operator.

Behind that smoke screen, state officials are doing almost everything they can to control the narrative around the project and avoid public scrutiny.

According to news reports, the SSPE and the Secretaría de Hacienda (Finance Secretary) have simultaneously deemed most information about the project classified and left dozens of public records requests unanswered. The Chihuahua State Congress also rejected a proposal to formally declassify the documents and stymied other oversight measures, including a proposed audit. Meanwhile, EFF has submitted public records requests to several Texas agencies and all have claimed they have no records related to the Plataforma Centinela.

This is all the more troubling considering the relationship between the state and Seguritech, a company whose business practices in 22 other jurisdictions have been called into question by public officials.

What we can be sure of is that the Plataforma Centinela project may serve as proof of concept of the kind of panopticon surveillance governments can get away with in both North America and Latin America.

What Is the Plataforma Centinela?

High-tech surveillance centers are not a new phenomenon on the Mexican side of the border. These facilities tend to use "C" distinctions to explain their functions and purposes. EFF has mapped out dozens of these in the six Mexican border states.

A screen capture of a Google Map of Mexican C-Centers

They include:

  • C4 (Centro de Comunicación, Cómputo, Control y Comando) (Center for Communication, Computing, Control, and Command), 
  • C5 (Centro de Coordinación Integral, de Control, Comando, Comunicación y Cómputo del Estado) (Center for Integral Coordination for Control, Command, Communication, and State Computing), 
  • C5i (Centro de Control, Comando, Comunicación, Cómputo, Coordinación e Inteligencia) (Center for Control, Command, Communication, Computing, Coordination, and Intelligence).

Typically, these centers function as a cross between a 911 call center and a real-time crime center, with operators handling emergency calls, analyzing crime data, and controlling a network of surveillance cameras via a wall bank of monitors. In some cases, the Cs may be presented in a different order or stand for slightly different words. For example, some C5s might alternately stand for "Centros de Comando, Control, Comunicación, Cómputo y Calidad" (Centers for Command, Control, Communication, Computing, and Quality). These facilities also exist in other parts of Mexico. The number of Cs often indicates scale and responsibilities, but more often than not, it seems to be a political or marketing designation.

The Plataforma Centinela, however, goes far beyond the scope of previous projects and in fact will be known as the first C7 (Centro de Comando, Cómputo, Control, Coordinación, Contacto Ciudadano, Calidad, Comunicaciones e Inteligencia Artificial) (Center for Command, Computing, Control, Coordination, Citizen Contact, Quality, Communications, and Artificial Intelligence). The Torre Centinela in Ciudad Juarez will serve as the nerve center, with more than a dozen sub-centers throughout the state.

According to statistics that Gov. Campos disclosed as part of negotiations with Texas and news reports, the Plataforma Centinela will include: 

    • 1,791 automated license plate readers. These are cameras that photograph vehicles and their license plates, then upload that data, along with the time and location where the vehicles were seen, to a massive searchable database. Law enforcement can also create lists of license plates to track specific vehicles and receive alerts when those vehicles are seen (see the sketch after this list). 
    • 4,800 fixed cameras. These are your run-of-the-mill cameras, positioned to permanently surveil a particular location from one angle.  
    • 3,065 pan-tilt-zoom (PTZ) cameras. These are more sophisticated cameras. While they are affixed to a specific location, such as a street light or a telephone pole, these cameras can be controlled remotely. An operator can swivel the camera around 360-degrees and zoom in on subjects. 
    • 2,000 tablets. Officers in the field will be issued handheld devices for accessing data directly from the Plataforma Centinela. 
    • 102 security arches. This is a common form of surveillance in Mexico, but not the United States. These are structures built over highways and roads to capture data on passing vehicles and their passengers. 
    • 74 drones (Unmanned Aerial Vehicles/UAVs). While the Chihuahua government has not disclosed what surveillance payload will be attached to these drones, it is common for law enforcement drones to deploy video, infrared, and thermal imaging technology.
    • 40 mobile video surveillance trailers. While details on these systems are scant, it is likely these are camera towers that can be towed to and parked at targeted locations. 
    • 15 anti-drone systems. These systems are designed to intercept and disable drones operated by criminal organizations.
    • Face recognition. The project calls for "biometric filters" to be applied to camera feeds "to assist in the capture of cartel leaders," and for the collection of migrant biometrics. Such a system would require scanning the faces of the general public.
    • Artificial intelligence. So far, the administration has thrown around the term AI without fully explaining how it will be used. Typically, however, law enforcement agencies have used this technology to "predict" where crime might occur, identify individuals most likely to be connected to crime, and surface potential connections between suspects that would not have been obvious to a human observer. All of these technologies have a propensity for making errors or exacerbating existing bias. 
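
To give a sense of the mechanics, here is a minimal sketch in Python of how a generic automated license plate reader pipeline works, as referenced in the list above. This is not Seguritech's software, and the camera locations and plate numbers are invented; the structure is common to ALPR systems generally. The detail worth noticing is that every passing vehicle is logged into the searchable database, not just vehicles on a watchlist.

    from datetime import datetime, timezone

    # A generic ALPR pipeline sketch (not Seguritech's actual software;
    # locations and plates below are invented for illustration).
    plate_log = []            # searchable history of every read
    hot_list = {"ABC123"}     # plates flagged for real-time alerts

    def on_plate_read(plate: str, camera_location: str) -> None:
        # Every read is retained, whether or not the plate is flagged,
        # so the log becomes a travel history of the general public.
        plate_log.append({
            "plate": plate,
            "location": camera_location,
            "time": datetime.now(timezone.utc).isoformat(),
        })
        if plate in hot_list:
            print(f"ALERT: {plate} seen at {camera_location}")

    on_plate_read("ABC123", "camera 12, downtown crossing")
    on_plate_read("XYZ789", "camera 48, highway arch")
    print(f"{len(plate_log)} reads retained for later search")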

As of May, 60% of the Plataforma Centinela camera network had been installed, with an expected completion date of December, according to Norte Digital. However, the cameras were already being used in criminal investigations. 

All combined, this technology amounts to an unprecedented expansion of the surveillance state in Latin America, as SSPE brags in its promotional material. The threat to privacy may also be unprecedented: creating cities where people can no longer move freely in their communities without being watched, scanned, and tagged.

But that's assuming the system functions as advertised—and based on the main contractor's history, that's anything but guaranteed. 

Who Is Seguritech?

The Plataforma Centinela project is being built by the megacorporation Seguritech, which has signed deals with more than a dozen government entities throughout Mexico. As of 2018, the company had received no-bid contracts in at least 10 Mexican states and cities, which means it was able to sidestep the accountability process that requires companies to compete for projects.

And when it comes to the Plataforma Centinela, the company isn't simply a contractor: It will actually have ownership over the project, the Torre Centinela, and all its related assets, including cameras and drones, until August 2027.

That's what SSPE Secretary Gilberto Loya Chávez told the news organization Norte Digital, but the terms of the agreement between Seguritech and Chihuahua's administration are not public. The SSPE's Transparency Committee decided to classify the information "concerning the procedures for the acquisition of supplies, goods, and technology necessary for the development, implementation, and operation of the Platforma Centinela" for five years.

In spite of the opacity shrouding the project, journalists have surfaced some information about the investment plan. According to statements from government officials, the Plataforma Centinela will cost 4.2 billion pesos, with Chihuahua's administration paying regular installments to the company every three months (Chihuahua's governor had previously said that these would be yearly payments in the amount of 700 million to 1 billion pesos per year). According to news reports, when the payments are completed in 2027, the ownership of the platform's assets and infrastructure are expected to pass from Seguritech to the state of Chihuahua.

The Plataforma Centinela project marks a new pinnacle in Seguritech's trajectory as a Mexican security contractor. Founded in 1995 as a small business selling neighborhood alarms, SeguriTech Privada S.A. de C.V. became a highly profitable brand, and currently operates in five areas: security, defense, telecommunications, aeronautics, and construction. According to Zeta Tijuana, Seguritech also secures contracts through its affiliated companies, including Comunicación Segura (focused on telecommunications and security) and Picorp S.A. de C.V. (focused on architecture and construction, including prisons and detention centers). Zeta also identified another Seguritech company, Tres10 de C.V., as the contractor named in various C5i projects.

Thorough reporting by Mexican outlets such as Proceso, Zeta Tijuana, Norte Digital, and Zona Free paint an unsettling picture of Seguritech's activities over the years.

Former President Felipe Calderón's war on drug trafficking, initiated during his 2006-2012 term, marked an important turning point for surveillance in Mexico. As Proceso reported, Seguritech began to secure major government contracts beginning in 2007, receiving its first billion-peso deal in 2011 with Sinaloa's state government. In 2013, avoiding the bidding process, the company secured a 6-billion peso contract assigned by Eruviel Ávila, then governor of the state of México (or Edomex, not to be confused with the country of Mexico). During Enrique Peña Nieto's years as Edomex's governor, and especially later, as Mexico's president, Seguritech secured its status among Mexico's top technology contractors.

According to Zeta Tijuana, during the six years that Peña Nieto served as president (2012-2018), the company monopolized contracts for the country's main surveillance and intelligence projects, specifically the C5i centers. As Zeta Tijuana writes:

"More than 10 C5i units were opened or began construction during Peña Nieto's six-year term. Federal entities committed budgets in the millions, amid opacity, violating parliamentary processes and administrative requirements. The purchase of obsolete technological equipment was authorized at an overpriced rate, hiding information under the pretext of protecting national security."

Zeta Tijuana further cites records from the Mexican Institute of Industrial Property showing that Seguritech registered the term "C5i" as its own brand, an apparent attempt to make it more difficult for other surveillance contractors to provide services under that name to the government.

Despite promises from government officials that these huge investments in surveillance would improve public safety, the country’s number of violent deaths increased during Peña Nieto's term in office.

"What is most shocking is how ineffective Seguritech's system is," says Quintana, the spokesperson for FPCDDH. By his analysis, Quintana says, "In five out of six states where Seguritech entered into contracts and provided security services, the annual crime rate shot up in proportions ranging from 11% to 85%."

Seguritech has also been criticized for inflated prices, technical failures, and deploying obsolete equipment. According to Norte Digital, only 17% of surveillance cameras were working by the end of the company's contract with Sinaloa's state government. Proceso notes the rise of complaints about the malfunctioning of cameras in Cuauhtémoc Delegation (a borough of Mexico City) in 2016. Zeta Tijuana reported on the disproportionate amount the company charged for installing 200 obsolete 2-megapixel cameras in 2018.

Seguritech's track record led to formal complaints and judicial cases against the company. The company has responded to this negative attention by hiring services to take down and censor critical stories about its activities published online, according to investigative reports published as part of the Global Investigative Journalism Network's Forbidden Stories project.

Yet, none of this information dissuaded Chihuahua's governor, Maru Campos, from closing a new no-bid contract with Seguritech to develop the Plataforma Centinela project. 

A Cross-Border Collaboration


The Plataforma Centinela project presents a troubling escalation in cross-border partnerships between states, one that cuts out each nation's respective federal government. In April 2022, the states of Texas and Chihuahua signed a memorandum of understanding to collaborate on reducing "cartels' human trafficking and smuggling of deadly fentanyl and other drugs" and to "stop the flow of migrants from over 100 countries who illegally enter Texas through Chihuahua."

A slide describing the "New Border Model"

While much of the agreement centers around cargo at the points of entry, the document also specifically calls out the various technologies that make up the Plataforma Centinela. In attachments to the agreement, Gov. Campos promises Chihuahua is "willing to share that information with Texas State authorities and commercial partners directly."

During a press conference announcing the MOU, Gov. Abbott declared, “Governor Campos has provided me with the best border security plan that I have seen from any governor from Mexico.” He held up a three-page outline and a slide, which were also provided to the public, and referenced the existence of "a much more extensive detailed memo that explains in nuance" all the aspects of the program.

Abbott went on to read out a summary of Plataforma Centinela, adding, "This is a demonstration of commitment from a strong governor who is working collaboratively with the state of Texas."

Then Campos, in response to a reporter's question, added: "We are talking about sharing information and intelligence among states, which means the state of Texas will have eyes on this side of the border." She added that the data collected through the Plataforma Centinela will be analyzed by both the states of Chihuahua and Texas.

Abbott provided an example of one way the collaboration will work: "We will identify hotspots where there will be an increase in the number of migrants showing up because it's a location chosen by cartels to try to put people across the border at that particular location. The Chihuahua officials will work in collaboration with the Texas Department of Public Safety, where DPS has identified that hotspot and the Chihuahua side will work from a law enforcement side to disrupt that hotspot."

In order to learn more about the scope of the project, EFF sent public records requests to several Texas agencies, including the Governor's Office, the Texas Department of Public Safety, the Texas Attorney General's Office, the El Paso County Sheriff, and the El Paso Police Department. Not one of the agencies produced records related to the Plataforma Centinela project.

Meanwhile, Texas is further beefing up its efforts to use technology at the border, including by enacting new laws that formally allow the Texas National Guard and State Guard to deploy drones at the border and authorize the governor to enter compacts with other states to share intelligence and resources to build "a comprehensive technological surveillance system" on state land to deter illegal activity at the border. In addition to the MOU with Chihuahua, Abbott also signed similar agreements with the states of Nuevo León and Coahuila in 2022.

Two Sides, One Border

The Plataforma Centinela has enormous potential to violate the rights of one of the largest cross-border populations along the U.S.-Mexico border. But while law enforcement officials are eager to collaborate and traffic data back and forth, advocacy efforts around surveillance too often are confined to their respective sides.

The Spanish-language press in Mexico has devoted significant resources to investigating the Plataforma Centinela and raising the alarm over its lack of transparency and accountability, as well as its potential for corruption. Yet, the project has received virtually no attention or scrutiny in the United States. 

Fighting back against surveillance of cross-border communities requires cross-border efforts. EFF supports the efforts of advocacy groups in Ciudad Juarez and other regions of Chihuahua to expose the mistakes the Chihuahua government is making with the Plataforma Centinela and to call out its mammoth surveillance approach for failing to address the root social issues. We also salute the efforts by local journalists to hold the government accountable. However, U.S.-based journalists, activists, and policymakers—many of whom have done an excellent job surfacing criticism of Customs and Border Protection's so-called virtual wall—must also turn their attention to the massive surveillance apparatus building up on the Mexican side.

In reality, there is no separate Mexican surveillance and U.S. surveillance. It’s one massive surveillance monster that, ironically, in the name of border enforcement, recognizes no borders itself.
