
An educational kit from Exodus Privacy

By: Framasoft
8 June 2023 at 01:42

At a time when, in a worrying drift toward police overreach, people who want to protect their privacy are being criminalized, it is more important than ever that the knowledge and practices that make it possible to understand what is at stake and to preserve confidentiality be spread widely. In that effort, the Exodus Privacy association plays an important role by making accessible the analysis of the far too many trackers that infest our smartphones. That same association is now offering a new tool, or rather a toolbox, that is just as interesting…

Hello, Exodus Privacy. At Framasoft we know you well and support you, but can you remind our readers what your association actually does?

Yes, with pleasure! The Exodus Privacy association aims to help as many people as possible better protect their privacy on their smartphones. To that end, we provide tools for analyzing applications from the Google Play Store or F-Droid, which reveal, among other things, whether trackers are hiding inside them. So we offer an application that analyzes the various apps installed on your smartphone, as well as an online analysis platform.

Exodus Privacy logo (a stylized E)

So it wasn't enough to provide tools for examining applications and spotting the spies, big and small, lurking inside them? Now you're offering an educational kit? Tell us about it…
Since the association's beginnings we have been running workshops and giving talks, and we are regularly asked to speak. As a small volunteer-run association we can't be everywhere, so we decided to offer a kit that lets interested people run a "smartphones and privacy" workshop without needing us!

In what contexts do you see the kit being used? Are you aiming mainly at professional trainers and digital mediators, at volunteers from an association who want to run a workshop, or directly at members of the Dupuis-Morizeau family?
Clearly, we are addressing two kinds of audiences: professional digital mediators who run workshops for their communities, whether in a library, a community center, or a neighborhood center, but also association volunteers who organize activities around protecting digital privacy.

So what exactly is in this kit, and how can it be used?
The kit contains everything needed to run a 90-minute workshop aimed at people who are beginners or not very comfortable with their smartphone: a detailed script for the facilitator, a slideshow, an educational video explaining trackers, and a handout that lets participants leave with a summary of what was covered during the workshop.

For example, starting from a fake phone of which only the application logos are known, participants are invited to guess things about the life of the person who owns it. We designed playful, participatory facilitation methods, but anyone can adapt them to their own preferences and comfort level!

A fake phone for acquiring real privacy skills

How did you design it? Was it the work of a big team or a small core of dedicated people?
In total, two or three volunteers from the association created the content, including MeTaL_PoU, who managed and steered the project, Héloïse from NetFreaks, who handled the motion design of the video, and _Lila*, who did the graphic design and layout. Everything was done remotely! At each of the association's monthly meetings we reviewed the project's progress; it took longer to finish than planned, no doubt because we hadn't fully estimated the time needed and part of the project relied on volunteer work. But we are proud to publish it now!

Have you already beta-tested it? What were the first reactions?
We had digital mediators test a first prototype. Their feedback confirmed that the workshop works well but that a few details needed changing, in particular some elements that lacked clarity. That's the value of outside perspectives: within Exodus Privacy, things can seem obvious to us when they are not obvious at all!

A spy vacuum cleaner that sucks up your private data while watching you out of the corner of its eye

Is your kit available to everyone? Under what license? Is it libre?
It is available under CC-BY-SA, and it is libre, like everything we do! For now it only exists in French, but nothing stops anyone from contributing to improve it!

All of this has a cost. Does it call for a donation drive?
We were lucky: this project was funded in full by the Fondation AFNIC pour un numérique inclusif, and we are very grateful to them for that! The cost of the kit is almost entirely the pay of the professionals who worked on the motion design, layout, and graphic design.

Do you plan to do some outreach to the intended audiences, for example digital mediators in the Éducation Nationale, popular-education organizations like the CEMEA, etc.?

Yes indeed, that's planned: we are already in contact with the CEMEA and April, among others. Outreach is also planned within the ProfDoc community, and the kit will be shared through the MedNum networks.

Exodus Privacy's work goes beyond this kit, and it is important to support it! To discover the actions of this wonderful association and contribute to them, visit their website: https://exodus-privacy.eu.org/fr/page/contribute We wish this new tool great success and wide distribution. Thank you for it and for all their initiatives!

A figure dressed in gray sits on a bench, almost entirely sheltered behind a gray umbrella; the bench stands on grass at the edge of a paved sidewalk.

"Privacy" by doegox, license CC BY-SA 2.0.

Take action! Protect end-to-end encryption

23 June 2023 at 14:05
How do we counter the dangers resulting from the ongoing, worldwide legislation like chat control, the EARN IT Act, and the so-called "Online Safety Bill" that threatens end-to-end encryption and privacy in general? Take action! Write a letter to the appropriate agencies to let them know that you value your privacy and the privacy of the people around you, and remind them of their duty to protect it.

New Privacy Badger Prevents Google From Mangling More of Your Links and Invading Your Privacy

We released a new version of Privacy Badger[1] that updates how we fight “link tracking” across a number of Google products. With this update Privacy Badger removes tracking from links in Google Docs, Gmail, Google Maps, and Google Images results. Privacy Badger now also removes tracking from links added after scrolling through Google Search results.

Link tracking is a creepy surveillance tactic that allows a company to follow you whenever you click on a link to leave its website. As we wrote in our original announcement of Google link tracking protection, Google uses different techniques in different browsers. The techniques also vary across Google products. One common link tracking approach surreptitiously redirects the outgoing request through the tracker’s own servers. There is virtually no benefit[2] for you when this happens. The added complexity mostly just helps Google learn more about your browsing.

It's been a few years since our original release of Google link tracking protection. Things have changed in the meantime. For example, Google Search now dynamically adds results as you scroll the page ("infinite scroll" has mostly replaced distinct pages of results). Google Hangouts no longer exists! This made it a good time for us to update Privacy Badger’s first party tracking protections.

Privacy Badger’s extension popup window showing that link tracking protection is active for the currently visited site.

You can always check to see what Privacy Badger has done on the site you’re currently on by clicking on Privacy Badger’s icon in your browser toolbar. Whenever link tracking protection is active, you will see that reflected in Privacy Badger’s popup window.

We'll get into the technical explanation about how this all works below, but the TL;DR is that this is just one way that Privacy Badger continues to create a less tracking- and tracker-riddled internet experience.

More Details

This update is an overhaul of how Google link tracking removal works. Trying to get it all done inside a “content script” (a script we inject into Google pages) was becoming increasingly untenable. Privacy Badger wasn’t catching all cases of tracking and was breaking page functionality. Patching to catch the missed tracking with the content script was becoming unreasonably complex and likely to break more functionality.

Going forward, Privacy Badger will still attempt to replace tracking URLs on pages with the content script, but will no longer try to prevent links from triggering tracking beacon requests. Instead, it will block all such requests in the network layer.

Often the link destination is replaced with a redirect URL in response to interaction with the link. Sometimes Privacy Badger catches this mutation in the content script and fixes the link in time. Sometimes the page uses a more complicated approach to covertly open a redirect URL at the last moment, which isn’t caught in the content script. Privacy Badger works around these cases by redirecting the redirect to where you actually want to go in the network layer.
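
To make this concrete, here is a simplified sketch, not Privacy Badger's actual code, of what fixing a redirect at the network layer can look like with the blocking webRequest API discussed in the next paragraph. It assumes the real destination travels in the redirector's query string (as in the example URL later in this post).

    // Simplified sketch (not Privacy Badger's actual code): when a request to a
    // known redirector such as Google's /url endpoint carries the real
    // destination in its query string, answer it with a direct redirect.
    chrome.webRequest.onBeforeRequest.addListener(
      (details) => {
        const url = new URL(details.url);
        const destination =
          url.searchParams.get("q") ?? url.searchParams.get("url");
        if (destination && destination.startsWith("https://")) {
          // Send the browser straight to where the user actually wanted to go.
          return { redirectUrl: destination };
        }
        return {}; // otherwise leave the request alone
      },
      { urls: ["*://www.google.com/url*"] }, // only the redirector path
      ["blocking"] // the blocking mode that MV3 removes, as explained below
    );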

Google’s Manifest V3 (MV3) removes the ability to redirect requests using the flexible webRequest API that Privacy Badger uses now. MV3 replaces blocking webRequest with the limited by design Declarative Net Request (DNR) API. Unfortunately, this means that MV3 extensions are not able to properly fix redirects at the network layer at this time. We would like to see this important functionality gap resolved before MV3 becomes mandatory for all extensions.

Privacy Badger still attempts to remove tracking URLs with the content script so that you can always see and copy to clipboard the links you actually want, as opposed to mangled links you don’t. For example, without this feature, you may expect to copy “https://example.com”, but you will instead get something like “https://www.google.com/url?q=https://example.com/&sa=D&source=editors&ust=1692976254645783&usg=AOvVaw1LT4QOoXXIaYDB0ntz57cf”.
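
As an illustration of the kind of clean-up the content script performs, the sketch below unwraps a link like the one above by pulling the real destination out of the "q" parameter. Treat it as a sketch under that assumption; Privacy Badger's actual logic covers many more variations and products.

    // Recover the real destination from a Google redirect wrapper of the form
    // shown above. Illustrative only.
    function unwrapGoogleRedirect(href: string): string {
      let url: URL;
      try {
        url = new URL(href);
      } catch {
        return href; // not a parseable URL, leave it untouched
      }
      const looksLikeRedirect =
        url.hostname.endsWith("google.com") && url.pathname === "/url";
      const target = url.searchParams.get("q") ?? url.searchParams.get("url");
      return looksLikeRedirect && target ? target : href;
    }

    // The mangled example above comes back as "https://example.com/".
    console.log(unwrapGoogleRedirect(
      "https://www.google.com/url?q=https://example.com/&sa=D&source=editors&ust=1692976254645783&usg=AOvVaw1LT4QOoXXIaYDB0ntz57cf"
    ));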

To learn more about this update, and to see a breakdown of the different kinds of Google link tracking, visit the pull request on GitHub.

Let us know if you have any feedback through email, or, if you have a GitHub account, through our GitHub issue tracker.

To install Privacy Badger, visit privacybadger.org. Thank you for using Privacy Badger!

  • [1] Privacy Badger version 2023.9.12
  • [2] No benefit outside of removing the referrer information, which can be accomplished without resorting to obnoxious redirects.

The U.S. Government’s Database of Immigrant DNA Has Hit Scary, Astronomical Proportions

The FBI recently released its proposed budget for 2024, and its request for a massive increase in funding for its DNA database should concern us all. The FBI is asking for an additional $53 million in funding to aid in the collection, organization, and maintenance of its Combined DNA Index System (CODIS) database in the wake of a 2020 Trump Administration rule that requires the Department of Homeland Security to collect DNA from anyone in immigration detention. The database now houses the genetic information of over 21 million people, adding an average of 92,000 DNA samples a month in the last year alone, over 10 times the historical sample volume. The FBI’s increased budget request demonstrates that the federal government has, in fact, made good on its projection of collecting over 750,000 new samples annually from immigrant detainees for CODIS. This type of forcible DNA collection and long-term hoarding of genetic identifiers not only erodes civil liberties by exposing individuals to unnecessary and unwarranted government scrutiny, but it also demonstrates the government’s willingness to weaponize biometrics in order to surveil vulnerable communities.

After the Supreme Court’s decision in Maryland v. King (2013), which upheld a Maryland statute to collect DNA from individuals arrested for a violent felony offense, states have rapidly expanded DNA collection to encompass more and more offenses—even when DNA is not implicated in the nature of the offense. For example, in Virginia, the ACLU and other advocates fought against a bill that would have added obstruction of justice and shoplifting as offenses for which DNA could be collected. The federal government’s expansion of DNA collection from all immigrant detainees is the most drastic effort to vacuum up as much genetic information as possible, based on false assumptions linking crime to immigration status despite ample evidence to the contrary.

As we’ve previously cautioned, this DNA collection has serious consequences. Studies have shown that increasing the number of profiles in DNA databases doesn’t solve more crimes. A 2010 RAND report instead stated that the ability of police to solve crimes using DNA is “more strongly related to the number of crime-scene samples than to the number of offender profiles in the database.” Moreover, inclusion in a DNA database increases the likelihood that an innocent person will be implicated in a crime. 

Lastly, this increased DNA collection exacerbates the existing racial disparities in our criminal justice system by disproportionately impacting communities of color. Black and Latino men are already overrepresented in DNA databases. Adding nearly a million new profiles of immigrant detainees annually—who are almost entirely people of color, and the vast majority of whom are Latine—will further skew the 21 million profiles already in CODIS.

We are all at risk when the government increases its infrastructure and capacity for collecting and storing vast quantities of invasive data. With the resources to increase the volume of samples collected, and an ever-broadening scope of when and how law enforcement can collect genetic material from people, we are one step closer to a future in which we all are vulnerable to mass biometric surveillance. 

EFF at FIFAfrica 2023

25 September 2023 at 15:42

EFF is excited to be in Dar es Salaam, Tanzania, for this year's iteration of the Forum on Internet Freedom in Africa (FIFAfrica), organized by CIPESA (Collaboration on International ICT Policy for East and Southern Africa) from 27 to 29 September 2023.

FIFAfrica is a landmark event in the region that convenes an array of stakeholders from across internet governance and online rights to discuss and collaborate on opportunities for advancing privacy, protecting free expression, and enhancing the free flow of information online. FIFAfrica also offers a space to identify new and important digital rights issues, as well as to explore avenues to engage with these debates across national, regional, and global spaces.

We hope you have an opportunity to connect with us at the panels listed below. In addition to these, EFF will be attending many other events at FIFAfrica. We look forward to meeting you there!

THURSDAY 28 SEPTEMBER 

Combatting Disinformation for Democracy 

2pm to 3:30pm local time 
Location: Hyatt Hotel - Kibo 

Hosted by: CIPESA

Speakers

  • Paige Collings, Senior Speech and Privacy Activist, Electronic Frontier Foundation 
  • Nompilo Simanje, Africa Advocacy and Partnerships Lead, International Press Institute 
  • Obioma Okonkwo, Head, Legal Department, Media Rights Agenda
  • Daniel O’Maley, Senior Digital Governance Specialist, Center for International Media Assistance 

In an age of falsehoods, facts, and freedoms marked by the rapid spread of information and the proliferation of digital platforms, the battle against disinformation has never been more critical. This session brings together experts and practitioners at the forefront of this fight, exploring the pivotal roles that media, fact checkers, and technology play in upholding truth and combating the spread of false narratives. 

This panel will delve into the multifaceted challenges posed by disinformation campaigns, examining their impact on societies, politics, and public discourse. Through an engaging discussion, the session will spotlight innovative strategies, cutting-edge technologies, and collaborative initiatives employed by media organizations, tech companies, and civil society to safeguard the integrity of information.

FRIDAY 29 SEPTEMBER

Platform Accountability in Africa: Content Moderation and Political Transitions

11am to 12:30pm local time
Location: Hyatt Hotel - Kibo 

Hosted by: Meta Oversight Board, CIPESA, Open Society Foundations 

Speakers

  • Paige Collings, Senior Speech and Privacy Activist, Electronic Frontier Foundation 
  • Nerima Wako, Executive Director, SIASA PLACE
  • Abigail Bridgman, Deputy Vice President, Content Review and Policy, Meta Oversight Board 
  • Afia Asantewaa Asare-Kyei, Member, Meta Oversight Board

Social media platforms are often criticized for failing to address significant and seemingly preventable harms stemming from online content. This is especially true during volatile political transitions, where disinformation, incitement to violence, and hate speech on the basis of gender, religion, ethnicity, and other characteristics are highly associated with increased real-life harms.

This session will discuss best practices for combating harmful online content through the lens of the most urgent and credible threats to political transitions on the African continent. With critical general, presidential, and legislative elections fast approaching, as well as the looming threat of violent political transitions, the panelists will highlight current trends of online content, the impact of harmful content, and chart a path forward for the different stakeholders. The session will also assess the various roles that different institutions, stakeholders, and experts can play to strike the balance between addressing harms and respecting the human rights of users under such a context.

How To Turn Off Google’s “Privacy Sandbox” Ad Tracking—and Why You Should

28 September 2023 at 13:42

Google has rolled out "Privacy Sandbox," a Chrome feature first announced back in 2019 that, among other things, exchanges third-party cookies—the most common form of tracking technology—for what the company is now calling "Topics." Topics is a response to pushback against Google’s proposed Federated Learning of Cohorts (FLoC), which we called "a terrible idea" because it gave Google even more control over advertising in its browser while not truly protecting user privacy. While there have been some changes to how this works since 2019, Topics is still tracking your internet use for Google’s behavioral advertising.

If you use Chrome, you can disable this feature through a series of three confusing settings.

With the version of the Chrome browser released in September 2023, Google tracks your web browsing history and generates a list of advertising "topics" based on the web sites you visit. This works as you might expect. At launch there are almost 500 advertising categories—like "Student Loans & College Financing," "Parenting," or "Undergarments"—that you get dumped into based on whatever you're reading about online. A site that supports Privacy Sandbox will ask Chrome what sorts of things you're supposedly into, and then display an ad accordingly. 
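
For the technically curious, the way a participating site (or an ad script embedded in it) asks Chrome for those interests is a JavaScript call on the page. The sketch below is our own illustration based on the public Topics proposal, not a verified integration guide, and the field names may differ in practice.

    // Illustrative sketch of a site querying the Topics API (based on the
    // public proposal; not a verified integration guide).
    async function fetchVisitorTopics(): Promise<void> {
      // The call only exists when Privacy Sandbox is enabled and the page is a
      // secure context, so feature-detect first.
      if (!("browsingTopics" in document)) {
        console.log("Topics API not available in this browser.");
        return;
      }
      // Returns a handful of coarse interest categories as numeric IDs that map
      // to taxonomy entries like "Student Loans & College Financing".
      const topics = await (document as any).browsingTopics();
      for (const t of topics) {
        console.log(`topic id ${t.topic}, taxonomy v${t.taxonomyVersion}`);
      }
      // An ad tech script would forward these IDs to its server to pick an ad.
    }
    fetchVisitorTopics();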

The idea is that instead of the dozens of third-party cookies placed on websites by different advertisers and tracking companies, Google itself will track your interests within the browser, controlling even more of the advertising ecosystem than it already does. Google calls this “enhanced ad privacy,” perhaps leaning into the idea that starting in 2024 they plan to “phase out” the third-party cookies that many advertisers currently use to track people. But the company will still gobble up your browsing habits to serve you ads, preserving its bottom line in a world where competition on privacy is pushing it to phase out third-party cookies. 

Google plans to test Privacy Sandbox throughout 2024. Which means that for the next year or so, third-party cookies will continue to collect and share your data in Chrome.

The new Topics improves somewhat over the 2019 FLoC. It does not use the FLoC ID, a number that many worried would be used to fingerprint you. The ad-targeting topics are all public on GitHub, hopefully avoiding any clearly sensitive categories such as race, religion, or sexual orientation. Chrome's ad privacy controls, which we detail below, allow you to see what sorts of interest categories Chrome puts you in, and remove any topics you don't want to see ads for. There's also a simple means to opt out, which FLoC never really had during testing.

Other browsers, like Firefox and Safari, baked in privacy protections from third-party cookies in 2019 and 2020, respectively. Neither of those browsers has anything like Privacy Sandbox, which makes them better options if you'd prefer more privacy. 

Google referring to any of this as “privacy” is misleading. Even if it's better than third-party cookies, the Privacy Sandbox is still tracking; it's just done by one company instead of dozens. Instead of waffling between different tracking methods, even with mild improvements, we should work towards a world without behavioral ads.

But if you're sticking to Chrome, you can at least turn these features off.

How to Disable Privacy Sandbox

Screenshot of the Chrome browser showing the "Enhanced ad privacy in Chrome" page.

Depending on when you last updated Chrome, you may have already received a pop-up asking you to agree to “Enhanced ad privacy in Chrome.” If you just clicked the big blue button that said “Got it” to make the pop-up go away, you opted yourself in. But you can still get back to the opt out page easily enough by clicking the three-dot icon (⋮) > Settings > Privacy & Security > Ad Privacy page. Here you'll find this screen with three different settings:

  • Ad topics: This is the fundamental component of Privacy Sandbox that generates a list of your interests based on the websites you visit. If you leave this enabled, you'll eventually get a list of all your interests, which are used for ads, as well as the ability to block individual topics. The topics roll over every four weeks (up from weekly in the FLoC proposal) and random ones will be thrown in for good measure. You can disable this entirely by setting the toggle to "Off."
  • Site-suggested ads: This confusingly named toggle is what allows advertisers to do what’s called "remarketing" or "retargeting," also known as “after I buy a sofa, every website on the internet advertises that same sofa to me.” With this feature, site one gives information to your Chrome instance (like “this person loves sofas”) and site two, which runs ads, can interact with Chrome such that a sofa ad will be shown, even without site two learning that you love sofas. Disable this by setting the toggle to "Off."
  • Ad measurement: This allows advertisers to track ad performance by storing data in your browser that's then shared with other sites. For example, if you see an ad for a pair of shoes, the site would get information about the time of day, whether the ad was clicked, and where it was displayed. Disable this by setting the toggle to "Off."

If you're on Chrome, Firefox, Edge, or Opera, you should also take your privacy protections a step further with our own Privacy Badger, a browser extension that blocks third-party trackers that use cookies, fingerprinting, and other sneaky methods. On Chrome, Privacy Badger also disables the Topics API by default.

The Growing Threat of Cybercrime Law Abuse: LGBTQ+ Rights in MENA and the UN Cybercrime Draft Convention

This is Part II  of a series examining the proposed UN Cybercrime Treaty in the context of LGBTQ+ communities. Part I looks at the draft Convention’s potential implications for LGBTQ+ rights. Part II provides a closer look at how cybercrime laws might specifically impact the LGBTQ+ community and activists in the Middle East and North Africa (MENA) region.

In the digital age, the rights of the LGBTQ+ community in the Middle East and North Africa (MENA) are gravely threatened by expansive cybercrime and surveillance legislation. This reality leads to systemic suppression of LGBTQ+ identities, compelling individuals to censor themselves for fear of severe reprisal. This looming threat becomes even more pronounced in countries like Iran, where same-sex conduct is punishable by death, and Egypt, where merely raising a rainbow flag can lead to being arrested and tortured.

Enter the proposed UN Cybercrime Convention. If ratified in its present state, the convention might not only bolster certain countries' domestic surveillance powers to probe actions that some nations mislabel as crimes, but it could also strengthen and validate international collaboration grounded in these powers. Such a UN endorsement could establish a perilous precedent, authorizing surveillance measures for acts that are in stark contradiction with international human rights law. Even more concerning, it might tempt certain countries to formulate or increase their restrictive criminal laws, eager to tap into the broader pool of cross-border surveillance cooperation that the proposed convention offers. 

The draft convention, in Article 35, permits each country to define its own crimes under domestic laws when requesting assistance from other nations in cross-border policing and evidence collection. In certain countries, many of these criminal laws might be based on subjective moral judgments that suppress what is considered free expression in other nations, rather than adhering to universally accepted standards.

Indeed, international cooperation is permissible for crimes that carry a penalty of four years of imprisonment or more; there's a concerning move afoot to suggest reducing this threshold to merely three years. This is applicable whether the alleged offense is cyber or not. Such provisions could result in heightened cross-border monitoring and potential repercussions for individuals, leading to torture or even the death penalty in some jurisdictions. 

While some countries may believe they can sidestep these pitfalls by not collaborating with countries that have controversial laws, this confidence may be misplaced. The draft treaty allows a country to refuse a request if the activity in question is not a crime in its domestic regime (the principle of "dual criminality"). However, given the current strain on the MLAT system, there's an increasing likelihood that requests, even from countries with contentious laws, could slip through the checks. This opens the door for nations to inadvertently assist in operations that might contradict global human rights norms. And where countries do share the same subjective values and problematically criminalize the same conduct, this draft treaty seemingly provides a justification for their cooperation.

One of the more recently introduced pieces of legislation that exemplifies these issues is the Cybercrime Law of 2023 in Jordan. Introduced as part of King Abdullah II’s modernization reforms to increase political participation across Jordan, this law was issued hastily and without sufficient examination of its legal aspects, social implications, and impact on human rights. In addition to this new law, the pre-existing cybercrime law in Jordan has already been used against LGBTQ+ people, and this new law expands its capacity to do so. This law, with its overly broad and vaguely defined terms, will severely restrict individual human rights across that country and will become a tool for prosecuting innocent individuals for their online speech. 

Article 13 of the Jordan law expansively criminalizes a wide set of actions tied to online content branded as “pornographic,” from its creation to distribution. The ambiguity in defining what is pornographic could inadvertently suppress content that merely expresses various sexualities, mistakenly deeming them as inappropriate. This goes beyond regulating explicit material; it can suppress genuine expressions of identity. The penalty for such actions entails a period of no less than six months of imprisonment. 

Meanwhile, the nebulous wording in Article 14 of Jordan's laws—terms like “expose public morals,” “debauchery,” and “seduction”—is equally concerning. Such vague language is ripe for misuse, potentially curbing LGBTQ+ content by erroneously associating diverse sexual orientation with immorality. Both articles, in their current form, cast shadows on free expression and are stark reminders that such provisions can lead to over-policing online content that is not harmful at all. During debates on the bill in the Jordanian Parliament, some MPs claimed that the new cybercrime law could be used to criminalize LGBTQ+ individuals and content online. Deputy Leader of the Opposition, Saleh al Armouti, went further and claimed that “Jordan will become a big jail.” 

Additionally, the law imposes restrictions on encryption and anonymity in digital communications, preventing individuals from safeguarding their rights to freedom of expression and privacy. Article 12 of the Cybercrime Law prohibits the use of Virtual Private Networks (VPNs) and other proxies, with at least six months imprisonment or a fine for violations. 

This will force people in Jordan to choose between engaging in free online expression or keeping their personal identity private. More specifically, this will negatively impact LGBTQ+ people and human rights defenders in Jordan who particularly rely on VPNs and anonymity to protect themselves online. The impact of Article 12 is exacerbated by the fact that there is no comprehensive data privacy legislation in Jordan to protect people’s rights during cyber attacks and data breaches.  

This is not the first time Jordan has limited access to information and content online. In December 2022, Jordanian authorities blocked TikTok to prevent the dissemination of live updates and information during the workers’ protests in the country's south, and authorities there previously had blocked Clubhouse as well.

This crackdown on free speech has particularly impacted journalists, such as the recent arrest of Jordanian journalist Heba Abu Taha for criticizing Jordan’s King over his connections with Israel. Given that online platforms like TikTok and Twitter are essential for activists, organizers, journalists, and everyday people around the world to speak truth to power and fight for social justice, the restrictions placed on free speech by Jordan’s new Cybercrime Law will have a detrimental impact on political activism and community building across Jordan.

People across Jordan have protested the law and the European Union has  expressed concern about how the law could limit freedom of expression online and offline. In August, EFF and 18 other civil society organizations wrote to the King of Jordan, calling for the rejection of the country’s draft cybercrime legislation. With the law now in effect, we urge Jordan to repeal the Cybercrime Law 2023.

Jordan’s Cybercrime Law has been said to be a “true copy” of the United Arab Emirates (UAE) Federal Decree Law No. 34 of 2021 on Combatting Rumors and Cybercrimes. This law replaced its predecessor, which had been used to stifle expression critical of the government or its policies—and was used to sentence human rights defender Ahmed Mansoor to 10 years in prison. 

The UAE’s new cybercrime law further restricts the already heavily-monitored online space and makes it harder for ordinary citizens, as well as journalists and activists, to share information online. More specifically, Article 22 mandates prison sentences of between three and 15 years for those who use the internet to share “information not authorized for publishing or circulating liable to harm state interests or damage its reputation, stature, or status.” 

In September 2022, Tunisia passed its new cybercrime law in Decree-Law No. 54 on “combating offenses relating to information and communication systems.” The wide-ranging decree has been used to stifle opposition free speech, and mandates a five-year prison sentence and a fine for the dissemination of “false news” or information that harms “public security.” In the year since Decree-Law 54 was enacted, authorities in Tunisia have prosecuted media outlets and individuals for their opposition to government policies or officials. 

The first criminal investigation under Decree-Law 54 saw the arrest of student Ahmed Hamada in October 2022 for operating a Facebook page that reported on clashes between law enforcement and residents of a neighborhood in Tunisia. 

Similar tactics are being used in Egypt, where the 2018 cybercrime law, Law No. 175/2018, contains broad and vague provisions to silence dissent, restrict privacy rights, and target LGBTQ+ individuals. More specifically, Articles 25 and 26 have been used by the authorities to crackdown on content that allegedly violates “family values.” 

Since its enactment, these provisions have also been used to target LGBTQ+ individuals across Egypt, particularly regarding the publication or sending of pornography under Article 8, as well as illegal access to an information network under Article 3. For example, in March 2022 a court in Egypt charged singers Omar Kamal and Hamo Beeka with “violating family values” for dancing and singing in a video uploaded to YouTube. In another example, police have used cybercrime laws to prosecute LGBTQ+ individuals for using dating apps such as Grindr.

And in Saudi Arabia, national authorities have used cybercrime regulations and counterterrorism legislation to prosecute online activism and stifle dissenting opinions. Between 2011 and 2015, at least 39 individuals were jailed under the pretense of counterterrorism for expressing themselves online—for composing a tweet, liking a Facebook post, or writing a blog post. And while Saudi Arabia has no specific law concerning gender identity and sexual orientation, authorities have used the 2007 Anti-Cyber Crime Law to criminalize online content and activity that is considered to impinge on “public order, religious values, public morals, and privacy.” 

These provisions have been used to prosecute individuals for peaceful actions, particularly since the Arab Spring in 2011. More recently, in August 2022, Salma al-Shehab was sentenced to 34 years in prison with a subsequent 34-year travel ban for her alleged “crime” of sharing content in support of prisoners of conscience and women human rights defenders.

These cybercrime laws demonstrate that if the proposed UN Cybercrime Convention is ratified in its current form with its broad scope, it would authorize domestic surveillance for the investigation of any offenses, such as those in Articles 12, 13, and 14 of Jordan's law. Additionally, the convention could authorize international cooperation for investigation of crimes penalized with three or four years of imprisonment, as seen in countries such as the UAE, Tunisia, Egypt, and Saudi Arabia.

As Canada warned (at minute 01:56) at the recent negotiation session, these expansive provisions in the Convention permit states to unilaterally define and broaden the scope of criminal conduct, potentially paving the way for abuse and transnational repression. While the Convention may incorporate some procedural safeguards, its far-reaching scope raises profound questions about its compatibility with the key tenets of human rights law and the principles enshrined in the UN Charter.

The root problem lies not in the severity of penalties, but in the fact that some countries criminalize behaviors and expression that are protected under international human rights law and the UN Charter. This is alarming, given that numerous laws affecting the LGBTQ+ community carry penalties within these ranges, making the potential for misuse of such cooperation profound.

In a nutshell, the proposed UN treaty amplifies the existing threats to the LGBTQ+ community. It endorses a framework where nations can surveil benign activities such as sharing LGBTQ+ content, potentially intensifying the already-precarious situation for this community in many regions.

Online, the lack of legal protection of subscriber data threatens the anonymity of the community, making them vulnerable to identification and subsequent persecution. The mere act of engaging in virtual communities, sharing personal anecdotes, or openly expressing relationships could lead to their identities being disclosed, putting them at significant risk.

Offline, the implications intensify with amplified hesitancy to participate in public events, showcase LGBTQ+ symbols, or even undertake daily routines that risk revealing their identity. The draft convention's potential to bolster digital surveillance capabilities means that even private communications, like discussions about same-sex relationships or plans for LGBTQ+ gatherings, could be intercepted and turned against them. 

To all member states: This is a pivotal moment. This is our opportunity to ensure the digital future is one where rights are championed, not compromised. Pledge to protect the rights of all, especially those communities like the LGBTQ+ that are most vulnerable. The international community must unite in its commitment to ensure that the proposed convention serves as an instrument of protection, not persecution.



Is Your State’s Child Safety Law Unconstitutional? Try Comprehensive Data Privacy Instead

Comprehensive data privacy legislation is the best way to hold tech companies accountable in our surveillance age, including for harm they do to children. Well-written privacy legislation has the added benefit of being constitutional—unlike the flurry of laws that restrict content behind age verification requirements that courts have recently blocked. Such misguided laws do little to protect kids while doing much to invade everyone’s privacy and speech.

Courts have issued preliminary injunctions blocking laws in Arkansas, California, and Texas because they likely violate the First Amendment rights of all internet users. EFF has warned that such laws were bad policy and would not withstand court challenges. Nonetheless, different iterations of these child safety proposals continue to be pushed at the state and federal level.

The answer is to re-focus attention on comprehensive data privacy legislation, which would address the massive collection and processing of personal data that is the root cause of many problems online. Just as important, it is far easier to write data privacy laws that are constitutional. Laws that lock online content behind age gates can almost never withstand First Amendment scrutiny because they frustrate all internet users’ rights to access information and often impinge on people’s right to anonymity.

It Is Comparatively Easy to Write Data Privacy Laws That Are Constitutional

EFF has long pushed for strong comprehensive commercial data privacy legislation and continues to do so. Data privacy legislation has many components. But at its core, it should minimize the amount of personal data that companies process, give users certain rights to control their personal data, and allow consumers to sue when the law is violated.

EFF has argued that privacy laws pass First Amendment muster when they have a few features that ensure the law reasonably fits its purpose. First, they regulate the commercial processing of personal data. Second, they do not impermissibly restrict the truthful publication of matters of public concern. And finally, the government’s interest and law’s purpose is to protect data privacy; expand the free expression that privacy enables; and protect the security of data against insider threats, hacks, and eventual government surveillance. If so, the privacy law will be constitutional if the government shows a close fit between the law’s goals and its means.

EFF made this argument in support of the Illinois Biometric Information Privacy Act (BIPA), and a law in Maine that limits the use and disclosure of personal data collected by internet service providers. BIPA, in particular, has proved wildly important to biometric privacy. For example, it led to a settlement that prohibits the company Clearview AI from selling its biometric surveillance services to law enforcement in the state. Another settlement required Facebook to pay hundreds of millions of dollars for its policy (since repealed) of extracting faceprints from users without their consent.

Courts have agreed. Privacy laws that have been upheld under the First Amendment, or cited favorably by courts, include those that regulate biometric data, health data, credit reports, broadband usage data, phone call records, and purely private conversations.

The Supreme Court, for example, has cited the federal 1996 Health Insurance Portability and Accountability Act (HIPAA) as an example of a “coherent” privacy law, even when it struck down a state law that targeted particular speakers and viewpoints. Additionally, when evaluating the federal Wiretap Act, the Supreme Court correctly held that the law cannot be used to prevent a person from publishing legally obtained communications on matters of public concern. But it otherwise left in place the wiretap restrictions that date back to 1934, designed to protect the confidentiality of private conversations.

It Is Nearly Impossible to Write Age Verification Requirements That Are Constitutional. Just Ask Arkansas, California, and Texas

Federal courts have recently granted preliminary injunctions that block laws in Arkansas, California, and Texas from going into effect because they likely violate the First Amendment rights of all internet users. While the laws differ from each other, they all require (or strongly incentivize) age verification for all internet users.

The Arkansas law requires age verification for users of certain social media companies, which EFF strongly opposes, and bans minors from those services without parental consent. The court blocked it. The court reasoned that the age verification requirement would deter everyone from accessing constitutionally protected speech and burden anonymous speech. EFF and ACLU filed an amicus brief against this Arkansas law.

In California, a federal court recently blocked the state’s Age-Appropriate Design Code (AADC) under the First Amendment. Significantly, the AADC strongly incentivized websites to require users to verify their age. The court correctly found that age estimation is likely to “exacerbate” the problem of child security because it requires everyone “to divulge additional personal information” to verify their age. The court blocked the entire law, even some privacy provisions we’d like to see in a comprehensive privacy law if they were not intertwined with content limitations and age-gating. EFF does not agree with the court’s reasoning in its entirety because it undervalued the state’s legitimate interest in and means of protecting people’s privacy online. Nonetheless, EFF originally asked the California governor to veto this law, believing that true data privacy legislation has nothing to do with access restrictions.

The Texas law requires age verification for users of websites that post sexual material, and exclusion of minors. The law also requires warnings about sexual content that the court found unsupported by evidence. The court held both provisions are likely unconstitutional. It explained that the age verification requirement, in particular, is “constitutionally problematic because it deters adults’ access to legal sexually explicit material, far beyond the interest of protecting minors.” EFF, ACLU, and other groups filed an amicus brief against this Texas law.

Support Comprehensive Privacy Legislation That Will Stand the Test of Time

Courts will rightly continue to strike down similar age verification and content blocking laws, just as they did 20 years ago. Lawmakers can and should avoid this pre-determined fight and focus on passing laws that will have a lasting impact: strong, well-written comprehensive data privacy.

California Takes Some Big Steps for Digital Rights

13 October 2023 at 11:37

California often sets the bar for technology legislation across the country. This year, the state enacted several laws that strengthen consumer digital rights.

The first big win to celebrate? Californians now enjoy the right to repair. S.B. 244, authored by California Sen. Susan Eggman, makes it easier for individuals and independent repair shops to access materials and parts needed for maintenance on electronics and appliances. That means that Californians with a broken phone screen or a busted washing machine will have many more options for getting them fixed.

S.B. 244 is one of the strongest right-to-repair laws in the country, and caps off a strong couple of years of progress on this issue. This is a huge victory for consumers, pushed by a dedicated group of advocates led by the California Public Interest Research Group, and we're excited to keep pushing to ensure that people have the freedom to tinker.

California's law differs from other right-to-repair laws in a few ways. For one, by building on categories set in the state's warranty laws, S.B. 244 establishes that you'll be able to get documentation, tools, and parts for devices for three years for products that cost between $50 and $99.99. For products that cost $100 or more, those will be available for seven years. Even though some electronics are not included, such as video game consoles, it still raises the bar for other right-to-repair bills.

Another significant win comes with the signing of S.B. 362, also known as the CA Delete Act, which was authored by California Sen. Josh Becker. This bill was supported by a coalition of advocates led by Privacy Rights Clearinghouse and Californians for Consumer Privacy and builds on the state's landmark data privacy law and its data broker registry to make it easier for anyone to exert greater control over their privacy. Despite serious pushback from advertisers, California Governor Gavin Newsom signed this law, which also requires data brokers to report more information about what data they collect on consumers and strengthens enforcement mechanisms against data brokers who fail to comply with the reporting requirement.

This law is an important, common-sense measure that makes rights established by the California Consumer Privacy Act more user-friendly; EFF was proud to support it. 

In addition to these big wins, several California bills we supported are now law. These include measures that will broaden protections for health care data, reproductive data, immigration status data, as well as facilitate better broadband access.

Of course, not everything went as EFF would have liked. Governor Newsom signed A.B. 1394—a bill EFF opposed because it's likely to incentivize companies to censor protected speech to avoid liability. A.B. 1394 follows a troubling trend we've seen in several state legislatures, including in California, when lawmakers attempt to address children's online safety. In seeking to protect children, bills such as these run a high risk of censoring protected speech.

As we wrote in our letters opposing this bill, "We have seen this happen with similarly well-intentioned laws. The federal Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) ostensibly sought to criminalize sex trafficking, but swept up Internet speech about sex, sex workers, and sexual freedom, including harm reduction information and speech advocating decriminalization of prostitution. A.B. 1394 could follow a similar path, in which companies fearing the consequences of the law cast an overbroad net and remove information on how to prevent commercial sexual exploitation of minors or support groups for victims. Failing to comply with a notice could be construed as negligence under this bill as written."

We were encouraged to see that some other bills that raised similar concerns did not advance through the legislature. Rather than pursue these laws that facilitate censorship, EFF recommends that lawmakers consider comprehensive data privacy laws that address the massive collection and processing of personal data that is the root cause of many problems online.

As always, we want to acknowledge how much your support has helped our advocacy in California this year. Every person who takes the time to send a message or make a call to your legislators helps to tip the scales. Your voices are invaluable, and they truly make a difference.

Colorado Supreme Court Upholds Keyword Search Warrant

Today, the Colorado Supreme Court became the first state supreme court in the country to address the constitutionality of a keyword warrant—a digital dragnet tool that allows law enforcement to identify everyone who searched the internet for a specific term or phrase. In a weak and ultimately confusing opinion, the court upheld the warrant, finding the police relied on it in good faith. EFF filed two amicus briefs and was heavily involved in the case.

The case is People v. Seymour, which involved a tragic home arson that killed several people. Police didn’t have a suspect, so they used a keyword warrant to ask Google for identifying information on anyone and everyone who searched for variations on the home’s street address in the two weeks prior to the arson.

Like geofence warrants, keyword warrants cast a dragnet that requires a provider to search its entire reserve of user data—in this case, queries by one billion Google users. Police generally have no identified suspects; instead, the sole basis for the warrant is the officer’s hunch that the suspect might have searched for something in some way related to the crime.

Keyword warrants rely on the fact that it is virtually impossible to navigate the modern Internet without entering search queries into a search engine like Google's. By some accounts, there are over 1.15 billion websites, and tens of billions of webpages. Google Search processes as many as 100,000 queries every second. Many users have come to rely on search engines to such a degree that they routinely search for the answers to sensitive or unflattering questions that they might never feel comfortable asking a human confidant, even friends, family members, doctors, or clergy. Over the course of months and years, there is little about a user’s life that will not be reflected in their search keywords, from the mundane to the most intimate. The result is a vast record of some of users’ most private and personal thoughts, opinions, and associations.

In the Seymour opinion, the four-justice majority recognized that people have a constitutionally-protected privacy interest in their internet search queries and that these queries impact a person’s free speech rights. The federal Supreme Court has held that warrants like this one that target speech are highly suspect so courts must apply constitutional search-and-seizure requirements with “scrupulous exactitude.” Despite recognizing this directive to engage in careful, in-depth analysis, the Seymour majority’s reasoning was cursory and at points mistaken. For example, although the court found that the Colorado constitution protects users’ privacy interests in their search queries, it held that the Fourth Amendment does not, due to the third party doctrine, because federal courts have held that there is no expectation of privacy in IP addresses. However, this overlooks the queries themselves, which many courts have suggested are more akin to the location information that was found to be protected in Carpenter v. United States. Similarly, the Colorado court neglected to address the constitutionality of Google’s initial search of all its users’ search queries because it found that the things seized—users’ queries and IP addresses—were sufficiently narrow. Finally, the court merely assumed without deciding that the warrant lacked probable cause, a shortcut that allowed the court to overlook the warrant's facial deficiency and therefore uphold it on the “good faith exception.”

If the majority had truly engaged with the deep constitutional issues presented by this keyword warrant, it would have found, as the three-justices dissenting on this point did, that keyword warrants “are tantamount to a high-tech version of the reviled ‘general warrants’ that first gave rise to the protections in the Fourth Amendment.” They lack probable cause because a mere hunch that some unknown person might have searched for a specific phrase related to the crime is insufficient to support a search of everyone’s search queries, let alone a specific, previously unnamed individual. And keyword warrants are insufficiently particular because they do next to nothing to narrow the universe of the search.

We are disappointed in the result in this case. Keyword warrants not only have the potential to implicate innocent people, they allow the government to target people for sensitive search terms like the drug mifepristone, or the names of gender-affirming healthcare providers, or information about psychedelic drugs. Even searches that refer to crimes or acts of terror are not themselves criminal in all or even most cases (otherwise historians, reporters, and crime novelists could all be subject to criminal investigation). Dragnet warrants that target speech have no place in a democracy, and we will continue to challenge them in the courts and to support legislation to ban them entirely.

Privacy Advocates to TSA: Slow Down Plans for mDLs

18 October 2023 at 17:08

A digital form of identification should have at least the same privacy and security protections as a physical one, and arguably more, because the standards governing them are so new and untested. This is at the heart of comments EFF and others submitted recently. Why now? Well, in 2021 DHS issued a call for comments on mobile driver’s licenses (mDLs). Since then the Transportation Security Administration (TSA) has been working to make mDLs an acceptable form of identification at airports, and more states have adopted mDLs, either through a state-sponsored app or through Apple and Google Wallet.

With the TSA’s proposed mDL rules, we ask: what’s the hurry? The agency’s rush to mDLs is ill-advised. For example, many mDL privacy guards are not yet well thought out, the standards referenced are not generally accessible to the public, and the scope for mDLs will reach beyond the context of an airport security line.

And so, EFF submitted comments with the American Civil Liberties Union (ACLU), Center for Democracy & Technology (CDT), and Electronic Privacy Information Center (EPIC) to the TSA. We object to the agency’s proposed rules for waiving current REAL ID regulations for mobile driver’s licenses. Such premature federal action can undermine privacy, information security, democratic control, and transparency in the rollout of mDLs and other digital identification.

Even though standards bodies like the International Organization for Standardization (ISO) have frameworks for mDLs, they do not address various issues, such as an mDL potentially “phoning home” every time it is scanned. The privacy guards are still lacking, and it is left up to each state to implement them in its own way. With the TSA’s proposed waiver process, mDL development will likely be even more fractured, with some implementations better than others. The same fragmentation happened with digital vaccine credentials.

Another concern is that the standards referenced in the TSA’s proposed rules are controlled by private, closed-off groups like the American Association of Motor Vehicle Administrators (AAMVA) and the ISO process that generated its specification 18013-5:2021. These standards have not been informed by enough transparency and public scrutiny. Moreover, there are other, more openly discussed standards that could open up interoperability. The lack of guidance around provisioning, storage, and privacy-preserving approaches is also a major cause for concern. Privacy should not be an afterthought, and we should not follow the “fail fast” model with such sensitive information.

Considering the mission and methods of the TSA, that agency should not be at the helm of creating nationwide mDL rules. Doing so could lead to a national digital identity system, which EFF has long opposed, and would be an overreach of the agency’s position, extending far outside the airport.

Well-meaning intentions to let states “innovate” aside, mDLs done slowly and right are a bigger win than mDLs done fast and potentially harmfully. Privacy safeguards need innovation, too, and the privacy risk is immense when it comes to digital identity documents.

What to Do If You're Concerned About the 23andMe Breach

20 octobre 2023 à 12:53

In early October, a bad actor claimed they were selling account details from the genetic testing service 23andMe, including alleged data on one million users of Ashkenazi Jewish descent and another 100,000 users of Chinese descent. By mid-October, this had expanded to another four million accounts from the general user base. The data includes display name, birth year, sex, and some details about genetic ancestry results, but no genetic data. There's nothing you can do if your data was already accessed, but it's a good time to reconsider how you're using the service to begin with.

What Happened

In a blog post, 23andMe claims the bad actors accessed the accounts through "credential stuffing": the practice of taking usernames and passwords leaked in a previous data breach and trying them on another website, in the hope that people have reused passwords.

Details about any specific accounts affected are still scant, but we do know some broad strokes. TechCrunch found the data may have been first leaked back in August when a bad actor posted on a hacking forum that they'd accessed 300 terabytes of stolen 23andMe user data. At the time, not much was made of the supposed breach, but then in early October a bad actor posted a data sample on a different forum claiming that the full set of data contained 1 million data points about people with Ashkenazi Jewish ancestry. In a statement to The Washington Post a 23andMe representative noted that this "would include people with even 1% Jewish ancestry." Soon after, another post claimed they had data on 100,000 Chinese users. Then, on October 18, yet another dataset showed up on the same forum that included four million users, with the poster claiming it included data from "the wealthiest people living in the U.S. and Western Europe on this list." 

23andMe suggests that the bad actors compiled the data from accounts using the optional "DNA Relatives" feature, which allows 23andMe users to automatically share data with others on the platform who they may be relatives with. 

Basically, it appears an attacker took username and password combinations from previous breaches and tried those combinations to see if they worked on 23andMe accounts. When logins worked, they scraped all the information they could, including all the shared data about relatives if both the relatives and the original account opted into the DNA Relatives feature.

That's all we know right now. 23andMe says it will continue updating its blog post here with new information as it has it.

Why It Matters

Genetic information is an important tool in testing for disease markers and researching family history, but there are no federal laws that clearly protect users of online genetic testing sites like 23andMe and Ancestry.com. The ability to research family history and disease risk shouldn’t carry the risk that our data will be accessible in data breaches, through scraped accounts, by law enforcement, insurers, or in other ways we can't foresee. 

It's still unclear if the data is deliberately targeting the Ashkenazi Jewish population or if it's a tasteless way to draw attention to the data sale, but the fact the data can be used to target ethnic groups is an unsettling use. 23andMe pitches "DNA Relatives" almost like a social network, and a fun way to find a second cousin or two. There are some privacy guardrails on using the feature, like the option to hide your full name, but with a potentially full family tree otherwise available an individual's privacy choices here may not be that protective. 

23andMe is generally one of the better actors in this space. They require an individualized warrant for police access to their data, don't allow direct access to all data (unlike GEDmatch and FTDNA), and push back on overbroad warrants. But putting the burden on its customers to use unique passwords, and making account protection features like two-factor authentication opt-in instead of required, is an unfortunate look for a company that handles sensitive data.

Reusing passwords is a common practice, but instead of blaming its customers, 23andMe should be doing more to make its default protections stronger. Features like requiring two-factor authentication and frequent privacy check-up reminders, like those offered by most social networks these days, could go a long way to help users reconsider and better understand their privacy.

How to Best Protect Your Account

If your data is included in this stolen data set, there's not much you can do to get your data back, nor is there a way to search through it to see if your information is included. But you should log into your 23andMe account to make some changes to your security and privacy settings to protect against any issues in the future:

  • 23andMe is currently requiring all users to change their passwords. When you create your new one, be sure to use a unique password. A password manager can help make this easier. A password manager can also usually tell you if passwords of yours have turned up in a known breach (a minimal sketch of how that kind of check can work appears after this list), but in either case you should create a unique password for each site.
  • Enable two-factor authentication on your 23andMe account by following the directions here. That way, logging in requires not only your username and password but also a second factor, in this case a code from a two-factor authentication app like Authy or Google Authenticator.
  • Change your display name in DNA Relatives so it's just your initials, or consider disabling this feature entirely if you don't use it. 
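For the curious, here is a minimal sketch of how the breached-password check mentioned above can work without your password ever leaving your machine in usable form. It assumes the free Have I Been Pwned "range" API (our choice for illustration; 23andMe and individual password managers each have their own mechanisms): only the first five characters of the password's SHA-1 hash are sent, and the matching is done locally.

    import hashlib
    import urllib.request

    def breach_count(password: str) -> int:
        """Return how many times a password appears in known breach corpora.

        Uses the Have I Been Pwned k-anonymity range API: only the first five
        hex characters of the SHA-1 hash are ever sent over the network."""
        digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = digest[:5], digest[5:]
        url = f"https://api.pwnedpasswords.com/range/{prefix}"
        with urllib.request.urlopen(url) as response:
            for line in response.read().decode("utf-8").splitlines():
                candidate, _, count = line.partition(":")
                if candidate == suffix:
                    return int(count)
        return 0

    if __name__ == "__main__":
        # A famous example passphrase; expect a nonzero count.
        print(breach_count("correct horse battery staple"))

If the count is nonzero, the password has appeared in a breach and should be retired everywhere it was used.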

Taking these steps may not protect against other unforeseen privacy invasions, but it can at least better protect your account from the rest of the potential issues we know exist today.

How to Download and Delete Your Data

If this situation makes you uneasy with your data being on the platform, or you've already gotten out of it what you wanted, then you may want to delete your account. But before you do so, consider downloading the data for your own records. To download your data:

  1. Log into your 23andMe account and click your username, then "Settings." 
  2. Scroll down to the bottom where it says "23andMe Data" and click "View."
  3. Here, you'll find the option to download various parts of your 23andMe data. The most important ones to consider are:
    1. The "Reports Summary" includes details like the "Wellness Reports," "Ancestry Reports," and "Traits Reports."
    2. The "Ancestry Composition Raw Data" is the company's interpretation of your raw genetic data.
    3. If you were using the DNA Relatives feature, the "Family Tree Data" includes all the information about your relatives. Based on the descriptions of the data we've seen, this sounds like the data the bad actors collected.
    4. You can also download the "Raw data," which is the uninterpreted version of your DNA. 

There are other types of data you can download on this page, though much of it will not be of use to you without special software. But there's no harm in downloading everything.

Once you have that data downloaded, follow the company's guide for deleting your account. The button to start the process is located on the bottom of the same account page where you downloaded data.

Our DNA contains our entire genetic makeup. It can reveal where our ancestors came from, who we are related to, our physical characteristics, and whether we are likely to get genetically determined diseases. This incident is an example of why this matters, and how certain features that may seem useful in the moment can be weaponized in novel ways. For more information about genetic privacy, see our Genetic Information Privacy legal overview, and other Health Privacy-related topics on our blog.

How GoGuardian Invades Student Privacy

Par : Jason Kelley
30 octobre 2023 à 15:54

This post was co-authored by legal intern Kate Prince.

Jump to our detailed report about GoGuardian and student monitoring tools.

GoGuardian is a student monitoring tool that watches over twenty-seven million students across ten thousand schools, but what it does exactly, and how well it works, isn’t easy for students to know. To learn more about its functionality, accuracy, and impact on students, we filed dozens of public records requests and analyzed tens of thousands of results from the software. Using data from multiple schools in both red and blue states, what we uncovered was that, by design, GoGuardian is a red flag machine—its false positives heavily outweigh its ability to accurately determine whether the content of a site is harmful. This results in tens of thousands of students being flagged for viewing content that is not only benign, but often, educational or informative. 

We identified multiple categories of non-explicit content that are regularly marked as harmful or dangerous, including: College application sites and college websites; counseling and therapy sites; sites with information about drug abuse; sites with information about LGBTQ issues; sexual health sites; sites with information about gun violence; sites about historical topics; sites about political parties and figures; medical and health sites; news sites; and general educational sites. 

To illustrate the shocking absurdity of GoGuardian's flagging algorithm, we have built the Red Flag Machine quiz. Derived from real GoGuardian data, visitors are presented with websites that were flagged and asked to guess what keywords triggered the alert. We have also written a detailed report on our findings, available online here (and downloadable here). 

A screenshot of the front page of the red flag machine quiz and website.

But the inaccurate flagging is just one of the dangers of the software. 

How Does Student Monitoring Software Work? 

Along with apps like Gaggle and Bark, GoGuardian is used to proactively monitor primarily middle and high school students, giving schools access to an enormous amount of sensitive student data which the company can also access. In some cases, this has even given teachers the ability to view student webcam footage without their consent when they are in their homes. In others, this sort of software has inaccurately mischaracterized student behavior as dangerous or even outed students to their families.

Though some privacy invasions and errors may be written off as the unintentional costs of protecting students, even commonly used features of these monitoring apps are cause for concern. GoGuardian lets school officials track trends in student search histories. It identifies supposedly “at risk” students and gathers location data on where and when a device is being used, allowing anyone with access to the data to create a comprehensive profile of the student. It flags students for mundane activity, and sends alerts to school officials, parents, and potentially, police.

These companies tout their ability to make the lives of administrators and teachers easier and their students safer. Instead of having to dash around a classroom to check everyone’s computers, teachers can instead sit down and watch a real-time stream of their students’ online activity. They can block sites, get alerts when students are off task, and directly message those who might need help. And during the pandemic, many of these tools were offered to schools free of charge, exacerbating the surveillance while minimizing students’ and parents’ opportunity to push back. Along with the increased use of school-issued devices, which are more common in marginalized communities, this has created an atmosphere of hypercharged spying on students.

This problem isn’t new. In 2015, EFF submitted a complaint to the FTC that Google’s Apps for Education (GAFE) software suite was collecting and data mining school children’s personal information, including their Internet searches. In a partial victory, Google changed its tune and began explicitly stating that even though they do collect information on students’ use of non-GAFE services, they treat that information as “student personal information” and do not use it to target ads.

But the landscape has shifted since then. The use of “edtech” software has grown considerably, and with monitoring-specific apps like GoGuardian and Gaggle, students are being taught by our schools that they have no right to privacy and that they can always be monitored.  

Knowing how you’re being surveilled—and how accurate that surveillance is—must be the first step to fighting back, and protecting your privacy.  This blog is a run-down of some of the most common GoGuardian features.

GoGuardian Admin is God Mode for School Administrators

School administrators using GoGuardian’s “Admin” tool have nearly unfettered access to huge amounts of data about students, including browsing histories, documents, videos, app and extension data, and content filtering and alerts. It’s unclear why so much data is available to administrators, but GoGuardian makes it easy for school admins to view detailed information about students that could follow them for the rest of their lives. (The Center for Democracy & Technology has released multiple reports indicating that student monitoring software like GoGuardian is primarily used for disciplinary, rather than safety, reasons.) Administrators can also set up “alerts” for when a student is viewing “offensive” content, though it’s not clear what is and is not “offensive,” or to whom. These alerts could be used to stifle a student’s First Amendment right to receive information: for example, if a school decides that anything from an opposing political party or anything related to the LGBTQ community is harmful, it can prevent students from viewing it. These flags and filters can be applied to all students, or individualized for specific students, and allow administrators to see everything a student looks at online.

GoGuardian claims that they de-identify data before sharing it with third parties or other entities. But this data can easily be traced back to an individual. This means advertisers could target students based on their internet usage, something explicitly prohibited by federal law and in the student privacy pledge taken by many EdTech companies. 

GoGuardian Teacher: A 24/7 Lesson in Surveillance

GoGuardian gives teachers a real-time feed of their students’ screens and allows them to block any website for individuals or groups. Students have no way of opting out of these monitoring sessions, called “scenes,” and are never asked to confirm that they know they are being monitored. The only indication is the appearance of an extension in their browser.

This monitoring can happen whether or not a student is on school grounds. “Scenes” can last for eight hours and can be scheduled in advance to start at any time of the day or night; if a teacher schedules one scene to begin immediately after another ends, they could monitor a student 24/7. During a scene, GoGuardian collects minute-by-minute records of what is on a student’s screen and what tabs they have open, all of which can be easily viewed in a timeline.

GoGuardian takes no responsibility for these potential abuses of its technology, instead putting the onus on school administrators to anticipate abuse and put systems in place to prevent it. In the meantime, GoGuardian is still accessing and collecting the data.

GoGuardian Beacon: Replacing Social Workers with Big Brother

GoGuardian’s “Beacon” tool supposedly uses machine learning and AI to monitor student behavior for flagged key terms, track their history, and provide analysis of their likelihood to commit harmful acts against themselves or others. GoGuardian claims it can detect students who are at risk and “identify students’ online behaviors that could be indicative of suicide or self-harm.” Instead of investing in social workers and counselors, people who are trained to detect exactly this kind of behavior, GoGuardian claims that schools can rely on its tools to do it with algorithms.

GoGuardian touts anecdotal evidence of the system working, but from our research, the flagging inside Beacon may not be much more accurate than its other flagging features. And while schools can determine to whom Beacon sends alerts, if those staffers are not trained in mental health, they may not be able to determine whether an alert is accurate. This could lead to inappropriate interventions by school administrators who erroneously believe a student is in the “active planning” stage of a harmful act. If a student is accused of planning a school shooting when in reality they were researching weapons used during historical events, or of planning a suicide when they were not, that student will likely not trust the administration in the future and will feel their privacy has been violated. You can learn more about GoGuardian Beacon from this detailed documentary by VICE News.

Protecting Students First

Schools should be safe places for students, but they must also be places where students feel safe exploring ideas. Student monitoring software not only hinders that exploration, but endangers those who are already vulnerable. We know it will be an uphill battle to protect students from surveillance software. Still, we hope this research will help people in positions of authority, such as government officials and school administrators, as well as parents and students, to push for the companies that make this software to improve, or to abandon their use entirely.

TAKE the red flag machine quiz

Learn more about our findings in our detailed report.  

Young People May Be The Biggest Target for Online Censorship and Surveillance—and the Strongest Weapon Against Them

Par : Jason Kelley
30 octobre 2023 à 15:54

Over the last year, state and federal legislatures have tried to pass—and in some cases succeeded in passing—legislation that bars young people from digital spaces, censors what they are allowed to see and share online, and monitors and controls when and how they can do it. 

EFF and many other digital rights and civil liberties organizations have fought back against these bills, but the sheer number is alarming. At times it can be nearly overwhelming: there are bills in Texas, Utah, Arkansas, Florida, and Montana; there are federal bills like the Kids Online Safety Act and the Protecting Kids on Social Media Act. And there’s legislation beyond the U.S., like the UK’s Online Safety Bill.

JOIN EFF AT the neon level 

Young people, too, have fought back. In the long run, we believe we’ll win, together—and because of your help. We’ve won before: In the 1990’s, Congress enacted sweeping legislation that would have curtailed online rights for people of all ages. But that law was aimed, like much of today’s legislation, at young people like you. Along with the ACLU, we challenged the law and won core protections for internet rights in a Supreme Court case, Reno v. ACLU, that recognized that free speech on the Internet merits the highest standards of Constitutional protection. The Court’s decision was its first involving the Internet. 

Even before that, EFF was fighting on the side of teens living on the cutting edge of the ‘net (or however they described it then). In 1990, a Secret Service dragnet called Operation Sundevil seized more than 40 computers from young people in 14 American cities. EFF was formed in part to protect those youths.

So the current struggle isn’t new. As before, young people are targeted by governments, schools, and sometimes parents, who either don’t understand or won’t admit the value that online spaces, and technology generally, offer, no matter your age. 

And, as before, today’s youth aren’t handing over their rights. Tens of thousands of you have vocally opposed flawed censorship bills like KOSA. You’re using the digital tools that governments want to strip you of to fight back, rallying together on Discords and across social media to protect online rights. 

If we don’t succeed in legislatures, know that we will push back in courts, and we will continue building technology for a safe, encrypted internet that anyone, of any age, can access without fear of surveillance or government censorship. 

If you’re a young person eager to help protect your online rights, we’ve put together a few of our favorite ways below to help guide you. We hope you’ll join us, however you can.

Here’s How to Take Your Rights With You When You Go Online—At Any Age

Join EFF at a Special “Neon” Level Membership for Just $18

The huge number of young people working hard to oppose the Kids Online Safety Act has been inspiring. Whatever happens, EFF will be there to keep fighting—and you can help us keep up the fight by becoming an EFF member.

We’ve created a special Neon membership level for anyone under 18, at the lowest price we’ve ever offered: just $18 for a year’s membership. If you can, help support the activists, technologists, and attorneys defending privacy, digital creativity, and internet freedom for everyone by becoming an EFF member with a one-time donation. You’ll get a sticker pack (see below), insider briefings, and more.

JOIN EFF at the neon level 

We aren’t verifying any ages for this membership level because we trust you. (And, because we oppose online age verification laws—read more about why here.)

Gift a Neon Membership 

Not a young person, but have one in your life who cares about digital rights? You can also gift a Neon membership! Membership helps us build better tech, better laws, and a better internet at a time when the world needs it most. Every generation must fight for their rights, and now that battle is online. If you know a teen who cares about the internet and technology, help make them an EFF member!

Speak Up with EFF’s Action Center

Young people—and people of every age—have already sent thousands of messages to Congress this year advocating against dangerous bills that would limit their access to online spaces, their privacy, and their ability to speak out online. If you haven’t done so, make sure that legislators writing bills that affect your digital life hear from you by visiting EFF’s Action Center, where you can quickly send messages to your representatives at the federal and state level (and sometimes outside of the U.S., if you live elsewhere). Take our action for KOSA today if you haven’t yet: 

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

Other bills that might interest you, as of October 2023, are the Protecting Kids on Social Media Act and the RESTRICT Act.

If you’re under 18, you should know that many more pieces of legislation at the state level have passed or are pending this year that would impact you. You can always reach out to your representatives even if we don’t have an Action Center message available by finding the legislation here, for example, and the contact info of your rep on their website.

Protect Your Privacy with Surveillance Self-Defense

Protecting yourself online as a young person is often more complicated than it is for others. In addition to threats to your privacy posed by governments and companies, you may also want to protect some private information from schools, peers, and even parents. EFF’s Surveillance Self-Defense hub is a great place to start learning how to think about privacy, and what steps you can take to ensure information about you doesn’t go anywhere you don’t want. 

Fight for Strong Student Rights

Schools have become a breeding ground for surveillance. In 2023, most kids can tell you: surveillance cameras in school buildings are passé. Nearly all online activity in school is filtered and flagged. Children are accused of cheating by algorithms and given little recourse to prove their innocence. Facial recognition and other dangerous, biased biometric scanning is becoming more and more common.

But it’s not all bad. Courts have expanded some student rights recently. And you can fight back in other ways. For a broad overview, use our Privacy for Students guide to understand how potential surveillance and censorship impact you, and what to do about it. If it fits, consider following that guide up with our LGBTQ Youth module.

If you want to know more, take a deep dive into one of the most common surveillance tools in schools—student monitoring software—with our Red Flag Machine project and quiz. We analyzed public records from GoGuardian, a tool used in thousands of schools to monitor the online activity of millions of students, and what we learned is honestly shocking. 

And don’t forget to follow our other Student Privacy work. We regularly dissect developments in school surveillance, monitoring, censorship, and how they can impact you. 

Start a Local Tech or Digital Rights Group 

Don’t work alone! If you have friends or know others in your area that care about the benefits of technology, the internet, digital rights—or think they just might be interested in them—why not form a club? It can be particularly powerful to share why important issues like free speech, privacy, and creativity matter to you, and having a group behind you if you contact a representative can add more weight to your message. Depending on the group you form, you might also consider joining the EFA! (See below.)

Not sure how to meet with other folks in your area? Why not join an already-started online Discord server of young people fighting back against online censorship, or start your own?

Find Allies or Join other Grassroots Groups in the Electronic Frontier Alliance

The Electronic Frontier Alliance is a grassroots network of community and campus organizations across the United States working to educate our neighbors about the importance of digital rights. Groups of young people can be a great fit for the EFA, which includes chapters of Encode Justice, campus groups in computer science, hacking, tech, and more. You can find allies, or if you build your own group, join up with others. On our EFA site you’ll find toolkits on event organizing, talking to media, activism, and more. 

Speak out on Social Media

Social networks are great platforms for getting your message out into the world, cultivating a like-minded community, staying on top of breaking news and issues, and building a name for yourself. Not sure how to make it happen? We’ve got a toolkit to get you started! Also, do a quick search for some of the issues you care about—like “KOSA,” for example—and take a look at what others are saying. (Young TikTok users have made hundreds of videos describing what’s wrong with KOSA, and Tumblr—yes, Tumblr—has multiple anti-KOSA blogs that have gone viral multiple times.) You can always join in the conversation that way. 

Teach Digital Privacy with SEC 

If you’ve been thinking about digital privacy for a while now, you may want to consider sharing that information with others. The Security Education Companion is a great place to start if you’re looking for lesson plans to teach digital security to others.

In College (or Will Be Soon)? Take the Tor University Challenge

In the Tor University Challenge, you can help advance human rights with free and open-source technology, empowering users to defend against mass surveillance and internet censorship. Tor is a service that helps you protect your anonymity while using the Internet. It has two parts: the Tor Browser, which you can download and use to browse the Internet anonymously, and the volunteer network of computers that makes that software work. Universities are great places to run Tor relays because they have fast, stable connections and computer science and IT departments that can work with students to keep a relay running, giving students hands-on cybersecurity experience while they think about global policy, law, and society.

Visit Tor University to get started. 

Learn about Local Surveillance and Fight Back 

Young people don’t just have to worry about government censorship and school surveillance. Law enforcement agencies routinely deploy advanced surveillance technologies in our communities that can be aimed at anyone, but are particularly dangerous for young black and brown people. Our Street-Level Surveillance resources are designed for members of the public, advocacy organizations, journalists, defense attorneys, and policymakers who often are not getting the straight story from police representatives or the vendors marketing this equipment. But at any age, it’s worth learning how automated license plate readers, gunshot detection, and other police equipment works.

Don’t stop there. Our Atlas of Surveillance documents the police tech that’s actually being deployed in individual communities. Search our database of police tech by entering a city, county, state or agency in the United States. 

Follow EFF

Stay educated about what’s happening in the tech world by following EFF. Sign up for our once- or twice-monthly email newsletter, EFFector. Follow us on Meta, Mastodon, Instagram, TikTok, Bluesky, Twitch, YouTube, and Twitter. Listen to our podcast, How to Fix the Internet, for candid discussions of digital rights issues with some of the smartest people working in the field. 


There are so many ways for people of all ages to fight for and protect the internet for themselves and others. (Just take a look at some of the ways we’ve fought for privacy, free speech, and creativity over the years: an airship, an airplane, and a badger; encrypting pretty much the entire web and also cracking insecure encryption to prove a point; putting together a speculative fiction collection and making a virtual reality game—to name just a few.)

Whether you’re new to the fight, or you’ve been online for decades—we’re glad to have you.

VICTORY! California Department of Justice Declares Out-of-State Sharing of License Plate Data Unlawful

California Attorney General Rob Bonta has issued a legal interpretation and guidance for law enforcement agencies around the state that confirms what privacy advocates have been saying for years: It is against the law for police to share data collected from license plate readers with out-of-state or federal agencies. This is an important victory for immigrants, abortion seekers, protesters, and everyone else who drives a car, as our movements expose intimate details about where we’ve been and what we’ve been doing.

Automated license plate readers (ALPRs) are cameras that capture the movements of vehicles and upload the location of the vehicles to a searchable, shareable database. Law enforcement often installs these devices on fixed locations, such as street lights, as well as on patrol vehicles that are used to canvass neighborhoods. It is a mass surveillance technology that collects data on everyone. In fact, EFF research has found that more than 99.9% of the data collected is unconnected to any crime or other public safety interest.

The California State legislature passed SB 34 in 2015 to require basic safeguards for the use of ALPRs. These include a prohibition on California agencies from sharing data with non-California agencies. They also include the publication of a usage policy that is consistent with civil liberties and privacy.

As EFF and other groups such as the ACLU of California, MuckRock News, and the Center for Human Rights and Privacy have demonstrated over and over again through public records requests, many California agencies have either ignored or defied these policies, putting Californians at risk. In some cases, agencies have shared data with hundreds of out-of-state agencies (including in states with abortion restrictions) and with federal agencies (such as U.S. Customs & Border Protection and U.S. Immigration & Customs Enforcement). This surveillance is especially threatening to vulnerable populations, such as migrants and abortion seekers, whose rights are protected in California but not recognized by other states or the federal government.

In 2019, EFF successfully lobbied the legislature to order the California State Auditor to investigate the use of ALPR. The resulting report came out in 2020, with damning findings that agencies were flagrantly violating the law. While state lawmakers have introduced legislation to address the findings, so far no bill has passed. In the absence of new legislative action, Attorney General Bonta's new memo, grounded in SB 34, serves as canon for how local agencies should treat ALPR data.

The bulletin comes after EFF and the California ACLU affiliates sued the Marin County Sheriff in 2021, because his agency was violating SB 34 by sending its ALPR data to federal agencies including ICE and CBP. The case was favorably settled.

Attorney General Bonta’s guidance also follows new advocacy by these groups earlier this year. Along with the ACLU of Northern California and the ACLU of Southern California, EFF released public records from more than 70 law enforcement agencies in California that showed they were sharing data with states that have enacted abortion restrictions. We sent letters to each of the agencies demanding they end the sharing immediately. Dozens complied. Some disagreed with our determination, but nonetheless agreed to pursue new policies to protect abortion access.

Now California’s top law enforcement officer has determined that out-of-state data sharing is illegal and has drafted a model policy. Every agency in California must follow Attorney General Bonta's guidance, review their data sharing, and cut off every out-of-state and federal agency.

Or better yet, they could end their ALPR program altogether.

This Month, The EU Parliament Can Take Action To Stop The Attack On Encryption

Par : Joe Mullin
7 novembre 2023 à 15:10

Update 11/14/2023: The LIBE committee adopted the compromise amendments by a large majority. Once the committee's version of the law becomes the official position of the European Parliament, attention will shift to the Council of the EU. Along with our allies, EFF will continue to advocate that the EU reject proposals to require mass scanning and compromise of end-to-end encryption.

A key European parliamentary committee has taken an important step to defend user privacy, including end-to-end encryption. The Committee on Civil Liberties, Justice and Home Affairs (LIBE) has politically agreed on much-needed amendments to a proposed regulation that, in its original form, would allow for mass-scanning of people’s phones and computers. 

The original proposal from the European Commission, the EU’s executive body, would allow EU authorities to compel online services to analyze all user data and check it against law enforcement databases. The stated goal is to look for crimes against children, including child abuse images. 

But this proposal would have undermined a private and secure internet, which relies on strong encryption to protect the communications of everyone—including minors. It even proposed reporting people to the police as possible child abusers based on AI rifling through their text messages.

Every human being should have the right to have a private conversation. That’s true in the offline world, and we must not give up on those rights in the digital world. We deserve true private communication, not bugs in our pockets. EFF has opposed this proposal since it was introduced.

More than 100 civil society groups joined us in speaking out against this proposal. So did thousands of individuals who signed the petition demanding that the EU “Stop Scanning Me.” 

The LIBE committee has wisely listened to those voices, and major political groups have now endorsed a compromise proposal with language protecting end-to-end encryption. Early reports indicate the protection will be thorough, including a prohibition on client-side scanning, a technique for bypassing encryption.

The compromise proposal also takes out earlier language that could have allowed for mandatory age verification. Such age verification mandates amount to requiring people to show ID cards before they get on the internet; they are not compatible with the rights of adults or minors to speak anonymously when necessary. 

The LIBE committee is scheduled to confirm the new agreement on November 13. The language is not perfect; some parts of the proposal, while not mandating age verification, may encourage its further use. The proposal could also lead to increased scanning of public online material, which may be undesirable depending on how it’s done.

Any time governments access people’s private data, it should be targeted, proportionate, and subject to judicial oversight. The EU legislators should consider this agreement to be the bare minimum of what must be done to protect the rights of internet users in the EU and throughout the world.

Introducing Badger Swarm: New Project Helps Privacy Badger Block Ever More Trackers

Today we are introducing Badger Swarm, a new tool for Privacy Badger that runs distributed Badger Sett scans in the cloud. Badger Swarm helps us continue updating and growing Privacy Badger’s tracker knowledge, as well as continue adding new ways of catching trackers. Thanks to continually expanding Badger Swarm-powered training, Privacy Badger comes packed with its largest blocklist yet.

A line chart showing the growth of blocked domains in Privacy Badger’s pre-trained list from late 2018 (about 300 domains blocked by default) through 2023 (over 2000 domains blocked by default). There is a sharp jump in January 2023, from under 1200 to over 1800 domains blocked by default.

We continue to update and grow Privacy Badger’s pre-trained list. Privacy Badger now comes with the largest blocklist yet, thanks to improved tracking detection and continually expanding training. Can you guess when we started using Badger Swarm?

Privacy Badger is defined by its automatic learning. As we write in the FAQ, Privacy Badger was born out of our desire for an extension that would automatically analyze and block any tracker that violated consent, and that would use algorithmic methods to decide what is and isn’t tracking. But when and where that learning happens has evolved over the years.

When we first created Privacy Badger, every Privacy Badger installation started with no tracker knowledge and learned to block trackers as you browsed. This meant that every Privacy Badger became stronger, smarter, and more bespoke over time. It also meant that all learning was siloed, and new Privacy Badgers didn’t block anything until they got to visit several websites. This made some people think their Privacy Badger extension wasn’t working.

In 2018, we rolled out Badger Sett, an automated training tool for Privacy Badger, to solve this problem. We run Badger Sett scans that use a real browser to visit the most popular sites on the web and produce Privacy Badger data. Thanks to Badger Sett, new Privacy Badgers knew to block the most common trackers from the start, which resolved confusion and improved privacy for new users.

In 2020, we updated Privacy Badger to no longer learn from your browsing by default, as local learning may make you more identifiable to websites. 1 In order to make this change, we expanded the scope of Badger Sett-powered remote learning. We then updated Privacy Badger to start receiving tracker list updates as part of extension updates. Training went from giving new installs a jump start to being the default source of Privacy Badger’s tracker knowledge.

Since Badger Sett automates a real browser, visiting a website takes a meaningful amount of time. That’s where Badger Swarm comes in. As the name suggests, Badger Swarm orchestrates a swarm of auto-driven Privacy Badgers to cover much more ground than a single badger could. On a more technical level, Badger Swarm converts a Badger Sett scan of X sites into N parallel Badger Sett scans of X/N sites. This makes medium scans complete as quickly as small scans, and large scans complete in a reasonable amount of time.
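For readers who want the intuition in code, the sketch below is not Badger Swarm itself (the real project, linked later in this post, orchestrates cloud machines that each drive a full browser); it is a toy illustration of the same idea, with a made-up scan_shard stub standing in for one Badger Sett scan.

    from concurrent.futures import ProcessPoolExecutor

    def shard(sites, n):
        """Split one scan of X sites into N roughly equal shards."""
        return [sites[i::n] for i in range(n)]

    def scan_shard(shard_sites):
        # Stand-in for "run one Badger Sett scan over these sites";
        # in reality each shard drives a real browser and takes a while.
        return {site: "visited" for site in shard_sites}

    def swarm_scan(sites, n_workers=8):
        """Run the shards in parallel and merge the results into one data set."""
        merged = {}
        with ProcessPoolExecutor(max_workers=n_workers) as pool:
            for partial in pool.map(scan_shard, shard(sites, n_workers)):
                merged.update(partial)
        return merged

    if __name__ == "__main__":
        print(len(swarm_scan([f"site{i}.example" for i in range(100)])))

Because each shard finishes in roughly the time of a small scan, the wall-clock time of a large scan shrinks toward the time of its slowest shard.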

Badger Swarm also helps us produce new insights that lead to improved Privacy Badger protections. For example, Privacy Badger now blocks fingerprinters hosted by CDNs, a feature made possible by Badger Swarm-powered expanded scanning. 2

We are releasing Badger Swarm in the hope of providing a helpful foundation to web researchers. Like Badger Sett, Badger Swarm is tailor-made for Privacy Badger. However, also like Badger Sett, we built Badger Swarm so it's simple to use and modify. To learn more about how Badger Swarm works, visit its repository on GitHub.

The world of online tracking isn't slowing down. The dangers caused by mass surveillance on the internet cannot be overstated. Privacy Badger continues to protect you from this pernicious industry, and thanks to Badger Swarm, Privacy Badger is stronger than ever.

To install Privacy Badger, visit privacybadger.org. Thank you for using Privacy Badger!

  • 1. You may want to opt back in to local learning if you regularly browse less popular websites. To do so, visit your Badger’s options page and mark the checkbox for learning to block new trackers from your browsing.
  • 2. As a compromise to avoid breaking websites, CDN domains are allowed to load without access to cookies. However, sometimes the same domain is used to serve both unobjectionable content and obnoxious fingerprinters that do not need cookies to track your browsing. Privacy Badger now blocks these fingerprinters.

Debunking the Myth of “Anonymous” Data

10 novembre 2023 à 08:49

Today, almost everything about our lives is digitally recorded and stored somewhere. Each credit card purchase, personal medical diagnosis, and preference about music and books is recorded and then used to predict what we like and dislike, and—ultimately—who we are. 

This often happens without our knowledge or consent. Personal information that corporations collect from our online behaviors sells for astonishing profits and incentivizes online actors to collect as much as possible. Every mouse click and screen swipe can be tracked and then sold to ad-tech companies and the data brokers that service them. 

In an attempt to justify this pervasive surveillance ecosystem, corporations often claim to de-identify our data. This supposedly removes all personal information (such as a person’s name) from the data point (such as the fact that an unnamed person bought a particular medicine at a particular time and place). Personal data can also be aggregated, whereby data about multiple people is combined with the intention of removing personal identifying information and thereby protecting user privacy. 

Sometimes companies say our personal data is “anonymized,” implying a one-way ratchet where it can never be dis-aggregated and re-identified. But this is not possible—anonymous data rarely stays this way. As Professor Matt Blaze, an expert in the field of cryptography and data privacy, succinctly summarized: “something that seems anonymous, more often than not, is not anonymous, even if it’s designed with the best intentions.” 

Anonymization…and Re-Identification?

Personal data can be considered on a spectrum of identifiability. At the top is data that can directly identify people, such as a name or state identity number, which can be referred to as “direct identifiers.” Next is information indirectly linked to individuals, like personal phone numbers and email addresses, which some call “indirect identifiers.” After this comes data connected to multiple people, such as a favorite restaurant or movie. The other end of this spectrum is information that cannot be linked to any specific person—such as aggregated census data, and data that is not directly related to individuals at all like weather reports.

Data anonymization is often undertaken in two ways. First, some personal identifiers, like our names and social security numbers, might be deleted. Second, other categories of personal information might be modified—such as obscuring our bank account numbers. For example, the Safe Harbor provision contained in the U.S. Health Insurance Portability and Accountability Act (HIPAA) requires that only the first three digits of a ZIP code be reported in scrubbed data.

However, in practice, any attempt at de-identification requires removal not only of your identifiable information, but also of information that can identify you when considered in combination with other information known about you. Here's an example: 

  • First, think about the number of people that share your specific ZIP or postal code. 
  • Next, think about how many of those people also share your birthday. 
  • Now, think about how many people share your exact birthday, ZIP code, and gender. 

According to one landmark study, these three characteristics are enough to uniquely identify 87% of the U.S. population. A different study showed that 63% of the U.S. population can be uniquely identified from these three facts.
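A back-of-the-envelope calculation shows why so few attributes go so far. The figures below are rough approximations chosen for illustration, and the uniform-distribution assumption actually understates how identifying real, unevenly clustered data is:

    # Rough estimate: how many people share one (ZIP code, birthdate, gender) cell?
    population = 330_000_000      # approximate U.S. population
    zip_codes  = 42_000           # approximate number of U.S. ZIP codes
    birthdates = 365 * 80         # roughly 80 birth years in circulation
    genders    = 2                # as typically recorded in these datasets

    cells = zip_codes * birthdates * genders
    print(f"{cells:,} possible combinations")                      # ~2.5 billion
    print(f"{population / cells:.2f} people per cell on average")  # well under 1

With several times more combinations than people, most occupied cells hold exactly one person, which is why the empirical studies above land in the 63-87% range rather than near zero.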

We cannot trust corporations to self-regulate. The financial benefit and business usefulness of our personal data often outweighs our privacy and anonymity. By re-linking the real identity of the person involved (direct identifiers) with that person’s preferences (indirect identifiers), corporations are able to continue profiting from our most sensitive information. For instance, a website that asks supposedly “anonymous” users for seemingly trivial information about themselves may be able to use that information to build a unique profile of an individual.

Location Surveillance

To understand this system in practice, we can look at location data. This includes the data collected by apps on your mobile device about your whereabouts: from the weekly trips to your local supermarket to your last appointment at a health center, an immigration clinic, or a protest planning meeting. The collection of this location data on our devices is sufficiently precise for law enforcement to place suspects at the scene of a crime, and for juries to convict people on the basis of that evidence. What’s more, whatever personal data is collected by the government can be misused by its employees, stolen by criminals or foreign governments, and used in unpredictable ways by agency leaders for nefarious new purposes. And all too often, such high tech surveillance disparately burdens people of color.  

Practically speaking, there is no way to de-identify individual location data since these data points serve as unique personal identifiers of their own. And even when location data is said to have been anonymized, re-identification can be achieved by correlating de-identified data with other publicly available data like voter rolls or information that's sold by data brokers. One study from 2013 found that researchers could uniquely identify 50% of people using only two randomly chosen time and location data points. 

Done right, aggregating location data can work towards preserving our personal rights to privacy by producing non-individualized counts of behaviors instead of detailed timelines of individual location history. For instance, an aggregation might tell you how many people’s phones reported their location as being in a certain city within the last month, but not the exact phone number and other data points that would connect this directly and personally to you. However, there’s often pressure on the experts doing the aggregation to generate granular aggregate data sets that might be more meaningful to a particular decision-maker but which simultaneously expose individuals to an erosion of their personal privacy.  
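As a concrete sketch of the difference, the snippet below illustrates the principle of privacy-preserving aggregation, including a minimum-count threshold to suppress small, re-identifiable buckets. It is an illustration of the idea, not a description of any particular company's or agency's pipeline.

    def aggregate_visits(pings, min_count=50):
        """Turn individual location pings into coarse, non-individualized counts.

        Each ping is a (device_id, city, month) tuple. The output drops device
        IDs entirely and suppresses any (city, month) bucket seen on fewer than
        min_count devices, since small buckets can point back at individuals."""
        devices_per_bucket = {}
        for device_id, city, month in pings:
            devices_per_bucket.setdefault((city, month), set()).add(device_id)
        return {
            bucket: len(devices)
            for bucket, devices in devices_per_bucket.items()
            if len(devices) >= min_count
        }

The threshold is the pressure point described above: the lower it is set, and the finer the buckets, the more "meaningful" the output becomes to a decision-maker and the closer it drifts back toward individual surveillance.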

Moreover, most third-party location tracking is designed to build profiles of real people. This means that every time a tracker collects a piece of information, it needs something to tie that information to a particular person. This can happen indirectly by correlating collected data with a particular device or browser, which might later correlate to one person or a group of people, such as a household. Trackers can also use artificial identifiers, like mobile ad IDs and cookies to reach users with targeted messaging. And “anonymous” profiles of personal information can nearly always be linked back to real people—including where they live, what they read, and what they buy.

For data brokers dealing in our personal information, our data can either be useful for their profit-making or truly anonymous, but not both. EFF has long opposed location surveillance programs that can turn our lives into open books for scrutiny by police, surveillance-based advertisers, identity thieves, and stalkers. We’ve also long blown the whistle on phony anonymization.

As a matter of public policy, it is critical that user privacy is not sacrificed in favor of filling the pockets of corporations. And for any data sharing plan, consent is critical: did each person consent to the method of data collection, and did they consent to the particular use? Consent must be specific, informed, opt-in, and voluntary. 

To Address Online Harms, We Must Consider Privacy First

Every year, we encounter new, often ill-conceived, bills written by state, federal, and international regulators to tackle a broad set of digital topics ranging from child safety to artificial intelligence. These scattershot proposals to correct online harm are often based on censorship and news cycles. Instead of this chaotic approach that rarely leads to the passage of good laws, we propose another solution in a new report: Privacy First: A Better Way to Address Online Harms.

In this report, we outline how many of the internet's ills have one thing in common: they're based on the business model of widespread corporate surveillance online. Dismantling this system would not only be a huge step forward for our digital privacy; it would also raise the floor for serious discussions about the internet's future.

What would this comprehensive privacy law look like? We believe it must include these components:

  • No online behavioral ads.
  • Data minimization.
  • Opt-in consent.
  • User rights to access, port, correct, and delete information.
  • No preemption of state laws.
  • Strong enforcement with a private right of action.
  • No pay-for-privacy schemes.
  • No deceptive design.

A strong comprehensive data privacy law promotes privacy, free expression, and security. It can also help protect children, support journalism, protect access to health care, foster digital justice, limit private data collection to train generative AI, limit foreign government surveillance, and strengthen competition. These are all issues on which lawmakers are actively pushing legislation—both good and bad.

Comprehensive privacy legislation won’t fix everything. Children may still see things that they shouldn’t. New businesses will still have to struggle against the deep pockets of their established tech giant competitors. Governments will still have tools to surveil people directly. But with this one big step in favor of privacy, we can take a bite out of many of those problems, and foster a more humane, user-friendly technological future for everyone.
