
Real-time bidding: a danger to privacy, but also to European security

7 April 2024 at 12:08

Real-time bidding (RTB) is an advertising technology that is ubiquitous on commercial websites and mobile apps. According to a report published last November, this technology raises serious privacy concerns, because it broadcasts sensitive data about users to a large number of entities without adequate security safeguards. The RTB system exposes users to potential risks from malicious state and non-state actors.

RTB technology allows foreign entities and non-state actors to access confidential information about sensitive personnel and key decision-makers in Europe. This data can be obtained directly, by operating demand-side platforms (DSPs), or indirectly from other entities. Moreover, RTB companies often transmit this personal data to Russia and China, where local laws allow security agencies to access it. The wide distribution of RTB data to numerous companies within the EU also increases the risk of access by undesirable actors.

RTB data often contains personal information such as location, timestamps and other identifiers, which makes it easy to identify individuals. This can include sensitive information about their financial situation, their health, their sexual preferences, and their online and offline activities. Even people who use secured devices for work are not immune, since their data still flows through RTB from their personal devices, or from those of their families and contacts.
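
To make concrete what "broadcast" means here, the sketch below builds the kind of payload a single bid request can carry. It is a minimal Python illustration: the field names follow the public OpenRTB 2.x specification, but the values, the app bundle and the request as a whole are hypothetical, not a real capture.

    import json

    # Hypothetical bid request, loosely modelled on the public OpenRTB 2.x
    # specification. Everything below is sent to every demand-side platform
    # (DSP) taking part in the auction, before any bid is even won.
    bid_request = {
        "id": "auction-7f3a9c",                   # one auction among billions per day
        "app": {"bundle": "com.example.dating"},  # hypothetical app identifier
        "device": {
            "ifa": "6D92078A-8246-4BA4-AE5B-76104861E7DC",      # advertising ID, stable per device
            "geo": {"lat": 50.8503, "lon": 4.3517, "type": 1},  # GPS-derived location
            "ip": "192.0.2.17",
            "os": "iOS",
        },
        "user": {"id": "u-58d2c1"},               # bidder-visible user identifier
        "imp": [{"id": "1", "banner": {"w": 320, "h": 50}}],
    }

    # Each participating DSP receives this payload and may log it,
    # whether or not it wins the auction.
    print(json.dumps(bid_request, indent=2))

Joining the advertising ID, the coordinates and the timestamps across millions of such requests is what makes re-identification straightforward.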

Details and examples

The threat posed by RTB is very real, as the following examples show:

  • In the USA, a conservative Catholic group used RTB data from a dating app to reveal that Catholic priests were not celibate, leading one of them to resign when his visits to gay apps and venues were made public.
  • RTB data can indicate a variety of health problems, such as depression, chronic pain, substance abuse or anxiety disorders.
  • Malicious actors can use RTB data to identify a target's children, colleagues and daily commute.
  • A person's financial situation can be exposed, and with it a potential vulnerability to bribery.
  • Political opinions and affiliations can be inferred from RTB data, potentially singling individuals out for exploitation or manipulation, as seen with the "Facebook-Cambridge Analytica" scandal a few years ago.

Proposed solutions

In the face of these threats, we recommend the following actions:

  1. The European Commission should ask the European Data Protection Board to examine the RTB security crisis. Data protection authorities should enforce the GDPR's "security principle" by requiring IAB Tech Lab and Google, as data controllers, to amend their RTB standards to prohibit the inclusion of personal data. All identifying and linkable data must be removed.
  2. The European Union Agency for Cybersecurity (ENISA) should issue an alert to Member States and Union institutions, recommending ad blocking to reduce third-party data collection.
  3. The European External Action Service (EEAS), the NIS Cooperation Group and ENISA should jointly assess the impact of RTB on the security of the European Union.
  4. If necessary, the European Commission should consider legal measures to bring certainty and harmonisation to the handling of this common security threat.


Khrys'presso for Monday 8 April 2024

By: Khrys
8 April 2024 at 01:42

Like every Monday, here's a look in the rear-view mirror at the news you may have missed last week.


All the links listed below should, in principle, be freely accessible. If not, try turning on your favourite JavaScript blocker or switching to "reader mode" (Firefox) ;-)

Brave New World

Palestine and Israel special

Women around the world special

France special

Women in France special

Media and power special

Irresponsible pests running things into the ground (neoliberal-style) special

Rollback of rights and freedoms, police violence, rise of the far right… special

Resistance special

GAFAM and co. special

Other reads of the week

  • These suburban residents forced to leave just as the metro finally reaches them (streetpress.com)

    In Les naufragés du Grand Paris Express, journalist Laura Wojcik and geographer Anne Clerval give a voice to the small homeowners expropriated and the tenants evicted from their homes to make way for the future stations.

  • Nuclear power: the opium of the capitalists (frustrationmagazine.fr)

    With the centrality given to global warming within the environmental struggle, criticism of nuclear power has fallen out of favour. For its part, the capitalist class has rushed into the breach, pulling off a spectacular reversal: repainting green an existential threat that was at the very origin of the ecologist movement in the 1960s […] nuclear power, as a relatively decarbonised and, even more, "de-earthed" energy, makes it possible to render techno-solutionism and the "energy transition" hegemonic in the agenda of the ecological crisis, that is, to resolve the new contradictions of capitalism through the status quo. […] anti-nuclear campaigners suffer today from having completely abandoned the military dimension of nuclear energy, which was nevertheless at the heart of the environmental struggles of the 1960s and 1970s, struggles that were at the time also struggles for peace. Let us remember that civilian nuclear power has only ever been a by-product of military nuclear power; fissile capitalism is first and foremost a warmongering capitalism.

  • When capitalism secedes (terrestres.org)
  • Toxicity of forever pollutants: manufacturers have known for 50 years (reporterre.net)
  • Algorithmic capitalism, a dystopia become reality (reporterre.net)
  • Women, or the "oversights" of History, episode 44: Elizabeth Magie (blogs.mediapart.fr)

    Have you heard of Elizabeth Magie? When she invented the principle of Monopoly in 1904, it was to raise awareness of the dangers of monopolies. The aim of the game was to win together by sharing wealth through the creation of public services. When Charles Darrow stole the idea from her 30 years later, he turned it into a playful vehicle for capitalist ideology… and became a millionaire.

  • Chronology of the attack on the xz free software project (linuxtricks.fr)
  • RFC 9340: Architectural Principles for a Quantum Internet (bortzmeyer.org)
  • Pluralistic: The Coprophagic AI crisis (pluralistic.net)

    Historically, the fact that some people […] couldn’t tell the difference wasn’t all that important, because people who fell prey to the sf-as-prophecy delusion didn’t have the power to re-orient our society around their mistaken beliefs. But with the rise and rise of sf-obsessed tech billionaires who keep trying to invent the torment nexus, sf writers are starting to be more vocal about distinguishing between our made-up funny stories and predictions

  • Pluralistic: Humans are not perfectly vigilant (pluralistic.net)

    The one thing AI is unarguably very good at is producing bullshit at scale.

Comics/graphics/photos of the week

Videos/podcasts of the week

  • Israel's attacks on al-Shifa "are the actions of a rogue state": Analysis (invidious.fdn.fr)

    Antony Loewenstein, the author of The Palestine Laboratory, who has been reporting on Israel and the Palestinian territories for 20 years, has been speaking to Al Jazeera following Israel’s latest withdrawal from al-Shifa. He said the dozens of bodies the Health Ministry has discovered there are an indication of just how many people had been sheltering in the complex. “Even though hospitals have been targeted extensively by the Israelis, many civilians have nowhere else to go,” he told Al Jazeera. “Many Palestinians need intense medical care and hospitals are – well there’s nowhere safe in Gaza – but it’s somewhere to go and after Israel [first] pulled out of al-Shifa, the hope was that it would remain a safe place and clearly, it was not. “Not just bombing but air striking areas around these hospitals is not just a breach of international law, these are the actions of a rogue state, not a so-called democracy.”

  • Energy saving(s) (attac63.site.attac.org)
  • PFAS: how manufacturers are poisoning us (invidious.fdn.fr)

Cool stuff of the week

Find the previous web reviews in the Libre Veille category of the Framablog.

The articles, comments and other images that make up these "Khrys'presso" roundups reflect my views alone (Khrys).

Khrys'presso for Monday 1 April 2024

By: Khrys
1 April 2024 at 01:42

Like every Monday, here's a look in the rear-view mirror at the news you may have missed last week.


All the links listed below should, in principle, be freely accessible. If not, try turning on your favourite JavaScript blocker or switching to "reader mode" (Firefox) ;-)

Brave New World

Palestine and Israel special

Women around the world special

France special

Women in France special

Media and power special

Irresponsible pests running things into the ground (neoliberal-style) special

Rollback of rights and freedoms, police violence, rise of the far right… special

Resistance special

GAFAM and co. special

Other reads of the week

Comics/graphics/photos of the week

Videos/podcasts of the week

Cool stuff of the week

Find the previous web reviews in the Libre Veille category of the Framablog.

The articles, comments and other images that make up these "Khrys'presso" roundups reflect my views alone (Khrys).

Khrys'presso for Monday 25 March 2024

By: Khrys
25 March 2024 at 02:42

Like every Monday, here's a look in the rear-view mirror at the news you may have missed last week.


All the links listed below should, in principle, be freely accessible. If not, try turning on your favourite JavaScript blocker or switching to "reader mode" (Firefox) ;-)

Brave New World

Palestine and Israel special

Women around the world special

France special

Women in France special

Media and power special

Irresponsible pests running things into the ground (neoliberal-style) special

Rollback of rights and freedoms, police violence, rise of the far right… special

Resistance special

GAFAM and co. special

Other reads of the week

Comics/graphics/photos of the week

Videos/podcasts of the week

Cool stuff of the week

Find the previous web reviews in the Libre Veille category of the Framablog.

The articles, comments and other images that make up these "Khrys'presso" roundups reflect my views alone (Khrys).

Against the criminalisation and surveillance of political activists

8 April 2024 at 10:44

This text was read out by a member of La Quadrature du Net on 5 April 2024 at the rally in front of the Tribunal judiciaire of Aix-en-Provence, on the occasion of the two new indictments in the Lafarge case.

We are here today to express our support for the people summoned, and for everyone placed in police custody and targeted in this case. We are here to denounce the instrumentalisation of the justice system and of the counter-terrorism services to repress activists and deter any form of civil disobedience. And we are here, finally, to voice our outrage at the systematic use of intrusive and wholly disproportionate forms of surveillance.

Today's indictments, like those that preceded them in this "Lafarge case", are part of a broader context. Until a not-so-distant past, many forms of direct action were tolerated by the authorities. But gradually, in step with the neoliberal drift that began in the 1980s, the space granted to criticism of an unjust and ecocidal system has melted away like snow in the sun. From crisis to crisis, we have witnessed the consolidation of a state of exception, the inflation of the intelligence services, the multiplication of derogations from liberal law (a law that is admittedly far too imperfect, but that nonetheless remains a fundamental inheritance of past struggles). We have also seen a political power dig in its heels to the point of no longer tolerating the slightest protest, instrumentalising ordinary law with fines, dissolutions and hyper-violent policing.

All of this to repress those who have the dignity to voice their refusal of a system of unabashed violence, and the courage to put that refusal into action. In this process of criminalising activists, the intelligence services, the judicial police and the public prosecutors can now rely on exorbitant surveillance powers. These tools have been accumulating for 25 years and, in the Lafarge case as in others tried recently, they interlock to produce total surveillance. A surveillance meant to produce the material on which the police narrative, and the repression, can be built.

This surveillance begins with the work of the intelligence services. Identity checks that put you on the services' radar, cameras and microphones hidden around activist venues or in bookshops, GPS trackers, interceptions, metadata analysis… anything goes when it comes to serving political priorities and justifying the perpetuation of budgets. Intelligence activity devoted to the surveillance of activists, made a priority by the 2019 national intelligence strategy, has doubled under Macron, going from at least 6% of all surveillance measures in 2017 to more than 12% in 2022.

After administrative filing, after the intelligence services' unsourced "white notes", comes the stage of judicial investigation. Here again, as the Lafarge case illustrates, surveillance proceeds through CCTV (more than 100,000 cameras on public roads today), then through systematic biometric identification, notably via facial recognition and the TAJ file, or, when that is not possible, via the identity card and passport database, the infamous TES file, which is thus diverted from its purpose.

As a reminder, facial recognition via the TAJ file is not science fiction. Nor is it the exception. It is used at least 1,600 times a day by the police, even though this dystopian method of identification has never been authorised by any law and, in practice, its use is not reviewed by the judicial authority.

This facial recognition is used even for trivial offences, in particular when it comes to arming the repression of political opponents, as illustrated by last week's judgments in Niort, one year after Sainte-Soline. And this even though European law normally requires a criterion of "absolute necessity".

Finally, surveillance flows from the cross-referencing of all the digital traces left in the course of our lives and social activities. In this case and others, we thus see a proliferation of requisitions to social networks such as Twitter or Facebook, the tapping of phone calls and text messages, the tracking of the correspondence and movements of whole groups of people via their metadata, the monitoring of their publications and their reading, the requisition of their bank records or of the files held by social services… all of it, often, on the sole basis of vague suspicions. And at the end of the line, a systematic violation of their privacy, then thrown as fodder to police officers who do not hesitate to use it to intimidate or try to humiliate during interrogations, and to build a skewed vision of reality that can corroborate their fantasies.

More and more, it is the very logic of resistance to the authoritarian drift that is being criminalised. You use free software and other alternatives to the multinationals that dominate the tech industry and mesh with state surveillance systems? That is enough to make you a suspect, as revealed by the "8 December" case tried a few months ago. You choose messaging apps with encryption protocols to protect your right to confidential communications? Spyware and other computer-intrusion methods can be used to siphon off as much data as possible from your computers or smartphones. That is what happened to the photographer implicated in this case. And if you refuse to hand over your decryption codes in police custody, that will be held against you and prosecuted, even when the offence that supposedly justified your custody turns out to be utterly grotesque.

To conclude, we would like to recall that in the 1930s, while Europe was gradually giving in to fascism, a French government could make the country a haven for activists, artists and intellectuals. That was just before the shameful end of the Third Republic, just before the Vichy regime. Today, while across Europe and around the world human-rights activists, environmental activists, and all those who denounce the systemic violence of states or the misdeeds of multinationals are more exposed to repression every day, the French state is placing itself in the vanguard of the post-fascist drift.

It remains to be seen whether the judicial institution will still have the will to resist that drift, rather than becoming its active accomplice, as recent decisions give cause to fear.

Podcast Episode: About Face (Recognition)

By: Josh Richman
26 March 2024 at 03:05

Is your face truly your own, or is it a commodity to be sold, a weapon to be used against you? A company called Clearview AI has scraped the internet to gather (without consent) 30 billion images to support a tool that lets users identify people by picture alone. Though it’s primarily used by law enforcement, should we have to worry that the eavesdropper at the next restaurant table, or the creep who’s bothering you in the bar, or the protestor outside the abortion clinic can surreptitiously snap a pic of you, upload it, and use it to identify you, where you live and work, your social media accounts, and more?


(You can also find this episode on the Internet Archive and on YouTube.)

New York Times reporter Kashmir Hill has been writing about the intersection of privacy and technology for well over a decade; her book about Clearview AI’s rise and practices was published last fall. She speaks with EFF’s Cindy Cohn and Jason Kelley about how face recognition technology’s rapid evolution may have outpaced ethics and regulations, and where we might go from here. 

In this episode, you’ll learn about: 

  • The difficulty of anticipating how information that you freely share might be used against you as technology advances. 
  • How the all-consuming pursuit of “technical sweetness” — the alluring sensation of neatly and functionally solving a puzzle — can blind tech developers to the implications of that tech’s use. 
  • The racial biases that were built into many face recognition technologies.  
  • How one state's 2008 law has effectively curbed how face recognition technology is used there, perhaps creating a model for other states or Congress to follow. 

Kashmir Hill is a New York Times tech reporter who writes about the unexpected and sometimes ominous ways technology is changing our lives, particularly when it comes to our privacy. Her book, “Your Face Belongs To Us” (2023), details how Clearview AI gave facial recognition to law enforcement, billionaires, and businesses, threatening to end privacy as we know it. She joined The Times in 2019 after having worked at Gizmodo Media Group, Fusion, Forbes Magazine and Above the Law. Her writing has appeared in The New Yorker and The Washington Post. She has degrees from Duke University and New York University, where she studied journalism. 

Resources: 

What do you think of “How to Fix the Internet?” Share your feedback here. 

Transcript

KASHMIR HILL
Madison Square Garden, the big events venue in New York City, installed facial recognition technology in 2018, originally to address security threats. You know, people they were worried about who'd been violent in the stadium before. Or perhaps the Taylor Swift model of, you know, known stalkers, wanting to identify them if they're trying to come into concerts.

But then in the last year, they realized, well, we've got this system set up. This is a great way to keep out our enemies, people that the owner, James Dolan, doesn't like, namely lawyers who work at firms that have sued him and cost him a lot of money.

And I saw this, I actually went to a Rangers game with a banned lawyer and it's, you know, thousands of people streaming into Madison Square Garden. We walk through the door, put our bags down on the security belt, and by the time we go to pick them up, a security guard has approached us and told her she's not welcome in.

And yeah, once you have these systems of surveillance set up, it goes from security threats to just keeping track of people that annoy you. And so that is the challenge of how do we control how these things get used?

CINDY COHN
That's Kashmir Hill. She's a tech reporter for the New York Times, and she's been writing about the intersection of privacy and technology for well over a decade.

She's even worked with EFF on several projects, including security research into pregnancy tracking apps. But most recently, her work has been around facial recognition and the company Clearview AI.

Last fall, she published a book about Clearview called Your Face Belongs to Us. It's about the rise of facial recognition technology. It’s also about a company that was willing to step way over the line. A line that even the tech giants abided by. And it did so in order to create a facial search engine of millions of innocent people to sell to law enforcement.

I'm Cindy Cohn, the Executive Director of the Electronic Frontier Foundation.

JASON KELLEY
And I'm Jason Kelley, EFF’s Activism Director. This is our podcast series How to Fix the Internet.

CINDY COHN
The idea behind this show is that we're trying to make our digital lives BETTER. At EFF we spend a lot of time envisioning the ways things can go wrong — and jumping into action to help when things DO go wrong online. But with this show, we're trying to give ourselves a vision of what it means to get it right.

JASON KELLEY
It's easy to talk about facial recognition as leading towards this sci-fi dystopia, but many of us use it in benign - and even helpful - ways every day. Maybe you just used it to unlock your phone before you hit play on this podcast episode.

Most of our listeners probably know that there's a significant difference between the data that's on your phone and the data that Clearview used, which was pulled from the internet, often from places that people didn't expect. Since Kash has written several hundred pages about what Clearview did, we wanted to start with a quick explanation.

KASHMIR HILL
Clearview AI scraped billions of photos from the internet -

JASON KELLEY
Billions with a B. Sorry to interrupt you, just to make sure people hear that.

KASHMIR HILL
Billions of photos from the public internet and social media sites like Facebook, Instagram, Venmo, LinkedIn. At the time I first wrote about them in January 2020, they had 3 billion faces in their database.

They now have 30 billion and they say that they're adding something like 75 million images every day. So a lot of faces, all collected without anyone's consent and, you know, they have paired that with a powerful facial recognition algorithm so that you can take a photo of somebody, you know, upload it to Clearview AI and it will return the other places on the internet where that face appears along with a link to the website where it appears.

So it's a way of finding out who someone is. You know, what their name is, where they live, who their friends are, finding their social media profiles, and even finding photos that they may not know are on the internet, where their name is not linked to the photo but their face is there.
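
Under the hood, a search engine like this is nearest-neighbour search over face embeddings: each photo is reduced to a fixed-length vector, and a query face is matched against the whole database. The Python sketch below illustrates that general technique only; the dimensions, the data and the similarity metric are placeholders, not Clearview's actual system.

    import numpy as np

    # Placeholder database: in a real system each vector would come from a
    # trained face-embedding network, and an approximate nearest-neighbour
    # index would be used to scale to billions of entries.
    rng = np.random.default_rng(0)
    db_vectors = rng.normal(size=(1000, 512))   # 1,000 fake 512-dim embeddings
    db_urls = [f"https://example.com/photo/{i}" for i in range(1000)]

    def search(query: np.ndarray, top_k: int = 5) -> list[tuple[str, float]]:
        """Return the URLs whose stored embeddings are closest to the query face."""
        q = query / np.linalg.norm(query)
        db = db_vectors / np.linalg.norm(db_vectors, axis=1, keepdims=True)
        scores = db @ q                          # cosine similarity against every entry
        best = np.argsort(scores)[::-1][:top_k]
        return [(db_urls[i], float(scores[i])) for i in best]

    # A query returns the web pages whose photos best match the face.
    print(search(rng.normal(size=512)))

The privacy consequence follows directly from the data structure: once a face is in the index, any photo of that face becomes a lookup key.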

JASON KELLEY
Wow. Obviously that's terrifying, but is there an example you might have of a way that this affects the everyday person? Could you talk about that a little bit?

KASHMIR HILL
Yeah, so with a tool like this, um, you know, if you were out at a restaurant, say, and you're having a juicy conversation, whether about your friends or about your work, and it kind of catches the attention of somebody sitting nearby, you assume you're anonymous. With a tool like this, they could take a photo of you, upload it, find out who you are, where you work, and all of a sudden understand the context of the conversation. You know, a person walking out of an abortion clinic, if there's protesters outside, they can take a photo of that person. Now they know who they are and the health services they may have gotten.

I mean, there's all kinds of different ways. You know, you go to a bar and you're talking to somebody. They're a little creepy. You never want to talk to them again. But they take your picture. They find out your name. They look up your social media profiles. They know who you are.
On the other side, you know, I do hear about people who think about this in a positive context, who are using tools like this to research people they meet on dating sites, finding out if they are who they say they are, you know, looking up their photos.

It's complicated, facial recognition technology. There are positive uses, there are negative uses. And right now we're trying to figure out what place this technology should have in our lives and, and how authorities should be able to use it.

CINDY COHN
Yeah, I think Jason's, like, ‘this is creepy’ is very widely shared, I think, by a lot of people. But you know the name of this is How to Fix the Internet. I would love to hear your thinking about how facial recognition might play a role in our lives if we get it right. Like, what would it look like if we had the kinds of law and policy and technological protections that would turn this tool into something that we would all be pretty psyched about on the main rather than, you know, worried about on the main.

KASHMIR HILL
Yeah, I mean, so some activists feel that facial recognition technology should be banned altogether. Evan Greer at Fight for the Future, you know, compares it to nuclear weapons and that there's just too many possible downsides that it's not worth the benefits and it should be banned altogether. I kind of don't think that's likely to happen just because I have talked to so many police officers who really appreciate facial recognition technology, think it's a very powerful tool that when used correctly can be such an important part of their tool set. I just don't see them giving it up.

But when I look at what's happening right now, you have these companies like not just Clearview AI, but PimEyes, Facecheck, Eye-D. There's public face search engines that exist now. While Clearview is limited to police use, these are on the internet. Some are even free, some require a subscription.  And right now in the U. S., we don't have much of a legal infrastructure, certainly at the national level about whether they can do that or not. But there's been a very different approach in Europe where they say, that citizens shouldn't be included in these databases without their consent. And, you know, after I revealed the existence of Clearview AI, privacy regulators in Europe, in Canada, in Australia, investigated Clearview AI and said that what it had done was illegal, that they needed people's consent to put them in the databases.

So that's one way to handle facial recognition technology: you can't just throw everybody's faces into a database and make them searchable, you need to get permission first. And I think that is one effective way of handling it. Privacy regulators, actually inspired by Clearview AI, issued a warning to other AI companies saying, hey, just because there's all this information that's public on the internet, it doesn't mean that you're entitled to it. There can still be a personal interest in the data, and you may violate our privacy laws by collecting this information.

We haven't really taken that approach in the U.S. as much, with the exception of Illinois, which has this really strong law that's relevant to facial recognition technology. Where we have gotten privacy laws at the state level, they say you have the right to get out of the databases. So in California, for example, you can go to Clearview AI and say, hey, I want to see my file. And if you don't like what they have on you, you can ask them to delete you. So that's a very different approach, uh, to try to give people some rights over their face. And California also requires that companies say how many of these requests they get per year. And so I looked, and in the last two years fewer than a thousand Californians have asked to delete themselves from Clearview's database, and you know, California's population is very much bigger than that, I think, you know, 34 million people or so, and so I'm not sure how effective those laws are at protecting people at large.

CINDY COHN
Here's what I hear from that. Our world where we get it right is one where we have a strong legal infrastructure protecting our privacy. But it's also one where if the police want something, it doesn't mean that they get it. It's a world where control of our faces and faceprints rests with us, and any use needs to have our permission. That's the Illinois law called BIPA, the Biometric Information Privacy Act, or the foreign regulators you mention.
It also means that a company like Venmo cannot just put our faces onto the public internet, and a company like Clearview cannot just copy them. Neither can happen without our affirmative permission.

I think of technologies like this as needed to have good answers to two questions. Number one, who is the technology serving - who benefits if the technology gets it right? And number two, who is harmed if the technology DOESN’T get it right?

For police use of facial recognition, the answers to both of these questions are bad. Regular people don’t benefit from the police having their faces in what has been called a perpetual line-up. And if the technology doesn’t work, people can pay a very heavy price of being wrongly arrested - as you document in your book, Kash.

But for facial recognition technology allowing me to unlock my phone and manipulate apps like digital credit cards, I benefit by having an easy way to lock and use my phone. And if the technology doesn’t work, I just use my password, so it’s not catastrophic. But how does that compare to your view of a fixed facial recognition world, Kash?

KASHMIR HILL
Well, I'm not a policymaker. I am a journalist. So I kind of see my job as: here's what has happened, here's how we got here, and here's how different, you know, different people are dealing with it and trying to solve it. One thing that's interesting to me, you brought up Venmo, is that Venmo was one of the very first places that Hoan Ton-That, the kind of technical creator of Clearview AI, talked about getting faces from.

And this was interesting to me as a privacy reporter because I very much remembered this criticism that the privacy community had for Venmo that, you know, when you've signed up for the social payment site, they made everything public by default, all of your transactions, like who you were sending money to.

And there was such a big pushback saying, Hey, you know, people don't realize that you're making this public by default. They don't realize that the whole world can see this. They don't understand, you know, how that could come back to be used against them. And, you know, some of the initial uses were, you know, people who were sending each other Venmo transactions and like putting syringes in it and you know, cannabis leaves and how that got used in criminal trials.

But what was interesting with Clearview is that Venmo actually had this iPhone on their homepage on Venmo.com and they would show real transactions that were happening on the network. And it included people's profile photos and a link to their profile. So Hoan Ton-That sent this scraper to Venmo.com and it would just, he would just hit it every few seconds and pull down the photos and the links to the profile photos and he got, you know, millions of faces this way, and he says he remembered that the privacy people were kind of annoyed about Venmo making everything public, and he said it took them years to change it, though.
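
Mechanically, what Hill describes is trivial: a loop that polls a public page and keeps whatever photos and profile links it exposes. The Python sketch below shows that general pattern only; the URL and the page structure are hypothetical, not Venmo's real site or API.

    import re
    import time

    import requests  # third-party HTTP client: pip install requests

    FEED_URL = "https://example.com/public-feed"  # hypothetical public page

    seen: set[str] = set()

    def poll_once() -> None:
        """Fetch the page once and record any image URLs it exposes."""
        html = requests.get(FEED_URL, timeout=10).text
        for photo_url in re.findall(r'<img[^>]+src="([^"]+)"', html):
            if photo_url not in seen:
                seen.add(photo_url)
                print(photo_url)

    # Hitting the page every few seconds, as described above, is enough to
    # accumulate millions of images over time.
    if __name__ == "__main__":
        while True:
            poll_once()
            time.sleep(5)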

JASON KELLEY
We were very upset about this.

CINDY COHN
Yeah, we had them on our, we had a little list called Fix It Already in 2019. It wasn't a little, it was actually quite long, for kind of major privacy and other problems in tech companies. And the Venmo one was on there, right, in 2019, I think, was when we launched it. In 2021 they fixed it, but right in between was when all that scraping happened.

KASHMIR HILL
And Venmo is certainly not alone in terms of forcing everyone to make their profile photos public, you know, Facebook did that as well, but it was interesting when I exposed Clearview AI and said, you know, here are some of the companies that they scraped from, Venmo and also Facebook and LinkedIn and Google sent Clearview cease and desist letters and said, hey, you know, you violated our terms of service in collecting this data. We want you to delete it. And people often ask, well, then what happened after that? And as far as I know, Clearview did not change their practices. And these companies never did anything else beyond the cease and desist letters.

You know, they didn't sue Clearview. Um, and so it's clear that the companies alone are not going to be protecting our data, and they've pushed us to, to be more public and now that is kind of coming full circle in a way that I don't think people, when they are putting their photos on the internet were expecting this to happen.

CINDY COHN
I think we should start from the source, which is, why are they gathering all these faces in the first place, the companies? Why are they urging you to put your face next to your financial transactions? There's no need for your face to be next to a financial transaction, even in social media and other kinds of situations, there's no need for it to be public. People are getting disempowered because there's a lack of privacy protection to begin with, and the companies are taking advantage of that, and then turning around and pretending like they're upset about scraping, which I think is all they did with the Clearview thing.

Like there's problems all the way down here. But I don't think that, from our perspective, the answer isn't to make scraping, which is often over limited, even more limited. The answer is to try to give people back control over these images.

KASHMIR HILL
And I get it, I mean, I know why Venmo wants photos. I mean, when I use Venmo and I'm paying someone for the first time, I want to see that this is the face of the person I know before I send it to, you know, @happy, you know, nappy on Venmo. So it's part of the trust, but it does seem like you could have a different architecture. So it doesn't necessarily mean that you're showing your face to the entire, you know, world. Maybe you could just be showing it to the people that you're doing transactions with.

JASON KELLEY
What we were pushing Venmo to do was, as you mentioned, make it NOT public by default. And what I think is interesting about that campaign is that at the time, we were worried about one thing, you know, the ability to sort of comb through these financial transactions and get information from people. We weren't worried about, or at least I don't think we talked much about, the public photos being available. And it's interesting to me that there are so many ways that public defaults and privacy settings can impact people that we don't even know about yet, right?

KASHMIR HILL
I do think this is one of the biggest challenges for people trying to protect their privacy is, it's so hard to anticipate how information that, you know, kind of freely give at one point might be used against you or weaponized in the future as technology improves.

And so I do think that's really challenging. And I don't think that most people, when they're kind of freely putting Photos on the internet, their face on the internet were anticipating that the internet would be reorganized to be searchable by face.

So that's where I think regulating the use of the information can be very powerful. It's kind of protecting people from the mistakes they've made in the past.

JASON KELLEY
Let’s take a quick moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians. And now back to our conversation with Kashmir Hill.

CINDY COHN
So a supporter asked a question that I'm curious about too. You dove deep into the people who built these systems, not just the Clearview people, but people before them. And what did you find? Are these, like, Dr. Evil evil geniuses who intended to, you know, build a dystopia? Or are these people who were, you know, good folks trying to do good things, who either didn't see the consequences of what they were working on or were surprised at the consequences of what they were building?

KASHMIR HILL
The book is about Clearview AI, but it's also about all the people that kind of worked to realize facial recognition technology over many decades.
The government was trying to get computers to be able to recognize human faces in Silicon Valley before it was even called Silicon Valley. The CIA was, you know, funding early engineers there to try to do it with those huge computers which, you know, in the early 1960s weren't able to do it very well.

But I kind of like went back and asked people that were working on this for so many years when it was very clunky and it did not work very well, you know, were you thinking about what you are working towards? A kind of a world in which everybody is easily tracked by face, easily recognizable by face. And it was just interesting. I mean, these people working on it in the ‘70s, ‘80s, ‘90s, they just said it was impossible to imagine that because the computers were so bad at it, and we just never really thought that we'd ever reach this place where we are now, where we're basically, like, computers are better at facial recognition than humans.

And so this was really striking to me, that, and I think this happens a lot, where people are working on a technology and they just want to solve that puzzle, you know, complete that technical challenge, and they're not thinking through the implications of what if they're successful. And so this one, a philosopher of science I talked to, Heather Douglas, called this technical sweetness.

CINDY COHN
I love that term.

KASHMIR HILL
This kind of motivation where it's just like, I need to solve this, the kind of Jurassic Park dilemma where it's like, it'd be really cool if we brought the dinosaurs back.

So that was striking to me and all of these people that were working on this, I don't think any of them saw something like Clearview AI coming and when I first heard about Clearview, this startup that had scraped the entire internet and kind of made it searchable by face. I was thinking there must be some, you know, technological mastermind here who was able to do this before the big companies, the Facebooks, the Googles. How did they do it first?

And what I would come to figure out is that, you know, what they did was more of an ethical breakthrough than a technological breakthrough. Companies like Google and Facebook had developed this internally and, shockingly, you know, for these companies that have released many kind of unprecedented products, they decided facial recognition technology like this was too much, and they held it back and decided not to release it.

And so Clearview AI was just willing to do what other companies hadn't been willing to do. Which I thought was interesting and part of why I wrote the book is, you know, who are these people and why did they do this? And honestly, they did have, in the early days, some troubling ideas about how to use facial recognition technology.

So one of the first deployments of Clearview AI, before it was called Clearview AI, was at the Deploraball, this kind of inaugural event around Trump becoming president. And they were using it because it was going to be this gathering of all these people who had supported Trump, the kind of MAGA crowd, of which some of the Clearview AI founders were a part. And they were worried about being infiltrated by Antifa, which I think is how they pronounce it, and so they wanted to run a background check on ticket buyers and find out whether any of them were from the far left.

And apparently this Smartcheckr worked for this, and they identified two people who kind of were trying to get in who shouldn't have. And I found out about this because they included it in a PowerPoint presentation that they had developed for the Hungarian government. They were trying to pitch Hungary on their product as a means of border control. And so the idea was that you could use this background check product, this facial recognition technology, to keep out people you didn't want coming into the country.

And they said that they had fine tuned it so it would work on people that worked with the Open Society Foundations and George Soros because they knew that Hungary's leader, Viktor Orban, was not a fan of the Soros crowd.

And so for me, I just thought this just seemed kind of alarming that you would use it to identify essentially political dissidents, democracy activists and advocates, that that was kind of where their minds went to for their product when it was very early, basically still at the prototype stage.

CINDY COHN
I think that it's important to recognize these tools, like many technologies, are dual-use tools, right, and we have to think really hard about how they can be used and create laws and policies around them, because I'm not sure that you can use some kind of technological means to make sure only good guys use this tool to do good things and bad guys don't.

JASON KELLEY
One of the things that you mentioned about sort of government research into facial recognition reminds me that shortly after you put out your first story on Clearview in January of 2020, I think, we put out a website called Who Has Your Face, which we'd been doing research on for, I don't know, four to six months or something before that. It was specifically trying to let people know which government entities had access to your, let's say, DMV photo or your passport photo for facial recognition purposes. And that's one of the great examples, I think, of how, sort of like Venmo, you put information somewhere that's, even in this case, required by law, and you don't ever expect that the FBI would be able to run facial recognition on that picture based on, like, a surveillance photo, for example.

KASHMIR HILL
So it makes me think of two things, and one is, you know, as part of the book I was looking back at the history of the US thinking about facial recognition technology and setting up guardrails or for the most part NOT setting up guardrails.

And there was this hearing about it more than a decade ago. I think actually Jen Lynch from the EFF testified at it. And it was like 10 years ago when facial recognition technology was first getting kind of good enough to get deployed. And the FBI was starting to build a facial recognition database and police departments were starting to use these kind of early apps.

It troubles me to think about just knowing the bias problems that facial recognition technology had at that time that they were kind of actively using it. But lawmakers were concerned and they were asking questions about whose photo is going to go in here? And the government representatives who were there, law enforcement, at the time they said, we're only using criminal mugshots.

You know, we're not interested in the goings about of normal Americans. We just want to be able to recognize the faces of people that we know have already had encounters with the law, and we want to be able to keep track of those people. And it was interesting to me because in the years to come, that would change, you know, they started pulling in state driver's license photos in some places, and it, it ended up not just being criminals that were being tracked or people, not always even criminals, just people who've had encounters with law enforcement where they ended up with a mugshot taken.

But that is the kind of frog boiling of: well, we'll just start out with some of these photos, and then, you know, we'll add in some state driver's license photos, and then we'll start using a company called Clearview AI that's scraped the entire internet, you know, everybody on the planet, into this facial recognition database.

So it just speaks to this challenge of controlling it, you know, this kind of surveillance creep where once you start setting up the system, you just want to pull in more and more data and you want to surveil people in more and more ways.

CINDY COHN
And you tell some wonderful stories or actually horrific stories in the book about people who were misidentified. And the answer from the technologists is, well, we just need more data then. Right? We need everybody's driver's licenses, not just mugshots. And then that way we eliminate the bias that comes from just using mugshots. Or you tell a story that I often talk about, which is, I believe the Chinese government was having a hard time with its facial recognition, recognizing black faces, and they made some deals in Africa to just wholesale get a bunch of black faces so they could train up on it.

And, you know, to us, talking about bias in a way that doesn't also talk about comprehensive privacy reform, and instead talks only about bias, ends up in this technological world in which the solution is putting more people's faces into the system.

And we see this with all sorts of other biometrics where there's bias issues with the training data or the initial data.

KASHMIR HILL
Yeah. So this is something, so bias has been a huge problem with facial recognition technology for a long time. And really a big part of the problem was that they were not getting diverse training databases. And, you know, a lot of the people that were working on facial recognition technology were white people, white men, and they would make sure that it worked well on them and the other people they worked with.

And so we had, you know, technologies that just did not work as well on other people. One of those early facial recognition technology companies I talked to, which was in business in 2000, 2001, was actually used at the Super Bowl in Tampa in 2001 to secretly scan the faces of football fans, looking for pickpockets and ticket scalpers.

That company told me that they had to pull out of a project in South Africa because they found the technology just did not work on people who had darker skin. But the activist community has brought a lot of attention to this issue that there is this problem with bias and the facial recognition vendors have heard it and they have addressed it by creating more diverse training sets.

And so now they are training their algorithms to work on different groups and the technology has improved a lot. It really has been addressed and these algorithms don't have those same kind of issues anymore.

Despite that, you know, the handful of wrongful arrests that I've covered, where, um, people are arrested for the crime of looking like someone else, have all involved people who are black. One woman so far, a woman who was eight months pregnant, arrested for carjacking and robbery on a Thursday morning while she was getting her two kids ready for school.

And so, you know, even if you fix the bias problem in the algorithms, you're still going to have the issue of, well, who is this technology deployed on? Who is this used to police? And so yeah, I think it'll still be a problem. And then there's just these bigger questions of the civil liberty questions that still need to be addressed. You know, do we want police using facial recognition technology? And if so, what should the limitations be?

CINDY COHN
I think, you know, for us in thinking about this, the central issue is who's in charge of the system and who bears the cost if it's wrong. The consequences of a bad match are much more significant than just, oh gosh, the cops for a second thought I was the wrong person. That's not actually how this plays out in people's lives.

KASHMIR HILL
I don't think most people who haven't been arrested before realize how traumatic the whole experience can be. You know, I talk about Robert Williams in the book who was arrested after he got home from work, in front of all of his neighbors, in front of his wife and his two young daughters, spent the night in jail, you know, was charged, had to hire a lawyer to defend him.

Same thing with Porcha Woodruff, the woman who was pregnant: taken to jail, charged, even though the woman they were looking for had committed the crime the month before and was not visibly pregnant. I mean, it was so clear they had the wrong person. And yet she had to hire a lawyer, fight the charges, and she wound up in the hospital after being detained all day because she was so stressed out and dehydrated.

And so yeah, when you have people that are relying too heavily on the facial recognition technology and not doing proper investigations, this can have a very harmful effect on, on individual people's lives.

CINDY COHN
Yeah, I mean, one of my hopes is that, you know, those of us who are involved in tech, trying to get privacy laws passed and other kinds of things passed, can have some knock-on effects on trying to make the criminal justice system better. We shouldn't just be coming in and talking about the technological piece, right?

Because it's all a part of a system that itself needs reform. And so I think it's important that we recognize, um, that as well and not just try to extricate the technological piece from the rest of the system and that's why I think EFF's come to the position that governmental use of this is so problematic that it's difficult to imagine a world in which it's fixed.

KASHMIR HILL
In terms of talking about laws that have been effective, we alluded to it earlier, but Illinois passed this law in 2008, the Biometric Information Privacy Act, a rare law that moved faster than the technology.

And it says if you want to use somebody's biometrics, like their face print or their fingerprint or their voice print, you as a company need to get their consent, or you'll be fined. And so Madison Square Garden is using facial recognition technology to keep out security threats and lawyers at all of its New York City venues: the Beacon Theater, Radio City Music Hall, Madison Square Garden.

The company also has a theater in Chicago, but they cannot use facial recognition technology to keep out lawyers there because they would need to get their consent to use their biometrics that way. So it is an example of a law that has been quite effective at kind of controlling how the technology is used, maybe keeping it from being used in a way that people find troubling.

CINDY COHN
I think that's a really important point. I think sometimes people in technology despair that law can really ever do anything, and they think technological solutions are the only ones that really work. And, um, I think it's important to point out that, like, that's not always true. And the other point that you make in your book about this that I really appreciate is the Wiretap Act, right?

Like, the reason that a lot of the stuff that we're seeing is visual and not voice: you can do voice prints too, just like you can do face prints, but we don't see that.

And the reason we don't see that is because we actually have very strong federal and state laws around wiretapping that prevent the collection of this kind of information except in certain circumstances. Now, I would like to see those circumstances expanded, but it still exists. And I think that, you know, recognizing that we do have legal structures that have provided us some protection, even as we work to make them better, is kind of an important thing for people who swim in tech to recognize.

KASHMIR HILL
"Laws work" is one of the themes of the book.

CINDY COHN
Thank you so much, Kash, for joining us. It was really fun to talk about this important topic.

KASHMIR HILL
Thanks for having me on. It's great. I really appreciate the work that EFF does and just talking to you all for so many stories. So thank you.

JASON KELLEY
That was a really fun conversation because I loved that book. The story is extremely interesting and I really enjoyed being able to talk to her about the specific issues that sort of we see in this story, which I know we can apply to all kinds of other stories and technical developments and technological advancements that we're thinking about all the time at EFF.

CINDY COHN
Yeah, I think that it's great to have somebody like Kashmir dive deep into something that we spend a lot of time talking about at EFF and, you know, not just facial recognition, but artificial intelligence and machine learning systems more broadly, and really give us the history of it and the story behind it so that we can ground our thinking in more reality. And, you know, it ends up being a rollicking good story.

JASON KELLEY
Yeah, I mean, what surprised me is that I think most of us saw that facial recognition sort of exploded really quickly, but it didn't, actually. A lot of what she writes in the book is about the history of its development, and, you know, we could have been thinking about how to resolve the potential issues with facial recognition decades ago, but no one sort of expected that this would blow up in the way that it did until it kind of did.

And I really thought it was interesting that her explanation of how it blew up so fast wasn't really a technical development as much as an ethical one.

CINDY COHN
Yeah, I love that perspective, right?

JASON KELLEY
I mean, it’s a terrible thing, but it is helpful to think about, right?

CINDY COHN
Yeah, and it reminds me again of the thing that we talk about a lot, which is Larry Lessig's articulation of the four ways that you can control behavior online. There's markets, there's laws, there's norms, and there's architecture. In this system, you know, we had norms that were driven across.

The thing that Clearview did, she says, wasn't a technical breakthrough, it was an ethical one. I think it points the way towards, you know, where you might need laws.
There's also an architecture piece, though. You know, if Venmo hadn't set up its system so that everybody's faces were easily made public and scrapable, that architectural decision could have had a pretty big impact on how fast this company was able to scale and where it could look.

So we've got an architecture piece, we've got a norms piece, we've got a lack of laws piece. It's very clear that a comprehensive privacy law would have been very helpful here.

And then there's the other piece about markets, right? You know, when you're selling into the law enforcement market, which is where Clearview finally found purchase, that's an extremely powerful market. And it ends up distorting the other ones.

JASON KELLEY
Exactly.

CINDY COHN
Once law enforcement decides they want something – I mean, when I asked Kash, you know, what do you think about ideas about banning facial recognition? She said, well, I think law enforcement really likes it, so I don't think it'll be banned. And what that tells us is that this particular market can trump all the other pieces, and I think we see that in a lot of the work we do at EFF as well.

You know, we need to carve out a better space such that we can actually say no to law enforcement, rather than, well, if law enforcement wants it, then the conversation is over. And I think that's really shown by this story.

JASON KELLEY
Thanks for joining us for this episode of How to Fix the Internet.
If you have feedback or suggestions, we'd love to hear from you. Visit eff.org/podcast and click on listener feedback. While you're there, you can become a member, donate, maybe pick up some merch, and just see what's happening in digital rights this week and every week.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators.

In this episode, you heard Cult Orrin by Alex featuring Starfrosh and Jerry Spoon.

And Drops of H2O (The Filtered Water Treatment) by J.Lang, featuring Airtone.

You can find links to their music in our episode notes, or on our website at eff.org/podcast.

Our theme music is by Nat Keefe of BeatMower with Reed Mathis.

How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

We’ll see you next time.

I’m Jason Kelley.

CINDY COHN
And I’m Cindy Cohn.

Khrys'presso for Monday 18 March 2024

By: Khrys
18 March 2024 at 02:42

Like every Monday, a look in the rearview mirror to catch up on the news you may have missed last week.


All the links listed below should, in principle, be freely accessible. If not, remember to enable your favorite JavaScript blocker or switch to "reader mode" (Firefox) ;-)

Brave New World

Special: women around the world

Special: Palestine and Israel

Special: France

Special: women in France

Special: media and power

Special: irresponsible pests running things into the ground (neoliberal-style)

Special: rollback of rights and freedoms, police violence, the rising far right…

Special: resistance

  • The 8 December case: keeping up the mobilization (soutien812.blackblogs.org)

    Five months after the shameful trial of the seven people charged over 8 December 2020, and three months after a verdict whose severity exceeded even what the National Antiterrorist Prosecutor's Office (PNAT) had asked for, the fight is still not over for our comrades. While almost all of them have appealed, and the court has still not provided the reasoning behind its judgment, our financial and political support remains essential to them.

  • Libre Flot challenges the legality of his wiretapping before the Conseil d'État (leparisien.fr)

    According to Florian D.'s defense, microphones were installed in the vehicle he was living in on his return from Rojava (north-eastern Syria), where in 2017 he had fought alongside the Kurdish People's Protection Units (YPG) against the jihadist group Islamic State (IS). Yet, his lawyers Isabelle Zribi and Raphaël Kempf argue, the law only allows such surveillance where there is "suspicion of terrorist activity". "Having joined the YPG, which France does not consider a terrorist group, is not enough," said Raphaël Kempf.

  • Emergency plan: the teachers' strike takes root in Seine-Saint-Denis (rapportsdeforce.fr)
  • Paris: the high-school movement catches fire (contre-attaque.net)

    "8 million for Stanislas, rats for the 93": that was the banner displayed outside the Lycée Balzac in Paris on Friday 15 March. The Autonomous High-School Action Movement organized an incredible blockade, with a football tournament, a barbecue, plenty of joy, a few fireworks and even a hair salon!

  • Airports: an unprecedented mobilization calls for limiting flights (reporterre.net)
  • Strike at Radio France on 26 March: "We feel the newsroom is being chopped into little pieces" (telerama.fr)

    Management's plan to merge the science, health and environment desks of France Inter, France Info and France Culture is causing deep concern, fuelled by Rachida Dati's latest statements.

Special: GAFAM and co.

Other reads of the week

Comics/graphics/photos of the week

Videos/podcasts of the week

Nice things of the week

Find previous web reviews in the Framablog's Libre Veille category.

The articles, comments and other images that make up these « Khrys’presso » round-ups reflect my views alone (Khrys).

Khrys'presso for Monday 11 March 2024

By: Khrys
11 March 2024 at 02:42

Like every Monday, a look in the rearview mirror to catch up on the news you may have missed last week.


All the links listed below should, in principle, be freely accessible. If not, remember to enable your favorite JavaScript blocker or switch to "reader mode" (Firefox) ;-)

Brave New World

Special: women around the world

Special: Palestine and Israel

RIP

  • Death of Akira Toriyama, a mangaka who set the standard (liberation.fr)

    The manga creator has died at the age of 68. With his landmark work "Dragon Ball", the man who was probably the most influential Japanese artist of modern times left a lasting mark on the imaginations of teenagers the world over.

Special: France

Special: women in France

Special: media and power

Special: irresponsible pests running things into the ground (neoliberal-style)

Special: rollback of rights and freedoms, police violence, the rising far right…

Special: resistance

Special: GAFAM and co.

Other reads of the week

Comics/graphics/photos of the week

Videos/podcasts of the week

Nice things of the week

Find previous web reviews in the Framablog's Libre Veille category.

The articles, comments and other images that make up these « Khrys’presso » round-ups reflect my views alone (Khrys).

Khrys'presso for Monday 4 March 2024

By: Khrys
4 March 2024 at 01:42

Like every Monday, a look in the rearview mirror to catch up on the news you may have missed last week.


All the links listed below should, in principle, be freely accessible. If not, remember to enable your favorite JavaScript blocker or switch to "reader mode" (Firefox) ;-)

Brave New World

Special: women around the world

  • Abused former Japan GSDF member Gonoi gets women of courage award (mainichi.jp)
  • Pakistan woman in Arabic script dress saved from mob claiming blasphemy (bbc.com)

    An angry mob in Pakistan accused a woman who wore a dress adorned with Arabic calligraphy of blasphemy, after mistaking them for Quran verses. She was saved by police who escorted her to safety after hundreds gathered. She later gave a public apology. The dress has the word “Halwa” printed in Arabic letters on it, meaning beautiful in Arabic.

  • Iranian women ‘ready to pay the price’ for defying hijab rules (bbc.co.uk)

    “It’s very scary,” 20-year-old music student Donya tells me over an encrypted app. “Because they can arrest you any minute and fine you. Or torture you with lashes. The usual penalty if you’re arrested is 74 lashes.” Last month, a 33-year old Kurdish-Iranian activist, Roya Heshmati, made public that she’d been given 74 lashes after posting a photograph of herself unveiled. But Donya, Azad and Bahareh say there is, for them, no going back. “It is symbolic,” says Donya. “Because it is the regime’s key to suppressing women in Iran. If this is the only way I can protest and take a step for my freedom, I’ll do it.”

  • Putin's war on women (legrandcontinent.eu)

    The evidence amassed by foreign observers and researchers reveals acts never seen before […] The rapes are often public. Russian soldiers carry them out in the open street, or force other members of the community to watch. Parents have been made to watch the rape of their children, children that of their parents. Some victims were raped to death.

  • Embryos are people in Alabama, but women aren’t. (onlysky.media)

    The Alabama Supreme Court hands down a religion-laden ruling that embryos are people, with the side effect of ending IVF treatment. It’s another step in the religious right’s plan to return women to a state of reproductive subordination.

  • Students at Albert Einstein College of Medicine in the Bronx will no longer have to pay tuition after a longtime professor donated $1 billion to the school (nydailynews.com)

    The gift by Ruth Gottesman, chairwoman of Einstein’s Board of Trustees, is considered the largest gift made to a medical school in the country, according to a press release.

Special: Palestine and Israel

Special: France

Special: women in France

1 – It is dangerous to call a fundamental right such as access to abortion a "liberty".
2 – The word "woman" is not neutral
3 – Bringing abortion in through the back door, while leaving full latitude to the "legislator", is not neutral either

Special: #MeTooGarçons

Special: media and power

Special: irresponsible pests running things into the ground (neoliberal-style)

  • New attempt in the Senate to channel part of Livret A savings into the defense industry (publicsenat.fr)

    A bill from the Senate majority, to be examined on 5 March, would earmark part of the funds collected through the Livret A savings scheme for companies in the French defense industry.

  • Ecology, education, research, territorial cohesion: the savings plan detailed in a decree (publicsenat.fr)

    The decree cancelling 10 billion euros in spending has been published in the Journal officiel. The budgetary effort will be proportionally larger for several ministries, including ecology, labor, development aid, education and research.

  • 10 billion in budget cuts, 10 billion in mistakes… (alternatives-economiques.fr)

    Once again there will be no budget debate, and this time without the government even needing to resort to Article 49.3 again! It can therefore roll out its arguments unchecked, including the most hackneyed ones, such as Bruno Le Maire's inevitable appeal to "common sense", that old fig leaf for reactionary positions and economic liberalism.

  • Radicalized pupils: Nicole Belloubet wants "dedicated classes" (liberation.fr)

    On Monday 26 February, the new Minister of National Education confirmed the government's intention to place radicalized young people in dedicated structures.

  • Macron at the Agricultural Show: chaos, and still no ecology (reporterre.net)
  • The "one billion trees" plan is turning into a state lie (humanite.fr)

    Paid to plant trees that had just been clear-cut. So that was the avant-garde plan for a billion trees to be planted by 2030, announced by Emmanuel Macron in 2022.

  • Less ecology, more exploitation: the executive gives new pledges to the FNSEA (revolutionpermanente.fr)

    A month after the start of the farmers' mobilization, Prime Minister Gabriel Attal announced new measures intended to head off any resumption of the movement. A new Egalim bill, reduced pesticide controls… announcements aimed above all at satisfying the FNSEA rather than solving farmers' underlying problems.

  • FNSEA leaders are stuffing themselves with farmers' dues (contre-attaque.net)

    13,400 euros a month: the boss of the productivist farming "union" is paid better than the Minister of Agriculture

  • Attal strips the unemployed bare, and brags about it (contre-attaque.net)

    "We've gone from 24 to 18 months of unemployment benefits; we can cut it further," he threatens. When the newspaper asks whether he realizes he risks "reawakening social anger", Gabriel Attal replies: "Yes, and?"

  • Pacte 2 law: demolishing the Prud'hommes labor courts a little further (rapportsdeforce.fr)

    The Prud'hommes? "It's hell," say the employees who go through them. And it could get worse. In its next labor-code reform, planned for after the summer, the government may well propose cutting the period during which an employee can challenge their dismissal before the Prud'hommes from one year to six months. A measure that would weaken employees still further.

  • "Poor scum!" (blogs.mediapart.fr)

    The weekend's controversy: the remark attributed to Macron about minimum-wage earners' taste for VOD subscriptions. With contempt inflated by a sense of impunity, and even as growing institutional violence is deployed to contain the deep anger rumbling in the country's gut and to scare off protest, there always comes a moment when a people feels it has nothing left to hope for and nothing left to lose.

  • Unemployment insurance: checks on claimants to be tripled (lavoixdunord.fr)

    The Prime Minister also mentioned registering recipients of the RSA benefit with France Travail, which is gradually replacing Pôle emploi this year. "By going out to find all RSA recipients and registering them with France Travail, the unemployment figures will mechanically rise (…) That is the condition for being able to act and to offer everyone an opportunity to find lasting employment." RSA recipients will soon have to work fifteen hours a week to receive the benefit.

Special: rollback of rights and freedoms, police violence, the rising far right…

Special: resistance

Special: GAFAM and co.

Other reads of the week

Comics/graphics/photos of the week

Videos/podcasts of the week

Nice things of the week

Save the date

The 2024 edition of the Journées du Logiciel Libre will take place over the weekend of 25-26 May on the premises of the École Normale Supérieure de Lyon – René Descartes site (Mastodon toot)

Find previous web reviews in the Framablog's Libre Veille category.

The articles, comments and other images that make up these « Khrys’presso » round-ups reflect my views alone (Khrys).

Khrys'presso for Monday 26 February 2024

By: Khrys
26 February 2024 at 01:42

Like every Monday, a look in the rearview mirror to catch up on the news you may have missed last week.


All the links listed below should, in principle, be freely accessible. If not, remember to enable your favorite JavaScript blocker or switch to "reader mode" (Firefox) ;-)

Brave New World

Take action

Special: women around the world

Special: Palestine and Israel

Special: Assange

Special: France

Special: women in France

Special: media and power

Special: something is rotten in the kingdom of Wikipedia.fr

Special: irresponsible pests running things into the ground (neoliberal-style)

Special: rollback of rights and freedoms, police violence, the rising far right…

Special: resistance

  • Putting the Manouchians in the Panthéon is betraying them (ujfp.org)

    Hypocrisy! Using the undeniable evocative power of the lives of these "foreigners and yet our brothers" at the very moment one of the worst xenophobic immigration laws is being passed, and the dismantling of birthright citizenship is beginning in Mayotte.

  • "Tax the rich": in Paris, Attac unfurls a giant banner on the future Vuitton hotel on the Champs-Élysées (liberation.fr)

    On Saturday 24 February, several dozen Attac activists managed to unfurl a giant banner from the top of the facade of LVMH's future hotel in Paris, taking up a well-known slogan that had made its way to the Met Gala a little over two years ago.

  • The REV denounces the abandonment of the feminist cause in the name of war (revolutionecologiquepourlevivant.fr)
  • "Clowns" like any others (cqfd-journal.org)

    He's a longtime comrade, encountered here and there, in Paris and elsewhere. Since we know he was held in police custody under the antiterrorism regime during the struggles against the detention of undocumented people in the early 2010s, we wanted to give him the floor. He describes an "everyday", "banal" antiterrorism and offers some pointers for defending against it.

  • Zbeul Olympique! [Olympic mayhem] (solidairesinformatique.org)

    Solidaires Informatique Île-de-France is calling a strike for the period of the Olympic Games, from 26 July to 8 September 2024. This strike call allows employees in the IT, consulting and video-game sectors to free up time for mobilizations during the Games!

  • Mineral-water filtering scandal: the Foodwatch association files a complaint against Nestlé and Sources Alma (liberation.fr)

    On Wednesday 21 February, the consumer-protection association filed a complaint against the two groups over the disinfection treatments they used on waters such as Vittel and Perrier.

  • Lactalis headquarters occupied: Confédération paysanne farmers to spend the night on site (liberation.fr)

    On Wednesday 21 February, around 200 farmers occupied the dairy giant's headquarters in Laval, Mayenne, to demand a higher sale price.

  • Farmers' anger: demonstrations resume across France (reporterre.net)
  • Clermont-l'Hérault: market gardeners put up resistance (europalestine.com)

    On Wednesday 14 February, the municipal police and the gendarmerie asked us to remove the T-shirts bearing the words "Justice en Palestine" and "Free Palestine" that we have displayed at our market-garden stall for some fifteen years. When we refused to comply, we were threatened with expulsion from the market. […] A customer who showed his solidarity and refused to show his papers was handcuffed and held at the gendarmerie for an hour. At a meeting with the mayor of Clermont-l'Hérault on Monday 19 February, the mayor asked us to take down the hanging T-shirts but agreed that we could wear them.

  • Signal operators on strike: towards a proliferation of sectional struggles at the SNCF? (rapportsdeforce.fr)
  • Organizing to win: Karl Marx's recipes (frustrationmagazine.fr)

    It is now generally agreed: what was missing during the Yellow Vests movement, and also, to some extent, during the movement against the pension reform, was organization, an essential strategic question. Everyone wants change; struggles break out everywhere, constantly, and yet they fail. Something is therefore missing. That something may well be the right form of organization. We must therefore work and study to identify this gap. Marx, and the workers' movement in general, have things to teach us.

Special: GAFAM and co.

Other reads of the week

Comics/graphics/photos of the week

Videos/podcasts of the week

Nice things of the week

Find previous web reviews in the Framablog's Libre Veille category.

The articles, comments and other images that make up these « Khrys’presso » round-ups reflect my views alone (Khrys).

Scoring benefit claimants: the CAF extends its surveillance to real-time income analysis

By: henri
13 March 2024 at 05:37

You can find all of our publications, documentation and position statements on the use of algorithms for social-control purposes by welfare agencies – CAF, Pôle Emploi, Assurance Maladie, Assurance Vieillesse – on our dedicated page and our GitLab.

Just two months ago, we published the source code of the CAF's algorithm for scoring benefit claimants. That publication exposed the dystopian nature of a surveillance system that assigns suspicion scores to more than 12 million people, scores on the basis of which the CAF deliberately organizes the discrimination and over-auditing of the most precarious. In doing so, we hoped that, faced with mounting protest[1], the CAF's leadership would agree to put an end to these iniquitous practices. It did nothing of the sort.

Rather than reconsider, the CAF's leadership chose to double down. The first step was a media counter-fire in which its director, Nicolas Grivel, went so far as to declare publicly that the CAF had nothing "to blush about" and nothing to "apologize" for over such practices. The second step, which we have only just learned of[2], is far more worrying. For in parallel with those statements, he was seeking authorization to multiply the algorithm's surveillance capacities by integrating "real-time"[3] monitoring of the income of every claimant. He obtained that authorization, with the CNIL's blessing, on 29 January[4].

Surveillance and the "productivity" of audits

As a reminder, income is one of the roughly forty variables the CAF uses to score claimants. As we have shown, the lower a claimant's income, the higher their suspicion score and the greater their risk of being audited. It is therefore one of the parameters that contributes directly to the targeting and discrimination of disadvantaged people.

Until now, information on claimants' income was either retrieved annually from the tax authorities or collected via quarterly declarations from the claimants concerned (recipients of the RSA, the AAH…)[5]. From now on, the CAF's algorithm will have "real-time" access to the financial resources (wages and social benefits) of all 12 million claimants.

To do so, the CAF's algorithm will be fed by a gigantic database aggregating, for each person, the salary declarations submitted by employers and the social benefits paid out by welfare agencies (pensions, unemployment, RSA, AAH, APL…)[6]: the "Dispositif des Ressources Mensuelles" (DRM). This database, created in 2019 as part of the reform bringing housing benefit (APL) calculations up to date[7], is updated daily and offers unprecedented capacities for surveilling claimants.
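Footnote 6 describes the DRM as the per-person join of two feeds: the DSN (employer salary declarations) and PASRAU (benefits paid by welfare agencies), refreshed daily. Here is a toy Python sketch of that aggregation; every record layout and name is invented for illustration, not the actual schema:

from collections import defaultdict

# Toy stand-ins for the two feeds described in footnote 6.
dsn = [("person-1", "salary", 1200.0)]          # employer declarations
pasrau = [("person-1", "APL", 250.0),
          ("person-1", "RSA", 607.75)]          # benefits paid out

def build_drm(*feeds):
    # Merge per-person resource records, as the DRM aggregates the
    # DSN and PASRAU feeds. A sketch only, not the real pipeline.
    drm = defaultdict(list)
    for feed in feeds:
        for person_id, source, amount in feed:
            drm[person_id].append((source, amount))
    return drm

drm = build_drm(dsn, pasrau)
print(sum(amount for _, amount in drm["person-1"]))  # 2057.75

The privacy stakes follow from exactly this join: one key per person, every income stream attached to it, updated every day.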

The justification for extending the surveillance used to score claimants this way is to increase the "productivity of the [algorithmic] system", in the words of the CAF's own officials[8]. Never mind the mounting testimony about the violence inflicted on the most precarious during audits[9]. Never mind, too, that the amounts recovered thanks to the algorithm are derisory compared with the volume of benefits the institution pays out[10]. Managerial logic has made the race for "audit yields" an end in itself, to which everything can be sacrificed.

That this authorization is granted on an "experimental" basis, for a one-year period, is no reassurance, given how much "experiments" have become a communication device for easing the social acceptance of digital control systems[11].

The CNIL adrift

The CNIL deliberation that approves this unprecedented strengthening of the surveillance capacities of the CAF's scoring algorithm leaves one speechless[12]. Far from opposing the project, its recommendations go no further than asking that particular attention be "paid to the transparency" of the algorithm and that… the system's "productivity gain" be the subject of a "detailed and quantified report". The violation of the privacy of the more than 30 million people living in a household receiving CAF support is thus reduced to a simple question of money…

Nowhere is there the slightest political criticism of such a system, even though for more than a year, alongside various collectives and the Defender of Rights, we have been sounding the alarm about the algorithm's disastrous human consequences. The CNIL does, on the other hand, alert the CNAF to the media risk it is running, recalling that a scandal around an algorithm similar in every respect "led the Dutch government to resign in January 2021". A caricatural illustration of the transformation of the "data watchdog" into a mere communications agency for administrations eager to profile the population.

One also notes a brief passage in which the CNIL mentions the "dramatic consequences" of the risk of "biased individual decisions", leading the authority to ask that the algorithm be "designed with care". This demonstrates, at best, the technical incompetence of its members. Remember that this algorithm does not aim to detect fraud but overpayments originating in declaration errors. And such errors are concentrated, structurally, on claimants receiving minimum social benefits, because of the complexity of the rules governing those benefits[13]. The algorithm's targeting of the most precarious is therefore not accidental but necessary to achieving its political objective: ensuring the "yield of audits". The only way to avoid such "biases" is therefore to oppose the very use of the algorithm.

Worse, in the same deliberation the CNIL endorses the use of the DRM by the pension fund (CNAV) to audit our elders… while acknowledging that the CNAV's algorithm has never "been the subject of prior formalities before it, even old ones"[14]. Which means it is probably illegal. Note in passing that the CNIL rapporteur for this deliberation is none other than MP Philippe Latombe, whose breaches of ethics we have had to report to the CNIL itself, owing to his repeated and scandalous ties to the digital-security lobby[15].

"Solidarity" at source and social control: a call for discussion

While we did not expect the CAF's director to abandon his claimant-scoring algorithm on the spot, we can only be shocked that his sole response has been to considerably strengthen its surveillance capacities. That is why we call, alongside the collectives we have been fighting with from the start, for continued mobilization against the digital control practices of social-welfare administrations, first among them the CAF.

Beyond the contempt the CAF has shown in the face of growing opposition to its audit practices, this announcement highlights the risk of generalized surveillance inherent in the government's "solidarity at source" project. Presented as the "great social measure" of the presidential term[16], the project aims to replace the declarative system with automated calculation of social benefits, by pre-filling the declarations required for access to them.

Given the great complexity of the rules for calculating and awarding certain benefits – minimum social benefits in particular – this automation requires, in turn, the deployment of the largest digital infrastructure ever created for collecting, sharing and centralizing the personal data of the French population (tax authorities, CAF, Assurance Maladie, Pôle Emploi, CNAV, Mutualités Sociales Agricoles…). By its size and its nature, this infrastructure poses a major risk in terms of surveillance and privacy.

And it is precisely here that the authorization given to the CAF to use the DRM to feed its claimant-scoring algorithm is emblematic. For the DRM is itself a cornerstone of the "solidarity at source" project[17] – its "first building block", in the Prime Minister's words – whose foundation it forms as far as the centralization of financial data is concerned[18]. While its creation raised a number of concerns at the time[19], the government sought to be reassuring: there was no question of its being used for audits; its purposes were limited to fighting non-take-up of benefits and calculating them[20]. Five years were all it took for those promises to be forgotten.

We will return to solidarity at source very soon in a dedicated article. In the meantime, we call on civil-society actors, first among them the collectives fighting poverty, to treat the government's promises with the greatest caution, and we invite them to open a collective discussion around these issues.


References
1 The president of Seine-Saint-Denis notably referred the matter to the Defender of Rights following the publication of the algorithm's source code. Our work to obtain the source code also served the teams of Le Monde and Lighthouse Reports in publishing a series of articles that had a major media impact. An EELV MP also raised the question of the algorithm during questions to the government. Thomas Piketty wrote an op-ed on the subject and ATD Quart Monde a press release. The EELV party also launched a petition on the subject, available here.
2 See the article « L’État muscle le DRM, l’arme pour lutter contre la fraude et le non-recours aux droits », published on 01/02/2024 by Emile Marzof and available here.
3 Although income is mostly updated monthly, since wages are paid once a month, we adopt here the expression used by the Cour des comptes. See chapter 9 of the 2022 report on the application of the social security financing laws, available here.
4 Decree no. 2024-50 of 29 January 2024, available here. See also CNIL deliberation no. 2023-120 of 16 November 2023, here. The decree provides for a one-year experiment. Income surveillance is also authorized for auditing farmers by the Mutualités Sociales Agricoles and elderly people by the Caisse Nationale d'Assurance Vieillesse.
5 See lines 1100 of the code of the algorithm in use between 2014 and 2018, available here: to compute monthly income, the CAF uses either quarterly income declarations (for people on the RSA/AAH) divided by 3, or annual income divided by 12. While we do not have the latest version of the algorithm, the logic should be the same.
6 The DRM database is built by aggregating two databases. The first is the « Déclarations Sociales Nominatives » (DSN) database, gathering the salary declarations made by employers. The second, the "other income" database (PASRAU), centralizes monetary social benefits (pensions, APL, family allowances, daily allowances, AAH, RSA, unemployment benefits…). The DRM database is updated daily and can be consulted in real time. In practice, it appears that data transfers from the DRM to the CAF are made monthly. The CAF can also access an API to consult the DRM in real time. See in particular chapter 9 of the Cour des comptes' October 2022 report on the application of the social security financing laws, available here.
7 More precisely, this database was created to implement the 2021 APL reform and to inform insured persons (see CNIL deliberation 2019-072 of 23 May 2019, available here, and decree no. 2019-969 of 18 September 2019, available here). The list of social benefits for which the DRM may be used for calculation purposes has grown with the recent decree allowing its use for audit purposes (see decree no. 2024-50 of 29 January 2024, available here). It can now be used, among other things, to calculate the RSA, the PPA (in-work benefit), disability pensions, the means-tested complementary health plan, retirement pensions… It is also the pillar of the collection of income data under the "solidarity at source" project. As for the fight against fraud, its use was not envisaged for detecting "at-risk" situations, even though some of this data could, a priori, be used during an audit by the social administrations (consultation of the RNCPS, the national common register of social protection…) via the exercise of the right of communication. See also the Cour des comptes' October 2022 report on the application of the social security financing laws, available here, as well as the Cour des comptes' 2021 report on the introduction of withholding tax, available here.
8 See CNIL deliberation 2023-120, available here.
9 See in particular the testimonies collected by the Changer de Cap collective, available here, and the Defender of Rights' report.
10 The "overpayments" recovered by the CAF through audits triggered by the algorithm represent 0.2% of the total benefits paid by the CAF. See this CAF document.
11 See in particular our article « Stratégies d’infiltration de la surveillance biométrique dans nos vies », available here.
12 See deliberation no. 2023-120 of 16 November 2023, available here.
13 See our various articles on the subject here and the article by Daniel Buchet, former director of risk management and anti-fraud at the CNAF, 2006, « Du contrôle des risques à la maîtrise des risques », available here.
14 While we do not yet have definite proof that the CNAV uses a profiling algorithm to audit retirees, the CNIL's deliberation refers, concerning that administration, to "profiling processing" and "a system corresponding [to the CNAF's algorithm]", implying that this is the case.
15 See also Clément Pouré's article in StreetPress, available here, which also points to the MP's ties to the far right.
16 To use the words of this Figaro article.
17 More precisely, this database was created to implement the 2021 APL reform and to inform insured persons (see CNIL deliberation 2019-072 of 23 May 2019, available here, and decree no. 2019-969 of 18 September 2019, available here). The list of social benefits for which the DRM may be used for calculation purposes has grown with the recent decree allowing its use for audit purposes (see decree no. 2024-50 of 29 January 2024, available here). It can now be used, among other things, to calculate the RSA, the PPA (in-work benefit), disability pensions, the means-tested complementary health plan, retirement pensions… It is also the pillar of the collection of income data under the "solidarity at source" project. As for the fight against fraud, its use was not envisaged for detecting "at-risk" situations, even though some of this data could, a priori, be used during an audit by the social administrations (consultation of the RNCPS, the national common register of social protection…) via the exercise of the right of communication. See also the Cour des comptes' October 2022 report on the application of the social security financing laws, available here, as well as the Cour des comptes' 2021 report on the introduction of withholding tax, available here.
18 Senate, social affairs committee, hearing of Gabriel Attal, then Minister Delegate for Public Accounts. Available here.
19 See in particular Jérôme Hourdeaux's article « Caisse d’allocations familiales : le projet du gouvernement pour ficher les allocataires », available (paywall) here.
20 Decree no. 2019-969 of 18 September 2019 on the processing of personal data relating to the income of insured persons, available here. The associated CNIL deliberation is available here.

Khrys'presso for Monday 19 February 2024

By: Khrys
19 February 2024 at 01:42

Like every Monday, a look in the rearview mirror to catch up on the news you may have missed last week.


All the links listed below should, in principle, be freely accessible. If not, remember to enable your favorite JavaScript blocker or switch to "reader mode" (Firefox) ;-)

Brave New World

Special: women around the world

Special: Palestine and Israel

RIP

Special: France

Special: women in France

Special: media and power

Special: irresponsible pests running things into the ground (neoliberal-style)

Special: rollback of rights and freedoms, police violence, the rising far right…

Special: resistance

Special: GAFAM and co.

Other reads of the week

Comics/graphics/photos of the week

Videos/podcasts of the week

Nice things of the week

Find previous web reviews in the Framablog's Libre Veille category.

The articles, comments and other images that make up these « Khrys’presso » round-ups reflect my views alone (Khrys).

Responding to ShotSpotter, Police Shoot at Child Lighting Fireworks

22 March 2024 at 19:10

This post was written by Rachel Hochhauser, an EFF legal intern

We've written multiple times about the inaccurate and dangerous "gunshot detection" tool, ShotSpotter. A recent near-tragedy in Chicago adds to the growing pile of evidence that cities should drop the product.

On January 25, while responding to a ShotSpotter alert, a Chicago police officer opened fire on an unarmed child, "maybe 14 or 15" years old, in his backyard. Three officers approached the boy's house, with one asking, "What you doing bro, you good?" They heard a loud bang, later determined to be fireworks, and shot at the child. Fortunately, no physical injuries were recorded. In initial reports, police falsely claimed that they had fired at a "man" who had fired on officers.

In a subsequent assessment of the event, the Chicago Civilian Office of Police Accountability (“COPA”) concluded that “a firearm was not used against the officers.” Chicago Police Superintendent Larry Snelling placed all attending officers on administrative duty for 30 days and is investigating whether the officers violated department policies.

ShotSpotter is the largest company producing and distributing audio gunshot-detection systems for U.S. cities and police departments. Currently, it is used by 100 law enforcement agencies. The system relies on sensors positioned on buildings and lamp posts, which purportedly detect the acoustic signature of a gunshot. The information is then forwarded to humans who purportedly have the expertise to verify whether the sound was gunfire (and not, for example, a car backfiring), and to decide whether to deploy officers to the scene.
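The pipeline just described has three stages: acoustic detection at the sensor, human review, and dispatch. The following schematic Python sketch shows that flow and where it fails; every class, threshold, and name here is our own illustration, not ShotSpotter's actual system:

from dataclasses import dataclass

@dataclass
class AudioEvent:
    sensor_id: str
    gunfire_score: float  # acoustic classifier confidence, 0..1

def acoustic_stage(event, threshold=0.9):
    # Stage 1: sensor-side classification. Fireworks or a car
    # backfiring can still score above threshold, which is exactly
    # the failure mode the article describes.
    return event.gunfire_score >= threshold

def review_stage(event, reviewer_confirms):
    # Stage 2: human review of the flagged clip; also fallible.
    return reviewer_confirms

def pipeline(event, reviewer_confirms):
    # Stage 3: dispatch only if both earlier stages said "gunfire".
    if acoustic_stage(event) and review_stage(event, reviewer_confirms):
        return "dispatch officers to sensor " + event.sensor_id
    return "discard alert"

# A fireworks bang misclassified with high confidence and confirmed
# by a reviewer still ends in an armed dispatch, as in Chicago.
print(pipeline(AudioEvent("lamp-post-12", 0.97), reviewer_confirms=True))

The design choice worth noticing is that both filtering stages are judgment calls on ambiguous audio, yet the output downstream is an armed response.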

ShotSpotter claims that its technology is "97% accurate," a figure produced by its marketing department, not its engineers. The recent Chicago shooting undercuts that claim. Indeed, a 2021 study in Chicago found that, over a period of 21 months, ShotSpotter led police to act on dead-end reports more than 40,000 times. Likewise, the Cook County State's Attorney's office concluded that ShotSpotter had "minimal return on investment" and resulted in arrests for only 1% of proven shootings, according to a recent CBS report. The technology is predominantly deployed in Black and Latinx neighborhoods, contributing to the over-policing of these areas. Police responding to a ShotSpotter alert arrive at the scene expecting gunfire; they are on edge and therefore more likely to draw their firearms.

Finally, these sensors invade the right to privacy. Even in public places, people often have a reasonable expectation of privacy and therefore a legal right not to have their voices recorded. But these sound sensors risk capturing and leaking private conversations. In People v. Johnson in California, a court held such ShotSpotter recordings to be admissible evidence.

In February, Chicago's Mayor announced that the city would not be renewing its contract with ShotSpotter. Many other cities have cancelled, or are considering cancelling, their use of the tool.

This technology endangers lives, disparately impacts communities of color, and encroaches on the privacy rights of individuals. It has a history of false positives and poses clear dangers to pedestrians and residents. It is urgent that these inaccurate and harmful systems be removed from our streets.

Cops Running DNA-Manufactured Faces Through Face Recognition Is a Tornado of Bad Ideas

In keeping with law enforcement's grand tradition of taking antiquated, invasive, and oppressive technologies, making them digital, and then calling it innovation, police in the U.S. recently combined two existing dystopian technologies in a brand-new way to violate civil liberties. A police force in California recently adopted the practice of taking a DNA sample from a crime scene, running it through a service provided by US company Parabon NanoLabs that guesses what the perpetrator's face looked like, and plugging the rendered image into face recognition software to build a suspect list.

Parts of this process aren't entirely new. On more than one occasion, police forces have been found to have fed images of celebrities into face recognition software to generate suspect lists. In one case from 2017, the New York Police Department decided its suspect looked like Woody Harrelson and ran the actor’s image through the software to generate hits. Further, software provided by US company Vigilant Solutions enables law enforcement to create “a proxy image from a sketch artist or artist rendering” to enhance images of potential suspects so that face recognition software can match these more accurately.

Since 2014, law enforcement has also sought the assistance of Parabon NanoLabs—a company that claims it can create an image of a suspect's face from their DNA. Parabon NanoLabs says it built this system by training machine learning models on the DNA data of thousands of volunteers paired with 3D scans of their faces. It is currently the only company offering this kind of phenotyping, and only in concert with a forensic genetic genealogy investigation. The process has yet to be independently audited, and scientists have affirmed that predicting face shapes—particularly from DNA samples—is not possible. But this has not stopped law enforcement officers from seeking to use it, or from running these fabricated images through face recognition software.

Simply put: police are using DNA to create a hypothetical and not at all accurate face, then using that face as a clue on which to base investigations into crimes. Not only is this full-on dice-roll policing, it also threatens the rights, freedom, and even the life of whoever is unlucky enough to look a little bit like that artificial face.

But it gets worse.

In 2020, a detective from the East Bay Regional Park District Police Department in California asked to have a rendered image from Parabon NanoLabs run through face recognition software. This 3D rendering, called a Snapshot Phenotype Report, predicted that—among other attributes—the suspect was male, had brown eyes, and fair skin. Found in police records published by Distributed Denial of Secrets, this appears to be the first reporting of a detective running an algorithmically-generated rendering based on crime-scene DNA through face recognition software. This puts a second layer of speculation between the actual face of the suspect and the product the police are using to guide investigations and make arrests. Not only is the artificial face a guess, now face recognition (a technology known to misidentify people) will create a "most likely match" for that face.
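The compounding of guesses can be made concrete: each stage maps uncertain input to a single confident-looking output, discarding the uncertainty behind it. A hedged Python sketch of the composition; every function and data structure here is a hypothetical stand-in for illustration and reflects no vendor's actual API:

def phenotype_from_dna(dna_sample):
    # Layer 1: trait guesses from DNA. Per the scientists cited above,
    # face shape cannot reliably be predicted this way, so treat this
    # output as speculation dressed up as data.
    return {"sex": "male", "eye_color": "brown", "skin_tone": "fair"}

def render_face(traits):
    # Collapse the trait guesses into one rendered face, discarding
    # the uncertainty behind each individual guess.
    return "3d-render(" + ",".join(k + "=" + v for k, v in traits.items()) + ")"

def most_likely_match(face, gallery):
    # Layer 2: face recognition returns a single "most likely match"
    # even when no one in the gallery is actually the suspect.
    return gallery[0]  # toy behavior: someone always gets picked

lead = most_likely_match(render_face(phenotype_from_dna(b"...")),
                         ["person who merely resembles the render"])
print(lead)

The structural problem the sketch makes visible: the final "match" carries no trace of the two speculative steps that produced it, yet it is what guides the investigation.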

These technologies, and their reckless use by police forces, are an inherent threat to our individual privacy, free expression, information security, and social justice. Face recognition tech alone has an egregious history of misidentifying people of color, especially Black women, as well as failing to correctly identify trans and nonbinary people. The algorithms are not always reliable, and even if the technology somehow had 100% accuracy, it would still be an unacceptable tool of invasive surveillance, capable of identifying and tracking people on a massive scale. Combining it with fabricated 3D renderings from crime-scene DNA exponentially increases the likelihood of false arrests and exacerbates existing harms to communities that are already disproportionately over-surveilled by face recognition technology and discriminatory policing.

There are no federal rules that prohibit police forces from undertaking these actions. And despite the detective's request violating Parabon NanoLabs' terms of service, there is seemingly no way to ensure compliance. Pulling together criteria like skin tone, hair color, and gender does not yield an accurate face of a suspect, and deploying these untested algorithms without any oversight puts people at risk of becoming suspects for crimes they didn't commit. In one case from Canada, the Edmonton Police Service issued an apology after using Parabon's DNA phenotyping services to identify a suspect, acknowledging its failure to balance the harms to the Black community against the potential investigative value.

EFF continues to call for a complete ban on government use of face recognition—because otherwise these are the results. How much more evidence do lawmakers need that police cannot be trusted with this dangerous technology? How many more people need to be falsely arrested, and how many more reckless schemes like this one perpetrated, before legislators realize this is not a sustainable method of law enforcement? Cities across the United States have already banned government use of this technology, and Montana has specifically recognized a privacy interest in phenotype data. Other cities and states need to catch up, or Congress needs to act, before more people are hurt and our rights are trampled.

Lucy Parsons Labs Takes Police Foundation to Court for Open Records Requests

19 March 2024 at 18:55

The University of Georgia (UGA) School of Law’s First Amendment Clinic has filed an Open Records Request lawsuit to demand public records from the private Atlanta Police Foundation (APF). The lawsuit, filed at the behest of the Atlanta Community Press Collective and Electronic Frontier Alliance-member Lucy Parsons Labs, is seeking records relating to the Atlanta Public Safety Training Center, which activists refer to as Cop City. While the facility will be used for public law enforcement and emergency services agencies, including training on surveillance technologies, the lease is held by the APF.  

The argument is that the Atlanta Police Foundation, as the nonprofit holding the lease for facilities intended for use by government agencies, should be subject to the state Open Records Act with respect to the functions it performs for or on behalf of law enforcement agencies. Beyond the Atlanta Public Safety Training Center, the APF also manages the Atlanta Police Department's Video Surveillance Center, which integrates footage from over 16,000 public and privately held surveillance cameras across the city.

According to UGA School of Law’s First Amendment Clinic, “The Georgia Supreme Court has held that records in the custody of a private entity that relate to services or functions the entity performs for or on behalf of the government are public records under the Georgia Open Records Act.” 

Police foundations frequently operate in this space. They are private, non-profit organizations, with boards made up of representatives of corporations and law firms, that receive donations of money or equipment and then gift them to their local law enforcement agencies. These gifts often bypass council hearings or other forms of public oversight. 

Lucy Parsons Labs’ Ed Vogel said, “At the core of the struggle over the Atlanta Public Safety Training Center is democratic practice. Decisions regarding this facility should not be made behind closed doors. This lawsuit is just one piece of that. The people have a right to know.” 

You can read the lawsuit here. 

Thousands of Young People Told Us Why the Kids Online Safety Act Will Be Harmful to Minors

By Jason Kelley
March 15, 2024 at 15:37

With KOSA passed, the information i can access as a minor will be limited and censored, under the guise of "protecting me", which is the responsibility of my parents, NOT the government. I have learned so much about the world and about myself through social media, and without the diverse world i have seen, i would be a completely different, and much worse, person. For a country that prides itself in the free speech and freedom of its peoples, this bill goes against everything we stand for! - Alan, 15  

___________________

If information is put through a filter, that’s bad. Any and all points of view should be accessible, even if harmful so everyone can get an understanding of all situations. Not to mention, as a young neurodivergent and queer person, I’m sure the information I’d be able to acquire and use to help myself would be severely impacted. I want to be free like anyone else. - Sunny, 15 

 ___________________

How young people feel about the Kids Online Safety Act (KOSA) matters. It will primarily affect them, and many, many teenagers oppose the bill. Some have been calling and emailing legislators to tell them how they feel; others have been posting their concerns about the bill on social media. These teenagers have been baring their souls to explain how important social media access is to them. Yet lawmakers and civil liberties advocates, including us, have mostly been the ones talking about the bill and about what’s best for kids, and we often don’t hear from minors in these debates at all. We should be — these young voices should be essential when talking about KOSA.

So, a few weeks ago, we asked some of the young advocates fighting to stop the Kids Online Safety Act a few questions:  

- How has access to social media improved your life? What do you gain from it? 

- What would you lose if KOSA passed? How would your life be different if it was already law? 

Within a week we received over 3,000 responses. As of today, we have received over 5,000.

These answers are critical for legislators to hear. Below, you can read some of these comments, sorted into the following themes (though they often overlap):  

- KOSA Will Harm Rights That Young People Know They Ought to Have 

- KOSA Could Impact Young People’s Artistic Education and Opportunities 

- KOSA Will Hurt Young People’s Ability to Find Community Online 

- KOSA Could Seriously Hinder People’s Self-Discovery 

- KOSA Could Stop Young People from Getting Accurate News and Valuable Information 

These comments show that thoughtful young people are deeply concerned about the proposed law's fallout, and that many who would be affected think it will harm them, not help them. Over 700 of those who responded reported that they were currently sixteen or under—the ages to which KOSA’s liability provisions would apply. The average age of those who answered the survey was 20 (among those who gave their age—the question was optional, and about 60% of respondents answered it). In addition to these two questions, we also asked those taking the survey whether they were comfortable sharing their email address with any journalist who might want to speak with them, since coverage usually mentions only one or two of the young people who would be most affected. So, journalists: we have contact info for over 300 young people who would be happy to speak to you about why social media matters to them, and why they oppose KOSA.

Individually, these answers show that social media, despite its current problems, offers an overall positive experience for many, many young people. It helps people living in remote areas find connection; it helps those in abusive situations find solace and escape; it offers education in history, art, health, and world events for those who wouldn’t otherwise have it; and it helps people learn about themselves and the world around them. (Research also suggests that social media is more helpful than harmful for young people.) 

And as a whole, these answers tell a story that is 180° different from that which is regularly told by politicians and the media. In those stories, it is accepted as fact that the majority of young people’s experiences on social media platforms are harmful. But from these responses, it is clear that many, many young people also experience help, education, friendship, and a sense of belonging there—precisely because social media allows them to explore, something KOSA is likely to hinder. These kids are deeply engaged in the world around them through these platforms, and genuinely concerned that a law like KOSA could take that away from them and from other young people.  

Here are just a few of the thousands of reasons they’re worried.  

Note: We are sharing individuals’ opinions, without editing. We do not necessarily endorse them or their interpretation of KOSA.

KOSA Will Harm Rights That Young People Know They Ought to Have 

One of the most important things that would be lost is the freedom of speech - a given right that is crucial to a healthy, functioning environment. Not every speech is morally okay, but regulating what speech is deemed "acceptable" constricts people's rights; a clear violation of the First Amendment. Those who need or want to access certain information are not allowed to - not because the information will harm them or others, but for the reason that a certain portion of people disagree with the information. If the country only ran on what select people believed, we would be a bland, monotonous place. This country thrives on diversity, whether it be race, gender, sex, or any other personal belief. If KOSA was passed, I would lose my safe spaces, places where I can go to for mental health, places that make me feel more like a human than just some girl. No more would I be able to fight for ideas and beliefs I hold, nor enjoy my time on the internet either. - Anonymous, 16 

 ___________________

I, and many of my friends, grew up in an Internet where remaining anonymous was common sense, and where revealing your identity was foolish and dangerous, something only to be done sparingly, with a trusted ally at your side, meeting at a common, crowded public space like a convention or a college cafeteria. This bill spits in the face of these very practical instincts, forces you to dox yourself, and if you don’t want to be outed, you must be forced to withdraw from your communities. From your friends and allies. From the space you have made for yourself, somewhere you can truly be yourself with little judgment, where you can find out who you really are, alongside people who might be wildly different from you in some ways, and exactly like you in others. I am fortunate to have parents who are kind and accepting of who I am. I know many people are nowhere near as lucky as me. - Maeve, 25 

 ___________________ 

I couldn't do activism through social media and I couldn't connect with other queer individuals due to censorship and that would lead to loneliness, depression other mental health issues, and even suicide for some individuals such as myself. For some of us the internet is the only way to the world outside of our hateful environments, our only hope. Representation matters, and by KOSA passing queer children would see less of age appropriate representation and they would feel more alone. Not to mention that KOSA passing would lead to people being uninformed about things and it would start an era of censorship on the internet and by looking at the past censorship is never good, its a gateway to genocide and a way for the government to control. – Sage, 15 

  ___________________

Privacy, censorship, and freedom of speech are not just theoretical concepts to young people. Their rights are often already restricted, and they see the internet as a place where they can begin to learn about, understand, and exercise those freedoms. They know why censorship is dangerous; they understand why forcing people to identify themselves online is dangerous; they know the value of free speech and privacy, and they know what they’ve gained from an internet that doesn’t have guardrails put up by various government censors.  

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

KOSA Could Impact Young People’s Artistic Education and Opportunities 

I found so many friends and new interests from social media. Inspirations for my art I find online, like others who have an art style I admire, or models who do poses I want to draw. I can connect with my friends, send them funny videos and pictures. I use social media to keep up with my favorite YouTubers, content creators, shows, books. When my dad gets drunk and hard to be around or my parents are arguing, I can go on YouTube or Instagram and watch something funny to laugh instead. It gives me a lot of comfort, being able to distract myself from my sometimes upsetting home life. I get to see what life is like for the billions of other people on this planet, in different cities, states, countries. I get to share my life with my friends too, freely speaking my thoughts, sharing pictures, videos, etc.  
I have found my favorite YouTubers from other social media platforms like tiktok, this happened maybe about a year ago, and since then I think this is the happiest I have been in a while. Since joining social media I have become a much more open minded person, it made me interested in what others lives are like. It also brought awareness and educated me about others who are suffering in the world like hunger, poor quality of life, etc. Posting on social media also made me more confident in my art, in the past year my drawing skills have immensely improved and I’m shocked at myself. Because I wanted to make better fan art, inspire others, and make them happy with my art. I have been introduce to many styles of clothing that have helped develop my own fun clothing style. It powers my dreams and makes me want to try hard when I see videos shared by people who have worked hard and made it. - Anonymous, 15 

  ___________________

As a kid I was able to interact in queer and disabled and fandom spaces, so even as a disabled introverted child who wasn’t popular with my peers I still didn’t feel lonely. The internet is arguably a safer way to interact with other fans of media than going to cons with strangers, as long as internet safety is really taught to kids. I also get inspiration for my art and writing from things I’ve only discovered online, and as an artist I can’t make money without the internet and even minors do commissions. The issue isn’t that the internet is unsafe, it’s that internet safety isn’t taught anymore. - Rachel, 19 

  ___________________

i am an artist, and sharing my things online makes me feel happy and good about myself. i love seeing other people online and knowing that they like what i make. when i make art, im always nervous to show other people. but when i post it online i feel like im a part of something, and that im in a community where i feel that i belong. – Anonymous, 15 

 ___________________ 

Social media has saved my life, just like it has for many young people. I have found safe spaces and motivation because of social media, and I have never encountered anything negative or harmful to me. With social media I have been able to share my creativity (writing, art, and music) and thoughts safely without feeling like I'm being held back or oppressed. My creations have been able to inspire and reach so many people, just like how other people's work have reached me. Recently, I have also been able to help the library I volunteer at through the help of social media. 
What I do in life and all my future plans (career, school, volunteer projects, etc.) surrounds social media, and without it I wouldn't be able to share what I do and learn more to improve my works and life. I wouldn't be able to connect with wonderful artists, musicians, and writers like I do now. I would be lost and feel like I don't have a reason to do what I do. If KOSA is passed, I wouldn't be able to get the help I need in order to survive. I've made so many friends who have been saved because of social media, and if this bill gets passed they will also be affected. Guess what? They wouldn't be able to get the help they need either. 
If KOSA was already a law when I was just a bit younger, I wouldn't even be alive. I wouldn't have been able to reach help when I needed it. I wouldn't have been able to share my mind with the world. Social media was the reason I was able to receive help when I was undergoing abuse and almost died. If KOSA was already a law, I would've taken my life, or my abuser would have done it before I could. If KOSA becomes a law now, I'm certain that the likeliness of that happening to kids of any age will increase. – Anonymous, 15 

  ___________________

A huge number of young artists say they use social media to improve their skills, and that in many cases it was the avenue by which they discovered their interest in a type of art or music. Young people are rightfully worried that the magic moment when you first stumble upon an artist or a style that changes your entire life will become less and less common for future generations if KOSA passes. We agree: KOSA would likely lead platforms to limit young people’s opportunities to experience the unexpected, forcing their online experiences into a much smaller box under the guise of protecting them.  

Also, many young people told us they wanted to start, or were already developing, an online business—often an art business. Under KOSA, young people could have fewer opportunities in the online communities where artists share their work and build a customer base, and a harder time navigating the various communities where they can share their art.  

KOSA Will Hurt Young People’s Ability to Find Community Online 

Social media has allowed me to connect with some of my closest friends ever, probably deeper than some people in real life. i get to talk about anything i want unimpeded and people accept me for who i am. in my deepest and darkest moments, knowing that i had somewhere to go was truly more relieving than anything else. i've never had the courage to commit suicide, but still, if it weren't for social media, i probably wouldn't be here, mentally & emotionally at least. 
i'd lose the space that accepts me. i'd lose the only place where i can be me. in life, i put up a mask to appease my parents and in some cases, my friends. with how extreme the u.s. is becoming these days, i could even lose my life. i would live my days in fear. i'm terrified of how fast this country is changing and if this bill passes, saying i would fall into despair would be an understatement. people say to "be yourself", but they don't understand that if i were to be my true self tomorrow, i could be killed. – march, 14 

 ___________________ 

Without the internet, and especially the rhythm gaming community which I found through Discord, I would've most likely killed myself at 13. My time on here has not been perfect, as has anyone's but without the internet I wouldn't have been the person I am today. I wouldn't have gotten help recognizing that what my biological parents were doing to me was abuse, the support I've received for my identity (as queer youth) and the way I view things, with ways to help people all around the world and be a more mindful ally, activist, and thinker, and I wouldn't have met my mom. 
I love my chosen mom. We met at a Dance Dance Revolution tournament in April of last year and have been friends ever since. When I told her that she was the first person I saw as a mother figure in my life back in November, I was bawling my eyes out. I'm her mije, and she's my mom. love her so much that saying that doesn't even begin to express exactly how much I love her.  
I love all my chosen family from the rhythm gaming community, my older sisters and siblings, I love them all. I have a few, some I talk with more regularly than others. Even if they and I may not talk as much as we used to, I still love them. They mean so much to me. – X86, 15 

  ___________________

i spent my time in public school from ages 9-13 getting physically and emotionally abused by special ed aides, i remember a few months after i left public school for good, i saw a post online that made me realize that what i went through wasn’t normal. if it wasn’t for the internet, i wouldn’t have come to terms with my autism, i would have still hated myself due to not knowing that i was genderqueer, my mental health would be significantly worse, and i would probably still be self harming, which is something i stopped doing at 13. besides the trauma and mental health side of things, something important to know is that spaces for teenagers to hang out have been eradicated years ago, minors can’t go to malls unless they’re with their parents, anti loitering laws are everywhere, and schools aren’t exactly the best place for teenagers to hang out, especially considering queer teens who were murdered by bullies (such as brianna ghey or nex benedict), the internet has become the third space that teenagers have flocked to as a result. – Anonymous, 17 

  ___________________

KOSA is anti-community. People online don’t only connect over shared interests in art and music—they also connect over the difficult parts of their lives. Over and over again, young people told us that one of the most valuable parts of social media was learning that they were not alone in their troubles. Finding others in similar circumstances gave them a community, as well as ideas to improve their situations, and even opportunities to escape dangerous situations.  

KOSA will make this harder. As platforms limit the types of recommendations and public content they feel safe sharing with young people, those who would otherwise find communities or potential friends will not be as likely to do so. A number of young people explained that they simply would never have been able to overcome some of the worst parts of their lives alone, and they are concerned that KOSA’s passage would stop others from ever finding the help they did. 

KOSA Could Seriously Hinder People’s Self-Discovery  

I am a transgender person, and when I was a preteen, looking down the barrel of the gun of puberty, I was miserable. I didn't know what was wrong I just knew I'd rather do anything else but go through puberty. The internet taught me what that was. They told me it was okay. There were things like haircuts and binders that I could use now and medical treatment I could use when I grew up to fix things. The internet was there for me too when I was questioning my sexuality and again when my mental health was crashing and even again when I was realizing I'm not neurotypical. The internet is a crucial source of information for preteens and beyond and you cannot take it away. You cannot take away their only realistically reachable source of information for what the close-minded or undereducated adults around them don't know. - Jay, 17 

   ___________________

Social media has improved my life so much and led to how I met my best friend, I’ve known them for 6+ years now and they mean so much to me. Access to social media really helps me connect with people similar to me and that make me feel like less of an outcast among my peers, being able to communicate with other neurodivergent queer kids who like similar interests to me. Social media makes me feel like I’m actually apart of a community that won’t judge me for who I am. I feel like I can actually be myself and find others like me without being harassed or bullied, I can share my art with others and find people like me in a way I can’t in other spaces. The internet & social media raised me when my parents were busy and unavailable and genuinely shaped the way I am today and the person I’ve become. – Anonymous, 14 

   ___________________

The censorship likely to come from this bill would mean I would not see others who have similar struggles to me. The vagueness of KOSA allows for state attorney generals to decide what is and is not appropriate for children to see, a power that should never be placed in the hands of one person. If issues like LGBT rights and mental health were censored by KOSA, I would have never realized that I AM NOT ALONE. There are problems with children and the internet but KOSA is not the solution. I urge the senate to rethink this bill, and come up with solutions that actually protect children, not put them in more danger, and make them feel ever more alone. - Rae, 16 

  ___________________ 

KOSA would effectively censor anything the government deems "harmful," which could be anything from queerness and fandom spaces to anything else that deviates from "the norm." People would lose support systems, education, and in some cases, any way to find out about who they are. I'll stop beating around the bush, if it wasn't for places online, I would never have discovered my own queerness. My parents and the small circle of adults I know would be my only connection to "grown-up" opinions, exposing me to a narrow range of beliefs I would likely be forced to adopt. Any kids in positions like mine would have no place to speak out or ask questions, and anything they bring up would put them at risk. Schools and families can only teach so much, and in this age of information, why can't kids be trusted to learn things on their own? - Anonymous, 15 

   ___________________

Social media helped me escape a very traumatic childhood and helped me connect with others. quite frankly, it saved me from being brainwashed. – Milo, 16 

   ___________________

Social media introduced me to lifelong friends and communities of like-minded people; in an abusive home, online social media in the 2010s provided a haven of privacy, safety, and information. I honed my creativity, nurtured my interests and developed my identity through relating and talking to people to whom I would otherwise have been totally isolated from. Also, unrestricted internet access actually taught me how to spot shady websites and inappropriate content FAR more effectively than if censorship had been at play like it is today. 
A couple of the friends I made online, as young as thirteen, were adults; and being friends with adults who knew I was a child, who practiced safe boundaries with me yet treated me with respect, helped me recognise unhealthy patterns in predatory adults. I have befriended mothers and fathers online through games and forums, and they were instrumental in preventing me being groomed by actual pedophiles. Had it not been for them, I would have wound up terribly abused by an "in real life" adult "friend". Instead, I recognised the differences in how he was treating me (infantilising yet praising) vs how my adult friends had treated me (like a human being), and slowly tapered off the friendship and safely cut contact. 
As I grew older, I found a wealth of resources on safe sex and sexual health education online. Again, if not for these discoveries, I would most certainly have wound up abused and/or pregnant as a teenager. I was never taught about consent, safe sex, menstruation, cervical health, breast health, my own anatomy, puberty, etc. as a child or teenager. What I found online-- typically on Tumblr and written with an alarming degree of normalcy-- helped me understand my body and my boundaries far more effectively than "the talk" or in-school sex ed ever did. I learned that the things that made me panic were actually normal; the ins and outs of puberty and development, and, crucially, that my comfort mattered most. I was comfortable and unashamed of being a virgin my entire teen years because I knew it was okay that I wasn't ready. When I was ready, at twenty-one, I knew how to communicate with my partner and establish safe boundaries, and knew to check in and talk afterwards to make sure we both felt safe and happy. I knew there was no judgement for crying after sex and that it didn't necessarily mean I wasn't okay. I also knew about physical post-sex care; e.g. going to the bathroom and cleaning oneself safely. 
AGAIN, I would NOT have known any of this if not for social media. AT ALL. And seeing these topics did NOT turn me into a dreaded teenage whore; if anything, they prevented it by teaching me safety and self-care. 
I also found help with depression, anxiety, and eating disorders-- learning to define them enabled me to seek help. I would not have had this without online spaces and social media. As aforementioned too, learning, sometimes through trial of fire, to safely navigate the web and differentiate between safe and unsafe sites was far more effective without censored content. Censorship only hurts children; it has never, ever helped them. How else was I to know what I was experiencing at home was wrong? To call it "abuse"? I never would have found that out. I also would never have discovered how to establish safe sexual AND social boundaries, or how to stand up for myself, or how to handle harassment, or how to discover my own interests and identity through media. The list goes on and on and on. – June, 21 

   ___________________

One of the claims that KOSA’s proponents make is that it won’t stop young people from finding the things they already want to search for. But we read dozens and dozens of comments from people who didn’t know something about themselves until they heard others discussing it—a mental health diagnosis, their sexuality, that they were being abused, that they had an eating disorder, and much, much more.  

Censorship that stops you from browsing a library is still dangerous even if it doesn’t stop you from checking out the books you already know you want. It’s still a problem to stop young people in particular from finding new things that they didn’t know they were looking for.   

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

KOSA Could Stop Young People from Getting Accurate News and Valuable Information 

Social media taught me to be curious. It taught me caution and trust and faith and that simply being me is enough. It brought me up where my parents failed, it allowed me to look into stories that assured me I am not alone where I am now. I would be fucking dead right now if it weren't for the stories of my fellow transgender folk out there, assuring me that it gets better.  
I'm young and I'm not smart but I know without social media, myself and plenty of the people I hold dear in person and online would not be alive. We wouldn't have news of the atrocities happening overseas that the news doesn't report on, we wouldn't have mentors to help teach us where our parents failed. - Anonymous, 16 

  ___________________ 

Through social media, I've learned about news and current events that weren't taught at school or home, things like politics or controversial topics that taught me nuance and solidified my concept of ethics. I learned about my identity and found numerous communities filled with people I could socialize with and relate to. I could talk about my interests with people who loved them just as much as I did. I found out about numerous different perspectives and cultures and experienced art and film like I never had before. My empathy and media literacy greatly improved with experience. I was also able to gain skills in gathering information and proper defences against misinformation. More technically, I learned how to organize my computer and work with files, programs, applications, etc; I could find guides on how to pursue my hobbies and improve my skills (I'm a self-taught artist, and I learned almost everything I know from things like YouTube or Tumblr for free). - Anonymous, 15 

  ___________________ 

A huge portion of my political identity has been shaped by news and information I could only find on social media because the mainstream news outlets wouldn’t cover it. (Climate Change, International Crisis, Corrupt Systems, etc.) KOSA seems to be intentionally working to stunt all of this. It’s horrifying. So much of modern life takes place on the internet, and to strip that away from kids is just another way to prevent them from formulating their own thoughts and ideas that the people in power are afraid of. Deeply sinister. I probably would have never learned about KOSA if it were in place! That’s terrifying! - Sarge, 17 

  ___________________

I’ve met many of my friends from [social media] and it has improved my mental health by giving me resources. I used to have an eating disorder and didn’t even realize it until I saw others on social media talking about it in a nuanced way and from personal experience. - Anonymous, 15 

   ___________________

Many young people told us that they’re worried KOSA will result in more biased news online, and a less diverse information ecosystem. This seems inevitable—we’ve written before that almost any content could fit into the categories that politicians believe will cause minors anxiety or depression, and so carrying that content could be legally dangerous for a platform. That could include truthful news about what’s going on in the world, including wars, gun violence, and climate change. 

“Preventing and mitigating” depression and anxiety isn’t a goal of any other media outlet, and it shouldn’t be required of social media platforms. People have a right to access information—both news and opinion—in an open and democratic society, and sometimes that information is depressing or anxiety-inducing. To truly “prevent and mitigate” self-destructive behaviors, we must look beyond the media to systems that allow all humans to have self-respect, a healthy environment, and healthy relationships—not hide truthful information simply because it is disappointing.  

Young People’s Voices Matter 

While KOSA’s sponsors intend to help young people, those who responded to the survey don’t see it that way. You may have noticed that it’s impossible to fit these complex and detailed responses into single categories—many childhood abuse victims found help as well as arts education on social media; many children connected to communities they otherwise couldn’t have reached and learned something essential about themselves in doing so. Many understand that KOSA would endanger their privacy, and also know it could harm marginalized kids the most.  

In reading thousands of these comments, it becomes clear that social media was not, in itself, a solution to the issues these young people experienced. What helped them was other people. Social media was where they were able to find, and stay connected with, those friends, communities, artists, activists, and educators. When you look at it this way, of course KOSA seems absurd: social media has become an essential element of young people’s lives, and they are scared to death that if the law passes, that part of their lives will disappear. Older teens and twenty-somethings, meanwhile, worry that if the law had passed a decade ago, they never would have become the people they are today. All of these fears are reasonable.  

There were thousands more comments like those above. We hope this helps balance the conversation, because if young people’s voices are suppressed now—and if KOSA becomes law—it will be much more difficult for them to elevate their voices in the future.  

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

San Diego City Council Breaks TRUST

March 15, 2024 at 14:54

In a stunning reversal, the San Diego City Council voted earlier this year to cut many provisions of the popular Transparent & Responsible Use of Surveillance Technology (TRUST) ordinance that sought to ensure public transparency for law enforcement surveillance technologies. 

Similar to other Community Control Of Police Surveillance (CCOPS) ordinances, the TRUST ordinance was intended to ensure that each police surveillance technology would be subject to basic democratic oversight in the form of public disclosures and city council votes. The TRUST ordinance was won by a coalition of community organizations (including several members of the Electronic Frontier Alliance) responding to the surprise deployment of smart-streetlight surveillance that had not been put before the public or the city council for review.  

The TRUST ordinance was passed a year and a half ago, but law enforcement advocates immediately set up roadblocks to its implementation. Police unions, for example, insisted that some of its accountability provisions for misuse of surveillance be put on hold after passage, to ensure they didn’t conflict with union contracts. The city kept the ordinance unapplied and untested, and then, in the late summer of 2023, a little over a year after passage, the mayor proposed a package of changes that would gut it. These included exempting a long list of technologies, including ARJIS databases and record management system data storage. The changes were approved this past January.  

But use of these databases should require, for example, auditing to protect the data security of city residents. There should also be limits on how police share data with federal agencies and other law enforcement agencies, which might use that data to criminalize San Diego residents for immigration status, gender-affirming health care, or the exercise of reproductive rights that are not criminalized in the city or state. The overall TRUST ordinance still stands, but it is partly defanged, with many carve-outs for technologies that the San Diego police will never need to bring before democratically elected lawmakers and the public. 

Now, emboldened by their recent victory, opponents of the TRUST ordinance are vowing to introduce even more amendments to further erode its gains, so that San Diegans won’t have a chance to know how their local law enforcement surveils them and no democratic body will be required to consent to the technologies, new or old. The members of the TRUST Coalition are not standing down, however; they will continue to fight to defend the standing portions of the TRUST ordinance and to regain the wins for public oversight that were lost. 

As Lilly Irani, of Electronic Frontier Alliance member and TRUST Coalition member Tech Workers Coalition San Diego, has said: 

“City Council members and the mayor still have time to make this right. And we, the people, should hold our elected representatives accountable to make sure they maintain the oversight powers we currently enjoy — powers the mayor’s current proposal erodes.” 

If you live or work in San Diego, it’s important to make it clear to city officials that San Diegans don’t want to give police a blank check to harass and surveil them. Such dangerous technology needs basic transparency and democratic oversight to preserve our privacy, our speech, and our personal safety. 

The Atlas of Surveillance Removes Ring, Adds Third-Party Investigative Platforms

Running the Atlas of Surveillance, our project to map and inventory police surveillance across the United States, means experiencing emotional extremes.

Whenever we announce that we've added new data points to the Atlas, it comes with a great sense of satisfaction. That's because it almost always means that we're hundreds or even thousands of steps closer to achieving what only a few years ago would've seemed impossible: comprehensively documenting the surveillance state through our partnership with students at the University of Nevada, Reno Reynolds School of Journalism.

At the same time, it's depressing as hell. That's because it also reflects how quickly and dangerously surveillance technology is metastasizing.

We have the exact opposite feeling when we remove items from the Atlas of Surveillance. It's a little sad to see our numbers drop, but at the same time that change in data usually means that a city or county has eliminated a surveillance program.

That brings us to the biggest change in the Atlas since our launch in 2018. This week, we removed 2,530 data points: an entire category of surveillance. With the announcement from Amazon that its home surveillance company Ring will no longer facilitate warrantless requests for consumer video footage, we've decided to sunset that particular dataset.

While law enforcement agencies still maintain accounts on Ring's Neighbors social network, it now seems to serve as a communications tool, a function on par with services like Nixle and Citizen, which we currently don't capture in the Atlas. That's not to say law enforcement won't be gathering footage from Ring cameras: they will, through legal process or by directly asking residents to give them access via the Fusus platform. But that type of surveillance doesn't result from merely having a Neighbors account, which is what our data documented; agencies without accounts can use the same methods to obtain footage. You can still find out which agencies maintain camera registries through the Atlas. 

Ring's decision was a huge victory – and the exact outcome EFF and other civil liberties groups were hoping for. It also has opened up our capacity to track other surveillance technologies growing in use by law enforcement. If we were going to remove a category, we decided we should add one too.

Atlas of Surveillance users will now see a new type of technology: Third-Party Investigative Platforms, or TPIPs. Common TPIP products include Thomson Reuters CLEAR, LexisNexis Accurint Virtual Crime Center, TransUnion TLOxp, and SoundThinking CrimeTracer (formerly Coplink X from Forensic Logic). These are technologies we've been watching for a while but have struggled to categorize and define. Here's the definition we've come up with:

Third-Party Investigative Platforms are cloud-based software systems that law enforcement agencies subscribe to in order to access, share, mine, and analyze various sources of investigative data. Some of the data the agencies upload themselves, but the systems also provide access to data from other law enforcement, as well as from commercial sources and data brokers. Many products offer AI features, such as pattern identification, face recognition, and predictive analytics. Some agencies employ multiple TPIPs.

We are calling this new category a beta feature in the Atlas, since we are still figuring out how best to research and compile this data nationwide. You'll find fairly comprehensive data on the use of CrimeTracer in Tennessee and Massachusetts, because both states provide the software to local law enforcement agencies statewide. Similarly, we've got a large dataset for the use of the Accurint Virtual Crime Center in Colorado, due to a statewide contract. (Big thanks to Prof. Ran Duan's Data Journalism students for working with us to compile those lists!) We've also added more than 60 other agencies around the country, and we expect that dataset to grow as we hone our research methods.
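
For readers who want to explore the new category themselves, the Atlas data can be downloaded and filtered with a few lines of code. Below is a minimal sketch (not EFF's own tooling) of pulling TPIP rows out of a CSV export and tallying vendors and states. The file name atlas.csv, the column names "Technology", "Vendor", and "State", and the exact category label are assumptions for illustration, so check them against the actual export before relying on this.

```python
# Minimal sketch: filter a hypothetical Atlas of Surveillance CSV export
# for Third-Party Investigative Platform (TPIP) entries.
# NOTE: "atlas.csv" and the column names "Technology", "Vendor", and
# "State" are assumptions for illustration, not a documented schema.
import csv
from collections import Counter

def tpip_rows(path="atlas.csv"):
    """Yield rows whose technology category matches the TPIP label."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            tech = (row.get("Technology") or "").strip().lower()
            if tech == "third-party investigative platforms":
                yield row

if __name__ == "__main__":
    rows = list(tpip_rows())
    print(f"{len(rows)} TPIP deployments found")
    # Which vendors and states dominate the dataset so far?
    print(Counter(r.get("Vendor", "unknown") for r in rows).most_common(5))
    print(Counter(r.get("State", "unknown") for r in rows).most_common(5))
```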

If you've got information on the use of TPIPs in your area, don't hesitate to reach out. You can email us at aos@eff.org, submit a tip through our online form, or file a public records request using the template that EFF and our students have developed to reveal the use of these platforms. 

We Flew a Plane Over San Francisco to Fight Proposition E. Here's Why.

February 29, 2024 at 15:19

Proposition E, which San Franciscans will be asked to vote on in the March 5 election, is so dangerous that last weekend we chartered a plane to inform our neighbors about what the ballot measure does and urge them to vote NO on it. If you were in Dolores Park, Golden Gate Park, Chinatown, or anywhere in between on Saturday, there’s a chance you saw it: a huge banner flying through the sky reading “No Surveillance State! No on Prop E.”

Despite the fact that the San Francisco Chronicle has endorsed a NO vote on Prop E, and has even quoted police who don’t find its changes useful for keeping the public safe, proponents of Prop E have raised over $1 million to push this unnecessary, ill-thought-out, and downright dangerous ballot measure.

San Francisco, Say NOPE: Vote NO on Prop E on March 5

[Image: A plane flying over the San Francisco skyline carrying a banner asking people to vote NO on Prop E]

What Does Prop E Do?

Prop E is a haphazard mess of proposals that tries to capitalize on residents’ fear of crime in an attempt to gut commonsense democratic oversight of the San Francisco Police Department (SFPD). In addition to removing certain police oversight authority from the civilian-staffed Police Commission and expanding the circumstances under which police may conduct high-speed vehicle chases, Prop E would also amend existing law passed in 2019 to protect San Franciscans from invasive, untested, or biased police surveillance technologies. Currently, if the SFPD wants to acquire a new technology, they must provide a detailed use policy to the democratically-elected Board of Supervisors, in a process that allows for public comment. The Board then votes on whether and how the police can use the technology.

Prop E guts these protective measures designed to bring communities into the conversation about public safety. If Prop E passes on March 5, then the SFPD can unilaterally use any technology they want for a full year without the Board’s approval, without publishing an official policy about how they’d use the technology, and without allowing community members to voice their concerns.

[Image: A plane flying over the San Francisco skyline carrying a banner asking people to vote NO on Prop E]

Why is Prop E Dangerous and Unnecessary?

Across the country, police often buy and deploy surveillance equipment without residents of their towns even knowing what police are using or how they’re using it. This means that dangerous technologies—technologies other cities have even banned—are being used without any transparency, accountability, or democratic control.

San Franciscans advocated for and overwhelmingly supported a law that provides them with more knowledge of, and a voice in, the technologies the police use. Under current law, if the SFPD wanted to use racist predictive policing algorithms (which U.S. Senators are currently advising the Department of Justice to stop funding) or to buy up geolocation data harvested from people’s cell phones and sold on the advertising data broker market, it would first have to let the public know and put the proposal to a vote before the city’s democratically elected governing body. Prop E would gut any meaningful democratic check on the police’s acquisition and use of surveillance technologies.

What Technology Would Prop E Allow Police to Use?

That's the thing—we don't know, and if Prop E passes, we may never know. Today, if the SFPD decides to use a piece of surveillance technology, there is a process for sharing that information with the public. Under Prop E, that process won't happen until the technology has been in use for a full year. And if police abandon a technology before the year is up, we may never find out what they tried out or how they used it. 

Even though we don't know what technologies the SFPD is eyeing, we do know what technologies other police departments have been buying in cities around the country: AI-based “predictive policing” and social media scanning tools are just two examples. And according to the City Attorney, Prop E would even enable the SFPD to outfit surveillance tools such as drones and surveillance cameras with face recognition technology. San Francisco currently bans police from using remote-controlled robots to deploy deadly force, but if Prop E passes, it would allow police to invest in technologies like taser-armed drones without any oversight or any opportunity for elected officials to block the acquisition. 

Don’t let police experiment on San Franciscans with dangerous, untested surveillance technologies. Say NOPE to a surveillance state. Vote NO on Prop E on March 5.  
