
EFF Submits Comments on FRT to Commission on Civil Rights

By Hannah Zhao
April 12, 2024, 18:06

Our faces are often exposed and, unlike passwords or pin numbers, cannot be remade. Governments and businesses, often working in partnership, are increasingly using our faces to track our whereabouts, activities, and associations. This is why EFF recently submitted comments to the U.S. Commission on Civil Rights, which is preparing a report on face recognition technology (FRT).   

In our submission, we reiterated our stance that there should be a ban on governmental use of FRT and strict regulations on private use because it: (1) is not reliable enough to be used in determinations affecting constitutional and statutory rights or social benefits; (2) is a menace to social justice as its errors are far more pronounced when applied to people of color, members of the LGBTQ+ community, and other marginalized groups; (3) threatens privacy rights; (4) chills and deters expression; and (5) creates information security risks.

Despite these grave concerns, FRT is being used by the government and law enforcement agencies with increasing frequency, and sometimes with devastating effects. At least one Black woman and five Black men have been wrongfully arrested due to misidentification by FRT: Porcha Woodruff, Michael Oliver, Nijeer Parks, Randal Reid, Alonzo Sawyer, and Robert Williams. And Harvey Murphy Jr., a white man, was wrongfully arrested due to FRT misidentification, and then sexually assaulted while in jail.

Even if FRT were accurate, or at least equally inaccurate across demographics, it would still severely impact our privacy and security. We cannot change our face, and we expose it to the mass surveillance networks already in place every day we go out in public. But doing that should not be a license for the government or private entities to make imprints of our face and retain that data, especially when that data may be breached by hostile actors.

The government should ban its own use of FRT, and strictly limit private use, to protect us from the threats posed by FRT. 

Podcast Episode: About Face (Recognition)

By Josh Richman
March 26, 2024, 03:05

Is your face truly your own, or is it a commodity to be sold, a weapon to be used against you? A company called Clearview AI has scraped the internet to gather (without consent) 30 billion images to support a tool that lets users identify people by picture alone. Though it’s primarily used by law enforcement, should we have to worry that the eavesdropper at the next restaurant table, or the creep who’s bothering you in the bar, or the protestor outside the abortion clinic can surreptitiously snap a pic of you, upload it, and use it to identify you, where you live and work, your social media accounts, and more?


Listen on Spotify, listen on Apple Podcasts, or subscribe via RSS.

(You can also find this episode on the Internet Archive and on YouTube.)

New York Times reporter Kashmir Hill has been writing about the intersection of privacy and technology for well over a decade; her book about Clearview AI’s rise and practices was published last fall. She speaks with EFF’s Cindy Cohn and Jason Kelley about how face recognition technology’s rapid evolution may have outpaced ethics and regulations, and where we might go from here. 

In this episode, you’ll learn about: 

  • The difficulty of anticipating how information that you freely share might be used against you as technology advances. 
  • How the all-consuming pursuit of “technical sweetness” — the alluring sensation of neatly and functionally solving a puzzle — can blind tech developers to the implications of that tech’s use. 
  • The racial biases that were built into many face recognition technologies.  
  • How one state's 2008 law has effectively curbed how face recognition technology is used there, perhaps creating a model for other states or Congress to follow. 

Kashmir Hill is a New York Times tech reporter who writes about the unexpected and sometimes ominous ways technology is changing our lives, particularly when it comes to our privacy. Her book, “Your Face Belongs To Us” (2023), details how Clearview AI gave facial recognition to law enforcement, billionaires, and businesses, threatening to end privacy as we know it. She joined The Times in 2019 after having worked at Gizmodo Media Group, Fusion, Forbes Magazine and Above the Law. Her writing has appeared in The New Yorker and The Washington Post. She has degrees from Duke University and New York University, where she studied journalism. 


What do you think of “How to Fix the Internet?” Share your feedback here. 

Transcript

KASHMIR HILL
Madison Square Garden, the big events venue in New York City, installed facial recognition technology in 2018, originally to address security threats. You know, people they were worried about who'd been violent in the stadium before, or perhaps the Taylor Swift model of, you know, known stalkers, wanting to identify them if they're trying to come into concerts.

But then in the last year, they realized, well, we've got this system set up. This is a great way to keep out our enemies, people that the owner, James Dolan, doesn't like, namely lawyers who work at firms that have sued him and cost him a lot of money.

And I saw this, I actually went to a Rangers game with a banned lawyer and it's, you know, thousands of people streaming into Madison Square Garden. We walk through the door, put our bags down on the security belt, and by the time we go to pick them up, a security guard has approached us and told her she's not welcome in.

And yeah, once you have these systems of surveillance set up, it goes from security threats to just keeping track of people that annoy you. And so that is the challenge of how do we control how these things get used?

CINDY COHN
That's Kashmir Hill. She's a tech reporter for the New York Times, and she's been writing about the intersection of privacy and technology for well over a decade.

She's even worked with EFF on several projects, including security research into pregnancy tracking apps. But most recently, her work has been around facial recognition and the company Clearview AI.

Last fall, she published a book about Clearview called Your Face Belongs to Us. It's about the rise of facial recognition technology. It’s also about a company that was willing to step way over the line. A line that even the tech giants abided by. And it did so in order to create a facial search engine of millions of innocent people to sell to law enforcement.

I'm Cindy Cohn, the Executive Director of the Electronic Frontier Foundation.

JASON KELLEY
And I'm Jason Kelley, EFF’s Activism Director. This is our podcast series How to Fix the Internet.

CINDY COHN
The idea behind this show is that we're trying to make our digital lives BETTER. At EFF we spend a lot of time envisioning the ways things can go wrong — and jumping into action to help when things DO go wrong online. But with this show, we're trying to give ourselves a vision of what it means to get it right.

JASON KELLEY
It's easy to talk about facial recognition as leading towards this sci-fi dystopia, but many of us use it in benign - and even helpful - ways every day. Maybe you just used it to unlock your phone before you hit play on this podcast episode.

Most of our listeners probably know that there's a significant difference between the data that's on your phone and the data that Clearview used, which was pulled from the internet, often from places that people didn't expect. Since Kash has written several hundred pages about what Clearview did, we wanted to start with a quick explanation.

KASHMIR HILL
Clearview AI scraped billions of photos from the internet -

JASON KELLEY
Billions with a B. Sorry to interrupt you, just to make sure people hear that.

KASHMIR HILL
Billions of photos from the public internet and social media sites like Facebook, Instagram, Venmo, LinkedIn. At the time I first wrote about them in January 2020, they had 3 billion faces in their database.

They now have 30 billion and they say that they're adding something like 75 million images every day. So a lot of faces, all collected without anyone's consent and, you know, they have paired that with a powerful facial recognition algorithm so that you can take a photo of somebody, you know, upload it to Clearview AI and it will return the other places on the internet where that face appears along with a link to the website where it appears.

So it's a way of finding out who someone is. You know, what their name is, where they live, who their friends are, finding their social media profiles, and even finding photos that they may not know are on the internet, where their name is not linked to the photo but their face is there.

JASON KELLEY
Wow. Obviously that's terrifying, but is there an example you might have of a way that this affects the everyday person? Could you talk about that a little bit?

KASHMIR HILL
Yeah, so with a tool like this, um, you know, if you were out at a restaurant, say, and you're having a juicy conversation, whether about your friends or about your work, and it kind of catches the attention of somebody sitting nearby, you assume you're anonymous. With a tool like this, they could take a photo of you, upload it, find out who you are, where you work, and all of a sudden understand the context of the conversation. You know, a person walking out of an abortion clinic, if there's protesters outside, they can take a photo of that person. Now they know who they are and the health services they may have gotten.

I mean, there's all kinds of different ways. You know, you go to a bar and you're talking to somebody. They're a little creepy. You never want to talk to them again. But they take your picture. They find out your name. They look up your social media profiles. They know who you are.
On the other side, you know, I do hear about people who think about this in a positive context, who are using tools like this to research people they meet on dating sites, finding out if they are who they say they are, you know, looking up their photos.

It's complicated, facial recognition technology. There are positive uses, there are negative uses. And right now we're trying to figure out what place this technology should have in our lives and, and how authorities should be able to use it.

CINDY COHN
Yeah, I think Jason's, like, ‘this is creepy’ is very widely shared, I think, by a lot of people. But you know the name of this is How to Fix the Internet. I would love to hear your thinking about how facial recognition might play a role in our lives if we get it right. Like, what would it look like if we had the kinds of law and policy and technological protections that would turn this tool into something that we would all be pretty psyched about on the main rather than, you know, worried about on the main.

KASHMIR HILL
Yeah, I mean, so some activists feel that facial recognition technology should be banned altogether. Evan Greer at Fight for the Future, you know, compares it to nuclear weapons and that there's just too many possible downsides that it's not worth the benefits and it should be banned altogether. I kind of don't think that's likely to happen just because I have talked to so many police officers who really appreciate facial recognition technology, think it's a very powerful tool that when used correctly can be such an important part of their tool set. I just don't see them giving it up.

But when I look at what's happening right now, you have these companies, not just Clearview AI, but PimEyes, FaceCheck.ID. There are public face search engines that exist now. While Clearview is limited to police use, these are on the internet. Some are even free, some require a subscription. And right now in the U.S., we don't have much of a legal infrastructure, certainly at the national level, about whether they can do that or not. But there's been a very different approach in Europe, where they say that citizens shouldn't be included in these databases without their consent. And, you know, after I revealed the existence of Clearview AI, privacy regulators in Europe, in Canada, in Australia investigated Clearview AI and said that what it had done was illegal, that they needed people's consent to put them in the databases.

So that's one way to handle facial recognition technology: you can't just throw everybody's faces into a database and make them searchable, you need to get permission first. And I think that is one effective way of handling it. Privacy regulators, actually inspired by Clearview AI, issued a warning to other AI companies saying, hey, just because there's all this information that's public on the internet, it doesn't mean that you're entitled to it. There can still be a personal interest in the data, and you may violate our privacy laws by collecting this information.

We haven't really taken that approach in the U.S. as much, with the exception of Illinois, which has this really strong law that's relevant to facial recognition technology. When we have gotten privacy laws at the state level, it says you have the right to get out of the databases. So in California, for example, you can go to Clearview AI and say, hey, I want to see my file. And if you don't like what they have on you, you can ask them to delete you. So that's a very different approach, uh, to try to give people some rights over their face. And California also requires that companies say how many of these requests they get per year. And so I looked, and in the last two years fewer than a thousand Californians have asked to delete themselves from Clearview's database, and, you know, California's population is very much bigger than that, I think, you know, 34 million people or so, and so I'm not sure how effective those laws are at protecting people at large.

CINDY COHN
Here’s what I hear from that. Our world where we get it right is one where we have a strong legal infrastructure protecting our privacy. But it’s also one where if the police want something, it doesn’t mean that they get it. It’s a world where control of our faces and faceprints rests with us, and any use needs to have our permission. That’s the Illinois law called BIPA, the Biometric Information Privacy Act, or the foreign regulators you mention.
It also means that a company like Venmo cannot just put our faces onto the public internet, and a company like Clearview cannot just copy them. Neither can happen without our affirmative permission.

I think of technologies like this as needing to have good answers to two questions. Number one, who is the technology serving - who benefits if the technology gets it right? And number two, who is harmed if the technology DOESN’T get it right?

For police use of facial recognition, the answers to both of these questions are bad. Regular people don’t benefit from the police having their faces in what has been called a perpetual line-up. And if the technology doesn’t work, people can pay a very heavy price of being wrongly arrested - as you document in your book, Kash.

But for facial recognition technology allowing me to unlock my phone and manipulate apps like digital credit cards, I benefit by having an easy way to lock and use my phone. And if the technology doesn’t work, I just use my password, so it’s not catastrophic. But how does that compare to your view of a fixed facial recognition world, Kash?

KASHMIR HILL
Well, I'm not a policymaker. I am a journalist. So I kind of see my job as: here's what has happened, here's how we got here, and here's how different people are dealing with it and trying to solve it. One thing that's interesting to me, you brought up Venmo, is that Venmo was one of the very first places that the kind of technical creator of Clearview AI, Hoan Ton-That, talked about getting faces from.

And this was interesting to me as a privacy reporter because I very much remembered this criticism that the privacy community had for Venmo that, you know, when you've signed up for the social payment site, they made everything public by default, all of your transactions, like who you were sending money to.

And there was such a big pushback saying, hey, you know, people don't realize that you're making this public by default. They don't realize that the whole world can see this. They don't understand, you know, how that could come back to be used against them. And, you know, some of the initial uses were, you know, people who were sending each other Venmo transactions and, like, putting syringes and, you know, cannabis leaves in them, and how that got used in criminal trials.

But what was interesting with Clearview is that Venmo actually had this iPhone on their homepage on Venmo.com, and they would show real transactions that were happening on the network, and it included people's profile photos and a link to their profile. So Hoan Ton-That sent this scraper to Venmo.com, and he would just hit it every few seconds and pull down the photos and the links to the profiles, and he got, you know, millions of faces this way. And he says he remembered that the privacy people were kind of annoyed about Venmo making everything public, and he said it took them years to change it, though.

JASON KELLEY
We were very upset about this.

CINDY COHN
Yeah, we had them on our, we had a little list called Fix It Already in 2019. It wasn't little, it was actually quite long, for, like, kind of major privacy and other problems in tech companies. And the Venmo one was on there, right, in 2019, I think, when we launched it. In 2021, they fixed it, but right in between was when all that scraping happened.

KASHMIR HILL
And Venmo is certainly not alone in terms of forcing everyone to make their profile photos public, you know, Facebook did that as well. But it was interesting: when I exposed Clearview AI and said, you know, here are some of the companies that they scraped from, Venmo and also Facebook and LinkedIn, Google sent Clearview cease-and-desist letters and said, hey, you know, you violated our terms of service in collecting this data, we want you to delete it. And people often ask, well, then what happened after that? And as far as I know, Clearview did not change their practices. And these companies never did anything else beyond the cease-and-desist letters.

You know, they didn't sue Clearview. Um, and so it's clear that the companies alone are not going to be protecting our data, and they've pushed us to be more public, and now that is kind of coming full circle in a way that I don't think people, when they were putting their photos on the internet, were expecting to happen.

CINDY COHN
I think we should start from the source, which is, why are they gathering all these faces in the first place, the companies? Why are they urging you to put your face next to your financial transactions? There's no need for your face to be next to a financial transaction, even in social media and other kinds of situations, there's no need for it to be public. People are getting disempowered because there's a lack of privacy protection to begin with, and the companies are taking advantage of that, and then turning around and pretending like they're upset about scraping, which I think is all they did with the Clearview thing.

Like, there's problems all the way down here. But from our perspective, the answer isn't to make scraping, which is often overly limited already, even more limited. The answer is to try to give people back control over these images.

KASHMIR HILL
And I get it, I mean, I know why Venmo wants photos. I mean, when I use Venmo and I'm paying someone for the first time, I want to see that this is the face of the person I know before I send it to, you know, @happy, you know, nappy on Venmo. So it's part of the trust, but it does seem like you could have a different architecture. So it doesn't necessarily mean that you're showing your face to the entire, you know, world. Maybe you could just be showing it to the people that you're doing transactions with.

JASON KELLEY
What we were pushing Venmo to do was what you mentioned: make it NOT public by default. And what I think is interesting about that campaign is that at the time, we were worried about one thing, you know, the ability to sort of comb through these financial transactions and get information from people. We weren't worried about, or at least I don't think we talked much about, the public photos being available. And it's interesting to me that there are so many ways that public defaults and privacy settings can impact people that we don't even know about yet, right?

KASHMIR HILL
I do think this is one of the biggest challenges for people trying to protect their privacy: it's so hard to anticipate how information that you, you know, kind of freely give at one point might be used against you or weaponized in the future as technology improves.

And so I do think that's really challenging. And I don't think that most people, when they were kind of freely putting photos on the internet, their face on the internet, were anticipating that the internet would be reorganized to be searchable by face.

So that's where I think regulating the use of the information can be very powerful. It's kind of protecting people from the mistakes they've made in the past.

JASON KELLEY
Let’s take a quick moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians. And now back to our conversation with Kashmir Hill.

CINDY COHN
So a supporter asked a question that I'm curious about too. You dove deep into the people who built these systems, not just the Clearview people, but people before them. And what did you find? Are these, like, Dr. Evil, evil geniuses who intended to, you know, build a dystopia? Or are they, you know, good folks trying to do good things who either didn't see the consequences of what they were working on or were surprised at the consequences of what they were building?

KASHMIR HILL
The book is about Clearview AI, but it's also about all the people that kind of worked to realize facial recognition technology over many decades.
The government was trying to get computers to be able to recognize human faces in Silicon Valley before it was even called Silicon Valley. The CIA was, you know, funding early engineers there to try to do it with those huge computers which, you know, in the early 1960s weren't able to do it very well.

But I kind of, like, went back and asked people that were working on this for so many years, when it was very clunky and it did not work very well, you know, were you thinking about what you were working towards, a kind of world in which everybody is easily tracked by face, easily recognizable by face? And it was just interesting. I mean, these people working on it in the ‘70s, ‘80s, ‘90s, they just said it was impossible to imagine that, because the computers were so bad at it, and we just never really thought that we'd ever reach this place where we are now, where, basically, computers are better at facial recognition than humans.

And so this was really striking to me, and I think this happens a lot, where people are working on a technology and they just want to solve that puzzle, you know, complete that technical challenge, and they're not thinking through the implications of what happens if they're successful. A philosopher of science I talked to, Heather Douglas, called this "technical sweetness."

CINDY COHN
I love that term.

KASHMIR HILL
This kind of motivation where it's just like, I need to solve this; the kind of Jurassic Park dilemma where it's like, it'd be really cool if we brought the dinosaurs back.

So that was striking to me, and all of these people that were working on this, I don't think any of them saw something like Clearview AI coming. And when I first heard about Clearview, this startup that had scraped the entire internet and kind of made it searchable by face, I was thinking there must be some, you know, technological mastermind here who was able to do this before the big companies, the Facebooks, the Googles. How did they do it first?

And what I would come to figure out is that, you know, what they did was more of an ethical breakthrough than a technological breakthrough. Companies like Google and Facebook had developed this internally, and shockingly, you know, for these companies that have released many kind of unprecedented products, they decided facial recognition technology like this was too much, and they held it back, and they decided not to release it.

And so Clearview AI was just willing to do what other companies hadn't been willing to do. Which I thought was interesting and part of why I wrote the book is, you know, who are these people and why did they do this? And honestly, they did have, in the early days, some troubling ideas about how to use facial recognition technology.

So one of the first deployments of Clearview AI, before it was called Clearview AI, was at the Deploraball, this kind of inaugural event around Trump becoming president. And they were using it because it was going to be this gathering of all these people who had supported Trump, the kind of MAGA crowd, of which some of the Clearview AI founders were part. And they were worried about being infiltrated by Antifa, which I think is how they pronounce it, and so they wanted to run a background check on ticket buyers and find out whether any of them were from the far left.

And apparently this smartchecker worked for this and they identified two people who kind of were trying to get in who shouldn't have. And I found out about this because they included it in a PowerPoint presentation that they had developed for the Hungarian government. They were trying to pitch Hungary on their product as a means of border control. And so the idea was that you could use this background check product, this facial recognition technology, to keep out people you didn't want coming into the country.

And they said that they had fine-tuned it so it would work on people that worked with the Open Society Foundations and George Soros, because they knew that Hungary's leader, Viktor Orban, was not a fan of the Soros crowd.

And so for me, I just thought this seemed kind of alarming, that you would use it to identify essentially political dissidents, democracy activists and advocates; that that was kind of where their minds went for their product when it was very early, basically still at the prototype stage.

CINDY COHN
I think that it's important to recognize these tools, like many technologies, are dual-use tools, right, and we have to think really hard about how they can be used and create laws and policies around them, because I'm not sure that you can use some kind of technological means to make sure only good guys use this tool to do good things and that bad guys don't.

JASON KELLEY
One of the things that you mentioned about sort of government research into facial recognition reminds me that shortly after you put out your first story on Clearview in January of 2020, I think, we put out a website called Who Has Your Face, which we'd been doing research for for, I don't know, four to six months or something before that. It was specifically trying to let people know which government entities had access to your, let's say, DMV photo or your passport photo for facial recognition purposes. And that's one of the great examples, I think, of how, sort of like Venmo, you put information somewhere that's, even in this case, required by law, and you don't ever expect that the FBI would be able to run facial recognition on that picture based on, like, a surveillance photo, for example.

KASHMIR HILL
So it makes me think of two things. One is, you know, as part of the book I was looking back at the history of the U.S. thinking about facial recognition technology and setting up guardrails, or, for the most part, NOT setting up guardrails.

And there was this hearing about it more than a decade ago. I think actually Jen Lynch from the EFF testified at it. And it was like 10 years ago when facial recognition technology was first getting kind of good enough to get deployed. And the FBI was starting to build a facial recognition database and police departments were starting to use these kind of early apps.

It troubles me to think about, just knowing the bias problems that facial recognition technology had at that time, that they were kind of actively using it. But lawmakers were concerned, and they were asking questions about whose photo is going to go in here. And the government representatives who were there, law enforcement, at the time they said, we're only using criminal mugshots.

You know, we're not interested in the goings about of normal Americans. We just want to be able to recognize the faces of people that we know have already had encounters with the law, and we want to be able to keep track of those people. And it was interesting to me because in the years to come that would change, you know. They started pulling in state driver's license photos in some places, and it ended up not just being criminals that were being tracked, or not always even criminals, just people who've had encounters with law enforcement where they ended up with a mugshot taken.

But that is the kind of frog boiling of, well, we'll just start out with some of these photos, and then, you know, we'll add in some state driver's license photos, and then we'll start using a company called Clearview AI that's scraped the entire internet, um, you know, everybody on the planet, in this facial recognition database.

So it just speaks to this challenge of controlling it, you know, this kind of surveillance creep, where once you start setting up the system, you just want to pull in more and more data and you want to surveil people in more and more ways.

CINDY COHN
And you tell some wonderful stories, or actually horrific stories, in the book about people who were misidentified. And the answer from the technologists is, well, we just need more data then, right? We need everybody's driver's licenses, not just mugshots, and then that way we eliminate the bias that comes from just using mugshots. Or you tell a story that I often talk about, which is, I believe the Chinese government was having a hard time with its facial recognition recognizing Black faces, and they made some deals in Africa to just wholesale get a bunch of Black faces so they could train up on it.

And, you know, to us, talking about bias in a way that doesn't really address comprehensive privacy reform, and instead talks only about bias, ends up in this technological world in which the solution is to put more people's faces into the system.

And we see this with all sorts of other biometrics where there's bias issues with the training data or the initial data.

KASHMIR HILL
Yeah. So this is something, so bias has been a huge problem with facial recognition technology for a long time. And really a big part of the problem was that they were not getting diverse training databases. And, you know, a lot of the people that were working on facial recognition technology were white people, white men, and they would make sure that it worked well on them and the other people they worked with.

And so we had, you know, technologies that just did not work as well on other people. One of those early facial recognition technology companies I talked to was in business, you know, in 2000, 2001, and its technology was actually used at the Super Bowl in Tampa in 2001 to secretly scan the faces of football fans looking for pickpockets and ticket scalpers.

That company told me that they had to pull out of a project in South Africa because they found the technology just did not work on people who had darker skin. But the activist community has brought a lot of attention to this issue that there is this problem with bias and the facial recognition vendors have heard it and they have addressed it by creating more diverse training sets.

And so now they are training their algorithms to work on different groups and the technology has improved a lot. It really has been addressed and these algorithms don't have those same kinds of issues anymore.

Despite that, you know, the handful of wrongful arrests that I've covered, where people are arrested for the crime of looking like someone else, have all involved people who are Black. One woman so far, a woman who was eight months pregnant, arrested for carjacking and robbery on a Thursday morning while she was getting her two kids ready for school.

And so, you know, even if you fix the bias problem in the algorithms, you're still going to have the issue of, well, who is this technology deployed on? Who is this used to police? And so yeah, I think it'll still be a problem. And then there's just these bigger questions of the civil liberty questions that still need to be addressed. You know, do we want police using facial recognition technology? And if so, what should the limitations be?

CINDY COHN
I think, you know, for us in thinking about this, the central issue is who's in charge of the system and who bears the cost if it's wrong. The consequences of a bad match are much more significant than just, oh gosh, the cops for a second thought I was the wrong person. That's not actually how this plays out in people's lives.

KASHMIR HILL
I don't think most people who haven't been arrested before realize how traumatic the whole experience can be. You know, I talk about Robert Williams in the book who was arrested after he got home from work, in front of all of his neighbors, in front of his wife and his two young daughters, spent the night in jail, you know, was charged, had to hire a lawyer to defend him.

Same thing with Porcha Woodruff, the woman who was pregnant: taken to jail, charged, even though the woman they were looking for had committed the crime the month before and was not visibly pregnant. I mean, it was so clear they had the wrong person. And yet, she had to hire a lawyer, fight the charges, and she wound up in the hospital after being detained all day because she was so stressed out and dehydrated.

And so yeah, when you have people that are relying too heavily on the facial recognition technology and not doing proper investigations, this can have a very harmful effect on individual people's lives.

CINDY COHN
Yeah, I mean, one of my hopes is that those of us who are involved in tech, trying to get privacy laws and other kinds of things passed, can have some knock-on effects on trying to make the criminal justice system better. We shouldn't just be coming in and talking about the technological piece, right?

Because it's all a part of a system that itself needs reform. And so I think it's important that we recognize that as well and not just try to extricate the technological piece from the rest of the system. That's why EFF has come to the position that governmental use of this is so problematic that it's difficult to imagine a world in which it's fixed.

KASHMIR HILL
In terms of laws that have been effective, we alluded to it earlier, but Illinois passed this law in 2008, the Biometric Information Privacy Act, a rare law that moved faster than the technology.

And it says if you, as a company, want to use somebody's biometrics, like their face print or their fingerprint or their voice print, you need to get their consent, or you'll be fined. And so Madison Square Garden is using facial recognition technology to keep out security threats and lawyers at all of its New York City venues: The Beacon Theater, Radio City Music Hall, Madison Square Garden.

The company also has a theater in Chicago, but they cannot use facial recognition technology to keep out lawyers there because they would need to get their consent to use their biometrics that way. So it is an example of a law that has been quite effective at kind of controlling how the technology is used, maybe keeping it from being used in a way that people find troubling.

CINDY COHN
I think that's a really important point. I think sometimes people in technology despair that law can really ever do anything, and they think technological solutions are the only ones that really work. And, um, I think it's important to point out that, like, that's not always true. And the other point that you make in your book about this that I really appreciate is the Wiretap Act, right?

Like the reason that a lot of the stuff that we're seeing is visual and not voice, you know, you can do voice prints too, just like you can do face prints, but we don't see that.

And the reason we don't see that is because we actually have very strong federal and state laws around wiretapping that prevent the collection of this kind of information except in certain circumstances. Now, I would like to see those circumstances expanded, but it still exists. And I think that recognizing that we do have legal structures that have provided us some protection, even as we work to make them better, is kind of an important thing for people who swim in tech to recognize.

KASHMIR HILL
“Laws work” is one of the themes of the book.

CINDY COHN
Thank you so much, Kash, for joining us. It was really fun to talk about this important topic.

KASHMIR HILL
Thanks for having me on. It's great. I really appreciate the work that EFF does and just talking to you all for so many stories. So thank you.

JASON KELLEY
That was a really fun conversation because I loved that book. The story is extremely interesting and I really enjoyed being able to talk to her about the specific issues that sort of we see in this story, which I know we can apply to all kinds of other stories and technical developments and technological advancements that we're thinking about all the time at EFF.

CINDY COHN
Yeah, I think that it's great to have somebody like Kashmir dive deep into something that we spend a lot of time talking about at EFF and, you know, not just facial recognition, but artificial intelligence and machine learning systems more broadly, and really give us the, the history of it and the story behind it so that we can ground our thinking in more reality. And, you know, it ends up being a rollicking good story.

JASON KELLEY
Yeah, I mean, what surprised me is that I think most of us saw that facial recognition sort of exploded really quickly, but it didn't, actually. A lot of the book is about the history of its development, and, you know, we could have been thinking about how to resolve the potential issues with facial recognition decades ago, but no one sort of expected that this would blow up in the way that it did until it kind of did.

And I really thought it was interesting that her explanation of how it blew up so fast wasn't really a technical development as much as an ethical one.

CINDY COHN
Yeah, I love that perspective, right?

JASON KELLEY
I mean, it’s a terrible thing, but it is helpful to think about, right?

CINDY COHN
Yeah, and it reminds me again of the thing that we talk about a lot, which is Larry Lessig's articulation of the four ways that you can control behavior online. There's markets, there's laws, there's norms, and there's architecture. In this system, you know, we had norms that were driven across.

The thing that Clearview did, she says, wasn't a technical breakthrough, it was an ethical breakthrough. I think it points the way towards, you know, where you might need laws.
There's also an architecture piece though. You know, if Venmo hadn't set up its system so that everybody's faces were easily made public and scrapable, you know, that architectural decision could have had a pretty big impact on how vast this company was able to scale and where they could look.

So we've got an architecture piece, we've got a norms piece, we've got a lack of laws piece. It's very clear that a comprehensive privacy law would have been very helpful here.

And then there's the other piece about markets, right? You know, when you're selling into the law enforcement market, which is where Clearview finally found purchase, that's an extremely powerful market. And it ends up distorting the other ones.

JASON KELLEY
Exactly.

CINDY COHN
Once law enforcement decides they want something... I mean, when I asked Kash, you know, what do you think about ideas about banning facial recognition? She said, well, I think law enforcement really likes it, and so I don't think it'll be banned. And what that tells us is that this particular market can trump all the other pieces, and I think we see that in a lot of the work we do at EFF as well.

You know, we need to carve out a better space such that we can actually say no to law enforcement, rather than, well, if law enforcement wants it, then we're done, and I think that's really shown by this story.

JASON KELLEY
Thanks for joining us for this episode of How to Fix the Internet.
If you have feedback or suggestions, we'd love to hear from you. Visit eff.org/podcast and click on listener feedback. While you're there, you can become a member, donate, maybe pick up some merch, and just see what's happening in digital rights this week and every week.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators.

In this episode, you heard Cult Orrin by Alex featuring Starfrosh and Jerry Spoon.

And Drops of H2O (The Filtered Water Treatment) by J.Lang, featuring Airtone.

You can find links to their music in our episode notes, or on our website at eff.org/podcast.

Our theme music is by Nat Keefe of BeatMower with Reed Mathis.

How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

We’ll see you next time.

I’m Jason Kelley.

CINDY COHN
And I’m Cindy Cohn.

Corporate Spy Tech and Inequality: 2023 Year in Review

December 24, 2023 at 12:30

Our personal data, and the ways private companies harvest and monetize it, play an increasingly powerful role in modern life. Throughout 2023, corporations continued to collect our personal data, sell it to governments, use it to draw inferences about us, and exacerbate existing structural inequalities across society. 

EFF is fighting back. Earlier this year, we filed comments with the U.S. National Telecommunications and Information Administration addressing the ways that corporate data surveillance practices cause discrimination against people of color, women, and other vulnerable groups. Thus, data privacy legislation is civil rights legislation. And we need it now.

In early October, a bad actor claimed they were selling stolen data from the genetic testing service, 23andMe. This initially included display name, birth year, sex, and some details about genetic ancestry results—of one million users of Ashkenazi Jewish descent and another 100,000 users of Chinese descent. By mid-October this expanded out to another four million accounts. It's still unclear if the thieves deliberately targeted users based on race or religion. EFF provided guidance to users about how to protect their accounts. 

When it comes to corporate data surveillance, users’ incomes can alter their threat models. Lower-income people are often less able to avoid corporate harvesting of their data: some lower-priced technologies collect more data than their pricier counterparts, while others come with malicious programs pre-installed. This year, we investigated the low-budget Dragon Touch KidzPad Y88X 10 kids’ tablet, bought from the online vendor Amazon, and revealed that it shipped with malware and pre-installed riskware. Likewise, lower-income people may suffer the most from data breaches, because it costs money and takes considerable time to freeze and monitor credit reports, and to obtain identity theft prevention services.

Disparities in whose data is collected by corporations lead to disparities in whose data is sold by corporations to government agencies. As we explained this year, even the U.S. Director of National Intelligence thinks the government should stop buying corporate surveillance data. Structural inequalities affect whose data is purchased by governments. And when government agencies have access to the vast reservoir of personal data that businesses have collected from us, bias is a likely outcome.  

This year we’ve also repeatedly blown the whistle on the ways that automakers stockpile data about how we drive—and about where self-driving cars take us. There is an active government and private market for vehicle data, including location data, which is difficult if not impossible to de-identify. Cars can collect information not only about the vehicle itself, but also about what's around the vehicle. Police have seized location data about people attending Black-led protests against police violence and racism. Further, location data can have a disparate impact on certain consumers who may be penalized for living in a certain neighborhood.

Technologies developed by businesses for governments can yield discriminatory results. Take face recognition, for example. Earlier this year, the Government Accountability Office (GAO) published a report highlighting the inadequate and nonexistent rules for how federal agencies use face recognition, underlining what we’ve said over and over again: governments cannot be trusted with this flawed and dangerous technology. The technology all too often does not work—particularly when applied to Black people and women. In February, Porcha Woodruff was arrested by six Detroit police officers on charges of robbery and carjacking after face recognition technology incorrectly matched an eight-year-old image of her (from a police database) with video footage of a suspect. The charges were dropped and she has since filed a lawsuit against the City of Detroit. Her lawsuit joins two others against the Detroit police for incorrect face recognition matches.

Developments throughout 2023 affirm that we need to reduce the amount of data that corporations can collect and sell in order to end the disparate impacts caused by corporate data processing. EFF has repeatedly called for such privacy legislation. To be effective, it must include effective private enforcement, and prohibit “pay for privacy” schemes that hurt lower-income people. In the U.S., states have been more proactive and more willing to consider such protections, so legislation at the federal level must not preempt state legislation. The pervasive ecosystem of data surveillance is a civil rights problem, and as we head into 2024 we must continue thinking about privacy and civil rights as parts of the same problem. 

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

FTC’s Rite Aid Ruling Rightly Renews Scrutiny of Face Recognition

December 20, 2023 at 17:10

The Federal Trade Commission on Tuesday announced action against the pharmacy chain Rite Aid for its use of face recognition technology in hundreds of stores. The regulator found that Rite Aid deployed a massive, error-riddled surveillance program, chose vendors that could not properly safeguard the personal data the chain hoarded, and attempted to keep it all under wraps. Under a proposed settlement, Rite Aid can't operate a face recognition system in any of its stores for five years.

EFF advocates for laws that require companies to get clear, opt-in consent from any person before scanning their faces. Rite Aid's program, as described in the complaint, would violate such laws. The FTC’s action against Rite Aid illustrates many of the problems we have raised about face recognition—including how data collected for face recognition systems is often insufficiently protected, and how systems are often deployed in ways that disproportionately hurt BIPOC communities.

The FTC’s complaint outlines a face recognition system that often relied on "low-quality" images to identify so-called “persons of interest,” and notes that the chain instructed staff to ask such customers to leave its stores.

From the FTC's press release on the ruling:

According to the complaint, Rite Aid contracted with two companies to help create a database of images of individuals—considered to be “persons of interest” because Rite Aid believed they engaged in or attempted to engage in criminal activity at one of its retail locations—along with their names and other information such as any criminal background data. The company collected tens of thousands of images of individuals, many of which were low-quality and came from Rite Aid’s security cameras, employee phone cameras and even news stories, according to the complaint.

Rite Aid's system falsely flagged numerous customers, according to the complaint, including an 11-year-old girl whom employees searched based on a false-positive result. Another unnamed customer quoted in the complaint told Rite Aid, "Before any of your associates approach someone in this manner they should be absolutely sure because the effect that it can [have] on a person could be emotionally damaging.... [E]very black man is not [a] thief nor should they be made to feel like one.”

Even if Rite Aid's face recognition technology had been completely accurate (and it clearly was not), the way the company deployed it was wrong. Rite Aid scanned everyone who came into certain stores and matched them against an internal list. Any company that does this assumes the guilt of everyone who walks in the door. And, as we have pointed out time and again, that assumption of guilt doesn't fall on all customers equally: People of color, who are already historically over-surveilled, are the ones who most often find themselves under new surveillance.

As the FTC explains in its complaint (emphasis added):

"[A]lthough approximately 80 percent of Rite Aid stores are located in plurality-White (i.e., where White people are the single largest group by race or ethnicity) areas, about 60 percent of Rite Aid stores that used facial recognition technology were located in plurality non-White areas. As a result, store patrons in plurality-Black, plurality-Asian, and plurality-Latino areas were more likely to be subjected to and surveilled by Rite Aid’s facial recognition technology."

The FTC's ruling rightly pulls the many problems with facial recognition into the spotlight. It also proposes remedies to many ways Rite Aid failed to ensure its system was safe and functional, failed to train employees on how to interpret results, and failed to evaluate whether its technology was harming its customers.

We encourage lawmakers to go further. They must enact laws that require businesses to get opt-in consent before collecting or disclosing a person’s biometrics. This will ensure that people can make their own decisions about whether to participate in face recognition systems and know in advance which companies are using them. 

The State of Chihuahua Is Building a 20-Story Tower in Ciudad Juarez to Surveil 13 Cities–and Texas Will Also Be Watching

EFF Special Advisor Paul Tepper and EFF intern Michael Rubio contributed research to this report.

Chihuahua state officials and a notorious Mexican security contractor broke ground last summer on the Torre Centinela (Sentinel Tower), an ominous, 20-story high-rise in downtown Ciudad Juarez that will serve as the central node of a new AI-enhanced surveillance regime. With tentacles reaching into 13 Mexican cities and a data pipeline that will channel intelligence all the way to Austin, Texas, the monstrous project will be unlike anything seen before along the U.S.-Mexico border.

And that's saying a lot, considering the last 30-plus years of surging technology on the U.S. side of the border. 

The Torre Centinela will stand in a former parking lot next to the city's famous bullring, a mere half-mile south of where migrants and asylum seekers have camped and protested at the Paso del Norte International Bridge leading to El Paso. But its reach goes much further: the Torre Centinela is just one piece of the Plataforma Centinela (Sentinel Platform), an aggressive new technology strategy developed by Chihuahua's Secretaría de Seguridad Pública Estatal (State Secretariat of Public Security, or SSPE) in collaboration with the company Seguritech.

With its sprawling infrastructure, the Plataforma Centinela will create an atmosphere of surveillance and data-streams blanketing the entire region. The plan calls for nearly every cutting-edge technology system marketed at law enforcement: 10,000 surveillance cameras, face recognition, automated license plate recognition, real-time crime analytics, a fleet of mobile surveillance vehicles, drone teams and counter-drone teams, and more.

If the project comes together as advertised in the Avengers-style trailer that SSPE released to influence public opinion, law enforcement personnel on site will be surrounded by wall-to-wall monitors (140 meters of screens per floor), while 2,000 officers in the field will be able to access live intelligence through handheld tablets.

Texas law enforcement will also have "eyes on this side of the border" via the Plataforma Centinela, Chihuahua Governor Maru Campos publicly stated last year. Texas Governor Greg Abbott signed a memorandum of understanding confirming the partnership.

Plataforma Centinela will transform public life and threaten human rights in the borderlands in ways that aren't easy to assess. Regional newspapers and local advocates, especially Norte Digital and Frente Político Ciudadano para la Defensa de los Derechos Humanos (FPCDDH), have raised significant concerns about the project, pointing to a low likelihood of success and a high potential for waste and abuse.

"It is a myopic approach to security; the full emphasis is placed on situational prevention, while the social causes of crime and violence are not addressed," FPCDDH member and analyst Victor M. Quintana tells EFF, noting that the Plataforma Centinela's budget is significantly higher than what the state devotes to social services. "There are no strategies for the prevention of addiction, neither for rebuilding the fabric of society nor attending to dropouts from school or young people at risk, which are social causes of insecurity."

Instead of providing access to unfiltered information about the project, the State of Chihuahua has launched a public relations blitz. In addition to press conferences and the highly-produced cinematic trailer, SSPE recently hosted a "Pabellón Centinela" (Sentinel Pavilion), a family-friendly carnival where the public was invited to check out a camera wall and drones, while children played with paintball guns, drove a toy ATV patrol vehicle around a model city, and colored in illustrations of a data center operator.

Behind that smoke screen, state officials are doing almost everything they can to control the narrative around the project and avoid public scrutiny.

According to news reports, the SSPE and the Secretaría de Hacienda (Finance Secretary) have simultaneously deemed most information about the project as classified and left dozens of public records requests unanswered. The Chihuahua State Congress also rejected a proposal to formally declassify the documents and stymied other oversight measures, including a proposed audit. Meanwhile, EFF has submitted public records requests to several Texas agencies and all have claimed they have no records related to the Plataforma Centinela.

This is all the more troubling considering the relationship between the state and Seguritech, a company whose business practices in 22 other jurisdictions have been called into question by public officials.

What we can be sure of is that the Plataforma Centinela project will serve as a proof of concept for the kind of panopticon surveillance governments can get away with in both North America and Latin America.

What Is the Plataforma Centinela?

High-tech surveillance centers are not a new phenomenon on the Mexican side of the border. These facilities tend to use "C" designations to describe their functions and purposes. EFF has mapped out dozens of these in the six Mexican border states.

A screen capture of a Google Map of Mexican C-Centers

They include:

  • C4 (Centro de Comunicación, Cómputo, Control y Comando) (Center for Communication, Computation, Control, and Command), 
  • C5 (Centro de Coordinación Integral, de Control, Comando, Comunicación y Cómputo del Estado) (Center for Integral Coordination of Control, Command, Communication, and State Computation), 
  • C5i (Centro de Control, Comando, Comunicación, Cómputo, Coordinación e Inteligencia) (Center for Control, Command, Communication, Computation, Coordination, and Intelligence).

Typically, these centers function as a cross between a 911 call center and a real-time crime center, with operators handling emergency calls, analyzing crime data, and controlling a network of surveillance cameras via a wall of monitors. In some cases, the Cs may appear in a different order or stand for slightly different words. For example, some C5s alternately stand for "Centros de Comando, Control, Comunicación, Cómputo y Calidad" (Centers for Command, Control, Communication, Computation and Quality). These facilities also exist in other parts of Mexico. The number of Cs often indicates scale and responsibilities, but more often than not it seems to be a political or marketing designation.

The Plataforma Centinela, however, goes far beyond the scope of previous projects and will in fact be known as the first C7 (Centro de Comando, Cómputo, Control, Coordinación, Contacto Ciudadano, Calidad, Comunicaciones e Inteligencia Artificial) (Center for Command, Computation, Control, Coordination, Citizen Contact, Quality, Communications and Artificial Intelligence). The Torre Centinela in Ciudad Juarez will serve as the nerve center, with more than a dozen sub-centers throughout the state. 

According to statistics that Gov. Campos disclosed as part of negotiations with Texas and news reports, the Plataforma Centinela will include: 

    • 1,791 automated license plate readers. These are cameras that photograph vehicles and their license plates, then upload that data along with the time and location where the vehicles were seen to a massive searchable database. Law enforcement can also create lists of license plates to track specific vehicles and receive alerts when those vehicles are seen. 
    • 4,800 fixed cameras. These are your run-of-the-mill cameras, positioned to permanently surveil a particular location from one angle.  
    • 3,065 pan-tilt-zoom (PTZ) cameras. These are more sophisticated cameras. While they are affixed to a specific location, such as a street light or a telephone pole, these cameras can be controlled remotely. An operator can swivel the camera around 360-degrees and zoom in on subjects. 
    • 2,000 tablets. Officers in the field will be issued handheld devices for accessing data directly from the Plataforma Centinela.
    • 102 security arches. This is a common form of surveillance in Mexico, but not in the United States. These are structures built over highways and roads to capture data on passing vehicles and their passengers. 
    • 74 drones (Unmanned Aerial Vehicles/UAVs). While the Chihuahua government has not disclosed what surveillance payload will be attached to these drones, it is common for law enforcement drones to deploy video, infrared, and thermal imaging technology.
    • 40 mobile video surveillance trailers. While details on these systems are scant, it is likely these are camera towers that can be towed to and parked at targeted locations. 
    • 15 anti-drone systems. These systems are designed to intercept and disable drones operated by criminal organizations.
    • Face recognition. The project calls for the application of "biometric filters" to be applied to camera feeds "to assist in the capture of cartel leaders," and the collection of migrant biometrics. Such a system would require scanning the faces of the general public.
    • Artificial intelligence. So far, the administration has thrown around the term AI without fully explaining how it will be used. Typically, law enforcement agencies have used this kind of technology to "predict" where crime might occur, identify individuals most likely to be connected to crime, and surface potential connections between suspects that would not have been obvious to a human observer. All of these technologies have a propensity for making errors or exacerbating existing bias. 
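To make the license plate reader entry above concrete, here is a minimal sketch, in Python, of how such a system works in principle: every read is stored in a searchable database, and a "hot list" of flagged plates triggers alerts. This is an illustration under stated assumptions, not a description of the Plataforma Centinela's actual software; the names (PlateRead, ingest, hot_list) are hypothetical.

```python
# Hypothetical sketch of an ALPR hot-list check; not any vendor's real system.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PlateRead:
    plate: str         # license plate text produced by the camera's OCR
    camera_id: str     # which fixed camera or security arch made the read
    seen_at: datetime  # when the vehicle was seen

# Every read is retained and searchable later by plate, camera, or time,
# regardless of whether it matches anything.
database: list[PlateRead] = []

# The hot list is simply a set of plates flagged for real-time alerts.
hot_list = {"ABC123"}

def ingest(read: PlateRead) -> bool:
    """Store the read; return True if it should trigger a hot-list alert."""
    database.append(read)
    return read.plate in hot_list
```

Note that the privacy concern lives in the retention step, not the matching step: every passing vehicle is logged whether or not it is on the hot list, which is why such databases are so difficult to de-identify.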

As of May, 60% of the Plataforma Centinela camera network had been installed, with an expected completion date of December, according to Norte Digital. However, the cameras were already being used in criminal investigations. 

All combined, this technology amounts to an unprecedented expansion of the surveillance state in Latin America, as SSPE brags in its promotional material. The threat to privacy may also be unprecedented: creating cities where people can no longer move freely in their communities without being watched, scanned, and tagged.

But that's assuming the system functions as advertised—and based on the main contractor's history, that's anything but guaranteed. 

Who Is Seguritech?

The Plataforma Centinela project is being built by the megacorporation Seguritech, which has signed deals with more than a dozen government entities throughout Mexico. As of 2018, the company received no-bid contracts in at least 10 Mexican states and cities, which means it was able to sidestep the accountability process that requires companies to compete for projects.

And when it comes to the Plataforma Centinela, the company isn't simply a contractor: It will actually have ownership over the project, the Torre Centinela, and all its related assets, including cameras and drones, until August 2027.

That's what SSPE Secretary Gilberto Loya Chávez told the news organization Norte Digital, but the terms of the agreement between Seguritech and Chihuahua's administration are not public. The SSPE's Transparency Committee decided to classify the information "concerning the procedures for the acquisition of supplies, goods, and technology necessary for the development, implementation, and operation of the Plataforma Centinela" for five years.

In spite of the opacity shrouding the project, journalists have surfaced some information about the investment plan. According to statements from government officials, the Plataforma Centinela will cost 4.2 billion pesos, with Chihuahua's administration paying regular installments to the company every three months (Chihuahua's governor had previously said that these would be yearly payments in the amount of 700 million to 1 billion pesos per year). According to news reports, when the payments are completed in 2027, the ownership of the platform's assets and infrastructure are expected to pass from Seguritech to the state of Chihuahua.

The Plataforma Centinela project marks a new pinnacle in Seguritech's trajectory as a Mexican security contractor. Founded in 1995 as a small business selling neighborhood alarms, SeguriTech Privada S.A. de C.V. became a highly profitable brand, and currently operates in five areas: security, defense, telecommunications, aeronautics, and construction. According to Zeta Tijuana, Seguritech also secures contracts through its affiliated companies, including Comunicación Segura (focused on telecommunications and security) and Picorp S.A. de C.V. (focused on architecture and construction, including prisons and detention centers). Zeta also identified another Seguritech company, Tres10 de C.V., as the contractor named in various C5i projects.

Thorough reporting by Mexican outlets such as Proceso, Zeta Tijuana, Norte Digital, and Zona Free paints an unsettling picture of Seguritech's activities over the years.

Former President Felipe Calderón's war on drug trafficking, initiated during his 2006-2012 term, marked an important turning point for surveillance in Mexico. As Proceso reported, Seguritech began to secure major government contracts beginning in 2007, receiving its first billion-peso deal in 2011 with Sinaloa's state government. In 2013, avoiding the bidding process, the company secured a 6-billion peso contract assigned by Eruviel Ávila, then governor of the state of México (or Edomex, not to be confused with the country of Mexico). During Enrique Peña Nieto's years as Edomex's governor, and especially later, as Mexico's president, Seguritech secured its status among Mexico's top technology contractors.

According to Zeta Tijuana, during the six years that Peña Nieto served as president (2012-2018), the company monopolized contracts for the country's main surveillance and intelligence projects, specifically the C5i centers. As Zeta Tijuana writes:

"More than 10 C5i units were opened or began construction during Peña Nieto's six-year term. Federal entities committed budgets in the millions, amid opacity, violating parliamentary processes and administrative requirements. The purchase of obsolete technological equipment was authorized at an overpriced rate, hiding information under the pretext of protecting national security."

Zeta Tijuana further cites records from the Mexican Institute of Industrial Property showing that Seguritech registered the term "C5i" as its own brand, an apparent attempt to make it more difficult for other surveillance contractors to provide services under that name to the government.

Despite promises from government officials that these huge investments in surveillance would improve public safety, the number of violent deaths in the country increased during Peña Nieto's term in office.

"What is most shocking is how ineffective Seguritech's system is," says Quintana, the spokesperson for FPCDDH. By his analysis, Quintana says, "In five out of six states where Seguritech entered into contracts and provided security services, the annual crime rate shot up in proportions ranging from 11% to 85%."

Seguritech has also been criticized for inflated prices, technical failures, and deploying obsolete equipment. According to Norte Digital, only 17% of surveillance cameras were working by the end of the company's contract with Sinaloa's state government. Proceso notes the rise of complaints about the malfunctioning of cameras in Cuauhtémoc Delegation (a borough of Mexico City) in 2016. Zeta Tijuana reported on the disproportionate amount the company charged for installing 200 obsolete 2-megapixel cameras in 2018.

Seguritech's track record led to formal complaints and judicial cases against the company. The company has responded to this negative attention by hiring services to take down and censor critical stories about its activities published online, according to investigative reports published as part of the Global Investigative Journalism Network's Forbidden Stories project.

Yet none of this information dissuaded Chihuahua's governor, Maru Campos, from signing a new no-bid contract with Seguritech to develop the Plataforma Centinela project.

A Cross-Border Collaboration

The Plataforma Centinela project presents a troubling escalation in cross-border partnerships between states, one that cuts out each nation's respective federal government. In April 2022, the states of Texas and Chihuahua signed a memorandum of understanding to collaborate on reducing "cartels' human trafficking and smuggling of deadly fentanyl and other drugs" and to "stop the flow of migrants from over 100 countries who illegally enter Texas through Chihuahua."

A slide describing the "New Border Model"

While much of the agreement centers around cargo at the points of entry, the document also specifically calls out the various technologies that make up the Plataforma Centinela. In attachments to the agreement, Gov. Campos promises Chihuahua is "willing to share that information with Texas State authorities and commercial partners directly."

During a press conference announcing the MOU, Gov. Abbott declared, "Governor Campos has provided me with the best border security plan that I have seen from any governor from Mexico." He held up a three-page outline and a slide, which were also provided to the public, but also referenced the existence of "a much more extensive detailed memo that explains in nuance" all the aspects of the program.

Abbott went on to read out a summary of Plataforma Centinela, adding, "This is a demonstration of commitment from a strong governor who is working collaboratively with the state of Texas."

Then Campos, in response to a reporter's question, added: "We are talking about sharing information and intelligence among states, which means the state of Texas will have eyes on this side of the border." She added that the data collected through the Plataforma Centinela will be analyzed by both the states of Chihuahua and Texas.

Abbott provided an example of one way the collaboration will work: "We will identify hotspots where there will be an increase in the number of migrants showing up because it's a location chosen by cartels to try to put people across the border at that particular location. The Chihuahua officials will work in collaboration with the Texas Department of Public Safety, where DPS has identified that hotspot and the Chihuahua side will work from a law enforcement side to disrupt that hotspot."

In order to learn more about the scope of the project, EFF sent public records requests to several Texas agencies, including the Governor's Office, the Texas Department of Public Safety, the Texas Attorney General's Office, the El Paso County Sheriff, and the El Paso Police Department. Not one of the agencies produced records related to the Plataforma Centinela project.

Meanwhile, Texas is further beefing up its efforts to use technology at the border, including by enacting new laws that formally allow the Texas National Guard and State Guard to deploy drones at the border and authorize the governor to enter compacts with other states to share intelligence and resources to build "a comprehensive technological surveillance system" on state land to deter illegal activity at the border. In addition to the MOU with Chihuahua, Abbott also signed similar agreements with the states of Nuevo León and Coahuila in 2022.

Two Sides, One Border

The Plataforma Centinela has enormous potential to violate the rights of one of the largest cross-border populations along the U.S.-Mexico border. But while law enforcement officials are eager to collaborate and traffic data back and forth, advocacy efforts around surveillance too often are confined to their respective sides.

The Spanish-language press in Mexico has devoted significant resources to investigating the Plataforma Centinela and raising the alarm over its lack of transparency and accountability, as well as its potential for corruption. Yet, the project has received virtually no attention or scrutiny in the United States. 

Fighting back against surveillance of cross-border communities requires cross-border efforts. EFF supports the efforts of advocacy groups in Ciudad Juárez and other regions of Chihuahua to expose the mistakes the Chihuahua government is making with the Plataforma Centinela and to call out its mammoth surveillance approach for failing to address the root social issues. We also salute the efforts by local journalists to hold the government accountable. However, U.S.-based journalists, activists, and policymakers—many of whom have done an excellent job surfacing criticism of Customs and Border Protection's so-called virtual wall—must also turn their attention to the massive surveillance apparatus building up on the Mexican side.

In reality, there is no Mexican surveillance and no U.S. surveillance. It's one massive surveillance monster that, ironically, in the name of border enforcement, recognizes no borders itself.

GAO Report Shows the Government Uses Face Recognition with No Accountability, Transparency, or Training

Federal agents are using face recognition software without training, policies, or oversight, according to the Government Accountability Office (GAO).

The government watchdog issued yet another report this month about the dangerously inadequate and nonexistent rules for how federal agencies use face recognition, underlining what we’ve already known: the government cannot be trusted with this flawed and dangerous technology.

The GAO review covered seven agencies within the Department of Homeland Security (DHS) and Department of Justice (DOJ), which together account for more than 80 percent of all federal officers and a majority of face recognition searches conducted by federal agents.

Across each of the agencies, GAO found that most law enforcement officers using face recognition have no training before being given access to the powerful surveillance tool. No federal laws or regulations mandate specific face recognition training for DHS or DOJ employees, and Homeland Security Investigations (HSI) and the Marshals Service were the only agencies reviewed that currently require training specific to face recognition. Though each agency has its own general policies on handling personally identifiable information (PII), like the facial images used for face recognition, none of the seven agencies included in the GAO review fully complied with them.

Thousands of face recognition searches have been conducted by federal agents without training or policies. In the period GAO studied, at least 63,000 searches had happened, but this number is a known undercount. A complete count of face recognition use is not possible: the number of federal agents with access to face recognition, the number of searches conducted, and the reasons for those searches are unknown, because some systems used by the Federal Bureau of Investigation (FBI) and Customs and Border Protection (CBP) don't track these numbers.

Our faces are unique and mostly permanent — people don't usually just get a new one — and face recognition technology, particularly when used by law enforcement and government, puts many of our important rights in jeopardy. Privacy, free expression, information security, and social justice are all at risk. The technology facilitates covert mass surveillance of the places we frequent and the people we know. It can be used to make judgments about how we feel and behave. Mass adoption of face recognition means being able to track people automatically as they go about their day visiting doctors, lawyers, houses of worship, as well as friends and family. It also means that law enforcement could, for example, fly a drone over a protest against police violence and walk away with a list of everyone in attendance. Either instance would create a chilling effect wherein people would be hesitant to attend protests or visit certain friends or romantic partners knowing there would be a permanent record of it.

GAO has issued multiple reports on federal agencies’ use of face recognition and, in each, they have found that agencies don’t track system access or reliably train their agents. The office has repeatedly outlined recommendations for how federal agencies should develop guidance for face recognition use that takes into account the civil rights and privacy issues created by the technology. GAO’s latest report makes clear that law enforcement agencies continue to fail to heed these warnings.

Face recognition is intended to facilitate tracking and indexing individuals for future and real-time reference, a system that can be easily abused. Even if it were 100% accurate — and it isn't — face recognition would still be too invasive and threatening to our civil rights and civil liberties to use. The federal government should immediately put guardrails around who can use this technology and for what — and, ultimately, cease its use altogether.
