The FBI is Playing Politics with Your Privacy

A bombshell report from WIRED reveals that two days after the U.S. Congress renewed and expanded the mass-surveillance authority Section 702 of the Foreign Intelligence Surveillance Act, the deputy director of the Federal Bureau of Investigation (FBI), Paul Abbate, sent an email imploring agents to “use” Section 702 to search the communications of Americans collected under this authority “to demonstrate why tools like this are essential” to the FBI’s mission.

In other words, an agency that has repeatedly abused this exact authority (with 3.4 million warrantless searches of Americans’ communications in 2021 alone) thinks that the answer to its misuse of mass surveillance of Americans is to do more of it, not less. And it signals that the FBI believes it should do more surveillance, not because of any pressing national security threat, but because the FBI has an image problem.

The American people should feel a fiery volcano of white hot rage over this revelation. During the recent fight over Section 702’s reauthorization, we all had to listen to the FBI and the rest of the Intelligence Community downplay their huge number of Section 702 abuses (but, never fear, they were fixed by drop-down menus!). The government also trotted out every monster of the week in incorrect arguments seeking to undermine the bipartisan push for crucial reforms. Ultimately, after fighting to a draw in the House, Congress bent to the government’s will: it not only failed to reform Section 702, but gave the government authority to use Section 702 in more cases.

Now, immediately after extracting this expanded power and fighting off sensible reforms, the FBI’s leadership is urging the agency to “continue to look for ways” to make more use of this controversial authority to surveil Americans, albeit with the fig leaf that it must be “legal.” And not because of an identifiable, pressing threat to national security, but to “demonstrate” the importance of domestic law enforcement accessing the pool of data collected via mass surveillance. This is an insult to everyone who cares about accountability, civil liberties, and our ability to have a private conversation online. It also raises the question of whether the FBI is interested in keeping us safe or in merely justifying its own increased powers. 

Section 702 allows the government to conduct surveillance inside the United States by vacuuming up digital communications so long as the surveillance is directed at foreigners currently located outside the United States. Section 702 prohibits the government from intentionally targeting Americans. But because we live in a globalized world where Americans constantly communicate with people (and services) outside the United States, the government routinely acquires millions of innocent Americans’ communications “incidentally” under Section 702 surveillance. Not only does the government acquire these communications without a probable cause warrant; so long as it can make out some connection to FISA’s very broad definition of “foreign intelligence,” it can then conduct warrantless “backdoor searches” of individual Americans’ incidentally collected communications. Section 702 creates an end run around the Constitution for the FBI, and with the Abbate memo, its agents are being urged to use it as much as they can.

The recent reauthorization of Section 702 also expanded this mass surveillance authority still further, expanding in turn the FBI’s ability to exploit it. To start, it substantially increased the scope of entities that the government could require to turn over Americans’ data en masse under Section 702. This provision is written so broadly that it potentially reaches any person or company with “access” to “equipment” on which electronic communications travel or are stored, regardless of whether they are a direct provider, which could include landlords, maintenance people, and many others who routinely have access to your communications.

The reauthorization of Section 702 also expanded FISA’s already very broad definition of “foreign intelligence” to include counternarcotics: an unacceptable expansion of a national security authority to ordinary crime. Further, it allows the government to use Section 702 powers to vet hopeful immigrants and asylum seekers—a particularly dangerous authority that opens the door for this or future administrations to deny entry to individuals based on their private communications about politics, religion, sexuality, or gender identity.

Americans who care about privacy in the United States are essentially fighting a political battle in which the other side gets to make up the rules, the terrain…and even rewrite the laws of gravity if they want to. Politicians can tell us they want to keep people in the U.S. safe without doing anything to prevent that power from being abused, even if they know it will be. It’s about optics, politics, and security theater, not realistic and balanced claims of safety and privacy. The Abbate memo signals that the FBI is going to work hard to create better optics for itself so that it can continue spying in the future.

No Country Should be Making Speech Rules for the World

It’s a simple proposition: no single country should be able to restrict speech across the entire internet. Any other approach invites a swift race to the bottom for online expression, giving governments and courts in countries with the weakest speech protections carte blanche to edit the internet.

Unfortunately, governments, including democracies that care about the rule of law, too often lose sight of this simple proposition. That’s why EFF, represented by Johnson Winter Slattery, has moved to intervene in support of X (formerly known as Twitter) in its legal challenge to a global takedown order from Australia’s eSafety Commissioner. The Commissioner ordered X and Meta to take down a post with a video of a stabbing in a church. X complied by geo-blocking the post so Australian users couldn’t access it, but it declined to block it elsewhere. The Commissioner asked an Australian court to order a global takedown.

Our intervention calls the court’s attention to the important public interests at stake in this litigation, particularly for internet users who are not parties to the case but will nonetheless be affected by the precedent it sets. A ruling against X is effectively a declaration that an Australian court (or its eSafety Commissioner) can prevent internet users around the world from accessing something online, even if the law in their own country is quite different. In the United States, for example, the First Amendment guarantees that platforms generally have the right to decide what content they will host, and their users have a corollary right to receive it. 

We’ve seen this movie before. In Google v. Equustek, a company used a trade secret claim to persuade a Canadian court to order Google to delete search results linking to sites that contained allegedly infringing goods, not just from Google.ca but from all other Google domains, including Google.com and Google.co.uk. Google appealed, but both the British Columbia Court of Appeal and the Supreme Court of Canada upheld the order. The following year, a U.S. court held the ruling couldn’t be enforced against Google in the United States.

The Australian takedown order also ignores international human rights standards, restricting global access to information without considering less speech-intrusive alternatives. In other words: the Commissioner used a sledgehammer to crack a nut. 

If one court can impose speech-restrictive rules on the entire internet—despite direct conflicts with the laws of foreign jurisdictions as well as international human rights principles—the norms and expectations of all internet users are at risk. We’re glad X is fighting back, and we hope the judge will recognize the eSafety regulator’s demand for what it is—a big step toward unchecked global censorship—and refuse to let Australia set another dangerous precedent.

Free Speech Around the World | EFFector 36.6

Let's gather around the campfire and tell tales of the latest happenings in the fight for privacy and free expression online. Take care in roasting your marshmallows while we share ways to protect your data from political campaigns seeking to target you; seek nominees for our annual EFF Awards; and call for immediate action in the case of activist Alaa Abd El Fattah.

As the fire burns out, know that you can stay up-to-date on these issues with our EFFector newsletter! You can read the full issue here, or subscribe to get the next one in your inbox automatically! You can also listen to the audio version of the newsletter on the Internet Archive, or by clicking the button below:

LISTEN ON YouTube

EFFECTOR 36.6 - Free Speech Around the World

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

What Can Go Wrong When Police Use AI to Write Reports?

Axon—the maker of widely used police body cameras and tasers (and that also keeps trying to arm drones)—has a new product: AI that will write police reports for officers. Draft One is a generative large language model system that reportedly takes audio from body-worn cameras and converts it into a narrative police report that officers can then edit and submit after an incident. Axon bills this product as the ultimate time-saver for police departments hoping to get officers out from behind their desks. But this technology could present new issues for those who encounter police, and especially for marginalized communities already subject to a disproportionate share of police interactions in the United States.

Responsibility and the Codification of (Intended or Otherwise) Inaccuracies

We’ve seen it before. Grainy and shaky police body-worn camera video in which an arresting officer shouts, “Stop resisting!” This phrase can lead to greater use of force by officers or come with enhanced criminal charges.  Sometimes, these shouts may be justified. But as we’ve seen time and again, the narrative of someone resisting arrest may be a misrepresentation. Integrating AI into narratives of police encounters might make an already complicated system even more ripe for abuse.

The public should be skeptical of a language algorithm’s ability to accurately process and distinguish between the wide range of languages, dialects, vernacular, idioms, and slang people use. As we’ve learned from watching content moderation develop online, software may have a passable ability to capture words, but it often struggles with context and meaning. In an often tense setting such as a traffic stop, AI mistaking a metaphorical statement for a literal claim could fundamentally change how a police report is interpreted.

Moreover, as with all so-called artificial intelligence taking over consequential tasks and decision-making, the technology has the power to obscure human agency. Police officers who deliberately misstate or exaggerate events to shape the narrative available in body camera footage now have even more of a veneer of plausible deniability with AI-generated police reports. If police were to be caught in a lie concerning what’s in the report, an officer might be able to say that they did not lie: the AI simply mistranscribed what was happening in the chaotic video.

It’s also unclear how this technology will work in action. If the officer says aloud in a body camera video, “the suspect has a gun,” how would that translate into the software’s narrative final product? Would it interpret that by saying “I [the officer] saw the suspect produce a weapon” or “The suspect was armed”? Or would it just report what the officer said: “I [the officer] said aloud that the suspect has a gun”? Interpretation matters, and the differences between them could have catastrophic consequences for defendants in court.

Review, Transparency, and Audits

The issue of review, auditing, and transparency raises a number of questions. Although Draft One allows officers to edit reports, how will it ensure that officers are adequately reviewing for accuracy rather than rubber-stamping the AI-generated version? After all, police have been known to arrest people based on the results of a match by face recognition technology without any follow-up investigation—contrary to vendors’ insistence that such results should be used as an investigative lead and not a positive identification.

Moreover, if the AI-generated report is incorrect, can we trust police will contradict that version of events if it's in their interest to maintain inaccuracies? On the flip side, might AI report writing go the way of AI-enhanced body cameras? In other words, if the report consistently produces a narrative from audio that police do not like, will they edit it, scrap it, or discontinue using the software altogether?

And what of external reviewers’ ability to access these reports? Given police departments’ overly intense secrecy, combined with a frequent failure to comply with public records laws, how will the public, or any external agency, be able to independently verify or audit these AI-assisted reports? And how will external reviewers know which portions of the report were generated by AI versus a human?

Police reports, skewed and biased as they often are, codify the police department’s memory. They reveal not necessarily what happened during a specific incident, but what police imagined to have happened, in good faith or not. Policing, with its legal power to kill, detain, or ultimately deny people’s freedom, is too powerful an institution to outsource its memory-making to technologies in a way that makes officers immune to critique, transparency, or accountability.

Speaking Freely : Nompilo Simanje

Nompilo Simanje is a lawyer by profession and is the Africa Advocacy and Partnerships Lead at the International Press Institute. She leads the IPI Africa Program, which monitors and collects data on press freedom threats and violations across the continent, including threats to journalists’ safety and gendered attacks against journalists both online and offline, to inform evidence-based advocacy. Nompilo is an expert on the intersection of technology, the law, and human rights. She has years of experience in advocacy and capacity building aimed at promoting media freedom, freedom of expression, access to information, and the right to privacy. She also currently serves on the Advisory Board of the Global Forum on Cyber Expertise. Simanje is an alumna of the Open Internet for Democracy Leaders Program and the US State Department IVLP Program on Promoting Cybersecurity.

This interview has been edited for length and clarity.

York: What does free expression mean to you? 

For me, free expression or free speech is the capacity for one to be able to communicate their views and their opinions without any fear or without thinking that there might be some reprisals or repercussions for freely engaging on any conversation or any issue which might be personal, but also even on any issue of public interest. 

What are some of the qualities that have made you passionate about free speech?

Being someone who works in the civil society sector, I think when I look at free speech and free expression, I view it as an avenue for the realization of several other rights. One key thing for me is that free expression encourages interactive dialogue, it encourages public dialogue, which is very important. Especially for democracy, but also for transparency and accountability. Being based in Africa, we are always having conversations around corruption, around accountability by government actors and public officials. And I feel that free expression is a vehicle for that, because it allows people to be able to question those that hold power and to criticize certain conduct by people that are in power. Those are some of the qualities that I feel are very important for me when I think about free expression. It enables transparency and accountability, but also holding those in power to account, which is something I believe is very important for democracies in Africa. 

So you work all around the African continent. Broadly speaking, what are some of the biggest online threats you’re seeing today? 

The digital age has been quite a revolutionary development, especially when you think about free expression. And I always talk about this when I engage on the topic of digital rights, but it has opened the avenue for people to communicate across boundaries, across borders, across countries, but, at the same time—in terms of the impact of threats and risks—they become equally huge as well. As part of the work that I have been doing, there are a few key things that I’ve seen online. One would be the issue of legislation—that countries have increased or upscaled their regulation of the online space. And one of the biggest threats for me has been lawfare, seeing how countries have been implementing old and new laws to undermine free expression online. For example, cybercrime laws or even existing criminal law code or penal codes. So I’ve seen that increasingly happening in Africa. 

Other key things that come to mind are online harassment, which is also happening in various forms. Just sometime last year at the 77th Session of the ACHPR (African Commission on Human and Peoples' Rights) we hosted a side event on the online safety of female journalists in Africa. And there were so many cases shared about how female journalists are facing online harassment. One big issue discussed was targeted disinformation, where individuals spread false information about a certain individual as a way of discrediting them or undermining them or just attempting to silence them and ensure that they don’t communicate freely online. But online harassment also sometimes takes the form of doxxing, where personal details are shared online. Someone’s address. Someone’s email. And people are mobilized to attack that person. I’ve seen all those cases happening, and I feel that online harassment, especially towards female journalists and politicians, continues to be one of the biggest threats to free expression in the region. In addition, of course, to what state actors are doing.

I think also, generally, what I’m seeing as part of the regulation aspect is sometimes even the suspension of news websites, where journalists are using those platforms—you know, like podcasts, Twitter spaces—to freely express themselves. So this increase in regulation is one of the key things I feel continues to threaten online expression, particularly in the region.

You also work globally, you serve on a couple of advisory boards, and I’m curious, coming from an African perspective, how you see things like the Cybercrime Treaty or other international developments impacting the nations that you work in? 

It’s a brilliant question because the Ad Hoc Committee for the UN Cybercrime Treaty just recently met. I think one of the aspects I’ve noticed is that sometimes African civil society actors are not meaningfully participating in global processes. And as a result, they don’t get to share their experiences and reflect on how some developments at the global level will impact the region.

Just taking on the example you shared about the UN Cybercrime Treaty: as part of my role at IPI, we actually submitted a letter to the Ad Hoc Committee with about 49 other civil society actors within Africa, highlighting to the committee that if this treaty is enacted in the way it is currently crafted, with wide scope in terms of the crimes and minimal human rights safeguards, it would actually undermine free expression. And this was informed by our experiences with cybercrime laws in the region. We have seen how some authoritarian governments in the region have been using cybercrime laws. So imagine having a global treaty or a global cybercrime convention: it can be a tool for other authoritarian governments to justify some of their conduct which has been targeted at undermining free expression. Some of the examples include criminalizing inciting public violence or criminalizing publishing falsehoods. We have seen that consistently in several countries and how those laws have been used to undermine expression. I definitely think that whenever there are global engagements about conventions that can undermine fundamental rights, it’s very important for Africa to be represented, particularly civil society, because civil society is there to promote human rights and ensure that human rights are safeguarded.

Also, there have been other key discussions happening, for example, with the open-ended working group on ICTs. We’ve had conversations about cyber capacity-building in the region and how that would look for Africa, where internet penetration is not at its highest and there are already digital divides that keep everyone from being able to freely express themselves online. I think all those deliberations need to be taken into account and they need to be contextualized. My opinion is that when I look at global processes and I think about Africa, I always feel that it’s important for civil society actors and key stakeholders to contribute meaningfully to those processes, but also for us to contextualize some of those discussions and deliberate on how they will potentially impact us. Even when I think about the Global Digital Compact and all the issues the Compact seeks to address, we also need to contextualize them with our experiences with countries in the region which have ongoing conflicts and with countries in the region that are led by military regimes—especially in West Africa. All those issues need to be taken into account when we deliberate about global conventions or global policies. So that’s how I’ve been approaching these conversations around the global process: trying to contextualize them based on what’s happening in the region and what our experiences have been with similar legislation and policies.

I’m also really curious, has your work touched on issues of content moderation? 

Yes, but not broadly, because I think our interaction with the platforms has been quite minimal. But, yes, we have engaged platforms before. I’ll give you an example of Somalia. There’ve been so many reported cases by our partners at the Somali Journalists Syndicate where individual accounts of journalists have been suspended, permanently suspended, and sometimes taken down, simply because political sympathizers of the government consistently report those accounts for expressing dissenting views. Or state actors have reached out to the platforms and asked them to intervene and suspend either pages or individual accounts. So we’ve had conversations with the platforms and we have issued public statements to highlight that, as far as content moderation is concerned, it is very important for the platforms to be transparent about requests that they’re receiving from governments, and also to be deliberate as far as media freedom is concerned, especially where it relates to news or content that has been disseminated by media outlets or by pages and accounts used by journalists. Because in some countries you see governments consistently trying to ensure that journalists or media outlets cannot fully utilize the online space. So that’s the angle from which we have interacted with the platforms as far as content moderation is concerned: ensuring that as they undertake their work they prioritize media freedom, they prioritize journalists, and they understand the operating context, that there are countries that are quite authoritarian where dissenting voices are being targeted. So we always try to engage the platforms whenever we get an opportunity, to raise awareness where platforms are suspending accounts or taking down content when such content genuinely constitutes protected expression.

York: Did you have any formative experiences that helped shape your views on freedom of expression? 

Funny story actually. When I was in high school I was in certain positions of leadership as a head girl in my high school, but also serving in Junior Parliament. We had this institution put on by the Youth Council where young people in high school can form a shadow Parliament representing different constituencies across the country. I happened to be a part of that in high school. So, of course, that meant being in public spaces, and also generally my identity being known outside my circles. So what that also meant was that it opened an avenue for me to be targeted by trolls online. 

At some point when I was in high school people posted some defamatory, false information about me on an online platform. And over the years I’ve seen that post still there, still in existence. When that happened, I was in high school, I was still a child. But I was interacting on Facebook, you know, we have used Facebook for so many years, that’s the platform I think so many of us have been most familiar with from the time we were still kids. When this post was put up it was posted through a certain page that was a tabloid of sorts. And no one knew who was behind that page, no one knew who was the administrator of that page. What that meant for me was there was no recourse. Because I didn’t even know who was behind this post, who posted this defamatory and false information about me. 

I think from there it really triggered an interest in me about regulation of free expression online. How do you approach issues around anonymity and how far can we go in terms of protecting free expression online in instances where, indeed, rights of other people are also being undermined? It really helped to shape my thoughts around regulation of social media, regulation of content online. So I think, for me, the position even in terms of the work I’ve continued to do in my adult life around digital rights literacy, I’ve really tried to emphasize a digital citizenship where the key focus is really to ensure that we can freely express, but we need to ensure the rights of others. Which is why I strongly condemn hate speech. Which is why I strongly condemn targeted attacks, for instance, on female politicians and female journalists. Because I know that while we can freely express ourselves, there are certain limitations or boundaries that we shouldn’t cross. And I think I learned that from experiencing that targeted attack on me online. 

York: Is there anything I haven’t touched on yet that you’d like to talk about? 

I’d like to maybe just speak briefly about the implications of free expression being undermined, especially in the online space. And I’m emphasizing this because we are in the digital age, where the online space has really provided a platform for the full realization of so many fundamental rights. So one of the key things I’ve seen is the increase in self-censorship. For example, when individuals are being arrested over their tweets and Facebook posts and news websites are being suspended, people begin to self-censor. But there’s also limited participation in public dialogue. We have so many elections happening in 2024, and we’ve had recent elections happen in the region as well. Nigeria was a big election. DRC was another big election. What I’ve been seeing is really limited participation, especially by high-risk groups like women and LGBTQI communities, especially where they’ve been targeted through legislation, as in Uganda. So there’s been limited participation and interactive dialogue in the region because of all these various developments that have been happening.

Also, one aspect that comes to mind for me is the correlation between free expression and freedom of assembly and association. Because we are also interacting with groups and other like-minded people in the online space. So while we are freely expressing, the online space is also a platform for assembly and association. And some people are also being robbed of that experience, of freely associating online, because of the threats or the attacks that have been targeting free expression. I think it’s also important for Africa to think about these implications—that when you’re targeting free expression, you’re also targeting other fundamental rights. And I think that’s quite important for me to emphasize as part of this conversation. 

York: Who is your free speech hero? Someone who has really inspired you? 

I haven’t really thought about that actually! I don’t think I have a specific person in mind, but I generally just appreciate everyone who freely expresses their mind, especially on Twitter, because Twitter can be quite brutal at times. But there are several individuals that I look at and really admire for their tenacity in continuing to engage on the platforms even when they’re constantly being targeted. I won’t mention a specific person, but I think, from a Zimbabwean perspective, I would highlight that I’ve seen several female politicians in Zimbabwe being targeted. Actually, I will mention, there’s a female politician in Zimbabwe, Fadzayi Mahere, she’s also an advocate. I’ll mention her as a free speech hero. Because every time I speak about online attacks or online gender-based violence in digital rights trainings, I always mention her. That’s because I’ve seen how she has been able to stand against so many coordinated attacks from a political front and from a personal front. Just to highlight that last year she published a video which had been circulating and trending online about a case where police had allegedly assaulted a woman who had been carrying a child on her back. And she tweeted about that and she was actually arrested, charged, and convicted for, I think, “publishing falsehoods”; there’s a provision in the criminal law code that I think is something like “publishing falsehoods to undermine public authority or the police service.” So I definitely think she is a press freedom hero. Her story is quite an interesting story to follow in terms of her experiences in Zimbabwe as a young lawyer and as a politician, and a female politician at that.

Podcast Episode: Building a Tactile Internet

Blind and low-vision people have experienced remarkable gains in information literacy because of digital technologies, like being able to access an online library offering more than 1.2 million books that can be translated into text-to-speech or digital Braille. But it can be a lot harder to come by an accessible map of a neighborhood they want to visit, or any simple diagram, due to limited availability of tactile graphics equipment, design inaccessibility, and publishing practices.

(You can also find this episode on the Internet Archive and on YouTube.)

Chancey Fleet wants a technological future that’s more organically attuned to people’s needs, which requires including people with disabilities in every step of the development and deployment process. She speaks with EFF’s Cindy Cohn and Jason Kelley about building an internet that’s just and useful for all, and why this must include giving blind and low-vision people the discretion to decide when and how to engage artificial intelligence tools to solve accessibility problems and surmount barriers. 

In this episode you’ll learn about: 

  • The importance of creating an internet that’s not text-only, but that incorporates tactile images and other technology to give everyone a richer, more fulfilling experience. 
  • Why AI-powered visual description apps still need human auditing. 
  • How inclusiveness in tech development is always a work in progress. 
  • Why we must prepare people with the self-confidence, literacy, and low-tech skills they need to get everything they can out of even the most optimally designed technology. 
  • Making it easier for everyone to travel the two-way street between enjoyment and productivity online. 

Chancey Fleet’s writing, organizing and advocacy explores how cloud-connected accessibility tools benefit and harm, empower and expose communities of disability. She is the Assistive Technology Coordinator at the New York Public Library’s Andrew Heiskell Braille and Talking Book Library, where she founded and maintains the Dimensions Project, a free open lab for the exploration and creation of accessible images, models and data representations through tactile graphics, 3D models and nonvisual approaches to coding, CAD and “visual” arts. She is a former fellow and current affiliate-in-residence at Data & Society; she is president of the National Federation of the Blind’s Assistive Technology Trainers Division; and she was recognized as a 2017 Library Journal Mover and Shaker. 

 What do you think of “How to Fix the Internet?” Share your feedback here. 

Transcript

CHANCEY FLEET
The fact is, as I see it, that if you are presented with what seems, on a quick read, like good-enough alt text, you're unlikely to do much labor to make it better, more nuanced, or more complete. What I've already noticed is blind people in droves dumping AI-generated descriptions of personal, sentimental images onto social media, and there is a certain hyper-normative quality to the language. Any scene that contains a child or a dog is heartwarming. Any sunset or sunrise is vibrant. Anything with a couch and a lamp is calm or cozy. Idiosyncrasies are left by the wayside.

Unflattering little aspects of an image are often unremarked upon, and I feel like I'm being served some Ikea pressboard of reality. And it is so much better than anything that we've had before on demand, without having to involve a sighted human being. And it's good enough to mail, kind of like a Hallmark card. But do I want the totality of digital description online to slide into this hyper-normative, serene, anodyne description? I do not. I think that we need to do something about it.

CINDY COHN
That's Chancey Fleet describing one of the problems that has arisen as AI is increasingly used in assistive technologies. 

I’m Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I’m Jason Kelley, EFF’s Activism Director. This is our podcast, How to Fix the Internet.

CINDY COHN
On this show, we're trying to fix the internet – or at least trying to envision what the world could look like if we start to get things right online. At EFF we spend a lot of time pointing out the way things could go wrong – and jumping into the fight when they DO go wrong. But this show is about optimism, hope and bright ideas for the future.

According to a National Health Interview Survey from 2018, more than 32 million Americans reported that they had vision loss, including blindness. And as our population continues to age, this number only increases. And a big part of fixing the internet means fixing it so that it works properly for everyone who needs and wants to use it – blind, sighted, and everyone in between.

JASON KELLEY
Our guest today is Chancey Fleet. She is the Assistive Technology Coordinator for the New York Public Library, where she teaches people how to use assistive technology to make their lives easier and more accessible. She's also the president of the Assistive Technology Trainers Division of the National Federation of the Blind.

CINDY COHN
We started our conversation as we often do – by asking Chancey what the world could be like if we started getting it right for blind and low vision people. 

CHANCEY FLEET
The unifying feature of rightness for blind and low vision folks is that we encounter a digital commons that plays to our strengths, and that means that it's easy for us to find information that we can access and understand. That might mean that web content always has semantic structure that includes things like headings for navigation. 

But it also includes things that we don't have much of right now, like a non-visual way to access maps and diagrams and images, because of course, the internet hasn't been in text only mode for the rest of us for a really long time.

I think getting the internet right also means that we're able to find each other and build community because we're a really low incidence disability. So odds are your colleague, your neighbor, your family members aren't blind or low-vision, and so we really have to learn and produce knowledge and circulate knowledge with each other. And when the internet gets it right, that's something that's easy for us to do. 

CINDY COHN
I think that's so right. And it's honestly consistent with, I think, what every community wants, right? I mean, the Internet's highest and best use is to connect us to the people we wanna be connected to. And the way that it works best is if the people who are the users of it, the people who are relying on it have, not just a voice, but a role in how this works.

I've heard you talk about that in the context of what you call ‘ghostwritten code.’ Do you wanna explain what that is? Am I right? I think that's one of the things that has concerned you.

CHANCEY FLEET
Yeah, you are right. A lot of people who work in design and development are used to thinking of blind and disabled people in terms of user stories and personas, and they may know on paper what the web content accessibility guidelines, for instance, say that a blind or low vision user or a keyboard-only user, or a switch user needs. The problems crop up when they interpret the concrete aspects of those guidelines without having a lived experience that leads them to understand usability in the real world.

I can give you one example. A few years ago, Google rolled out a transcribe feature within Google Translate, which I was personally super excited about. And by the way, I'm a refreshable Braille user, which means I use a Braille display with my iPhone. And if you were running VoiceOver, the screen reader for iPhone, when you launched the transcribe feature, it actually scolded you that it would not proceed, that it would not transcribe, until you plugged in headphones, because well-meaning developers and designers thought, well, VoiceOver users have phones that talk, and if those phones are talking, it's going to ruin the transcription, so we'll just prevent that from happening. They didn't know about me. They didn't know about refreshable Braille users or users that might have another way to use VoiceOver that didn't involve speech out loud.

And so that, I guess you could call it a bug, I would call it a service denial, was around for a few weeks until our community communicated back about it, and if there had been blind people in the room or Braille users in the room, that would've never happened.

JASON KELLEY
I think this will be really interesting and useful for the designers at EFF who think a lot in user personas and also about accessibility. And I think just hearing what happens when you get it wrong, and how simple the mistake can be, is really useful for folks to think about inclusion, and also just how essential it is to make sure there's more in-depth testing and personas, as you're saying.

I wanna talk a little bit about the variety of things you brought up in your opening salvo, which I think we're gonna cover a lot of. But one of the points you mentioned was, or maybe you didn't say it this way in the opening, but you've written about it, and talked about it, which is tactile graphics and something that's called the problem of image poverty online.

And that basically, as you mentioned, the internet is a primarily text-based experience for blind and low-vision users. But there are these tools that, in a better future, will be more accessible, both available and usable and effective. And I wonder if you could talk about some of those tools like tablets and 3D printers and things like that.

CHANCEY FLEET
So it's wild to me the way that our access to information as blind folks has evolved given the tools that we've had. So, since the eighties or nineties we've had Braille embossers that are also capable of creating tactile graphics, which is a fancy way to say raised drawings.

A graphics-capable embosser can emboss up to a hundred dots per inch. So if you look at it visually, it's a bit pixelated, but it approaches the limits of tactile perception. And in this way, we can experience media that includes maybe braille in the form of labels, but also different line types, dotted lines, dashed lines, textured infills.

Tactile design is a little bit different from visual design because our perceptual acuity is lower. It's good to scale things up. And it's good to declutter items. We may separate layers of information out to separate graphics. If Braille were print, it would be a thirty-six point font, so we use abbreviations liberally when we need to squeeze some braille onto an image.

And of course, we can't use color to communicate anything semantic. So when the idea of a red line or a blue line goes away we start thinking about a solid line versus a dashed or dotted line. When we think about a pie chart, we think about maybe textures or labels in place of colors. But what's interesting to me is that although tactile graphics equipment has been on the market since at least the eighties, probably someone will come along and correct me that it's even sooner than that.

Most of that equipment is on the wrong side of an institutional locked door, so it belongs to a disability services office in a university. It belongs to the makers of standardized tests. It belongs to publishers. I've often heard my library patrons say something along the lines of, oh yeah, there was a graphics embosser in my school, but I never got to touch it, I never got to use it. 

Sometimes the software that's used to produce tactile graphics is, in itself, inaccessible. And so I think blind people have experienced pretty remarkable gains in general in regard to our information literacy because of digital technologies and the internet. For example, I can go to Bookshare.org, which is an online library for people with print disabilities and have my choice of a million books right now.

And those can automatically be translated to text-to-speech or to digital braille. But if I want a map of the neighborhood that I'm going to visit tomorrow, or if I want a glimpse of how electoral races play out, that can be really hard to come by. And I think it is a combination of the limited availability of tactile graphics equipment, inaccessibility of design and publishing practices for tactile graphics, and then this sort of vicious circular lack of demand that happens when people don't have access. 

When I ask most blind people, they'll say that they've maybe encountered two or three tactile graphics in the past year, maybe less. Um, a lot of us got more than that during our K-12 instruction. But what I find, at least for myself, is that when tactile graphics are so strongly associated with standardized testing and homework and never associated with my own curiosity or fun or playfulness or exploration, for a long time, that actually dampened down my desire to experience tactile graphics.

And so most of us would say probably, if I can be so bold as to think that I speak for the community for a second, most of us would say that yes, we have the right to an accessible web. Yes, we have the right to digital text. I think far fewer of us are comfortable saying, or understand the power of saying, we also have a right to images. And so in the best possible version of the internet that I imagine, we have three things. We have tactile graphics equipment that is bought more frequently, and so there are economies of scale and the prices come down. We have tactile design and graphics design programs that are more accessible than what's on the market right now. And critically, we have enough access to tactile graphics online that people can find the kind of information that engages and compels them. And within 10 years or so, people are saying, we don't live in a text-only world, images aren't inherently visual, they are spatial, and we have a right to them.

JASON KELLEY
I read a piece that you had written about the importance of data visualizations during the pandemic, and how important it was for that flatten-the-curve graph to be able to be seen or, in this case, touched by as many people as possible. That really struck me. But I also love this idea that we shouldn't have to get these tools only because they're necessary, but also because people deserve to be able to enjoy the experience of the internet.

CHANCEY FLEET
Right, and you never know when enjoyment is going to lead to something productive or when something productive you're doing spins out into enjoyment. Somebody sent me a book of tactile origami diagrams. It's a four volume book with maybe 40 models in it, and I've been working through them all. I can do almost all of them now, and it's really hard as a blind person to go online and find origami instructions that make any sense from an accessibility perspective.

There is a wonderful website called AccessOrigami.com. Lindy Vandermeer out of South Africa does great descriptive origami instruction. So it's all text directing you step by step by step. But the thing is, I'm a spatial thinker. I'm what you might think of as a visual thinker, and so I can get more out of a diagram that's showing me where to flip dot A to dot B, then I can in reading three paragraphs. It's faster, it's more fluid, it's more fun. And so I treasure this book and unfortunately every other blind person I show it to also treasures it and can't have it 'cause I've got one copy. And I just imagine a world in which, when there's a diagram on screen, we can use some kind of process to re-render it in a more optimal format for tactile exploration. That might mean AI or machine learning, and we can talk a little bit about that later. But a lot of what we learn about. What we're good at, what we enjoy, want, what we want more of in life. You know, we do find online these days, and I want to be able to dive into those moments of curiosity and interest without having to first engineer a seven step plan to get access to whatever it is that's on my screen.

JASON KELLEY
Let’s pause for just a moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

And now back to our conversation with Chancey Fleet.

CINDY COHN
So let's talk a little bit about AI and I'd love to hear your perspective on where AI is gonna be helpful and where we ought to be cautious.

CHANCEY FLEET
So if you are blind and reasonably online, and you have a smartphone, and you're somebody that's comfortable enough with your smartphone that you download apps on a discretionary basis, there's a good chance that you've heard of a new feature in the app Be My Eyes, called Be My AI: a describer powered by ChatGPT with computer vision.

You aim your camera at something, wait a few seconds, and a fairly rich description comes back. It's more detailed and nuanced than anything that AI or machine learning has delivered before, and so it strikes a lot of us as transformational and/or uncanny, and it allows us to grab glimpses of what I would call a hypothesized visual world. Because as we all know, these AIs make up stories out of whole cloth, include details that aren't there, and skip details that to the average human observer would be obviously relevant. So I can know that the description I'm getting is probably not prioritized and detailed in quite the same way that a human describer would approach it.

So what's interesting to me is that, since interconnected blind folks have such a dense social graph, we are all sort of diving into this together and advising each other on what's going well and what's not. And I think that a lot of us are deriving authentic value from this experience, as bounded by caveats as it is. At the same time, I fear that when this technology scales, which it will if other forces don't counteract it, it may become a convincing enough business case that organizations and institutions can skip human authoring of alt text to describe images online and substitute these rich-seeming descriptions generated by an AI, even if that's done in such a way that a human auditor can go in and make changes.

The fact is, as I see it, that if you are presented with what seems, on a quick read, like good-enough alt text, you're unlikely to do much labor to make it better, more nuanced, or more complete.

CINDY COHN
I think what I hear in the answer is it can be an augment to the humans doing the describing, um, but not a replacement for, and that's where the, you know, but it's cheaper part comes in. Right. And I think keeping our North Star on the, you know, using these systems in ways that assist people rather than replace people is coming up over and over again in the conversations around AI, and I'm hearing it in what you're saying as well.

CHANCEY FLEET
Absolutely, and let me say as a positive it is both my due diligence as an educator and my personal joy to experiment with moments where AI technologies can make it easier for me to find information or learn things. For example, if I wanna get a quick visual description of the Bluebird trains that the MTA used to run, that's a question that I might ask AI.

I never would've bothered a human being with it. It was not central enough. But if I'm reading something and I want a quick visual description to fill it in, I'll do that.

I also really love using AI tools to look up questions about different artistic or architectural styles, or even questions about code.

I'm studying Python right now, and when I go to look for information online on these subjects, often I'm finding websites that are riddled with problems: a lack of semantic structure, graphics that are totally unlabeled, carousels that are hard for screen reader users to navigate. And so one really powerful and compelling thing that current conversational AI offers is that it lives in a text box, and it won't violate the conventions of a chat by throwing a bunch of unwanted visual or structural clutter my way.

And when I just want an answer and I'm willing to grant myself that I'm going to have to live with the consequences of trusting that answer, or do some lateral reference, do some double checking, it can be worth my while. And in the best possible world moving forward, I'd like us to be able to harness that efficiency and that facility that conversational AI has for avoiding the hyper visual in a way that empowers us, but doesn't foreclose opportunities to find things out in other ways.

CINDY COHN
As you're describing it, I'm envisioning, you know, my drunk friend, right? They might do okay telling me stuff, but I wouldn't rely on them for stuff that really matters.

CHANCEY FLEET
Exactly.

CINDY COHN
You've also talked a little bit about the role of data privacy and consent and the special concerns that blind people have around some of the technologies that are offered to them. But making sure that consent is real. I'd love for you to talk a little bit about that.

CHANCEY FLEET
When AI is deployed on the server side to fix accessibility problems, in lieu of baking accessibility in from the ground up in a website or an application, that does a couple of things. It avoids changing the culture around accessibility at the customer company itself. It also involves an ongoing cost and technology debt to the overlay company that an organization is using, and it builds in the need for ongoing supervision of the AI. So in a lot of ways, I think that that's not optimal. What I think is optimal is for developers and designers, perhaps, to use AI tools to flag issues in need of human remediation, and to use AI tools for education to speed up their immersion into accessibility and usability concepts.

You know, AI can be used to make short work of things that used to take a little bit more time. When it comes to deploying AI tools to solve accessibility problems, I think that that is a suite of tools that is best left to the discretion of the user. So we can decide, on the user side, for example, when to turn on a browser extension that tries to make those remediations. Because when they're made for us at scale, that doesn't happen with our consent and it can have a lot of collateral impacts that organizations might not expect.

JASON KELLEY
The points you're making are about being involved in different parts of the process. Right. It's clear that the people who use these tools, or who these tools are actually designed for, should be able to decide when to deploy them.

And it's also clear that they should be more involved, as you've mentioned a few times, in the creation. And I wanted to talk a little bit about that idea of inclusion because it's sort of how we get to a place where consent is actually, truly given.

And it's also how we get to a place where these tools that are created do what they're supposed to do, and the companies that you're describing, um, build the web the way that it should be built, so that people can access it.

We have to have inclusion in every step of the process to get to that place where all of these tools, and the web, and everything we're talking about actually works for everyone. Is inclusion, sort of across the spectrum, a solution that you see as well?

CHANCEY FLEET
I would say that inclusion is never a solution because inclusion is a practice and a process. It's something that's never done. It's never achieved, and it's never comprehensive and perfect. 

What I see as my role as an educator, when it comes to inclusion, is meeting people where they are: trying to raise awareness among library patrons and everyone else I serve about what technologies are available and the costs and benefits of each, and helping people road-map a path from their goals and their intentions to achieving the things that they want to do.

And so I think of inclusion as sort of a guiding frame and a constant set of questions that I ask myself about what I'm noticing, what I may not be noticing, what I might be missing, who's coming in, for example, for tech lessons, versus who we're not reaching. And how the goals of the people I serve might differ from my goals for them.

And it's all kind of a spider web of things that add up to inclusion as far as I'm concerned.

CINDY COHN
I like that framing of inclusion as kind of a process rather than an end state. And I think that framing is good because I think it really moves away from the checkbox kind of approach to things like, you know, did we get the disabled person in the room? Check! 

Everybody has different goals and different things that work for them and there isn't just one box that can be checked for a lot of these kinds of things.

CHANCEY FLEET
Blind library patrons and blind people in general are as diverse as any library patrons or people in general. And that impacts our literacy levels. It impacts our thoughts and the thoughts of our loved ones about disability. It impacts our educational attainment, and especially for those of us who lose our vision later in life, it impacts how we interact with systems and services.

I would venture to say that at this time in the U.S, if you lose your vision as an adult, or if you grow up blind in a school system, the quality of literacy and travel and independent living instruction you receive is heavily dependent on the quality of the systems and infrastructure around you, who you know, and who you know who is primed to be a disability advocate or a mentor.

And I see such different outcomes when it comes to technology based on those things. And so we can't talk about a best possible world in the technology sphere without also imagining a world that prepares people with the self-confidence, the literacy skills, and the supports for developing low tech skills that are necessary to get everything that one can get out of even the most optimally designed technology. 

A step-by-step app for walking directions can be as perfect as it gets. But if the person that you are equipping with that app is afraid to step out of their front door and start moving their cane back and forth, listening to the traffic, and trusting their reflexes and their instincts, because they have never been taught how to trust those things, the app won't be used, and there will be people who are unreached. So technology can only succeed to the extent that the people using it are set up to succeed, and I think that that is where a lot of our toughest work resides.

CINDY COHN
We're trying to fix the internet here, but the internet rests on the rest of the world. And if the rest of the world isn't setting people up for success, technology can't swoop in and solve a lot of these problems.

It needs to rest upon a solid foundation. I think that's just a wonderful place to close, because all of us sit on top of what John Perry Barlow called meatspace, right? And if meatspace isn't serving us, then the digital world can't solve the problems that are not digital.

JASON KELLEY
I would have loved to talk to Chancey for another hour. That was fantastic.

CINDY COHN
Yeah, that was a really fun conversation. And I have to say, I just love the idea of the internet going tactile. Right now it's all very visual, and maps and other things are pretty hard for people with low vision or blindness to navigate. But we have the technology, some of the tools that she talked about, that really could make the internet something you could feel as well as see.

JASON KELLEY
Yeah, I didn't know before talking to her that these tools even existed. And when you hear about it, you're like, oh, of course they do. But it was clear from what she said that a lot of people don't have access to them. The tools are relatively new and they need to be spread out more. But when that happens, hopefully that does happen, it sort of then requires us to rethink how the internet is built in some ways, in terms of the hierarchy of text, what kinds of graphics exist, and protocols for converting that information into tactile experiences for people.

CINDY COHN
Yeah, I think so. And it does sit upon something that she mentioned. I mean, she said these machines exist and have existed for a long time, but they're mainly in libraries or other places where people can't use them in their everyday lives. And I think one of the things that we ended with in the conversation was really important, which is that we're all sitting upon a society that doesn't make a lot of these tools as widely available as they need to be.

And, you know, the good news in that is that the hard problem has been solved, which is how do you build a machine like this? The problem that we ought to be able to address as a society is how do we make it available much more broadly? I use this quote a lot, but you know, the future is here, it's just not evenly distributed. That seemed really, really clear in the way that she talked about these tools, which most blind people have used once or twice in school, but then don't get to use and make part of their everyday lives.

JASON KELLEY
Yeah. The way I heard this was that we have this problem solved sort of at an institutional level, where you can access these tools at an institution, but not at the individual level. And it's really helpful, and optimistic, to hear that they could, in theory, exist in people's homes if we can just get that to happen. And I think what was really rare for this conversation is that, like you said, we actually do have the technology to do these things. A lot of times we're talking about what we need to improve or change about the technology, how that technology doesn't quite exist or will always be problematic. In this case, sure, the technology can always get better, but it sounds like we're actually at a point where we have a lot of the problems solved, whether it's using tactile tablets or creating ways for people to use technology to guide each other through places, whether that's through a person, through Be My Eyes, or even in some cases an AI, with the Be My AI version of that.

But we just haven't gotten to the point where those things work for everyone, and where everyone has a level of technological proficiency that lets them use those things. And that's something that clearly we'll need to work on in the future.

CINDY COHN
Yeah, but she also pointed out the work that needs to be done about making sure that we're continuing to build the tech that actually serves this community. She talked about ghostwritten code and things like that, where people who don't have the experience are writing things and building things based upon what they think people who are blind might want. So, on the one hand, there's good news, because a lot of really good technology already exists. But I think she also didn't let us off the hook as a society about something that we see all across the board, which is that we need to have the direct input of the people who are going to be using the tools in the building of the tools, lest we end up on a whole other path, with things other than what people actually need. And, you know, this is one of those old sayings: the lessons will be repeated until they are learned. Over and over again, we find that the need for people who are building technologies to not just talk to the people who are going to be using them, but really embed those people in the development, is one of the ways we stay true to our goal, which is to build stuff that will actually be useful to people.

JASON KELLEY
Thanks for joining us for this episode of How to Fix the Internet.

If you have feedback, we'd love to hear from you. Visit EFF.org/podcast and click on listener feedback. While you're there, you can become a member, donate, maybe pick up some limited edition merch like t-shirts or buttons or stickers, and just see what's happening in digital rights this week and every week.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. In this episode, you heard Probably Shouldn't by J.Lang, commonGround by airtone, and Klaus by Skill_Borrower.

Our theme music is by Nat Keefe of BeatMower with Reed Mathis.

And How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

We’ll see you next time.

I’m Jason Kelley…

CINDY COHN
And I’m Cindy Cohn.

Add Bluetooth to the Long List of Border Surveillance Technologies

A new report from news outlet NOTUS shows that at least two Texas counties along the U.S.-Mexico border have purchased a product that would allow law enforcement to track devices that emit Bluetooth signals, including cell phones, smartwatches, wireless earbuds, and car entertainment systems. This incredibly personal mode of tracking is the latest layer of surveillance infrastructure along the U.S.-Mexico border, a region where communities are not only exposed to a tremendous amount of constant monitoring, but which also serves as a laboratory where law enforcement agencies at all levels of government test new technologies.

The product now being deployed in Texas, called TraffiCatch, can detect wifi and Bluetooth signals in moving cars to track them. Webb County, which includes Laredo, has had TraffiCatch technology since at least 2019, according to GovSpend procurement data. Val Verde County, which includes Del Rio, approved the technology in 2022. 

This data collection is possible because all Bluetooth devices regularly broadcast a Bluetooth Device Address. This address can be either a public address or a random address. Public addresses don’t change for the lifetime of the device, making them the easiest to track. Random addresses are more common and have multiple levels of privacy, but for the most part change regularly (this is the case with most modern smartphones and products like AirTags). Bluetooth products with random addresses would be hard to track for a device that hasn’t paired with them. But if the tracked person is also carrying a Bluetooth device that has a public address, or if tracking devices are placed close enough to each other that a device is seen multiple times before it changes its address, random addresses could be correlated with that person over long periods of time.
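
To make that correlation risk concrete, here is a minimal sketch in Python of how sightings logged by roadside sensors could tie a rotating random address to a stable public address traveling alongside it. All of the sensor names, addresses, and timestamps below are invented for illustration; nothing here reflects how TraffiCatch actually works.

```python
from collections import defaultdict

# Hypothetical sighting log: (timestamp in seconds, sensor id, address, address type).
# A real system would feed this from passive Bluetooth scanners; these values are invented.
sightings = [
    (0,   "sensor_A", "AA:AA:AA:AA:AA:01", "public"),  # e.g., an older car stereo
    (2,   "sensor_A", "5D:13:9F:02:77:41", "random"),  # e.g., a phone, before rotating
    (900, "sensor_B", "AA:AA:AA:AA:AA:01", "public"),
    (903, "sensor_B", "7C:A2:01:EE:10:59", "random"),  # the same phone, after rotating
]

# Bucket sightings by sensor and one-minute window; devices seen together
# in the same bucket are candidate co-travelers.
windows = defaultdict(list)
for ts, sensor, addr, kind in sightings:
    windows[(sensor, ts // 60)].append((addr, kind))

# Link each rotating random address to any stable public address it rides with.
linked = defaultdict(set)
for group in windows.values():
    publics = [a for a, k in group if k == "public"]
    randoms = [a for a, k in group if k == "random"]
    for pub in publics:
        for rnd in randoms:
            linked[pub].add(rnd)

for pub, rnds in sorted(linked.items()):
    print(f"{pub} seen alongside rotating addresses: {sorted(rnds)}")
```

Even though the phone's random address rotates between the two sensors, both of its addresses end up linked to the stable public address of the car stereo it travels with, and therefore to the same vehicle and person.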

It is unclear whether TraffiCatch is doing this sort of advanced analysis and correlation, and how effective it would be at tracking most modern Bluetooth devices.

According to TraffiCatch’s manufacturer, Jenoptik, this data derived from Bluetooth is also combined with data collected from automated license plate readers, another form of vehicle tracking technology placed along roads and highways by federal, state, and local law enforcement throughout the Texas border region. ALPRs are a well-understood technology for vehicle tracking, but the addition of Bluetooth tracking may allow law enforcement to track individuals even if they are using different vehicles.
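
As a rough sketch of what that kind of data fusion could look like, the hypothetical Python below joins plate reads with Bluetooth sightings on the device address, flagging a single device seen with more than one plate. The record format and all values are assumptions for illustration; Jenoptik's actual pipeline is not public.

```python
from collections import defaultdict

# Hypothetical co-sighting records from a roadside unit that logs both a
# license plate (via ALPR) and the Bluetooth addresses detected nearby.
records = [
    {"plate": "ABC1234", "bt_addr": "AA:AA:AA:AA:AA:01", "seen": "Mon 08:02"},
    {"plate": "ABC1234", "bt_addr": "AA:AA:AA:AA:AA:01", "seen": "Mon 17:40"},
    {"plate": "XYZ9876", "bt_addr": "AA:AA:AA:AA:AA:01", "seen": "Tue 09:15"},
]

# Pivot on the Bluetooth address: the same earbuds or phone appearing with
# two different plates suggests one person moving between two vehicles.
plates_by_device = defaultdict(set)
for r in records:
    plates_by_device[r["bt_addr"]].add(r["plate"])

for device, plates in plates_by_device.items():
    if len(plates) > 1:
        print(f"Device {device} observed in multiple vehicles: {sorted(plates)}")
```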

This mirrors what we already know about how Immigration and Customs Enforcement (ICE) has been using cell-site simulators (CSSs). Also known as Stingrays or IMSI catchers, CSSs are devices that masquerade as legitimate cell-phone towers, tricking phones within a certain radius into connecting to the device rather than a tower. In 2023, the Department of Homeland Security’s Inspector General released a troubling report detailing how federal agencies like ICE, its subcomponent Homeland Security Investigations (HSI), and the Secret Service have conducted surveillance using CSSs without proper authorization and in violation of the law. Specifically, the Inspector General found that these agencies did not adhere to federal privacy policy governing the use of CSSs and failed to obtain the special orders required before using these types of surveillance devices.

Law enforcement agencies along the border can pour money into overlapping systems of surveillance that monitor entire communities living along the border thanks in part to Operation Stonegarden (OPSG), a Department of Homeland Security (DHS) grant program, which rewards state and local police for collaborating in border security initiatives. DHS doled out $90 million in OPSG funding in 2023, $37 million of which went to Texas agencies. These programs are especially alarming to human rights advocates due to recent legislation passed in Texas to allow local and state law enforcement to take immigration enforcement into their own hands.

As a ubiquitous wireless interface to many of our personal devices and even our vehicles, Bluetooth is a large and notoriously insecure attack surface for hacks and exploits. And as TraffiCatch demonstrates, even when your device’s Bluetooth tech isn’t being actively hacked, it can broadcast uniquely identifiable information that makes you a target for tracking. This is one of the many ways that surveillance, and the distrust it breeds in the public over technology and tech companies, hinders progress. Hands-free communication in cars is a fantastic modern innovation. But the fact that it comes at the cost of opening a whole society up to surveillance is a detriment to all.

EFF Zine on Surveillance Tech at the Southern Border Shines Light on Ever-Growing Spy Network

By Karen Gullo
May 6, 2024
Guide Features Border Tech Photos, Locations, and Explanation of Capabilities

SAN FRANCISCO—Sensor towers controlled by AI, drones launched from truck-bed catapults, vehicle-tracking devices disguised as traffic cones—all are part of an arsenal of technologies that comprise the expanding U.S. surveillance strategy along the U.S.-Mexico border, revealed in a new EFF zine for advocates, journalists, academics, researchers, humanitarian aid workers, and borderland residents.

Formally released today and available for download online in English and Spanish, “Surveillance Technology at the U.S.-Mexico Border” is a 36-page comprehensive guide to identifying the growing system of surveillance towers, aerial systems, and roadside camera networks deployed by U.S. law enforcement agencies along the Southern border, allowing for the real-time tracking of people and vehicles.

The devices and towers—some hidden, camouflaged, or moveable—can be found in heavily populated urban areas, small towns, fields, farmland, highways, dirt roads, and deserts in California, Arizona, New Mexico, and Texas.

The zine grew out of work by EFF’s border surveillance team, which involved meetings with immigrant rights groups and journalists, research into government procurement documents, and trips to the border. The team located, studied, and documented spy tech deployed and monitored by the Department of Homeland Security (DHS), Customs and Border Protection (CBP), Immigration and Customs Enforcement (ICE), National Guard, and Drug Enforcement Administration (DEA), often working in collaboration with local law enforcement agencies.

“Our team learned that while many people had an abstract understanding of the so-called ‘virtual wall,’ the actual physical infrastructure was largely unknown to them,” said EFF Director of Investigations Dave Maass. “In some cases, people had seen surveillance towers, but mistook them for cell phone towers, or they’d seen an aerostat flying in the sky and not known it was part of the U.S. border strategy.

“That's why we put together this zine; it serves as a field guide to spotting and identifying the large range of technologies that are becoming so ubiquitous that they are almost invisible,” said Maass.

The zine also includes a copy of EFF’s pocket guide to crossing the U.S. border and protecting information on smartphones, computers, and other digital devices.

The zine is available for republication and remixing under EFF’s Creative Commons Attribution License and features photography by Colter Thomas and Dugan Meyer, whose exhibit “Infrastructures of Control”—which incorporates some of EFF’s border research—opened in April at the University of Arizona. EFF has previously released a gallery of images of border surveillance that are available for publications to reuse, as well as a living map of known surveillance towers that make up the so-called “virtual wall.”

To download the zine:
https://www.eff.org/pages/zine-surveillance-technology-us-mexico-border

For more on border surveillance:
https://www.eff.org/issues/border-surveillance-technology

For EFF’s searchable Atlas of Surveillance:
https://atlasofsurveillance.org/ 

 

Contact: Dave Maass, Director of Investigations

CCTV Cambridge, Addressing Digital Equity in Massachusetts

Here at EFF, digital equity is something that we advocate for, and we are always thrilled when we hear that a member of the Electronic Frontier Alliance is advocating for it as well. Simply put, digital equity is the condition in which everyone has access to the technology that allows them to participate in society, whether in rural America or the inner cities—both places where big ISPs don’t find it profitable to make such an investment. EFF has long advocated for affordable, accessible, future-proof internet access for all. I recently spoke with EFA member CCTV Cambridge, which has partnered with the Massachusetts Broadband Institute to tackle this issue and address the digital divide in their state:

How did the partnership with the Massachusetts Broadband Institute come about, and what does it entail?

Mass Broadband Institute and Mass Hire Metro North are the key funding partners. We were moving forward with lifting up digital equity and saw an opportunity to apply for this funding, which is going to several communities in the Metro North area. So, this collaboration was generated in Cambridge for the partners in this digital equity work. Key program activities will entail hiring and training “Digital Navigators” to be placed in the Cambridge Public Library and Cambridge Public Schools, working in partnership with navigators at CCTV and Just A Start. CCTV will employ a coordinator as part of the project, who will serve residents and coordinate the digital navigators across partners to build community, skills, and consistency in support for residents. Regular meetings will be coordinated for Digital Navigators across the city to share best practices, discuss challenging cases, exchange community resources, and measure impact from data collection. These efforts will align with regional initiatives supported through the Mass Broadband Institute Digital Navigator coalition.

What is CCTV Cambridge’s approach to digital equity and why is it an important issue?

CCTV’s approach to digital equity has always been about people over tech. We really see the Digital Navigators as more like digital social workers than IT people, in the sense that technology is required to be a fully civically engaged human, someone who is connected to your community and family, someone who can have a sense of well-being and safety in the world. We really feel that what digital equity means is not just being able to use the tools, but to have access to the tools that make your life better. You really can’t operate in an equal way in the world without access to technology: you can’t make a doctor’s appointment, talk to your grandkids on Zoom, you can’t even park your car without an app! You can’t be civically engaged without access to tech. We risk marginalizing a bunch of folks if we don’t, as a community, bring them into digital equity work. We’re community media, it’s in our name, and digital equity is the responsibility of the community. It’s not okay to leave people behind.

It’s amazing to see organizations like CCTV Cambridge making a difference in the community, what do you envision as the results of having the Digital Navigators?

Hopefully we’re going to increase community and civic engagement in Cambridge, particularly amongst people who might not have the loudest voice. We’re going to reach people we haven't reached in the past, including people who speak languages other than English and haven’t had exposure to community media. It’s a really great opportunity for intergenerational work which is also a really important community building tool.

How can people both locally in Massachusetts and across the country plug-in and support?

People everywhere are welcomed and invited to support this work through donations, which you can do by visiting cctvcambridge.org! When the applications open for the Digital Navigators, share in your networks with people you think would love to do this work; spread the word on social media and follow us on all platforms @cctvcambridge! 

The U.S. House Version of KOSA: Still a Censorship Bill

A companion bill to the Kids Online Safety Act (KOSA) was introduced in the House last month. Despite minor changes, it suffers from the same fundamental flaws as its Senate counterpart. At its core, this bill is still an unconstitutional censorship bill that restricts protected online speech and gives the government the power to target services and content it finds objectionable. Here, we break down why the House version of KOSA is just as dangerous as the Senate version, and why it’s crucial to continue opposing it. 

Core First Amendment Problems Persist

EFF has consistently opposed KOSA because, through several iterations of the Senate bill, it continues to open the door to government control over what speech content can be shared and accessed online. Our concern, which we share with others, is that the bill’s broad and vague provisions will force platforms to censor legally protected content and impose age-verification requirements. These age-verification requirements will drive away both minors and adults who either lack the proper ID or who value their privacy and anonymity.

The House version of KOSA fails to resolve these fundamental censorship problems.

TAKE ACTION

THE "KIDS ONLINE SAFETY ACT" ISN'T SAFE FOR KIDS OR ADULTS

Dangers for Everyone, Especially Young People

One of the key concerns with KOSA has been its potential to harm the very population it aims to protect—young people. KOSA’s broad censorship requirements would limit minors’ access to critical information and resources, including educational content, social support groups, and other forms of legitimate speech. This version does not alleviate that concern. For example, this version of KOSA could still: 

  • Suppress search results for young people seeking sexual health and reproductive rights information; 
  • Block content relevant to the history of oppressed groups, such as the history of slavery in the U.S; 
  • Stifle youth activists across the political spectrum by preventing them from connecting and advocating on their platforms; and 
  • Block young people seeking help for mental health or addiction problems from accessing resources and support. 

As thousands of young people have told us, these concerns are just the tip of the iceberg. Under the guise of protecting them, KOSA will limit minors’ ability to self-explore, to develop new ideas and interests, to become civically engaged citizens, and to seek community and support for the very harms KOSA ostensibly aims to prevent. 

What’s Different About the House Version?

Although there are some changes in the House version of KOSA, they do little to address the fundamental First Amendment problems with the bill. We review the key changes here.

1. Duty of Care Provision   

We’ve been vocal about our opposition to KOSA’s “duty of care” censorship provision. This section outlines a wide collection of harms to minors that platforms have a duty to prevent and mitigate by exercising “reasonable care in the creation and implementation of any design feature” of their product. The list includes self-harm, suicide, eating disorders, substance abuse, depression, anxiety, and bullying, among others. As we’ve explained before, this provision would cause platforms to broadly over-censor the internet so they don’t get sued for hosting otherwise legal content that the government—in this case the FTC—claims is harmful.

The House version of KOSA retains this chilling effect, but limits the "duty of care" requirement to what it calls “high impact online companies,” or those with at least $2.5 billion in annual revenue or more than 150 million global monthly active users. So while the Senate version requires all “covered platforms” to exercise reasonable care to prevent the specific harms to minors, the House version only assigns that duty of care to the biggest platforms.

While this is a small improvement, its protective effect is ultimately insignificant. After all, the vast majority of online speech happens on just a handful of platforms, and those platforms—including Meta, Snap, X, WhatsApp, and TikTok—will still have to uphold the duty of care under this version of KOSA. Smaller platforms, meanwhile, still face demanding obligations under KOSA’s other sections. When government enforcers want to control content on smaller websites or apps, they can just use another provision of KOSA—such as one that allows them to file suits based on failures in a platform’s design—to target the same protected content.

2. Tiered Knowledge Standard 

Because KOSA’s obligations apply specifically to users who are minors, there are open questions as to how enforcement would work. How certain would a platform need to be that a user is, in fact, a minor before KOSA liability attaches? The Senate version of the bill has one answer for all covered platforms: obligations attach when a platform has “actual knowledge” or “knowledge fairly implied on the basis of objective circumstances” that a user is a minor. This is a broad, vague standard that would not require evidence that a platform actually knows a user is a minor for it to be subject to liability. 

The House version of KOSA limits this slightly by creating a tiered knowledge standard under which platforms are required to have different levels of knowledge based on the platform’s size. Under this new standard, the largest platforms—or "high impact online companies”—are required to carry out KOSA’s provisions with respect to users they “knew or should have known” are minors. This, like the Senate version’s standard, would not require proof that a platform actually knows a user is a minor for it to be held liable. Mid-sized platforms would be held to a slightly less stringent standard, and the smallest platforms would only be liable where they have actual knowledge that a user was under 17 years old. 

While, again, this change is a slight improvement over the Senate’s version, the narrowing effect is small. The knowledge standard is still problematically vague, for one, and where platforms cannot clearly decipher when they will be liable, they are likely to implement dangerous age verification measures anyway to avoid KOSA’s punitive effects.

Most importantly, even if the House’s tinkering slightly reduces liability for the smallest platforms, this version of the bill still incentivizes large and mid-size platforms—which, again, host the vast majority of all online speech—to implement age verification systems that will threaten the right to anonymity and create serious privacy and security risks for all users.

3. Exclusion for Non-Interactive Platforms

The House bill excludes online platforms where chat, comments, or interactivity is not the predominant purpose of the service. This could potentially narrow the number of platforms subject to KOSA's enforcement by reducing some of the burden on websites that aren't primarily focused on interaction.

However, this exclusion is legally problematic because its unclear language will again leave platforms guessing as to whether it applies to them. For instance, does Instagram fall into this category or would image-sharing be its predominant purpose? What about TikTok, which has a mix of content-sharing and interactivity? This ambiguity could lead to inconsistent enforcement and legal challenges—the mere threat of which tend to chill online speech.

4. Definition of Compulsive Usage 

Finally, the House version of KOSA also updates the definition of “compulsive usage” from any “repetitive behavior reasonably likely to cause psychological distress” to any “repetitive behavior reasonably likely to cause a mental health disorder,” which the bill defines as anything listed in the Diagnostic and Statistical Manual of Mental Disorders, or DSM. This change pays lip service to concerns we and many others have expressed that KOSA is overbroad, and will be used by state attorneys general to prosecute platforms for hosting any speech they deem harmful to minors. 

However, simply invoking the name of the healthcare professionals’ handbook does not make up for the lack of scientific evidence that minors’ technology use causes mental health disorders. This definition of compulsive usage still leaves the door open for states to go after any platform that is claimed to have been a factor in any child’s anxiety or depression diagnosis. 

KOSA Remains a Censorship Threat 

Despite some changes, the House version of KOSA retains its fundamental constitutional flaws.  It encourages government-directed censorship, dangerous digital age verification, and overbroad content restrictions on all internet users, and further harms young people by limiting their access to critical information and resources. 

Lawmakers know this bill is controversial. Some of its proponents have recently taken steps to attach KOSA as an amendment to the five-year reauthorization of the Federal Aviation Administration, the last "must-pass" legislation until the fall. This would effectively bypass public discussion of the House version. Just last month Congress attached another contentious, potentially unconstitutional bill to unrelated legislation, by including a bill banning TikTok inside of a foreign aid package. Legislation of this magnitude deserves to pass—or fail—on its own merits. 

We continue to oppose KOSA—in its House and Senate forms—and urge legislators to instead seek alternatives, such as a comprehensive federal privacy law, that protect young people without infringing on the First Amendment rights of everyone who relies on the internet.

TAKE ACTION

THE "KIDS ONLINE SAFETY ACT" ISN'T SAFE FOR KIDS OR ADULTS

On World Press Freedom Day (and Every Day), We Fight for an Open Internet

Today marks World Press Freedom Day, an annual celebration instituted by the United Nations in 1993 to raise awareness of press freedom and remind governments of their duties under Article 19 of the Universal Declaration of Human Rights. This year, the day is dedicated to the importance of journalism and freedom of expression in the context of the current global environmental crisis.

Journalists everywhere face challenges in reporting on climate change and other environmental issues. These challenges are myriad: lawsuits, intimidation, arrests, and disinformation campaigns. For instance, journalists and human rights campaigners attending the COP28 Summit held in Dubai last autumn faced surveillance and intimidation. The Committee to Protect Journalists (CPJ) has documented arrests of environmental journalists in Iran and Venezuela, among other countries. And in 2022, a Guardian journalist was murdered while on the job in the Brazilian Amazon.

The threats faced by journalists are the same as those faced by ordinary internet users around the world. According to CPJ, there are 320 journalists jailed worldwide for doing their job. And ranked among the top jailers of journalists last year were China, Myanmar, Belarus, Russia, Vietnam, Israel, and Iran; countries in which internet users also face censorship, intimidation, and in some cases, arrest. 

On this World Press Freedom Day, we honor the journalists, human rights defenders, and internet users fighting for a better world. EFF will continue to fight for the right to freedom of expression and a free and open internet for every internet user, everywhere.



Biden Signed the TikTok Ban. What's Next for TikTok Users?

Over the last month, lawmakers moved swiftly to pass legislation that would effectively ban TikTok in the United States, eventually including it in a foreign aid package that was signed by President Biden. The impact of this legislation isn’t entirely clear yet, but what is clear: whether TikTok is banned or sold to new owners, millions of people in the U.S. will no longer be able to get information and communicate with each other as they presently do. 

What Happens Next?

At the moment, TikTok isn’t “banned.” The law gives ByteDance 270 days to divest TikTok before the ban would take effect, which would be on January 19th, 2025. In the meantime, we expect courts to determine that the law is unconstitutional. Though there is no lawsuit yet, one on behalf of TikTok itself is imminent.

There are three possible outcomes. If the law is struck down, as it should be, nothing will change. If ByteDance divests TikTok by selling it, then the platform would still likely be usable. However, there’s no telling whether the app’s new owners would change its functionality, its algorithms, or other aspects of the company. As we’ve seen with other platforms, a change in ownership can result in significant changes that could impact its audience in unexpected ways. In fact, that’s one of the given reasons to force the sale: so TikTok will serve different content to users, specifically when it comes to Chinese propaganda and misinformation. This is despite the fact that it has been well-established law for almost 60 years that U.S. people have a First Amendment right to receive foreign propaganda. 

Lastly, if ByteDance refuses to sell, users in the U.S. will likely see it disappear from app stores sometime between now and that January 19, 2025 deadline. 

How Will the Ban Be Implemented? 

The law limits liability to intermediaries—entities that “provide services to distribute, maintain, or update” TikTok by means of a marketplace, or that provide internet hosting services to enable the app’s distribution, maintenance, or updating. The law also makes intermediaries responsible for its implementation. 

The law explicitly denies the Attorney General the authority to enforce it against an individual user of a foreign adversary controlled application, so users themselves cannot be held liable for continuing to use the application, if they can access it.

Will I Be Able to Download or Use TikTok If ByteDance Doesn’t Sell? 

It’s possible some U.S. users will find routes around the ban. But the vast majority will probably not, significantly shifting the platform's user base and content. If ByteDance itself assists in the distribution of the app, it could also be found liable, so even if U.S. users continue to use the platform, the company’s ability to moderate and operate the app in the U.S. would likely be impacted. Bottom line: for a period of time after January 19, it’s possible that the app would be usable, but it’s unlikely to be the same platform—or even a very functional one in the U.S.—for very long.

Until now, the United States has championed the free flow of information around the world as a fundamental democratic principle and called out other nations when they have shut down internet access or banned social media apps and other online communications tools. In doing so, the U.S. has deemed restrictions on the free flow of information to be undemocratic. Enacting this legislation has undermined this longstanding democratic principle, and it has undermined the U.S. government’s moral authority to call out other nations when they shut down internet access or ban social media apps and other online communications tools.

There are a few reasons legislators have given to ban TikTok. One is to change the type of content on the app—a clear First Amendment violation. The second is to protect data privacy. Our lawmakers should work to protect data privacy, but this was the wrong approach. They should prevent any company—regardless of where it is based—from collecting massive amounts of our detailed personal data, which is then made available to data brokers, U.S. government agencies, and even foreign adversaries. They should solve the real problem of out-of-control privacy invasions by enacting comprehensive consumer data privacy legislation. Instead, as happens far too often, our government’s actions are vastly overreaching while also deeply underserving the public. 

Speaking Freely: Rebecca MacKinnon

*This interview has been edited for length and clarity.

Rebecca MacKinnon is Vice President, Global Advocacy at the Wikimedia Foundation, the non-profit that hosts Wikipedia. Author of Consent of the Networked: The Worldwide Struggle For Internet Freedom (2012), she is co-founder of the citizen media network Global Voices, and  founding director of Ranking Digital Rights, a research and advocacy program at New America. From 1998-2004 she was CNN’s Bureau Chief in Beijing and Tokyo. She has taught at the University of Hong Kong and the University of Pennsylvania, and held fellowships at Harvard, Princeton, and the University of California. She holds an AB magna cum laude in Government from Harvard and was a Fulbright scholar in Taiwan.

David Greene: Can you introduce yourself and give us a bit of your background? 

My name is Rebecca MacKinnon, and I am presently the Vice President for Global Advocacy at the Wikimedia Foundation, but I’ve worn quite a number of hats working in the digital rights space for almost twenty years. I was co-founder of Global Voices, which at the time we called the International Bloggers’ Network, and which is about to hit its twentieth anniversary. I was one of the founding board members of the Global Network Initiative, GNI. I wrote a book called “Consent of the Networked: The Worldwide Struggle for Internet Freedom,” which came out more than a decade ago. It didn’t sell very well, but apparently it gets assigned in classes still, so I still hear about it. I was also a founding member of Ranking Digital Rights, which is a ranking of the big tech companies and the biggest telecommunications companies on the extent to which they are or are not protecting their users’ freedom of expression and privacy. I left that in 2021 and ended up at the Wikimedia Foundation, and it’s never a dull moment!

Greene: And you were a journalist before all of this, right? 

Yes, I worked for CNN for twelve years: in Beijing for nine years, where I ended up Bureau Chief and Correspondent, and in Tokyo for almost three years, where I was also Bureau Chief and Correspondent. That’s also where I first experienced the magic of the global internet in a journalistic context, and where I experienced the internet arriving in China and the government immediately trying to figure out both how to take advantage of it economically and how to control it enough that the Communist Party would not lose power.

Greene: At what point did it become apparent that the internet would bring both benefits and threats to freedom of expression?

At the beginning I think the media, industry, policymakers, kind of everybody, assumed—you know, this is like in 1995 when the internet first showed up commercially in China—everybody assumed “there’s no way the Chinese Communist Party can survive this,” and we were all a bit naive. And our reporting ended up influencing naive policies in that regard, and perhaps a naive understanding of things like Facebook revolutions and things like that in the activism world. It really began to be apparent just how authoritarianism was adapting to the internet, and starting to adapt the internet, and how China was really Exhibit A for how that was playing out and could play out globally. That became really apparent in the mid-to-late 2000s, as I was studying Chinese blogging communities and how the government was controlling private companies, private platforms, to carry out censorship and surveillance work.

Greene: And it didn’t stop with China, did it? 

It sure didn’t! And in the book I wrote I only had a chapter on China and talked about how if the trajectory the Western democratic world was on just kind of continued in a straight line we were going to go more in China’s direction unless policymakers, the private sector, and everyone else took responsibility for making sure that the internet would actually support human rights. 

Greene: It’s easy to talk about authoritarian threats, but we see some of the same concerns in democratic countries as well. 

We’re all just one bad election away from tyranny, aren’t we? This is again why when we’re talking to lawmakers, not only do we ask them to apply a Wikipedia test—if this law is going to break Wikipedia, then it’s a bad law—but also, how will this stand up to a bad election? If you think a law is going to be good for protecting children or fighting disinformation under the current dominant political paradigm, what happens if someone who has no respect for the rule of law, no respect for democratic institutions or processes ends up in power? And what will they do with that law? 

Greene: This happens so much within disinformation, for example, and I always think of it in terms of, what power are we giving the state? Is it a good thing that the state has this power? Well, let’s switch things up and go to the basics. What does free speech mean to you? 

People talk about is it free as in speech? Is it free as in beer? What does “free” mean? I am very much in the camp that freedom of expression needs to be considered in the context of human rights. So my free speech does not give me freedom to advocate for a pogrom against the neighboring neighborhood. That is violating the rights of other people. And I actually think that Article 19 of the Declaration of Human Rights—it may not be perfect—but it gives us a really good framework to think about what is the context of freedom of expression or free speech as situated with other rights? And how do we make sure that, if there are going to be limits on freedom of expression to prevent me from calling for a pogrom of my neighbors, then the limitations placed on my speech are necessary and proportionate and cannot be abused? And therefore it’s very important that whoever is imposing those limits is being held accountable, that their actions are sufficiently transparent, and that any entity’s actions to limit my speech—whether it’s a government or an internet service provider—that I understand who has the power to limit my speech or limit what I can know or limit what I can access, so that I can even know what I don’t know! So that I know what is being kept from me. I also know who has the authority to restrict my speech, under what circumstances, so that I know what I can do to hold them accountable. That is the essence of freedom of speech within human rights and where power is held appropriately accountable. 

Greene: How do you think about the ways that your speech might harm people? 

You can think of it in terms of the other rights in the Universal Declaration. There’s the right to privacy. There’s the right to assembly. There’s the right to life! So for me to advocate for people in that building over there to go kill people in that other building, that’s violating a number of rights that I should not be able to violate. But what’s complicated, when we’re talking about rules and rights and laws and enforcement of laws and governance online, is that we somehow think it can be more straightforward and black and white than governance in the physical world is. So what do we consider to be appropriate law enforcement in the city of San Francisco? It’s a hot topic! And reasonable people of a whole variety of backgrounds reasonably disagree and will never agree! So you can’t just fix crime in San Francisco the way you fix the television. And nobody in their right mind would expect that you should expect that, right? But somehow in the internet space there’s so much policy conversation around making the internet safe for children. But nobody’s running around saying, “let’s make San Francisco safe for children in the same way.” Because they know that if you want San Francisco to be 100% safe for children, you’re going to be Pyongyang, North Korea! 

Greene: Do you think that’s because with technology some people just feel like there’s this techno-solutionism? 

Yeah, there’s this magical thinking. I have family members who think that because I can fix something with their tech settings I can perform magic. I think because it’s new, because it’s a little bit mystifying for many people, and because I think we’re still in the very early stages of people thinking about governance of digital spaces and digital activities as an extension of real world activities. And they’re thinking more about, okay, it’s like a car we need to put seatbelts on.

Greene: I’ve heard that from regulators many times. Does the fact that the internet is speech, does that make it different from cars? 

Yeah, although increasingly cars are becoming more like the internet! Because a car is essentially a smartphone that can also be a very lethal weapon. And it’s also a surveillance device, it’s also increasingly a device that is a conduit for speech. So actually it’s going the other way!

Greene: I want to talk about misinformation a bit. You’re at Wikimedia, and so, independent of any concern people have about misinformation, Wikipedia is the product and its goal is to be accurate. What do we do with the “problem” of misinformation?

Well, I think it’s important to be clear about what is misinformation and what is disinformation. And deal with them—I mean they overlap, the dividing line can be blurry—but, nonetheless, it’s important to think about both in somewhat different ways. Misinformation being inaccurate information that is not necessarily being spread maliciously with intent to mislead. It might just be, you know, your aunt seeing something on Facebook and being like, “Wow, that’s crazy. I’m going to share it with 25 friends.” And not realizing that they’re misinformed. Whereas disinformation is when someone is spreading lies for a purpose. Whether it’s in an information warfare context where one party in a conflict is trying to convince a population of something about their own government which is false, or whatever it is. Or disinformation about a human rights activist and, say, an affair they allegedly had and why they deserve whatever fate they had… you know, just for example. That’s disinformation. And at the Wikimedia Foundation—just to get a little into the weeds because I think it helps us think about these problems—Wikipedia is a platform whose content is not written by staff of the Wikimedia Foundation. It’s all contributed by volunteers, anybody can be a volunteer. They can go on Wikipedia and contribute to a page or create a page. Whether that content stays, of course, depends on whether the content they’ve added adheres to what constitutes well-sourced, encyclopedic content. There’s a whole hierarchy of people whose job it is to remove content that does not fit the criteria. And one could talk about that for several podcasts. But that process right there is, of course, working to counter misinformation. Because anything that’s not well-sourced—and they have rules about what is a reliable source and what isn’t—will be taken down. So the volunteer Wikipedians, kind of through their daily process of editing and enforcing rules, are working to eliminate as much misinformation as possible. Of course, it’s not perfect.

Greene: [laughing] What do you mean it’s not perfect? It must be perfect!

What is true is a matter of dispute even between scientific journals or credible news sources, or what have you. So there’s lots of debates, and all those debates are in the history tab of every page, which is public, about what source is credible and what the facts are, etc. So this is kind of the self-cleaning oven that’s dealing with misinformation. The human hive mind that’s dealing with this. Disinformation is harder because you have a well-funded state actor who may be encouraging people—not necessarily people who are employed by that actor themselves, but people who are kind of nationalistic and supporters of that government or politician, or people who are just useful idiots—to go on and edit Wikipedia to promote certain narratives. But that’s kind of the least of it. You also, of course, have threats, credible, physical threats against editors who are trying to delete the disinformation, and staff of the Foundation who are trying to support editors in investigating and identifying what is actually a disinformation campaign and support volunteers in addressing that, sometimes with legal support, sometimes with technical support and other support. But people are in jail in one country in particular right now because they were fighting disinformation on the projects in their language. In Belarus, we had people, volunteers, who were jailed for the same reason. We have people who are under threat in Russia, and you have governments who will say, “Wikipedia contains disinformation about our, for example, Special Military Exercise in Ukraine because they’re calling it ‘an invasion’ which is disinformation, so therefore they’re breaking the law against disinformation so we have to threaten them.” So the disinformation piece—fighting it can become very dangerous.

Greene: What I hear is there are threats to freedom of expression in efforts to fight disinformation and, certainly in terms of state actors, those might be malicious. Are there any well-meaning efforts to fight disinformation that also bring serious threats to freedom of expression? 

Yeah, the people who say, “Okay, we should just require the platforms to remove all content that is anything from COVID disinformation to certain images that might falsely present… you know, deepfake images, etc.” Content-focused efforts to fight misinformation and disinformation will result in over-censorship because you can almost never get all the nuance and context right. Humor, satire, critique, scientific reporting on a topic or about disinformation itself or about how so-and-so perpetrated disinformation on X, Y, Z… you have to actually talk about it. But if the platform is required to censor the disinformation you can’t even use that platform to call out disinformation, right? So content-based efforts to fight disinformation go badly and get weaponized. 

Greene: And, as the US Supreme Court has said, there’s actually some social value to the little white lie. 

There can be. There can be. And, again, there’s so many topics on which reasonable people disagree about what the truth is. And if you start saying that certain types of misinformation or disinformation are illegal, you can quickly have a situation where the government is becoming arbiter of the truth in ways that can be very dangerous. Which brings us back to… we’re one bad election away from tyranny.

Greene: In your past at Ranking Digital Rights you looked more at the big corporate actors rather than State actors. How do you see them in terms of freedom of expression—they have their own freedom of expression rights, but there’s also their users—what does that interplay look to you? 

Especially in relation to the disinformation thing, when I was at Ranking Digital Rights we put out a report that also related to regulation. When we’re trying to hold these companies accountable, whether we’re civil society or government, what’s the appropriate approach? The title of the report was, “It’s Not the Content, it’s the Business Model.” Because the issue is not about the fact that, oh, something bad appears on Facebook. It’s how it’s being targeted, how it’s being amplified, how that speech and the engagement around it is being monetized, that’s where most of the harm takes place. And here’s where privacy law would be rather helpful! But no, instead we go after Section 230. We could do a whole other podcast on that, but… I digress. 

I think this is where bringing in international human rights law around freedom of expression is really helpful. Because the US constitutional law, the First Amendment, doesn’t really apply to companies. It just protects the companies from government regulation of their speech. Whereas international human rights law does apply to companies. There’s this framework, The UN Guiding Principles on Business and Human Rights, where nation-states have the ultimate responsibility—duty—to protect human rights, but companies and platforms, whether you’re a nonprofit or a for-profit, have a responsibility to respect human rights. And everybody has a responsibility to provide remedy, redress. So in that context, of course, it doesn’t contradict the First Amendment at all, but it sort of adds another layer to corporate accountability that can be used in a number of ways. And that is being used more actively in the European context. But Article 19 is not just about your freedom of speech, it’s also your freedom of access to information, which is part of it, and your freedom to form an opinion without interference. Which means that if you are being manipulated and you don’t even know it—because you are on this platform that’s monetizing people’s ability to manipulate you—that’s a violation of your freedom of expression under international law. And that’s a problem that companies, platforms of any kind—including if Wikimedia were to allow that to happen, which they don’t—anyone should be held accountable for. 

Greene: Just in terms of the role of the State in this interplay, because you could say that companies should operate within a human rights framing, but then we see different approaches around the world. Is it okay or is it too much power for the state to require them to do that? 

Here’s the problem. If states were perfect in achieving their human rights duties, then we wouldn’t have a problem, and we could totally trust states to regulate companies in our interest and in ways that protect our human rights. But there is no such state. There are some that are further away on the spectrum than others, but they’re all on a spectrum, and nobody is at that position of utopia, and they will never get there. And so, given that all states, in large ways or small, in different ways, are making demands of internet platforms, companies generally, that reasonable numbers of people believe violate their rights, then we need accountability. And holding the state accountable for what it’s demanding of the private sector, making sure that’s transparent and that the state does not have absolute power, is of utmost importance. And then you have situations where a government is just blatantly violating rights, and a company—even a well-meaning company that wants to do the right thing—is just stuck between a rock and a hard place. You can be really transparent about the fact that you’re complying with bad law, but you’re stuck in this place where if you refuse to comply then your employees go to jail. Or other bad things happen. And so what do you do other than just try and let people know? And then the state tells you, “Oh, you can’t tell people because that's a state secret.” So what do you do then? Do you just stop operating? So one can be somewhat sympathetic. Some of the corporate accountability rhetoric has gone a little overboard in not recognizing that if states are failing to do their job, we have a problem.

Greene: What’s the role of either the State or the companies if you have two people and one person is making it hard for the other to speak? Whether through heckling or just creating an environment where the other person doesn’t feel safe speaking? Is there a role for either the State or the companies where you have two peoples’ speech rights butting up against each other? 

We have this in private physical spaces all the time. If you’re at a comedy show and somebody gets up and starts threatening the stand-up comedian, obviously, security throws them out! I think in physical space we have some general ideas about that, that work okay. And that we can apply in virtual space, although it’s very contextual and, again, somebody has to make a decision—whose speech is more important than whose safety? Choices are going to be made. They’re not always going to be, in hindsight, the right choices, because sometimes you have to act really quickly and you don’t know if somebody’s life is in danger or not. Or how dangerous is this person speaking? But you have to err on the side of protecting life and limb. And then you might have realized at the end of the day that wasn’t the right choice. But are you being transparent about what your processes are—what you’re going to do under what circumstances? So people know, okay, well this is really predictable. They said they were going to do x if I did y, and I did y and they did indeed take action, and if I think that they unfairly took action then there’s some way of appealing. That it’s not just completely opaque and unaccountable.

This is a very overly simplistic description of very complex problems, but I’m now working at a platform. Yes, it’s a nonprofit, public interest platform, but our Trust and Safety team are working with volunteers who are enforcing rules and every day—well, I don’t know if it’s every day because they’re the Trust and Safety team so they don’t tell me exactly what’s going on—but there are frequent decisions around people’s safety. And what enables the volunteer community to basically both trust each other enough, and trust the platform operator enough, for the whole thing not to collapse due to mistrust and anger is that you’re being open and transparent enough about what you’re doing and why you’re doing it so that if you did make a mistake there’s a way to address it and be honest about it. 

Greene: So at least at Wikimedia you have the overriding value of truthfulness. At another platform, should they value wanting to preserve places for people who otherwise wouldn’t have places to speak—people who historically or culturally haven’t had the opportunity to speak? How should they handle these instances of people being heckled or shouted down off of a site? From your perspective, how should they respond to that? Should they make an effort to preserve these spaces?

This is where I think in Silicon Valley in particular you often hear this thing that the technology is neutral: “we treat everybody the same.”

Greene: And it’s not true.

Oh, of course it’s not true! But that’s the rhetoric, and it’s held up as being “the right thing.” It’s not a perfect comparison, but being completely blind to the context and the socio-economic and political realities of the human beings you are taking action upon is like operating a public housing system, or whatever, without taking into account at all the socio-economic or ethnic backgrounds of the people for whom you’re making decisions: you’re going to be perpetuating and, most likely, amplifying social injustice. So people who run public housing or universities and so on are quite familiar with this notion that being neutral is actually not neutral; it perpetuates existing social, economic, and political power imbalances. And we’ve found that’s absolutely the case with social media claiming to be neutral. The vulnerable people end up losing out. That’s what the research and the activism have shown.

And, you know, in the Wikimedia community there are debates about this. There are people who have been editing for a long time who say, “we have to be neutral.” But on the other hand, what’s very clear is that the greater the diversity of viewpoints, backgrounds, languages, genders, etc. of the people contributing to an article on a given topic, the better it is. So if you want something to actually have integrity, you can’t just have one type of person working on it. And so there are all kinds of reasons why it’s important as a platform operator that we do everything we can to ensure that this is a welcoming space for people of all backgrounds, so that people who are under threat feel safe contributing to the platform, and not just rich white guys in Northern Europe.

Greene: And at the same time we can’t expect them to be more perfect than the real world, right?

Well, yeah, but you do have to recognize that the real world is the real world and there are these power dynamics going on that you have to take into account and you can decide to amplify them by pretending they don’t exist, or you can work actively to compensate in a manner that is consistent with human rights standards. 

Greene: Okay, one more question for you. Who is your free speech hero and why? 

Wow, that’s a good question; nobody has asked me that before in such a direct way. I really have to say it’s a group of people who set me on the path of caring deeply, for the rest of my life, about free speech. Those are the people in China, most of whom I met when I was a journalist there, who stood up to tell the truth despite tremendous threats like being jailed, or worse. I would often witness the determination, even from very ordinary people, that “I am right, and I need to say this. And I know I’m taking a risk, but I must do it.” My interactions with such people in my twenties, when I was starting out as a journalist in China, set me on this path. And I am grateful to them all, including several who are no longer on this earth, among them Liu Xiaobo, who received the Nobel Peace Prize while in jail, before he died.



Congress Should Just Say No to NO FAKES

There is a lot of anxiety around the use of generative artificial intelligence, some of it justified. But it seems like Congress thinks the highest priority is to protect celebrities – living or dead. Never fear, ghosts of the famous and infamous: the U.S. Senate is on it.

We’ve already explained the problems with the House’s approach, the No AI FRAUD Act. The Senate’s version, the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act, isn’t much better.

Under NO FAKES, any person has the right to sue anyone who has either made, or made available, their “digital replica.” A replica is broadly defined as “a newly-created, computer generated, electronic representation of the image, voice or visual likeness” of a person. The right applies to the person themselves; anyone who has a license to use their image, voice, or likeness; and their heirs for 70 years after the person dies. It’s retroactive, meaning the post-mortem right would apply immediately to the heirs of, say, Prince, Tom Petty, or Michael Jackson, not to mention your grandmother.

Boosters talk a good game about protecting performers and fans from AI scams, but NO FAKES seems more concerned with protecting their bottom line. It expressly describes the new right as a “property right,” which matters because federal intellectual property rights are excluded from Section 230 protections. If courts decide the replica right is a form of intellectual property, NO FAKES will give people the ability to threaten platforms and companies that host allegedly unlawful content, which tend to have deeper pockets than the actual users who create that content. This will incentivize platforms that host our expression to proactively remove anything that might be a “digital replica,” whether its use is legal or not. And while the bill proposes a variety of exclusions for news, satire, biopics, criticism, etc. to limit the impact on free expression, interpreting and applying those exceptions is likely to make a lot of lawyers rich.

This “digital replica” right effectively federalizes—but does not preempt—state laws recognizing the right of publicity. Publicity rights are an offshoot of state privacy law that give a person the right to limit the public use of her name, likeness, or identity for commercial purposes, and a limited version of it makes sense. For example, if Frito-Lay uses AI to deliberately generate a voiceover for an advertisement that sounds like Taylor Swift, she should be able to challenge that use. The same should be true for you or me.

Trouble is, in several states the right of publicity has already expanded well beyond its original boundaries. It was once understood to be limited to a person’s name and likeness, but now it can mean just about anything that “evokes” a person’s identity, such as a phrase associated with a celebrity (like “Here’s Johnny”) or even a cartoonish robot dressed like a celebrity. In some states, your heirs can invoke the right long after you are dead and, presumably, in no position to be embarrassed by any sordid commercial associations, or to worry that anyone might believe you have actually endorsed a product from beyond the grave.

In other words, it’s become a money-making machine that can be used to shut down all kinds of activities and expressive speech. Public figures have brought cases targeting songs, magazine features, and even computer games. As a result, the right of publicity reaches far beyond the realm of misleading advertisements and courts have struggled to develop appropriate limits.

NO FAKES leaves all of that in place and adds a new national layer on top, one that lasts for decades after the person replicated has died. It is entirely divorced from the incentive structure behind intellectual property rights like copyright and patents—presumably no one needs a replica right, much less a post-mortem one, to invest in their own image, voice, or likeness. Instead, it effectively creates a windfall for people with a commercially valuable recent ancestor, even if that value emerges long after they died.

What is worse, NO FAKES doesn’t offer much protection for those who need it most. People who don’t have much bargaining power may agree to broad licenses, not realizing the long-term risks. For example, as Jennifer Rothman has noted, NO FAKES could actually allow a music publisher that had licensed a performer’s “replica right” to sue that performer for using her own image. Savvy commercial players will build licenses into standard contracts, taking advantage of workers who lack bargaining power and leaving the right to linger as a trap only for unwary or small-time creators.

Although NO FAKES leaves the question of Section 230 protection open, it’s been expressly eliminated in the House version, and platforms for user-generated content are likely to over-censor any content that is, or might be, flagged as containing an unauthorized digital replica. At the very least, we expect to see the expansion of fundamentally flawed systems like Content ID that regularly flag lawful content as potentially illegal and chill new creativity that depends on major platforms to reach audiences. The various exceptions in the bill won’t mean much if you have to pay a lawyer to figure out if they apply to you, and then try to persuade a rightsholder to agree.

Performers and others are raising serious concerns. As policymakers look to address those concerns, they must be precise, careful, and practical. NO FAKES doesn’t reflect that care, and its sponsors should go back to the drawing board.

Speaking Freely: Obioma Okonkwo

April 23, 2024 at 15:05

This interview has been edited for clarity and length.

Obioma Okonkwo is a lawyer and human rights advocate. She is currently the Head of Legal at Media Rights Agenda (MRA), a non-governmental organization based in Nigeria whose focus is to promote and defend freedom of expression, press freedom, digital rights, and access to information within Nigeria and across Africa. She is passionate about advancing freedom of expression, media freedom, access to information, and digital rights, and has extensive experience in litigating, researching, advocating, and training around these issues. Obioma is an alumna of the Open Internet for Democracy Leaders Programme, a fellow of the African School of Internet Governance, and a Media Viability Ambassador with the Deutsche Welle Akademie.

 York: What does free speech or free expression mean to you?

In my view, free speech is an intrinsic right that allows citizens, journalists and individuals to express themselves freely without repressive restriction. It is also the ability to speak, be heard, and participate in social life as well as political discussion, and this includes the right to disseminate information and the right to know. Considering my work around press freedom and media rights, I would also say that free speech is when the media can gather and disseminate information to the public without restrictions.

 York: Can you tell me about an experience in your life that helped shape your views on free speech?

An experience that shaped my views on free speech happened in 2013, while I was in university. Some of my schoolmates were involved in a ghastly car accident—as a result of a bad road—which led to their deaths. Students then started an online campaign demanding that the government repair the road and compensate the victims’ families. Due to this campaign, the road was repaired and the victims’ families were compensated. Another instance is the #EndSARS protest, a protest against police brutality and corrupt practices in Nigeria. People were freely expressing their opinions both offline and online on this issue and demanding a reform of the Nigerian Police Force. These incidents helped shape my views on how important the right to free speech is in any given society, considering that it gives everyone an avenue to hold the government accountable, demand justice, and share their views about issues that affect them as individuals or groups.

 York: I know you work a bit on press freedom in Nigeria and across Africa. Can you tell me a bit about the situation for press freedom in the context in which you’re working?

The situation for press freedom in Africa—and particularly in Nigeria—is currently an eyesore. The legal and political environment is becoming repressive toward press freedom and freedom of expression, as governments across the region increasingly behave as authoritarians. They have been making several efforts to gag the media by enacting draconian laws, arresting and arbitrarily detaining journalists, imposing fines, and closing media outlets, among many other actions.

In my country, Nigeria, the government has resorted to using laws like the Cybercrime Act of 2015 and the Criminal Code Act, among others, to silence journalists who are exposing corrupt practices, sharing dissenting views, or holding it accountable to the people. For instance, journalists like Agba Jalingo, Ayodele Samuel, Emmanuel Ojo, and Dare Akogun – to mention just a few – have been arrested, detained, or charged to court under these laws. In the case of Agba Jalingo, he was arrested and detained for over 100 days after he exposed the corrupt practices of the Governor of Cross River, a state in Nigeria.

The case is the same in many African countries, including Benin, Ghana, and Senegal. Journalists are arrested, detained, and sent to court for performing their journalistic duty. Ignace Sossou, a journalist in Benin, was sent to court and imprisoned under the Digital Code for posting the statement of the Minister of Justice on his Facebook account. The reality right now is that governments across the region are at war with press freedom and with journalists, who are purveyors of information.

Although this is what press freedom looks like across the region, civil society organizations are fighting back to protect press freedom and freedom of expression. To create an enabling environment for press freedom, my organization, Media Rights Agenda (MRA), has been making several efforts, such as instituting lawsuits before national and regional courts challenging these draconian laws; providing pro bono legal representation to journalists who are arrested, detained, or charged; and engaging various stakeholders on this issue.

 York: Are you working on the issue of online regulation and can you tell us the situation of online speech in the region?

As the Head of Legal at MRA, I am actively working on the issue of online regulation to ensure that the rights to press freedom, freedom of expression, access to information, and digital rights are promoted and protected online. The region is facing an era of digital authoritarianism, with a crackdown on online speech. In my country, the Nigerian government has made several attempts to regulate the internet or introduce social media bills under the guise of combating cybercrime, hate speech, and mis/disinformation. However, diverse stakeholders – including civil society organizations like mine – have on many occasions fought against these attempts to regulate online speech, because the proposed bills would not only limit freedom of expression, press freedom, and other digital rights; they would also shrink the civic space online, as some of their provisions are overly broad, and governments are known to use such laws arbitrarily to silence dissenting voices and witch-hunt journalists, opposition figures, or individuals.

An example is when diverse stakeholders challenged the National Information Technology Development Agency (NITDA) – the agency charged with creating a framework for the planning and regulation of information technology practices, activities, and systems in Nigeria – over its draft regulation, the “Code of Practice for Interactive Computer Service Platforms/Internet Intermediaries.” They argued that the draft regulation must contain provisions that recognize freedom of expression, privacy, press freedom, and other human rights concerns. Although the agency took some of the stakeholders’ suggestions into consideration, there are still concerns that individuals, activists, and human rights defenders might be surveilled, among other things.

The government of Nigeria relies on laws like the Cybercrime Act, the Criminal Code Act, and many more to stifle online speech. And the Ghanaian government is no different, as it relies on the Electronic Communications Act to suppress freedom of expression and hound critical journalists under the pretense of battling fake news. Countries like Zimbabwe, Sudan, Uganda, and Morocco have also enacted laws to silence dissent and repress citizens’ internet use, especially for expression.

 York: Can you also tell me a little bit more about the landscape for civil society where you work? Are there any creative tactics or strategies from civil society that you work with?

Nigeria is home to a wide variety of civil society organizations (CSOs) and non-governmental organizations (NGOs). The main legislation regulating CSOs consists of federal laws such as the Nigerian Constitution, which guarantees freedom of association, and the Companies and Allied Matters Act (CAMA), which provides every group or association with legal personality.

CSOs in Nigeria face quite a number of legal and political hurdles. For example, CSOs that wish to operate as a company limited by guarantee need the consent of the Attorney-General of the Federation, which may be withheld, while CSOs operating as incorporated trustees must carry out obligations that can be tedious and time-consuming. On several occasions, the Nigerian government has attempted to pressure and even subvert CSOs, and to single out certain CSOs for special adverse treatment. Although CSOs receive foreign funding support, the Nigerian government finds it convenient to berate or criticize them as being “sponsored” by foreign interests, with the underlying suggestion that such organizations are unpatriotic and, by criticizing the government, are being paid to act contrary to Nigeria’s interests.

There are lots of strategies and tactics CSOs are using to address the issues they work on, including issuing press statements, engaging diverse stakeholders, litigation, capacity-building efforts, and advocacy.

 York: Do you have a free expression hero?

Yes, I do. All the critical journalists out there are my free expression heroes. I also consider Julian Assange a free speech hero for his belief in openness and transparency, and for taking personal risks to expose the corrupt acts of the powerful, which is necessary in a democratic society.
