
Speaking Freely: Prasanth Sugathan

By: David Greene
December 13, 2024 at 14:37

Interviewer: David Greene

*This interview has been edited for length and clarity.

Prasanth Sugathan is Legal Director at the Software Freedom Law Center, India (SFLC.in). Prasanth is a lawyer with years of practice in the fields of technology law, intellectual property law, administrative law and constitutional law. He is an engineer turned lawyer and has worked closely with the Free Software community in India. He has appeared in many landmark cases before various Tribunals, High Courts and the Supreme Court of India. He has also deposed before Parliamentary Committees on issues related to the Information Technology Act and Net Neutrality.

David Greene: Why don’t you go ahead and introduce yourself. 

Sugathan: I am Prasanth Sugathan, I am the Legal Director at the Software Freedom Law Center, India. We are a nonprofit organization based out of New Delhi, started in the year 2010. So we’ve been working at this for 14 years now, working mostly in the area of protecting rights of citizens in the digital space in India. We do strategic litigation, policy work, trainings, and capacity building. Those are the areas that we work in. 

Greene: What was your career path? How did you end up at SFLC? 

That’s an interesting story. I am an engineer by training. Then I was interested in free software. I had a startup at one point and I did a law degree along with it. I got interested in free software and got into it full time. Because of this involvement with the free software community, the first time I think I got involved in something related to policy was when there was discussion around software patents. When the patent office came out with a patent manual and there was this discussion about how it could affect the free software community and startups. So that was one discussion I followed, I wrote about it, and one thing led to another and I was called to speak at a seminar in New Delhi. That’s where I met Eben and Mishi from the Software Freedom Law Center. That was before SFLC India was started, but then once Mishi started the organization I joined as a Counsel. It’s been a long relationship. 

Greene: Just in a personal sense, what does freedom of expression mean to you? 

Apart from being a fundamental right, as evident in all the human rights agreements we have, and in the Indian Constitution, freedom of expression is the most basic aspect for a democratic nation. I mean without free speech you cannot have a proper exchange of ideas, which is most important for a democracy. For any citizen to speak what they feel, to communicate their ideas, I think that is most important. As of now the internet is a medium which allows you to do that. So there definitely should be minimum restrictions from the government and other agencies in relation to the free exchange of ideas on this medium.

Greene: Have you had any personal experiences with censorship that have sort of informed or influenced how you feel about free expression? 

When SFLC.IN was started in 2010 our major idea was to support the free software community. But how we got involved in the debates on free speech and privacy on the internet was when, in 2011, the IT Rules were introduced by the government as a draft for discussion and finally notified. This was on the regulation of intermediaries, these online platforms. This was secondary legislation based on the Information Technology Act (IT Act) in India, which is the parent law. So when these discussions happened we got involved in it and then one thing led to another. For example, there was a provision in the IT Act called Section 66-A which criminalized the sending of offensive messages through a computer or other communication devices. It was, ostensibly, introduced to protect women. And the irony was that two women were arrested under this law. That was the first arrest that happened, and it was a case of two women being arrested for the comments that they made about a leader who expired.

This got us working on trying to talk to parliamentarians, trying to talk to other people about how we could maybe change this law. So there were various instances of content being taken down and people being arrested, and it was always done under Section 66-A of the IT Act. We challenged the IT Rules before the Supreme Court. In a judgment in a 2015 case called Shreya Singhal v. Union of India the Supreme Court read down the rules relating to intermediary liability. Under the rules, platforms could be asked to take down content and didn't have much of an option; if they didn't do that, they would lose their safe harbour protection. The Court said it can only be actual knowledge, and what actual knowledge means is if someone gets a court order asking them to take down the content. Or let's say there's direction from the government. These are the only two cases when content could be taken down.

Greene: You’ve lived in India your whole life. Has there ever been a point in your life when you felt your freedom of expression was restricted? 

Currently we are going through such a phase, where you’re careful about what you’re speaking about. There is a lot of concern about what is happening in India currently. This is something we can see mostly impacting people who are associated with civil society. When they are voicing their opinions there is now a kind of fear about how the government sees it, whether they will take any action against you for what you say, and how this could affect your organization. Because when you’re affiliated with an organization it’s not just about yourself. You also need to be careful about how anything that you say could affect the organization and your colleagues. We’ve had many instances of nonprofit organizations and journalists being targeted. So there is a kind of chilling effect when you really don’t want to say something you would otherwise say strongly. There is always a toning down of what you want to say. 

Greene: Are there any situations where you think it’s appropriate for governments to regulate online speech? 

You don’t have an absolute right to free speech under India’s Constitution. There can be restrictions as stated under Article 19(2) of the Constitution. There can be reasonable restrictions by the government, for instance, for something that could lead to violence or something which could lead to a riot between communities. So mostly if you look at hate speech on the net which could lead to a violent situation or riots between communities, that could be a case where maybe the government could intervene. And I would even say those are cases where platforms should intervene. We have seen a lot of hate speech on the net during India’s current elections as there have been different phases of elections going on for close to two months. We have seen that happening with not just political leaders but with many supporters of political parties publishing content on various platforms which aren’t really in the nature of hate speech but which could potentially create situations where you have at least two communities fighting each other. It’s definitely not a desirable situation. Those are the cases where maybe platforms themselves could regulate or maybe the government needs to regulate. In this case, for example, when it is related to elections, the Election Commission also has its role, but in many cases we don’t see that happening. 

Greene: Okay, let’s go back to hate speech for a minute because that’s always been a very difficult problem. Is that a difficult problem in India? Is hate speech well-defined? Do you think the current rules serve society well or are there problems with it? 

I wouldn’t say it’s well-defined, but even in the current law there are provisions that address it. So anything which could lead to violence or which could lead to animosity between two communities will fall in the realm of hate speech. It’s not defined as such, but then that is where your free speech rights could be restricted. That definitely could fall under the definition of hate speech. 

Greene: And do you think that definition works well? 

I mean the definition is not the problem. It’s essentially a question of how it is implemented. It’s a question of how the government or its agency implements it. It’s a question of how platforms are taking care of it. These are two issues where there’s more that needs to be done. 

Greene: You also talked about misinformation in terms of elections. How do we reconcile freedom of expression concerns with concerns for preventing misinformation? 

I would definitely say it’s a gray area. I mean how do you really balance this? But I don’t think it’s a problem which cannot be addressed. Definitely there’s a lot for civil society to do, a lot for the private sector to do. Especially, for example, when hate speech is reported to the platforms. It should be dealt with quickly, but that is where we’re seeing the worst difference in how platforms act on such reporting in the Global North versus what happens in the Global South. Platforms need to up their act when it comes to handling such situations and handling such content. 

Greene: Okay, let’s talk about the platforms then. How do you feel about censorship or restrictions on freedom of expression by the platforms? 

Things have changed a lot as to how these platforms work. Now the platforms decide what kind of content gets to your feed and how the algorithms work to promote content which is more viral. In many cases we have seen how misinformation and hate speech goes viral. And content that is debunking the misinformation which is kind of providing the real facts, that doesn’t go as far. The content that debunks misinformation doesn’t go viral or come up in your feed that fast. So that definitely is a problem, the way platforms are dealing with it. In many cases it might be economically beneficial for them to make sure that content which is viral and which puts forth misinformation reaches more eyes. 

Greene: Do you think that the platforms that are most commonly used in India—and I know there's no TikTok in India—serve free speech interests or not?

When the Information Technology Rules were introduced and when the discussions happened, I would say civil society supported the platforms, essentially saying these platforms ensured we can enjoy our free speech rights, people can enjoy their free speech rights and express themselves freely. How the situation changed over a period of time is interesting. Definitely these platforms are still important for us to express these rights. But when it comes to, let’s say, content being regulated, some platforms do push back when the government asks them to take down the content, but we have not seen that much. So whether they’re really the messiahs for free speech, I doubt. Over the years, we have seen that it is most often the case that when the government tells them to do something, it is in their interest to do what the government says. There has not been much pushback except for maybe Twitter challenging it in the court.  There have not been many instances where these platforms supported users. 

Greene: So we’ve talked about hate speech and misinformation, are there other types of content or categories of online speech that are either problematic in India now or at least that regulators are looking at that you think the government might try to do something with? 

One major concern which the government is trying to regulate is about deepfakes, with even the Prime Minister speaking about it. So suddenly that is something of a priority for the government to regulate. So that's definitely a problem, especially when it comes to public figures and particularly women who are in politics who often have their images manipulated. In India we see that at election time. Even politicians who have been in the field for a long time, their images have been misused and morphed images have been circulated. So that's definitely something that the platforms need to act on. For example, you cannot have the luxury of, let's say, taking 48 hours to decide what to do when something like that is posted. This is something which platforms have to deal with as early as possible. We do understand there's a lot of content and a lot of reporting happening, but in some cases, at least, there should be some prioritization of reports related to non-consensual sexual imagery. Maybe then the priority should go up.

Greene: As an engineer, how do you feel about deepfake tech? Should the regulatory concerns be qualitatively different than for other kinds of false information? 

When it comes to deepfakes, I would say the problem is that it has become more mainstream. It has become very easy for a person to use these tools that have become more accessible. Earlier you needed to have specialized knowledge, especially when it came to something like editing videos. Now it's become much easier. These tools are made easily available. The major difference now is how easy it is to access these applications. There cannot be a case of fully regulating or fully controlling a technology. It's not essentially a problem with the technology, because there would be a lot of ethical use cases. Just because something is used for a harmful purpose doesn't mean that you completely block the technology. There is definitely a case for regulating AI and regulating deepfakes, but that doesn't mean you put a complete stop to it.

Greene: How do you feel about TikTok being banned in India? 

I think that's less a question of technology or regulation and more of a geopolitical issue. I don't think it has anything to do with the technology or even the transfer of data for that matter. I think it was just a geopolitical issue related to India-China relations. The relations have kind of soured with the border disputes and other things, and I think that was the trigger for the TikTok ban.

Greene: What is your most significant legal victory from a human rights perspective and why? 

The victory that we had was in the fight against the 2011 Rules and the portions related to intermediary liability, which were shot down by the Supreme Court. That was important because when it came to platforms and when it came to people expressing their critical views online, all of this could have been taken down very easily. So that was definitely a case of free speech rights being affected without much recourse. So that was a major victory.

Greene: Okay, now we ask everyone this question. Who is your free speech hero and why?

I can't think of one person, but I think of, for example, when the country went through a bleak period in the 1970s and the government declared a national state of emergency. During that time we had journalists and politicians who fought for free speech rights with respect to the news media. At that time even writing something in the publications was difficult. We had many cases of journalists who were fighting this, people who had gone to jail for writing something, who had gone to jail for opposing the government or publicly criticizing the government. So I don't think of just one person, but we have seen journalists and political leaders fighting back during that state of emergency. I would say those are the heroes who could fight the government, who could fight law enforcement. Then there was the case of Justice H.R. Khanna, a judge who stood up for citizens' rights and gave his dissenting opinion against the majority view, which cost him the position of Chief Justice. Maybe I would say he's a hero, a person who was clear about constitutional values and principles.

EFF Speaks Out in Court for Citizen Journalists

December 12, 2024 at 17:11

No one gets to abuse copyright to shut down debate. Because of that, we at EFF represent Channel 781, a group of citizen journalists whose YouTube channel was temporarily shut down following copyright infringement claims made by Waltham Community Access Corporation (WCAC). As part of that case, the federal court in Massachusetts heard oral arguments in Channel 781 News v. Waltham Community Access Corporation, a pivotal case for copyright law and digital journalism. 

WCAC, Waltham’s public access channel, records city council meetings on video. Channel 781, a group of independent journalists, curates clips of those meetings for its YouTube channel, along with original programming, to spark debate on issues like housing policy and real estate development. WCAC sent a series of DMCA takedown notices that accused Channel 781 of copyright infringement, resulting in YouTube deactivating Channel 781’s channel just days before a critical municipal election.

Represented by EFF and the law firm Brown Rudnick LLP, Channel 781 sued WCAC for misrepresentations in its DMCA takedown notices. We argued that using clips of government meetings from the government access station to engage in public debate is an obvious fair use under copyright. Also, by excerpting factual recordings and using captions to improve accessibility, the group aims to educate the public, a purpose distinct from WCAC’s unannotated broadcasts of hours-long meetings. The lawsuit alleges that WCAC’s takedown requests knowingly misrepresented the legality of Channel 781's use, violating Section 512(f) of the DMCA.

Fighting a Motion to Dismiss

In court this week, EFF pushed back against WCAC’s motion to dismiss the case. We argued to District Judge Patti Saris that Channel 781’s use of video clips of city government meetings was an obvious fair use, and that by failing to consider fair use before sending takedown notices to YouTube, WCAC violated the law and should be liable for damages.

If Judge Saris denies WCAC’s motion, we will move on to proving our case. We’re confident that the outcome will promote accountability for copyright holders who misuse the powerful notice-and-takedown mechanism that the DMCA provides, and also protect citizen journalists in their use of digital tools.

EFF will continue to provide updates as the case develops. Stay tuned for the latest news on this critical fight for free expression and the protection of digital rights.

X's Last-Minute Update to the Kids Online Safety Act Still Fails to Protect Kids—or Adults—Online

Late last week, the Senate released yet another version of the Kids Online Safety Act, written, reportedly, with the assistance of X CEO Linda Yaccarino in a flawed attempt to address the critical free speech issues inherent in the bill. This last minute draft remains, at its core, an unconstitutional censorship bill that threatens the online speech and privacy rights of all internet users. 

TELL CONGRESS: VOTE NO ON KOSA


Update Fails to Protect Users from Censorship or Platforms from Liability

The most important update, according to its authors, supposedly minimizes the impact of the bill on free speech. As we've said before, KOSA's "duty of care" section is its biggest problem, as it would force a broad swath of online services to make policy changes based on the content of online speech. Though the bill's authors inaccurately claim KOSA only regulates designs of platforms, not speech, the harms it enumerates—eating disorders, substance use disorders, and suicidal behaviors, for example—are not caused by the design of a platform.

The authors have failed to grasp the difference between immunizing individual expression and protecting a platform from the liability that KOSA would place on it.

KOSA is likely to actually increase the risks to children, because it will prevent them from accessing online resources about topics like addiction, eating disorders, and bullying. It will result in services imposing age verification requirements and content restrictions, and it will stifle minors from finding or accessing their own supportive communities online. For these reasons, we’ve been critical of KOSA since it was introduced in 2022. 

This updated bill adds just one sentence to the "duty of care" requirement: "Nothing in this section shall be construed to allow a government entity to enforce subsection a [the duty of care] based upon the viewpoint of users expressed by or through any speech, expression, or information protected by the First Amendment to the Constitution of the United States." But the viewpoint of users was never impacted by KOSA's duty of care in the first place. The duty of care is a duty imposed on platforms, not users. Platforms must mitigate the harms listed in the bill, not users, and the platform's ability to share users' views is what's at risk—not the ability of users to express those views. Adding that the bill doesn't impose liability based on user expression doesn't change how the bill would be interpreted or enforced. The FTC could still hold a platform liable for the speech it contains.

Let's say, for example, that a covered platform like Reddit hosts a forum created and maintained by users for discussion of overcoming eating disorders. Even though the speech contained in that forum is entirely legal, often helpful, and possibly even life-saving, the FTC could still hold Reddit liable for violating the duty of care by allowing young people to view it. The same could be true of a Facebook group about LGBTQ issues, or for a post about drug use that X showed a user through its algorithm. If a platform's defense were that this information is protected expression, the FTC could simply say that they aren't enforcing it based on the expression of any individual viewpoint, but based on the fact that the platform allowed a design feature—a subreddit, Facebook group, or algorithm—to distribute that expression to minors. It's a superfluous carveout for user speech and expression that KOSA never penalized in the first place, but which the platform would still be penalized for distributing.

It’s particularly disappointing that those in charge of X—likely a covered platform under the law—had any role in writing this language, as the authors have failed to grasp the world of difference between immunizing individual expression, and protecting their own platform from the liability that KOSA would place on it.  

Compulsive Usage Doesn’t Narrow KOSA’s Scope 

Another of KOSA’s issues has been its vague list of harms, which have remained broad enough that platforms have no clear guidance on what is likely to cross the line. This update requires that the harms of “depressive disorders and anxiety disorders” have “objectively verifiable and clinically diagnosable symptoms that are related to compulsive usage.” The latest text’s definition of compulsive usage, however, is equally vague: “a persistent and repetitive use of a covered platform that significantly impacts one or more major life activities, including socializing, sleeping, eating, learning, reading, concentrating, communicating, or working.” This doesn’t narrow the scope of the bill. 

 The bill doesn’t even require that the impact be a negative one. 

It should be noted that there is no clinical definition of "compulsive usage" of online services. As in past versions of KOSA, this update cobbles together a definition that sounds just medical, or just legal, enough that it appears legitimate—when in fact the definition is devoid of specific legal meaning, and dangerously vague to boot.

How could the persistent use of social media not significantly impact the way someone socializes or communicates? The bill doesn’t even require that the impact be a negative one. Comments on an Instagram photo from a potential partner may make it hard to sleep for several nights in a row; a lengthy new YouTube video may impact someone’s workday. Opening a Snapchat account might significantly impact how a teenager keeps in touch with her friends, but that doesn’t mean her preference for that over text messages is “compulsive” and therefore necessarily harmful. 

Nonetheless, an FTC weaponizing KOSA could still hold platforms liable for showing content to minors that they believe results in depression or anxiety, so long as they can claim the anxiety or depression disrupted someone’s sleep, or even just changed how someone socializes or communicates. These so-called “harms” could still encompass a huge swathe of entirely legal (and helpful) content about everything from abortion access and gender-affirming care to drug use, school shootings, and tackle football. 

Dangerous Censorship Bills Do Not Belong in Must-Pass Legislation

The latest KOSA draft comes as incoming nominee for FTC Chair, Andrew Ferguson—who would be empowered to enforce the law, if passed—has reportedly vowed to protect free speech by "fighting back against the trans agenda," among other things. As we've said for years (and about every version of the bill), KOSA would give the FTC under this or any future administration wide berth to decide what sort of content platforms must prevent young people from seeing. Just passing KOSA would likely result in platforms taking down protected speech and implementing age verification requirements, even if it's never enforced; the FTC could simply express the types of content they believe harm children, and use the mere threat of enforcement to force platforms to comply.

No representative should consider shoehorning this controversial and unconstitutional bill into a continuing resolution. A law that forces platforms to censor truthful online content should not be in a last minute funding bill.

TELL CONGRESS: VOTE NO ON KOSA


Brazil’s Internet Intermediary Liability Rules Under Trial: What Are the Risks?

December 11, 2024 at 09:00

The Brazilian Supreme Court is on the verge of deciding whether digital platforms can be held liable for third-party content even without a judicial order requiring removal. A panel of eleven justices is examining two cases jointly, and one of them directly challenges whether Brazil’s internet intermediary liability regime for user-generated content aligns with the country’s Federal Constitution or fails to meet constitutional standards. The outcome of these cases can seriously undermine important free expression and privacy safeguards if they lead to general content monitoring obligations or broadly expand notice-and-takedown mandates. 

The court’s examination revolves around Article 19 of Brazil’s Civil Rights Framework for the Internet (“Marco Civil da Internet”, Law n. 12.965/2014). The provision establishes that an internet application provider can only be held liable for third-party content if it fails to comply with a judicial order to remove the content. A notice-and-takedown exception to the provision applies in cases of copyright infringement, unauthorized disclosure of private images containing nudity or sexual activity, and content involving child sexual abuse. The first two exceptions are in Marco Civil, while the third one comes from a prior rule included in the Brazilian child protection law.

The decision the court takes will set a precedent for lower courts regarding two main topics: whether Marco Civil’s internet intermediary liability regime is aligned with Brazil's Constitution and whether internet application providers have the obligation to monitor online content they host and remove it when deemed offensive, without judicial intervention. Moreover, it can have a regional and cross-regional impact as lawmakers and courts look across borders at platform regulation trends amid global coordination initiatives.

After a public hearing held last year, the Court's sessions about the cases started in late November and, so far, only Justice Dias Toffoli, who is in charge of Marco Civil’s constitutionality case, has concluded the presentation of his vote. The justice declared Article 19 unconstitutional and established the notice-and-takedown regime set in Article 21 of Marco Civil, which relates to unauthorized disclosure of private images, as the general rule for intermediary liability. According to his vote, the determination of liability must consider the activities the internet application provider has actually carried out and the degree of interference of these activities.

However, platforms could be held liable for certain content regardless of notification, leading to a monitoring duty. Examples include content considered criminal offenses, such as crimes against the democratic state, human trafficking, terrorism, racism, and violence against children and women. It also includes the publication of notoriously false or severely miscontextualized facts that lead to violence or have the potential to disrupt the electoral process. If there’s reasonable doubt, the notice-and-takedown rule under Marco Civil’s Article 21 would be the applicable regime.

The court session resumes today, but it's still uncertain whether all eleven justices will reach a judgment by year's end.

Some Background About Marco Civil’s Intermediary Liability Regime

The legislative intent back in 2014 to establish Article 19 as the general rule for internet application providers' liability for user-generated content reflected civil society’s concerns over platform censorship. Faced with the risk of being held liable for user content, internet platforms generally prioritize their economic interests and security over preserving users’ protected expression and over-remove content to avoid legal battles and regulatory scrutiny. The enforcement overreach of copyright rules online was already a problem when the legislative discussion of Marco Civil took place. Lawmakers chose to rely on courts to balance the different rights at stake in removing or keeping user content online. The approval of Marco Civil had wide societal support and was considered a win for advancing users’ rights online.

The provision was in line with the recommendations of the Special Rapporteurs for Freedom of Expression of the United Nations and the Inter-American Commission on Human Rights (IACHR). In that regard, the then IACHR Special Rapporteur had clearly remarked that a strict liability regime creates strong incentives for private censorship, and would run against the State's duty to favor an institutional framework that protects and guarantees free expression under the American Convention on Human Rights. Notice-and-takedown regimes as the general rule also raised concerns about over-removal and the weaponization of notification mechanisms to censor protected speech.

A lot has happened since 2014. Big Tech platforms have consolidated their dominance, the internet ecosystem is more centralized, and algorithmic mediation of content distribution online has intensified, increasingly relying on a corporate surveillance structure. Nonetheless, the concerns Marco Civil reflects remain relevant just as the balance its intermediary liability rule has struck persists as a proper way of tackling these concerns. Regarding current challenges, changes to the liability regime suggested in Dias Toffoli's vote will likely reinforce rather than reduce corporate surveillance, Big Tech’s predominance, and digital platforms’ power over online speech.

The Cases Under Trial and The Reach of the Supreme Court’s Decision

The two individual cases under analysis by the Supreme Court are more than a decade old. Both relate to the right to honor. In the first one, the plaintiff, a high school teacher, sued Google Brasil Internet Ltda to remove an online community created by students to offend her on the now defunct Orkut platform. She asked for the deletion of the community and compensation for moral damages, as the platform didn't remove the community after an extrajudicial notification. Google deleted the community following the decision of the lower court, but the judicial dispute about the compensation continued.

In the second case, the plaintiff sued Facebook after the company didn't remove an offensive fake account impersonating her. The lawsuit sought to shut down the fake account, obtain the identification of the account's IP address, and secure compensation for moral damages. As Marco Civil had already passed, the judge denied the moral compensation request. Yet, the appeals court found that Facebook could be liable for not removing the fake account after an extrajudicial notification, finding Marco Civil's intermediary liability regime unconstitutional vis-à-vis Brazil's constitutional protection of consumers.

Both cases went all the way through the Supreme Court in two separate extraordinary appeals, now examined jointly. For the Supreme Court to analyze extraordinary appeals, it must identify and approve a “general repercussion” issue that unfolds from the individual case. As such, the topics under analysis of the Brazilian Supreme Court in these appeals are not only the individual cases, but also the court’s understanding about the general repercussion issues involved. What the court stipulates in this regard will orient lower courts’ decisions in similar cases. 

The two general repercussion issues under scrutiny are, then, the constitutionality of Marco Civil’s internet intermediary liability regime and whether internet application providers have the obligation to monitor published content and take it down when considered offensive, without judicial intervention. 

There’s a lot at stake for users’ rights online in the outcomes of these cases. 

The Many Perils and Pitfalls on the Way

Brazil's platform regulation debate has heated up in the last few years. Concerns over the gigantic power of Big Tech platforms, the negative effects of their attention-driven business model, and revelations of plans and actions by the previous presidential administration to arbitrarily remain in power have all inflamed discussions of regulating Big Tech. As the debate's main vector, draft bill 2630 (PL 2630), didn't move forward in the Brazilian Congress, the Supreme Court's pending cases gained traction as the available alternative for introducing changes.

We've written about intermediary liability trends around the globe, how to move forward, and the risks that changes in safe harbor regimes end up reshaping intermediaries' behavior in ways that ultimately harm freedom of expression and other rights for internet users.

One of these risks is relying on strict liability regimes to moderate user expression online. Holding internet application providers liable for user-generated content regardless of a notification means requiring them to put in place systems of content monitoring and filtering with automated takedowns of potentially infringing content.

While platforms like Facebook, Instagram, X (formerly Twitter), TikTok, and YouTube already use AI tools to moderate and curate the sheer volume of content they receive per minute, the resources they have for doing so are not available for other, smaller internet application providers that host users' expression. Making automated content monitoring a general obligation will likely intensify the concentration of the online ecosystem in just a handful of large platforms. Strict liability regimes also inhibit or even endanger the existence of less-centralized content moderation models, contributing yet again to entrenching Big Tech's dominance and business model.

But the fact that Big Tech platforms already use AI tools to moderate and restrict content doesn't mean they do it well. Automated content monitoring is hard at scale, and platforms constantly fail at purging content that violates their rules without sweeping up protected content. In addition to historical issues with AI-based detection of copyright infringement that have deeply undermined fair use rules, automated systems often flag and censor crucial information that should stay online.

Just to give a few examples, during the wave of protests in Chile, internet platforms wrongfully restricted content reporting the police's harsh repression of demonstrations, having deemed it violent content. In Brazil, we saw similar concerns when Instagram censored images of the massacre in the Jacarezinho community in 2021, which was the most lethal police operation in Rio de Janeiro's history. In other geographies, the quest to restrict extremist content has removed videos documenting human rights violations in conflicts in countries like Syria and Ukraine.

These are all examples of content similar to what could fit into Justice Toffoli’s list of speech subject to a strict liability regime. And while this regime shouldn’t apply in cases of reasonable doubt, platform companies won’t likely risk keeping such content up out of concern that a judge decides later that it wasn’t a reasonable doubt situation and orders them to pay damages.  Digital platforms have, then, a strong incentive to calibrate their AI systems to err on the side of censorship. And depending on how these systems operate, it means a strong incentive for conducting prior censorship potentially affecting protected expression, which defies Article 13 of the American Convention.  

Setting the notice-and-takedown regime as the general rule for an intermediary’s liability also poses risks. While the company has the chance to analyze and decide whether to keep content online, again the incentive is to err on the side of taking it down to avoid legal costs.

Brazil's own experience in courts shows how tricky the issue can be. InternetLab's research based on rulings involving free expression online indicated that Brazilian courts of appeals denied content removal requests in more than 60% of cases. The Brazilian Association of Investigative Journalism (ABRAJI) has also highlighted data showing that at some point in judicial proceedings, judges agreed with content removal requests in around half of the cases, and some were reversed later on. This is especially concerning in honor-related cases. The more influential or powerful the person involved, the higher the chances of arbitrary content removal, flipping the public-interest logic of preserving access to information. We should not forget companies that thrived by offering reputation management services built upon the use of takedown mechanisms to disappear critical content online.

It's important to underline that this ruling comes in the absence of digital procedural justice guarantees. While Justice Toffoli's vote asserts platforms' duty to provide specific notification channels, preferably electronic, to receive complaints about infringing content, there are no further specifications to avoid the misuse of notification systems. Article 21 of Marco Civil provides that notices must allow the specific identification of the contested content (generally understood as the URL) and include elements to verify that the complainant is the person offended. Except for that, there is no further guidance on which details and justifications the notice should contain, and whether the content's author would have the opportunity, and the proper mechanism, to respond to or appeal the takedown request.

As we said before, we should not mix platform accountability with reinforcing digital platforms as points of control over people's online expression and actions. This is a dangerous path considering the power big platforms already have and the increasing intermediation of digital technologies in everything we do. Unfortunately, the Supreme Court seems to be taking a direction that will emphasize such a role and dominant position, creating also additional hurdles for smaller platforms and decentralized models to compete with the current digital giants. 

Introducing EFF’s New Video Series: Gate Crashing

December 10, 2024 at 14:56

The promise of the internet—at least in the early days—was that it would lower the barriers to entry for any number of careers. Traditionally, the spheres of novel writing, culture criticism, and journalism were populated by well-off straight white men, with anyone not meeting one of those criteria being an outlier. Add in giant corporations acting as gatekeepers to those spheres and it was a very homogenous culture. The internet has changed that. 

There is a lot about the internet that needs fixing, but the one thing we should preserve and nurture is the nontraditional paths to success it creates. In this series of interviews, called “Gate Crashing,” we look to highlight those people and learn from their examples. In an ideal world, lawmakers will be guided by lived experiences like these when thinking about new internet legislation or policy. 

In our first video, we look at creators who honed their media criticism skills in fandom spaces. Please join Gavia Baker-Whitelaw and Elizabeth Minkel, co-creators of the Rec Center newsletter, in a wide-ranging discussion about how they got started, where it has led them, and what they’ve learned about internet culture and policy along the way. 

[Embedded video. Privacy info: This embed will serve content from youtube.com.]

Speaking Freely: Tomiwa Ilori

By: David Greene
December 10, 2024 at 13:40

Interviewer: David Greene

*This interview has been edited for length and clarity.

Tomiwa Ilori is an expert researcher and a policy analyst with a focus on digital technologies and human rights. Currently, he is an advisor for the B-Tech Africa Project at UN Human Rights and a Senior ICFP Fellow at HURIDOCS. His postgraduate qualifications include master's and doctorate degrees from the Centre for Human Rights, Faculty of Law, University of Pretoria. All views and opinions expressed in this interview are personal.

Greene: Why don’t you start by introducing yourself?

Tomiwa Ilori: My name is Tomiwa Ilori. I’m a legal consultant with expertise in digital rights and policy. I work with a lot of organizations on digital rights and policy including information rights, business and human rights, platform governance, surveillance studies, data protection and other aspects. 

Greene: Can you tell us more about the B-Tech project? 

The B-Tech project is a project by the UN human rights office and the idea behind it is to mainstream the UN Guiding Principles on Business and Human Rights (UNGPs) into the tech sector. The project looks at, for example, how  social media platforms can apply human rights due diligence frameworks or processes to their products and services more effectively. We also work on topical issues such as Generative AI and its impacts on human rights. For example, how do the UNGPs apply to Generative AI? What guidance can the UNGPs provide for the regulation of Generative AI and what can actors and policymakers look for when regulating Generative AI and other new and emerging technologies? 

Greene: Great. This series is about freedom of expression. So my first question for you is what does freedom of expression mean to you personally? 

I think freedom of expression is like oxygen, more or less like the air we breathe. There is nothing about being human that doesn’t involve expression, just like drawing breath. Even beyond just being a right, it’s an intrinsic part of being human. It’s embedded in us from the start. You have this natural urge to want to express yourself right from being an infant. So beyond being a human right, it is something you can almost not do without in every facet of life. Just to put it as simply as possible, that’s what it means to me. 

Greene: Is there a single experience or several experiences that shaped your views about freedom of expression? 

Yes. For context, I’m Nigerian and I also grew up in the Southwestern part of the country where most of the Yorùbá people live. As a Yoruba person and as someone who grew up listening and speaking the Yoruba language, language has a huge influence on me, my philosophy and my ideas. I have a mother who loves to speak in proverbs and mostly in Yorùbá. Most of these proverbs which are usually profound show that free speech is the cornerstone of being human, being part of a community, and exercising your right to life and existence. Sharing expression and growing up in that kind of community shaped my worldview about my right to be. Closely attached to my right to be is my right to express myself. More importantly, it also shaped my view about how my right to be does not necessarily interrupt someone else’s right to be. So, yes, my background and how I grew up really shaped me. Then, I was fortunate that I also grew up and furthered my studies. My graduate studies including my doctorate focused on freedom of expression. So I got both the legal and traditional background grounded in free speech studies and practices in unique and diverse ways. 

Greene: Can you talk more about whether there is something about the Yorùbá language or culture that is uniquely supportive of freedom of expression?

There’s a proverb that goes, “A kìí pa ohùn mọ agogo lẹ́nu” and what that means in a loose English translation is that you cannot shut the clapperless bell up, it is the bell’s right to speak, to make a sound. So you have no right to stop a bell from doing what it’s meant to do, it suggests that it is everyone’s right to express themselves. It suffices to say that according to that proverb, you have no right to stop people from expressing themselves. There’s another proverb that is a bit similar which is,“Ọmọdé gbọ́n, àgbà gbọ́n, lafí dá ótù Ifẹ̀” which when loosely translated refers to how both the old and the young collaborate to make the most of a society by expressing their wisdom. 

Greene: Have you ever had a personal experience with censorship? 

Yes and I will talk about two experiences. First, and this might not fit the technical definition of censorship, but there was a time when I lived in Kampala and I had to pay tax to access the internet which I think is prohibitive for those who are unable to pay it. If people have to make a choice between buying bread to eat and paying a tax to access the internet, especially when one item is an opportunity cost for the other, it makes sense that someone would choose bread over paying that tax. So you could say it’s a way of censoring internet users. When you make access prohibitive through taxation, it is also a way of censoring people. Even though I was able to pay the tax, I could not stop thinking about those who were unable to afford it and for me that is problematic and qualifies as a kind of censorship. 

Another one was actually very recent. Even though the internet service provider insisted that they did not shut down or throttle the internet, I remember that during the recent protests in Nairobi, Kenya in June of 2024, I experienced an internet shutdown for the first time. According to the internet service provider, the shutdown was the result of an undersea cable cut. Suddenly my emails just stopped working and my Twitter (now X) feed wouldn't load. The connection appeared to work for a few seconds, and then all of a sudden it would stop, then work for some time, then all of a sudden nothing. I felt incapacitated and helpless. That's the way I would describe it. I felt like, "Wow, I have written, thought, spoken about this so many times and this is it." For the first time I understood what it means to actually experience an internet shutdown and it's not just the experience, it's the helplessness that comes with it too.

Greene: Do you think there is ever a time when the government can justify an internet shutdown? 

The simple answer is no. In my view, those who carry out internet shutdowns, especially state actors, believe that since freedom of expression and some other associated rights are not absolute, they have every right to restrict them without measure. I think what many actors that are involved in internet shutdowns use as justification is a mask for their limited capacity to do the right thing. Actors involved in shutting down the internet say that they usually do not have a choice. For example, they say that hate speech, misinformation, and online violence are being spread online in such a way that it could spill over into offline violence. Some have even gone as far as saying that they’re shutting down the internet because they want to curtail examination fraud. When these are the kind of excuses used by actors, it demonstrates the limited understanding of actors on what international human rights standards prescribe and what can actually be done to address the online harms that are used to justify internet shutdowns. 

Let me use an example: international human rights standards provide clear processes for instances where state actors must address online harms or where private actors must address harms to forestall offline violence. The perception is that these standards do not even give room for addressing harms, which is not the case. The process requires that whatever action you take must be legal, i.e., provided clearly in a law; it must not be vague, and it must be unequivocal and show in detail the nature of the right that is limited. Another requirement says that whatever action is taken to limit a right must be proportional. If you are trying to fight hate speech online, don't you think it is disproportionate to shut down the entire network just to fight one section of people spreading such speech? Another requirement is that its necessity must be justified, i.e., it must protect a clearly defined public interest or order, which must be specific and not the blanket term 'national security.' Additionally, international human rights law is clear that these requirements must be cumulative, i.e., you cannot fulfill the requirement of legality and not fulfill that of proportionality or necessity.

This shows that when trying to regulate online harms, it needs to be very specific. So, for example, state actors can actually claim that a particular piece of content or speech is causing harm, which the state actors must prove according to the requirements above. You can make a request such that just that content alone is restricted. Also, these must be put in context. Using hate speech as an example, there's the Rabat Action Plan on Hate Speech which was developed by the UN, and it's very clear on the conditions that must be met before speech can be categorized as hate speech. So are these conditions met by state actors before, for example, they ask platforms to remove particular hate content? There are steps and processes involved in the regulation of problematic content, but state actors never go simply for targeted removals that comply with international human rights standards; they usually go for the entire network.

I’d also like to add that I find it problematic and ironic that most state actors who are supposedly champions of digital transformation are also the ones quick to shut down the internet during political events. There is no digital transformation that does not include a free, accessible and interoperable internet. These are some of the challenges and problematic issues that I think we need to address in more detail so we can hear each other better, especially when it comes to regulating online speech and fighting internet shutdowns. 

Greene: So shutdowns are then inherently disproportionate and not authorized by law. You talked about the types of speech that might be limited. Can you give us a sense of what types of online speech you think might be appropriately regulated by governments? 

For categories of speech that can be regulated, of course, that includes hate speech. It's addressed under international law: Article 20 of the International Covenant on Civil and Political Rights (ICCPR) prohibits propaganda for war, etc. The International Convention on the Elimination of All Forms of Racial Discrimination (ICERD) also provides for this. However, these applicable provisions are not carte blanche for state actors. The major conditions that must be met before speech qualifies as hate speech must be fulfilled before it can be regarded as such. This is done in order to address instances where powerful actors define what constitutes hate speech and violate human rights under the guise of combating it. There are still laws that criminalize disaffection against the state which are used to prosecute dissent.

Greene: In Nigeria or in Kenya or just on the continent in general? 

Yes, there are countries that still have lèse-majesté laws in criminal laws and penal codes. We've had countries like Nigeria that were trying to come up with a version of such laws for the online space, but those have been fought off, mostly by civil society actors.

So hate speech does qualify as speech that could be limited, but with caveats. There are several conditions that must be met before speech qualifies as hate speech. There must be context around the speech. For example, what kind of power does the person who makes the speech wield? What is the likelihood of that speech leading to violence? What audience has the speech been made to? These are some of the criteria that must be fulfilled before you say, "okay, this qualifies as hate speech."

There is also other clearly problematic content, child sexual abuse material for example, that is prima facie illegal and must be censored or removed or disallowed. That goes without saying. It's customary international human rights law especially as it applies to platform governance. Another category of speech could also be non-consensual sharing of intimate images which could qualify as online gender-based violence. So these are some of the categories that could come under regulation by states.

I also must sound a note that there are contexts to applying speech laws. It is also the reason why speech laws are among the most difficult regulations to come up with, because they are usually context-dependent, especially when they are to be balanced against international human rights standards. Of course, some of the biggest fears in platform regulation that touch on freedom of expression are how state actors could weaponize those laws to track or attack dissent and how businesses platform speech mainly for profit.

Greene: Is misinformation something the government should have a role in regulating or is that something that needs to be regulated by the companies or by the speakers? If it’s something we need to worry about, who has a role in regulating it? 

State actors have a role. But in my opinion I don’t think it’s regulation. The fact that you have a hammer does not mean that everything must look like a nail. The fact that a state actor has the power to make laws does not mean that it must always make laws on all social problems. I believe non-legal and multi-stakeholder solutions are required for combatting online harms. State actors have tried to do what they do best by coming up with laws that regulate misinformation. But where has that led us? The arrest and harassment of journalists, human rights defenders and activists. So it has really not solved any problems. 

When your approach is not solving any problems, I think it’s only right to re-evaluate. That’s the reason I said state actors have a role. In my view, state actors need to step back in a sense that you don’t necessarily need to leave the scene, but step back and allow for a more holistic dialogue among stakeholders involved in the information ecosystem. You could achieve a whole lot more through digital literacy and skills than you will with criminalizing misinformation. You can do way more by supporting journalists with fact-checking skills than you will ever achieve by passing overbroad laws that limit access to information. You can do more by working with stakeholders in the information ecosystem like platforms to label problematic content than you will ever by shutting down the internet. These are some of the non-legal methods that could be used to combat misinformation and actually get results. So, state actors have a role, but it is mainly facilitatory in the sense that it should bring stakeholders together to brainstorm on what the contexts are and the kinds of useful solutions that could be applied effectively. 

Greene: What do you feel the role of the companies should be? 

Companies also have an important role, one of which is to respect human rights in the course of providing services. What I always say for technology companies is that, if a certain jurisdiction or context is good enough to make money from, it is good enough to pay attention to and respect human rights there.

One of the perennial issues that platforms face in addressing online harms is aligning their community standards with international human rights standards. But oftentimes what happens is that corporate-speak is louder than the human rights language in many of these standards. 

That said, some of the practical things that platforms could do are to step out of the corporate talk of, “Oh, we’re companies, there’s not much we can do.” There’s a lot they can do. Companies need to get more involved, step into the arena, and work with key state actors and civil society to educate and develop capacity on how their platforms actually work. For example, what are the processes involved in taking down a piece of content? What are the processes involved in getting appeals? What are the processes involved in actually getting redress when a piece of content has been wrongly taken down? What are the ways platforms can accurately—and I say accurately emphatically because I’m not speaking about using automated tools—label content? Platforms also have responsibilities in being totally invested in the contexts they do business in. What are the triggers for misinformation in a particular country? Elections, conflict, protests? These are like early warning sign systems that platforms need to start paying attention to in order to understand their contexts and be able to address the harms on their platforms better.

Greene: What’s the most pressing free speech issue in the region in which you work? 

Well, for me, I think of a few key issues. Number one, which has been going on for the longest time, is the government’s use of laws to stifle free speech. Most of the laws that are used are cybercrime laws, electronic communication laws, and old press codes and criminal codes. They were never justified and they’re still not justified. 

A second issue is the privatization of speech by companies regarding the kind of speech that gets promoted or demoted. What are the guidelines on, for example, political advertisements? What are the guidelines on targeted advertisement? How are people’s data curated? What is it like in the algorithm black box? Platforms’ role in who says what, how, when, and where is also a burning free speech issue. And we are moving towards a future where speech is being commodified and privatized. Public media, for example, are now being relegated to the background. Everyone wants to be on social media and I’m not saying that’s a terrible thing, but it gives us a lot to think about, a lot to chew on.

Greene: And finally, who is your free speech hero? 

His name is Felá Aníkúlápó Kútì. Fela was a political musician and the originator of Afrobeat (not afrobeats with an “s,” but the original Afrobeat from which that genre came). Fela never started out as a political musician, but his music became highly political and highly popular among the people for obvious reasons. His music also became timely because, as a political musician in Nigeria living through the brutal military era, he resonated with a lot of people. He was a huge thorn in the flesh of despotic Nigerian and African leaders. So, for me, Fela is my free speech hero. He said quite a lot with his music that many people in his generation would never dare to say because of the political climate at that time. Taking such risks even in the face of brazen violence and even death was remarkable.

Fela was not just a political musician who understood the power of expression. He was also someone who understood the power of visual expression. He’s unique in his own way and expresses himself through music, through his lyrics. He’s someone who has inspired a lot of people including musicians, politicians and a lot of new generation activists.

A Fundamental-Rights Centered EU Digital Policy: EFF’s Recommendations 2024-2029

The European Union (EU) is a hotbed for tech regulation that often has ramifications for users globally.  The focus of our work in Europe is to ensure that EU tech policy is made responsibly and lives up to its potential to protect users everywhere. 

As the new mandate of the European institution begins – a period where newly elected policymakers set legislative priorities for the coming years – EFF today published recommendations for a European tech policy agenda that centers on fundamental rights, empowers users, and fosters fair competition. These principles will guide our work in the EU over the next five years. Building on our previous work and success in the EU, we will continue to advocate for users and work to ensure that technology supports freedom, justice, and innovation for all people of the world. 

Our policy recommendations cover social media platform intermediary liability, competition and interoperability, consumer protection, privacy and surveillance, and AI regulation. Here’s a sneak peek:  

  • The EU must ensure that the enforcement of platform regulation laws like the Digital Services Act and the European Media Freedom Act is centered on the fundamental rights of users in the EU and beyond.
  • The EU must create the conditions for fair digital markets that foster choice, innovation, and fundamental rights. Achieving this requires enforcing the user-rights centered provisions of the Digital Markets Act, promoting app store freedom, user choice, and interoperability, and countering AI monopolies. 
  • The EU must adopt a privacy-first approach to fighting online harms like targeted ads and deceptive design and protect children online without reverting to harmful age verification methods that undermine the fundamental rights of all users. 
  • The EU must protect users’ rights to secure, encrypted, and private communication, protect against surveillance everywhere, stay clear of new data retention mandates, and prioritize the rights-respecting enforcement of the AI Act. 

Read on for our full set of recommendations.

FTC Rightfully Acts Against So-Called “AI Weapon Detection” Company Evolv

The Federal Trade Commission has entered a settlement with self-styled “weapon detection” company Evolv, to resolve the FTC’s claim that the company “knowingly” and “repeatedly” engaged in “unlawful” acts by making misleading claims about its technology. Essentially, Evolv’s technology, which is deployed in schools, subways, and stadiums, does far less than the company has been claiming. 

The FTC alleged in their complaint that despite the lofty claims made by Evolv, the technology is fundamentally no different from a metal detector: “The company has insisted publicly and repeatedly that Express is a ‘weapons detection’ system and not a ‘metal detector.’ This representation is solely a marketing distinction, in that the only things that Express scanners detect are metallic and its alarms can be set off by metallic objects that are not weapons.” A typical contract for Evolv costs tens of thousands of dollars per year—five times the cost of traditional metal detectors. One district in Kentucky spent $17 million to outfit its schools with the software. 

The settlement requires Evolv to notify the many schools that use this technology to keep weapons out of classrooms that they are allowed to cancel their contracts. It also blocks the company from making any representations about its technology’s:

  • ability to detect weapons
  • ability to ignore harmless personal items
  • ability to detect weapons while ignoring harmless personal items
  • ability to ignore harmless personal items without requiring visitors to remove any such items from pockets or bags

The company also is prohibited from making statements regarding: 

  • Weapons detection accuracy, including in comparison to the use of metal detectors
  • False alarm rates, including comparisons to the use of metal detectors
  • The speed at which visitors can be screened, as compared to the use of metal detectors
  • Labor costs, including comparisons to the use of metal detectors 
  • Testing, or the results of any testing
  • Any material aspect of its performance, efficacy, nature, or central characteristics, including, but not limited to, the use of algorithms, artificial intelligence, or other automated systems or tools.

If the company can’t say these things anymore…then what do they even have left to sell? 

There’s a reason so many people accuse artificial intelligence of being “snake oil.” Time and again, a company takes public data in order to power “AI” surveillance, only for taxpayers to learn it does no such thing. “Just walk out” stores actually required people watching you on camera to determine what you purchased. Gunshot detection software that relies on a combination of artificial intelligence and human “acoustic experts” to purportedly identify and locate gunshots “rarely produces evidence of a gun-related crime.” There’s a lot of well-justified suspicion about what’s really going on within the black box of corporate secrecy in which artificial intelligence so often operates. 

Even when artificial intelligence used by the government isn’t “snake oil,” it often does more harm than good. AI systems can introduce or exacerbate harmful biases that have massive negative impacts on people’s lives. AI systems have been implicated in falsely accusing people of welfare fraud, increasing racial bias in jail sentencing as well as policing and crime prediction, and falsely identifying people as suspects based on facial recognition.

Now, politicians, schools, police departments, and private venues have been duped again. This time, by Evolv, a company that purports to sell “weapon detection technology,” which it claimed would use AI to scan people entering a stadium, school, or museum and alert authorities if it recognizes the shape of a weapon on a person. 

Even before the new FTC action, there was indication that this technology was not an effective solution to weapon-based violence. From July to October, New York City rolled out a trial of Evolv technology in 20 subway stations in an attempt to keep people from bringing weapons onto the transit system. Out of 2,749 scans there were 118 false positives. Twelve knives and no guns were recovered. 

Make no mistake, false positives are dangerous. Falsely telling officers to expect an armed individual is a recipe for an unarmed person to be injured or even killed.

Cities, performance venues, schools, and transit systems are understandably eager to do something about violence–but throwing money at the problem by buying unproven technology is not the answer and actually takes away resources and funding from more proven and systematic approaches. We applaud the FTC for standing up to the lucrative security theater technology industry. 

This Bill Could Put A Stop To Censorship By Lawsuit

Par : Joe Mullin
5 décembre 2024 à 13:38

For years now, deep-pocketed individuals and corporations have been turning to civil lawsuits to silence their opponents. These Strategic Lawsuits Against Public Participation, or SLAPPs, aren’t designed to win on the merits, but rather to harass journalists, activists, and consumers into silence by suing them over their protected speech. While 34 states have laws to protect against these abuses, there is still no protection at a federal level. 

Today, Reps. Jamie Raskin (D-MD) and Kevin Kiley (R-CA) introduced the bipartisan Free Speech Protection Act. This bill is the best chance we’ve seen in many years to secure strong federal protection for journalists, activists, and everyday people who have been subject to harassing meritless lawsuits. 

take action

Tell Congress we don't want a weaponized court system

The Free Speech Protection Act is a long overdue tool to protect against the use of SLAPP lawsuits as legal weapons that benefit the wealthy and powerful. This bill will help everyday Americans of all political stripes who speak out on local and national issues. 

Individuals or companies who are publicly criticized (or even simply discussed) will sometimes use SLAPP suits to intimidate their critics. Plaintiffs who file these suits don’t need to win on the merits, and sometimes they don’t even intend to see the case through. But the stress of the lawsuit and the costly legal defense alone can silence or chill the free speech of defendants. 

State anti-SLAPP laws work. But since state laws are often not applicable in federal court, people and companies can still maneuver to manipulate the court system, filing cases in federal court or in states with weak or nonexistent anti-SLAPP laws. 

SLAPPs All Around 

SLAPP lawsuits in federal court are increasingly being used to target activists and online critics. Here are a few recent examples: 

Coal Ash Company Sued Environmental Activists

In 2016, activists in Uniontown, Alabama—a poor, predominantly Black town with a median per capita income of around $8,000—were sued for $30 million by a Georgia-based company that put hazardous coal ash into Uniontown’s residential landfill. The activists were sued over statements on their website and Facebook page, which said things like the landfill “affected our everyday life,” and, “You can’t walk outside, and you cannot breathe.” The plaintiff settled the case after the ACLU stepped in to defend the activist group. 

Shiva Ayyadurai Sued A Tech Blog That Reported On Him

In 2016, technology blog Techdirt published articles disputing Shiva Ayyadurai’s claim to have “invented email.” Techdirt founder Mike Masnick was hit with a $15 million libel lawsuit in federal court. Masnick, an EFF Award winner,  fought back in court and his reporting remains online, but the legal fees had a big effect on his business. With a strong federal anti-SLAPP law, more writers and publishers will be able to fight back against bullying lawsuits without resorting to crowd-funding. 

Logging Company Sued Greenpeace 

In 2016, environmental non-profit Greenpeace was sued along with several individual activists by Resolute Forest Products. Resolute sued over blog post statements such as Greenpeace’s allegation that Resolute’s logging was “bad news for the climate.” (After four years of litigation, Resolute was ordered to pay nearly $1 million in fees to Greenpeace—because a judge found that California’s strong anti-SLAPP law should apply.) 

Congressman Sued His Twitter Critics And Media Outlets 

In 2019, anonymous Twitter accounts were sued by Rep. Devin Nunes, then a congressman representing parts of Central California. Nunes used lawsuits to attempt to unmask and punish two Twitter users who used the handles @DevinNunesMom and @DevinCow to criticize his actions as a politician. Nunes filed these actions in a state court in Henrico County, Virginia. The location had little connection to the case, but Virginia’s weak anti-SLAPP law has enticed many plaintiffs there. 

Over the next few years, Nunes went on to sue many other journalists who published critical articles about him, using state and federal courts to sue CNN, The Washington Post, his hometown paper The Fresno Bee, MSNBC, a group of his own constituents, and others. Nearly all of these lawsuits were dropped or dismissed by courts. If a federal anti-SLAPP law were in place, more defendants would have a chance of dismissing such lawsuits early and recouping their legal fees. 

Fast Relief From SLAPPs

The Free Speech Protection Act gives defendants of SLAPP suits a powerful tool to defend themselves.

The bill would allow a defendant sued for speaking out on a matter of public concern to file a special motion to dismiss, which the court must generally decide on within 90 days. If the court grants the speaker-defendant’s motion, the claims are dismissed. In many situations, defendants who prevail on an anti-SLAPP motion will be entitled to have the plaintiff reimburse them for their legal fees. 

take action

Tell Congress to pass the Free Speech Protection Act

EFF has been defending the rights of online speakers for more than 30 years. A strong federal anti-SLAPP law will bring us closer to the vision of an internet that allows anyone to speak out and organize for change, especially when they speak against those with more power and resources. Anti-SLAPP laws enhance the rights of all. We urge Congress to pass The Free Speech Protection Act. 

Let's Answer the Question: "Why is Printer Ink So Expensive?"

5 décembre 2024 à 12:01

Did you know that most printer ink isn’t even expensive to make? Why then is it so expensive to refill the ink on your printer? 

The answer is actually pretty simple: monopolies, weird laws, and companies exploiting their users for profit. If this sounds mildly infuriating and makes you want to learn ways to fight back, then head over to our new site, Digital Rights Bytes! We’ve even created a short video to explain what the heck is going on here.  

We’re answering the common tech questions that may be bugging you. Whether you’re hoping to learn something new or want to share resources with your family and friends, Digital Rights Bytes can be your one-stop-shop to learn more about the technology you use every day.  

Digital Rights Bytes also has answers to other common questions about device repair, ownership of your digital media, and more. If you’ve got additional questions you’d like us to tackle in the future, let us know on your favorite social platform using the hashtag #DigitalRightsBytes! 

Location Tracking Tools Endanger Abortion Access. Lawmakers Must Act Now.

Par : Lisa Femia
4 décembre 2024 à 17:06

EFF wrote recently about Locate X, a deeply troubling location tracking tool that allows users to see the precise whereabouts of individuals based on the locations of their smartphone devices. Developed and sold by the data surveillance company Babel Street, Locate X collects smartphone location data from a variety of sources and collates that data into an easy-to-use tool to track devices. The tool features a navigable map with red dots, each representing an individual device. Users can then follow the location of specific devices as they move about the map.

Locate X–and other similar services–are able to do this by taking advantage of our largely unregulated location data market.

Unfettered location tracking puts us all at risk. Law enforcement agencies can purchase their way around warrant requirements and bad actors can pay for services that make it easier to engage in stalking and harassment. Location tracking tools particularly threaten groups especially vulnerable to targeting, such as immigrants, the LGBTQ+ community, and even U.S. intelligence personnel abroad. Crucially, in a post-Dobbs United States, location surveillance also poses a serious danger to abortion-seekers across the country.

EFF has warned before about how the location data market threatens reproductive rights. The recent reports on Locate X illustrate even more starkly how the collection and sale of location data endangers patients in states with abortion bans and restrictions.

In late October, 404 Media reported that privacy advocates from Atlas Privacy, a data removal company, were able to get their hands on Locate X and use it to track an individual device’s location data as it traveled across state lines to visit an abortion clinic. Although the tool was designed for law enforcement, the advocates gained access by simply asserting that they planned to work with law enforcement in the future. They were then able to use the tool to track an individual device as it traveled from an apparent residence in Alabama, where there is a complete abortion ban, to a reproductive health clinic in Florida, where abortion is banned after 6 weeks of pregnancy. 

Following this report, we published a guide to help people shield themselves from tracking tools like Locate X. While we urge everyone to take appropriate technical precautions for their situation, it’s far past time to address the issue at its source. The onus shouldn’t be on individuals to protect themselves from such invasive surveillance. Tools like Locate X only exist because U.S. lawmakers have failed to enact legislation that would protect our location data from being bought and sold to the highest bidder. 

Thankfully, there’s still time to reshape the system, and there are a number of laws legislators could pass today to help protect us from mass location surveillance. Remember: when our location information is for sale, so is our safety. 

Blame Data Brokers and the Online Advertising Industry

There is a vast array of apps available for your smartphone that request access to your location. Sharing this information, however, may allow your location data to be harvested and sold to shadowy companies known as data brokers. Apps request access to device location to provide various features, but once access has been granted, apps can mishandle that information and are free to share and sell your whereabouts to third parties, including data brokers. These companies collect data showing the precise movements of hundreds of millions of people without their knowledge or meaningful consent. They then make this data available to anyone willing to pay, whether that’s a private company like Babel Street (and anyone they in turn sell to) or government agencies, such as law enforcement, the military, or ICE.

This puts everyone at risk. Our location data reveals far more than most people realize, including where we live and work, who we spend time with, where we worship, whether we’ve attended protests or political gatherings, and when and where we seek medical care—including reproductive healthcare.

Without massive troves of commercially available location data, invasive tools like Locate X would not exist.

For years, EFF has warned about the risk of law enforcement or bad actors using commercially available location data to track and punish abortion seekers. Multiple data brokers have specifically targeted and sold location information tied to reproductive healthcare clinics. The data broker SafeGraph, for example, classified Planned Parenthood as a “brand” that could be tracked, allowing investigators at Motherboard to purchase data for over 600 Planned Parenthood facilities across the U.S.

Meanwhile, the data broker Near sold the location data of abortion-seekers to anti-abortion groups, enabling them to send targeted anti-abortion ads to people who visited clinics. And location data firm Placer.ai even once offered heat maps showing where visitors to Planned Parenthood clinics approximately lived. Sale to private actors is disturbing given that several states have introduced and passed abortion “bounty hunter” laws, which allow private citizens to enforce abortion restrictions by suing abortion-seekers for cash.

Government officials in abortion-restrictive states are also targeting location information (and other personal data) about people who visit abortion clinics. In Idaho, for example, law enforcement used cell phone data to charge a mother and son with kidnapping for aiding an abortion-seeker who traveled across state lines to receive care. While police can obtain this data by gathering evidence and requesting a warrant based on probable cause, the data broker industry allows them to bypass legal requirements and buy this information en masse, regardless of whether there’s evidence of a crime.

Lawmakers Can Fix This

So far, Congress and many states have failed to enact legislation that would meaningfully rein in the data broker industry and protect our location information. Locate X is simply the end result of such an unregulated data ecosystem. But it doesn’t have to be this way. There are a number of laws that Congress and state legislators could pass right now that would help protect us from location tracking tools.

1. Limit What Corporations Can Do With Our Data

A key place to start? Stronger consumer privacy protections. EFF has consistently pushed for legislation that would limit the ability of companies to harvest and monetize our data. If we enforce strict rules on how location data is collected, shared, and sold, we can stop it from ending up in the hands of private surveillance companies and law enforcement without our consent.

We urge legislators to consider comprehensive, across-the-board data privacy laws. Companies should be required to minimize the collection and processing of location data to only what is strictly necessary to offer the service the user requested (see, for example, the recently-passed Maryland Online Data Privacy Act). Companies should also be prohibited from processing a person’s data, except with their informed, voluntary, specific, opt-in consent.

We also support reproductive health-specific data privacy laws, like Rep. Sara Jacobs’ proposed “My Body My Data” Act. Laws like this would create important protections for a variety of reproductive health data, even beyond location data. Abortion-specific data privacy laws can provide some protection against the specific problem posed by Locate X. But to fully protect against location tracking tools, we must legally limit processing of all location data and not just data at sensitive locations, such as reproductive healthcare clinics.

While a limited law might provide some help, it would not offer foolproof protection. Imagine this scenario: someone travels from Alabama to New York for abortion care. With a data privacy law that protects only sensitive, reproductive health locations, Alabama police could still track that person’s device on the journey to New York. Upon reaching the clinic in New York, their device would disappear into a sensitive location blackout bubble for a couple of hours, then reappear outside of the bubble where police could resume tracking as the person heads home. In this situation, it would be easy to infer where the person was during those missing two hours, giving Alabama police the lead they need.

The best solution is to minimize all location data, no exceptions.

2. Limit How Law Enforcement Can Get Our Data

Congress and state legislatures should also pass laws limiting law enforcement’s ability to access our location data without proper legal safeguards.

Much of our mobile data, like our location data, is information law enforcement would typically need a court order to access. But thanks to the data broker industry, law enforcement can skip the courts entirely and simply head to the commercial market. The U.S. government has turned this loophole into a way to gather personal data on individuals without a search warrant.

Lawmakers must close this loophole—especially if they’re serious about protecting abortion-seekers from hostile law enforcement in abortion-restrictive states. A key way to do this is for Congress to pass the Fourth Amendment is Not For Sale Act, which was originally introduced by Senator Ron Wyden in 2021 and made the important and historic step of passing the U.S. House of Representatives earlier this year. 

Another crucial step is to ban law enforcement from sending “geofence warrants” to corporate holders of location data. Unlike traditional warrants, a geofence warrant doesn’t start with a particular suspect or even a device or account; instead, police request data on every device in a given geographic area during a designated time period, regardless of whether the device owner has any connection to the crime under investigation. This could include, of course, an abortion clinic. 

Notably, geofence warrants are very popular with law enforcement. Between 2018 and 2020, Google alone received more than 5,700 demands of this type from states that now have anti-abortion and anti-LGBTQ legislation on the books.

Several federal and state courts have already found individual geofence warrants to be unconstitutional and some have even ruled they are “categorically prohibited by the Fourth Amendment.” But instead of waiting for remaining courts to catch up, lawmakers should take action now, pass legislation banning geofence warrants, and protect all of us–abortion-seekers included–from this form of dragnet surveillance.

3. Make Your State a Data Sanctuary

In the wake of the Dobbs decision, many states stepped up to serve as health care sanctuaries for people seeking abortion care that they could not access in their home states. To truly be a safe refuge, these states must also be data sanctuaries. A state that has data about people who sought abortion care must protect that data, and not disclose it to adversaries who would use it to punish them for seeking that healthcare. California has already passed laws to this effect, and more states should follow suit.

What You Can Do Right Now

Even before lawmakers act, there are steps you can take to better shield your location data from tools like Locate X.  As noted above, we published a Locate X-specific guide several weeks ago. There are also additional tips on EFF’s Surveillance Self-Defense site, as well as many other resources available to provide more guidance in protecting your digital privacy. Many general privacy practices also offer strong protection against location tracking. 

But don’t stop there: we urge you to make your voice heard and contact your representatives. While these precautions offer immediate protection, only stronger laws will ensure comprehensive location privacy in the long run.

Top Ten EFF Digital Security Resources for People Concerned About the Incoming Trump Administration

In the wake of the 2024 election in the United States, many people are concerned about tightening up their digital privacy and security practices. As always, we recommend that people start making their security plan by understanding their risks. For most people in the U.S., the threats that they face and the methods by which they are likely to be surveilled or harassed have not changed, but the consequences of digital privacy or security failures may become much more serious, especially for vulnerable populations such as journalists, activists, LGBTQ+ people, people seeking or providing abortion-related care, Black or Indigenous people, and undocumented immigrants.

EFF has decades of experience in providing digital privacy and security resources, particularly for vulnerable people. We’ve written a lot of resources over the years and here are the top ten that we think are most useful right now:

1. Surveillance Self-Defense

https://ssd.eff.org/

Our Surveillance Self-Defense guides are a great place to start your journey of securing yourself against digital threats. We know that it can be a bit overwhelming, so we recommend starting with our guide on making a security plan so you can familiarize yourself with the basics and decide on your specific needs. Or, if you’re planning to head out to a protest soon and want to know the most important ways to protect yourself, check out our guide to Attending a Protest. Many people in the groups most likely to be targeted in the upcoming months will need advice tailored to their specific threat models, and for that we recommend the Security Scenarios module as a quick way to find the right information for your particular situation. 

2. Street-Level Surveillance

https://sls.eff.org/ 

If you are creating your security plan for the first time, it’s helpful to know which technologies might realistically be used to spy on you. If you’re going to be out on the streets protesting or even just existing in public, it’s important to identify which threats to take seriously. Our Street-Level Surveillance team has spent years studying the technologies that law enforcement uses and has made this handy website where you can find information about technologies including drones, face recognition, license plate readers, stingrays, and more.

3. Atlas Of Surveillance

https://atlasofsurveillance.org/ 

Once you have learned about the different types of surveillance technologies police can acquire from our Street-Level Surveillance guides, you might want to know which technologies your local police have already bought. You can find that in our Atlas of Surveillance, a crowd-sourced map of police surveillance technologies in the United States. 

4. Doxxing: Tips To Protect Yourself Online & How to Minimize Harm

https://www.eff.org/deeplinks/2020/12/doxxing-tips-protect-yourself-online-how-minimize-harm

Surveillance by governments and law enforcement is far from the only kind of threat that people face online. We expect to see an increase in doxxing and harassment of vulnerable populations by vigilantes, emboldened by the incoming administration’s threatened policies. This guide is our thinking around the precautions you may want to take if  you are likely to be doxxed and how to minimize the harm if you’ve been doxxed already.

5. Using Your Phone in Times of Crisis

https://www.eff.org/deeplinks/2022/03/using-your-phone-times-crisis

Using your phone in general can be a cause for anxiety for many people. We have a short guide on what considerations you should make when you are using your phone in times of crisis. This guide is specifically written for people in war zones, but may also be useful more generally. 

6. Surveillance Self-Defense for Campus Protests

https://www.eff.org/deeplinks/2024/06/surveillance-defense-campus-protests 

One prediction we can safely make for 2025 is that campus protests will continue to be important. This blog post is our latest thinking about how to put together your security plan before you attend a protest on campus.

7. Security Education Companion

https://www.securityeducationcompanion.org/

For those who are already comfortable with Surveillance Self-Defense, you may be getting questions from your family, friends, or community about what to do now. You may even consider giving a digital security training session to people in your community, and for that you will need guidance and training materials. The Security Education Companion has everything you need to get started putting together a training plan for your community, from recommended lesson plans and materials to guides on effective teaching.

8. Police Location Tracking

https://www.eff.org/deeplinks/2024/11/creators-police-location-tracking-tool-arent-vetting-buyers-heres-how-protect 

One police surveillance technology we are especially concerned about is location tracking services. These are data brokers that get your phone's location, usually through the same invasive ad networks that are baked into almost every app, and sell that information to law enforcement. This can include historical maps of where a specific device has been, or a list of all the phones that were at a specific location, such as a protest or abortion clinic. This blog post goes into more detail on the problem and provides a guide on how to protect yourself and keep your location private.

9. Should You Really Delete Your Period Tracking App?

https://www.eff.org/deeplinks/2022/06/should-you-really-delete-your-period-tracking-app

As soon as the Supreme Court overturned Roe v. Wade, one of the most popular bits of advice going around the internet was to “delete your period tracking app.” Deleting your period tracking app may feel like an effective countermeasure in a world where seeking abortion care is increasingly risky and criminalized, but it’s not advice that is grounded in the reality of the ways in which governments and law enforcement currently gather evidence against people who are prosecuted for their pregnancy outcomes. This blog post provides some more effective ways of protecting your privacy and sensitive information. 

10. Why We Can’t Just Tell You Which Messenger App to Use

https://www.eff.org/deeplinks/2018/03/why-we-cant-give-you-recommendation

People are always asking us to give them a recommendation for the best end-to-end encrypted messaging app. Unfortunately, this is asking for a simple answer to an extremely nuanced question. While the short answer is “probably Signal most of the time,” the long answer goes into why that is not always the case. Since we wrote this in 2018, some companies have come and gone, but our thinking on this topic hasn’t changed much.

Bonus external guide

https://digitaldefensefund.org/learn

Our friends at the Digital Defense Fund have put together an excellent collection of guides aimed at particularly vulnerable people who are thinking about digital security for the first time. They have a comprehensive collection of links to other external guides as well.

***

EFF is committed to keeping our privacy and security advice accurate and up-to-date, reflecting the needs of a variety of vulnerable populations. We hope these resources will help you keep yourself and your community safe in dangerous times.

Speaking Freely: Aji Fama Jobe

Par : David Greene
3 décembre 2024 à 14:26

*This interview has been edited for length and clarity.

Aji Fama Jobe is a digital creator, IT consultant, blogger, and tech community leader from The Gambia. She helps run Women TechMakers Banjul, an organization that provides visibility, mentorship, and resources to women and girls in tech. She also serves as an Information Technology Assistant with the World Bank Group where she focuses on resolving IT issues and enhancing digital infrastructure. Aji Fama is a dedicated advocate working to leverage technology to enhance the lives and opportunities of women and girls in Gambia and across Africa.

Greene: Why don’t you start off by introducing yourself? 

My name is Aji Fama Jobe. I’m from Gambia and I run an organization called Women TechMakers Banjul that provides resources to women and girls in Gambia, particularly in the Greater Banjul area. I also work with other organizations that focus on STEM and digital literacy and aim to impact more regions and more people in the world. Gambia is made up of six different regions and we have host organizations in each region. So we go to train young people, especially women, in those communities on digital literacy. And that’s what I’ve been doing for the past four or five years. 

Greene: So this series focuses on freedom of expression. What does freedom of expression mean to you personally? 

For me it means being able to express myself without being judged. Because most of the time—and especially on the internet because of a lot of cyber bullying—I tend to think a lot before posting something. It’s all about, what will other people think? Will there be backlash? And I just want to speak freely. So for me it means to speak freely without being judged. 

Greene: Do you feel like free speech means different things for women in the Gambia than for men? And how do you see this play out in the work that you do? 

In the Gambia we have freedom of expression, the laws are there, but the culture is the opposite of the laws. Society still frowns on women who speak out, not just in the workspace but even in homes. Sometimes men say a woman shouldn’t speak loudly or there’s a certain way women should express themselves. It’s the culture itself that makes women not speak up in certain situations. In our culture it’s widely accepted that you let the man or the head of the family—who’s normally a man, of course—speak. I feel like freedom of speech is really important when it comes to the work we do. Because women should be able to speak freely. And when you speak freely it gives you that confidence that you can do something. So it’s a larger issue. What our organization does on free speech is address the unconscious bias in the tech space that impacts working women. I work as an IT consultant and sometimes when we’re trying to do something technical people always assume IT specialists are men. So sometimes we just want to speak up and say, “It’s IT woman, not IT guy.” 

Greene: We could say that maybe socially we need to figure this out, but now let me ask you this. Do you think the government has a role in regulating online speech? 

Those in charge of policy enforcement don’t understand how to navigate these online pieces. It’s not just about putting the policies in place. They need to train people how to navigate this thing or how to update these policies in specific situations. It’s not just about what the culture says. The policy is the policy and people should follow the rules, not just as civilians but also as policy enforcers and law enforcement. They need to follow the rules, too. 

Greene: What about the big companies that run these platforms? What’s their role in regulating online speech? 

With cyber-bullying I feel like the big companies need to play a bigger role in trying to bring down content sometimes. Take Facebook for example. They don’t have many people that work in Africa and understand Africa with its complexities and its different languages. For instance, in the Gambia we have 2.4 million people but six or seven languages. On the internet people use local languages to do certain things. So it’s hard to moderate on the platform’s end, but also they need to do more work. 

Greene: So six local languages in the Gambia? Do you feel there’s any platform that has the capability to moderate that? 

In the Gambia? No. We have some civil society that tries to report content, but it’s just civil society and most of them do it on a voluntary basis, so it’s not that strong. The only thing you can do is report it to Facebook. But Facebook has bigger countries and bigger issues to deal with, and you end up waiting in a lineup of those issues and then the damage has already been done. 

Greene: Okay, let’s shift gears. Do you consider the current government of the Gambia to be democratic? 

I think it is pretty democratic because you can speak freely after 2016 unlike with our last president. I was born in an era when people were not able to speak up. So I can only compare the last regime and the current one. I think now it’s more democratic because people are able to speak out online. I can remember back before the elections of 2016 that if you said certain things online you had to move out of the country. Before 2016 people who were abroad would not come back to Gambia for fear of facing reprisal for content they had posted online. Since 2016 we have seen people we hadn’t seen for like ten or fifteen years. They were finally able to come back. 

Greene: So you lived in the country under a non-democratic regime with the prior administration. Do you have any personal stories you could tell about life before 2016 and feeling like you were censored? Or having to go outside of the country to write something? 

Technically it was a democracy but the fact was you couldn’t speak freely. What you said could get you in trouble—I don’t consider that a democracy. 

During the last regime I was in high school. One thing I realized was that there were certain political things teachers wouldn’t discuss because they had to protect themselves. At some point I realized things changed because before 2016 we didn’t say the president’s name. We would give him nicknames, but the moment the guy left power we felt free to say his name directly. I experienced censorship from not being able to say his name or talk about him. I realized there was so much going on when the Truth, Reconciliation, and Reparations Commission (TRRC) happened and people finally had the confidence to go on TV and speak about their stories. 

As a young person I learned that what you see is not everything that’s happening. There were a lot of things that were happening but we couldn’t see because the media was restricted. The media couldn’t publish certain things. When he left and through the TRRC we learned about what happened. A lot of people lost their lives. Some had to flee. Some people lost their mom or dad, and some got raped. I think that opened my world. Even though I’m not politically inclined or in the political space, what happened there impacted me. Because we had a political moment where the president didn’t accept the elections, and a lot of people fled and went to Senegal. I stayed like three or four months and the whole country was on lockdown. So that was my experience of what happens when things don’t go as planned when it comes to the electoral process. That was my personal experience. 

Greene: Was there news media during that time? Was it all government-controlled or was there any independent news media? 

We had some independent news media, but those were from Gambians outside of the country. The media that was inside the country couldn’t publish anything against the government. If you wanted to know what was really happening, you had to go online. At some point, WhatsApp was blocked so we had to move to Telegram and other social media. At some point, because my dad was in Iraq, I had to download a VPN so I could talk to him and tell him what was happening in the country, since my mom and I were there. That’s why when people censor the internet I’m really keen on that aspect because I’ve experienced that. 

Greene: What made you start doing the work you’re doing now? 

First, when I started doing computer science—I have a computer science background—there was no one there to tell me what to do or how to do it. I had to navigate things for myself or look for people to guide me. I just thought, we don’t have to repeat the same thing for other people. That’s why we started Women TechMakers. We try to guide people and train them. We want employers to focus on skills instead of gender. So we get to train people, we have a lot of book plans and online resources that we share with people. If you want to go into a certain field we try to guide you and send you resources. That’s one of the things we do. Just for people to feel confident in their skills. And everyday people say to me, “Because of this program I was able to get this thing I wanted,” like a job or an event. And that keeps me going. Women get to feel confident in their skills and in the places they work, too. Companies are always looking for diversity and inclusion. Like, “oh I have two female developers.” At the end of the day you can say you have two developers and they’re very good developers. And yeah, they’re women. It’s not like they’re hired because they’re women, it’s because they’re skilled. That’s why I do what I do. 

Greene: Is there anything else you wanted to say about freedom of speech or about preserving online open spaces? 

I work with a lot of technical people who think freedom of speech is not their issue. But what I keep saying to people is that you think it’s not your issue until you experience it. But freedom of speech and digital rights are everybody’s issues. Because at the end of the day if you don’t have that freedom to speak freely online or if you are not protected online we are all vulnerable. It should be everybody’s responsibility. It should be a collective thing, not just government making policies. But also people need to be aware of what they’re posting online. The words you put out there can make or break someone, so it’s everybody’s business. That’s how I see digital rights and freedom of expression. As a collective responsibility. 

Greene: Okay, our last question that we ask everybody. Who is your free speech hero? 

My mom’s elder sister. She passed away in 2015, but her name is Mariama Jaw and she was in the political space even during the time when people were not able to speak. She was my hero because I went to political rallies with her and she would say what people were not willing to say. Not just in political spaces, but in general conversation, too. She’s somebody who would tell you the truth no matter what would happen, whether her life was in danger or not. I got so much inspiration from her because a lot of women don’t go into politics or do certain things and they just want to get a husband, but she went against all odds and she was a politician, a mother and sister to a lot of people, to a lot of women in her community.

🍿 Today’s Double Feature: Privacy and Free Speech

Par : Aaron Jue
3 décembre 2024 à 03:33

It’s Power Up Your Donation Week! Right now, your contribution to the Electronic Frontier Foundation will go twice as far to protect digital privacy, security, and free speech rights for everyone. Will you donate today to get a free 2X match?

Power Up!

Give to EFF and get a free donation match

Thanks to a fund made by a group of dedicated supporters, your donation online gets an automatic match up to $307,200 through December 10! This means every dollar you give equals two dollars to fight surveillance, oppose censorship, defend encryption, promote open access to information, and much more. EFF makes every cent count.

Lights, Laptops, Action!

Who has time to decode tech policy, understand the law, then figure out how to change things for the users? EFF does. The purpose of every attorney, activist, and technologist at EFF is to watch your back and make technology better. But you are the superstar who makes it possible with your support.

[Image: 'Fix Copyright' member shirt inspired by Steamboat Willie entering the public domain.]

With the help of people like you, EFF has been able to help unravel legal and ethical questions surrounding the rise of AI; keep policymakers on the road to net neutrality; encourage the Fifth Circuit Court of Appeals to rule that location-based geofence warrants are unconstitutional; and explain why banning TikTok and passing laws like the Kids Online Safety Act (KOSA) will not achieve internet safety.

The world struggles to get tech right, but EFF’s experts advocate for you every day of the year. Take action by renewing your EFF membership! You can set the stage for civil liberties and human rights online for everyone. Please give today and let your donation go twice as far for digital rights!

Power Up!

Support internet freedom
(and get an Instant match!)

Already an EFF Member?

Strengthen the community when you help us spread the word about Power Up Your Donation Week! Here’s some sample language that you can share:

Donate to EFF this week for an instant match! Double your impact on digital privacy, security, and free speech rights for everyone. https://eff.org/power-up

Bluesky | Email | Facebook | LinkedIn | X
(More at eff.org/social)

Each of us has the power to help in the movement for internet freedom. Our future depends on forging a web where we can have private conversations and explore the world online with confidence, so I thank you for your moral support and hope to have you on EFF's side as a member, too.

________________________

EFF is a member-supported U.S. 501(c)(3) organization. We’re celebrating ELEVEN YEARS of top ratings from the nonprofit watchdog Charity Navigator! Your donation is tax-deductible as allowed by law.

Amazon and Google Must Keep Their Promises on Project Nimbus

2 décembre 2024 à 14:52

When a company makes a promise, the public should be able to rely on it. Today, nearly every person in the U.S. is a customer of either Amazon or Google—and many of us are customers of both technology giants. Both of these companies have made public promises that they will ensure their technologies are not being used to facilitate human rights violations. These promises are not just corporate platitudes; they’re commitments to every customer and to society at large.  

It’s a reasonable thing to ask if these promises are being kept. And it’s especially important since Amazon and Google have been increasingly implicated by reports that their technologies, specifically their joint cloud computing initiative called Project Nimbus, are being used to facilitate mass surveillance and human rights violations of Palestinians in the Occupied Territories of the West Bank, East Jerusalem, and Gaza. This was the basis of our public call in August 2024 for the companies to come clean about their involvement.   

But we didn’t just make a public call. We sent letters directly to the Global Head of Public Policy at Amazon and to Google’s Global Head of Human Rights in late September. We detailed what these companies have promised and asked them to tell us by November 1, 2024 how they were complying. We hoped that they could clear up the confusion, or at least explain where we, or the reporting we were relying on, were wrong.  

But instead, they failed to respond. This is unfortunate, since it leads us to question how serious they were in their promises. And it should lead you to question that too.

Project Nimbus: Technology at the Expense of Human Rights

Project Nimbus provides advanced cloud and AI capabilities to the Israeli government, tools that an increasing number of credible reports suggest are being used to target civilians under pervasive surveillance in the Occupied Palestinian Territories. This is more than a technical collaboration—it’s a human rights crisis in the making as evidenced by data-driven targeting programs like Project Lavender and Where’s Daddy, which have reportedly led to detentions, killings, and the systematic oppression of journalists, healthcare workers, aid workers, and ordinary families. 

The consequences are serious. Vulnerable communities in Gaza and the West Bank suffer violations of their human rights, including their rights to privacy, freedom of movement, and free association, all of which can be fostered and furthered by pervasive surveillance. These documented violations underscore the ethical responsibility of Amazon and Google, whose technologies are at the heart of this surveillance scheme. 

Amazon and Google’s Promises

Amazon and Google have made public commitments to align with the UN Guiding Principles on Business and Human Rights and their own AI ethics frameworks. These frameworks are supposed to ensure that their technologies do not contribute to harm. But their silence on these pressing concerns speaks volumes, undermining trust in their supposed dedication to these principles and casting doubt on their sincerity.

Unanswered Letters, Unanswered Accountability

When we sent letters to Amazon and Google, it was with direct, actionable questions about their involvement in Project Nimbus. We asked for transparency about their contracts, clients, and risk assessments. We called for evidence that due diligence had been conducted and demanded explanations of the steps taken to prevent their technologies from facilitating abuse.

Our core demands were straightforward and tied directly to the company’s commitments:

  • Disclose the scope of their involvement in Project Nimbus.
  • Provide evidence of risk assessments tied to this project.
  • Explain how they are addressing credible reports of misuse.

Despite these reasonable and urgent requests, which are tied directly to the companies’ stated legal and ethical commitments, both companies have remained silent, and their silence isn’t just an insufficient response—it’s an alarming one.

Why Transparency Cannot Wait

Transparency is not a luxury when human rights are at risk—it’s an ethical and legal obligation. For both of these companies, it’s an obligation they have promised to the rest of us. For global companies that wield immense power, silence in the face of abuse is inexcusable.

The Fight for Accountability

EFF is making these letters public to highlight the human rights obligations Amazon and Google have undertaken and to raise reasonable questions they should answer in light of public reports about the misuse of their technologies in the Occupied Palestinian Territories. We aren’t the first ones to raise concerns, but, having raised these questions publicly, and now having given the companies a chance to clarify, we are increasingly concerned about their complicity.   

Google and Amazon have promised all of us—their customers and noncustomers alike—that they would take steps to ensure that their technologies support a future where technology empowers rather than oppresses. It’s increasingly clear that those promises are being ignored, if not entirely broken. EFF will continue to push for transparency and accountability.

One Down, Many to Go with Pre-Installed Malware on Android

27 novembre 2024 à 17:56

Last year, we investigated a Dragon Touch children’s tablet (KidzPad Y88X 10) and confirmed that it was linked to a string of fully compromised Android TV Boxes that also had multiple reports of malware, adware, and a sketchy firmware update channel. Since then, Google has taken the (now former) tablet distributor off of their list of Play Protect certified phones and tablets. The burden of catching this type of threat should not be placed on the consumer. Due diligence by manufacturers, distributors, and resellers is the only way to tackle this issue of pre-installed compromised devices making their way into the hands of unknowing customers. But in order to mitigate this issue, regulation and transparency need to be a part of the strategy. 

As of October, Dragon Touch is not selling any tablets on their website anymore. However, there is lingering inventory still out there in places like Amazon and Newegg. There are storefronts that exist only on reseller sites for better customer reach, but considering Dragon Touch also wiped their blog of any mention of their tablets, we assume a little more than a strategy shift happened here.

We wrote a guide to help parents set up their kids’ Android devices safely, but it’s difficult to choose which device to purchase in the first place. Advising people to simply buy a more expensive iPad or Amazon Fire tablet doesn’t change the fact that people are going to purchase low-budget devices. Lower-budget devices could be just as reputable if the ecosystem provided a path for better accountability.

Who is Responsible?

Some tools for consumer education are in development, like the FCC’s newly developed, voluntary Cyber Trust Mark. This label aims to inform consumers of an IoT device’s capabilities and to guarantee that minimum security standards were met. But placing the burden on consumers to check for pre-installed malware is absolutely ridiculous. Responsibility for catching this kind of threat should fall to regulators, manufacturers, distributors, and resellers.

More often than not, you can search for low-budget Android devices on retailers like Amazon or Newegg and find storefront pages with little transparency about who runs the store or whether the devices come from a reputable distributor. This is true for more than just Android devices, but considering how many products are created for and with the Android ecosystem, working on this problem could mean better security for thousands of products.

Yes, it is difficult to track hundreds to thousands of distributors and all of their products. It is hard to keep up with rapidly developing threats in the supply chain. You can’t possibly know of every threat out there.

With all due respect to giant resellers, especially the multi-billion-dollar ones: tough luck. This is what you inherit when you want to “sell everything.” You also inherit the responsibility and risk of each market you encroach on or supplant.

Possible Remedy: Firmware Transparency

Thankfully, there is hope on the horizon and tools exist to monitor compromised firmware.

Last year, Google presented Android Binary Transparency in response to pre-installed malware. It would help track compromised firmware using two components:

  • An append-only log of firmware information that is immutable, globally observable, consistent, and auditable, with these properties assured cryptographically.
  • A network of participants that invest in witnesses, log health, and standardization.

Google is not the first to think of this concept; it largely draws lessons from the success of Certificate Transparency. Still, better support for Android images directly from the Android ecosystem would definitely help. It would give manufacturers and developers who build on the Android Open Source Project (AOSP) a transparent path to being just as respected as higher-priced brands.
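To make the idea concrete, here is a minimal Python sketch of an append-only Merkle log of firmware digests with inclusion proofs. It is a toy under simplifying assumptions—no signed tree heads, consistency proofs, or witness network, and a simplified tree construction—and is not how Android Binary Transparency, Certificate Transparency, or Trillian is actually implemented.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class ToyFirmwareLog:
    """Toy append-only log of firmware image digests (illustrative only)."""

    def __init__(self):
        self.leaves: list[bytes] = []  # leaf hashes, in append order

    def append(self, firmware_image: bytes) -> int:
        """Record an image; the log only ever grows."""
        self.leaves.append(h(b"\x00" + firmware_image))  # leaf prefix
        return len(self.leaves) - 1

    def root(self) -> bytes:
        """Merkle root anyone can recompute from the published leaves."""
        level = self.leaves or [h(b"")]
        while len(level) > 1:
            if len(level) % 2:
                level = level + [level[-1]]  # duplicate last node if odd
            level = [h(b"\x01" + level[i] + level[i + 1])  # node prefix
                     for i in range(0, len(level), 2)]
        return level[0]

    def inclusion_proof(self, index: int) -> list[bytes]:
        """Sibling hashes showing that leaf `index` is in the tree."""
        proof, level, i = [], list(self.leaves), index
        while len(level) > 1:
            if len(level) % 2:
                level = level + [level[-1]]
            proof.append(level[i ^ 1])  # neighbour in the pair
            level = [h(b"\x01" + level[j] + level[j + 1])
                     for j in range(0, len(level), 2)]
            i //= 2
        return proof

def verify_inclusion(firmware_image: bytes, index: int,
                     proof: list[bytes], root: bytes) -> bool:
    """Check that an image is recorded in a log with the given root."""
    node = h(b"\x00" + firmware_image)
    for sibling in proof:
        node = h(b"\x01" + node + sibling) if index % 2 == 0 \
            else h(b"\x01" + sibling + node)
        index //= 2
    return node == root
```

In a scheme like this, a reseller or regulator could recompute the root from the published leaves and check that a device’s shipped firmware appears in the log; quietly swapping in a different image would change the root and be detectable.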

We love open source here at EFF and would like to keep seeing innovation and availability in devices that aren’t necessarily created by bigger, more expensive names. But there needs to be an accountable ecosystem for these products so that pre-installed malware can be detected earlier and doesn’t land in consumers’ hands so easily. Right now you can verify your Pixel device if you have a little technical skill. We would like verification to be done by regulators and/or distributors instead of asking consumers to crack open their command lines and verify devices themselves.
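For a sense of what that command-line burden looks like, here is a minimal sketch assuming a manufacturer simply publishes a SHA-256 digest for each factory image. The actual Pixel workflow checks images against Google’s transparency log rather than a flat digest list, and the filename and digest below are placeholders.

```python
import hashlib
import sys

# Hypothetical published digest for a specific factory image (placeholder).
PUBLISHED_SHA256 = "replace-with-the-digest-published-by-the-manufacturer"

def image_digest(path: str) -> str:
    """Compute the SHA-256 of a downloaded firmware image, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "factory-image.zip"
    if image_digest(path) == PUBLISHED_SHA256:
        print("Image matches the published digest.")
    else:
        print("MISMATCH: do not flash this image.")
```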

It would be ideal to see existing programs like Play Protect certification run a log like this on open-source log implementations such as Trillian. That way, security researchers, resellers, and regulating bodies could begin to monitor and query information on different Android Original Equipment Manufacturers (OEMs).

Tools to verify firmware exist, but right now this ecosystem is a wishlist of sorts. At EFF, we like to imagine what could be better. While a hosted, comprehensive log of Android OEMs doesn’t currently exist, the tools to create one do. Some early participants in accountability in the Android realm include F-Droid’s Android SDK Transparency Log and the Guardian Project’s (Tor) Binary Transparency Log.

Time would be better spent solving this problem systemically than researching whether every new electronic evil rectangle or IoT device has malware.

A complementary solution to binary transparency is the Software Bill of Materials (SBOM). Think of this as a “list of ingredients” that makes up a piece of software. This is another idea that isn’t very new but has gathered more institutional and government support. The components listed in an SBOM can highlight issues or vulnerabilities reported for particular parts of a piece of software. Without binary transparency, though, researchers, verifiers, auditors, and others could still be left attempting to extract firmware from devices whose images haven’t been listed. If manufacturers readily provided these images, SBOMs could be generated more easily, helping create a less opaque market for electronics, low budget or not.
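As an illustration of the “list of ingredients” idea, here is a small sketch that emits an SBOM-style document. The field names loosely follow the CycloneDX JSON format, but the component names, versions, and supplier are hypothetical and the output is not a validated SBOM.

```python
import json

# Minimal, illustrative SBOM for a low-budget tablet's firmware stack.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "firmware",
            "name": "example-tablet-bootloader",  # hypothetical component
            "version": "2.1.0",
            "hashes": [{"alg": "SHA-256", "content": "..."}],  # digest elided
        },
        {
            "type": "application",
            "name": "preinstalled-launcher",  # hypothetical component
            "version": "5.4",
            "supplier": {"name": "Example OEM"},
        },
    ],
}

# Print the SBOM so a researcher or auditor can diff it against known
# vulnerability reports for the listed components.
print(json.dumps(sbom, indent=2))
```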

We are glad to see some movement from last year’s investigations, right in time for Black Friday. More can be done, and we hope to see not only devices with shady components taken down more swiftly when reported, but also better support for proactive detection. Regardless of how much someone can spend, everyone deserves a safe, secure device that doesn’t have malware crammed into it.

Tell the Senate: Don’t Weaponize the Treasury Department Against Nonprofits

By: Jason Kelley
27 November 2024 at 14:04

Last week the House of Representatives passed a dangerous bill that would allow the Secretary of the Treasury to strip a U.S. nonprofit of its tax-exempt status. If it passes the Senate and is signed into law, H.R. 9495 would give broad and easily abused new powers to the executive branch. Nonprofits would not have a meaningful opportunity to defend themselves and could be targeted without the government disclosing the reasons or evidence for its decision.

This bill is an existential threat to nonprofits of all stripes. Future administrations could weaponize the powers in this bill to target nonprofits on either end of the political spectrum. Even if they are not targeted, the threat alone could chill the activities of some nonprofit organizations.

The bill’s authors have combined this attack on nonprofits, originally written as H.R. 6408, with other legislation that would prevent the IRS from imposing fines and penalties on hostages while they are held abroad. These are separate matters. Congress should separate these two bills to allow a meaningful vote on this dangerous expansion of executive power. No administration should be given this much power to target nonprofits without due process. 

tell your senator

Protect nonprofits

Over 350 civil liberties, religious, reproductive health, immigrant rights, human rights, racial justice, LGBTQ+, environmental, and educational organizations signed a letter opposing the bill as written. Now, we need your help. Tell the Senate not to pass H.R. 9495, the so-called “Stop Terror-Financing and Tax Penalties on American Hostages Act.”

EFF Tells the Second Circuit a Second Time That Electronic Device Searches at the Border Require a Warrant

By: Sophia Cope
26 November 2024 at 15:53

EFF, along with ACLU and the New York Civil Liberties Union, filed a second amicus brief in the U.S. Court of Appeals for the Second Circuit urging the court to require a warrant for border searches of electronic devices, an argument EFF has been making in the courts and Congress for nearly a decade.

The case, U.S. v. Smith, involved a traveler who was stopped at Newark airport after returning from a trip to Jamaica. He was detained by border officers at the behest of the FBI and his cell phone was forensically searched. He had been under investigation for his involvement in a conspiracy to control the New York area emergency mitigation services (“EMS”) industry, which included (among other things) insurance fraud and extortion. He was subsequently prosecuted and sought to have the evidence from his cell phone thrown out of court.

As we wrote about last year, the district court made history in holding that border searches of cell phones require a warrant and therefore warrantless device searches at the border violate the Fourth Amendment. However, the judge allowed the evidence to be used in Mr. Smith’s prosecution because, the judge concluded, the officers had a “good faith” belief that they were legally permitted to search his phone without a warrant.

The number of warrantless device searches at the border and the significant invasion of privacy they represent is only increasing. In Fiscal Year 2023, U.S. Customs and Border Protection (CBP) conducted 41,767 device searches.

The Supreme Court has recognized for a century a border search exception to the Fourth Amendment’s warrant requirement, allowing not only warrantless but also often suspicionless “routine” searches of luggage, vehicles, and other items crossing the border.

The primary justification for the border search exception has been to find—in the items being searched—goods smuggled to avoid paying duties (i.e., taxes) and contraband such as drugs, weapons, and other prohibited items, thereby blocking their entry into the country.

In our brief, we argue that the U.S. Supreme Court’s balancing test in Riley v. California (2014) should govern the analysis here—and that the district court was correct in applying Riley. In that case, the Supreme Court weighed the government’s interests in warrantless and suspicionless access to cell phone data following an arrest against an arrestee’s privacy interests in the depth and breadth of personal information stored on a cell phone. The Supreme Court concluded that the search-incident-to-arrest warrant exception does not apply, and that police need to get a warrant to search an arrestee’s phone.

Travelers’ privacy interests in their cell phones and laptops are, of course, the same as those considered in Riley. Modern devices, a decade later, contain even more data points that together reveal the most personal aspects of our lives, including political affiliations, religious beliefs and practices, sexual and romantic affinities, financial status, health conditions, and family and professional associations.

In considering the government’s interests in warrantless access to digital data at the border, Riley requires analyzing how closely such searches hew to the original purpose of the warrant exception—preventing the entry of prohibited goods themselves via the items being searched. We argue that the government’s interests are weak in seeking unfettered access to travelers’ electronic devices.

First, physical contraband (like drugs) can’t be found in digital data.

Second, digital contraband (such as child pornography) can’t be prevented from entering the country through a warrantless search of a device at the border because it’s likely, given the nature of cloud technology and how internet-connected devices work, that identical copies of the files are already in the country on servers accessible via the internet. As the Smith court stated, “Stopping the cell phone from entering the country would not … mean stopping the data contained on it from entering the country” because any data that can be found on a cell phone—even digital contraband—“very likely does exist not just on the phone device itself, but also on faraway computer servers potentially located within the country.”

Finally, searching devices for evidence of contraband smuggling (for example, text messages revealing the logistics of an illegal import scheme) and other evidence for general law enforcement (i.e., investigating non-border-related domestic crimes, as was the case of the FBI investigating Mr. Smith’s involvement in the EMS conspiracy) are too “untethered” from the original purpose of the border search exception, which is to find prohibited items themselves and not evidence to support a criminal prosecution.

If the Second Circuit is not inclined to require a warrant for electronic device searches at the border, we also argue that such a search—whether manual or forensic—should be justified only by reasonable suspicion that the device contains digital contraband and be limited in scope to looking for digital contraband. This extends the Ninth Circuit’s rule from U.S. v. Cano (2019) in which the court held that only forensic device searches at the border require reasonable suspicion that the device contains digital contraband, while manual searches may be conducted without suspicion. But the Cano court also held that all searches must be limited in scope to looking for digital contraband (for example, call logs are off limits because they can’t contain digital contraband in the form of photos or files).

In our brief, we also highlighted two other district courts within the Second Circuit that required a warrant for border device searches: U.S. v. Sultanov (2024) and U.S. v. Fox (2024). We plan to file briefs in their appeals, as well. Earlier this month, we filed a brief in another Second Circuit border search case, U.S. v. Kamaldoss. We hope that the Second Circuit will rise to the occasion in one of these cases and be the first circuit to fully protect travelers’ Fourth Amendment rights at the border.

Looking for the Answer to the Question, "Do I Really Own the Digital Media I Paid For?"

26 November 2024 at 12:58

Sure, buying your favorite video game, movie, or album online is super convenient. I personally love being able to pre-order a game and play it the night of release, without needing to go to a store. 

But something you may not have thought about before making your purchase is the difference between owning a physical copy of that media and owning a digital one. Unfortunately, there are quite a few rights you give up by purchasing a digital copy of your favorite game, movie, or album! On our new site, Digital Rights Bytes, we outline the differences between owning physical and digital media, and why we need to break down that barrier.

Digital Rights Bytes answers this and other common questions about technology that may be getting on your nerves, with short videos featuring adorable animals. You can also read up on what EFF is doing to ensure you actually own the digital media you pay for, and how you can take action, too.

Got other questions you’d like us to answer in the future? Let us know on your favorite social platform using the hashtag #DigitalRightsBytes. 

Organizing for Digital Rights in the Pacific Northwest

21 November 2024 at 19:19

Recently I traveled to Portland, Oregon to speak at the PDX People’s Digital Safety Fair, meet up with five groups in the Electronic Frontier Alliance, and attend BSides PDX 2024. Portland’s first-ever Digital Safety Fair was a success, and five of our six EFA organizations in the area participated: Personal Telco Project, Encode Justice Oregon, PDX Privacy, TA3M Portland, and Community Broadband PDX. I was able to reaffirm our support for these organizations and table with most of them as they met local people interested in digital rights. We distributed EFF toolkits as a resource and made sure EFA brochures and stickers had a presence on all their tables. A few of these organizations were also present at BSides PDX, and it was great to see them leading in the local infosec and cybersecurity community.

PDX Privacy’s mission is to bring about transparency and control in the acquisition and use of surveillance systems in the Portland Metro area, whether personal data is captured by the government or by commercial entities. Transparency is essential to ensure privacy protections, community control, fairness, and respect for civil rights.

TA3M Portland is an informal meetup designed to connect software creators and activists who are interested in censorship, surveillance, and open technology.

The Oregon Chapter of Encode Justice, the world’s first and largest youth movement for human-centered artificial intelligence, works to mobilize policymakers and the public for guardrails to ensure AI fulfills its transformative potential. Its mission is to ensure we encode justice and safety into the technologies we build.

(l to r) Pictured with PDX Privacy’s Seth, Boaz, and new president Nate. Pictured with Chris Bushick, legendary Portland privacy advocate of TA3M Portland. Pictured with the leaders of Encode Justice Oregon.

There's growing momentum in the Seattle and Portland areas

Community Broadband PDX focuses on expanding Portland’s existing dark-fiber broadband network to all residents, creating an open-source model where the city owns the fiber and it is controlled by local nonprofits and cooperatives, not large ISPs.

Personal Telco is dedicated to the idea that users have a central role in how their communications networks are operated. The group does this by building its own networks that it shares with its communities, and by helping to educate others in how they can do the same.

At the People’s Digital Safety Fair I spoke in the main room on the campaign to bring high-speed broadband to Portland, which is led by Community Broadband PDX and the Personal Telco Project. I made a direct call to action for those in attendance to join the campaign. My talk culminated with, “What kind of ACTivist would I be if I didn’t implore you to take an ACTion? Everybody pull out your phones.” Then I guided the room to Community Broadband PDX’s website and its ‘Join Us’ page, where people signed up on the spot to join the campaign, spread the word with their neighbors, and get organized by the Community Broadband PDX team. You can reach out to them at cbbpdx.org and personaltelco.net. You can get in touch with all the groups mentioned in this blog through the hyperlinks above, or use our EFA allies directory to see who’s organizing in your area.

(l to r) BSidesPDX 2024 swag and stickers. A photo of me speaking at the People’s Digital Safety Fair on broadband access in PDX. Pictured with Jennifer Redman, president of Community Broadband PDX and former broadband administrator for the city of Portland, OR. A picture of the Personal Telco table with printed EFF toolkits and EFA brochures on hand. Pictured with Ted, Russell Senior, and Drew of the Personal Telco Project. Lastly, it’s always great to see a member and active supporter of EFF interacting with one of our EFA groups.

It’s very exciting to see what members of the EFA are doing in Portland! I also went up to Seattle and met with a few organizations, including one now in talks to join the EFA. With new EFA friends in Seattle and existing EFA relationships fortified, I’m excited to help grow our presence and support in the Pacific Northwest and to have new allies with experience in legislative engagement. It’s great to see groups in the Pacific Northwest engaged and expanding their advocacy efforts, and even greater to stand by them as they do!

Electronic Frontier Alliance members get support from a community of like-minded grassroots organizers from across the US. If your group defends our digital rights, consider joining today. https://efa.eff.org
