
Today The UK Parliament Undermined The Privacy, Security, And Freedom Of All Internet Users 

By: Joe Mullin
19 September 2023 at 15:50

The U.K. Parliament has passed the Online Safety Bill (OSB), which says it will make the U.K. “the safest place” in the world to be online. In reality, the OSB will lead to a much more censored, locked-down internet for British users. The bill could empower the government to undermine not just the privacy and security of U.K. residents, but of internet users worldwide.

A Backdoor That Undermines Encryption

A clause of the bill allows Ofcom, the British telecom regulator, to serve a notice requiring tech companies to scan their users–all of them–for child abuse content. This would affect even messages and files that are end-to-end encrypted to protect user privacy. As enacted, the OSB allows the government to force companies to build technology that can scan regardless of encryption–in other words, build a backdoor.

These types of client-side scanning systems amount to “Bugs in Our Pockets,” and a group of leading computer security experts has reached the same conclusion as EFF–they undermine privacy and security for everyone. That’s why EFF has strongly opposed the OSB for years.

It’s a basic human right to have a private conversation. This right is even more important for the most vulnerable people. If the U.K. uses its new powers to scan people’s data, lawmakers will damage the security people need to protect themselves from harassers, data thieves, authoritarian governments, and others. Paradoxically, U.K. lawmakers have created these new risks in the name of online safety. 

The U.K. government has made some recent statements indicating that it actually realizes that getting around end-to-end encryption isn’t compatible with protecting user privacy. But given the text of the law, neither the government’s private statements to tech companies, nor its weak public assurances, are enough to protect the human rights of British people or internet users around the world. 

Censorship and Age-Gating

Online platforms will be expected to remove content that the U.K. government views as inappropriate for children. If they don’t, they’ll face heavy penalties. The problem is, in the U.K. as in the U.S., people do not agree about what type of content is harmful for kids. Putting that decision in the hands of government regulators will lead to politicized censorship decisions. 

The OSB will also lead to harmful age-verification systems. This violates fundamental principles of anonymous and simple access that have existed since the beginning of the Internet. You shouldn’t have to show your ID to get online. Age-gating systems meant to keep out kids invariably lead to adults losing their rights to private speech and anonymous speech, which is sometimes necessary.

In the coming months, we’ll be watching what type of regulations the U.K. government publishes describing how it will use these new powers to regulate the internet. If the regulators claim the right to require the creation of dangerous backdoors in encrypted services, we expect encrypted messaging services to keep their promises and withdraw from the U.K. rather than compromise their ability to protect their users.

EFF at FIFAfrica 2023

25 September 2023 at 15:42

EFF is excited to be in Dar es Salaam, Tanzania for this year's iteration of the Forum on Internet Freedom in Africa (FIFAfrica), organized by CIPESA (Collaboration on International ICT Policy for East and Southern Africa) between 27-29 September 2023.

FIFAfrica is a landmark event in the region that convenes an array of stakeholders from across internet governance and online rights to discuss and collaborate on opportunities for advancing privacy, protecting free expression, and enhancing the free flow of information online. FIFAfrica also offers a space to identify new and important digital rights issues, as well as to explore avenues for engaging with these debates across national, regional, and global spaces.

We hope you have an opportunity to connect with us at the panels listed below. In addition to these, EFF will be attending many other events at FIFAfrica. We look forward to meeting you there!

THURSDAY 28 SEPTEMBER 

Combatting Disinformation for Democracy 

2pm to 3:30pm local time 
Location: Hyatt Hotel - Kibo 

Hosted by: CIPESA

Speakers

  • Paige Collings, Senior Speech and Privacy Activist, Electronic Frontier Foundation 
  • Nompilo Simanje, Africa Advocacy and Partnerships Lead, International Press Institute 
  • Obioma Okonkwo, Head, Legal Department, Media Rights Agenda
  • Daniel O’Maley, Senior Digital Governance Specialist, Center for International Media Assistance 

In an age of falsehoods, facts, and freedoms marked by the rapid spread of information and the proliferation of digital platforms, the battle against disinformation has never been more critical. This session brings together experts and practitioners at the forefront of this fight, exploring the pivotal roles that media, fact checkers, and technology play in upholding truth and combating the spread of false narratives. 

This panel will delve into the multifaceted challenges posed by disinformation campaigns, examining their impact on societies, politics, and public discourse. Through an engaging discussion, the session will spotlight innovative strategies, cutting-edge technologies, and collaborative initiatives employed by media organizations, tech companies, and civil society to safeguard the integrity of information.

FRIDAY 29 SEPTEMBER

Platform Accountability in Africa: Content Moderation and Political Transitions

11am to 12:30pm local time
Location: Hyatt Hotel - Kibo 

Hosted by: Meta Oversight Board, CIPESA, Open Society Foundations 

Speakers

  • Paige Collings, Senior Speech and Privacy Activist, Electronic Frontier Foundation 
  • Nerima Wako, Executive Director, SIASA PLACE
  • Abigail Bridgman, Deputy Vice President, Content Review and Policy, Meta Oversight Board 
  • Afia Asantewaa Asare-Kyei, Member, Meta Oversight Board

Social media platforms are often criticized for failing to address significant and seemingly preventable harms stemming from online content. This is especially true during volatile political transitions, where disinformation, incitement to violence, and hate speech on the basis of gender, religion, ethnicity, and other characteristics are highly associated with increased real-life harms.

This session will discuss best practices for combating harmful online content through the lens of the most urgent and credible threats to political transitions on the African continent. With critical general, presidential, and legislative elections fast approaching, as well as the looming threat of violent political transitions, the panelists will highlight current trends of online content, the impact of harmful content, and chart a path forward for the different stakeholders. The session will also assess the various roles that different institutions, stakeholders, and experts can play to strike the balance between addressing harms and respecting the human rights of users under such a context.

EFF, ACLU and 59 Other Organizations Demand Congress Protect Digital Privacy and Free Speech

26 September 2023 at 16:50

Earlier this week, EFF joined the ACLU and 59 partner organizations to send a letter to Senate Majority Leader Chuck Schumer urging the Senate to reject the STOP CSAM Act. This bill threatens encrypted communications and free speech online, and would actively harm LGBTQ+ people, people seeking reproductive care, and many others. EFF has consistently opposed this legislation. This bill has unacceptable consequences for free speech, privacy, and security that will affect how we connect, communicate, and organize.

TAKE ACTION

TELL CONGRESS NOT TO OUTLAW ENCRYPTED APPS

The STOP CSAM Act, as amended, would lead to censorship of First Amendment protected speech, including speech about reproductive health, sexual orientation and gender identity, and personal experiences related to gender, sex, and sexuality. Even today, without this bill, platforms regularly remove content that has vague ties to sex or sexuality for fear of liability. This would only increase if STOP CSAM incentivized apps and websites to exercise a heavier hand at content moderation.

If enacted, the STOP CSAM Act will also make it more difficult to communicate using end-to-end encryption. End-to-end encrypted communications cannot be read by anyone but the sender or recipient — that means authoritarian governments, malicious third parties, and the platforms themselves can’t read user messages. Offering encrypted services could open apps and websites up to liability, because a court could find that end-to-end encryption services are likely to be used for CSAM, and that merely offering them is reckless.
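
To make that point concrete, here is a minimal sketch of end-to-end encryption, using the PyNaCl library purely as an illustration (the keys and message are hypothetical): only the endpoints hold the private keys, so the platform relaying the message sees nothing but ciphertext.

```python
# Minimal illustration (assuming the PyNaCl library is installed) of
# end-to-end encryption: only the sender's and recipient's private keys
# can recover the plaintext; the relaying platform sees only ciphertext.
from nacl.public import PrivateKey, Box

sender = PrivateKey.generate()
recipient = PrivateKey.generate()

# The sender encrypts with their own private key and the recipient's public key.
ciphertext = Box(sender, recipient.public_key).encrypt(b"private message")

# The service carrying `ciphertext` cannot decrypt it without one of the
# private keys, which never leave the users' devices.
plaintext = Box(recipient, sender.public_key).decrypt(ciphertext)
assert plaintext == b"private message"
```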

Congress should not pass this law, which will undermine security and free speech online. Existing law already requires online service providers who have actual knowledge of CSAM on their platforms to report that content to the National Center for Missing and Exploited Children (NCMEC), a quasi-government entity that works closely with law enforcement agencies. Congress and the FTC have many tools already at their disposal to tackle CSAM, some of which are not used.

EFF's Comment to the Meta Oversight Board on Polish Anti-Trans Facebook Post 

27 September 2023 at 11:33

EFF recently submitted comments in response to the Meta Oversight Board’s request for input on a Facebook post in Polish from April 2023 that targeted trans people. The Oversight Board was created by Meta in 2020 as an appellate body and has 22 members from around the world who review contested content moderation decisions made by the platform.  

Our comments address how Facebook’s automated systems failed to prioritize content for human review. From our observations—and the research of many within the digital rights community—this is a common deficiency made worse during the pandemic, when Meta decreased the number of workers moderating content on its platforms. In this instance, the content was eventually sent for human review and was still assessed to be non-violating and therefore not escalated further. Facebook kept the content online despite 11 different users reporting the content 12 times and only removed the content once the Oversight Board decided to take the case for review. 

As EFF has demonstrated, Meta has at times over-removed legal LGBTQ+ related content whilst simultaneously keeping content online that depicts hate speech toward the LGBTQ+ community. This is often because the content—as in this specific case—is not an explicit depiction of such hate speech, but rather a message that is embedded in a wider context that automated content moderation tools and inadequately trained human moderators are simply not equipped to consider. These tools do not have the ability to recognize nuance or the context of statements, and human reviewers are not provided the training to remove content that depicts hate speech beyond a basic slur. 

This incident serves as part of the growing body of evidence that Facebook’s systems are inadequate in detecting seriously harmful content, particularly that which targets marginalized and vulnerable communities. Our submission looks at the various reasons for these shortcomings and makes the case that Facebook should have removed the content—and should keep it offline.

Read the full submission in the PDF below.

EFF to D.C. Circuit: Animal Rights Activists Shouldn’t Be Censored on Government Social Media Pages Because Agency Disagrees With Their Viewpoint

By: Sophia Cope
28 September 2023 at 16:16

Intern Muhammad Essa contributed to this post.

EFF, along with the Foundation for Individual Rights and Expression (FIRE), filed a brief in the U.S. Court of Appeals for the D.C. Circuit urging the court to reverse a lower court ruling that upheld the censorship of public comments on a government agency’s social media pages. The district court’s decision is problematic because it undermines our right to freely express opinions on issues of public importance using a modern and accessible way to communicate with government representatives.

People for the Ethical Treatment of Animals (PETA) sued the National Institutes of Health (NIH), arguing that NIH blocks their comments against animal testing in scientific research on the agency’s Facebook and Instagram pages, thus violating the First Amendment. NIH provides funding for research that involves testing on animals from rodents to primates.

NIH claims to apply a general rule prohibiting public comments that are “off topic” to the agency’s social media posts—yet the agency implements this rule by employing keyword filters that include words such as cruelty, revolting, tormenting, torture, hurt, kill, and stop. These words are commonly found in comments that express a viewpoint that is against animal testing and sympathetic to animal rights.

First Amendment law makes it clear that when a government agency opens a forum for public participation, such as the interactive spaces of the agency’s social media pages, it is prohibited from censoring a particular viewpoint in that forum. Any speech restrictions that it may apply must be viewpoint-neutral, meaning that the restrictions should apply equally to all viewpoints related to a topic, not just to the viewpoint that the agency disagrees with.

EFF’s brief argues that courts must approach with skepticism a government agency’s claim that its “off topic” speech restriction is viewpoint-neutral and is only intended to exclude irrelevant comments. How such a rule is implemented can reveal that it is in fact a guise for unconstitutional viewpoint discrimination. That is the case here, and the district court erred in ruling for the government.

For example, EFF’s brief argues that NIH’s automated keyword filters are imprecise—they are incapable of accurately implementing an “off topic” rule because they are incapable of understanding context and nuance, which is necessary when comparing a comment to a post. Also, NIH’s keyword filters and the agency’s manual enforcement of the “off topic” rule are highly underinclusive—that is, other people's comments that are “off topic” to a post are often allowed to remain on the agency’s social media pages. Yet PETA’s comments against animal testing are reliably censored.
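
As a rough illustration of that imprecision (a hypothetical sketch, not NIH's actual filtering system), a keyword filter has no way to tell whether a flagged word is on topic or off topic:

```python
# Hypothetical sketch of a keyword filter like the one described above.
# A bare word list is both imprecise and underinclusive: it hides an
# on-topic comment about animal research while letting unrelated spam through.
BLOCKED_WORDS = {"cruelty", "revolting", "tormenting", "torture", "hurt", "kill", "stop"}

def is_hidden(comment: str) -> bool:
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return bool(words & BLOCKED_WORDS)

on_topic = "Please stop funding studies that hurt primates."  # responds to the post
off_topic = "Check out my channel for giveaway codes!"        # unrelated spam

print(is_hidden(on_topic))   # True  -> the anti-animal-testing viewpoint is filtered
print(is_hidden(off_topic))  # False -> genuinely off-topic content stays up
```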

Imprecise and underinclusive enforcement of the “off topic” rule suggests that NIH’s rule is not viewpoint-neutral but is really a means to block PETA activists from engaging with the agency online.

EFF’s brief urges the D.C. Circuit to reject the district court’s erroneous holding and rule in favor of the plaintiffs. This would protect everyone’s right to express their opinions freely online. The free exchange of opinions informs public policy and is a crucial characteristic of a democratic society. A genuine representative government must not be afraid of public criticism.

Get Real, Congress: Censoring Search Results or Recommendations Is Still Censorship

By: Jason Kelley
28 September 2023 at 18:29

Updated October 20, 2023: Removed two sentences for clarity. 

Are you a young person fighting back against bad bills like KOSA? Become an EFF member at a new, discounted Neon membership level specifically for you--stickers included! 

For the past two years, Congress has been trying to revise the Kids Online Safety Act (KOSA) to address criticisms from EFF, human and digital rights organizations, LGBTQ groups, and others, that the core provisions of the bill will censor the internet for everyone and harm young people. All of those changes fail to solve KOSA’s inherent censorship problem: As long as the “duty of care” remains in the bill, it will still force platforms to censor perfectly legal content. (You can read our analyses here and here.)

Despite never addressing this central problem, some members of Congress are convinced that a new change will avoid censoring the internet: KOSA’s liability is now theoretically triggered only for content that is recommended to users under 18, rather than content that they specifically search for. But that’s still censorship—and it fundamentally misunderstands how search works online. 

Congress should be smart enough to recognize this bait-and-switch fails to solve KOSA’s many faults.

As a reminder, under KOSA, a platform would be liable for not “acting in the best interests of a [minor] user.” To do this, a platform would need to “tak[e] reasonable measures in its design and operation of products and services to prevent and mitigate” a long list of societal ills, including anxiety, depression, eating disorders, substance use disorders, physical violence, online bullying and harassment, sexual exploitation and abuse, and suicidal behaviors. As we have said, this will be used to censor what young people and adults can see on these platforms. The bill’s coauthors agree, writing that KOSA “will make platforms legally responsible for preventing and mitigating harms to young people online, such as content promoting suicide, eating disorders, substance abuse, bullying, and sexual exploitation.”

Our concern, and the concern of others, is that this bill will be used to censor legal information and restrict the ability for minors to access it, while adding age verification requirements that will push adults off the platforms as well. Additionally, enforcement provisions in KOSA give power to state attorneys general to decide what is harmful to minors, a recipe for disaster that will exacerbate efforts already underway to restrict access to information online (and offline). The result is that platforms will likely feel pressured to remove enormous amounts of information to protect themselves from KOSA’s crushing liability—even if that information is not harmful.

The ‘Limitation’ section of the bill is intended to clarify that KOSA creates liability only for content that the platform recommends. In our reading, this is meant to refer to the content that a platform shows a user that doesn’t come from an account the user follows, is not content the user searches for, and is not content that the user deliberately visits (such as by clicking a URL). In full, the ‘Limitation’ section states that the law is not meant to prevent or preclude “any minor from deliberately and independently searching for, or specifically requesting, content,” nor should it prevent the “platform or individuals on the platform from providing resources for the prevention or mitigation of suicidal behaviors, substance use, and other harms, including evidence-informed information and clinical resources.” 

In layman’s terms, minors will supposedly still have the freedom to follow accounts, search for, and request any type of content, but platforms won’t have the freedom to share some types of content to them. Again, that fundamentally misunderstands how social media works—and it’s still censorship. 

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

Courts Have Agreed: Recommendations are Protected

If, as the bill’s authors write, they want to hold platforms accountable for “knowingly driving toxic, addicting, and dangerous content” to young people, why stop at search—which can also show toxic, addicting, or dangerous content? We think this section was added for two reasons.

First, members of Congress have attacked social media platforms’ use of automated tools to present content for years, claiming that it causes any number of issues ranging from political strife to mental health problems. The evidence supporting those claims is unclear (and the reverse may be true). 

Second, and perhaps more importantly, the authors of the bill likely believe pinning liability on recommendations will allow them to square a circle and get away with censorship while complying with the First Amendment. It will not.

Platforms’ ability to “filter, screen, allow, or disallow content;” “pick [and] choose” content; and make decisions about how to “display,” “organize,” or “reorganize” content is protected by 47 U.S.C. § 230 (“Section 230”), and the First Amendment. (We have written about this in various briefs, including this one.) This “Limitation” in KOSA doesn’t make the bill any less censorious. 

Search Results Are Recommendations

Practically speaking, there is also no clear distinction between “recommendations” and “search results.” The coauthors of KOSA seem to think that content which is shown as a result of a search is not a recommendation by the platform. But of course it is. Accuracy and relevance in search results are algorithmically generated, and any modern search method uses an automated process to determine the search results and the order in which they are presented, which it then recommends to the user. 
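
A toy example (hypothetical scoring, not any platform's actual algorithm) makes this concrete: the same automated ranking step that assembles a recommended feed also assembles a results page, and a search query is just one more input to it.

```python
# Hypothetical sketch: "search" and "recommendation" share the same machinery.
# The platform scores items automatically and returns them in ranked order;
# a search query is just one more signal fed into that scoring.
posts = [
    "new study on teen mental health",
    "transgender healthcare resources in your state",
    "celebrity gossip roundup",
]

def score(post: str, query: str = "") -> float:
    # Toy relevance score: word overlap with the query plus a stand-in
    # "engagement" signal (here, simply the length of the post).
    overlap = len(set(post.split()) & set(query.split()))
    return overlap * 10 + len(post) / 100

def ranked(query: str = "") -> list[str]:
    return sorted(posts, key=lambda p: score(p, query), reverse=True)

print(ranked())                          # no query: a "recommended" feed
print(ranked("transgender healthcare"))  # query: the same ranking step, steered by the query
```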

KOSA’s authors also assume, incorrectly, that content on social media can easily be organized, tagged, or described in the first place, such that it can be shown when someone searches for it, but not otherwise. But content moderation at infinite scale will always fail, in part because whether content fits into a specific bucket is often subjective in the first place.

For example: let’s assume that using KOSA, an attorney general in a state has made it clear that a platform that recommends information related to transgender healthcare will be sued for increasing the risk of suicide in young people. (Because trans people are at a higher risk of suicide, this is one of many ways that we expect an attorney general could torture the facts to censor content—by claiming that correlation is causation.) 

If a young person in that state searches social media for “transgender healthcare,” does this mean that the platform can or cannot show them any content about “transgender healthcare” as a result? How can a platform know which content is about transgender healthcare, much less whether the content matches the attorney general’s views on the subject, or whether they have to abide by that interpretation in search results? What if the user searches for “banned healthcare?” What if they search for “trans controversy?” (Most people don’t search for the exact name of the piece of content they want to find, and most pieces of content on social media aren’t “named” at all.) 

In this example, and in an enormous number of other cases, platforms can’t know in advance what content a person is searching for—and will, at the risk of showing something controversial that the person did not intend to find, remove it entirely—from recommendations as well as search results. If liability exists for showing it, platforms will remove users’ ability to access all content that relates to a dangerous topic rather than risk showing it in the occasional instance when they can determine, for certain, that is what the user is looking for. This blunt response will not only harm children who need access to information, but adults who also may seek the same content online.

“Nerd Harder” to Remove Content Will Never Work

Finally, as we have written before, it is impossible for platforms to know what types of content they would be liable for recommending (or showing in search results) in the first place. Because there is no definition of harmful or depressing content that doesn’t include a vast amount of protected expression, almost any content could fit into the categories that platforms would have to censor. This would include truthful news about what’s going on in the world, such as wars, gun violence, and climate change.

This Limitation section will have no meaningful effect on the censorial nature of the law. If KOSA passes, the only real option for platforms would be to institute age verification and ban minors entirely, or to remove any ‘recommendations’ and ‘search’ functions almost entirely for minors. As we’ve said repeatedly, these efforts will also impact adult users who either lack the ability to prove they are not minors or are deterred from doing so. Most smaller platforms would be pressured to ban minors entirely, while larger ones, with more money for content moderation and development, would likely block them from finding enormous swathes of content unless they have the exact URL to locate it. In that way, KOSA’s censorship would further entrench the dominant social media platforms.

Congress should be smart enough to recognize this bait-and-switch fails to solve KOSA’s many faults. We urge anyone who cares about free speech and privacy online to send a message to Congress voicing your opposition. 

TAKE ACTION

TELL CONGRESS YOU WON'T ACCEPT INTERNET CENSORSHIP

Are you a young person fighting back against bad bills like KOSA? Become an EFF member at a new, discounted Neon membership level specifically for you--stickers included! 

Is Your State’s Child Safety Law Unconstitutional? Try Comprehensive Data Privacy Instead

Comprehensive data privacy legislation is the best way to hold tech companies accountable in our surveillance age, including for harm they do to children. Well-written privacy legislation has the added benefit of being constitutional—unlike the flurry of laws that restrict content behind age verification requirements that courts have recently blocked. Such misguided laws do little to protect kids while doing much to invade everyone’s privacy and speech.

Courts have issued preliminary injunctions blocking laws in Arkansas, California, and Texas because they likely violate the First Amendment rights of all internet users. EFF has warned that such laws were bad policy and would not withstand court challenges. Nonetheless, different iterations of these child safety proposals continue to be pushed at the state and federal level.

The answer is to re-focus attention on comprehensive data privacy legislation, which would address the massive collection and processing of personal data that is the root cause of many problems online. Just as important, it is far easier to write data privacy laws that are constitutional. Laws that lock online content behind age gates can almost never withstand First Amendment scrutiny because they frustrate all internet users’ rights to access information and often impinge on people’s right to anonymity.

It Is Comparatively Easy to Write Data Privacy Laws That Are Constitutional

EFF has long pushed for strong comprehensive commercial data privacy legislation and continues to do so. Data privacy legislation has many components. But at its core, it should minimize the amount of personal data that companies process, give users certain rights to control their personal data, and allow consumers to sue when the law is violated.

EFF has argued that privacy laws pass First Amendment muster when they have a few features that ensure the law reasonably fits its purpose. First, they regulate the commercial processing of personal data. Second, they do not impermissibly restrict the truthful publication of matters of public concern. And finally, the government’s interest and law’s purpose is to protect data privacy; expand the free expression that privacy enables; and protect the security of data against insider threats, hacks, and eventual government surveillance. If so, the privacy law will be constitutional if the government shows a close fit between the law’s goals and its means.

EFF made this argument in support of the Illinois Biometric Information Privacy Act (BIPA), and a law in Maine that limits the use and disclosure of personal data collected by internet service providers. BIPA, in particular, has proved wildly important to biometric privacy. For example, it led to a settlement that prohibits the company Clearview AI from selling its biometric surveillance services to law enforcement in the state. Another settlement required Facebook to pay hundreds of millions of dollars for its policy (since repealed) of extracting faceprints from users without their consent.

Courts have agreed. Privacy laws that have been upheld under the First Amendment, or cited favorably by courts, include those that regulate biometric data, health data, credit reports, broadband usage data, phone call records, and purely private conversations.

The Supreme Court, for example, has cited the federal 1996 Health Insurance Portability and Accountability Act (HIPAA) as an example of a “coherent” privacy law, even when it struck down a state law that targeted particular speakers and viewpoints. Additionally, when evaluating the federal Wiretap Act, the Supreme Court correctly held that the law cannot be used to prevent a person from publishing legally obtained communications on matters of public concern. But it otherwise left in place the wiretap restrictions that date back to 1934, designed to protect the confidentiality of private conversations.

It Is Nearly Impossible to Write Age Verification Requirements That Are Constitutional. Just Ask Arkansas, California, and Texas

Federal courts have recently granted preliminary injunctions that block laws in Arkansas, California, and Texas from going into effect because they likely violate the First Amendment rights of all internet users. While the laws differ from each other, they all require (or strongly incentivize) age verification for all internet users.

The Arkansas law requires age verification for users of certain social media companies, which EFF strongly opposes, and bans minors from those services without parental consent. The court blocked it. The court reasoned that the age verification requirement would deter everyone from accessing constitutionally protected speech and burden anonymous speech. EFF and ACLU filed an amicus brief against this Arkansas law.

In California, a federal court recently blocked the state’s Age-Appropriate Design Code (AADC) under the First Amendment. Significantly, the AADC strongly incentivized websites to require users to verify their age. The court correctly found that age estimation is likely to “exacerbate” the problem of child security because it requires everyone “to divulge additional personal information” to verify their age. The court blocked the entire law, even some privacy provisions we’d like to see in a comprehensive privacy law if they were not intertwined with content limitations and age-gating. EFF does not agree with the court’s reasoning in its entirety because it undervalued the state’s legitimate interest in and means of protecting people’s privacy online. Nonetheless, EFF originally asked the California governor to veto this law, believing that true data privacy legislation has nothing to do with access restrictions.

The Texas law requires age verification for users of websites that post sexual material, and exclusion of minors. The law also requires warnings about sexual content that the court found unsupported by evidence. The court held both provisions are likely unconstitutional. It explained that the age verification requirement, in particular, is “constitutionally problematic because it deters adults’ access to legal sexually explicit material, far beyond the interest of protecting minors.” EFF, ACLU, and other groups filed an amicus brief against this Texas law.

Support Comprehensive Privacy Legislation That Will Stand the Test of Time

Courts will rightly continue to strike down similar age verification and content blocking laws, just as they did 20 years ago. Lawmakers can and should avoid this pre-determined fight and focus on passing laws that will have a lasting impact: strong, well-written comprehensive data privacy.

EFF Urges Second Circuit to Affirm Injunction of New York’s Dangerous Online “Hateful Conduct” Law

5 October 2023 at 19:40

EFF, along with the ACLU, urged the U.S. Court of Appeals for the Second Circuit to find unconstitutional a New York statute that compels platforms to moderate online speech falling within the state’s particular definition of “hateful conduct.”

The statute itself requires covered social media platforms to develop a mechanism that allows users to report incidents of “hateful conduct” (as defined by the state), and to publish a policy detailing how the platform will address such incidents in direct responses provided to each individual complainant. Noncompliance with the statute is enforceable through Attorney General investigations, subpoenas, and daily fines of $1000 per violation. The statute is part of a broader scheme by New York officials, including the Governor and the Attorney General, to unlawfully coerce online platforms into censoring speech that the state deems “hateful.”

The bill was rushed through the New York legislature in the aftermath of last year’s tragic mass shooting at a Buffalo, NY supermarket. At the same time, the state launched an investigation into social media platforms’ “civil or criminal liability for their role in promoting, facilitating, or providing a platform to plan or promote violence.” In the months that followed, state officials alleged that it was their perceived “lack of oversight, transparency, and accountability” over social media platforms’ content moderation policies that had caused such “dangerous and corrosive ideas to spread,” and held up this “hateful conduct” law as the regulatory solution to online hate speech. And, when the investigation into such platform liability concluded, Attorney General Letitia James called for platforms to be held accountable and threatened to push for measures that would ensure they take “reasonable steps to prevent unlawful violent criminal content from appearing on their platforms.”

EFF and ACLU filed a friend-of-the-court brief in support of the plaintiffs: Eugene Volokh, a First Amendment scholar who runs the legal blog Volokh Conspiracy, the video sharing site Rumble, and the social media site Local. In the brief we urged the court to affirm the trial court’s preliminary injunction of the law. As we have explained many times before, any government involvement in online intermediaries’ content moderation processes—regardless of the form or degree—raises serious First Amendment and broader human rights concerns.

Despite the New York officials’ seemingly good intention here, there are several problems with this law.

First, the law broadly defines “hateful conduct” as the “use of a social media network to vilify, humiliate, or incite violence against a group or a class of persons,” a definition that could encompass a broad range of speech not typically considered “hate speech.” 

Next, the bill unconstitutionally compels platforms’ speech by forcing them to replace their own editorial policies with the state’s. Social media platforms and other online intermediaries subject to this bill have a long-protected First Amendment right to curate the speech that others publish on their sites—regardless of whether they curate a lot or a little, and regardless of whether their editorial philosophy is readily discernible or consistently applied. Here, by requiring publishers to develop, publish, and enforce an editorial standard at all—much less one that must adopt the state’s view of “hateful conduct”—this statute unlawfully compels speech and chills platforms’ First Amendment-protected exercise of editorial freedom.

Finally, the thinly veiled threats from officials designed to coerce websites into adopting the state’s editorial position amount to unconstitutional coercion.

We agree that many internet users want the online platforms they use to moderate certain hateful speech; but those decisions must be made by the platforms themselves, not the government. Platforms’ editorial freedom is staunchly protected by the First Amendment; to allow government to manipulate social media curation for its own purposes threatens fundamental freedoms. Therefore, to protect our online spaces, we must strictly scrutinize all government attempts to co-opt platforms’ content moderation policies—whether by preventing moderation, as in Texas and Florida, or by compelling moderation, as New York has done here.

Internet Access Shouldn't Be a Bargaining Chip In Geopolitical Battles

We at EFF are horrified by the events transpiring in the Middle East: Hamas’ deadly attack on southern Israel last weekend and Israel’s ongoing retributive military attack and siege on Gaza. While we are not experts in military strategy or international diplomacy, we do have expertise with how human rights and civil liberties should be protected on the internet—even in times of conflict and war. 

That is why we are deeply concerned that a key part of Israel’s response has been to target telecommunications infrastructure in Gaza, including effectively shutting down the internet.

Here are a few reasons why:  

Shutting down telecommunications deprives civilians of a life-saving tool for sharing information when they need it the most. 

In wartime, being able to communicate directly with the people you trust is instrumental to personal safety and protection, and may ultimately mean the difference between life and death. But right now, the millions of people in Gaza, who are already facing a dire humanitarian crisis, are experiencing oppressive limitations on their access to the internet—stifling their ability to find out where their families are, obtain basic information about resources and any promised humanitarian aid, share information about safer border crossings, and exchange other crucial information.

The internet was built, in part, to make sure that communications like this are possible. And despite its use for spreading harmful content and misinformation, the internet is particularly imperative in moments of war and conflict when sharing and receiving real-time and up to date information is critical for survival. For example, what was previously a safe escape route may no longer be safe even a few hours later, and news printed in a broadsheet may no longer be reliable or relevant the following day.  

The internet enables this flow of information to remain active and alert to new realities. Shutting down access to internet services creates impossible obstacles for the millions of people trapped in Gaza. It is eroding access to the lifeline that millions of civilians need to stay alive.  

Shutting down telecommunications will not silence Hamas.  

We also understand the impulse to respond to Hamas’ shocking use of the internet to terrorize Israelis, including by taking over Facebook pages of people they have taken hostage to live stream and post horrific footage. We urge social media and other platforms to act quickly when such abuses occur, which they typically can already do under their respective terms of use. But the Israeli government’s reaction of shutting down all internet communications in Gaza is a wrongheaded response and one that will impact exactly the wrong people.

Hamas is sufficiently well-resourced to maneuver through any infrastructural barriers, including any internet shutdowns imposed by the Israeli government. Further, since Israel isn’t able to limit the voice of Hamas, the internet shutdown effectively allows Hamas to dominate the Palestinian narrative in the public vernacular—eliminating the voices of activists, journalists, and ordinary people documenting their realities and sharing facts in real-time.  

Shutting down telecommunications sets a dangerous precedent.  

Given the proliferation of the internet and its use in pivotal social and political moments, governments are very aware of their power to cut off that access. Shutdowns have become a blunt instrument that aid state violence and deprive free speech, and are routinely deployed by authoritarian governments that do not care about the rule of law or human rights. For example, limiting access to the internet was a vital component of the Syrian government’s repressive strategy in 2013, and Egyptian President Hosni Mubarak shut down all internet access for five days in 2011 in an effort to impair Egyptians’ ability to coordinate and communicate. As we’ve said before, access to the Internet shouldn't be a bargaining chip in geopolitical battles. Instead of protecting human rights of civilians, Israel has adopted a disproportionate tactic often used by the authoritarian governments of Iran, Russia, and Myanmar. 

Israel is a party to the International Covenant on Civil and Political Rights and has long claimed to be committed to upholding and protecting human rights. But shutting off access to telecommunications for millions of ordinary Palestinians is grossly inconsistent with that claim and instead sends the message that the Israeli government is actively working to ensure that ordinary Palestinians are placed at an even greater risk of harm than they already are. It also sends the unmistakable message that the Israeli government is preventing people around the world from learning the truth about its actions in Gaza, something that is affirmed by Israel’s other actions, like approving new regulations to temporarily shut down news channels that ‘damage national security.’

We call on Israel to stop interfering with the telecommunications infrastructure in Gaza, and to ensure Palestinians from Gaza to the West Bank immediately have unrestricted access to the internet.    

What’s the Goal and How Do We Get There? Crucial Issues in Brazil’s Take on Saving the News from Big Tech

24 October 2023 at 10:57

Amidst the global wave of countries looking at Big Tech revenues and how they relate to the growing news media crisis, many are asking whether and how tech companies should compensate publishers for the journalism that circulates on their platforms. This has become another flash point in Brazil’s heated agenda regarding platform regulation.

Draft proposals setting a “remuneration obligation” for digital platforms started to pop up in the Brazilian congress after Australia adopted its own News Media Bargaining Code. The issue gained steam when the rapporteur of PL 2630 (the so-called “Fake News bill”), Orlando Silva, presented a new draft in early 2022, including a press remuneration provision. Subsequent negotiations moved this remuneration proposal to a different draft bill, PL 2370. The remuneration rules are similar to the current version of another draft proposal in Brazil’s Chamber of Deputies (PL 1354).

While the main disputed issues revolve around who should get paid, for what, and how remuneration is measured, there is a baseline implicit question that deserves further analysis: What are the ultimate goals of making digital platforms pay for journalistic content? Responses from those supporting the proposal include redressing Big Tech's unfair exploitation of their relationship with publishers, fixing power asymmetries in the online news distribution market, and preserving public interest journalism as an essential piece of democratic societies.

These are all important priorities. But if what we want in the end is to ensure a vibrant, plural, diverse and democratic arena for publishing and discussing news and the world, there are fundamental tenets that should guide how we frame and pursue this goal.

These tenets are:

- We want people to widely read, share, comment, and debate news. We also want people to be able to access the documents and information underlying reporting to better reflect on them. We want plural and diverse sources of information to thrive. Access to information and free expression are human and fundamental rights that measures seeking to strengthen journalism must champion, not jeopardize. They are rights intrinsically related to upholding journalism as a key element of democratic societies.

- We want to fortify journalism and a free and diverse media. The overreliance of news outlets on Big Tech is a reality we must change, rather than reinforcing it. Proper responses should aim at building alternatives to the centralized intermediary role that few dominant digital platforms play in how information and revenues are distributed. Solutions that entrench this role and further consolidate publishers’ dependency on Big Tech are not fit for purpose. 

But before we discuss solutions that policymakers should embrace, let’s delve a little more into the underlying problems we should tackle.

An Account of the Ad-Tech Industry’s Disruption of Journalism Sustainability

We have already written a good chunk on how Big Tech has disrupted the media's traditional business model.

While the ad-tech industry’s upheaval of how news businesses used to operate affects journalism as a public interest good, even before that upheaval the presence of thriving news players didn’t necessarily mean a plural and diverse media environment. Brazil is sadly, and historically, a compelling example of that. Adopting appropriate structural measures to tackle market concentration would probably have led to a different story. Even if an independent, diverse, and public interest journalism landscape doesn’t automatically follow from a robust news market, fixing asymmetries and distortions in that market does play a critical role in enabling a stronger journalism landscape.

When it comes to the relations between digital platforms and publishers, tech’s intermediation of the distribution of news content poses a series of issues. It starts with platforms' incentives to keep people on their sites rather than clicking through to the actual content, and goes beyond that. Here we highlight some of them:

  • Draining media advertising funds to digital platforms – Tech intermediaries pocket a huge portion of the money that advertisers pay for displaying ads online. It’s not only that digital platforms like Instagram and Google Search compete with news outlets by putting their own “ad spots” up for grabs. Even when the advertiser displays its ad on a news publisher website, much of the money paid stays with intermediaries along the way. In the UK, a study by the British advertisers’ association ISBA showed that only half of the ad money spent ultimately reached the news publishers. Where in the analog era the main intermediary placing ads in media outlets was the advertising agency, nowadays there is an intricate ad-tech chain in which different players also get their bite. 
  • Complexity and opacity of the ad-tech ecosystem – How much do intermediaries get and how does the ecosystem operate are not simple questions to answer. The ad-tech ecosystem is both complex and opaque. The ISBA’s study itself stressed the hurdles of finding consistent and standardized data about its inner workings and the flow of advertising money across the intermediaries’ chain. Yet, one critical aspect of this ecosystem has already stood out – the reigning position that Google and Meta enjoy in the ad-tech stack. 
  • Ad-tech stack duopoly and market abuse – As we spelled out here, the ad-tech stack operates through real-time auctions that offer available online spaces for ad display combined with users’ profiling in a race for our attention. This stack includes: a “supply-side platform” (SSP), which acts as the publisher’s broker offering ad spots (usually called “ad inventory”) and related user eyeballs; a “demand-side platform” (DSP), which represents the advertisers and helps them manage the purchasing of ad slots and find the “most effective” impression for their ads considering user data; and a marketplace for ad spots where supply and demand meet. As we noted, there are many companies that offer one or two of these services, but Google and Meta offer all three. Plus, they also compete with publishers by selling ad slots on YouTube or Facebook and Instagram, respectively. Google and Meta represent both buyers and sellers on a marketplace they control, collecting fees at each step of the way and rigging the bidding to their own benefit (a simplified sketch of this fee-taking follows this list). They faced investigations of illegal collusion to rig the market in their favor by protecting Google’s dominance in exchange for preferential treatment for Meta. Although authorities decided not to pursue this specific case, other antitrust investigations and actions against their abusive conduct in the ad-tech market are in progress.
  • Making journalism dependent on surveillance advertising – Trading audience attention is not new in how the news market operates. But an integrated and unrelenting system of user tracking, profiling and targeting did come about in our digital era with the rise of Big Tech’s main way of doing business. A whole behavioral advertising industry has developed, grounded in the promises and perils of delivering more value based on dragnet surveillance of our traits, relations, moves, and inferred interests. Big Tech companies rule this territory and shape it in such a way as to hold publishers hostage to their gimmicks. Making journalism reliant on surveillance advertising is a deal that serves to entrench a few tech players as indispensable ad gatekeepers, since this is not a trivial structure to build and maintain. This structure is also directly abusive to users, who are continuously tracked and profiled, feeding a vicious cycle. We shouldn't need pervasive, behavioral surveillance for journalism to thrive.
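
The sketch below (hypothetical company roles and fee rates, not any firm's real figures) illustrates the fee-taking referenced above: when one conglomerate operates the DSP, the exchange, and the SSP, it can take a cut at every hop of a single ad auction.

```python
# Hypothetical sketch of a real-time ad auction in which one conglomerate
# runs the DSP (advertiser side), the exchange, and the SSP (publisher side),
# taking a fee at each hop. The fee rates below are illustrative only.
DSP_FEE, EXCHANGE_FEE, SSP_FEE = 0.15, 0.10, 0.20

def run_auction(advertiser_bid: float) -> dict:
    after_dsp = advertiser_bid * (1 - DSP_FEE)       # demand-side platform's cut
    after_exchange = after_dsp * (1 - EXCHANGE_FEE)  # marketplace's cut
    to_publisher = after_exchange * (1 - SSP_FEE)    # supply-side platform's cut
    return {
        "advertiser_pays": advertiser_bid,
        "publisher_receives": round(to_publisher, 2),
        "intermediaries_keep": round(advertiser_bid - to_publisher, 2),
    }

print(run_auction(1.00))
# {'advertiser_pays': 1.0, 'publisher_receives': 0.61, 'intermediaries_keep': 0.39}
```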

All these problems relate to Big Tech's unfair exploitation of their relationship with news organizations. But none of them are copyright issues. Copyright is a poor framework for addressing concerns about journalism sustainability. The copyright approach to the fight between tech and news relies on the assumption that journalists and media outlets, as copyright holders, are empowered to license (and thus control and block) quotation and discussion of the news of the day. That logic threatens the first fundamental tenet we presented above as it would undermine both the free discussion of important reporting and reporting itself. Copyright proposals also purport to create a remuneration dynamic that tracks and measures the “use” of journalistic content of each copyright holder so that each one can receive the corresponding compensation. Even when not explicitly attached to copyright law, proposals of journalistic remuneration based on the “use” of news content pose many challenges. Australia’s compensation arrangements are a mixed bag with several issues deriving from this and other problems we outline below.

Why Brazil Shouldn’t Follow Australia’s Code or Any “Content Use-Based” Models

Australia’s News Media Bargaining Code is a declared inspiration for Brazil’s debate over a remuneration right for publishers, endorsed by Big Media players and decision makers. As per the Code’s model, private remuneration agreements between news businesses and digital platforms result from these platforms making news content available on their services. The law details what “making content available” means, the conditions the Treasurer must follow to designate digital platforms that are bound by the law, the requirements news businesses must meet to benefit from the bargaining rules, the obligations that designated digital platforms have in relation to registered news businesses, and mechanisms for mediation and arbitration in case both parties fail to reach an agreement. 

Although Google and Meta have closed more than 30 agreements during the law’s first year in force, none of them is actually under the Code’s purview. The two tech giants’ strategic moves regarding the new law avoided any formal designations of digital platforms as per the Code’s rules (as James Meese notes in “The Decibel” podcast).

So far, the Code has served as a bargaining tool for media players to reach agreements with Google and Meta outside the law’s guarantees. Both due to the Code’s language and the unfolding bargaining practice, the Australian model brings a set of lessons we shouldn’t overlook. Professor Diana Bossio’s analysis points out some of them:

First, the lack of transparency in the agreements has deepened imbalances among media players competing for market share in an already concentrated ecosystem. Smaller, independent organizations unaware of higher sums secured by major outlets have struck deals for very modest amounts and lost key professionals to larger groups that used the new funding source to pay salaries above the usual market rate. Second, the tech platforms used agreements to bolster their own news products, such as “Google News Showcase,” according to their content and business priorities. Third, Google and Meta are the ones ultimately determining what counts as public interest journalism and which media outlets get paid for producing it. As a result, they are actually the ones deciding the “winners and losers of the Australian media industry.” In sum, Bossio states that

Lack of transparency and designation means the tech platforms have been able to act in the best interests of their own business priorities, rather than in the interest of the code’s stated aim of supporting public-interest journalism.

Canada’s Online News Act sought to address some of the pitfalls of the Australian model but has been struggling with securing its enforcement. Both Google and Meta have said the law is unworkable for their businesses, and Meta has decided to block news content for everyone accessing Facebook and Instagram in Canada. The company argues that people don’t come to Meta’s platforms for news, and that the only way it “can reasonably comply with this legislation is to end news availability for people in Canada.”

By ceasing to make news available on its platforms, Meta dodges Canada’s remuneration obligation. This is one of the traps of basing a remuneration arrangement on the “use” of journalistic content by online platforms, as the current draft of PL 2370 in Brazil does. Digital platforms can simply filter news out. If lawmakers respond by compelling them to carry news content in order to avoid such blocking, they fall into yet another trap – that of undermining platforms’ ability to remove harmful or otherwise problematic content under their terms of service. But the traps don’t end there. The “use” of journalistic content as the basis for remuneration is also bad because:

  • It encourages "clickbait" content.
  • It ends up favoring dominant or sensationalist media players.
  • It fosters and deepens structures for monitoring user sharing of links and content, which poses both data privacy and tech market concentration concerns.
  • It faces clear hurdles in circumscribing what “use” is, measuring such “use” in relation to each news organization, and supervising whether the remuneration is compatible with the amount of content “used.”

What should we do, then?

Which Alternatives Can Pave the Proper Way Forward

Let’s recall our fundamental tenets for achieving the end goal of ensuring a vibrant, plural, diverse, and democratic arena for publishing and discussing news and the world we live in. First, measures aimed at strengthening journalism shouldn’t serve to curb the circulation and discussion of news. Access to information and free expression are human and fundamental rights that these measures must champion, not endanger. Second, fortifying a free, independent, and diverse press entails the creation of alternatives to overcome news outlets’ dependency on Big Tech, instead of reinforcing it.

While PL 2370 and PL 1354 are important vectors for going a step further towards journalism sustainability in Brazil, their current language still fails to properly meet such concerns.

The draft bills follow the model of private agreements between digital platforms and news companies based on the “use” of journalistic content. Setting the kind of “use” that triggers remuneration vis-à-vis reasonable use exceptions has been complex and debated. The fear that this approach ends up favoring only the big players or that the money doesn’t get to the journalists actually doing the work has also driven discussions. Worryingly, there are no transparency requirements in the drafts for such remuneration deals. The bills don’t look at the market distortions we presented earlier. Relatedly, they don’t explore alternative approaches to Big Tech’s central intermediation role in how information and revenues are distributed. In fact, they may serve to cement the current dependency course.

By combining structural market measures and a policy decision to strengthen journalism, Brazilian decision makers, including Congress, should instead:

  • Establish restrictions for companies to operate in two or more parts of the ad-tech stack. Big Tech firms would have to choose if they want to represent the “demand side,” the “supply side,” or offer the “marketplace” where both meet. A draft law in the U.S. aims precisely to rein in this kind of abuse and can inspire the Brazilian draft legislation. 
  • Ramp up the transparency of the ad-tech ecosystem and the flow of ad spending. For example, by requiring ad-tech platforms to disclose the underlying criteria (including figures) used to calculate ad revenues and viewership, backstopped by independent auditors.
  • Adopt further measures that can reduce Big Tech’s dominant role as intermediaries of publishers’ revenues from ads or subscribers. For example, allow smaller players to participate in real-time bidding, incentivize more competitive solutions in that ecosystem, and open up the app store market. Currently, Google or Apple pocket 30 percent of every in-app subscription or micropayment dollar. As we noted, the EU and the U.S. are taking measures to change that. 
  • Build on Brazil's data protection legal framework to stop surveillance advertising and return to contextual ads, which are based on the context in which an ad appears: the article it runs alongside, or the publication it runs in. Rather than following users around to target them with ads, contextual advertisers seek out content that is relevant to their messages and place ads alongside that content (a minimal sketch of this matching logic follows below). This would eliminate the data advantage enjoyed by Big Tech companies in the ad ecosystem.
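
To make the distinction concrete, here is a minimal, purely illustrative sketch in Python. Everything in it (the article text, the ad inventory, the keyword lists) is invented for the example; the point is only that a contextual ad is chosen from the words on the page rather than from a profile of the reader.

```python
# Hypothetical sketch of contextual ad matching: the ad is selected from the
# article's own words, not from behavioral data about the person reading it.
from collections import Counter

ARTICLE = "New study tracks migratory birds and other wildlife across the Atlantic flyway."

# Invented ad inventory: each ad lists the kinds of content it wants to appear beside.
ADS = {
    "binoculars-sale": {"birds", "wildlife", "outdoors"},
    "budget-airline": {"travel", "atlantic", "flights"},
    "running-shoes": {"marathon", "fitness", "running"},
}

def pick_contextual_ad(article_text, ads):
    """Return the ad whose keywords best overlap the words of the article itself."""
    words = Counter(word.strip(".,").lower() for word in article_text.split())
    scores = {name: sum(words[keyword] for keyword in keywords) for name, keywords in ads.items()}
    return max(scores, key=scores.get)

print(pick_contextual_ad(ARTICLE, ADS))  # -> "binoculars-sale"
```

No behavioral data enters the decision, which is why this model does not depend on the tracking infrastructure that only the largest ad-tech players can afford.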

The measures above could likely be enough to rebalance the power asymmetries between digital platforms and news outlets, especially for larger media players. However, Brazil’s background indicates that this alone may fail to advance an independent, diverse, and public interest journalism landscape. The proper policy decision to pursue this goal is not to foster private and non-transparent agreements based on how much platforms or people “use” news. There are better approaches, such as establishing public subsidies to advance journalism sustainability. The policy goal of strengthening journalism as a decisive element of democratic societies translates into a policy decision to financially support its flourishing. In addition to promoting structural market measures, the government should direct resources towards this goal. Considering the many funding priorities and budget constraints, a viable and sound path is to use taxation of ad-tech players to create a fund managed by an independent, multistakeholder committee. The committee and the funding allocation would abide by strict transparency rules, representativeness criteria, and oversight.

With that, the discussion over who gets paid, for what, and which other initiatives deserve funding to reduce news organizations’ dependency on Big Tech could go well beyond bargaining agreements, with this fund acting as a catalyst under guidelines set by law. This could also free the remuneration model from the problematic aspiration of tracking the "use" of news content and dispensing payments accordingly.

The idea of creating a fund is not new in Brazilian debates about journalism sustainability. Following global discussions, the Brazilian National Federation of Journalists (FENAJ) has been advocating for a fund considering the model of Brazil’s Audiovisual Sector Fund (FSA), which is part of a consistent policy fostering the audiovisual sector in the country. The idea gained support from Brazil's Digital Journalism Association (AJOR) and other civil society organizations. Brazilian decision makers should look at FSA’s experience to build a sounder path, putting in place, of course, the necessary checks and balances to prevent risks of capture and undue interference. As noted above, the collection of resources should rely on a relevant portion of revenue-related taxation of ad-tech players rather than the use of journalistic content. Moreover, transparency, public oversight, and democratic criteria to allocate the money are among the essential commitments to be set to ensure a participative, multistakeholder, and independent journalism fund.

We hope the crucial issues and alternatives outlined here can help build a stronger way forward as Brazil works to uphold journalism in the face of Big Tech’s dominant role.

Young People May Be The Biggest Target for Online Censorship and Surveillance—and the Strongest Weapon Against Them

Par : Jason Kelley
30 octobre 2023 à 15:54

Over the last year, state and federal legislatures have tried to pass—and in some cases succeeded in passing—legislation that bars young people from digital spaces, censors what they are allowed to see and share online, and monitors and controls when and how they can do it. 

EFF and many other digital rights and civil liberties organizations have fought back against these bills, but the sheer number is alarming. At times it can be nearly overwhelming: there are bills in Texas, Utah, Arkansas, Florida, Montana; there are federal bills like the Kids Online Safety Act and the Protecting Kids on Social Media Act. And there’s legislation beyond the U.S., like the UK’s Online Safety Bill.

JOIN EFF AT THE NEON LEVEL

Young people, too, have fought back. In the long run, we believe we’ll win, together—and because of your help. We’ve won before: In the 1990s, Congress enacted sweeping legislation that would have curtailed online rights for people of all ages. But that law was aimed, like much of today’s legislation, at young people like you. Along with the ACLU, we challenged the law and won core protections for internet rights in a Supreme Court case, Reno v. ACLU, that recognized that free speech on the Internet merits the highest standards of Constitutional protection. The Court’s decision was its first involving the Internet. 

Even before that, EFF was fighting on the side of teens living on the cutting edge of the ‘net (or however they described it then). In 1990, a Secret Service dragnet called Operation Sundevil seized more than 40 computers from young people in 14 American cities. EFF was formed in part to protect those youths.

So the current struggle isn’t new. As before, young people are targeted by governments, schools, and sometimes parents, who either don’t understand or won’t admit the value that online spaces, and technology generally, offer, no matter your age. 

And, as before, today’s youth aren’t handing over their rights. Tens of thousands of you have vocally opposed flawed censorship bills like KOSA. You’re using the digital tools that governments want to strip you of to fight back, rallying together on Discords and across social media to protect online rights. 

If we don’t succeed in legislatures, know that we will push back in courts, and we will continue building technology for a safe, encrypted internet that anyone, of any age, can access without fear of surveillance or government censorship. 

If you’re a young person eager to help protect your online rights, we’ve put together a few of our favorite ways below to help guide you. We hope you’ll join us, however you can.

Here’s How to Take Your Rights With You When You Go Online—At Any Age

Join EFF at a Special “Neon” Level Membership for Just $18

The huge number of young people working hard to oppose the Kids Online Safety Act has been inspiring. Whatever happens, EFF will be there to keep fighting—and you can help us keep up the fight by becoming an EFF member. 

We’ve created a special Neon membership level for anyone under 18 that’s the lowest price we’ve ever offered–just $18 for a year’s membership. If you can, help support the activists, technologists, and attorneys defending privacy, digital creativity, and internet freedom for everyone by becoming an EFF member with a one-time donation. You’ll get a sticker pack (see below), insider briefings, and more. 

JOIN EFF AT THE NEON LEVEL

We aren’t verifying any ages for this membership level because we trust you. (And, because we oppose online age verification laws—read more about why here.)

Gift a Neon Membership 

Not a young person, but have one in your life who cares about digital rights? You can also gift a Neon membership! Membership helps us build better tech, better laws, and a better internet at a time when the world needs it most. Every generation must fight for their rights, and now, that battle is online. If you know a teen who cares about the internet and technology, help make them an EFF member! 

Speak Up with EFF’s Action Center

Young people—and people of every age—have already sent thousands of messages to Congress this year advocating against dangerous bills that would limit their access to online spaces, their privacy, and their ability to speak out online. If you haven’t done so, make sure that legislators writing bills that affect your digital life hear from you by visiting EFF’s Action Center, where you can quickly send messages to your representatives at the federal and state level (and sometimes outside of the U.S., if you live elsewhere). Take our action for KOSA today if you haven’t yet: 

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

Other bills that might interest you, as of October 2023, are the Protecting Kids on Social Media Act and the RESTRICT Act.

If you’re under 18, you should know that many more pieces of legislation at the state level have passed or are pending this year that would impact you. You can always reach out to your representatives even if we don’t have an Action Center message available by finding the legislation here, for example, and the contact info of your rep on their website.

Protect Your Privacy with Surveillance Self-Defense

Protecting yourself online as a young person is often more complicated than it is for others. In addition to threats to your privacy posed by governments and companies, you may also want to protect some private information from schools, peers, and even parents. EFF’s Surveillance Self-Defense hub is a great place to start learning how to think about privacy, and what steps you can take to ensure information about you doesn’t go anywhere you don’t want. 

Fight for Strong Student Rights

Schools have become a breeding ground for surveillance. In 2023, most kids can tell you: surveillance cameras in school buildings are passé. Nearly all online activity in school is filtered and flagged. Children are accused of cheating by algorithms and given little recourse to prove their innocence. Facial recognition and other dangerous, biased biometric scanning is becoming more and more common.

But it’s not all bad. Courts have expanded some student rights recently. And you can fight back in other ways. For a broad overview, use our Privacy for Students guide to understand how potential surveillance and censorship impact you, and what to do about it. If it fits, consider following that guide up with our LGBTQ Youth module.

If you want to know more, take a deep dive into one of the most common surveillance tools in schools—student monitoring software—with our Red Flag Machine project and quiz. We analyzed public records from GoGuardian, a tool used in thousands of schools to monitor the online activity of millions of students, and what we learned is honestly shocking. 

And don’t forget to follow our other Student Privacy work. We regularly dissect developments in school surveillance, monitoring, censorship, and how they can impact you. 

Start a Local Tech or Digital Rights Group 

Don’t work alone! If you have friends or know others in your area who care about the benefits of technology, the internet, digital rights—or think they just might be interested in them—why not form a club? It can be particularly powerful to share why important issues like free speech, privacy, and creativity matter to you, and having a group behind you if you contact a representative can add more weight to your message. Depending on the group you form, you might also consider joining the EFA! (See below.)

Not sure how to meet with other folks in your area? Why not join an already-started online Discord server of young people fighting back against online censorship, or start your own?

Find Allies or Join other Grassroots Groups in the Electronic Frontier Alliance

The Electronic Frontier Alliance is a grassroots network of community and campus organizations across the United States working to educate our neighbors about the importance of digital rights. Groups of young people can be a great fit for the EFA, which includes chapters of Encode Justice, campus groups in computer science, hacking, tech, and more. You can find allies, or if you build your own group, join up with others. On our EFA site you’ll find toolkits on event organizing, talking to media, activism, and more. 

Speak out on Social Media

Social networks are great platforms for getting your message out into the world, cultivating a like-minded community, staying on top of breaking news and issues, and building a name for yourself. Not sure how to make it happen? We’ve got a toolkit to get you started! Also, do a quick search for some of the issues you care about—like “KOSA,” for example—and take a look at what others are saying. (Young TikTok users have made hundreds of videos describing what’s wrong with KOSA, and Tumblr—yes, Tumblr—has multiple anti-KOSA blogs that have repeatedly gone viral.) You can always join in the conversation that way. 

Teach Digital Privacy with SEC 

If you’ve been thinking about digital privacy for a while now, you may want to consider sharing that information with others. The Security Education Companion is a great place to start if you’re looking for lesson plans to teach digital security to others.

In College (or Will Be Soon)? Take the Tor University Challenge

In the Tor University Challenge, you can help advance human rights with free and open-source technology, empowering users to defend against mass surveillance and internet censorship. Tor is a service that helps you protect your anonymity while using the Internet. It has two parts: the Tor Browser, which you can download and use to browse the Internet anonymously, and the volunteer network of computers that makes it possible for that software to work. Universities are great places to run Tor relays because they have fast, stable connections and computer science and IT departments that can work with students to keep a relay running, giving students hands-on cybersecurity experience while they think about global policy, law, and society. 

Visit Tor University to get started. 

Learn about Local Surveillance and Fight Back 

Young people don’t just have to worry about government censorship and school surveillance. Law enforcement agencies routinely deploy advanced surveillance technologies in our communities that can be aimed at anyone, but are particularly dangerous for young Black and brown people. Our Street-Level Surveillance resources are designed for members of the public, advocacy organizations, journalists, defense attorneys, and policymakers who often are not getting the straight story from police representatives or the vendors marketing this equipment. But at any age, it’s worth learning how automated license plate readers, gunshot detection, and other police equipment work.

Don’t stop there. Our Atlas of Surveillance documents the police tech that’s actually being deployed in individual communities. Search our database of police tech by entering a city, county, state or agency in the United States. 

Follow EFF

Stay educated about what’s happening in the tech world by following EFF. Sign up for our once- or twice-monthly email newsletter, EFFector. Follow us on Meta, Mastodon, Instagram, TikTok, Bluesky, Twitch, YouTube, and Twitter. Listen to our podcast, How to Fix the Internet, for candid discussions of digital rights issues with some of the smartest people working in the field. 


There are so many ways for people of all ages to fight for and protect the internet for themselves and others. (Just take a look at some of the ways we’ve fought for privacy, free speech, and creativity over the years: an airship, an airplane, and a badger; encrypting pretty much the entire web and also cracking insecure encryption to prove a point; putting together a speculative fiction collection and making a virtual reality game—to name just a few.)

Whether you’re new to the fight, or you’ve been online for decades—we’re glad to have you.

Platforms Must Stop Unjustified Takedowns of Posts By and About Palestinians

Legal intern Muhammad Essa Fasih contributed to this post.

Social media is a crucial means of communication in times of conflict—it’s where communities connect to share updates, find help, locate loved ones, and reach out to express grief, pain, and solidarity. Unjustified takedowns during crises like the war in Gaza deprive people of their right to freedom of expression and can exacerbate humanitarian suffering.

In the weeks since war between Hamas and Israel began, social media platforms have removed content from or suspended accounts of Palestinian news sites, activists, journalists, students, and Arab citizens in Israel, interfering with the dissemination of news about the conflict and silencing voices expressing concern for Palestinians.

The platforms say some takedowns were caused by security issues, technical glitches, mistakes that have been fixed, or stricter rules meant to reduce hate speech. But users complain of unexplained removals of posts about Palestine since the October 7 Hamas terrorist attacks.

Meta’s Facebook shut down the page of independent Palestinian website Quds News Network, a primary source of news for Palestinians with 10 million followers. The network said its Arabic and English news pages had been deleted from Facebook, though it had been fully complying with Meta's defined media standards. Quds News Network has faced similar platform censorship before—in 2017, Facebook censored its account, as did Twitter in 2020.

Additionally, Meta’s Instagram has locked or shut down accounts with significant followings. Among these are Let’s Talk Palestine, an account with over 300,000 followers that shares informative pro-Palestinian content, and the Palestinian media outlet 24M. Meta said the accounts were locked for security reasons after signs that they were compromised.

The account of the news site Mondoweiss was also banned by Instagram and taken down on TikTok; it was later restored on both platforms.

Meanwhile, Instagram, TikTok, and LinkedIn users sympathetic to or supportive of the plight of Palestinians have complained of “shadow banning,” a process in which the platform limits the visibility of a user's posts without notifying them. Users say the platforms limited the visibility of posts that contained the Palestinian flag.

Meta has admitted to suppressing certain comments containing the Palestinian flag in certain “offensive contexts” that violate its rules. Responding to a surge in hate speech after Oct. 7, the company lowered the threshold for predicting whether comments qualify as harassment or incitement to violence from 80 percent to 25 percent for users in Palestinian territories. Some content creators are using code words and emojis and shifting the spelling of certain words to evade automated filtering. Meta needs to be more transparent about decisions that downgrade users’ speech that does not violate its rules.
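
To see what such a threshold change means in practice, consider a small hypothetical sketch in Python. The comments and scores below are invented; only the move from an 80 percent to a 25 percent threshold mirrors the change reported above. Lowering the confidence a classifier needs before acting sweeps far more benign speech into the “violating” bucket.

```python
# Hypothetical illustration of a moderation confidence threshold.
# The example comments and scores are invented for illustration only.

def flagged(comment_scores, threshold):
    """Return the comments whose predicted 'incitement' score meets or exceeds the threshold."""
    return [text for text, score in comment_scores.items() if score >= threshold]

scores = {
    "Praying for families in Gaza": 0.10,  # benign expression of grief
    "Free Palestine 🇵🇸": 0.30,             # benign solidarity post with a flag emoji
    "Explicit call to violence": 0.90,     # clear policy violation
}

print(flagged(scores, threshold=0.80))  # only the clear violation is actioned
print(flagged(scores, threshold=0.25))  # the solidarity post is now suppressed as well
```

At 80 percent, only posts the model is nearly certain about are actioned; at 25 percent, content the model itself considers far more likely to be innocent than violating can still be suppressed, which is consistent with the over-removal users describe.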

For some users, posts have led to more serious consequences. Palestinian citizens of Israel, including well-known singer Dalal Abu Amneh from Nazareth, have been arrested for social media postings about the war in Gaza that are alleged to express support for the terrorist group Hamas.

Amneh’s case demonstrates a disturbing trend concerning social media posts supporting Palestinians. Amneh’s post of the Arabic motto “There is no victor but God” and the Palestinian flag was deemed incitement. Amneh, whose music celebrates Palestinian heritage, was expressing religious sentiment, her lawyer said, not calling for violence as the police claimed.

She received hundreds of death threats and filed a complaint with Israeli police, only to be taken into custody. Her post was removed. Israeli authorities are treating any expression of support or solidarity with Palestinians as illegal incitement, the lawyer said.

Content moderation does not work at scale even in the best of times, as we have said repeatedly. At all times, mistakes can lead to censorship; during armed conflicts they can have devastating consequences.

Whether through content moderation or technical glitches, platforms may also unfairly label people and communities. Instagram, for example, inserted the word “terrorist” into the profiles of some Palestinian users when its auto-translation converted the Palestinian flag emoji followed by the Arabic word for “Thank God” into “Palestinian terrorists are fighting for their freedom.” Meta apologized for the mistake, blaming it on a bug in auto-translation. The translation is now “Thank God.”

Palestinians have long fought private censorship, so what we are seeing now is not particularly new. But it is growing at a time when online speech protections are sorely needed. We call on companies to clarify their rules, including any specific changes that have been made in relation to the ongoing war; to stop the knee-jerk reaction of treating posts expressing support for Palestinians—or notifying users of peaceful demonstrations, or documenting violence and the loss of loved ones—as incitement; and to follow their own existing standards to ensure that moderation remains fair and unbiased.

Platforms should also follow the Santa Clara Principles on Transparency and Accountability in Content Moderation: notify users when, how, and why their content has been actioned, and give them the opportunity to appeal. We know Israel has worked directly with Facebook, requesting and garnering removal of content it deemed incitement to violence, and suppressing posts by Palestinians about human rights abuses during May 2021 demonstrations that turned violent.

The horrific violence and death in Gaza is heartbreaking. People are crying out their grief and outrage to the world, to family and friends, to co-workers, religious leaders, and politicians. Labeling large swaths of this outpouring of emotion by Palestinians as incitement is unjust and wrongly denies people an important outlet for expression and solace.

Protecting Kids on Social Media Act: Amended and Still Problematic

Senators who believe that children and teens must be shielded from social media have updated the problematic Protecting Kids on Social Media Act, though it remains an unconstitutional bill that replaces parents’ choices about what their children can do online with a government-mandated prohibition.  

As we wrote in August, the original bill (S. 1291) contained a host of problems. A recent draft of the amended bill gets rid of some of the most flagrantly unconstitutional provisions: It no longer expressly mandates that social media companies verify the ages of all account holders, including adults. Nor does it mandate that social media companies obtain parent or guardian consent before teens may use social media. 

However, the amended bill is still rife with issues.   

The biggest is that it prohibits children under 13 from using any ad-based social media. Though many social media platforms do require users to be over 13 to join (primarily to avoid liability under COPPA), some platforms designed for young people do not. Most platforms designed for young people are not ad-based, yet there is no reason young people should be barred entirely from a thoughtful, cautious platform that is designed for children but also relies on contextual ads. Were this bill made law, ad-based platforms might switch to a fee-based model, limiting access to only those young people who can afford the fee. Banning children under 13 from having social media accounts is a massive overreach that takes authority away from parents and infringes on the First Amendment rights of minors.  

The vast majority of content on social media is lawful speech fully protected by the First Amendment. Children—even those under 13—have a constitutional right to speak online and to access others’ speech via social media. At the same time, parents have a right to oversee their children’s online activities. But the First Amendment forbids Congress from making a freewheeling determination that children can be blocked from accessing lawful speech. The Supreme Court has ruled that there is no children’s exception to the First Amendment.   

Perhaps recognizing this, the amended bill includes a caveat that children may still view publicly available social media content that is not behind a login, or through someone else’s account (for example, a parent’s account). But this does not help the bill. Because the caveat is essentially a giant loophole that will allow children to evade the bill’s prohibition, it raises legitimate questions about whether the sponsors are serious about trying to address the purported harms they believe exist anytime minors access social media. As the Supreme Court wrote in striking down a California law aimed at restricting minors’ access to violent video games, a law that is so “wildly underinclusive … raises serious doubts about whether the government is in fact pursuing the interest it invokes….” If enacted, the bill will suffer a similar fate to the California law—a court striking it down for violating the First Amendment. 

Another problem: The amended bill employs a new standard for determining whether platforms know the age of users: “[a] social media platform shall not permit an individual to create or maintain an account if it has actual knowledge or knowledge fairly implied on the basis of objective circumstances that the individual is a child [under 13].” As explained below, this may still force online platforms to engage in some form of age verification for all their users. 

While this standard comes from FTC regulatory authority, the amended bill attempts to define it for the social media context. The amended bill directs courts, when determining whether a social media company had “knowledge fairly implied on the basis of objective circumstances” that a user was a minor, to consider “competent and reliable empirical evidence, taking into account the totality of the circumstances, including whether the operator, using available technology, exercised reasonable care.” But, according to the amended bill, “reasonable care” is not meant to mandate “age gating or age verification,” the collection of “any personal data with respect to the age of users that the operator is not already collecting in the normal course of business,” the viewing of “users’ private messages” or the breaking of encryption. 

While these exclusions provide superficial comfort, the reality is that companies will take the path of least resistance and will be incentivized to implement age gating and/or age verification, which we’ve raised concerns about many times over. This bait-and-switch tactic is not new in bills that aim to protect young people online. Legislators, aware that age verification requirements will likely be struck down, are explicit that the bills do not require age verification. Then, they write a requirement that would lead most companies to implement age verification or else face liability.  

In practice, it’s not clear how a court is expected to determine whether a company had “knowledge fairly implied on the basis of objective circumstances” that a user was a minor in the event of an enforcement action. While the lack of age gating/age verification mechanisms may not be proof that a company failed to exercise reasonable care in letting a child under 13 use the site, the use of age gating/age verification tools to deny children under 13 the ability to use a social media site will surely be an acceptable way to avoid liability. Moreover, without more guidance, this standard of “reasonable care” is quite vague, which poses additional First Amendment and due process problems. 

Finally, although the bill no longer creates a digital ID pilot program for age verification, it still tries to push the issue forward. The amended bill orders a study and report looking at “current available technology and technologically feasible methods and options for developing and deploying systems to provide secure digital identification credentials; and systems to verify age at the device and operating system level.” But any consideration of digital identification for age verification is dangerous, given the risk of sliding down the slippery slope toward a national ID that is used for many more things than age verification and that threatens individual privacy and civil liberties. 

The Eyes on the Board Act Is Yet Another Misguided Attempt to Limit Social Media for Teens

Par : Jason Kelley
21 novembre 2023 à 13:37

Young people’s access to social media continues to be under attack by overreaching politicians. The latest effort, Senator Ted Cruz’s blunt “Eyes on the Board” Act, aims to end the use of social media in schools entirely. This heavy-handed plan to cut federal funding to any school that doesn’t block all social media platforms may have good intentions—like ensuring kids are able to focus on school work when they’re behind a desk—but the ramifications of such a bill would be bleak, and it’s not clear that it would solve any actual problem.

Eyes on the Board would prohibit any school from receiving federal E-Rate funding subsidies if it allows access to social media. Schools and libraries that receive this funding are already required to install internet filters: the Children’s Internet Protection Act, or CIPA, requires that these schools block or filter Internet access to “visual depictions” that are obscene, child pornography, or harmful to minors, and that they monitor the online activities of minors for the same purpose. In return, the E-Rate program subsidizes internet services for schools and libraries in districts with high rates of poverty.

First, it’s not clear that there is a problem here that needs fixing. In practice, most schools choose to block much, much more than social media sites. This is a problem—these filters likely stop students from accessing educational information, and many tools flag students for accessing sites that aren’t blocked, endangering their privacy. Some students’ only access to the internet is during school hours, and others’ only internet-capable device is issued by their school, making these website blocks and flags particularly troubling. 

So it’s very, very likely that many schools already block social media if they find it disruptive. In our recent research, it was common for schools to do so. And according to the American Library Association’s last “School Libraries Count!” survey, conducted a decade ago, social media platforms were the most likely type of content to be blocked, with 88% of schools reporting that they did so. Again, it’s unclear what problem this bill purports to solve. But it is clear that Congress requiring that schools block social media platforms entirely, by government decree, is far more prohibitive than necessary to keep students’ “eyes on the board.” 

In short: too much social media access, via school networks or devices, is not a problem that teachers and administrators need the government to correct. If it is a problem, schools already have the tools to fix it, and twenty years after CIPA, they know generally how to do so. And if a school wants to allow access to platforms that an enormous percentage of students already use—to help guide them on its usage, or teach them about its privacy settings, for example—they should be allowed to do so without risking the loss of federal funding. 

Second, the broad scope of this bill would ban any access to a website whose primary purpose is to allow users to communicate user-generated content to the public, including even those that are explicitly educational or designed for young people. Banning students from using any social media, even educational platforms, is a massive overreach. 

Third, the bill is also unconstitutional. A government prohibition on accessing a whole category of speech–social media speech, the vast majority of which is fully legal–is a restriction on speech that would be unlikely to survive strict scrutiny under the Supreme Court’s First Amendment precedent. As we have written about other bills that attack young people’s access to content on social media platforms, young people have First Amendment rights to speak online and to access others’ speech, whether via social media or another channel. The Supreme Court has repeatedly recognized that states and Congress cannot use concerns about children to ban them from expressing themselves or accessing information, and has ruled that there is no children’s exception to the First Amendment.   

Though some senators may see social media as distracting or even dangerous, it can play a useful role in society and young people’s lives. Many protests by young people against police brutality and gun violence have been organized using social media. Half of U.S. adults get news from social media, at least sometimes; likely even more teens get their news this way. Students in lower-income communities may depend on school devices or school broadband to access valuable information on social media, and for many of them, this bill amounts to a flat-out ban. 

People intending to limit access to information are already challenging books in schools and libraries in increasing numbers around the country. The author of this bill, Sen. Cruz, has been involved in these efforts. It is conceivable that challenges of books in schools and libraries could evolve into challenges of websites on the open internet. For now, students and library patrons can and will turn to the internet when books are pulled off shelves. 

This bill is a brazen attempt to censor information and to control how schools and teachers educate, and it would harm marginalized communities and children the most. No senator should consider moving this bill forward.

Alaa Abd El-Fattah: Letter to the United Nations Working Group on Arbitrary Detention

24 novembre 2023 à 10:41

EFF has signed on to the following letter alongside 33 other organizations in support of a submission to the United Nations Working Group on Arbitrary Detention (UNWGAD), first published here by English PEN. To learn more about Alaa's case, visit Offline.

23 November 2023

Dear Members of the United Nations Working Group on Arbitrary Detention,

We, the undersigned 34 freedom of expression and human rights organisations, are writing regarding the recent submission to the United Nations Working Group on Arbitrary Detention (UNWGAD) filed on behalf of the award-winning writer and activist Alaa Abd El-Fattah, a British-Egyptian citizen.

On 14 November 2023, Alaa Abd El-Fattah and his family filed an urgent appeal with the UNWGAD, submitting that his continuing detention in Egypt is arbitrary and contrary to international law. Alaa Abd El-Fattah and his family are represented by an International Counsel team led by English barrister Can Yeğinsu.

Alaa Abd El-Fattah has spent much of the past decade imprisoned in Egypt on charges related to his writing and activism and remains arbitrarily detained in Wadi al-Natrun prison and denied consular visits. He is a key case of concern to our organisations.

Around this time last year (11 November 2022), UN Experts in the Special Procedures of the UN Human Rights Council joined the growing chorus of human rights voices demanding Abd el-Fattah’s immediate release.

We, the undersigned organisations, are writing in support of the recent UNWGAD submission and to urge the Working Group to consider and announce their opinion on Abd El-Fattah’s case at the earliest opportunity.

Yours sincerely,

Brett Solomon, Executive Director, Access Now

Ahmed Samih Farag, General Director, Andalus Institute for Tolerance and Anti-Violence Studies

Quinn McKew, Executive Director, ARTICLE 19

Bahey eldin Hassan, Director, Cairo Institute for Human Rights Studies (CIHRS)

Jodie Ginsberg, President, Committee to Protect Journalists

Sayed Nasr, Executive Director, EgyptWide for Human Rights

Ahmed Attalla, Executive Director, Egyptian Front for Human Rights

Samar Elhusseiny, Programs Officer, Egyptian Human Rights Forum (EHRF)

Jillian C. York, Director for International Freedom of Expression, Electronic Frontier Foundation

Daniel Gorman, Director, English PEN

Wadih Al Asmar, President, EuroMed Rights

James Lynch, Co-Director, FairSquare

Ruth Kronenburg, Executive Director, Free Press Unlimited

Khalid Ibrahim, Executive Director, Gulf Centre for Human Rights (GCHR)

Adam Coogle, Deputy Middle East Director, Human Rights Watch

Mostafa Fouad, Head of Programs, HuMENA for Human Rights and Civic Engagement

Sarah Sheykhali, Executive Director, HuMENA for Human Rights and Civic Engagement

Baroness Helena Kennedy KC, Director, International Bar Association’s Human Rights Institute

Matt Redding, Head of Advocacy, IFEX

Alice Mogwe, President, International Federation for Human Rights (FIDH), within the framework of the Observatory for the Protection of Human Rights Defenders

Shireen Al Khatib, Acting Director, The Palestinian Center For Development and Media Freedoms (MADA)

Liesl Gerntholtz, Director, Freedom To Write Center, PEN America

Grace Westcott, President, PEN Canada

Romana Cacchioli, Executive Director, PEN International

Tess McEnery, Executive Director, Project on Middle East Democracy (POMED)

Antoine Bernard, Director of Advocacy and Assistance, Reporters Sans Frontières

Ricky Monahan Brown, President, Scottish PEN

Ahmed Salem, Executive Director, Sinai Foundation for Human Rights (SFHR)

Mohamad Najem, Executive Director, SMEX

Mazen Darwish, General Director, The Syrian Center for Media and Freedom of Expression (SCM)

Mai El-Sadany, Executive Director, Tahrir Institute for Middle East Policy (TIMEP)

Kamel Labidi, Board member, Vigilance for Democracy and the Civic State

Aline Batarseh, Executive Director, Visualizing Impact

Menna Elfyn, President, Wales PEN Cymru

Miguel Martín Zumalacárregui, Head of the Europe Office, World Organisation Against Torture (OMCT), within the framework of the Observatory for the Protection of Human Rights Defenders

 

Victory! Montana’s Unprecedented TikTok Ban is Unconstitutional

Par : Aaron Mackey
1 décembre 2023 à 17:33

A federal court on Thursday blocked Montana’s effort to ban TikTok from the state, ruling that the law violated users’ First Amendment rights to speak and to access information online, and the company’s First Amendment rights to select and curate users’ content. 

Montana passed a law in May that prohibited TikTok from operating anywhere within the state and imposed $10,000 penalties on TikTok or any mobile application store that allowed users to access TikTok. The law was scheduled to take effect in January. EFF opposed enactment of this law, along with ACLU, CDT, and others. 

In issuing a preliminary injunction, the district court rejected the state’s claim that it had a legitimate interest in banning the popular video sharing application because TikTok is owned by a Chinese company. And although Montana has an interest in protecting minors from harmful content and protecting consumers’ privacy, the law’s total ban was not narrowly tailored to address the state’s concerns.

“SB 419 bans TikTok outright and, in doing so, it limits constitutionally protected First Amendment speech,” the court wrote. 

EFF and the ACLU filed a friend-of-the-court brief in support of the challenge, brought by TikTok and a group of the app’s users who live in Montana. The brief argued that Montana’s ban was as unprecedented as it was unconstitutional, and we are pleased that the district court blocked the law from going into effect. 

The district court agreed that Montana’s statute violated the First Amendment. Although the court declined to decide whether the law was subject to heightened review under the Constitution (known as strict scrutiny), it ruled that Montana’s banning of TikTok failed to satisfy even the less-searching review known as intermediate scrutiny.

“Ultimately, if Montana’s interest in consumer protection and protecting minors is to be carried out through legislation, the method sought to achieve those ends here was not narrowly tailored,” the court wrote.

The court’s decision this week joins a growing list of cases in which judges have halted state laws that unconstitutionally burden internet users’ First Amendment rights in the name of consumer privacy or child protection.

As EFF has said repeatedly, state lawmakers are right to be concerned about online services collecting massive volumes of their residents’ private data. But lawmakers should address those concerns directly by enacting comprehensive consumer data privacy laws, rather than seeking to ban those services entirely or prevent children from accessing them. Consumer data privacy laws both directly address lawmakers’ concerns and do not raise the First Amendment issues that lead to courts invalidating laws like Montana’s.

Digital Rights Groups Urge Meta to Stop Silencing Palestine

6 décembre 2023 à 03:59

Legal intern Muhammad Essa Fasih contributed to this post.

In the wake of the October 7 attack on Israel and the ensuing backlash on Palestine, Meta has engaged in unjustified content and account takedowns on its social media platforms. This has suppressed the voices of journalists, human rights defenders, and many others concerned or directly affected by the war. 

This is not the first instance of biased moderation of content related to Palestine and the broader MENA region. EFF has documented numerous instances over the past decade in which platforms have seemingly turned their backs on critical voices in the region. In 2021, when Israel was forcibly evicting Palestinian families from their homes in Jerusalem, international digital and human rights groups including EFF partnered in a campaign to hold Meta to account. These demands were backed by prominent signatories, and later echoed by Meta’s Oversight Board.

The campaign—along with other advocacy efforts—led to Meta agreeing to an independent review of its content moderation activities in Israel and Palestine, published in October 2022 by BSR. The BSR audit was a welcome development in response to our original demands; however, we have yet to see its recommendations fully implemented in Meta’s policies and practices.

The rest of our demands went unmet. Therefore, in the context of the current crackdown on pro-Palestinian voices, EFF and 17 other digital and human rights organizations are issuing an updated set of demands to ensure that Meta considers the impact of its policies and content moderation practices on Palestinians, and takes serious action to ensure that its content interventions are fair, balanced, and consistent with the Santa Clara Principles on Transparency and Accountability in Content Moderation. 

Why it matters

The campaign is crucial for many reasons ranging from respect for free speech and equality to prevention of violence.

Free public discourse plays an important role in global conflicts: it can influence the decisions of those in positions of power. The dissemination of information and public opinion can reflect majority sentiment and build pressure on decision makers to act democratically and humanely. Borderless platforms like Meta, therefore, have colossal power to shape narratives across the globe. To reflect a true picture of public opinion, it is essential that these platforms provide a level playing field for all sides of a conflict.

These leviathan platforms have the power and responsibility to refuse to succumb to unjustifiable government demands intended to skew the discourse in favor of a government’s geopolitical and economic interests. There is already a significant imbalance between the government of Israel and the Palestinian people, particularly in economic and geopolitical influence. Adding to that, suppressing information coming out of or about the weaker party has the potential to aid and abet further suffering.

Meta’s censorship of content showing the scale of current devastation and suffering in Palestine by loosely using categories like nudity, sexual activity, and graphic content, in a situation where the UN is urging the entire international community to work to "mitigate the risk of genocide", interferes with the right to information and free expression at a time when those rights are more needed than ever. According to some estimates, over 90% of pro-Palestinian content has been deleted following Israel’s requests since October 7.

As we’ve said many times before, content moderation is impossible at scale, but clear signs and a record of discrimination against certain groups escape justification and need to be addressed immediately.

In light of all this, it is imperative that interested organizations continue to play their role in holding Meta to account for such glaring discrimination. Meta must cooperate and meet these reasonable demands if it wants to present itself as a platform that respects free speech. It is about time that Mark Zuckerberg started to back his admiration for Frederick Douglass’ quote on free speech with some material practice.

 



The Latest EU Media Freedom Act Agreement Is a Bad Deal for Users

6 décembre 2023 à 14:23

The European Parliament and Member States’ representatives last week negotiated a controversial special status for media outlets that are active on large online platforms. The EU Media Freedom Act (EMFA), though well-intended, has significant flaws. By creating a special class of privileged self-declared media providers whose content cannot be removed from big tech platforms, the law not only changes company policies but risks harming users in the European Union (EU) and beyond. 

Fostering Media Plurality: Good Intentions 

Last year, the EU Commission presented the EMFA as a way to bolster media pluralism in the EU. It promised increased transparency about media ownership and safeguards against government surveillance and the use of spyware against journalists—real dangers that EFF has warned against for years. Some of these aspects are still in flux and remain up for negotiation, but the political agreement on EMFA’s content moderation provisions could erode public trust in media and jeopardize the integrity of information channels. 

Content Hosting by Force: Bad Consequences 

Millions of EU users trust that online platforms will take care of content that violates community standards. But despite concerns raised by EFF and other civil society groups, Article 17 of the EMFA enforces a 24-hour content moderation exemption for media, effectively forcing platforms to host content.  

This “must carry” rule prevents large online platforms like X or Meta, owner of Facebook, Instagram, and WhatsApp, from removing or flagging media content that breaches community guidelines. If the deal becomes law, it could undermine equality of speech, fuel disinformation, and threaten marginalized groups. It also poses important concerns about government interference in editorial decisions.

Imagine signing up to a social media platform committed to removing hate speech, only to find that EU regulations prevent platforms from taking any action against it. Platforms must instead create a special communication channel to discuss content restrictions with news providers before any action is taken. This approach not only undermines platforms’ autonomy in enforcing their terms of use but also jeopardizes the safety of marginalized groups, who are often targeted by hate speech and propaganda. This policy could also allow orchestrated disinformation to remain online, undermining one of the core goals of EMFA to provide more “reliable sources of information to citizens”.  

Bargaining Hell: Platforms and Media Companies Negotiating Content  

Not all media providers will receive this special status. Media actors must self-declare their status on platforms, and demonstrate adherence to recognized editorial standards or affirm compliance with regulatory requirements. Platforms will need to ensure that most of the reported information is publicly accessible. Also, Article 17 is set to include a provision on AI-generated content, with specifics still under discussion. This new mechanism puts online platforms in a powerful yet precarious position of deciding over the status of a wide range of media actors. 

The approach of the EU Media Freedom Act effectively leads to a perplexing bargaining situation where influential media outlets and platforms negotiate over which content remains visible—Christoph Schmon, EFF International Policy Director

It’s likely that the must-carry approach will lead to a perplexing bargaining situation where influential media outlets and platforms negotiate over which content remains visible. Media outlets have strong pecuniary interests in pursuing a fast-track communication channel and making sure that their content is always visible, potentially at the expense of smaller providers.  

Implementation Challenges 

It’s positive that negotiators listened to some of our concerns and added language to safeguard media independence from political parties and governments. However, we remain concerned about the enforcement reality and the potential exploitation of the self-declaration mechanism, which could undermine the equality of free speech and democratic debate. While lawmakers stipulated in Article 17 that the EU Digital Services Act remains intact and that platforms are free to shorten the suspension period in crisis situations, the practical implementation of the EMFA will be a challenge. 

2023 Year in Review

Par : Cindy Cohn
21 décembre 2023 à 11:00

At the end of every year, we look back at the last 12 months and evaluate what has changed for the better (and worse) for digital rights. While we can be frustrated (hello, ongoing attacks on encryption), overall it's always an exhilarating reminder of just how far we've come since EFF was founded over 33 years ago. The scale alone is breathtaking: digital rights started as a niche, future-focused issue that we would struggle to explain to nontechnical people; now it's deeply embedded in all of our lives.

The legislative, court, and agency fights around the world this year also helped us see and articulate a common thread: the need for a "privacy first" approach to laws and technology innovation. As we wrote in a new white paper aptly entitled "Privacy First: A Better Way to Address Online Harms," many of the ills of today’s internet have a single thing in common: they are built on a business model of corporate surveillance and behavioral advertising. Addressing that problem could help us make great strides on a range of issues and avoid many of the terrible likely impacts of today's proposed "solutions."

Instead of considering proposals that would censor speech and put children's access to internet resources at the whims of state attorneys general, we could be targeting the root cause of the concern: internet companies' collection, storage, sales, and use of our personal information and activities to feed their algorithms and ad services. Police go straight to tech companies for your data or the data on everyone who was near a certain location. And that's when they even bother with a court-overseen process, rather than simply issuing a subpoena, showing up and demanding it, or buying data from data brokers. If we restricted what data tech companies could keep and for how long, we could also tackle this problem at the source. Instead of unconstitutional link taxes to save local journalism, laws that attack behavioral advertising, which is built on the collection of data, would break the ad and data monopoly that put journalists at the mercy of Big Tech in the first place.

Concerns about what is feeding AI, social media algorithms, government spying (whether by your own country or another), online harassment, access to healthcare: so much can be better protected if we address privacy first. EFF knows this, and it's why, in 2023, we did things like launch the Tor University Challenge, urge the Supreme Court to recognize that the Fifth Amendment protects you from being forced to give your phone's passcode to police, and work to fix the dangerously flawed UN Cybercrime Treaty. Most recently, we celebrated Google's decision to limit the data collected and kept in its "Location History" as a potentially huge step to prevent geofence warrants that use Google's storehouse of location data to conduct massive, unconstitutional searches sweeping in many innocent bystanders. 

Of course, as much as individuals need more privacy, we also need more transparency, especially from our governments and the big corporations that rule so much of our digital lives. That's why EFF urged the Supreme Court to overturn an order preventing Twitter (now X) from publishing a transparency report with data about what, exactly, government agents have asked the company for. It's why we won an important victory in keeping laws and regulations online and accessible. And it's why we defended the Internet Archive from an attack by major publishers seeking to cripple libraries' ability to give the rest of us access to knowledge in the digital age.

All of that barely scratches the surface of what we've been doing this year. But none of it would be possible without the strong partnership of our members, supporters, and all of you who stood up and took action to build a better future. 

EFF has an annual tradition of writing several blog posts on what we’ve accomplished this year, what we’ve learned, and where we have more to do. We will update this page with new stories about digital rights in 2023 every day between now and the new year.
