
EFF to Third Circuit: TikTok Has Section 230 Immunity for Video Recommendations

By: Sophia Cope
October 18, 2024 at 18:24

EFF legal intern Nick Delehanty was the principal author of this post.

EFF filed an amicus brief in the U.S. Court of Appeals for the Third Circuit in support of TikTok’s request that the full court reconsider the case Anderson v. TikTok after a three-judge panel ruled that Section 230 immunity doesn’t apply to TikTok’s recommendations of users’ videos. We argued that the panel was incorrect on the law, and that this case has wide-ranging implications for the internet as we know it today. EFF was joined on the brief by Center for Democracy & Technology (CDT), Foundation for Individual Rights and Expression (FIRE), Public Knowledge, Reason Foundation, and Wikimedia Foundation.

At issue is the panel’s misapplication of First Amendment precedent. The First Amendment protects the editorial decisions of publishers about whether and how to display content, such as the videos TikTok displays to users through its recommendation algorithm.

Additionally, because First Amendment protection in this context is limited and common law allows publishers to be held liable for other people’s content that they publish (for example, defamatory letters to the editor in print newspapers), Congress passed Section 230 to protect online platforms from liability for harmful user-generated content.

Section 230 has been pivotal for the growth and diversity of the internet—without it, internet intermediaries would potentially be liable for every piece of content posted by users, making them less likely to offer open platforms for third-party speech.

In this case, the Third Circuit panel erroneously held that since TikTok enjoys protection for editorial choices under the First Amendment, TikTok’s recommendations of user videos amount to TikTok’s first-party speech, making it ineligible for Section 230 immunity. In our brief, we argued that First Amendment protection for editorial choices and Section 230 protection are not mutually exclusive.

We also argued that the panel’s ruling does not align with what every other circuit has found: that Section 230 also immunizes the editorial decisions of internet intermediaries. We made four main points in support of this argument:

  • First, the panel ignored the text of Section 230 in that editorial choices are included in the commonly understood definition of “publisher” in the statute.
  • Second, the panel created a loophole in Section 230 by allowing plaintiffs who were harmed by user-generated content to bypass Section 230 by focusing on an online platform’s editorial decisions about how that content was displayed.
  • Third, it’s crucial that Section 230 protects editorial decisions notwithstanding additional First Amendment protection because Section 230 immunity is not only a defense against liability, it’s also a way to end a lawsuit early. Online platforms might ultimately win lawsuits on First Amendment grounds, but the time and expense of protracted litigation would make them less interested in hosting user-generated content. Section 230’s immunity from suit (as well as immunity from liability) advances Congress’ goal of encouraging speech at scale on the internet.
  • Fourth, TikTok’s recommendations specifically are part of a publisher’s “traditional editorial functions” because recommendations reflect choices around the display of third-party content and so are protected by Section 230.

We also argued that allowing the panel’s decision to stand would harm not only internet intermediaries, but all internet users. If internet intermediaries were liable for recommending or otherwise deciding how to display third-party content posted to their platforms, they would end useful content curation and engage in heavy-handed censorship to remove anything that might be legally problematic from their platforms. These responses to a weakened Section 230 would greatly limit users’ speech on the internet.

The full Third Circuit should recognize the error of the panel’s decision and reverse to preserve free expression online.

A Flourishing Internet Depends on Competition

Antitrust law has long recognized that monopolies stifle innovation and gouge consumers on price. When it comes to Big Tech, harm to innovation—in the form of “kill zones,” where major corporations buy up new entrants to a market before they can compete with them—has been easy to find. Consumer harms have been harder to quantify, since many of the services Big Tech companies offer are “free.” This is why we must move beyond price as the main determinant of consumer harm. And once that’s done, it’s easier to see the even greater benefits competition brings to the broader internet ecosystem.

In the decades since the internet entered our lives, it has changed from a wholly new and untested environment to one where a few major players dominate everyone's experience. Policymakers have been slow to adapt and have equated what's good for the whole internet with what's good for those companies. Instead of a balanced ecosystem, we have a monoculture. We need to eliminate the buildup of power around the giants and instead cultivate fertile soil for new growth.

Content Moderation 

In content moderation, for example, it’s practically a truism among experts that content moderation is impossible at scale. Facebook reports over three billion active users and is available in over 100 languages. However, Facebook is an American company that primarily does its business in English. Communication, in every culture, is heavily dependent on context. Even if Facebook were hiring experts in every language it operates in, which it manifestly is not, the company itself runs on American values. Being able to choose a social media service rooted in your own culture and language is important. It’s not that people have to choose that service, but it’s important that they have the option.

This sometimes happens in smaller fora. For example, in 2019 the knitting website Ravelry, a central hub for patterns and discussions about yarn, banned all discussion of then-President Donald Trump because it was getting toxic. A number of disgruntled users banded together to make the disallowed content available in other places.

In a competitive landscape, instead of demanding that Facebook, Twitter, or YouTube have the exact content rules you want, you could pick a service with the rules you prefer. If you want everything protected by the First Amendment, you could find it. If you want an environment with clear rules, consistently enforced, you could find that too, especially since smaller platforms, unlike the current behemoths, could actually enforce their rules.

Product Quality 

The same thing applies to product quality and the “enshittification” of platforms. Even if all of Facebook’s users spoke the same language, that’s no guarantee that they share the same values, needs, or wants. But Facebook is an American company, and it conducts its business largely in English and according to American cultural norms. As it is, Facebook’s feeds are designed to maximize user engagement and time on the service. Some people may like the recommendation algorithm, but others may want the traditional chronological feed. There’s no incentive for Facebook to offer the choice because it is not concerned with losing users to a competitor that does. It’s concerned with being able to serve as many ads to as many people as possible. In general, Facebook lacks user controls that would allow people to customize their experience on the site. That includes the ability to reorganize your feed to be chronological, to eliminate posts from anyone you don’t know, etc. There may be people who like the current, ad-focused algorithm, but as things stand no one else can get a product they would like.

Another obvious example is how much the experience of googling something has deteriorated. It’s almost hackneyed to complain about it now, but when it started, Google was revolutionary in its ability to a) find exactly what you were searching for and b) allow natural-language searching (that is, not requiring you to use Boolean queries in order to get the desired result). Google’s secret sauce was, for a long time, the ability to find the right result for a totally unique search query. If you could remember some specific string of words in the thing you were looking for, Google could find it. However, in the endless hunt for “growth,” Google moved away from quality search results and toward quantity. It also clogged the first page of results with ads and sponsored links.

Morals, Privacy, and Security 

There are many individuals and small businesses that would like to avoid using Big Tech services, either because the services are bad or because of ethical and moral concerns. But the bigger these companies are, the harder they are to avoid. For example, even someone who decides not to buy products from Amazon.com because they don’t agree with how it treats its workers may not be able to avoid patronizing Amazon Web Services (AWS), which funds the commerce side of the business. Netflix, The Guardian, Twitter, and Nordstrom all pay for Amazon’s services. The Mississippi Department of Employment Security moved its data management to Amazon in 2021. Trying to avoid Amazon entirely is functionally impossible. This means there is no way for people to “vote with their feet” by withholding their business from companies they disagree with.

Security and privacy are also at risk without competition. For one thing, it’s easier for a malicious actor or oppressive state to get what they want when it’s all in the hands of a single company—a single point of failure. When a single company controls the tools everyone relies on, an outage cripples the globe. This digital monoculture was on display during this year's CrowdStrike outage, where one badly-thought-out update crashed networks across the world and across industries. The personal danger of digital monoculture shows itself when Facebook messages are used in a criminal investigation against a mother and daughter discussing abortion, and in “geofence warrants” that demand Google turn over information about every device within a certain distance of a crime. For another thing, when everyone can share expression in only a few places, it is easier for regimes to target certain speech and for gatekeepers to maintain control over creativity.

Another example of the relationship between privacy and competition is Google’s so-called “Privacy Sandbox.” Google has messaged it as removing the “third-party cookies” that track you across the internet. However, the change actually just moved that data into the sole control of Google, helping cement its ad monopoly. Instead of eliminating tracking, the Privacy Sandbox performs the tracking within the browser directly, allowing Google to charge advertisers and websites for access to the insights gleaned from your browsing history, rather than those companies doing the tracking themselves. It’s not more privacy; it’s just concentrated control of data.

You see this same thing at play with Apple’s app store in the saga of Beeper Mini, an app that allowed secure communication through iMessage between Apple and non-Apple phones. In doing so, it eliminated the dreaded “green bubbles” that indicated that messages were not encrypted (i.e., not between two iPhones). While Apple’s design choice was, in theory, meant to flag that your conversation wasn’t secure, in practice it motivated people to get iPhones just to avoid the stigma. Beeper Mini made messages more secure and removed the need to buy a whole new phone to get rid of the green bubble. So Apple moved to break Beeper Mini, effectively choosing monopoly over security. If Apple had moved to secure non-iPhone messages on its own, that would be one thing. But it didn’t; it just prevented users from securing them on their own.

Obviously, competition isn’t a panacea. But, like privacy, prioritizing it means less emergency firefighting and more fire prevention. Think of it as a controlled burn—clearing out the dross that both smothers new growth and lets fires rage larger than ever before.

California Attorney General Issues New Guidance on Military Equipment to Law Enforcement

October 17, 2024 at 16:04

California law enforcement should take note: the state’s Attorney General has issued a new bulletin advising them on how to comply with AB 481—a state law that regulates how law enforcement agencies can use, purchase, and disclose information about military equipment at their disposal. This important guidance comes in the wake of an exposé showing that despite awareness of AB 481, the San Francisco Police Department (SFPD) flagrantly disregarded the law. EFF applauds the Attorney General’s office for reminding police and sheriff’s departments what the law says and what their obligations are, and urges the state’s top law enforcement officer to monitor agencies’ compliance with the law.

The bulletin emphasizes that law enforcement agencies must seek permission from governing bodies like city councils or boards of supervisors before buying any military equipment, or even applying for grants or soliciting donations to procure that equipment. The bulletin also reminds all California law enforcement agencies and state agencies with law enforcement divisions of their transparency obligations: they must post on their website a military equipment use policy that describes, among other details, the capabilities, purposes and authorized uses, and financial impacts of the equipment, as well as oversight and enforcement mechanisms for violations of the policy. Law enforcement agencies must also publish an annual military equipment report that provides information on how the equipment was used the previous year and the associated costs.

Agencies must cease use of any military equipment, including drones, if they have not sought the proper permission to use them. This is particularly important in San Francisco, where public records show the SFPD purchased drones without seeking the proper authorization first, despite the warnings of the department’s own policy officials.

In a climate where few cities and states have laws governing what technology and equipment police departments can use, Californians are fortunate to have regulations like AB 481 requiring transparency, oversight, and democratic control by elected officials of military equipment. But those regulations are far less effective if there is no accountability mechanism to ensure that police and sheriff’s departments follow them.


The SFPD and all other California law enforcement agencies must re-familiarize themselves with the rules. Police and sheriff’s departments must obtain permission and justify purchases before they buy military equipment, have use policies approved by their local governing body, and provide yearly reports about what they have and how much it costs.

Prosecutors in Washington State Warn Police: Don’t Use Gen AI to Write Reports

October 17, 2024 at 10:27

The King County Prosecuting Attorney’s Office, which handles all prosecutions in the Seattle area, has instructed police in no uncertain terms: do not use AI to write police reports...for now. This is a good development. We hope prosecutors across the country will exercise such caution as companies continue to peddle technology – generative artificial intelligence (genAI) meant to help write police reports – that could harm people who come into contact with the criminal justice system.

In a memo about AI-based tools that draft narrative police reports from body camera audio, Chief Deputy Prosecutor Daniel J. Clark said the technology as it exists is “one we are not ready to accept.”

The memo continues, “We do not fear advances in technology – but we do have legitimate concerns about some of the products on the market now... AI continues to develop and we are hopeful that we will reach a point in the near future where these reports can be relied on. For now, our office has made the decision not to accept any police narratives that were produced with the assistance of AI.” We would add that, while EFF embraces advances in technology, we doubt genAI in the near future will be able to help police write reliable reports.

We agree with Chief Deputy Clark that: “While an officer is required to edit the narrative and assert under penalty of perjury that it is accurate, some of the [genAI] errors are so small that they will be missed in review.”

This is a well-reasoned and cautious approach. Some police want to cut the time they spend writing reports, and Axon’s new product DraftOne claims to do so by exporting the labor to machines. But the public, and other local agencies, should be skeptical of this tech. After all, these documents are often essential for prosecutors to build their case, for district attorneys to recommend charges, and for defenders to cross-examine arresting officers.

To read more on generative AI and police reports, click here.

Preemption Playbook: Big Tech’s Blueprint Comes Straight from Big Tobacco

October 16, 2024 at 16:53

Big Tech is borrowing a page from Big Tobacco's playbook to wage war on your privacy, according to Jake Snow of the ACLU of Northern California. We agree.  

In the 1990s, the tobacco industry attempted to use federal law to override a broad swath of existing state laws and prevent states from acting on those areas in the future. For Big Tobacco, it was the “Accommodation Program,” a national campaign ultimately aimed at overriding state indoor smoking laws with a weaker federal law. Big Tech is now attempting the same with federal privacy bills, like the American Privacy Rights Act (APRA), that would preempt many state privacy laws.

In “Big Tech is Trying to Burn Privacy to the Ground–And They’re Using Big Tobacco’s Strategy to Do It,” Snow outlines a three-step process that both industries have used to weaken state laws. Faced with a public relations crisis, the industries:

  1. Muddy the waters by introducing various weak bills in different states.
  2. Complain that the resulting laws are too confusing to comply with.
  3. Ask for federal “preemption” of the grassroots efforts.

“Preemption” is a legal doctrine that allows a higher level of government to supersede the power of a lower level of government (for example, a federal law can preempt a state law, and a state law can preempt a city or county ordinance).  

EFF has a clear position on this: we oppose federal privacy laws that preempt current and future state privacy protections, especially preemption that imposes a lower federal standard.

Congress should set a nationwide baseline for privacy, but should not take away states' ability to react in the future to current and unforeseen problems. Earlier this year, EFF joined the ACLU and dozens of digital and human rights organizations in opposing APRA’s preemption sections. The letter points out that “the soundest approach to avoid the harms from preemption is to set the federal standard as a national baseline for privacy protections — and not a ceiling.” EFF led a similar coalition effort in 2018.

Companies that collect and use our data—and have worked to kill strong state privacy bills time and again—want Congress to believe a “patchwork” of state laws is unworkable for data privacy. But many existing federal laws concerning privacy, civil rights, and more operate as regulatory floors and do not prevent states from enacting and enforcing their own stronger statutes. Complaints about this “patchwork” have long been part of the strategy for both Big Tech and Big Tobacco.

States have long been the “laboratories of democracy” and have led the way in the development of innovative privacy legislation. Because of this, federal laws should establish a floor and not a ceiling, particularly as new challenges rapidly emerge. Preemption would leave consumers with inadequate protections, and make them worse off than they would be in the absence of federal legislation.  

Congress never preempted states' authority to enact anti-smoking laws, despite Big Tobacco’s strenuous efforts. So there is hope that Big Tech won’t be able to preempt state privacy law, either. EFF will continue advocating against preemption to ensure that states can protect their citizens effectively. 

Read Jake Snow’s article here.

Courts Agree That No One Should Have a Monopoly Over the Law. Congress Shouldn’t Change That

October 16, 2024 at 15:29

Some people just don’t know how to take a hint. For more than a decade, giant standards development organizations (SDOs) have been fighting in courts around the country, trying to use copyright law to control access to other laws. They claim that they own the copyright in the text of some of the most important regulations in the country—the codes that protect product, building, and environmental safety—and that they have the right to control access to those laws. And they keep losing because, it turns out, from New York to Missouri to the District of Columbia, judges understand that this is an absurd and undemocratic proposition.

They suffered their latest defeat in Pennsylvania, where a district court held that UpCodes, a company that has created a database of building codes—like the National Electrical Code—can include codes incorporated by reference into law. ASTM, a private organization that coordinated the development of some of those codes, insists that it retains copyright in them even after they have been adopted into law. Some courts, including the Fifth Circuit Court of Appeals, have rejected that theory outright, holding that standards lose copyright protection when they are incorporated into law. Others, like the DC Circuit Court of Appeals in a case EFF defended on behalf of Public.Resource.Org, have held that whether or not the legal status of the standards changes once they are incorporated into law, posting them online is a lawful fair use.

In this case, ASTM v. UpCodes, the court followed the latter path. Relying in large part on the DC Circuit’s decision, as well as an amicus brief EFF filed in support of UpCodes, the court held that providing access to the law (for free or subject to a subscription for “premium” access) was a lawful fair use. A key theme to the ruling is the public interest in accessing law: 

incorporation by reference creates serious notice and accountability problems when the law is only accessible to those who can afford to pay for it. … And there is significant evidence of the practical value of providing unfettered access to technical standards that have been incorporated into law. For example, journalists have explained that this access is essential to inform their news coverage; union members have explained that this access helps them advocate and negotiate for safe working conditions; and the NAACP has explained that this access helps citizens assert their legal rights and advocate for legal reforms.

We’ve seen similar rulings around the country, from California to New York to Missouri. Combined with two appellate rulings, these amount to a clear judicial consensus. And it turns out the sky has not fallen; SDOs continue to profit from their work, thanks in part to the volunteer labor of the experts who actually draft the standards and don’t do it for the royalties. You would think the SDOs would learn their lesson and turn their focus back to developing standards, not lawsuits.

Instead, SDOs are asking Congress to rewrite the Constitution and affirm that SDOs retain copyright in their standards no matter what a federal regulator does, as long as they make them available online. We know what that means because the SDOs have already created “reading rooms” for some of their standards, and they show us that the SDOs’ idea of “making available” is “making available as if it was 1999.” The texts are not searchable, cannot be printed, downloaded, highlighted, or bookmarked for later viewing, and cannot be magnified without becoming blurry. Cross-referencing and comparison are virtually impossible. Often, a reader can view only a portion of each page at a time and, upon zooming in, must scroll from right to left to read a single line of text. As if that wasn’t bad enough, these reading rooms are inaccessible to print-disabled people altogether.

It’s a bad bargain that would trade our fundamental due process rights in exchange for a pinky promise of highly restricted access to the law. But if Congress takes that step, it’s a comfort to know that we can take the fight back to the courts and trust that judges, if not legislators, understand why laws are facts, not property, and should be free for all to access, read, and share. 

EFF and IFPTE Local 20 Attain Labor Contract

By: Josh Richman
October 16, 2024 at 11:17
First-Ever, Three-Year Pact Protects Workers’ Pay, Benefits, Working Conditions, and More

SAN FRANCISCO—Employees and management at the Electronic Frontier Foundation have achieved a first-ever labor contract, they jointly announced today.  EFF employees have joined the Engineers and Scientists of California Local 20, IFPTE.  

The EFF bargaining unit includes more than 60 non-management employees in teams across the organization’s program and administrative staff. The contract covers the usual scope of subjects including compensation; health insurance and other benefits; time off; working conditions; nondiscrimination, accommodation, and diversity; hiring; union rights; and more. 

"EFF is its people. From the moment that our staff decided to organize, we were supportive and approached these negotiations with a commitment to enshrining the best of our practices and adopting improvements through open and honest discussions,” EFF Executive Director Cindy Cohn said. “We are delighted that we were able to reach a contract that will ensure our team receives the wages, benefits, and protections they deserve as they continue to advance our mission of ensuring that technology supports freedom, justice and innovation for all people of the world.” 

“We’re pleased to have partnered with EFF management in crafting a contract that helps our colleagues thrive both at work and outside of work,” said Shirin Mori, a member of the EFF Workers Union negotiating team. “This contract is a testament to creative solutions to improve working conditions and benefits across the organization, while also safeguarding the things employees love about working at EFF. We deeply appreciate the positive working relationship with management in establishing a strong contract.” 

The three-year contract was ratified unanimously by EFF’s board of directors Sept. 18, and by 96 percent of the bargaining unit on Sept. 25. It is effective Oct. 1, 2024 through Sept. 30, 2027. 

EFF is the largest and oldest nonprofit defending civil liberties in the digital world. Founded in 1990, EFF champions user privacy, free expression, and innovation through impact litigation, policy analysis, grassroots activism, and technology development.  

The Engineers and Scientists of California Local 20, International Federation of Professional and Technical Engineers, is a democratic labor union representing more than 8,000 engineers, scientists, licensed health professionals, and attorneys at PG&E, Kaiser Permanente, the U.S. Environmental Protection Agency, Legal Aid at Work, numerous clinics and hospitals, and other employers throughout Northern California.  

For the contract: https://ifpte20.org/wp-content/uploads/2024/10/Electronic-Frontier-Foundation-2024-2027.pdf 

For more on IFPTE Local 20: https://ifpte20.org/ 

Civil Rights Commission Pans Face Recognition Technology

In its recent report, Civil Rights Implications of Face Recognition Technology (FRT), the U.S. Commission on Civil Rights identified serious problems with the federal government’s use of face recognition technology, and in doing so recognized EFF’s expertise on this issue. The Commission focused its investigation on the Department of Justice (DOJ), the Department of Homeland Security (DHS), and the Department of Housing and Urban Development (HUD).

According to the report, the DOJ primarily uses FRT within the Federal Bureau of Investigation and U.S. Marshals Service to generate leads in criminal investigations. DHS uses it in cross-border criminal investigations and to identify travelers. And HUD implements FRT with surveillance cameras in some federally funded public housing. The report explores how federal training on FRT use in these departments is inadequate, identifies threats that FRT poses to civil rights, and proposes ways to mitigate those threats.

EFF supports a ban on government use of FRT and strict regulation of private use. In April of this year, we submitted comments to the Commission to voice these views. The Commission’s report quotes our comments explaining how FRT works, including the steps by which FRT uses a probe photo (the photo of the face that will be identified) to run an algorithmic search that matches the face within the probe photo to those in the comparison data set. Although EFF aims to promote a broader understanding of the technology behind FRT, our main purpose in submitting the comments was to sound the alarm about the many dangers the technology poses.

The government should not use face recognition because it is too inaccurate to determine people’s rights and benefits, its inaccuracies impact people of color and members of the LGBTQ+ community at far higher rates, it threatens privacy, it chills expression, and it introduces information security risks. The report highlights many of the concerns that we've stated about privacy, accuracy (especially in the context of criminal investigations), and use by “inexperienced and inadequately trained operators.” The Commission also included data showing that face recognition is much more likely to reach a false positive (inaccurately matching two photos of different people) than a false negative (inaccurately failing to match two photos of the same person). According to the report, false positives are even more prevalent for Black people, people of East Asian descent, women, and older adults, thereby posing equal protection issues. These disparities in accuracy are due in part to algorithmic bias. Relatedly, photographs are often unable to accurately capture dark-skinned people’s faces, which means that the initial inputs to the algorithm can themselves be unreliable. This poses serious problems in many contexts, but especially in criminal investigations, in which the stakes of an FRT misidentification are people’s lives and liberty.
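
For readers who want the mechanics made concrete, the matching step described above can be sketched in a few lines of Python. This is a minimal illustration of the general technique (comparing a numerical “embedding” of the probe photo against a gallery of embeddings), not the implementation of any vendor’s actual system; the embedding values, the threshold, and the gallery are all hypothetical.

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        # Similarity between two face embeddings; closer to 1.0 means more alike.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def search_gallery(probe, gallery, threshold=0.9):
        # Return every gallery identity whose embedding clears the threshold.
        # The threshold encodes the trade-off the report describes: lowering it
        # yields more false positives (wrongly matching two different people);
        # raising it yields more false negatives (missing the same person).
        return [name for name, emb in gallery.items()
                if cosine_similarity(probe, emb) >= threshold]

    # Hypothetical embeddings; a real system derives these from face photos
    # with a neural network, and any bias in that network skews the results.
    gallery = {"person_a": np.array([0.90, 0.10, 0.20]),
               "person_b": np.array([0.10, 0.95, 0.30])}
    probe = np.array([0.88, 0.15, 0.22])
    print(search_gallery(probe, gallery))  # ['person_a']

The threshold here is a policy choice baked into software: a setting that looks accurate overall can still produce very different error rates across demographic groups if the underlying embeddings are biased.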

The Commission recommends that Congress and agency chiefs enact better oversight and transparency rules. While EFF agrees with many of the Commission’s critiques, the technology poses grave threats to civil liberties, privacy, and security that require a more aggressive response. We will continue fighting to ban face recognition use by governments and to strictly regulate private use. You can join our About Face project to stop the technology from entering your community and encourage your representatives to ban federal use of FRT.

New EFF Report Provides Guidance to Ensure Human Rights are Protected Amid Government Use of AI in Latin America

October 15, 2024 at 15:48

Governments increasingly rely on algorithmic systems to support consequential assessments and determinations about people’s lives, from judging eligibility for social assistance to trying to predict crime and criminals. Latin America is no exception. With the use of artificial intelligence (AI) posing human rights challenges in the region, EFF today released the report Inter-American Standards and State Use of AI for Rights-Affecting Determinations in Latin America: Human Rights Implications and Operational Framework.

This report draws on international human rights law, particularly standards from the Inter-American Human Rights System, to provide guidance on what state institutions must look out for when assessing whether and how to adopt artificial intelligence (AI) and automated decision-making (ADM) systems for determinations that can affect people’s rights.

We organized the report’s content, along with testimonies on current challenges from civil society experts on the ground, on our project landing page.

AI-based Systems Implicate Human Rights

The report comes amid deployment of AI/ADM-based systems by Latin American state institutions for services and decision-making that affect human rights. Colombians must undergo classification by Sisbén, which measures their degree of poverty and vulnerability, if they want to access social protection programs. News reports in Brazil have once again flagged the problems and perils of Córtex, an algorithm-powered surveillance system that cross-references various state databases with wide reach and poor controls. Risk-assessment systems that seek to predict school dropout, children’s rights violations, or teenage pregnancy have been integrated into government programs in countries like México, Chile, and Argentina. Different courts in the region have also implemented AI-based tools for a varied range of tasks.

EFF’s report aims to address two primary concerns: opacity and lack of human rights protections in state AI-based decision-making. Algorithmic systems are often deployed by state bodies in ways that obscure how decisions are made, leaving affected individuals with little understanding or recourse.

Additionally, these systems can exacerbate existing inequalities, disproportionately impacting marginalized communities without providing adequate avenues for redress. The lack of public participation in the development and implementation of these systems further undermines democratic governance, as affected groups are often excluded from meaningful decision-making processes relating to government adoption and use of these technologies.

This is at odds with the human rights protections most Latin American countries are required to uphold. A majority of states have committed to comply with the American Convention on Human Rights and the Protocol of San Salvador. Under these international instruments, they have the duty to respect human rights and prevent violations from occurring. States’ responsibilities under international human rights law as guarantors of rights, and people and social groups as rights holders—entitled to claim those rights and to participate—are two basic tenets that must guide any legitimate use of AI/ADM systems by state institutions for consequential decision-making, as we underscore in the report.

Inter-American Human Rights Framework

Building on extensive research into the Inter-American Commission on Human Rights’ reports and the Inter-American Court of Human Rights’ decisions and advisory opinions, we map out the human rights implications and devise an operational framework for their due consideration in government use of algorithmic systems.

We detail what states’ commitments under the Inter-American System mean when state bodies decide to implement AI/ADM technologies for rights-based determinations. We explain why this adoption must fulfill the principles of necessity and proportionality, and what that entails. We underscore what it means to take a human rights approach to state AI-based policies, including crucial red lines for not moving ahead with deployment.

We elaborate on what states must observe to ensure critical rights in line with Inter-American standards. We look particularly at political participation, access to information, equality and non-discrimination, due process, privacy and data protection, freedoms of expression, association and assembly, and the right to a dignified life in connection to social, economic, and cultural rights.

Some of these embody principles that must cut across the different stages of AI-based policies or initiatives—from scoping the problem state bodies seek to address and assessing whether algorithmic systems can reliably and effectively contribute to achieving their goals, to continuously monitoring and evaluating implementation.

These cross-cutting principles integrate the comprehensive operational framework we provide in the report for governments and civil society advocates in the region.

Transparency, Due Process, and Data Privacy Are Vital

Our report’s recommendations reinforce that states must ensure transparency at every stage of AI deployment. Governments must provide clear information about how these systems function, including the categories of data processed, performance metrics, and details of the decision-making flow, including human and machine interaction.

It is also essential to disclose important aspects of how they were designed, such as details on the model’s training and testing datasets. Moreover, decisions based on AI/ADM systems must have a clear, reasoned, and coherent justification. Without such transparency, people cannot effectively understand or challenge the decisions being made about them, and the risk of unchecked rights violations increases.
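
As a purely illustrative sketch, the disclosure elements described above could be published as a structured record alongside the system. The program, field names, and values below are hypothetical and are not a format the report prescribes.

    # Hypothetical transparency record for an AI/ADM system used in benefits
    # decisions. Every field name and value here is illustrative only.
    disclosure = {
        "system": "social-assistance eligibility scoring (hypothetical)",
        "data_categories": ["income records", "household composition"],
        "training_data": "national administrative records, 2015-2020",
        "testing_data": "held-out 2021 records, documented and published",
        "performance_metrics": {
            "false_positive_rate": "published per demographic group",
            "false_negative_rate": "published per demographic group",
        },
        "decision_flow": (
            "model assigns a priority score; a human caseworker reviews it, "
            "makes the final determination, and records a reasoned justification"
        ),
        "contestation": "applicants may obtain the inputs used and appeal",
    }

    for field, value in disclosure.items():
        print(f"{field}: {value}")

The point of such a record is not the exact fields but that each decision can be traced to documented data, measured performance, and a named human step that an affected person can challenge.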

Leveraging due process guarantees is also covered. The report highlights that decisions made by AI systems often lack the transparency needed for individuals to challenge them. The lack of human oversight in these processes can lead to arbitrary or unjust outcomes. Ensuring that affected individuals have the right to challenge AI-driven decisions through accessible legal mechanisms and meaningful human review is a critical step in aligning AI use with human rights standards.

Transparency and due process relate to ensuring people can fully enjoy the rights that unfold from informational self-determination, including the right to know what data about them are contained in state records, where the data came from, and how it’s being processed.

The Inter-American Court recently recognized informational self-determination as an autonomous right protected by the American Convention. It grants individuals the power to decide when and to what extent aspects of their private life can be revealed, including their personal information. It is intrinsically connected to the free development of one’s personality, and any limitations must be legally established, and necessary and proportionate to achieve a legitimate goal.

Ensuring Meaningful Public Participation

Social participation is another cornerstone of the report’s recommendations. We emphasize that marginalized groups, who are most likely to be negatively affected by AI and ADM systems, must have a voice in how these systems are developed and used. Participatory mechanisms must not be mere box-checking exercises and are vital for ensuring that algorithmic-based initiatives do not reinforce discrimination or violate rights. Human Rights Impact Assessments and independent auditing are important vectors for meaningful participation and should be used during all stages of planning and deployment. 

Robust legal safeguards, appropriate institutional structures, and effective oversight, often neglected, are underlying conditions for any legitimate government use of AI for rights-based determinations. As AI continues to play an increasingly significant role in public life, the findings and recommendations of this report are crucial. Our aim is to make a timely and compelling contribution for a human rights-centric approach to the use of AI/ADM in public decision-making.

We’d like to thank the consultant Rafaela Cavalcanti de Alcântara for her work on this report, and Clarice Tavares, Jamila Venturini, Joan López Solano, Patricia Díaz Charquero, Priscilla Ruiz Guillén, Raquel Rachid, and Tomás Pomar for their insights and feedback to the report.

The full report is here.

EFF to New York: Age Verification Threatens Everyone's Speech and Privacy

October 15, 2024 at 14:11

Young people have a right to speak and access information online. Legislatures should remember that protecting kids' online safety shouldn't require sweeping online surveillance and censorship.

EFF reminded the New York Attorney General of this important fact in comments responding to the state's recently passed Stop Addictive Feeds Exploitation (SAFE) for Kids Act—which requires platforms to verify the ages of people who visit them. Now that New York's legislature has passed the bill, it is up to the state attorney general's office to write rules to implement it.

We urge the attorney general's office to recognize that age verification requirements are incompatible with privacy and free expression rights for everyone. As we say in our comments:

[O]nline age-verification mandates like that imposed by the New York SAFE For Kids Act are unconstitutional because they block adults from content they have a First Amendment right to access, burden their First Amendment right to browse the internet anonymously, and chill data security- and privacy-minded individuals who are justifiably leery of disclosing intensely personal information to online services. Further, these mandates carry with them broad, inherent burdens on adults’ rights to access lawful speech online. These burdens will not and cannot be remedied by new developments in age-verification technology.

We also noted that none of the methods of age verification listed in the attorney general's call for comments is both privacy-protective and entirely accurate. They each have their own flaws that threaten everyone's privacy and speech rights. "These methods don’t each fit somewhere on a spectrum of 'more safe' and 'less safe,' or 'more accurate' and 'less accurate.' Rather, they each fall on a spectrum of 'dangerous in one way' to 'dangerous in a different way'," we wrote in the comments.

Read the full comments here: https://www.eff.org/document/eff-comments-ny-ag-safe-kids-sept-2024

Should I Use My State’s Digital Driver’s License?

October 11, 2024 at 11:56

A mobile driver’s license (often called an mDL) is a version of your ID that you keep on your phone instead of in your pocket. In theory, it would work wherever your regular ID works—at TSA checkpoints, at liquor stores, to pick up a prescription, or to get into a bar. This sounds simple enough, and might even be appealing—especially if you’ve ever forgotten or lost your wallet. But there are a few questions you should ask yourself before tossing your wallet into the sea and wandering the earth with just your phone in hand.

In the United States, some proponents of digital IDs promise a future where you can present your phone to a clerk or bouncer and reveal only the information they need—your age—without revealing anything else. They imagine everyone whipping through TSA checkpoints with ease and enjoying simplified applications for government benefits. They also see it as a way to verify identity on the internet, a scheme that would likely end up censoring everyone.

There are real privacy and security trade-offs with digital IDs, and it’s not clear if the benefits are big enough—or exist at all—to justify them.

But if you are curious about this technology, there are still a few things you should know and some questions to consider.

Questions to Ask Yourself

Can I even use a Digital ID anywhere? 

The idea of being able to verify your age by just tapping your phone against an electronic reader—like you may already do to pay for items—may sound appealing. It might make checking out a little faster. Maybe you won’t have to worry about the bouncer at your favorite bar creepily wishing you “happy birthday,” or noting that they live in the same building as you.

Most of these use cases aren’t available yet in the United States. While there are efforts to enable private businesses to read mDLs, these credentials today are mainly being used at TSA checkpoints.

For example, in California, only a small handful of convenience stores in Sacramento and Los Angeles currently accept digital IDs for purchasing age-restricted items like alcohol and tobacco. TSA lists airports that support mobile driver’s licenses, but it only works for TSA PreCheck and only for licenses issued in eleven states.

Also, “selective disclosure,” like revealing just your age and nothing else, isn’t always fully baked. When we looked at California’s mobile ID app, this feature wasn’t available in the mobile ID itself; rather, it was part of the TruAge add-on. Even if the promise of this technology is appealing to you, you might not really be able to use it.

Is there a law in my state about controlling how police officers handle digital IDs?

One of our biggest concerns with digital IDs is that people will unlock their phones and hand them over to police officers in order to show an ID. Ordinarily, police need a warrant to search the content of our phones, because they contain what the Supreme Court has properly called “the privacies of life.”

There are some potential technological protections. You can technically get your digital ID read or scanned in the Wallet app on your phone without unlocking the device completely. Police could also have a special reader, like those at some retail stores.

But it’s all too easy to imagine a situation where police coerce or trick someone into unlocking their phone completely, or where a person does not even know that they just need to tap their phone instead of unlocking it. Even seasoned Wallet users screw up payment now and again, and doing so under pressure amplifies that risk. Handing your phone over to law enforcement, either to show a QR code or to hold it up to a reader, is also risky since a notification may pop up that the officer could interpret as probable cause for a search.

Currently, there are few guardrails for how law enforcement interacts with mobile IDs. Illinois recently passed a law that at least attempts to address mDL scenarios with law enforcement, but as far as we know it’s the only state to do anything so far.

At the very minimum, law enforcement should be prohibited from leveraging an mDL check to conduct a phone search.

Is it clear what sorts of tracking the state would use this for?

Smartphones have already made it significantly easier for governments and corporations to track everything we do and everywhere we go. Digital IDs are poised to add to that data collection, by increasing the frequency that our phones leave digital breadcrumbs behind us. There are technological safeguards that could reduce these risks, but they’re currently not required by law, and no technology fix is perfect enough to guarantee privacy.

For example, if you use a digital ID to prove your age to buy a six-pack of beer, the card reader’s verifier might make a record of the holder’s age status. Even if personal information isn’t exchanged in the credential itself, you may have provided payment info associated with this transaction. This collection of personal information might then be sold to data brokers, seized by police or immigration officials, stolen by data thieves, or misused by employees.
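
To see how those breadcrumbs accumulate, consider what a verifier’s log might retain from a single age check. This sketch is hypothetical; no mDL standard mandates these fields or this logging, but little currently forbids it either.

    from datetime import datetime, timezone

    # Hypothetical record a store's verifier could retain from an "age only"
    # mDL check. No name or address was disclosed, yet the entry is still
    # linkable to a person through payment and location data.
    log_entry = {
        "claim_disclosed": {"age_over_21": True},    # the only credential data
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "store_id": "store-0421",                    # which store you visited
        "terminal_id": "reader-07",                  # down to the register
        "payment_ref": "txn-998877",                 # ties the check to a card
    }

    # One entry is mundane; thousands of them, joined across merchants and
    # resold, can reconstruct where someone was and when.
    print(log_entry)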

This is just one more reason why we need a federal data privacy law: currently, there aren’t sufficient rules around how your data gets used.

Do I travel between states often?

Not every state offers or accepts digital IDs, so if you travel often, you’ll have to carry a paper ID. If you’re hoping to just leave the house, hop on a plane, and rent a car in another state without needing a wallet, that’s likely still years away.

How do I feel about what this might be used for online?

Mobile driver’s licenses are a clear fit for online age verification schemes. The privacy harms of these sorts of mandates vastly outweigh any potential benefit. Just downloading and using a mobile driver’s license certainly doesn’t mean you agree with that plan, but it’s still good to be mindful of what the future might entail.

Am I being asked to download a special app, or use my phone’s built-in Wallet?

Both Google and Apple allow a few states to use their Wallet apps directly, while other states use a separate app. For Google and Apple’s implementations, we tend to have better documentation and a clearer understanding of how data is processed. For the standalone apps, we often know less.

In some cases, states will offer Apple and Google Wallet support, while also providing their own app. Sometimes, this leads to different experiences around where a digital ID is accepted. For example, in Colorado, the Apple and Google Wallet versions will get you through TSA. The Colorado ID app cannot be used at TSA, but can be used at some traffic stops, and to access some services. Conversely, California’s mobile ID comes in an app, but also supports Apple and Google Wallets. Both California’s app and the Apple and Google Wallets are accepted at TSA.

Apps can also come and go. For example, Florida removed its app from the Apple App Store and Google Play Store completely. All these implementations can make for a confusing experience, where you don’t know which app to use, or what features—if any—you might get.

The Right to Paper

For now, the success or failure of digital IDs will at least partially be based on whether people show interest in using them. States will likely continue to implement them, and while it might feel inevitable, it doesn’t have to be. There are countless reasons why a paper ID should continue to be accepted. Not everyone has the resources to own a smartphone, and not everyone who has a smartphone wants to put their ID on it. As states move forward with digital ID plans, privacy and security are paramount, and so is the right to a paper ID.

Podcast Episode Rerelease: So You Think You’re A Critical Thinker

By: Josh Richman
October 11, 2024 at 03:01

This episode was first released in March 2023.

With this year’s election just weeks away, concerns about disinformation and conspiracy theories are on the rise.

We covered this issue in a really enlightening talk in March 2023 with Alice Marwick, the director of research at Data & Society, and previously the cofounder and principal researcher at the Center for Information, Technology and Public Life at the University of North Carolina, Chapel Hill.

We talked with Alice about why seemingly ludicrous conspiracy theories get so many followers, and when fact-checking does and doesn’t work. And we came away with some ideas for how to identify and leverage people’s commonalities to stem disinformation, while making sure that the most marginalized and vulnerable internet users are still empowered to speak out.

We thought this was a good time to re-publish that episode, in hopes that it might help you make some sense of what you might see and hear in the next few months.

If you believe conversations like this are important, we hope you’ll consider voting for How to Fix the Internet in the “General - Technology” category of the Signal Awards’ 3rd Annual Listener's Choice competition. Deadline for voting is Thursday, Oct. 17.

Vote now!

This episode was first published on March 21, 2023.

The promise of the internet was that it would be a tool to melt barriers and aid truth-seekers everywhere. But it feels like polarization has worsened in recent years, and more internet users are being misled into embracing conspiracies and cults.

Listen on Apple Podcasts | Listen on Spotify | Subscribe via RSS

You can also find this episode on the Internet Archive and on YouTube.

From QAnon to anti-vax screeds to talk of an Illuminati bunker beneath Denver International Airport, Alice Marwick has heard it all. She has spent years researching some dark corners of the online experience: the spread of conspiracy theories and disinformation. She says many people see conspiracy theories as participatory ways to be active in political and social systems from which they feel left out, building upon beliefs they already harbor to weave intricate and entirely false narratives.  

Marwick speaks with EFF’s Cindy Cohn and Jason Kelley about finding ways to identify and leverage people’s commonalities to stem this flood of disinformation while ensuring that the most marginalized and vulnerable internet users are still empowered to speak out.  

In this episode you’ll learn about:  

  • Why seemingly ludicrous conspiracy theories get so many views and followers  
  • How disinformation is tied to personal identity and feelings of marginalization and disenfranchisement 
  • When fact-checking does and doesn’t work  
  • Thinking about online privacy as a political and structural issue rather than something that can be solved by individual action 

Alice Marwick is director of research at Data & Society; previously, she was an Associate Professor in the Department of Communication and cofounder and Principal Researcher at the Center for Information, Technology and Public Life at the University of North Carolina, Chapel Hill. She researches the social, political, and cultural implications of popular social media technologies. In 2017, she co-authored Media Manipulation and Disinformation Online (Data & Society), a flagship report examining far-right online subcultures’ use of social media to spread disinformation, for which she was named one of Foreign Policy magazine’s 2017 Global Thinkers. She is the author of Status Update: Celebrity, Publicity and Branding in the Social Media Age (Yale 2013), an ethnographic study of the San Francisco tech scene which examines how people seek social status through online visibility, and co-editor of The Sage Handbook of Social Media (Sage 2017). Her forthcoming book, The Private is Political (Yale 2023), examines how the networked nature of online privacy disproportionately impacts marginalized individuals in terms of gender, race, and socio-economic status. She earned a political science and women's studies bachelor's degree from Wellesley College, a Master of Arts in communication from the University of Washington, and a PhD in media, culture and communication from New York University. 

Transcript

ALICE MARWICK
I show people these TikTok videos that are about these kind of outrageous conspiracy theories, like that the Large Hadron Collider at CERN is creating a multiverse. Or that there's, you know, this pyramid of tunnels under the Denver airport where they're trafficking children and people kinda laugh at them.

They're like, this is silly. And then I'm like, this has 3 million views. You know, this has more views than probably most of the major news stories that came out this week. It definitely has more views than any scientific paper or academic journal article I'll ever write, right? Like, this stuff has big reach, so it's important to understand it, even if it seems kind of frivolous or silly, or, you know, self-evident.

It's almost never self-evident. There's always some other reason behind it, because people don't do things arbitrarily. They do things that help them make sense of their lives. They give their lives meaning these are practices that people engage in because it means something to them. And so I feel like my job as a researcher is to figure out, what does this mean? Why are people doing this?

CINDY COHN
That’s Alice Marwick. The research she’s talking about is something that worries us about the online experience – the spread of conspiracy theories and misinformation. The promise of the internet was that it would be a tool that would melt barriers and aid truth-seekers everywhere. But sometimes it feels like polarization has worsened, and Internet users are misled into conspiracies and cults. Alice is trying to figure out why, how – and more importantly, how to fix it.

I’m Cindy Cohn, the Executive Director of the Electronic Frontier Foundation.

JASON KELLEY
And I’m Jason Kelley, EFF’s Associate Director of Digital Strategy.

This is our podcast series: How to Fix the Internet.

CINDY COHN
The idea behind this show is that we're trying to fix the internet. We're trying to make our digital lives better. EFF spends a lot of time warning about all the ways that things could go wrong and jumping into the fight when things do go wrong online, but what we'd like to do with this podcast is to give ourselves a vision of what the world looks like if we start to get it right.
JASON KELLEY
Our guest today is Alice Marwick. She’s a researcher at the Center for Information, Technology and Public Life at the University of North Carolina. She does qualitative research on a topic that affects everyone’s online lives but can be hard to grasp outside of anecdotal data – the spread of conspiracy theories and disinformation online.

This is a topic that many of us have a personal connection to – so we started off our conversation with Alice by asking what drew her into this area of research.

ALICE MARWICK
So like many other people I got interested in mis- and disinformation in the run-up to the 2016 election. I was really interested in how ideas that had formerly been, like, a little bit subcultural and niche in far-right circles were getting pushed into the mainstream and circulating really wildly and widely.

And in doing that research, it sort of helped me understand disinformation as a frame for understanding the way that information ties into marginalization more broadly. Disinformation is often a mechanism by which the stories that the dominant culture tells about marginalized people get circulated.

JASON KELLEY
I think it's been a primary focus for a lot of people in a lot of ways over the last few years. I know I have spent a lot of time on alternative social media platforms over the last few years because I find the topics kind of interesting to figure out what's happening there. And also because I have a friend who has kind of entered that space and, uh, I like to learn, you know, where the information that he's sharing with me comes from, essentially, right. But one thing that I've been thinking about with him and and with other folks is, is there something that happened to him that made him kind of easily radicalized, if you will? And I, I don't think that's a term that, that you recommend using, but I think a lot of people just assume that that's something that happens.

That there are people who, um, you know, grew up watching the X-files or something and ended up more able to fall into these misinformation and disinformation traps. And I'm wondering if that's, if that's actually true. It seems like from your research, it's not.

ALICE MARWICK
It's not, and that's because there's a lot of different things that bring people to disinformation, because disinformation is really deeply tied to identity in a lot of ways. There's lots of studies showing that more or less, every American believes in at least one conspiracy theory, but the conspiracy theory that you believe in is really based on who you are.

So in some cases it is about identity, but I think the biggest misconception about disinformation is that the people who believe it are just completely gullible and that they don't have any critical thinking skills and that they go on YouTube and they watch a video or they listen to a podcast and all of a sudden their entire mindset shifts.

CINDY COHN
So why is radicalization not the right term? How do you think about this term and why you've rejected it?

ALICE MARWICK
The whole idea of radicalization is tied up in this countering violent extremism movement that is multinational, that is tied to this huge surveillance apparatus, to militarization, to, in many ways, a very Islamophobic idea of the world. People have been researching why individuals commit political violence for 50 years, and they haven't found any individual characteristics that make someone more susceptible to doing something violent, like committing a mass shooting or participating in the January 6th insurrection, for example. What we see instead is that there's a lot of different puzzle pieces that can contribute to whether somebody takes on an ideology, and whether they commit acts of violence in service of that ideology.

And I think the thing that's frustrating to researchers is sometimes the same thing can have two completely different effects in people. So there's this great study of women in South America who were involved in guerrilla warfare, and some of those women, when they had kids, they were like, oh, I'm not gonna do this anymore.

It's too dangerous. You know, I wanna focus on my family. But then there was another set of women that when they had kids, they felt they had more to lose and they had to really contribute to this effort because it was really important to the freedom of them and their children.

So when you think about radicalization, there's this real desire to have this very simplistic pathway that everybody kind of just walks along and they end up a terrorist. But that's just not the way the world works. 

The second reason I don't like radicalization is because white supremacy is baked into the United States from its inception. And white supremacist ideas and racist ideas are pretty foundational. And they're in all kinds of day-to-day language and media and thinking. And so why would we think it's radical to be, for example, anti-black or anti-trans when anti-blackness and anti-transness have like these really long histories?

CINDY COHN
Yeah, I think that's right. And there is a way in which radicalization makes it sound as if that's something other than our normal society. In many instances, that's not actually what's going on.

There are pieces of our society, the water we swim in every day, that are playing a big role in some of this stuff that ends up in a very violent place. And so by calling it radicalization, we're kind of creating an other that we're not a part of, which I think means that we might miss some of the pieces of this.

ALICE MARWICK
Yeah, and I think that when we think about disinformation, the difference between a successful and an unsuccessful disinformation campaign is often whether or not the ideas exist in the culture already. One of the reasons QAnon, I think, has been so successful is that it picks up a lot of other pre-circulating conspiracy theories.

It mixes them with anti-Semitism, it mixes them with homophobia and transphobia, and it kind of creates this hideous concoction, this like potion that people drink that reinforces a lot of their preexisting beliefs. It's not something that comes out of nowhere. It's something that's been successful precisely because it reinforces ideas that people already had.

CINDY COHN
I think the other thing that I saw in your research that might have been surprising, or at least was a little surprising to me, is how participatory QAnon is.

You took a look at some of the QAnon conversations, and you could see people pulling in pieces of knowledge from other things, you know, flight patterns and unexplained deaths and other things. It's something that they're co-creating, um, which I found fascinating.

ALICE MARWICK
It's really similar to the dynamics of fandom in a lot of ways. You know, any of us who have ever participated in, like, a Usenet group or a subreddit about a particular TV show, know that people love putting theories together. They love working together to try to figure out what's going on. And obviously we see those same dynamics at play in a lot of different parts of internet culture.

So it's about taking the participatory dynamics of the internet and sort of mixing them with what we're calling conspiratorial literacy, which is sort of the ability to assemble these narratives from all these disparate places, to kind of pull together, you know, photos and Wikipedia entries and definitions and flight paths and news stories into these narratives that are really hard to make coherent sometimes, 'cause they get really complicated.

But it's also about a form of political participation. I think there's a lot of people in communities where disinformation is rampant, where they feel like talking to people about QAnon or anti-vaxxing or white supremacy is a way that they can have some kind of political efficacy. It's a way for them to participate, and sometimes I think people feel really disenfranchised in a lot of ways.

JASON KELLEY
I wonder because you mentioned internet culture, if some of this is actually new, right? I mean, we had satanic panics before and something I hear a lot of in various places is that things used to be so much simpler when we had four television channels and a few news anchors and all of them said the same thing, and you couldn't, supposedly, you couldn't find your way out into those other spaces. And I think you call this the myth of the epistemically consistent past. Um, and is that real? Was that a real time that actually existed? 

ALICE MARWICK
I mean, let's think about who that works for, right? If you're thinking about like 1970, let's say, and you're talking about a couple of major TV networks, no internet, you know, your main interpersonal communication is the telephone. Basically, what the mainstream media is putting forth is the narrative that people are getting.

And there's a very long history of critique of the mainstream media, of putting forth a narrative that's very state sponsored, that's very pro-capitalist, that writes out the histories of lots and lots of different types of people. And I think one of the best examples of this is thinking about the White Press and the Black Press.

And the Black Press existed because the White Press didn't cover stories that were of interest to the black community, or they strategically ignored those stories. Like the Tulsa Race massacre, for example, like that was completely erased from history because the white newspapers were not covering it.

So when we think about an epistemically consistent past, we're thinking about the people who that narrative worked for.

CINDY COHN
I really appreciate this point. To me, what was exciting about the internet... and, you know, I'm a little older, I was alive during the seventies and watched Walter Cronkite. There was this idea that old white guys in New York get to decide what the rest of us see, because that's who ran the networks, right?

And maybe we had a little PBS, so we got a little Sesame Street too.

But the promise of the Internet was that we could hear from more and more diverse voices, and reduce the power of those gatekeepers. What is scary is that some people are now pretty much saying that the answers to the problems of today’s Internet is to find four old white guys and let them decide what all the rest of us see again.    

ALICE MARWICK
I think it's really easy to blame the internet for the ills of society, and I guess I'm a digital critic, but ultimately I love the internet. I love social media, I love online community, I love the possibilities that the internet has opened up for people. And when I look at the main amplifiers of disinformation, it's often politicians and political elites whose platforms are basically independent of the internet.

Like, people are gonna cover leading politicians regardless of what media they're covering them with. And when you look at something like the lies around the Dominion voting machines, yes, those lies start in these really fringy internet communities, but they're picked up and amplified incredibly quickly by mainstream politicians.

And then they're covered by mainstream news. So who's at fault there? I think that blaming the internet really ignores the fact that there's a lot of other players here, including the government, you know, politicians, these big mainstream media sources. And it's really convenient to blame all social media or just the entire internet for some of these ills, but I don't think it's accurate.

CINDY COHN
Well, one of the things that I saw in your research, and that our friend Yochai Benkler has found in a lot of his work, is the role of amplifiers. These places where people agree about things that aren't true, and converse about things that aren't true, predate the internet. Maybe the internet gave a little juice to them, but what really gives juice to them is these amplifiers who, as I think you rightly point out, are some of the same people who were the mainstream media controllers in that hazy past of yore. I think that if this stuff never makes it to more popular amplifiers, it doesn't become the kind of thing that we worry about nearly so much.

ALICE MARWICK
Yeah, I mean, when I was looking at white supremacist disinformation in 2017,  someone I spoke with pointed out that the mainstream media is the best recruitment tool for white supremacists because historically it's been really hard for white supremacists to recruit. And I'm not talking about like historically, like in the thirties and forties, I'm talking about like in the eighties and nineties when they had sort of lost a lot of their mainstream political power.

It was very difficult to find like-minded people, especially if people were living in places that were a little bit more progressive or were multiracial. Most people, reading a debunking story in the Times or the Post or whatever about white supremacist ideas, are going to disagree with those ideas.

But even if one in a thousand believes them and is like, oh wow, this is a person who's spreading white supremacist ideas, I can go to them and learn more about it, that is a far more powerful platform than anything that these fringe groups had in the past. And one of the things that we've noticed in our research is that often conspiracy theories go mainstream precisely because they're being debunked by the mainstream media.

CINDY COHN
Wow. So there's two kinds of amplifiers. There's the amplifiers who are trying to debunk things and perhaps accidentally amplify them, but there are people who are intentional amplifiers as well. Both of them have the same effect, or at least both of them can spread the misinformation.

ALICE MARWICK
Yeah. I mean, of course, debunking has great intentions, right? We don't want horrific misinformation and disinformation to spread unchecked. But one of the things that we noticed when we were looking at news coverage of disinformation was that a lot of the time, the debunking aspect was not as strong as we would've expected.

You know, you would expect a news story saying, this is not true, this is false, the presumptions are false. But instead, you'd often get these stories where they kind of repeated the narrative and then at the end there was, you know, this is incorrect. And the false narrative is often much more interesting and exciting than whatever the banal truth is.

So I think a lot of this has to do with the business model of journalism, right? There's a real need to comment on everything that comes across Twitter, just so that you can get some of the clicks for it. And that's been really detrimental, I think, to journalists having the time and the space to really research things and craft their pieces.

You know, it's an underpaid occupation. They're under a huge amount of economic and time pressure to like get stories out. A lot of them are working for these kind of like clickbaity farms that just churn out news stories on any hot topic of the day. And I think that is just as damaging and dangerous as some of these social media platforms.

JASON KELLEY
So when it comes to debunking, there's a sort of parallel, which is fact checking. And, you know, I have tried to fact check people myself, individually. It doesn't seem to work. Does it work when it's kind of built into the platform, as we've seen in different spaces like Facebook, or Twitter with the Community Notes they're testing out now?

Or does that also kind of amplify it in some way because it just serves to upset, let's say, the people who have already decided to latch onto the thing that is supposedly being fact checked.

ALICE MARWICK
I think fact checking does work in some instances, if it's about things that people don't already have a deep emotional attachment to. I think sometimes also if it's coming from someone they trust, you know, like a relative or a close friend. I think there are instances in which it doesn't get emotional and people are like, oh, I was wrong about that, that's great. And then they move on.

When it's something like Facebook where, you know, there's literally like a little popup saying, you know, this is untrue. Oftentimes what that does is it just reinforces this narrative that the social platforms are covering things up and that they're biased against certain groups of people because they're like, oh, Facebook only allows for one point of view.

You know, they censor everybody who doesn't believe X, Y, or Z. And the thing is that I think both liberals and conservatives believe that, obviously the narrative that social platforms censor conservatives is much stronger. But if you look at the empirical evidence, conservative stories perform much better on social media, specifically Facebook and Twitter, than do liberal stories.

So it's kind of like it makes nobody happy. I don't think we should be amplifying, especially, extremist views or views that are really dangerous. And I think that what you wanna do is get rid of the lowest-hanging fruit. You don't wanna convert new people to these ideas. There might be some people who are already so enmeshed in some of these communities that it's gonna be hard for them to find their way out, but let's try to minimize the number of people who are exposed to it.

JASON KELLEY
That's interesting. It sounds like there are some models of fact checking that can help, but it really applies more to the type of information that's being fact checked than to the specific way that the platform sets it up. Is that what I'm hearing? Is that right?

ALICE MARWICK
Yeah, I mean, the problem is, with a lot of people online, I bet if you ask 99 people if they consider themselves to be critical thinkers, 95 would say, yes, I'm a critical thinker. I'm a free thinker.

JASON KELLEY
A low estimate, I'm pretty sure.

ALICE MARWICK
A low estimate. So let's say you ask a hundred people and 99 say they're critical thinkers. You know, I interview a lot of people who have sort of what we might call unusual beliefs, and they all claim that they do fact checking and that, when they hear something, they want to see if it's true.

And so they go and read other perspectives on it. And obviously, you know, they're gonna tell the researcher what they think I wanna hear. They're not gonna be like, oh, I saw this thing on Facebook and then I spread it to 2,000 people, and then it turned out it was false. But especially in communities like QAnon or anti-vaxxers, they already think of themselves as researchers.

A lot of people who are into conspiracy theories think of themselves as researchers. That's one of their identities. And they spend quite a bit of time going down rabbit holes on the internet, looking things up and reading about it. And it's almost like a funhouse mirror held up to academic research because it is about the pleasure of learning, I think, and the joy of sort of educating yourself and these sort of like autodidactic processes where people can kind of learn just for the fun of learning. Um, but then they're doing it in a way that's somewhat divorced from what I would call sort of empirical standards of data collection or, you know, data assessment.

CINDY COHN
So, let's flip it around for a second. What does it look like if we are doing this right? What are the things that we would see in our society and in our conversations that would indicate that we're, we're kind of on the right path, or that we're, we're addressing this?

ALICE MARWICK
Well, I mean, the problem is this is a big problem. So it requires a lot of solutions. A lot of different things need to be worked on. You know, the number one thing I think would be toning down, you know, violent political rhetoric in general. 

Now how you do that, I'm not sure. I think there's this kind of window of discourse that's open that I think needs to be shut, where maybe we need to get back to slightly more civil levels of discourse. That's a really hard problem to solve. In terms of the internet, I think right now there's been a lot of focus on the biggest social media sites, and I think that what's happening is you have a lot of smaller social sites, and it's much more difficult to play whack-a-mole with a hundred different platforms than it is with three.

CINDY COHN
Given that we think that a pluralistic society is a good thing and we shouldn't all be having exactly the same beliefs all the time. How do we nurture that diversity without, you know, without the kind of violent edges? Or is it inevitable? Is there a way that we can nurture a pluralistic society that doesn't get to this us versus them, what team are you on kind of approach that I think underlies some of the spilling into violence that we've seen?

ALICE MARWICK
This is gonna sound naive, but I do think that there's a lot more commonalities between people than there are differences. So I interviewed a woman who's a conservative evangelical anti-vaxxer last week, and, you know, she and I don't have a lot in common in any way, but we had, like, a very nice conversation. And one of the things that she told me is that she has this one particular interest that's brought her into conversation with a lot of really liberal people.

And so because she's interacted with a lot of them, she knows that they're not, like, demonic or evil. She knows they're just people, and they have really different opinions on a lot of really serious issues, but they're still able to sort of chat about the things that they do care about.

And I think that if we can trace those lines of inclusion and connectivity between people, that's a much more positive area for growth, I think, than just constantly focusing on the differences. And that's easy for me to say as a white woman, right? Like, it's much harder to deal with these differences if the difference in question is that the person thinks you're, you know, genetically inferior, or that you shouldn't exist.

Those are things that are not easy. You can't just kumbaya your way out of those kinds of things. And in that case, I think we need to center the concerns of the most vulnerable and of the most marginalized, and make sure they're the ones whose voices are getting heard and their concerns are being amplified, which is not always the case, unfortunately.

JASON KELLEY
So let's say that we got to that point and um, you know, the internet space that you're on isn't as polarized, but it's pluralistic. Can you describe a little bit about what that feels like in your mind?

ALICE MARWICK
I think one thing to remember is that most people don't really care about politics. You know, a lot of us are kind of Twitter-obsessed and we follow the news and we see our news alerts come up on our phones and we're like, ooh, what just happened? Most people don't really care about that stuff. Look at a site like Reddit, which gets a bad rap, but which I think is just a wonderful site for a lot of reasons.

It's mostly focused around interest-based communities, and the vast, vast majority of them are not about politics. They're about all kinds of other things. You know, very mundane stuff. Like, you have a dog or a cat, or you like The White Lotus and you wanna talk about the finale. Or you live in a community and you want to talk about the fact that they're building a new McDonald's on, like, Route Six or whatever.

Yes, in those spaces you'll see people get into spats and you'll see people get into arguments and in those cases, there's usually some community moderation, but generally I think a lot of those communities are really healthy and positive. The moderators put forth like these are the norms.

And I think it's funny, I think some people would be surprised to hear Reddit called uplifting, but I think you see the same thing in some Facebook groups as well, um, where you have people who really love, like, quilting. I'm in dozens and dozens of Facebook groups on all kinds of weird things.

Like, “I found this weird thing at a thrift store,” or “I found this painting, you know, what can you tell me about it?” And I get such a kick out of seeing people from all these walks of life come together and talk about these various interests. And I do think that, you know, that's the utopian ideal of the internet that got us all so into it in the eighties and nineties.

This idea that you can come together with people and talk about things that you care about, even if you don't have anyone in your local immediate community who cares about those same things, and we've seen over and over that, that can be really empowering for people. You know, if you're an LGBTQ person in an area where there aren't that many other LGBTQ people, or if you're a black woman and you're the only black woman at your company, you know, you can get resources and support for that.

If you have an illness that isn't very well understood, you know, you can do community education on that. So, you know, these pockets of the internet, they exist and they're pretty big. And when we just constantly focus on this small minority of people who are on Twitter yelling at each other about stuff, I think it really overlooks the fact that so much of the internet is already this place of enjoyment and, you know, hope.

CINDY COHN
Oh, that is so right, and so good to be reminded that it's not that we have to fix the internet, it's that we have to grow the part of the internet that never got broken. Right? The part that's already fixed.

JASON KELLEY
Let’s take a quick moment to say thank you to our sponsor.

“How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.


CINDY COHN
Now back to our conversation with Alice Marwick. In addition to all of her fascinating research on disinformation that we’ve been talking about so far, Alice has also been doing some work on another subject very near and dear to our hearts here at EFF – privacy.

Alice has a new book coming out in May 2023 called The Private is Political – so of course we couldn’t let her go without talking about that. 

ALICE MARWICK
I wanted to look at how you can't individually control privacy anymore, because all of our privacy is networked because of social media and big data. We share information about each other, and information about us is collected by all kinds of entities.

You know, you can configure your privacy settings till the cows come home, but it's not gonna change whether your photo gets swept up in, you know, some AI that then uses it for other kinds of purposes. And the second thing is to think about privacy as a political issue that has big impacts on everyone's lives, especially people who are marginalized in other areas.

I interviewed, oh, people from all kinds of places and spaces with all sorts of identities, and there's this really big misconception that people don't care about privacy. But people care very deeply about privacy, and the way that they show that care manifests in so many different kinds of creative ways.

And so I'm looking forward to sharing the stories of the people I spoke with.

CINDY COHN
That's great. Can you tell us one or I, I don't wanna spoil it, but -

ALICE MARWICK
Yeah, no. So I spoke with Jazz in North Carolina. These are all pseudonyms. Jazz is an atheist, genderqueer person, and they come from a pretty conservative Southern Baptist family, and they're also homeless. They have a child who lives with their sister, and they get a little bit of help from their family. Not a lot, but enough that it can make the difference between whether they get by or not.

So what they did is they created two completely different sets of internet accounts. They have two Facebooks, two Twitters, two email addresses. Everything is different and it's completely firewalled. On one, they use their preferred name and their pronouns. On the other, they use the pronouns they were assigned at birth and the name that their parents gave them. And the contrast between the two was just extreme. Jazz said that the Facebook page that really reflects them, that's their “me” page. That's where they can be who they really are, because they have to kind of cover up who they are in so many other areas of their lives.

So they get this sort of big kick out of having this space on the internet where they can be, like, fiery, and they can talk about politics and gender and things that they care about. But they have a lot to lose if that, you know, seeps into their other life. So they have to be really cognizant of things like: who does Facebook recommend that you friend? Who might see my other email address? Who might do a Google search for my name?

And so I call this privacy work. It's the work that all of us do to maintain our privacy, and we all do it, but it's just much more intense for some kinds of people. And so I see in Jazz, you know, a lot of these themes: somebody who is suffering from intersectional forms of marginalization but is still kind of doing the best they can.

And, you know, moving forward in the world, somebody who's being very creative with the internet. They're using it in ways that none of the designers or technologists ever intended, and they're making it work for them. But they're also not served well by these technologies, because they don't have the options to set the technologies up in ways that would fit their life or their needs.

And so what I'm really calling for here is, rather than thinking about privacy as individual, as something we each have to solve, to see it as a political and a structural problem that cannot be solved by individual responsibility or individual actions.

CINDY COHN
I so support that. That is certainly what we've experienced in the world as well. You know, the fight against the Real Names policy, say at Facebook, which really impacted the LGBTQ and trans communities, especially because people are changing their names, right? And that's important.

This real names policy, you know, first of all, it's based on not-good science, this idea that if you attach people's names to what they say, they will behave better. Which is, you know, belied by all of Facebook, and it doesn't have any science behind it at all. But there are also these negative effects for people's safety. You know, we work with a lot of domestic violence victims, and being able to separate out one identity from another is tremendously important and can matter for people's very lives. Or it could just be like, you know, when I'm Cindy at the dog park, I'm not interested in being Cindy who's the ED of EFF. Being able to segment out your life and show up as different people, there's a lot of power in that, even if it's not necessary to save your life.

ALICE MARWICK
Yeah, absolutely. Sort of that, that ability to maintain our social roles and to play different aspects of ourselves at different times. That's like a very human thing, and that's sort of fundamental to privacy. It's what parts of yourself do you wanna reveal at any given time. And when you have these huge sites like Facebook where they want a real name and they want you to have a persistent identity, it makes that really difficult.

Whereas sites like Reddit where you can have a pseudonym and you can have 12 accounts and nobody cares, and the site is totally designed to deal with that. You know, that works a lot better with how most people, I think, want to use the internet.

CINDY COHN
What other things do you think we can do? I mean, I'm assuming that we need some legal support here as well as technical support for a more private internet. Really, a more privacy-protective internet.

ALICE MARWICK
I mean, we need comprehensive data privacy laws.

CINDY COHN
Yeah.

ALICE MARWICK
The fact that every different type of personal information is governed differently and some aren't governed at all. The fact that your email is not private, that, you know, anything you do through a third party is not private, whereas your video store records are private.

That makes no sense whatsoever. You know, it's just this complete amalgam. It doesn't have any underlying principle whatsoever. The other thing I would say is data brokers. We gotta get rid of them. You shouldn't be able to collect data for one purpose and then use it for God knows how many other purposes.

I think, you know, I was very happy under the Obama administration to see that the FTC was starting to look into data brokers. It seems like we lost a lot of that energy during the Trump administration, but you know, to me they're public enemy number one. Really don't like 'em.

CINDY COHN
We are with you.  And you know this isn’t new – as early as 1973 the federal government developed  something called the Fair Information Practice Principles that included recognizing that it wasn’t fair to collect data for one purpose and then use it for another without meaningful consent – but that’s the central proposition that underlies the data broker business model. I appreciate that your work confirms that those ideas are still good ones.  

ALICE MARWICK
Yeah, I think there's sort of a group of people doing critical privacy and critical surveillance studies, a more diverse group of people than we've typically seen studying privacy. For a long time it was just sort of the domain of, you know, legal scholars and computer scientists. And so now that it's being opened up to qualitative analysis and sociology and other fields, I think we're starting to see a much more comprehensive understanding, which hopefully at some point will, you know, affect policymaking and technology design as well.

CINDY COHN
Yeah, I sure hope so. I mean, I think we're in a time when our US Supreme Court is really not grappling with privacy harms and is effectively making it harder and harder to at least use the judicial remedies to try to address privacy harm. So this development in the rest of society, and in people's thinking, eventually, I think, will leak over into the judicial side.

But one of the things that a fixed internet would give us is the ability to have actual accountability for privacy harms, at a level that's much better than what we have now. And the other thing I hear you really developing is that maybe the individual model, which is kind of inherent in a lot of litigation, isn't really the right model for thinking about how to remedy all of this either.

ALICE MARWICK
Well, a lot of it is just theatrical, right? It reminds me of, you know, security theater at the airport. Like the idea of clicking through a 75-page terms of service change that's written at a level that would require a couple of years of law school to parse. If you actually sat and read all of those, it would take up like two weeks of your life every year.

Like that is just preposterous. Like, nobody would sit and be like, okay, well here's a problem. What's the best way to solve it? It's just a loophole that allows companies to get away with all kinds of things that I think are, you know, unethical and immoral by saying, oh, well we told you about it.

But I think often what I hear from people is, well, if you don't like it, don't use it. And that's easy to say if you're talking about something that is an optional extra to your life. But when we're talking about the internet, there aren't other options. And I think what people forget is that the internet has replaced a lot of technologies that kind of withered away. You know, I've driven across country three times, and the first two times were kind of pre-mobile internet, or pre-ubiquitous internet, and you had a giant road atlas in your car. Every gas station had maps and there were payphones everywhere. Now most payphones are gone; you go to a gas station and ask for directions, they're gonna look at you blankly; and no one has a road atlas. There are all these infrastructures that existed pre-internet that allowed us to exist without smartphones and the internet, and now most of those are gone. What are you supposed to do if you're in college and you're not using, at the very least, your course management system, which is probably already, you know, collecting information on you and possibly selling it to a third party?

You can't pass your class. If you're not joining your study group, which might be on Facebook or WhatsApp or whatnot, you can't communicate with people. It's absolutely ridiculous that we're just saying, oh, well, if you don't like it, don't use it. You don't tell people who are being targeted by, like, a murderous sociopath, oh, just don't go outside, right? Just stay inside all the time. That's terrible advice and it's not realistic.

CINDY COHN
No, I think that is true, and certainly when trying to find a job. I mean, there are benefits to the fact that all of this stuff is networked, but it really does shine a light on this terms of service approach, which treats it as if it were a contract, like a freely negotiated contract like I learned about in law school, with two equal parties having a negotiation and coming to a meeting of the minds. It's a whole other planet from that approach.

And trying to bring that frame to, you know, whether you enforce those terms or not, is jarring to people. It's not how people live. And so there's this way in which the legal system feels kind of divorced from our lives. If we get it right, the legal terms and the things that we are agreeing to will be things that we actually agree to, not things that are stuffed into a document that we never read, or realistically can't read.

ALICE MARWICK
Yeah, I would love it if the terms of service was an actual contract and I could sit there and be like, all right, Facebook, if you want my business, this is what you have to do for me. And make some poor entry level employees sit there and go through all my ridiculous demands. Like, sure, you want it to be a contract, then I'm gonna be an equal participant.

CINDY COHN
You want those green M&Ms in the green room?

ALICE MARWICK
Yeah, I want, I want different content moderation standards. I want a pony, I want glittery gifs on every page. You know, give it all to me.

CINDY COHN
Yeah. I mean, you know, there's a way in which a piece of the fediverse strategy that I think we're kind of at the beginning of in this moment is a little bit like that. You have a smaller community, and you have people who run the servers who you can actually interact with.

Again, I don't know that there's ponies, but, you know, one of the things that will help get us there is smaller, right? We can't do content moderation at scale, and we can't do contractual negotiations at scale. So smaller might be helpful. I don't think it's gonna solve all the problems.

But I think there's a way in which you can at least get your arms around the problem if you're dealing with a smaller community that can then interoperate with other communities, but isn't beholden to them, with one rule to rule them all.

ALICE MARWICK
Yeah, I mean, I think the biggest problem right now is usability and UX. These platforms need to be just as easy to use as the easiest social platform. It needs to be something where, if you don't have a college education, if you're not super techy, if you're only familiar with very popular social media platforms, you're still able to use things like Mastodon.

I don't think we're quite there yet, but I can see a future in which we get there.

CINDY COHN
Well thank you so much for continuing to do this work.

ALICE MARWICK
Oh, thank you. Thank you, Cindy. Thank you, Jason. It was great to chat today.

JASON KELLEY
I'm so glad we got to talk to Alice. That was a really fun conversation and one that I think really underscored a point that I've noticed, um, which is that over the last, I don't know, many years we've seen Congress and other legislators try to tackle these two separate issues that we talked with Alice about.

One being sort of content on the internet, and the other being privacy on the internet. And when we spoke with her about privacy, it was clear that there are a lot of obvious and simple and direct solutions for how we can make privacy on the internet something that actually exists, compared to content, which is a much stickier issue.

And it's interesting that Congress and other legislators have consistently focused on one of these two topics, or let's say both of them, at the expense of the one that actually is fairly direct when it comes to solutions. That really sticks out for me. But I've blathered on; what do you find most interesting about what we talked with her about? There was a lot there.

CINDY COHN
Well, I think that Alice does a great service to all of us by pointing out all the ways in which the kind of easy solutions that we reach for, especially around misinformation and disinformation, and the easy stories we tell ourselves, are not easy at all and not empirically supported. So I think one of the things she does is just shine a light on the difference between the kind of stories we tell ourselves about how we could fix some of these problems and the actual empirical evidence about whether those things will work or not.

The other thing that I appreciated is she kind of pointed to spaces on the internet where things are kind of fixed. She talked about Reddit, she talked about some of the fan fiction places, she talked about Facebook groups, pointing out that, you know, sometimes we can be overly focused on politics and the darker pieces of the internet, and that these places that are supportive and loving and good communities doing the right thing already exist.

We don't have to create them, we just have to find a way to foster them and build more of them. Make more of the internet that experience. It's refreshing to realize that, you know, massive pieces of the internet were never broken and don't need to be fixed.

JASON KELLEY
That is 100% right. We're sort of tilted, I think, to focus on the worst things, which is part of our job at EFF. But it's nice when someone says, you know, there are actually good things. And it reminds us that in a lot of ways it's working, and we can make it better by focusing on what's working.

Well that’s it for this episode of How to Fix the Internet.

Thank you so much for listening. If you want to get in touch about the show, you can write to us at podcast@eff.org or check out the EFF website to become a member, donate, or look at hoodies, t-shirts, hats and other merch, just in case you feel the need to represent your favorite podcast and your favorite digital rights organization.

This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. You can find their names and links to their music in our episode notes, or on our website at eff.org/podcast.

Our theme music is by Nat Keefe of BeatMower with Reed Mathis

How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

We’ll see you next time in two weeks.

I’m Jason Kelley

CINDY COHN
And I’m Cindy Cohn.
MUSIC CREDIT ANNOUNCER
This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by its creators:
Probably Shouldn’t by J.Lang featuring Mr_Yesterday

CommonGround by airtone featuring: simonlittlefield

Additional beds and alternate theme remixes by Gaëtan Harris

New IPANDETEC Report Shows Panama’s ISPs Still Lag in Protecting User Data

Par : Karen Gullo
10 octobre 2024 à 14:20

Telecom and internet service providers in Panama are entrusted with the personal data of millions of users, bearing a responsibility to not only protect users’ privacy but also be transparent about their data handling policies. Digital rights organization IPANDETEC has evaluated how well companies have lived up to their responsibilities in ¿Quien Defiende Tus Datos? (“Who Defends Your Data?”) reports released in 2019, 2020, and 2022, which showed persistent deficiencies.

IPANDETEC’s new Panama report, released today, reveals that, with a few notable exceptions, providers in Panama continue to struggle to meet important best practice standards like publishing transparency reports, notifying users about government requests for their data, and requiring authorities to obtain judicial authorization for data requests, among other criteria.

As in its prior reports, IPANDETEC assessed mobile phone operators Más Móvil, Digicel, and Tigo. Claro, assessed in earlier reports, was acquired by Más Móvil in 2021 and so was dropped from this year’s assessment. This year’s report also ranked fixed internet service providers InterFast Panama, Celero Fiber, and DBS Networks.

Companies were evaluated in nine categories, including disclosure of data protection policies and transparency reports, data security practices, public promotion of human rights, procedures for authorities seeking user data, publication of services and policies in native languages, and making policies and customer service available to people with disabilities. IPANDETEC also assessed whether mobile operators have opposed mandatory facial recognition for users' activation of their services.

Progress Made

Companies are awarded stars and partial stars for meeting parameters set for each category. Más Móvil scored highest with four stars, while Tigo received two and one-half stars and Digicel one and one-half. Celero scored highest among fixed internet providers with one and three-quarters stars. InterFast and DBS received three-fourths of a star and one-half star, respectively.

The report showed progress on a few fronts: Más Móvil and Digicel publish privacy policies for their services, while Más Móvil has committed to follow relevant legal procedures before providing authorities with the content of its users’ communications, a significant improvement compared to 2021.

Tigo maintains its commitment to require judicial authorization or follow established procedures before providing data and to reject requests that don’t comply with legal requirements.

Más Móvil and Tigo also stand out for joining human rights-related initiatives. Más Móvil is a signatory of the United Nations Global Compact and belongs to SUMARSE, an organization that promotes Corporate Social Responsibility (CSR) in Panama.

Tigo, meanwhile, has projects aimed at digital and social transformation, including Conectadas: Empowering Women in the Digital World, Entrepreneurs in Action: Promoting the Success of Micro and Medium-sized Enterprises, and Connected Teachers: The Digital Age for teachers.

All three fixed internet service providers received partial credit for meeting some parameters for digital security.

Companies Lag in Key Areas

Still, the report showed that internet providers in Panama have a long way to go to incorporate best practices in most categories. For instance, no company published transparency reports with detailed quantitative data for Panama.

Neither mobile nor fixed internet companies commit to informing users about requests or orders from authorities to access their personal data, according to the report. As for digital security, companies have maintained a passive position, doing little to promote digital security practices.

None of the mobile providers have opposed requiring users to undergo facial recognition to register or access their mobile phone services. As the report underlines, companies' resignation "marks a significant step backwards and affects human rights, such as the right to privacy, intimacy and the protection of personal data." Mandating face recognition as a condition to use mobile services is "an abusive intrusion into the privacy of users, setting a worrying precedent with the supposed objective of fighting crime," the report says.

No company has a website or relevant documents available in native languages. Likewise, no company has a declaration and/or accessibility policy for people with disabilities (in physical and digital environments) or important documents in an accessible format.

But it's worth noting that Más Móvil has alternative channels for people with sensory disabilities and Contact Center services for blind users, as well as remote controls with built-in voice commands to improve accessibility. Tigo, too, stands out for being the only company to have a section on its website about discounts for retired and disabled people.

IPANDETEC’s Quien Defiende Tus Datos series of reports is part of a region-wide initiative, akin to EFF’s Who Has Your Back project, which tracks and rates ISPs’ privacy policies and commitments in Latin America and Spain. 

Election Security: When to Worry, When to Not

This post was written by EFF intern Nazli Ungan as an update to a 2020 Deeplinks post by Cindy Cohn.

Everyone wants an election that is secure and reliable and that will ensure that the voters’ actual choices are reflected in the results. That’s as true heading into the 2024 U.S. general election as it always has been.

At the same time, not every problem in voting technology or systems is worth pulling the fire alarm—we have to look at the bigger story and context. And we have to stand down when our worst fears turn out to be unfounded.

Resilience is the key word when it comes to the security and the integrity of our elections. We need our election systems to be technically and procedurally resilient against potential attacks or errors. But equally important, we need the voting public to be resilient against false or unfounded claims of attack or error. Luckily, our past experiences and the work of election security experts have taught us a few lessons on when to worry and when to not.

See EFF's handout on Election Security here: https://www.eff.org/document/election-security-recommendations

We Need Risk-Limiting Audits

First, and most importantly, it is critical to have systems in place to support election technology and the election officials who run it. Machines may fail, humans may make errors. We cannot simply assume that there will not be any issues in voting and tabulation. Instead, there must be built-in safety measures that would catch any issues that may affect the official election results.  

The most important of these is performing routine, post-election Risk-Limiting Audits (RLAs) after every election. RLAs should occur even if there is no apparent reason to suspect the accuracy of the results. Risk-limiting audits are considered the gold standard of post-election audits, and they give the public justified confidence in the results. This type of audit entails manually checking randomly selected ballots until there is convincing evidence that the election outcome is correct. In many cases, it can be performed by counting only a small fraction of ballots cast, making it cheap enough to be performed in every election. When the margins are tighter, a greater fraction of the votes must be hand counted, but this is a good thing, because we want to scrutinize close contests more strictly to make sure the right person won the race. Some states have started requiring risk-limiting audits, and the rest should catch up!
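
To make the stopping rule concrete, here is a minimal sketch of a ballot-polling audit in the style of the BRAVO method, one common risk-limiting audit approach. This is an illustrative toy, not official guidance: the function name, the two-candidate simplification, and the in-memory ballot list are our assumptions, not a prescribed implementation.

```python
import random

def ballot_polling_rla(ballots, reported_winner_share, risk_limit=0.05):
    """Sample ballots in random order until there is convincing evidence
    that the reported winner really won, or the ballots run out.

    ballots: list of 'W' (vote for reported winner) or 'L' (vote for loser)
    reported_winner_share: winner's share in the reported tally (must be > 0.5)
    risk_limit: maximum chance of confirming a wrong outcome (0.05 = 5%)
    """
    assert reported_winner_share > 0.5
    ratio = 1.0                    # evidence ratio: reported result vs. a tie
    threshold = 1.0 / risk_limit   # stop once the evidence is this strong
    order = random.sample(range(len(ballots)), len(ballots))  # random draw order
    for count, i in enumerate(order, start=1):
        if ballots[i] == 'W':
            ratio *= reported_winner_share / 0.5
        else:
            ratio *= (1 - reported_winner_share) / 0.5
        if ratio >= threshold:
            return f"outcome confirmed after examining {count} ballots"
    return "evidence insufficient: escalate to a full hand count"
```

The arithmetic shows why wide margins are cheap to audit: with a reported 60% winner share and a 5% risk limit, each sampled winner ballot multiplies the evidence ratio by 1.2, so a clean sample of a few dozen ballots can cross the 1/0.05 = 20 threshold, while a near-tie pushes the audit toward a full hand count.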

We (and many others in the election integrity community) also continue to push for more transparency in election systems and more independent testing and red-team-style attacks, including end-to-end pre-election testing.

And We Need A Paper Trail

Second, voting on paper ballots continues to be extremely important and the most secure strategy. Ideally, all voters should use paper ballots marked by hand, or with an assistive device, and verify their votes before casting. If there is no paper record, there is no way to perform a post-election audit, or recount votes in the event of an error or a security incident. On the other hand, if voters vote on paper, they can verify their choices are recorded accurately. More importantly, election officials can hand count a portion of the paper ballots to make sure they match with the electronic vote totals and confirm the accuracy of the election results. 

What happened in Antrim County, Michigan in the 2020 general election illustrates the importance of paper ballots. Immediately after the election, Antrim County published inaccurate unofficial results, and then restated these results three times to correct the errors, which led to conspiracy theories about the voting systems used there. Fortunately, Antrim County voters had voted on paper ballots, so Michigan was able to confirm the final presidential results by conducting a county-wide hand count and affirm them with a state-wide risk-limiting audit pilot. This would not have been possible without paper ballots.

And we can’t stop there, because not every paper record is created equal. Some direct-recording electronic systems are equipped with a type of Voter-Verified Paper Audit Trail that makes it difficult for voters to verify their selections and for election officials to use in audits and recounts. The best practice is to have all votes cast on pre-printed paper ballots, marked by hand or with an assistive ballot marking device.

Third, it is important to have the entire voting technical system under the control of election officials so that they can investigate any potential problems, which is one of the reasons why internet voting remains a bad, bad idea. There are “significant security, privacy, and ballot secrecy challenges” associated with electronic ballot return systems, and they make it possible for a single attacker to alter thousands or even millions of votes. Maybe in the future we will have tools to limit the risks of internet voting. But until then, we should reject any proposal that includes electronic ballot return over the internet. Speaking of the internet, voting machines should never connect to the internet, dial a modem, or communicate wirelessly.

Fourth, every part of the voting process that relies on technology must have paper backups so that voting can continue even when the machines fail. This includes paper backups for electronic pollbooks, emergency paper ballots in case voting machines fail, and provisional ballots in case a voter’s eligibility cannot be confirmed.

Stay Vigilant and Informed

Fifth, we should continue to be vigilant. Election officials have come a long way from when we started raising concerns about electronic voting machines and systems. But the public should keep watching and, when warranted, not be afraid to raise or flag things that seem strange. For example, if you see something like voting machines “flipping” the votes, you should tell the poll workers. This doesn’t necessarily mean there has been a security breach; it can be as simple as a calibration error, but it can mean lost votes. Poll workers can and should address the issue immediately by providing voters with emergency paper ballots. 

Sixth, not everything that seems out of the ordinary is reason to worry. We should build societal resistance to disinformation. CISA's Election Security Rumor vs. Reality website is a good resource that addresses election security rumors and educates us on when we do or don’t need to be alarmed. State-specific information is also available online. If we see or hear anything odd about what is happening at a particular locality, we should first hear what the election officials on the ground have to say about it. After all, they were there! We should also pay attention to what non-partisan election protection organizations, such as Verified Voting, say about the incident.

The 2024 presidential election is fast approaching and there may be many claims of computer glitches and other forms of manipulation concerning our voting systems in November. Knowing when to worry and when NOT to worry will continue to be extremely important.  

In the meantime, the work of securing our elections and building resilience must continue. While not every glitch is worrisome, we should not dismiss legitimate security concerns. As often said: election security is a race without a finish line!

A Sale of 23andMe’s Data Would Be Bad for Privacy. Here’s What Customers Can Do.

The CEO of 23andMe has recently said she’d consider selling the genetic genealogy testing company—and with it, the sensitive DNA data that it has collected, and stored, from many of its 15 million customers. Customers and their relatives are rightly concerned. Research has shown that a majority of white Americans can already be identified from just 1.3 million users of a similar service, GEDMatch, due to genetic likenesses, even though GEDMatch has a much smaller database of genetic profiles. 23andMe has about ten times as many customers.

Selling a giant trove of our most sensitive data is a bad idea that the company should avoid at all costs. And for now, the company appears to have backed off its consideration of a third-party buyer. Before 23andMe reconsiders, it should at the very least make a series of privacy commitments to all its users. Those should include: 

  • Do not consider a sale to any company with ties to law enforcement or a history of security failures.
  • Prior to any acquisition, affirmatively ask all users if they would like to delete their information, with an option to download it beforehand.
  • Prior to any acquisition, seek affirmative consent from all users before transferring user data. The consent should give people a real choice to say “no.” It should be separate from the privacy policy, contain the name of the acquiring company, and be free of dark patterns.
  • Prior to any acquisition, require the buyer to make strong privacy and security commitments. That should include a commitment to not let law enforcement indiscriminately search the database, and to prohibit disclosing any person’s genetic data to law enforcement without a particularized warrant. 
  • Reconsider your own data retention and sharing policies. People primarily use the service to obtain a genetic test. A survey of 23andMe customers in 2017 and 2018 showed that over 40% were unaware that data sharing was part of the company’s business model.  

23andMe is already legally required to provide users in certain states with some of these rights. But 23andMe—and any company considering selling such sensitive data—should go beyond current law to assuage users’ real privacy fears. In addition, lawmakers should continue to pass and strengthen protections for genetic privacy. 

The privacy of personal genetic information collected by companies like 23andMe is always going to be at some level of risk, which is why we suggest consumers think very carefully before using such a service. Genetic data is immutable and can reveal very personal details about you and your family members. Data breaches are a serious concern wherever sensitive data is stored, and last year’s breach of 23andMe exposed personal information from nearly half of its customers. The data can be abused by law enforcement to indiscriminately search for evidence of a crime. Although 23andMe’s policies require a warrant before releasing information to the police, some other companies do not. In addition, the private sector could use your information to discriminate against you. Thankfully, existing law prevents genetic discrimination in health insurance and employment.  

What Happens to My Genetic Data If 23andMe is Sold to Another Company?

In the event of an acquisition or liquidation through bankruptcy, 23andMe must still obtain separate consent from users in about a dozen states before it could transfer their genetic data to an acquiring company. Users in those states could simply refuse. In addition, many people in the United States are legally allowed to access and delete their data either before or after any acquisition. Separately, the buyer of 23andMe would, at a minimum, have to comply with existing genetic privacy laws and 23andMe's current privacy policies. It would be up to regulators to enforce many of these protections. 

Below is a general legal lay of the land, as we understand it.  

  • 23andMe must obtain consent from many users before transferring their data in an acquisition. Those users could simply refuse. At least a dozen states have passed consumer data privacy laws specific to genetic privacy. For example, Montana’s 2023 law would require consent to be separate from other documents and to list the buyer’s name. While the consent requirements vary slightly, similar laws exist in Alabama, Arizona, California, Kentucky, Maryland, Minnesota, Nebraska, Tennessee, Texas, Utah, Virginia, and Wyoming. Notably, Wyoming’s law has a private right of action, which allows consumers to defend their own rights in court.
  • Many users have the legal right to access and delete their data stored with 23andMe before or after an acquisition. About 19 states have passed comprehensive privacy laws which give users deletion and access rights, but not all have taken effect. Many of those laws also classify genetic data as sensitive and require companies to obtain consent to process it. Unfortunately, most if not all of these laws allow companies like 23andMe to freely transfer user data as part of a merger, acquisition, or bankruptcy. 
  • 23andMe must comply with its own privacy policy. Otherwise, the company could be sanctioned for engaging in deceptive practices. Unfortunately, its current privacy policy allows for transfers of data in the event of a merger, acquisition, or bankruptcy. 
  • Any buyer of 23andMe would likely have to offer existing users privacy rights equal to or greater than the ones offered now, unless the buyer obtains new consent. The Federal Trade Commission has warned companies not to engage in the unfair practice of quietly reducing privacy protections of user data after an acquisition. The buyer would also have to comply with the web of comprehensive and genetic-specific state privacy laws mentioned above.
  • The federal Genetic Information Nondiscrimination Act of 2008 prevents genetic-based discrimination by health insurers and employers. 

What Can You Do to Protect Your Genetic Data Now?

Existing users can demand that 23andMe delete their data or revoke some of their past consent to research. 

If you don’t feel comfortable with a potential sale, you can consider downloading a local copy of your information to create a personal archive, and then deleting your 23andMe account. Doing so will remove all your information from 23andMe, and if you haven’t already requested it, the company will also destroy your genetic sample. Deleting your account will also remove any genetic information from future research projects, though there is no way to remove anything that’s already been shared. We’ve put together directions for archiving and deleting your account here. When you get your archived account information, some of your data will be in more readable formats than others. For example, your “Reports Summary” will arrive as a PDF that’s easy to read and includes information about traits and your ancestry report. Other information, like the family tree, arrives in a less readable format, like a JSON file.
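If you want to sanity-check what actually made it into your archive before deleting the account, a few lines of Python can list what a given file contains. This is only a rough sketch: the file name below is hypothetical, and the real export’s layout and naming vary.

import json
from pathlib import Path

# Hypothetical path; substitute a real file from your downloaded archive.
archive_file = Path("23andme_archive/family_tree.json")

with archive_file.open(encoding="utf-8") as f:
    data = json.load(f)

# Show the top-level structure so you know what you saved.
if isinstance(data, dict):
    for key, value in data.items():
        print(key, "->", type(value).__name__)
else:
    print(type(data).__name__, "with", len(data), "entries")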

You also may be one of the 80% or so of users who consented to having your genetic data analyzed for medical research. You can revoke your consent to future research as well by sending an email. Under this program, third-party researchers who conduct analyses on that data have access to this information, as well as to some data from additional surveys and other information you provide. Third-party researchers include non-profits, pharmaceutical companies like GlaxoSmithKline, and research institutions. 23andMe has used this data to publish research on diseases like Parkinson’s. According to the company, this data is deidentified, or stripped of obvious identifying information such as your name and contact information. However, genetic data cannot truly be de-identified. Even if separated from obvious identifiers like name, it is still forever linked to only one person in the world. And at least one study has shown that, when combined with data from GenBank, a National Institutes of Health genetic sequence database, data from some genealogical databases can make re-identification possible.

What Can 23andMe, Regulators, and Lawmakers Do?

As mentioned above, 23andMe must follow existing law. And it should make a series of additional commitments before ever reconsidering a sale. Most importantly, it must give every user a real choice to say “no” to a data transfer and ensure that any buyer makes real privacy commitments. Other consumer genetic genealogy companies should proactively take these steps as well. Companies should be crystal clear about where the information goes and how it’s used, and they should require an individualized warrant before allowing police to comb through their database. 

Government regulators should closely monitor the company’s plans and press the company to explain how it will protect user data in the event of a transfer of ownership—similar to the FTC’s scrutiny of Facebook’s earlier acquisition of WhatsApp.

Lawmakers should also work to pass stronger comprehensive privacy protections in general and genetic privacy protections in particular. While many of the state-based genetic privacy laws are a good start, they generally lack a private right of action and only protect a slice of the U.S. population. EFF has long advocated for a strong federal privacy law that includes a private right of action. 

Our DNA is quite literally what makes us human. It is inherently personal and deeply revealing, not just of ourselves but our genetic relatives as well, making it deserving of the strongest privacy protections. Acquisition talk about a company with a giant database of sensitive data should be a wakeup call for lawmakers and regulators to act, and when they do, EFF will be ready to support them. 

Salt Typhoon Hack Shows There's No Security Backdoor That's Only For The "Good Guys"

At EFF we’ve long noted that you cannot build a backdoor that only lets in good guys and not bad guys. Over the weekend, we saw another example of this: The Wall Street Journal reported on a major breach of U.S. telecom systems attributed to a sophisticated Chinese-government backed hacking group dubbed Salt Typhoon.

According to reports, the hack took advantage of systems built by ISPs like Verizon, AT&T, and Lumen Technologies (formerly CenturyLink) to give law enforcement and intelligence agencies access to the ISPs’ user data. This gave China unprecedented access to data related to U.S. government requests to these major telecommunications companies. It’s still unclear how much communication and internet traffic Salt Typhoon accessed, or whose.

That’s right: the path for law enforcement access set up by these companies was apparently compromised and used by China-backed hackers. That path was likely created to facilitate smooth compliance with wrong-headed laws like CALEA, which require telecommunications companies to facilitate “lawful intercepts”—in other words, wiretaps and other orders by law enforcement and national security agencies. While this is a terrible outcome for user privacy, and for U.S. government intelligence and law enforcement, it is not surprising. 

The idea that only authorized government agencies would ever use these channels for acquiring user data was always risky and flawed. We’ve seen this before: in a notorious case in 2004 and 2005, more than 100 top officials in the Greek government were illegally surveilled for a period of ten months when unknown parties broke into Greece’s “lawful access” program. In 2024, with growing numbers of sophisticated state-sponsored hacking groups operating, it’s almost inevitable that these types of damaging breaches will occur. The system of special law enforcement access that was set up for the “good guys” isn’t making us safer; it’s a dangerous security flaw.

Internet Wiretaps Have Always Been A Bad Idea

Passed in 1994, CALEA requires that makers of telecommunications equipment provide the ability for government eavesdropping. In 2004, the government dramatically expanded this wiretap mandate to include internet access providers. EFF opposed this expansion and explained the perils of wiretapping the internet.  

The internet is different from the phone system in critical ways, making it more vulnerable. The internet is open and ever-changing. “Many of the technologies currently used to create wiretap-friendly computer networks make the people on those networks more vulnerable to attackers who want to steal their data or personal information,” EFF wrote, nearly 20 years ago.

Towards Transparency And Security

The irony should be lost on no one that the Chinese government may now know more about who the U.S. government spies on, including people living in the U.S., than Americans do. The intelligence and law enforcement agencies that use these backdoor legal authorities are notoriously secretive, making oversight difficult.

Companies and people who are building communication tools should be aware of these flaws and implement, where possible, privacy by default. As bad as this hack was, it could have been much worse were it not for the hard work of EFF and other privacy advocates in making sure that more than 90% of web traffic is encrypted via HTTPS. For those hosting the 10% (or so) of the web that has yet to encrypt its traffic, now is a great time to consider turning on encryption, either by using Certbot or by switching to a hosting provider that offers HTTPS by default.
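If you are unsure where a site you run or rely on stands, a quick check takes only a few lines. This sketch uses the third-party requests library and looks at just two coarse signals: whether the site answers over HTTPS at all, and whether plain HTTP redirects visitors there. Real scanners check much more, such as certificate chains and HSTS.

import requests

def https_posture(hostname):
    # Does the site answer over HTTPS at all?
    try:
        requests.get(f"https://{hostname}", timeout=10)
        serves_https = True
    except requests.exceptions.RequestException:
        serves_https = False
    # Does plain HTTP redirect visitors to HTTPS?
    try:
        resp = requests.get(f"http://{hostname}", timeout=10,
                            allow_redirects=False)
        redirects = (resp.is_redirect and
                     resp.headers.get("Location", "").startswith("https://"))
    except requests.exceptions.RequestException:
        redirects = False
    return serves_https, redirects

print(https_posture("www.eff.org"))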

What can we do next? We must demand real privacy and security.  

That means we must reject the loud law enforcement and other voices that continue to pretend that there are “good guy only” ways to ensure access. We can point to this example, among many others, to push back on the idea that the default in the digital world is that governments (and malicious hackers) should be able to access all of our messages and files. We’ll continue to fight against US bills like EARN IT, the EU “Chat Control” file-scanning proposal, and the UK’s Online Safety Act, all of which are based on this flawed premise. 

It’s time for U.S. policymakers to step up too. If they care about China and other foreign countries engaging in espionage on U.S. citizens, it’s time to speak up in favor of encryption by default. If they don’t want to see bad actors take advantage of their constituents, domestic companies, or security agencies, again—speak up for encryption by default. Elected officials can and have done so in the past. Instead of holding hearings that give the FBI a platform to make digital wiretaps easier, they should demand accountability for the digital lock-breaking it is already doing.

The lesson will be repeated until it is learned: there is no backdoor that only lets in good guys and keeps out bad guys. It’s time for all of us to recognize this, and take steps to ensure real security and privacy for all of us.

FTC Findings on Commercial Surveillance Can Lead to Better Alternatives

8 October 2024 at 13:04

On September 19, the FTC published a staff report following a multi-year investigation of nine social media and video streaming companies. The report found a myriad of privacy violations against consumers, stemming largely from the ad-revenue-based business models of companies including Facebook, YouTube, and X (formerly Twitter), which prompted unbridled consumer surveillance practices. In addition to these findings, the FTC points out various ways in which user data can be weaponized to lock out competitors and dominate these companies’ respective markets.

The report finds that market dominance can be established and expanded through the acquisition and maintenance of user data, creating an unfair advantage and preventing new market entrants from fairly competing. EFF has found that this is true not only for new entrants who wish to compete by similarly siphoning off large amounts of user data, but also for consumer-friendly companies that carve out a niche by refusing to play the game of dominance-through-surveillance. Abusing user data in an anti-competitive manner means users may never even learn of alternatives that have their best interests in mind, rather than the best interests of the company’s advertising partners.

The relationship between privacy violations and anti-competitive behavior is elaborated upon in a section of the report which points out that “data abuse can raise entry barriers and fuel market dominance, and market dominance can, in turn, further enable data abuses and practices that harm consumers in an unvirtuous cycle.” In contrast with the recent United States v. Google LLC (2020) ruling, where Judge Amit P. Mehta found that the data collection practices of Google, though injurious to consumers, were outweighed by an improved user experience, the FTC highlighted a dangerous feedback loop in which privacy abuses beget further privacy abuses. We agree with the FTC and find the identification of this ‘unvirtuous cycle’ a helpful focal point for further antitrust action.

In an interesting segment, the report focuses on the protections the European Union’s General Data Protection Regulation (GDPR) specifies for consumers’ data privacy rights, protections the US lacks. It explicitly mentions not only the right of consumers to delete or correct the data held by companies, but, importantly, also the right to transfer (or port) one’s data to a third party of their choice. This is a right EFF has championed time and again, pointing out that the strength of the early internet came from nascent technologies’ pressing need (and implemented ability) to play nicely with each other in order to make any sense—let alone be remotely usable—to consumers. It is this very concept of interoperability that can now be rediscovered to give users control over their own data, granting them the freedom to frictionlessly pack up their posts, friend connections, and private messages and leave when they are no longer willing to let an entrenched provider abuse them.
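What portability could look like in practice is easy to sketch. The toy export below is purely illustrative, with a schema we invented for this post: the point is simply that user data serialized into an open, self-describing format is something a competing service could parse and import. Real interoperability efforts, such as ActivityPub or the Data Transfer Project, define far richer schemas.

import json
from dataclasses import dataclass, field, asdict

# Toy schema, invented for illustration only.
@dataclass
class Post:
    created_at: str
    text: str

@dataclass
class PortableExport:
    handle: str
    friends: list = field(default_factory=list)
    posts: list = field(default_factory=list)

export = PortableExport(
    handle="@alice",
    friends=["@bob", "@carol"],
    posts=[Post("2024-10-01T12:00:00Z", "Hello, open web!")],
)

# asdict() recursively converts nested dataclasses, leaving plain JSON
# that any other service could ingest.
with open("alice_export.json", "w", encoding="utf-8") as f:
    json.dump(asdict(export), f, indent=2)

With an export like this, leaving a service becomes a file copy rather than starting over from scratch.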

We hope and believe that the significance of the FTC staff report lies not only in the abuses it meticulously documents, but in the policy and technological possibilities that can follow from a willingness to embrace alternatives: alternatives where cementing dominance through corporate surveillance and selling out users is not the norm. We look forward to seeing these alternatives emerge and grow.

The X Corp. Shutdown in Brazil: What We Can Learn

8 October 2024 at 12:39

Update (10/8/2024): Brazil lifted a ban on the X Corp. social media platform today after the country's Supreme Court said the company had complied with all of its orders. Regulators have 24 hours to reinstate the platform, though it could take longer for it to come back online.

The feud between X Corp. and Brazil’s Supreme Court continues to drag on: After a month-long standoff, X Corp. folded and complied with court orders to suspend several accounts, name a legal representative in Brazil, and pay 28.6 million reais ($5.24 million) in fines. That hasn’t cleared the matter up, though.

The Court says X paid the wrong bank, which X denies. Justice Alexandre de Moraes has asked that the funds be redirected to the correct bank and for Brazil’s prosecutor general to weigh in on X’s requests to be reinstated in Brazil.

So the drama continues, as does the collateral damage to millions of Brazilian users who rely on X Corp. to share information and expression. While we watch it unfold, it’s not too early to draw some important lessons for the future.

Let’s break it down.

How We Got Here

The Players

Unlike courts in many countries, the Brazilian Supreme Court has the power to conduct its own investigations in limited circumstances, and to issue orders based on its findings. Justice Moraes has drawn on this power frequently in the past few years to target what he called “digital militias,” anti-democratic acts, and fake news. Many in Brazil believe that these investigations, combined with other police work, have helped rein in genuinely dangerous online activities and protect the survival of Brazil’s democratic processes, particularly in the aftermath of the January 2023 riots.

At the same time, Moraes’ actions have raised concerns about judicial overreach. For instance, his work is less than transparent. And the resulting content blocking orders more often than not demand suspension of entire accounts, rather than specific posts. Other leaked orders include broad requests for subscriber information of people who used a specific hashtag.

X Corp.’s controversial CEO, Elon Musk, has publicly criticized the blocking orders. And while he may be motivated by concern for online expression, it is difficult to untangle that motivation from his personal support for the far-right causes Moraes and others believe threaten democracy in Brazil.

The Standoff

In August, as part of an investigation into coordinated actions to spread disinformation and destabilize Brazilian democracy, Moraes ordered X Corp. to suspend accounts that were allegedly used to intimidate and expose law enforcement officers. Musk refused, directly contradicting his past statements that X Corp. “can’t go beyond the laws of a country”—a stance that supposedly justified complying with controversial orders to block accounts and posts in Turkey and India.

After Moraes gave X Corp. 24 hours to fulfill the order or face fines and the arrest of one of its lawyers, Musk shut down the company’s operations in Brazil altogether. Moraes then ordered Brazilian ISPs to block the platform until Musk designated a legal representative. And people who use tools such as VPNs to circumvent the block can be fined 50,000 reais (approximately $9,000 USD) per day.

These orders remain in place unless or until pending legal challenges succeed. Justice Moraes has also authorized Brazil’s Federal Police to monitor “extreme cases” of X Corp. use. It’s unclear what qualifies as an “extreme case,” or how far the police may take that monitoring authority. Flagged users must be notified that X Corp. has been blocked in Brazil; if they continue to use it via VPNs or other means, they are on the hook for substantial daily fines.

A Bridge Too Far

Moraes’ ISP blocking order, combined with the user fines, has been understandably controversial. International freedom of expression standards treat these kinds of orders as extreme measures, permissible only in exceptional circumstances where provided by law and in accordance with the principles of necessity and proportionality. Justice Moraes said the blocking was necessary given the upcoming elections and the risk that X Corp. would ignore future orders and allow the spread of disinformation.

But it has also meant that millions of Brazilians cannot access a platform that, for them, is a valuable source of information. Indeed, restrictions on accessing X Corp. ended up creating hurdles to understanding and countering electoral disinformation. The Brazilian Association of Newspapers has argued that the restrictions adversely impact journalism. At the same time, online electoral disinformation holds steady on other platforms (though possibly at a slower pace).

Moreover, now that X Corp. has bowed to his demands, Moraes’ concern that the company cannot be trusted to comply with Brazilian law is harder to justify. In any event, there are now far more balanced options for dealing with the remaining fines that do not inflict collateral damage on millions of users.

What Comes Next: Concerns and Open Questions

There are several structural issues that have helped fuel the conflict and exacerbated its negative effects. First, the mechanisms for legal review of Moraes’ orders are unclear and/or ineffective. The Supreme Court has previously held that X Corp. itself cannot challenge suspension of user accounts, thwarting a legal avenue for platforms to defend their users’ speech—even where they may be the only entities that know about the order before accounts are shut down.

A Brazilian political party and the Federal Council of the Brazilian Bar Association filed legal challenges to the blocking order and user fines, respectively, but it is likely that courts will find these challenges procedurally improper as well.

Back in 2016, a single Supreme Court Justice held back a wave of blocking orders targeting WhatsApp. Eight years later, a single Justice may have created a new precedent in the opposite direction—with little or no means to appeal it.

Second, this case highlights what can happen when too much power is held by just a few people or institutions. On the one hand, in Brazil as elsewhere, a handful of wealthy corporations wield enormous power over online expression. Here, that problem is exacerbated by Elon Musk’s control of Starlink, an important satellite internet provider in Brazil.

On the other hand, the Supreme Court also has tremendous power. Although the court’s actions may have played an important role in preserving Brazilian democracy in recent years, powers that are not properly subject to public oversight or meaningful challenge invite overreach.

All of which speaks to a need for better transparency (in both the public and private sectors) and real checks and balances. Independent observers note that, despite challenges, Brazil has already improved its democratic processes. Strengthening this path includes preventing judicial overreach.

As for social media platforms, the best way to stave off future threats to online expression may be to promote more alternatives, so no single powerful person, whether a judge, a billionaire, or even a president, can dramatically restrict online expression with the stroke of a pen.

Germany Rushes to Expand Biometric Surveillance

7 October 2024 at 16:07

Germany is a leader in privacy and data protection, with many Germans being particularly sensitive to the processing of their personal data – owing to the country’s totalitarian history and the role of surveillance in both Nazi Germany and East Germany.

So, it is disappointing that the German government is trying to push through Parliament, at record speed, a “security package” that would increase biometric surveillance at an unprecedented scale. The proposed measures contravene the government’s own coalition agreement, and undermine European law and the German constitution.

In response to a knife attack in the western German town of Solingen in late August, the government has introduced a so-called “security package” consisting of a bouquet of measures to tighten asylum rules and introduce new powers for law enforcement authorities.

Among them, three stand out due to their possibly disastrous effect on fundamental rights online. 

Biometric Surveillance  

The German government wants to allow law enforcement authorities to identify suspects by comparing their biometric data (audio, video, and image data) to all data publicly available on the internet. Beyond the host of harms related to facial recognition software, this would mean that any photos or videos uploaded to the internet would become part of the government’s surveillance infrastructure.

This would include especially sensitive material, such as pictures taken at political protests or other contexts directly connected to the exercise of fundamental rights. This could be abused to track individuals and create nuanced profiles of their everyday activities. Experts have highlighted the many unanswered technical questions in the government’s draft bill. The proposal contradicts the government’s own coalition agreement, which commits to preventing biometric surveillance in Germany.

The proposal also contravenes the recently adopted European AI Act, which bans the use of AI systems that create or expand facial recognition databases. While the AI Act includes exceptions for national security, Member States may ban biometric remote identification systems at the national level. Given the coalition agreement, German civil society groups have been hoping for such a prohibition, rather than the introduction of new powers.

These sweeping new powers would be granted not just to law enforcement authorities: the Federal Office for Migration and Asylum would also be allowed to identify asylum seekers who do not carry IDs by comparing their biometric data to “internet data.” Beyond the obvious disproportionality of such powers, it is well documented that facial recognition software is rife with racial biases, performing significantly worse on images of people of color. The draft law does not include any meaningful measures to protect against discriminatory outcomes, nor does it acknowledge the limitations of facial recognition.

Predictive Policing 

Germany also wants to introduce AI-enabled mining of any data held by law enforcement authorities, which is often used for predictive policing. This would include data from anyone who ever filed a complaint, served as a witness, or ended up in a police database for being a victim of a crime. Beyond this obvious overreach, data mining for predictive policing threatens fundamental rights like the right to privacy and has been shown to exacerbate racial discrimination.

The severe negative impacts of data mining by law enforcement authorities have been confirmed by Germany’s highest court, which ruled that the Palantir-enabled practices of two German states are unconstitutional. Nevertheless, the draft bill seeks to introduce similar powers across the country.

Police Access to More User Data 

The government wants to exploit an already-controversial provision of the recently adopted Digital Services Act (DSA). The law, which regulates online platforms in the European Union, has been criticized for requiring providers to proactively share user data with law enforcement authorities in potential cases of violent crime. Because the provision is unclearly defined, it risks undermining freedom of expression online, as providers might be pressured to share more data rather than less to avoid DSA fines.

Frustrated by the low volume of cases forwarded by providers, the German government now suggests expanding the DSA to include specific criminal offences for which companies must share user data. While it is unrealistic to update European regulations as complex as the DSA so shortly after its adoption, this proposal shows that protecting fundamental rights online is not a priority for this government. 

Next Steps

Meanwhile, thousands have protested the security package in Berlin. Moreover, experts at the parliament’s hearing and German civil society groups are sending a clear signal: the government’s plans undermine fundamental rights, violate European law, and walk back the coalition parties’ own promises. EFF stands with the opponents of these proposals. We must defend fundamental rights more decisively than ever.
