
Cops Running DNA-Manufactured Faces Through Face Recognition Is a Tornado of Bad Ideas

In keeping with law enforcement’s grand tradition of taking antiquated, invasive, and oppressive technologies, making them digital, and then calling it innovation, police in the U.S. recently combined two existing dystopian technologies in a brand new way to violate civil liberties. A police force in California employed the new practice of taking a DNA sample from a crime scene, running it through a service provided by US company Parabon NanoLabs that guesses what the perpetrator’s face looked like, and plugging this rendered image into face recognition software to build a suspect list.

Parts of this process aren't entirely new. On more than one occasion, police forces have been found to have fed images of celebrities into face recognition software to generate suspect lists. In one case from 2017, the New York Police Department decided its suspect looked like Woody Harrelson and ran the actor’s image through the software to generate hits. Further, software provided by US company Vigilant Solutions enables law enforcement to create “a proxy image from a sketch artist or artist rendering” to enhance images of potential suspects so that face recognition software can match these more accurately.

Since 2014, law enforcement has also sought the assistance of Parabon NanoLabs—a company that claims it can create an image of a suspect’s face from their DNA. Parabon NanoLabs says it built this system by training machine learning models on the DNA of thousands of volunteers paired with 3D scans of their faces. It is currently the only company offering DNA phenotyping to law enforcement, and only in concert with a forensic genetic genealogy investigation. The process has yet to be independently audited, and scientists have affirmed that predicting face shapes—particularly from DNA samples—is not possible. But this has not stopped law enforcement officers from seeking to use it, or from running these fabricated images through face recognition software.

Simply put: police are using DNA to create a hypothetical and not at all accurate face, then using that face as a clue on which to base investigations into crimes. Not only is this full dice-roll policing, it also threatens the rights, freedom, and even the life of whoever is unlucky enough to look a little bit like that artificial face.

But it gets worse.

In 2020, a detective from the East Bay Regional Park District Police Department in California asked to have a rendered image from Parabon NanoLabs run through face recognition software. This 3D rendering, called a Snapshot Phenotype Report, predicted that—among other attributes—the suspect was male, had brown eyes, and fair skin. Found in police records published by Distributed Denial of Secrets, this appears to be the first report of a detective running an algorithmically generated rendering based on crime-scene DNA through face recognition software. This puts a second layer of speculation between the actual face of the suspect and the product the police are using to guide investigations and make arrests. Not only is the artificial face a guess; face recognition (a technology known to misidentify people) then generates a “most likely match” for that guess.
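To see why stacking two error-prone guesses is so dangerous, consider a back-of-the-envelope calculation. The accuracy figures below are purely hypothetical, since neither DNA phenotyping nor police face recognition deployments publish audited error rates:

```python
# Hypothetical illustration: errors compound when one guess feeds another.
# Neither number below is a real, audited figure.
phenotype_accuracy = 0.60  # assumed chance the DNA-based rendering resembles the suspect
match_accuracy = 0.80      # assumed chance face recognition matches a face correctly

# The pipeline is only right when BOTH steps are right.
pipeline_accuracy = phenotype_accuracy * match_accuracy
print(f"Chance the final 'match' points at the right person: {pipeline_accuracy:.0%}")
# -> 48%, worse than either step alone, and that is before the base-rate
# problem of searching one face against databases of millions of innocent people.
```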

These technologies, and their reckless use by police forces, are an inherent threat to our individual privacy, free expression, information security, and social justice. Face recognition tech alone has an egregious history of misidentifying people of color, especially Black women, as well as failing to correctly identify trans and nonbinary people. The algorithms are not always reliable, and even if the technology somehow had 100% accuracy, it would still be an unacceptable tool of invasive surveillance, capable of identifying and tracking people on a massive scale. Combining this with fabricated 3D renderings from crime-scene DNA exponentially increases the likelihood of false arrests, and exacerbates existing harms to communities that are already disproportionately over-surveilled by face recognition technology and discriminatory policing.

There are no federal rules that prohibit police forces from undertaking these actions. And despite the detective’s request violating Parabon NanoLabs’ terms of service, there is seemingly no way to ensure compliance. Pulling together criteria like skin tone, hair color, and gender does not produce an accurate face of a suspect, and deploying these untested algorithms without any oversight places people at risk of becoming suspects in crimes they didn’t commit. In one case from Canada, the Edmonton Police Service issued an apology over its failure to balance the harms to the Black community with the potential investigative value after using Parabon’s DNA phenotyping services to identify a suspect.

EFF continues to call for a complete ban on government use of face recognition—because otherwise these are the results. How much more evidence do lawmakers need that police cannot be trusted with this dangerous technology? How many more people need to be falsely arrested, and how many more reckless schemes like this one need to be perpetrated, before legislators realize this is not a sustainable method of law enforcement? Cities across the United States have already taken the step to ban government use of this technology, and Montana has specifically recognized a privacy interest in phenotype data. Other cities and states need to catch up, or Congress needs to act, before more people are hurt and our rights are trampled.

EFF and 34 Civil Society Organizations Call on Ghana’s President to Reject the Anti-LGBTQ+ Bill 

March 22, 2024 at 08:42

MPs in Ghana’s Parliament voted to pass the country’s draconian ‘Promotion of Proper Human Sexual Rights and Ghanaian Family Values Bill’ on February 28th. The bill now heads to Ghana’s President Nana Akufo-Addo to be signed into law. 

EFF has joined 34 civil society organizations to demand that President Akufo-Addo veto the Family Values Bill.

The legislation criminalizes being LGBTQ+ or an ally of LGBTQ+ people, and also imposes custodial sentences on users and social media companies as punishment for vague, ill-defined offenses like promoting “change in public opinion of prohibited acts” on social media. This would effectively ban all speech and activity, online and offline, that even remotely supports LGBTQ+ rights.

The letter concludes:

“We also call on you to reaffirm Ghana’s obligation to prevent acts that violate and undermine LGBTQ+ people’s fundamental human rights, including the rights to life, to information, to free association, and to freedom of expression.”

Read the full letter here.

Congress Should Give Up on Unconstitutional TikTok Bans

Congress’ unfounded plan to ban TikTok under the guise of protecting our data is back, this time in the form of a new bill—the “Protecting Americans from Foreign Adversary Controlled Applications Act,” H.R. 7521—which has gained a dangerous amount of momentum in Congress. This bipartisan legislation was introduced in the House just a week ago and is expected to be sent to the Senate after a vote later this week.

A year ago, supporters of digital rights across the country successfully stopped the federal RESTRICT Act, commonly known as the “TikTok Ban” bill (it was that and a whole lot more). And now we must do the same with this bill. 

TAKE ACTION

TELL CONGRESS: DON'T BAN TIKTOK

As a first step, H.R. 7521 would force TikTok to find a new owner that is not based in a foreign adversarial country within the next 180 days or be banned until it does so. It would also give the President the power to designate other applications under the control of a country considered adversarial to the U.S. to be a national security threat. If deemed a national security threat, the application would be banned from app stores and web hosting services unless it cuts all ties with the foreign adversarial country within 180 days. The bill would criminalize the distribution of the application through app stores or other web services, as well as the maintenance of such an app by the company. Ultimately, the result of the bill would be either a nationwide ban on TikTok or a forced sale of the application to a different company.

Make no mistake—though this law starts with TikTok specifically, it could have an impact elsewhere. Tencent’s WeChat app is one of the world’s largest standalone messenger platforms, with over a billion users, and is a key vehicle for the Chinese diaspora generally. It would likely also be a target. 

The bill’s sponsors have argued that the amount of private data available to and collected by the companies behind these applications — and in theory, shared with a foreign government — makes them a national security threat. But like the RESTRICT Act, this bill won’t stop this data sharing, and will instead reduce our rights online. User data will still be collected by numerous platforms—possibly even TikTok after a forced sale—and it will still be sold to data brokers who can then sell it elsewhere, just as they do now. 

The only solution to this pervasive ecosystem is prohibiting the collection of our data in the first place. Ultimately, foreign adversaries will still be able to obtain our data from social media companies unless those companies are forbidden from collecting, retaining, and selling it, full stop. And to be clear, under our current data privacy laws, there are many domestic adversaries engaged in manipulative and invasive data collection as well. That’s why EFF supports such consumer data privacy legislation.

Congress has also argued that this bill is necessary to tackle the anti-American propaganda that young people are seeing due to TikTok’s algorithm. Both this justification and the national security justification raise serious First Amendment concerns, and last week EFF, the ACLU, CDT, and Fight for the Future wrote to the House Energy and Commerce Committee urging it to oppose this bill due to its First Amendment violations—specifically for those across the country who rely on TikTok for information, advocacy, entertainment, and communication. The US has rightfully condemned other countries when they have banned, or sought to ban, specific social media platforms.

And it’s not just civil society saying this. Late last year, the courts blocked Montana’s TikTok ban, SB 419, from going into effect on January 1, 2024, ruling that the law violated users’ First Amendment rights to speak and to access information online, and the company’s First Amendment rights to select and curate users’ content. EFF and the ACLU had filed a friend-of-the-court brief in support of a challenge to the law brought by TikTok and a group of the app’s users who live in Montana. 

Our brief argued that Montana’s ban was as unprecedented as it was unconstitutional, and we are pleased that the district court upheld our free speech rights and blocked the law from going into effect. As with that state ban, the US government cannot show that a federal ban is narrowly tailored, and thus cannot use the threat of unlawful censorship as a cudgel to coerce a business to sell its property. 

Instead of passing this overreaching and misguided bill, Congress should prevent any company—regardless of where it is based—from collecting massive amounts of our detailed personal data, which is then made available to data brokers, U.S. government agencies, and even foreign adversaries, China included. We shouldn’t waste time arguing over a law that will get thrown out for silencing the speech of millions of Americans. Instead, Congress should solve the real problem of out-of-control privacy invasions by enacting comprehensive consumer data privacy legislation.

Access to Internet Infrastructure is Essential, in Wartime and Peacetime

We’ve been saying it for 20 years, and it remains true now more than ever: the internet is an essential service. It enables people to build and create communities, shed light on injustices, and acquire vital knowledge that might not otherwise be available. And access to it becomes even more imperative in circumstances where being able to communicate and share real-time information directly with the people you trust is instrumental to personal safety and survival. More specifically, during wartime and conflict, internet and phone services enable the communication of information between people in challenging situations, as well as the reporting of the news by on-the-ground journalists and ordinary people.

Unfortunately, governments across the world are very aware of their power to cut off this crucial lifeline, and frequently undertake targeted initiatives to do so. These internet shutdowns have become a blunt instrument that aid state violence and inhibit free speech, and are routinely deployed in direct contravention of human rights and civil liberties.

And this is not a one-dimensional situation. Nearly twenty years after the world’s first total internet shutdowns, this draconian measure is no longer the sole domain of authoritarian states but has become a favorite of a diverse set of governments across three continents. For example:

In Iran, the government has been suppressing internet access for many years. In the past two years in particular, the people of Iran have suffered repeated internet and social media blackouts following an activist movement that blossomed after the death of Mahsa Amini, a woman murdered in police custody for refusing to wear a hijab. The movement gained global attention, and in response, the Iranian government rushed to control both the public narrative and organizing efforts by banning social media and sometimes cutting off internet access altogether.

In Sudan, authorities have enacted a total telecommunications blackout during a massive conflict and displacement crisis. Shutting down the internet is a deliberate strategy to block the flow of information that brings visibility to the crisis and to prevent humanitarian aid from reaching populations endangered by the conflict. The communications blackout has extended for weeks, and in response a global campaign, #KeepItOn, has formed to put pressure on the Sudanese government to restore its people’s access to these vital services. More than 300 global humanitarian organizations have signed on to support #KeepItOn.

And in Palestine, where the Israeli government exercises near-total control over both wired internet and mobile phone infrastructure, Palestinians in Gaza have experienced repeated internet blackouts inflicted by the Israeli authorities. The latest blackout in January 2024 occurred amid a widespread crackdown by the Israeli government on digital rights—including censorship, surveillance, and arrests—and amid accusations of bias and unwarranted censorship by social media platforms. On that occasion, the internet was restored after calls from civil society and nations, including the U.S. As we’ve noted, internet shutdowns impede residents' ability to access and share resources and information, as well as the ability of residents and journalists to document and call attention to the situation on the ground—more necessary than ever given that a total of 83 journalists have been killed in the conflict so far. 

Given that all of the internet cables connecting Gaza to the outside world go through Israel, the Israeli Ministry of Communications has the ability to cut off Palestinians’ access with ease. The Ministry also allocates spectrum to cell phone companies; in 2015 we wrote about an agreement that delivered 3G to Palestinians years later than the rest of the world. In 2022, President Biden offered to upgrade the West Bank and Gaza to 4G, but the initiative stalled. While some Palestinians are able to circumvent the blackout by utilizing Israeli SIM cards (which are difficult to obtain) or Egyptian eSIMs, these workarounds are not solutions to the larger problem of blackouts, which, as the National Security Council has said, “[deprive] people from accessing lifesaving information, while also undermining first responders and other humanitarian actors’ ability to operate and to do so safely.”

Access to internet infrastructure is essential, in wartime as in peacetime. In light of these numerous blackouts, we remain concerned about the control that authorities are able to exercise over the ability of millions of people to communicate. It is imperative that people’s access to the internet remains protected, regardless of how user platforms and internet companies transform over time. We continue to shout this, again and again, because it needs to be restated, and unfortunately today there are ever more examples of it happening before our eyes.

EFF’s Submission to Ofcom’s Consultation on Illegal Harms

March 11, 2024 at 13:31

More than four years after it was first introduced, the Online Safety Act (OSA) was passed by the U.K. Parliament in September 2023. The Act seeks to make the U.K. “the safest place” in the world to be online and provides Ofcom, the country’s communications regulator, with the power to enforce this.

EFF has opposed the Online Safety Act since it was first introduced. It will lead to a more censored, locked-down internet for British users. The Act empowers the U.K. government to undermine not just the privacy and security of U.K. residents, but internet users worldwide. We joined civil society organizations, security experts, and tech companies to unequivocally ask for the removal of clauses that require online platforms to use government-approved software to scan for illegal content. 

Under the Online Safety Act, websites and apps that host content deemed “harmful” to minors will face heavy penalties. The problem, of course, is that views on what type of content is “harmful” vary, in the U.K. as in all other societies. Soon, U.K. government censors will make that decision.

The Act also requires mandatory age verification, which undermines the free expression of both adults and minors. 

Ofcom recently published the first of four major consultations seeking information on how internet and search services should approach their new duties on illegal content. While we continue to oppose the concept of the Act, we are continuing to engage with Ofcom to limit the damage to our most fundamental rights online. 

EFF recently submitted information to the consultation, reaffirming our call on policymakers in the U.K. to protect speech and privacy online. 

Encryption 

For years, we opposed a clause contained in the then Online Safety Bill allowing Ofcom to serve a notice requiring tech companies to scan their users—all of them—for child abuse content. We are pleased to see that Ofcom’s recent statements note that the Online Safety Act will not apply to end-to-end encrypted messages. Encryption backdoors of any kind are incompatible with privacy and human rights.

However, there are places in Ofcom’s documentation where this commitment can and should be clearer. In our submission, we affirmed the importance of protecting people’s right to use and benefit from encryption, regardless of the size and type of the online service. The commitment to not scan encrypted data must be firm, regardless of the size of the service or what encrypted services it provides. For instance, Ofcom has suggested that “file-storage and file-sharing” may be subject to a different risk profile for mandating scanning. But encrypted “communications” are not significantly different from encrypted “file-storage and file-sharing.”

In this context, Ofcom should also take note of the milestone judgment in PODCHASOV v. RUSSIA (Application no. 33696/19), in which the European Court of Human Rights (ECtHR) ruled that weakening encryption can lead to general and indiscriminate surveillance of communications for all users and violates the human right to privacy.

Content Moderation

An earlier version of the Online Safety Bill enabled the U.K. government to directly silence user speech and imprison those who publish messages that it doesn’t like. It also empowered Ofcom to levy heavy fines or even block access to sites that offend people. We were happy to see this clause removed from the bill in 2022. But a lot of problems with the OSA remain. Our submission on illegal harms affirmed the importance of ensuring that users have greater control over what content they see and interact with, are equipped with knowledge about how various controls operate and how they can use them to their advantage, and have the right to anonymity and pseudonymity online.

Moderation mechanisms must not interfere with users’ freedom of expression rights, and moderators should receive ample training and materials to ensure cultural and linguistic competence in content moderation. When time-related pressure is placed on moderators to make determinations, companies often remove more than necessary to avoid potential liability, and are incentivized to use automated technologies for content removal and upload filters. These are notoriously inaccurate and prone to overblocking legitimate material. Moreover, the moderation of terrorism-related content is prone to error, and any new mechanism like hash matching or URL detection must be subject to expert oversight.
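To make the hash-matching concern concrete, here is a minimal sketch of exact hash matching against a blocklist. This is our illustration, not Ofcom’s guidance or any vendor’s implementation: exact cryptographic hashes miss any file altered by a single byte, which pushes systems toward fuzzier perceptual hashes whose similarity thresholds are exactly where false positives creep in.

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of known prohibited files.
BLOCKLIST = {hashlib.sha256(b"known prohibited file").hexdigest()}

def is_blocked(data: bytes) -> bool:
    """Exact hash matching: only byte-identical files are caught."""
    return hashlib.sha256(data).hexdigest() in BLOCKLIST

print(is_blocked(b"known prohibited file"))   # True: an exact copy is caught
print(is_blocked(b"known prohibited file!"))  # False: one extra byte evades it
```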

Next Steps

Throughout this consultation period, EFF will continue contributing to and monitoring Ofcom’s drafting of the regulation. And we will continue to hold the U.K. government accountable to the international and European human rights protections to which it is a signatory.

Read EFF's full submission to Ofcom

Four Voices You Should Hear this International Women’s Day

Around the globe, freedom of expression varies wildly in definition, scope, and level of access. The impact of the digital age on perceptions and censorship of speech has been felt across the political spectrum on a worldwide scale. In the debate over what counts as free expression and how it should work in practice, we often lose sight of how different forms of censorship can have a negative impact on different communities, and especially marginalized or vulnerable ones. This International Women’s Day, spend some time with four stories of hope and inspiration that teach us how to reflect on the past to build a better future.

1. Podcast Episode: Safer Sex Work Makes a Safer Internet

An internet that is safe for sex workers is an internet that is safer for everyone. Though the effects of stigmatization and criminalization run deep, the sex worker community exemplifies how technology can help people reduce harm, share support, and offer experienced analysis to protect each other. Public interest technology lawyer Kendra Albert and sex worker, activist, and researcher Danielle Blunt have been fighting for sex workers’ online rights for years and say that holding online platforms legally responsible for user speech can lead to censorship that hurts us all. They join EFF’s Cindy Cohn and Jason Kelley in this podcast to talk about protecting all of our free speech rights.

2. Speaking Freely: Sandra Ordoñez

Sandra (Sandy) Ordoñez is dedicated to protecting women being harassed online. Sandra is an experienced community engagement specialist, a proud NYC Latina resident of Sunset Park in Brooklyn, and a recipient of Fundación Carolina’s Hispanic Leadership Award. She is also a long-time diversity and inclusion advocate, with extensive experience incubating and creating FLOSS and Internet Freedom community tools. In this interview with EFF’s Jillian C. York, Sandra discusses free speech and how communities that are often the most directly affected are the last consulted.

3. Story: Coded Resistance, the Comic!

From the days of chattel slavery until the modern Black Lives Matter movement, Black communities have developed innovative ways to fight back against oppression. EFF's Director of Engineering, Alexis Hancock, documented this important history of codes, ciphers, underground telecommunications and dance in a blog post that became one of our favorite articles of 2021. In collaboration with The Nib and illustrator Chelsea Saunders, "Coded Resistance" was adapted into comic form to further explore these stories, from the coded songs of Harriet Tubman to Darnella Frazier recording the murder of George Floyd.

4. Speaking Freely: Evan Greer

Evan Greer is many things: a musician, an activist for LGBTQ issues, the Deputy Director of Fight for the Future, and a true believer in the free and open internet. In this interview, EFF’s Jillian C. York spoke with Evan about the state of free expression, and what we should be doing to protect the internet for future activism. Among the many topics discussed was how policies that promote censorship—no matter how well-intentioned—have historically benefited the powerful and harmed vulnerable or marginalized communities. Evan talks about what we as free expression activists should do to get at that tension and find solutions that work for everyone in society.

This blog is part of our International Women’s Day series. Read other articles about the fight for gender justice and equitable digital rights for all.

  1. Four Reasons to Protect the Internet this International Women’s Day
  2. Four Infosec Tools for Resistance this International Women’s Day
  3. Four Actions You Can Take To Protect Digital Rights this International Women’s Day

Four Actions You Can Take To Protect Digital Rights this International Women’s Day

This International Women’s Day, defend free speech, fight surveillance, and support innovation by calling on our elected politicians and private companies to uphold our most fundamental rights—both online and offline.

1. Pass the “My Body, My Data” Act

Privacy fears should never stand in the way of healthcare. That's why this common-sense federal bill, sponsored by U.S. Rep. Sara Jacobs, would require businesses and non-governmental organizations to act responsibly with personal information concerning reproductive health care. Specifically, it restricts them from collecting, using, retaining, or disclosing reproductive health information that isn't essential to providing the service someone asks them for. The protected information includes data related to pregnancy, menstruation, surgery, termination of pregnancy, contraception, basal body temperature, or diagnoses. The bill would protect people who, for example, use fertility or period-tracking apps or are seeking information about reproductive health services. It also lets people take on companies that violate their privacy with a strong private right of action.
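The bill’s core rule is data minimization: collect only what is essential to the service the user actually requested. Here is a toy sketch of that principle (our illustration with made-up field names, not the bill’s text or any real app’s code):

```python
# Toy illustration of data minimization: keep only the fields essential
# to the service the user requested. Field names here are hypothetical.
ESSENTIAL_FIELDS = {
    "period_tracking": {"cycle_start_date", "cycle_length"},
}

def minimize(service: str, submitted: dict) -> dict:
    """Drop any field that isn't essential to delivering the service."""
    allowed = ESSENTIAL_FIELDS.get(service, set())
    return {k: v for k, v in submitted.items() if k in allowed}

raw = {
    "cycle_start_date": "2024-03-01",
    "cycle_length": 28,
    "location": "37.77,-122.42",  # not essential: never collected or retained
    "contacts": ["..."],          # not essential: never collected or retained
}
print(minimize("period_tracking", raw))
# -> {'cycle_start_date': '2024-03-01', 'cycle_length': 28}
```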

2. Ban Government Use of Face Recognition

Study after study shows that facial recognition algorithms are not always reliable, and that error rates spike significantly for the faces of people of color, especially Black women, as well as trans and nonbinary people. Because of face recognition errors, a Black woman, Porcha Woodruff, was wrongfully arrested, and another, Lamya Robinson, was wrongfully kicked out of a roller rink.

Yet this technology is widely used by law enforcement for identifying suspects in criminal investigations, including to disparately surveil people of color. At the local, state, and federal level, people across the country are urging politicians to ban the government’s use of face surveillance because it is inherently invasive, discriminatory, and dangerous. Many U.S. cities have done so, including San Francisco and Boston. Now is our chance to end the federal government’s use of this spying technology. 

3. Tell Congress: Don’t Outlaw Encrypted Apps

Advocates of women's equality often face surveillance and repression from powerful interests. That's why they need strong end-to-end encryption. But if the so-called “STOP CSAM Act” passes, it would undermine digital security for all internet users, impacting private messaging and email app providers, social media platforms, cloud storage providers, and many other internet intermediaries and online services. Free speech for women’s rights advocates would also be at risk. STOP CSAM would also create a carveout in Section 230, the law that protects our online speech, exposing platforms to civil lawsuits merely for hosting a service where part of the illegal conduct occurred. Tell Congress: don't pass this law that would undermine security and free speech online, two critical elements of the fight for equality for all genders.

4. Tell Facebook: Stop Silencing Palestine

Since Hamas’ attack on Israel on October 7, Meta’s biased moderation tools and practices, as well as its policies on violence and incitement and on dangerous organizations and individuals (DOI), have led to Palestinian content and accounts being removed and banned at an unprecedented scale. As Palestinians and their supporters have taken to social platforms to share images and posts about the situation in the Gaza strip, some have noticed their content suddenly disappear or had their posts flagged for breaches of the platforms’ terms of use. In some cases their accounts have been suspended, and in others features such as liking and commenting have been restricted.

This has an exacerbated impact on the most at-risk groups in Gaza, such as those who are pregnant or need reproductive healthcare support: sharing information online is both an avenue for communicating the reality on the ground to the world and a way to get information to those who need it most.

This blog is part of our International Women’s Day series. Read other articles about the fight for gender justice and equitable digital rights for all.

  1. Four Reasons to Protect the Internet this International Women’s Day
  2. Four Infosec Tools for Resistance this International Women’s Day
  3. Four Voices You Should Hear this International Women’s Day

Four Infosec Tools for Resistance this International Women’s Day 

While online violence is alarmingly common globally, women are often more likely to be the target of mass online attacks, nonconsensual leaks of sensitive information and content, and other forms of online violence. 

This International Women’s Day, visit EFF’s Surveillance Self-Defense (SSD) to learn how to defend yourself and your friends from surveillance. In addition to tutorials for installing and using security-friendly software, SSD walks you through concepts like making a security plan, the importance of strong passwords, and protecting metadata.

1. Make Your Own Security Plan

This IWD, learn what a security plan looks like and how you can build one. Trying to protect your online data—like pictures, private messages, or documents—from everything all the time is impractical and exhausting. But, have no fear! Security is a process, and through thoughtful planning, you can put together a plan that’s best for you. Security isn’t just about the tools you use or the software you download. It begins with understanding the unique threats you face and how you can counter those threats. 
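The SSD guide structures a security plan around a handful of questions. Here is a compact paraphrase of them (our summary of the guide, not its literal text), captured as a simple checklist:

```python
# A paraphrase of the threat-modeling questions in EFF's Surveillance
# Self-Defense security plan guide, captured as a simple checklist.
security_plan = {
    "assets":       "What do I want to protect?",
    "adversaries":  "Who do I want to protect it from?",
    "likelihood":   "How likely is it that I will need to protect it?",
    "consequences": "How bad are the consequences if I fail?",
    "effort":       "How much trouble am I willing to go through to prevent them?",
}

for step, question in security_plan.items():
    print(f"{step:>12}: {question}")
```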

2. Protect Yourself on Social Networks

Depending on your circumstances, you may need to protect yourself against the social network itself, against other users of the site, or both. Social networks are among the most popular websites on the internet. Facebook, TikTok, and Instagram each have over a billion users. Social networks were generally built on the idea of sharing posts, photographs, and personal information. They have also become forums for organizing and speaking. Any of these activities can rely on privacy and pseudonymity. Visit our SSD guide to learn how to protect yourself.

3. Tips for Attending Protests

Keep yourself, your devices, and your community safe while you make your voice heard. Now, more than ever, people must be able to hold those in power accountable and inspire others through the act of protest. Protecting your electronic devices and digital assets before, during, and after a protest is vital to keeping yourself and your information safe, as well as getting your message out. Theft, damage, confiscation, or forced deletion of media can disrupt your ability to publish your experiences, and those engaging in protest may be subject to search or arrest, or have their movements and associations surveilled. 

4. Communicate Securely with Signal or WhatsApp

Everything you say in a chat app should be private, viewable by only you and the person you're talking with. But that's not how all chats or DMs work. Most of those communication tools aren't end-to-end encrypted, and that means that the company that runs the software could view your chats or hand over transcripts to law enforcement. That's why it's best to use a chat app like Signal any time you can. Signal uses end-to-end encryption, which means that nobody, not even Signal, can see the contents of your chats. Of course, you can't necessarily force everyone you know to use the communication tool of your choice, but thankfully other popular tools, like Apple's Messages, WhatsApp, and more recently Facebook's Messenger, all use end-to-end encryption too, as long as you're communicating with others on those same platforms. The more people who use these tools, even for innocuous conversations, the better.
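Under the hood, end-to-end encryption means the two endpoints hold keys that never leave their devices, so the server only relays ciphertext it cannot read. Here is a minimal sketch of that idea using the PyNaCl library; it illustrates the concept only, not how Signal or WhatsApp are actually built (they use the Signal protocol, which adds forward secrecy and key ratcheting):

```python
# Minimal end-to-end encryption sketch with PyNaCl (pip install pynacl).
# Concept illustration only; real messengers layer much more on top.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()  # Alice's secret key never leaves her device
bob_key = PrivateKey.generate()    # likewise for Bob

# Alice encrypts to Bob with her secret key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at 6pm")

# A relay server sees only `ciphertext`; it holds neither secret key.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
print(plaintext)  # b'meet at 6pm'
```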

On International Women’s Day and every day, stay safe out there! Surveillance self-defense can help.

This blog is part of our International Women’s Day series. Read other articles about the fight for gender justice and equitable digital rights for all.

  1. Four Reasons to Protect the Internet this International Women’s Day
  2. Four Voices You Should Hear this International Women’s Day
  3. Four Actions You Can Take To Protect Digital Rights this International Women’s Day

Four Reasons to Protect the Internet this International Women’s Day

Today is International Women’s Day, a day celebrating the achievements of women globally but also a day marking a call to action for accelerating equality and improving the lives of women the world over. 

The internet is a vital tool for women everywhere—provided they have access and are able to use it freely. Here are four reasons why we’re working to protect the free and open internet for women and everyone.

1. The Fight For Reproductive Privacy and Information Access Is Not Over

Data privacy, free expression, and freedom from surveillance intersect with the broader fight for reproductive justice and safe access to abortion. Like so many other aspects of managing our healthcare, these issues are fundamentally tied to our digital lives. With the decision of Dobbs v. Jackson to overturn the protections that Roe v. Wade offered for people seeking abortion healthcare in the United States, what was benign data before is now potentially criminal evidence. This expanded threat to digital rights is especially dangerous for BIPOC, lower-income, immigrant, LGBTQ+ people and other traditionally marginalized communities, and the healthcare providers serving these communities. The repeal of Roe created a lot of new dangers for people seeking healthcare. EFF is working hard to protect your rights in two main areas: 1) your data privacy and security, and 2) your online right to free speech.

2. Governments Continue to Cut Internet Access to Quell Political Dissidence   

The internet is an essential service that enables people to build and create communities, shed light on injustices, and acquire vital knowledge that might not otherwise be available. Governments are very aware of their power to cut off access to this crucial lifeline, and frequently undertake targeted initiatives to shut down civilian access to the internet. In Iran, people have suffered internet and social media blackouts on and off for nearly two years, following an activist movement that rose up after the death of Mahsa Amini, a woman murdered in police custody for refusing to wear a hijab. The movement gained global attention, and in response, the Iranian government rushed to control visibility of the injustice. Social media has been banned in Iran, and intermittent shutdowns of the entire population’s internet access have cost the country millions, all in an effort to control the flow of information and quell political dissidence.

3. People Need to Know When They Are Being Stalked Through Tracking Tech 

At EFF, we’ve been sounding the alarm about the way physical trackers like AirTags and Tiles can be slipped into a target’s bag or car, allowing stalkers and abusers unprecedented access to a person’s location without their knowledge. We’ve also been calling attention to stalkerware, commercially available apps that are designed to be covertly installed on another person’s device for the purpose of monitoring their activity without their knowledge or consent. This is a huge threat to survivors of domestic abuse, as stalkers can track their locations and access sensitive information like passwords and documents. For example, Imminent Monitor, once installed on a victim’s computer, could turn on their webcam and microphone, allow perpetrators to view their documents, photographs, and other files, and record all keystrokes entered. Everyone involved in these industries has a responsibility to build safeguards for people.

4. LGBTQ+ Rights Online Are Being Attacked 

An increase in anti-LGBTQ+ intolerance is harming individuals and communities both online and offline across the globe. Several countries are introducing explicitly anti-LGBTQ+ initiatives to restrict freedom of expression and privacy, which is in turn fueling offline intolerance against LGBTQ+ people. Across the United States, a growing number of states have prohibited transgender youths from obtaining gender-affirming health care, and some have restricted access for transgender adults. That’s why we’ve worked to pass data sanctuary laws in pro-LGBTQ+ states to shield health records from disclosure to anti-LGBTQ+ states.

The problem is global. In Jordan, the new Cybercrime Law of 2023 restricts encryption and anonymity in digital communications. And in Ghana, the country’s Parliament just voted to pass the draconian Family Values Bill, which introduces prison sentences for those who partake in LGBTQ+ sexual acts, as well as those who promote the rights of gay, lesbian, or other non-conventional sexual or gender identities. EFF is working to expose and resist laws like these, and we hope you’ll join us!

This blog is part of our International Women’s Day series. Read other articles about the fight for gender justice and equitable digital rights for all.

  1. Four Infosec Tools for Resistance this International Women’s Day
  2. Four Voices You Should Hear this International Women’s Day
  3. Four Actions You Can Take To Protect Digital Rights this International Women’s Day

Ghana's President Must Refuse to Sign the Anti-LGBTQ+ Bill

February 29, 2024 at 17:52

After three years of political discussions, MPs in Ghana's Parliament voted to pass the country’s draconian Promotion of Proper Human Sexual Rights and Ghanaian Family Values Bill on February 28th. The bill now heads to Ghana’s President Nana Akufo-Addo to be signed into law. 

President Nana Akufo-Addo must protect the human rights of all people in Ghana and refuse to provide assent to the bill.

This anti-LGBTQ+ legislation introduces prison sentences for those who partake in LGBTQ+ sexual acts, as well as those who promote the rights of gay, lesbian or other non-conventional sexual or gender identities. This would effectively ban all speech and activity on and offline that even remotely supports LGBTQ+ rights.

Ghanaian authorities could probe the social media accounts of anyone applying for a visa for pro-LGBTQ+ speech or create lists of pro-LGBTQ+ supporters to be arrested upon entry. They could also require online platforms to suppress content about LGBTQ+ issues, regardless of where it was created. 

Doing so would criminalize the activity of many major cultural and commercial institutions. If President Akufo-Addo does approve the bill, musicians, corporations, and other entities that openly support LGBTQ+ rights would be banned in Ghana.

Despite this direct threat to online freedom of expression, tech giants have yet to speak out publicly against the LGBTQ+ persecution in Ghana. Twitter opened its first African office in Accra in April 2021, citing Ghana as “a supporter of free speech, online freedom, and the Open Internet.” Adaora Ikenze, Facebook’s head of Public Policy in Anglophone West Africa, has said: “We want the millions of people in Ghana and around the world who use our services to be able to connect, share and express themselves freely and safely, and will continue to protect their ability to do that on our platforms.” Both companies have essentially dodged the question.

For many countries across Africa, and indeed the world, the codification of anti-LGBTQ+ discourses and beliefs can be traced back to colonial rule, and a recent CNN investigation from December 2023 found alleged links between the drafting of homophobic laws in Africa and a US nonprofit. The group denied those links, despite having hosted a political conference in Accra shortly before an early version of this bill was drafted.

Regardless of its origin, the past three years of political and social discussion have contributed to a decimation of LGBTQ+ rights in Ghana, and the decision by MPs in Ghana’s Parliament to pass this bill creates severe impacts not just for LGBTQ+ people in Ghana, but for the very principle of free expression online and off. President Nana Akufo-Addo must reject it.

EFF and Access Now's Submission to U.N. Expert on Anti-LGBTQ+ Repression 

January 31, 2024 at 10:06

As part of the United Nations (U.N.) Independent Expert on protection against violence and discrimination based on sexual orientation and gender identity (IE SOGI) report to the U.N. Human Rights Council, EFF and Access Now have submitted information addressing digital rights and SOGI issues across the globe. 

The submission addresses the trends, challenges, and problems that people and civil society organizations face based on their real and perceived sexual orientation, gender identity, and gender expression. Our examples underscore the extensive impact of such legislation on the LGBTQ+ community, and the urgent need for legislative reform at the domestic level.

Read the full submission here.

EFF’s 2024 In/Out List

January 19, 2024 at 09:46

Since EFF was formed in 1990, we’ve been working hard to protect digital rights for all. And as each year passes, we’ve come to understand the challenges and opportunities a little better, as well as what we’re not willing to accept. 

Accordingly, here’s what we’d like to see a lot more of, and a lot less of, in 2024.


IN

1. Affordable and future-proof internet access for all

EFF has long advocated for affordable, accessible, and future-proof internet access for all. We cannot accept a future where the quality of our internet access is determined by geographic, socioeconomic, or otherwise divided lines. As the online aspects of our work, health, education, entertainment, and social lives increase, EFF will continue to fight for a future where the speed of your internet connection doesn’t stand in the way of these crucial parts of life.

2. A privacy-first agenda to prevent mass collection of our personal information

Many of the ills of today’s internet have a single thing in common: they are built on a system of corporate surveillance. Vast numbers of companies collect data about who we are, where we go, what we do, what we read, who we communicate with, and so on. They use our data in thousands of ways and often sell it to anyone who wants it—including law enforcement. So whatever online harms we want to alleviate, we can do it better, with a broader impact, if we do privacy first.

3. Decentralized social media platforms to ensure full user control over what we see online

While the internet began as a loose affiliation of universities and government bodies, the digital commons has been privatized and consolidated into a handful of walled gardens. But in the past few years, there's been an accelerating swing back toward decentralization as users are fed up with the concentration of power, and the prevalence of privacy and free expression violations. So, many people are fleeing to smaller, independently operated projects. We will continue walking users through decentralized services in 2024.

4. End-to-end encrypted messaging services, turned on by default and available always

Private communication is a fundamental human right. In the online world, the best tool we have to defend this right is end-to-end encryption. But governments across the world are trying to erode this by scanning for all content all the time. As we’ve said many times, there is no middle ground to content scanning, and no “safe backdoor” if the internet is to remain free and private. Mass scanning of peoples’ messages is wrong, and at odds with human rights. 

5. The right to free expression online with minimal barriers and without borders

New technologies and widespread internet access have radically enhanced our ability to express ourselves, criticize those in power, gather and report the news, and make, adapt, and share creative works. Vulnerable communities have also found space to safely meet, grow, and make themselves heard without being drowned out by the powerful. No government or corporation should have the power to decide who gets to speak and who doesn’t. 

OUT

1. Use of artificial intelligence and automated systems for policing and surveillance

Predictive policing algorithms perpetuate historic inequalities, hurt neighborhoods already subject to intense amounts of surveillance and policing, and quite simply don’t work. EFF has long called for a ban on predictive policing, and we’ll continue to monitor the rapid rise of law enforcement utilizing machine learning. This includes harvesting the data that other “autonomous” devices collect and automating important decision-making processes that guide policing and dictate people’s futures in the criminal justice system.

2. Ad surveillance based on the tracking of our online behaviors 

Our phones and other devices process vast amounts of highly sensitive personal information that corporations collect and sell for astonishing profits. This incentivizes online actors to collect as much of our behavioral information as possible. In some circumstances, every mouse click and screen swipe is tracked and then sold to ad tech companies and the data brokers that service them. This often impacts marginalized communities the most. Data surveillance is a civil rights problem, and legislation to protect data privacy can help protect civil rights. 

3. Speech and privacy restrictions under the guise of "protecting the children"

For years, government officials have raised concerns that online services don’t do enough to tackle illegal content, particularly child sexual abuse material. Their solution? Bills that ostensibly seek to make the internet safer, but instead achieve the exact opposite by requiring websites and apps to proactively prevent harmful content from appearing on messaging services. This leads to the universal scanning of all user content, all the time, and functions as a 21st-century form of prior restraint—violating the very essence of free speech.

4. Unchecked cross-border data sharing disguised as cybercrime protections 

Personal data must be safeguarded against exploitation by any government to prevent abuse of power and transnational repression. Yet, the broad scope of the proposed UN Cybercrime Treaty could be exploited for covert surveillance of human rights defenders, journalists, and security researchers. As the Treaty negotiations approach their conclusion, we are advocating against granting broad cross-border surveillance powers for investigating any alleged crime, ensuring it doesn't empower regimes to surveil individuals in countries where criticizing the government or other speech-related activities are wrongfully deemed criminal.

5. Internet access being used as a bargaining chip in conflicts and geopolitical battles

Given the proliferation of the internet and its use in pivotal social and political moments, governments are very aware of their power to cut off that access. The internet keeps the flow of information active and keeps people alert to new realities. In wartime, being able to communicate may ultimately mean the difference between life and death. Shutting down access aids state violence and stifles free speech. Access to the internet shouldn't be used as a bargaining chip in geopolitical battles.

Digital Rights for LGBTQ+ People: 2023 Year in Review

January 1, 2024 at 08:16

An increase in anti-LGBTQ+ intolerance is impacting individuals and communities both online and offline across the globe. Throughout 2023, several countries sought to pass explicitly anti-LGBTQ+ initiatives restricting freedom of expression and privacy. This fuels offline intolerance against LGBTQ+ people, and forces them to self-censor their online expression to avoid being profiled, harassed, doxxed, or criminally prosecuted. 

One growing threat to LGBTQ+ people is data surveillance. Across the U.S., a growing number of states have prohibited transgender youths from obtaining gender-affirming health care, and some have restricted access for transgender adults. For example, the Texas Attorney General is investigating a hospital for providing gender-affirming health care to transgender youths. We can expect anti-trans investigators to use the tactics of anti-abortion investigators, including seizure of internet browsing history and private messages.

It is imperative that businesses are prevented from collecting and retaining this data in the first place, so that it cannot later be seized by police and used as evidence. Legislators should start with Rep. Jacobs’ My Body, My Data bill. We also need new laws to ban reverse warrants, which police can use to identify every person who searched for the keywords “how do I get gender-affirming care,” or who was physically located near a trans health clinic. 

Moreover, LGBTQ+ expression was targeted by U.S. student monitoring tools like GoGuardian, Gaggle, and Bark. The tools scan web pages and documents in students’ cloud drives for keywords about topics like sex and drugs, which are subsequently blocked or flagged for review by school administrators. Numerous reports show regular flagging of LGBTQ+ content. This creates a harmful atmosphere for students; for example, some have been outed because of it. In a positive move, Gaggle recently removed LGBTQ+ terms from its keyword list, and GoGuardian has done the same. But LGBTQ+ resources are still commonly flagged for containing words like "sex," "breasts," or "vagina." Student monitoring tools must remove all terms from their blocking and flagging lists that trigger scrutiny and erasure of sexual and gender identity.
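The overblocking problem is structural: keyword filters have no sense of context. A toy sketch below (hypothetical word list and matching logic, not any vendor’s actual code) shows how a legitimate health resource gets flagged:

```python
# Toy keyword filter in the style of student monitoring tools.
# The flag list and matching logic are hypothetical, not a vendor's code.
FLAG_WORDS = {"sex", "breasts", "vagina"}

def flagged_terms(text: str) -> set:
    """Flag any document containing a listed word, blind to context."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return words & FLAG_WORDS

health_resource = "What teens should know about breasts, binding, and body image."
print(flagged_terms(health_resource))
# -> {'breasts'}: a supportive health resource is flagged
#    exactly the same way explicit content would be.
```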

Looking outside the U.S., LGBTQ+ rights were gravely threatened by expansive cybercrime and surveillance legislation in the Middle East and North Africa throughout 2023. For example, the Cybercrime Law of 2023 in Jordan, introduced as part of King Abdullah II’s modernization reforms, will negatively impact LGBTQ+ people by restricting encryption and anonymity in digital communications, and criminalizing free speech through overly broad and vaguely defined terms. During debates on the bill in the Jordanian Parliament, some MPs claimed that the new cybercrime law could be used to criminalize LGBTQ+ individuals and content online. 

For many countries across Africa, and indeed the world, anti-LGBTQ+ discourses and laws can be traced back to colonial rule. These laws have been used to imprison, harass, and intimidate LGBTQ+ individuals. In May 2023, Ugandan President Yoweri Museveni signed into law the extremely harsh Anti-Homosexuality Act 2023. It imposes, for example, a 20-year sentence for the vaguely worded offense of “promoting” homosexuality. Such laws are not only an assault on the rights of LGBTQ+ people to exist, but also a grave threat to freedom of expression. They lead to more censorship and surveillance of online LGBTQ+ speech, the latter of which will lead to more self-censorship, too.

Ghana’s draft Promotion of Proper Human Sexual Rights and Ghanaian Family Values Bill 2021 goes much further. It threatens up to five years in jail to anyone who publicly identifies as LGBTQ+ or “any sexual or gender identity that is contrary to the binary categories of male and female.” The bill assigns criminal penalties for speech posted online, and threatens online platforms—specifically naming Twitter, Facebook, and Instagram—with criminal penalties if they do not restrict pro-LGBTQ+ content. If passed, Ghanaian authorities could also probe the social media accounts of anyone applying for a visa for pro-LGBTQ+ speech or create lists of pro-LGBTQ+ supporters to be arrested upon entry. EFF this year joined other human rights groups to oppose this law.

Taking inspiration from Uganda and Ghana, a new proposed law in Kenya—the Family Protection Bill 2023—would impose ten years imprisonment for homosexuality, and life imprisonment for “aggravated homosexuality.” The bill also allows for the expulsion of refugees and asylum seekers who breach the law, irrespective of whether the conduct is connected with asylum requests. Kenya today is the sole country in East Africa to accept LGBTQ+ individuals seeking refuge and asylum without questioning their sexual orientation; sadly, that may change. EFF has called on the authorities in Kenya and Ghana to reject their respective repulsive bills, and for authorities in Uganda to repeal the Anti-Homosexuality Act.

2023 was a challenging year for the digital rights of LGBTQ+ people. But we are optimistic that in the year to come, LGBTQ+ people and their allies, working together online and off, will make strides against censorship, surveillance, and discrimination.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Fighting European Threats to Encryption: 2023 Year in Review 

Private communication is a fundamental human right. In the online world, the best tool we have to defend this right is end-to-end encryption. Yet throughout 2023, politicians across Europe attempted to undermine encryption, seeking to access and scan our private messages and pictures. 

But we pushed back in the EU, and so far, we’ve succeeded. EFF spent this year fighting hard against an EU proposal that, if it became law, would have been a disaster for online privacy in the EU and throughout the world. In the name of fighting online child abuse, the European Commission, the EU’s executive body, put forward a draft bill that would allow EU authorities to compel online services to scan user data and check it against law enforcement databases. The proposal would have pressured online services to abandon end-to-end encryption. The Commission even suggested using AI to rifle through peoples’ text messages, leading some opponents to call the proposal “chat control.”

EFF has been opposed to this proposal since it was unveiled last year. We joined together with EU allies and urged people to sign the “Don’t Scan Me” petition. We lobbied EU lawmakers and urged them to protect their constituents’ human right to have a private conversation—backed up by strong encryption. 

Our message broke through. In November, a key EU committee adopted a position that bars mass scanning of messages and protects end-to-end encryption. It also bars mandatory age verification, which would have amounted to a mandate to show ID before you get online; age verification can erode a free and anonymous internet for both kids and adults. 

We’ll continue to monitor the EU proposal as attention shifts to the Council of the EU, the second decision-making body of the EU. Despite several Member States still supporting widespread surveillance of citizens, there are promising signs that such a measure won’t get majority support in the Council. 

Make no mistake—the hard-fought compromise in the European Parliament is a big victory for EFF and our supporters. The governments of the world should understand clearly: mass scanning of peoples’ messages is wrong, and at odds with human rights. 

A Wrong Turn in the U.K.

EFF also opposed the U.K.’s Online Safety Bill (OSB), which passed and became the Online Safety Act (OSA) this October, after more than four years on the British legislative agenda. The stated goal of the OSB was to make the U.K. the world’s “safest place” to use the internet, but the bill’s more than 260 pages actually outline a variety of ways to undermine our privacy and speech. 

The OSA requires platforms to take action to prevent individuals from encountering certain illegal content, which will likely mandate the use of intrusive scanning systems. Even worse, it empowers the British government, in certain situations, to demand that online platforms use government-approved software to scan for illegal content. The U.K. government has said that scanning would be limited to specific categories of content. In one of the final OSB debates, a representative of the government noted that orders to scan user files “can be issued only where technically feasible,” as determined by the U.K. communications regulator, Ofcom.

But as we’ve said many times, there is no middle ground on content scanning and no “safe backdoor” if the internet is to remain free and private. Either all content is scanned and all actors—including authoritarian governments and rogue criminals—have access, or no one does.

Despite our opposition, mounted in close partnership with civil society groups in the UK, the bill passed in September with its anti-encryption measures intact. But the story doesn't end here. The OSA remains vague about what exactly it requires of platforms and users alike. Ofcom must now take the OSA and, over the coming year, draft regulations to operationalize the legislation.

The public understands better than ever that government efforts to “scan it all” will always undermine encryption, and prevent us from having a safe and secure internet. EFF will monitor Ofcom’s drafting of these regulations, and we will continue to hold the UK government accountable to the international and European human rights protections to which it is a signatory.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Corporate Spy Tech and Inequality: 2023 Year in Review

December 24, 2023 at 12:30

Our personal data, and the ways private companies harvest and monetize it, play an increasingly powerful role in modern life. Throughout 2023, corporations have continued to collect our personal data, sell it to governments, use it to draw inferences about us, and exacerbate existing structural inequalities across society.

EFF is fighting back. Earlier this year, we filed comments with the U.S. National Telecommunications and Information Administration addressing the ways that corporate data surveillance practices cause discrimination against people of color, women, and other vulnerable groups. As we explained there, data privacy legislation is civil rights legislation. And we need it now.

In early October, a bad actor claimed they were selling data stolen from the genetic testing service 23andMe. The initial trove included display names, birth years, sex, and some details about genetic ancestry results for one million users of Ashkenazi Jewish descent and another 100,000 users of Chinese descent. By mid-October, this had expanded to another four million accounts. It's still unclear whether the thieves deliberately targeted users based on race or religion. EFF provided guidance to users about how to protect their accounts.

When it comes to corporate data surveillance, users’ incomes can alter their threat models. Lower-income people are often less able to avoid corporate harvesting of their data: some lower-priced technologies collect more data than others, while some come with malicious programs pre-installed. This year, we investigated the low-budget Dragon Touch KidzPad Y88X 10 kid’s tablet, bought from online vendor Amazon, and found that it shipped with malware and pre-installed riskware. Likewise, lower-income people may suffer the most from data breaches, because it costs money and takes considerable time to freeze and monitor credit reports, and to obtain identity theft prevention services.

Disparities in whose data is collected by corporations lead to disparities in whose data is sold by corporations to government agencies. As we explained this year, even the U.S. Director of National Intelligence thinks the government should stop buying corporate surveillance data. Structural inequalities affect whose data is purchased by governments. And when government agencies have access to the vast reservoir of personal data that businesses have collected from us, bias is a likely outcome.

This year we’ve also repeatedly blown the whistle on the ways that automakers stockpile data about how we drive—and about where self-driving cars take us. There is an active government and private market for vehicle data, including location data, which is difficult if not impossible to de-identify. Cars can collect information not only about the vehicle itself, but also about what's around the vehicle. Police have seized location data about people attending Black-led protests against police violence and racism. Further, location data can have a disparate impact on certain consumers who may be penalized for living in a certain neighborhood.

Technologies developed by businesses for governments can yield discriminatory results. Take face recognition, for example. Earlier this year, the Government Accountability Office (GAO) published a report highlighting the inadequate and nonexistent rules for how federal agencies use face recognition, underlining what we’ve said over and over again: governments cannot be trusted with this flawed and dangerous technology. The technology all too often simply does not work—particularly when applied to Black people and women. In February, Porcha Woodruff was arrested by six Detroit police officers on charges of robbery and carjacking after face recognition technology incorrectly matched an eight-year-old image of her (from a police database) with video footage of a suspect. The charges were dropped, and she has since filed a lawsuit against the City of Detroit. Her lawsuit joins two others against the Detroit police for incorrect face recognition matches.

Developments throughout 2023 affirm that we need to reduce the amount of data that corporations can collect and sell in order to end the disparate impacts caused by corporate data processing. EFF has repeatedly called for such privacy legislation. To be effective, it must include strong private enforcement, and prohibit “pay for privacy” schemes that hurt lower-income people. In the U.S., states have been more proactive and more willing to consider such protections, so legislation at the federal level must not preempt state legislation. The pervasive ecosystem of data surveillance is a civil rights problem, and as we head into 2024 we must continue treating privacy and civil rights as parts of the same problem.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Debunking the Myth of “Anonymous” Data

November 10, 2023 at 08:49

Today, almost everything about our lives is digitally recorded and stored somewhere. Each credit card purchase, personal medical diagnosis, and preference about music and books is recorded and then used to predict what we like and dislike, and—ultimately—who we are. 

This often happens without our knowledge or consent. Personal information that corporations collect from our online behaviors sells for astonishing profits and incentivizes online actors to collect as much as possible. Every mouse click and screen swipe can be tracked and then sold to ad-tech companies and the data brokers that service them. 

In an attempt to justify this pervasive surveillance ecosystem, corporations often claim to de-identify our data. This supposedly removes all personal information (such as a person’s name) from the data point (such as the fact that an unnamed person bought a particular medicine at a particular time and place). Personal data can also be aggregated, whereby data about multiple people is combined with the intention of removing personal identifying information and thereby protecting user privacy. 

Sometimes companies say our personal data is “anonymized,” implying a one-way ratchet where it can never be dis-aggregated and re-identified. But this is not possible—anonymous data rarely stays this way. As Professor Matt Blaze, an expert in the field of cryptography and data privacy, succinctly summarized: “something that seems anonymous, more often than not, is not anonymous, even if it’s designed with the best intentions.” 

Anonymization…and Re-Identification?

Personal data can be considered on a spectrum of identifiability. At the top is data that can directly identify people, such as a name or state identity number, which can be referred to as “direct identifiers.” Next is information indirectly linked to individuals, like personal phone numbers and email addresses, which some call “indirect identifiers.” After this comes data connected to multiple people, such as a favorite restaurant or movie. The other end of this spectrum is information that cannot be linked to any specific person—such as aggregated census data, and data that is not directly related to individuals at all, like weather reports.

Data anonymization is often undertaken in two ways. First, some personal identifiers, like our names and social security numbers, might be deleted. Second, other categories of personal information might be modified—such as obscuring our bank account numbers. For example, the Safe Harbor provision contained within the U.S. Health Insurance Portability and Accountability Act (HIPAA) permits only the first three digits of a ZIP code to be reported in scrubbed data.
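To make those two steps concrete, here is a minimal sketch in Python. The record and its field names are hypothetical, and real Safe Harbor compliance involves many more rules (for instance, three-digit ZIP prefixes covering small populations must be zeroed out entirely); the point is simply that direct identifiers are deleted while quasi-identifiers are generalized rather than removed:

    # A minimal sketch of de-identification; field names are hypothetical.
    DIRECT_IDENTIFIERS = {"name", "ssn", "email", "phone"}

    def scrub_record(record: dict) -> dict:
        """Drop direct identifiers, then generalize the ZIP code."""
        scrubbed = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
        if "zip" in scrubbed:
            # Safe Harbor-style generalization: keep only the first three digits.
            scrubbed["zip"] = scrubbed["zip"][:3]
        return scrubbed

    record = {"name": "Jane Doe", "ssn": "123-45-6789",
              "zip": "94110", "purchase": "insulin"}
    print(scrub_record(record))  # {'zip': '941', 'purchase': 'insulin'}

Notice what survives the scrubbing: the generalized ZIP code and the sensitive fact itself, which is exactly the combination the next paragraphs show can be re-identified.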

However, in practice, any attempt at de-identification requires removal not only of your identifiable information, but also of information that can identify you when considered in combination with other information known about you. Here's an example: 

  • First, think about the number of people that share your specific ZIP or postal code. 
  • Next, think about how many of those people also share your birthday. 
  • Now, think about how many people share your exact birthday, ZIP code, and gender. 

According to one landmark study, these three characteristics are enough to uniquely identify 87% of the U.S. population. A different study showed that 63% of the U.S. population can be uniquely identified from these three facts.
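The arithmetic behind those findings is easy to reproduce. The sketch below, on an invented toy dataset, counts how many records are the only ones sharing their (ZIP, birthdate, gender) combination; run at national scale, the same computation produces figures like the 87% and 63% cited above:

    from collections import Counter

    # Invented quasi-identifier tuples: (ZIP, birthdate, gender).
    people = [
        ("94110", "1980-03-14", "F"),
        ("94110", "1980-03-14", "F"),  # shares all three attributes with the first
        ("94110", "1991-07-02", "F"),
        ("10001", "1980-03-14", "M"),
    ]

    counts = Counter(people)
    unique = sum(1 for n in counts.values() if n == 1)
    print(f"{unique / len(people):.0%} of records are uniquely identifiable")
    # -> 50% of records are uniquely identifiable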

We cannot trust corporations to self-regulate. The financial benefit and business usefulness of our personal data often outweighs our privacy and anonymity. By re-linking a person’s real identity (direct identifiers) with their preferences and behaviors (indirect identifiers), corporations are able to continue profiting from our most sensitive information. For instance, a website that asks supposedly “anonymous” users for seemingly trivial information about themselves may be able to use that information to build a unique profile of an individual.

Location Surveillance

To understand this system in practice, we can look at location data. This includes the data collected by apps on your mobile device about your whereabouts: from the weekly trips to your local supermarket to your last appointment at a health center, an immigration clinic, or a protest planning meeting. The collection of this location data on our devices is sufficiently precise for law enforcement to place suspects at the scene of a crime, and for juries to convict people on the basis of that evidence. What’s more, whatever personal data is collected by the government can be misused by its employees, stolen by criminals or foreign governments, and used in unpredictable ways by agency leaders for nefarious new purposes. And all too often, such high-tech surveillance disparately burdens people of color.

Practically speaking, there is no way to de-identify individual location data since these data points serve as unique personal identifiers of their own. And even when location data is said to have been anonymized, re-identification can be achieved by correlating de-identified data with other publicly available data like voter rolls or information that's sold by data brokers. One study from 2013 found that researchers could uniquely identify 50% of people using only two randomly chosen time and location data points. 

Done right, aggregating location data can work towards preserving our personal rights to privacy by producing non-individualized counts of behaviors instead of detailed timelines of individual location history. For instance, an aggregation might tell you how many people’s phones reported their location as being in a certain city within the last month, but not the exact phone number and other data points that would connect this directly and personally to you. However, there’s often pressure on the experts doing the aggregation to generate granular aggregate data sets that might be more meaningful to a particular decision-maker but which simultaneously expose individuals to an erosion of their personal privacy.  
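What careful aggregation looks like can be sketched in a few lines. In this illustration (the data, the city-level granularity, and the threshold are all invented examples of ours, not any company’s actual pipeline), per-person location pings are reduced to per-city counts, and any count below a minimum group size is suppressed rather than released:

    from collections import Counter

    # Hypothetical pings observed during one month: (user_id, city).
    pings = [("u1", "Oakland"), ("u2", "Oakland"), ("u3", "Oakland"),
             ("u4", "Oakland"), ("u5", "Berkeley")]

    K = 3  # minimum group size a count must reach before it is released

    city_counts = Counter(city for _, city in pings)
    released = {city: n for city, n in city_counts.items() if n >= K}
    print(released)  # {'Oakland': 4} -- the Berkeley count is suppressed

Lowering K yields the more “granular” data sets described above, which is precisely where the erosion of individual privacy creeps back in.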

Moreover, most third-party location tracking is designed to build profiles of real people. This means that every time a tracker collects a piece of information, it needs something to tie that information to a particular person. This can happen indirectly by correlating collected data with a particular device or browser, which might later correlate to one person or a group of people, such as a household. Trackers can also use artificial identifiers, like mobile ad IDs and cookies, to reach users with targeted messaging. And “anonymous” profiles of personal information can nearly always be linked back to real people—including where they live, what they read, and what they buy.

For data brokers dealing in our personal information, our data can either be useful for their profit-making or truly anonymous, but not both. EFF has long opposed location surveillance programs that can turn our lives into open books for scrutiny by police, surveillance-based advertisers, identity thieves, and stalkers. We’ve also long blown the whistle on phony anonymization.

As a matter of public policy, it is critical that user privacy is not sacrificed in favor of filling the pockets of corporations. And for any data sharing plan, consent is critical: did each person consent to the method of data collection, and did they consent to the particular use? Consent must be specific, informed, opt-in, and voluntary. 

Internet Access Shouldn't Be a Bargaining Chip In Geopolitical Battles

We at EFF are horrified by the events transpiring in the Middle East: Hamas’ deadly attack on southern Israel last weekend and Israel’s ongoing retributive military attack and siege on Gaza. While we are not experts in military strategy or international diplomacy, we do have expertise in how human rights and civil liberties should be protected on the internet—even in times of conflict and war.

That is why we are deeply concerned that a key part of Israel’s response has been to target telecommunications infrastructure in Gaza, including effectively shutting down the internet.

Here are a few reasons why:  

Shutting down telecommunications deprives civilians of a life-saving tool for sharing information when they need it the most. 

In wartime, being able to communicate directly with the people you trust is instrumental to personal safety and protection, and may ultimately mean the difference between life and death. But right now, the millions of people in Gaza, who are already facing a dire humanitarian crisis, are experiencing oppressive limitations on their access to the internet—stifling their ability to find out where their families are, obtain basic information about resources and any promised humanitarian aid, share news of safer border crossings, and exchange other crucial information.

The internet was built, in part, to make sure that communications like this are possible. And despite its use for spreading harmful content and misinformation, the internet is especially vital in moments of war and conflict, when sharing and receiving real-time, up-to-date information is critical for survival. For example, what was previously a safe escape route may no longer be safe even a few hours later, and news printed in a broadsheet may no longer be reliable or relevant the following day.

The internet enables this flow of information to remain active and alert to new realities. Shutting down access to internet services creates impossible obstacles for the millions of people trapped in Gaza. It is eroding access to the lifeline that millions of civilians need to stay alive.  

Shutting down telecommunications will not silence Hamas.  

We also understand the impulse to respond to Hamas’ shocking use of the internet to terrorize Israelis, including by taking over the Facebook pages of people they have taken hostage to live stream and post horrific footage. We urge social media and other platforms to act quickly when such takeovers occur, which they typically can already do under their respective terms of use. But the Israeli government’s reaction of shutting down all internet communications in Gaza is a wrongheaded response, and one that will impact exactly the wrong people.

Hamas is sufficiently well-resourced to maneuver through any infrastructural barriers, including any internet shutdowns imposed by the Israeli government. Further, since Israel isn’t able to limit the voice of Hamas, the internet shutdown effectively allows Hamas to dominate the Palestinian narrative in public discourse—eliminating the voices of activists, journalists, and ordinary people documenting their realities and sharing facts in real time.

Shutting down telecommunications sets a dangerous precedent.  

Given the proliferation of the internet and its use in pivotal social and political moments, governments are very aware of their power to cut off that access. Shutdowns have become a blunt instrument that aids state violence and suppresses free speech, and they are routinely deployed by authoritarian governments that do not care about the rule of law or human rights. For example, limiting access to the internet was a vital component of the Syrian government’s repressive strategy in 2013, and Egyptian President Hosni Mubarak shut down all internet access for five days in 2011 in an effort to impair Egyptians’ ability to coordinate and communicate. As we’ve said before, access to the internet shouldn't be a bargaining chip in geopolitical battles. Instead of protecting the human rights of civilians, Israel has adopted a disproportionate tactic often used by the authoritarian governments of Iran, Russia, and Myanmar.

Israel is a party to the International Covenant on Civil and Political Rights and has long claimed to be committed to upholding and protecting human rights. But shutting off access to telecommunications for millions of ordinary Palestinians is grossly inconsistent with that claim, and instead sends the message that the Israeli government is actively working to ensure that ordinary Palestinians are placed at even greater risk of harm than they already are. It also sends the unmistakable message that the Israeli government is preventing people around the world from learning the truth about its actions in Gaza, a message affirmed by Israel’s other actions, such as approving new regulations that allow it to temporarily shut down news channels deemed to ‘damage national security.’

We call on Israel to stop interfering with the telecommunications infrastructure in Gaza, and to ensure Palestinians from Gaza to the West Bank immediately have unrestricted access to the internet.    


Social Media Platforms Must Do Better When Handling Misinformation, Especially During Moments of Conflict

In moments of political tension and social conflict, people have turned to social media to share information, speak truth to power, and report uncensored information from their communities. Just over a decade ago, social media was celebrated widely as a booster—if not a catalyst—for the democratic uprisings that swept the Middle East, North Africa, Spain, and elsewhere. That narrative was always more complex than popular media made it out to be, and these platforms always had a problem sifting out misinformation from facts. But in those early days, social media was a means for disenfranchised and marginalized individuals, long overlooked by mainstream media, to be heard around the world. Often, for the first time. 

Yet in the wake of Hamas’ deadly attack on southern Israel last weekend—and Israel’s ongoing retributive military attack and siege on Gaza—misinformation has been thriving on social media platforms. In particular, on X (formerly known as Twitter), a platform stripped of its once-robust policies and moderation teams under owner Elon Musk and left exposed to the spread of information that is false (misinformation) and deliberately misleading or biased (disinformation).

It can be difficult to parse out verified information from information that has been misconstrued, misrepresented, or manipulated. And the entwining of authentic details and real newsworthy events with old footage or manufactured information can lead to information genuinely worthy of record—such as a military strike in an urban area—becoming associated with a viral falsehood. Indeed, Bellingcat—an organization that was founded amidst the Syrian war and has long investigated mis- and disinformation in the region—found one recent case where a widely shared video was dismissed as showing something false, but further investigation revealed that although the video itself was inauthentic, the claim in the text of the post was accurate and highly newsworthy.

As we’ve said many, many times, content moderation does not work at scale, and there is no perfect way to remove false or misleading information from a social media site. But platforms like X have backslid over the past year on a number of measures. Once a relative leader in transparency and content moderation, X has been criticized for failing to remove hate speech and has disabled features that allow users to report certain types of misinformation. Last week, NBC reported that the platform’s Community Notes feature was publishing notes so slowly that corrections to known disinformation were delayed for days. Similarly, TikTok and Meta have implemented lackluster strategies to monitor the nature of content on their services.

But there are steps that social media platforms can take to increase the likelihood that their sites are places where reliable information is available—particularly during moments of conflict. 

Platforms should:

  • have robust trust and safety mechanisms in place that are proportionate to the volume of posts on their site to address misinformation, and vet and respond to user and researcher complaints; 
  • ensure their content moderation practices are transparent, consistent, and sufficiently resourced in all locations where they operate and in all relevant languages; 
  • employ independent, third-party fact-checking, including to content posted by States and government representatives;
  • urge users to read articles and evaluate their reliability before boosting them through their own accounts; 
  • subject their systems of moderation to independent audits to assess their reliability; and
  • adhere to the Santa Clara Principles on Transparency and Accountability in Content Moderation and provide users with transparency, notice, and appeals in every instance, including misinformation and violent content. 

International companies like X and Meta are also subject to the European Union’s Digital Services Act, which imposes obligations on large platforms to employ robust procedures for removing illegal content and tackling systemic risks and abuse. Last week, European Commissioner for the Internal Market Thierry Breton urged TikTok, warned Meta, and called on Elon Musk to urgently prevent the dissemination of disinformation and illegal content on their sites, and to ensure that proportionate and appropriate measures are in place to guarantee user safety and security online. While these actions serve as a warning to platforms that the European Commission is closely monitoring them and considering formal proceedings, we strongly disagree with the approach of politicizing the DSA to negotiate speech rules with platforms and of mandating the swift removal of content that is not necessarily illegal.

Make no mistake: mis- and disinformation can readily work their way into the broader public dialogue. Take, for example, the allegation that Hamas “decapitated babies and toddlers.” This was unverified, yet it inflamed users on social media and led more than five leading newspapers in the UK to print the story on their front pages. The allegation was further legitimized when President Biden claimed to have seen “confirmed pictures of terrorists beheading children.” The White House later walked back this claim. Israeli officials have since reported that they cannot confirm babies were beheaded by Hamas.

Another instance involves the horrific allegations of rape and the deliberate targeting of women and the elderly during the Saturday attack, which have been repeated on social media as well as by numerous political figures, celebrities, and media outlets, including Senator Marco Rubio, Newsweek, the Los Angeles Times, and the Denver Post. President Biden repeated the claims in a speech after speaking with Israeli Prime Minister Netanyahu. The origin of the claims is unclear, but they are likely to have originated on social media. The Israeli Defense Force told the Forward that it “does not yet have any evidence of rape having occurred during Saturday’s attack or its aftermath.”

Hamas is also poised to exploit the lack of moderation on X, as a spokesperson for the group told the New York Times. Because Hamas has long been designated by the United States and the EU as a terrorist organization, X has addressed Hamas content, stating that the company is working with the Global Internet Forum to Counter Terrorism (GIFCT) to prevent its distribution and that of other designated terrorist organizations. Still, the group has vowed to continue broadcasting executions, though it did not state on which platform it would do so. 

We are all vulnerable to believing and passing on misinformation. Ascertaining the accuracy of information can be difficult for users during conflicts when channels of communication are compromised, and the combatants, as well as their supporters, have self-interests in circulating propaganda. But these challenges do not excuse platforms from employing effective systems of moderation to tackle mis- and disinformation. And without adequate guardrails for users and robust trust and safety mechanisms, this will not be the last instance where unproven allegations have such dire implications—both online and offline.

EFF's Comment to the Meta Oversight Board on Polish Anti-Trans Facebook Post 

September 27, 2023 at 11:33

EFF recently submitted comments in response to the Meta Oversight Board’s request for input on a Facebook post in Polish from April 2023 that targeted trans people. The Oversight Board was created by Meta in 2020 as an appellate body and has 22 members from around the world who review contested content moderation decisions made by the platform.  

Our comments address how Facebook’s automated systems failed to prioritize the content for human review. From our observations—and the research of many within the digital rights community—this is a common deficiency, made worse during the pandemic, when Meta decreased the number of workers moderating content on its platforms. In this instance, the post was eventually sent for human review, but it was assessed to be non-violating and therefore not escalated further. Facebook kept the post online despite 11 different users reporting it 12 times, and removed it only after the Oversight Board decided to take the case for review.

As EFF has demonstrated, Meta has at times over-removed legal LGBTQ+ related content while simultaneously keeping content online that depicts hate speech toward the LGBTQ+ community. This is often because the content—as in this specific case—is not an explicit depiction of such hate speech, but rather a message embedded in a wider context that automated content moderation tools and inadequately trained human moderators are simply not equipped to consider. These tools do not have the ability to recognize nuance or the context of statements, and human reviewers are not given the training to remove content that depicts hate speech beyond a basic slur.

This incident serves as part of the growing body of evidence that Facebook’s systems are inadequate in detecting seriously harmful content, particularly that which targets marginalized and vulnerable communities. Our submission looks at the various reasons for these shortcomings and makes the case that Facebook should have removed the content—and should keep it offline.

Read the full submission in the PDF below.
