
Canada’s Leaders Must Reject Overbroad Age Verification Bill

Canadian lawmakers are considering a bill, S-210, that’s meant to benefit children, but would sacrifice the security, privacy, and free speech of all internet users.

First introduced in 2023, S-210 seeks to prevent young people from encountering sexually explicit material by requiring all commercial internet services that “make available” explicit content to adopt age verification services. Typically, these services will require people to show government-issued ID to get on the internet. According to the bill’s authors, this is needed to prevent harms like the “development of pornography addiction” and “the reinforcement of gender stereotypes and the development of attitudes favorable to harassment and violence…particularly against women.”

The motivation is laudable, but requiring people of all ages to show ID to get online won’t help women or young people. If S-210 isn't stopped before it reaches the third reading and final vote in the House of Commons, Canadians will be forced into a repressive and unworkable age verification regime.

Flawed Definitions Would Encompass Nearly the Entire Internet 

The bill’s scope is vast. S-210 creates legal risk not just for those who sell or intentionally distribute sexually explicit materials, but also for those who merely transmit it, knowingly or not.

Internet infrastructure intermediaries, which often do not know what type of content they are transmitting, would also be liable, as would services ranging from social media sites to search engines and messaging platforms. Each would be required to implement age verification or prevent access by any user whose age is not verified, unless it can claim the material serves a “legitimate purpose related to science, medicine, education or the arts.”

Basic internet infrastructure shouldn’t be regulating content at all, but S-210 doesn’t make that distinction. When these large services learn they are hosting or transmitting sexually explicit content, most will simply ban or remove it outright, using both automated tools and hasty human decision-making. History shows that when platforms seek to ban sexual content, over-censorship is inevitable.

Rules banning sexual content usually hurt marginalized communities and groups that serve them the most. That includes organizations that provide support and services to victims of trafficking and child abuse, sex workers, and groups and individuals promoting sexual freedom.

Promoting Dangerous Age Verification Methods 

S-210 notes that “online age-verification technology is increasingly sophisticated and can now effectively ascertain the age of users without breaching their privacy rights.”

This premise is simply wrong. No current technology can verify users’ ages while protecting their privacy. The bill does not specify what technology must be used, leaving that to subsequent regulation, but the age verification systems that exist today are deeply problematic. It is far too likely that any such regulation would embrace tools that retain sensitive user data, exposing it to potential sale or hacks, and that lack guardrails preventing companies from doing whatever they like with the data once collected.

We’ve said it before: age verification systems are surveillance systems. Users have no way to be certain that the data they hand over will not be retained and used in unexpected ways, or even shared with unknown third parties. The bill asks companies to maintain user privacy and destroy any personal data they collect, but doesn’t back up that request with meaningful penalties. That’s not good enough.

Companies responsible for storing or processing sensitive documents like drivers’ licenses can encounter data breaches, potentially exposing not only personal data about users, but also information about the sites that they visit.

Finally, age verification systems that depend on government-issued identification altogether exclude Canadians who do not have that kind of ID.

Fundamentally, S-210 leads to the end of anonymous access to the web. Instead, Canadian internet access would become a series of checkpoints that many people simply would not pass, either by choice or because the rules are too onerous.

Dangers for Everyone, But This Can Be Stopped

Canada’s S-210 is part of a wave of proposals worldwide seeking to gate access to sexual content online. Many of these proposals share similar flaws, and S-210 is among the worst. Both Australia and France have paused the rollout of age verification systems after finding that such systems could neither sufficiently protect individuals’ data nor address online harms on their own. Canada should take note of these concerns.

It's not too late for Canadian lawmakers to drop S-210. It’s what has to be done to protect the future of a free Canadian internet. At the very least, the bill’s broad scope must be significantly narrowed to protect user rights.

We Called on the Oversight Board to Stop Censoring “From the River to the Sea” — And They Listened

Earlier this year, the Oversight Board announced a review of three cases involving different pieces of Facebook content containing the phrase “From the River to the Sea.” EFF submitted comments to the consultation urging Meta to make individualized moderation decisions on this content rather than impose a blanket ban, because the phrase can be a historical call for Palestinian liberation rather than an incitement to hatred in violation of Meta’s community standards.

We’re happy to see that the Oversight Board agreed. In last week’s decision, the Board found that the three pieces of examined content did not break Meta’s rules on “Hate Speech, Violence and Incitement or Dangerous Organizations and Individuals.” Instead, these uses of the phrase “From the River to the Sea” were found to be an expression of solidarity with Palestinians and not an inherent call for violence, exclusion, or glorification of designated terrorist group Hamas. 

The Oversight Board decision follows Meta’s original action to keep the content online. In each of the three cases, users appealed to Meta to remove the content, but the company’s automated tools closed the appeals without human review and kept the content on Facebook. Users subsequently appealed to the Board and called for the content to be removed. The material included a comment using the hashtag #fromtherivertothesea, a video depicting floating watermelon slices forming the phrases “From the River to the Sea” and “Palestine will be free,” and a reshared post declaring support for the Palestinian people.

As we’ve said many times, content moderation at scale does not work. Nowhere is this truer than on Meta services like Facebook and Instagram, where the vast amount of material posted has incentivized the corporation to rely on flawed automated decision-making tools and inadequate human review. But this is a rare occasion where Meta’s original decision to carry the content, and the Oversight Board’s subsequent decision supporting it, uphold our fundamental right to free speech online.

The tech giant must continue examining content referring to “From the River to the Sea” on an individualized basis, and we continue to call on Meta to recognize its wider responsibilities to the global user base to ensure people are free to express themselves online without biased or undue censorship and discrimination.

Digital Apartheid in Gaza: Big Tech Must Reveal Their Roles in Tech Used in Human Rights Abuses

This is part two of an ongoing series. Part one, on unjust content moderation, is here.

Since the start of the Israeli military response to Hamas’ deadly October 7 attack, U.S.-based companies like Google and Amazon have been under pressure to reveal more about the services they provide and the nature of their relationships with the Israeli forces engaging in the military response. 

We agree. Without greater transparency, the public cannot tell whether these companies are complying with human rights standards—both those set by the United Nations and those they have publicly set for themselves. We know that this conflict has resulted in alleged war crimes and has involved massive, ongoing surveillance of civilians and refugees living under what international law recognizes as an illegal occupation. That kind of surveillance requires significant technical support and it seems unlikely that it could occur without any ongoing involvement by the companies providing the platforms.  

Google's Human Rights statement claims that “In everything we do, including launching new products and expanding our operations around the globe, we are guided by internationally recognized human rights standards. We are committed to respecting the rights enshrined in the Universal Declaration of Human Rights and its implementing treaties, as well as upholding the standards established in the United Nations Guiding Principles on Business and Human Rights (UNGPs) and in the Global Network Initiative Principles (GNI Principles).” Google goes further in the case of AI technologies, promising not to design or deploy AI in technologies that are likely to facilitate injuries to people, gather or use information for surveillance, or be used in violation of human rights, or even where the use is likely to cause overall harm.

Amazon states that it is "Guided by the United Nations Guiding Principles on Business and Human Rights," and that its “approach on human rights is informed by international standards; we respect and support the Core Conventions of the International Labour Organization (ILO), the ILO Declaration on Fundamental Principles and Rights at Work, and the UN Universal Declaration of Human Rights.”

It is time for Google and Amazon to tell the truth about the use of their technologies in Gaza so that everyone can see whether their human rights commitments were real or simply empty promises.

Concerns about Google and Amazon Facilitating Human Rights Abuses  

The Israeli government has long procured surveillance technologies from corporations based in the United States. Most recently, an August investigation by +972 and Local Call revealed that the Israeli military has been storing intelligence information on Amazon Web Services (AWS) cloud servers because the scale of data collected through mass surveillance of Palestinians in Gaza proved too large for military servers alone. The same article reported that the commander of Israel’s Center of Computing and Information Systems unit—responsible for providing data processing for the military—confirmed in an address to military and industry personnel that the Israeli army had been using cloud storage and AI services provided by civilian tech companies, with the logos of AWS, Google Cloud, and Microsoft Azure appearing in the presentation.

This is not the first time Google and Amazon have been involved in providing civilian tech services to the Israeli military, nor is it the first time that questions have been raised about whether that technology is being used to facilitate human rights abuses. In 2021, Google and Amazon Web Services signed a $1.2 billion joint contract with the Israeli military called Project Nimbus to provide cloud services and machine learning tools located within Israel. In an official announcement for the partnership, the Israeli Finance Ministry said that the project sought to “provide the government, the defense establishment and others with an all-encompassing cloud solution.” Under the contract, Google and Amazon reportedly cannot prevent particular agencies of the Israeli government, including the military, from using their services.

Not much is known about the specifics of Nimbus. Google has publicly stated that the project is not aimed at military uses, yet the Israeli military publicly credits Nimbus with assisting it in conducting the war. Reports note that the project involves Google establishing a secure instance of the Google Cloud in Israel. According to Google documents from 2022, Google’s Cloud services include object tracking, AI-enabled face recognition and detection, and automated image categorization. Google signed a new consulting deal with the Israeli Ministry of Defense based around the Nimbus platform in March 2024, so Google can’t claim it’s simply caught up in the changed circumstances since 2021.

Alongside Project Nimbus, an anonymous Israeli official reported that the Israeli military deploys face recognition dragnets across the Gaza Strip using two tools that have facial recognition/clustering capabilities: one from Corsight, which is a "facial intelligence company," and the other built into the platform offered through Google Photos. 

Clarity Needed 

Based on the sketchy information available, there is clearly cause for concern and a need for the companies to clarify their roles.  

For instance, Google Photos is a general-purpose service and some of the pieces of Project Nimbus are non-specific cloud computing platforms. EFF has long maintained that the misuse of general-purpose technologies alone should not be a basis for liability. But, as with Cisco’s development of a specific module of China’s Golden Shield aimed at identifying the Falun Gong (currently pending in litigation in the U.S. Court of Appeals for the Ninth Circuit), companies should not intentionally provide specific services that facilitate human rights abuses. They must also not willfully blind themselves to how their technologies are being used. 

In short, if their technologies are being used to facilitate human rights abuses, whether in Gaza or elsewhere, these tech companies need to publicly demonstrate how they are adhering to their own Human Rights and AI Principles, which are based in international standards. 

We (and the whole world) are waiting, Google and Amazon. 

EFF and 12 Organizations Tell Bumble: Don’t Sell User Data Without Opt-In Consent

Bumble markets itself as a safe dating app, but it may be selling your deeply personal data unless you opt out—risking your privacy for its profit. Despite repeated requests, Bumble hasn’t confirmed whether it sells or shares user data, and its policy is also unclear about whether all users can delete their data, regardless of where they live. The company has also previously struggled with security vulnerabilities.

So EFF has joined Mozilla Foundation and 11 other organizations urging Bumble to do a better job protecting user privacy.

Bumble needs to respect the privacy of its users and ensure that the company does not disclose a user’s data unless that user opts in to such disclosure. This privacy threat should not be something users have to opt out of. Protecting personal data should be effortless, especially from a company that markets itself as a safe and ethical alternative.

Dating apps collect vast amounts of intimate details about their customers—everything from sexual preferences to precise location—who are often just searching for compatibility and love. This data falling into the wrong hands can come with unacceptable consequences, especially for those seeking reproductive health care, survivors of intimate partner violence, and members of the LGBTQ+ community. For this reason, the threshold for a company collecting, selling, and transferring such personal data—and providing transparency about privacy practices—is high.

The letter urges Bumble to:

  1. Clarify in unambiguous terms whether or not Bumble sells customer data. 
  2. If the answer is yes, identify what data or personal information Bumble sells, and to which partners, identifying particularly if any companies would be considered data brokers. 
  3. Strengthen customers’ consent mechanism to opt-in to the sharing or sale of data, rather than opt-out.

Read the full letter here.

Digital Apartheid in Gaza: Unjust Content Moderation at the Request of Israel’s Cyber Unit

This is part one of an ongoing series. Part two on the role of big tech in human rights abuses is here.

Government involvement in content moderation raises serious human rights concerns in every context. Since October 7, social media platforms have been challenged for the unjustified takedowns of pro-Palestinian content—sometimes at the request of the Israeli government—and a simultaneous failure to remove hate speech towards Palestinians. More specifically, social media platforms have worked with the Israeli Cyber Unit—a government office set up to issue takedown requests to platforms—to remove content considered incitement to violence and terrorism, as well as any promotion of groups widely designated as terrorists.

Many of these relationships predate the current conflict, but they have proliferated since. Between October 7 and November 14, Israeli authorities sent a total of 9,500 takedown requests to social media platforms, 60 percent of which went to Meta, with a reported 94 percent compliance rate.

This is not new. The Cyber Unit has long boasted that its takedown requests result in high compliance rates of up to 90 percent across all social media platforms. It has unfairly targeted Palestinian rights activists, news organizations, and civil society; one such incident prompted Meta’s Oversight Board to recommend that the company “Formalize a transparent process on how it receives and responds to all government requests for content removal, and ensure that they are included in transparency reporting.”

When a platform edits its content at the behest of government agencies, it can leave the platform inherently biased in favor of that government’s favored positions. That cooperation gives government agencies outsized influence over content moderation systems for their own political goals—to control public dialogue, suppress dissent, silence political opponents, or blunt social movements. And once such systems are established, it is easy for the government to use the systems to coerce and pressure platforms to moderate speech they may not otherwise have chosen to moderate.

Alongside government takedown requests, free expression in Gaza has been further restricted by platforms unjustly removing pro-Palestinian content and accounts—interfering with the dissemination of news and silencing voices expressing concern for Palestinians. At the same time, X has been criticized for failing to remove hate speech and has disabled features that allow users to report certain types of misinformation. TikTok has implemented lackluster strategies to monitor the nature of content on their services. Meta has admitted to suppressing certain comments containing the Palestinian flag in certain “offensive contexts” that violate its rules.

To combat these consequential harms to free expression in Gaza, EFF urges platforms to follow the Santa Clara Principles on Transparency and Accountability in Content Moderation and undertake the following actions:

  1. Bring local and regional stakeholders into the policymaking process to provide greater cultural competence—knowledge and understanding of local language, culture, and contexts—throughout the content moderation system.
  2. Urgently recognize the particular risks to users’ rights that result from state involvement in content moderation processes.
  3. Ensure that state actors do not exploit or manipulate companies’ content moderation systems to censor dissenters, political opponents, social movements, or any person.
  4. Notify users when, how, and why their content has been actioned, and give them the opportunity to appeal.

Everyone Must Have a Seat at the Table

Given the significant evidence of ongoing human rights violations against Palestinians, both before and since October 7, U.S. tech companies have significant ethical obligations to verify to themselves, their employees, the American public, and Palestinians themselves that they are not directly contributing to these abuses. Palestinians must have a seat at the table, just as Israelis do, when it comes to moderating speech in the region, most importantly their own. Anything less than this risks contributing to a form of digital apartheid.

An Ongoing Issue

This isn’t the first time EFF has raised concerns about censorship in Palestine, including in multiple international forums. Most recently, we wrote to the UN Special Rapporteur on Freedom of Expression expressing concern about the disproportionate impact that restrictions imposed by governments and platforms have on Palestinian expression. In May, we submitted comments to the Oversight Board urging that moderation decisions on the rallying cry “From the river to the sea” be made on an individualized basis rather than through a blanket ban. Along with international and regional allies, EFF has also asked Meta to overhaul its content moderation practices and policies that restrict content about Palestine, and has issued a set of recommendations for the company to implement.

And back in April 2023, EFF and ECNL submitted comments to the Oversight Board addressing Meta’s over-moderation of the word ‘shaheed’ and other Arabic-language content, particularly through the use of automated content moderation tools. In response, the Oversight Board found that Meta’s approach disproportionately restricts free expression and is unnecessary, and that the company should end its blanket ban on content using the word ‘shaheed’.

Beyond Pride Month: Protecting Digital Identities For LGBTQ+ People

The internet provides people space to build communities, shed light on injustices, and acquire vital knowledge that might not otherwise be available. And for LGBTQ+ individuals, digital spaces enable people who are not yet out to explore their gender identity and sexual orientation.

In the age of so much passive surveillance, it can feel daunting if not impossible to achieve any kind of privacy online. We can’t blame you for feeling this way, but there’s plenty you can do to keep your information private and secure. What’s most important is that you think through the specific risks you face and take the right steps to protect against them.

The first step is to create a security plan. Following that, consider some of the recommended advice below and see which steps fit best for your specific needs:  

  • Use multiple browsers for different use cases. Compartmentalization of sensitive data is key. Since many websites are finicky about the type of browser you’re using, it’s normal to have multiple browsers installed on one device. Designate one for more sensitive activities and configure the settings to have higher privacy.
  • Use a VPN to bypass local censorship, defeat local surveillance, and connect your devices securely to the network of an organization on the other side of the internet. This is extra helpful for accessing pro-LGBTQ+ content from locations that ban access to this material.
  • If your cell phone allows it, hide sensitive apps away from the home screen. Although these apps will still be available on your phone, this tucks them into a special folder so that prying eyes are less likely to find them.
  • Separate your digital identities to mitigate the risk of doxxing, as the personal information exposed about you is often found in public places like “people search” sites and social media.
  • Create a security plan for incidents of harassment and threats of violence. Especially if you are a community organizer, activist, or prominent online advocate, you face an increased risk of targeted harassment. Developing a plan of action in these cases is best done well before the threats become credible. It doesn’t have to be perfect; the point is to refer to something you were able to think up clear-headed when not facing a crisis. 
  • Create a plan for backing up images and videos to avoid losing this content in places where governments slow down, disrupt, or shut down the internet, especially during LGBTQ+ events when network disruptions inhibit quick information sharing.
  • Use two-factor authentication where available to make your online accounts more secure by adding a requirement for additional proof (“factors”) alongside a strong password.
  • Obscure people’s faces when posting pictures of protests online (like using tools such as Signal’s in-app camera blur feature) to protect their right to privacy and anonymity, particularly during LGBTQ+ events where this might mean staying alive.
  • Harden security settings in Zoom for large video calls and events, such as enabling security settings and creating a process to remove opportunistic or homophobic people disrupting the call. 
  • Explore protections on your social media accounts, such as switching to private mode, limiting comments, or using tools like blocking users and reporting posts. 
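
The two-factor authentication tip above most often means the time-based one-time password (TOTP) codes that authenticator apps generate. As a minimal sketch of how such a code is derived from a shared secret under RFC 6238 (the function name is our own, and real apps typically store the secret as a Base32 string rather than raw bytes):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password per RFC 6238 (HMAC-SHA1)."""
    counter = for_time // step                       # 30-second intervals since the Unix epoch
    msg = struct.pack(">Q", counter)                 # counter as an 8-byte big-endian integer
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59 yields "287082"
print(totp(b"12345678901234567890", 59))
```

Because the code depends only on the secret and the current time, it works offline and changes every 30 seconds, which is why stealing a password alone isn’t enough to break into a 2FA-protected account.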


Beyond Pride Month: Protections for LGBTQ+ People All Year Round

The end of June concluded LGBTQ+ Pride Month, yet the risks LGBTQ+ people face persist every month of the year. This year, Pride took place amid anti-LGBTQ+ violence, harassment, and vandalism; back in May, US officials warned that LGBTQ+ events around the world might be targeted during Pride Month. Unfortunately, that risk is likely to continue for some time. So too will activist actions, community organizing events, and other happenings related to LGBTQ+ liberation.

We know it feels overwhelming to think about how to keep yourself safe, so here are some quick and easy steps you can take to protect yourself at in-person events, as well as to protect your data—everything from your private messages with friends to your pictures and browsing history.

There is no one-size-fits-all security solution to protect against everything, and it’s important to ask yourself questions about the specific risks you face, balancing their likelihood of occurrence with the impact if they do come about. In some cases, the privacy risks a technology brings may be worth accepting for the convenience it offers. For example, is it a bigger risk to you that phone towers can identify your cell phone’s device ID, or that your phone is off and you can’t contact others in the event of danger? Carefully thinking through these types of questions is the first step in keeping yourself safe. Here’s an easy guide on how to do just that.

Tips For In-Person Events And Protests


For your devices:

  • Enable full disk encryption for your device to ensure all files across your entire device cannot be accessed if taken by law enforcement or others.
  • Install an encrypted messenger app such as Signal (for iOS or Android) to guarantee that only you and your chosen recipient can see and access your communications. Turn on disappearing messages, and consider shortening the amount of time messages are kept in the app when you are actually attending an event. If instead you have a burner device with you, be sure to save the numbers for emergency contacts.
  • Remove biometric device unlock like fingerprint or FaceID to prevent police officers from physically forcing you to unlock your device with your fingerprint or face. You can password-protect your phone instead.
  • Log out of accounts and uninstall apps or disable app notifications to avoid app activity in precarious legal contexts from being used against you, such as using gay dating apps in places where homosexuality is illegal. 
  • Turn off location services on your devices to prevent your location history from being used to identify your device’s comings and goings. For further protection, you can disable GPS, Bluetooth, Wi-Fi, and phone signals when planning to attend a protest.

For you:

  • Wear a mask during a protest, particularly as gathering in large crowds increases the risk of law enforcement deploying violent tactics like tear gas, as well as the possibility of being targeted through face recognition technology.
  • Tell friends or family when you plan to attend and leave an event so that they can follow up to make sure you are safe if there are arrests, harassment, or violence. 
  • Cover your tattoos to reduce the possibility of image recognition technologies like facial recognition, iris recognition and tattoo recognition identifying you.
  • Wear the same clothing as everyone in your group to help hide your identity during the protest and keep you from being identified and tracked afterwards. Dressing in dark, monochrome colors will help you blend into a crowd.
  • Say nothing except to assert your rights if you are arrested. Without a warrant, law enforcement cannot compel you to unlock your devices or answer questions, beyond basic identification in some jurisdictions. Refuse consent to a search of your devices, bags, vehicles, or home, and wait until you have a lawyer before speaking.

Given the increase in targeted harassment and vandalism towards LGBTQ+ people, it’s especially important to consider counterprotesters showing up at various events. Since the boundaries between parade and protest might be blurred, you must take precautions. Our general guide for attending a protest covers the basics for protecting your smartphone and laptop, as well as providing guidance on how to communicate and share information responsibly. We also have a handy printable version available here.

LGBTQ+ Pride is about recognizing our differences and claiming honor in our presence in public spaces. Because of this, it’s an odd thing to have to take careful privacy precautions to keep yourself safe during Pride events. Consider it like you would any aspect of bodily autonomy and self-determination—only you get to decide what aspects of yourself you share with others. You get to decide how you present to the world and what you keep private. With a bit of care, you can maintain privacy, safety, and pride in doing so.

EFF Submission to the Oversight Board on Posts That Include “From the River to the Sea”

As part of the Oversight Board’s consultation on the moderation of social media posts that include reference to the phrase “From the river to the sea, Palestine will be free,” EFF recently submitted comments highlighting that moderation decisions must be made on an individualized basis because the phrase has a significant historical usage that is not hateful or otherwise in violation of Meta’s community standards.

“From the river to the sea, Palestine will be free” is a historical political phrase or slogan referring geographically to the area between the Jordan River and the Mediterranean Sea, an area that includes Israel, the West Bank, and Gaza. Today, the meaning of the slogan for many continues to be one of freedom, liberation, and solidarity against the fragmentation of Palestinians across the land over which the Israeli state currently exercises sovereignty—from Gaza, to the West Bank, and within the Israeli state.

But for others, the phrase is contentious and constitutes support for extremism and terrorism. Hamas—a group designated a terrorist organization by governments such as the United States and the European Union—adopted the phrase in its 2017 charter, leading to the claim that the phrase is solely a call for the extermination of Israel. And since Hamas’ deadly attack on Israel on October 7, 2023, opponents have argued that the phrase is a hateful form of expression targeted at Jews in the West.

But international courts have recognized that despite its co-optation by Hamas, the phrase continues to be used by many as a rallying call for liberation and freedom, with meaning on both a physical and a symbolic level. Censoring the phrase on the basis of a perceived “hidden meaning” of inciting hatred and extremism constitutes an infringement on free speech in those situations.

Meta has a responsibility to uphold the free expression of people using the phrase in its protected sense, especially when those speakers are otherwise persecuted and marginalized. 

Read our full submission here

EFF, Human Rights Organizations Call for Urgent Action in Case of Alaa Abd El Fattah

Following an urgent appeal filed to the United Nations Working Group on Arbitrary Detention (UNWGAD) on behalf of blogger and activist Alaa Abd El Fattah, EFF has joined 26 free expression and human rights organizations calling for immediate action.

The appeal to the UNWGAD was initially filed in November 2023, just weeks after Alaa spent his tenth birthday in prison. The British-Egyptian citizen is one of the most high-profile prisoners in Egypt and has spent much of the past decade behind bars for his pro-democracy writing and activism following Egypt’s revolution in 2011.

EFF and the Media Legal Defence Initiative submitted a similar petition to the UNWGAD on behalf of Alaa in 2014. This led to the Working Group issuing an opinion that Alaa’s detention was arbitrary and calling for his release. In 2016, the UNWGAD declared Alaa's detention (and the law under which he was arrested) a violation of international law, and again called for his release.

We once again urge the UN Working Group to urgently consider the recent petition and conclude that Alaa’s detention is arbitrary and contrary to international law. We also call for the Working Group to find that the appropriate remedy is a recommendation for Alaa’s immediate release.

Read our full letter to the UNWGAD and follow Free Alaa for campaign updates.

Cops Running DNA-Manufactured Faces Through Face Recognition Is a Tornado of Bad Ideas

In keeping with law enforcement’s grand tradition of taking antiquated, invasive, and oppressive technologies, making them digital, and then calling it innovation, police in the U.S. recently combined two existing dystopian technologies in a brand new way to violate civil liberties. A police force in California recently employed the new practice of taking a DNA sample from a crime scene, running it through a service provided by US company Parabon NanoLabs that guesses what the perpetrator’s face looked like, and plugging the rendered image into face recognition software to build a suspect list.

Parts of this process aren't entirely new. On more than one occasion, police forces have been found to have fed images of celebrities into face recognition software to generate suspect lists. In one case from 2017, the New York Police Department decided its suspect looked like Woody Harrelson and ran the actor’s image through the software to generate hits. Further, software provided by US company Vigilant Solutions enables law enforcement to create “a proxy image from a sketch artist or artist rendering” to enhance images of potential suspects so that face recognition software can match these more accurately.

Since 2014, law enforcement agencies have also sought the assistance of Parabon NanoLabs—a company that alleges it can create an image of a suspect’s face from their DNA. Parabon NanoLabs claims to have built this system by training machine learning models on the DNA data of thousands of volunteers, paired with 3D scans of their faces. It is currently the only company offering this phenotyping, and only in concert with a forensic genetic genealogy investigation. The process has yet to be independently audited, and scientists have affirmed that predicting face shapes—particularly from DNA samples—is not possible. But this has not stopped law enforcement officers from seeking to use it, or from running these fabricated images through face recognition software.

Simply put: police are using DNA to create a hypothetical and not at all accurate face, then using that face as a clue on which to base investigations into crimes. Not only is this full dice-roll policing, it also threatens the rights, freedom, or even the life of whoever is unlucky enough to look a little bit like that artificial face.

But it gets worse.

In 2020, a detective from the East Bay Regional Park District Police Department in California asked to have a rendered image from Parabon NanoLabs run through face recognition software. This 3D rendering, called a Snapshot Phenotype Report, predicted that—among other attributes—the suspect was male, had brown eyes, and fair skin. Found in police records published by Distributed Denial of Secrets, this appears to be the first reporting of a detective running an algorithmically-generated rendering based on crime-scene DNA through face recognition software. This puts a second layer of speculation between the actual face of the suspect and the product the police are using to guide investigations and make arrests. Not only is the artificial face a guess; now face recognition (a technology known to misidentify people) will create a “most likely match” for that face.

These technologies, and their reckless use by police forces, are an inherent threat to our individual privacy, free expression, information security, and social justice. Face recognition tech alone has an egregious history of misidentifying people of color, especially Black women, as well as failing to correctly identify trans and nonbinary people. The algorithms are not always reliable, and even if the technology somehow had 100% accuracy, it would still be an unacceptable tool of invasive surveillance capable of identifying and tracking people on a massive scale. Combining this with fabricated 3D renderings from crime-scene DNA exponentially increases the likelihood of false arrests, and exacerbates existing harms to communities that are already disproportionately over-surveilled by face recognition technology and discriminatory policing.

There are no federal rules that prohibit police forces from undertaking these actions. And although the detective’s request violated Parabon NanoLabs’ terms of service, there is seemingly no way to ensure compliance. Pulling together criteria like skin tone, hair color, and gender does not produce an accurate picture of a suspect’s face, and deploying these untested algorithms without any oversight places people at risk of becoming suspects for crimes they didn’t commit. In one case from Canada, the Edmonton Police Service issued an apology over its failure to balance the harms to the Black community against the potential investigative value after using Parabon’s DNA phenotyping services to identify a suspect.

EFF continues to call for a complete ban on government use of face recognition—because otherwise these are the results. How much more evidence do lawmakers need that police cannot be trusted with this dangerous technology? How many more people need to be falsely arrested, and how many more reckless schemes like this one need to be perpetrated, before legislators realize this is not a sustainable method of law enforcement? Cities across the United States have already taken the step to ban government use of this technology, and Montana has specifically recognized a privacy interest in phenotype data. Other cities and states need to catch up, or Congress needs to act, before more people are hurt and our rights are trampled.

EFF and 34 Civil Society Organizations Call on Ghana’s President to Reject the Anti-LGBTQ+ Bill 

MPs in Ghana’s Parliament voted to pass the country’s draconian ‘Promotion of Proper Human Sexual Rights and Ghanaian Family Values Bill’ on February 28th. The bill now heads to Ghana’s President Nana Akufo-Addo to be signed into law. 

EFF has joined 34 civil society organizations to demand that President Akufo-Addo veto the Family Values Bill.

The legislation criminalizes being LGBTQ+ or an ally of LGBTQ+ people, and also imposes custodial sentences for users and social media companies in punishment for vague, ill-defined offenses like promoting “change in public opinion of prohibited acts” on social media. This would effectively ban all speech and activity online and offline that even remotely supports LGBTQ+ rights.

The letter concludes:

“We also call on you to reaffirm Ghana’s obligation to prevent acts that violate and undermine LGBTQ+ people’s fundamental human rights, including the rights to life, to information, to free association, and to freedom of expression.”

Read the full letter here.

Congress Should Give Up on Unconstitutional TikTok Bans

Congress’ unfounded plan to ban TikTok under the guise of protecting our data is back, this time in the form of a new bill—the “Protecting Americans from Foreign Adversary Controlled Applications Act,” H.R. 7521 — which has gained a dangerous amount of momentum in Congress. This bipartisan legislation was introduced in the House just a week ago and is expected to be sent to the Senate after a vote later this week.

A year ago, supporters of digital rights across the country successfully stopped the federal RESTRICT Act, commonly known as the “TikTok Ban” bill (it was that and a whole lot more). And now we must do the same with this bill. 

TAKE ACTION

TELL CONGRESS: DON'T BAN TIKTOK

As a first step, H.R. 7521 would force TikTok to find a new owner that is not based in a foreign adversarial country within the next 180 days or be banned until it does so. It would also give the President the power to designate other applications under the control of a country considered adversarial to the U.S. to be a national security threat. If deemed a national security threat, the application would be banned from app stores and web hosting services unless it cuts all ties with the foreign adversarial country within 180 days. The bill would criminalize the distribution of the application through app stores or other web services, as well as the maintenance of such an app by the company. Ultimately, the result of the bill would either be a nationwide ban on TikTok, or a forced sale of the application to a different company.

Make no mistake—though this law starts with TikTok specifically, it could have an impact elsewhere. Tencent’s WeChat app is one of the world’s largest standalone messenger platforms, with over a billion users, and is a key vehicle for the Chinese diaspora generally. It would likely also be a target. 

The bill’s sponsors have argued that the amount of private data available to and collected by the companies behind these applications — and in theory, shared with a foreign government — makes them a national security threat. But like the RESTRICT Act, this bill won’t stop this data sharing, and will instead reduce our rights online. User data will still be collected by numerous platforms—possibly even TikTok after a forced sale—and it will still be sold to data brokers who can then sell it elsewhere, just as they do now. 

The only solution to this pervasive ecosystem is prohibiting the collection of our data in the first place. Ultimately, foreign adversaries will still be able to obtain our data from social media companies unless those companies are forbidden from collecting, retaining, and selling it, full stop. And to be clear, under our current data privacy laws, there are many domestic adversaries engaged in manipulative and invasive data collection as well. That’s why EFF supports such consumer data privacy legislation.

Congress has also argued that this bill is necessary to tackle the anti-American propaganda that young people are seeing due to TikTok’s algorithm. Both this justification and the national security justification raise serious First Amendment concerns, and last week EFF, the ACLU, CDT, and Fight for the Future wrote to the House Energy and Commerce Committee urging them to oppose this bill due to its First Amendment violations—specifically for those across the country who rely on TikTok for information, advocacy, entertainment, and communication. The US has rightfully condemned other countries when they have banned, or sought to ban, specific social media platforms.

And it’s not just civil society saying this. Late last year, the courts blocked Montana’s TikTok ban, SB 419, from going into effect on January 1, 2024, ruling that the law violated users’ First Amendment rights to speak and to access information online, and the company’s First Amendment rights to select and curate users’ content. EFF and the ACLU had filed a friend-of-the-court brief in support of a challenge to the law brought by TikTok and a group of the app’s users who live in Montana. 

Our brief argued that Montana’s ban was as unprecedented as it was unconstitutional, and we are pleased that the district court upheld our free speech rights and blocked the law from going into effect. As with that state ban, the US government cannot show that a federal ban is narrowly tailored, and thus cannot use the threat of unlawful censorship as a cudgel to coerce a business to sell its property. 

TAKE ACTION

TELL CONGRESS: DON'T BAN TIKTOK

Instead of passing this overreaching and misguided bill, Congress should prevent any company—regardless of where it is based—from collecting massive amounts of our detailed personal data, which is then made available to data brokers, U.S. government agencies, and even foreign adversaries, China included. We shouldn’t waste time arguing over a law that will get thrown out for silencing the speech of millions of Americans. Instead, Congress should solve the real problem of out-of-control privacy invasions by enacting comprehensive consumer data privacy legislation.

Access to Internet Infrastructure is Essential, in Wartime and Peacetime

We’ve been saying it for 20 years, and it remains true now more than ever: the internet is an essential service. It enables people to build and create communities, shed light on injustices, and acquire vital knowledge that might not otherwise be available. And access to it becomes even more imperative in circumstances where being able to communicate and share real-time information directly with the people you trust is instrumental to personal safety and survival. More specifically, during wartime and conflict, internet and phone services enable the communication of information between people in challenging situations, as well as the reporting by on-the-ground journalists and ordinary people of the news. 

Unfortunately, governments across the world are very aware of their power to cut off this crucial lifeline, and frequently undertake targeted initiatives to do so. These internet shutdowns have become a blunt instrument that aid state violence and inhibit free speech, and are routinely deployed in direct contravention of human rights and civil liberties.

And this is not a one-dimensional situation. Nearly twenty years after the world’s first total internet shutdowns, this draconian measure is no longer the sole domain of authoritarian states but has become a favorite of a diverse set of governments across three continents. For example:

In Iran, the government has been suppressing internet access for many years. In the past two years in particular, the people of Iran have suffered repeated internet and social media blackouts following an activist movement that blossomed after the death of Mahsa Amini, a woman murdered in police custody for refusing to wear a hijab. The movement gained global attention, and in response the Iranian government rushed to control both the public narrative and organizing efforts by banning social media and sometimes cutting off internet access altogether.

In Sudan, authorities have enacted a total telecommunications blackout during a massive conflict and displacement crisis. Shutting down the internet is a deliberate strategy that blocks the flow of information bringing visibility to the crisis and prevents humanitarian aid from supporting populations endangered by the conflict. The communications blackout has extended for weeks, and in response a global campaign, #KeepItOn, has formed to put pressure on the Sudanese government to restore its people’s access to these vital services. More than 300 global humanitarian organizations have signed on to support #KeepItOn.

And in Palestine, where the Israeli government exercises near-total control over both wired internet and mobile phone infrastructure, Palestinians in Gaza have experienced repeated internet blackouts inflicted by the Israeli authorities. The latest blackout in January 2024 occurred amid a widespread crackdown by the Israeli government on digital rights—including censorship, surveillance, and arrests—and amid accusations of bias and unwarranted censorship by social media platforms. On that occasion, the internet was restored after calls from civil society and nations, including the U.S. As we’ve noted, internet shutdowns impede residents' ability to access and share resources and information, as well as the ability of residents and journalists to document and call attention to the situation on the ground—more necessary than ever given that a total of 83 journalists have been killed in the conflict so far. 

Given that all of the internet cables connecting Gaza to the outside world go through Israel, the Israeli Ministry of Communications has the ability to cut off Palestinians’ access with ease. The Ministry also allocates spectrum to cell phone companies; in 2015 we wrote about an agreement that delivered 3G to Palestinians years later than the rest of the world. In 2022, President Biden offered to upgrade the West Bank and Gaza to 4G, but the initiative stalled. While some Palestinians are able to circumvent the blackout by utilizing Israeli SIM cards (which are difficult to obtain) or Egyptian eSIMs, these workarounds are not solutions to the larger problem of blackouts, which, as the National Security Council has said, “[deprive] people from accessing lifesaving information, while also undermining first responders and other humanitarian actors’ ability to operate and to do so safely.”

Access to internet infrastructure is essential, in wartime as in peacetime. In light of these numerous blackouts, we remain concerned about the control that authorities are able to exercise over the ability of millions of people to communicate. It is imperative that people’s access to the internet remains protected, regardless of how user platforms and internet companies transform over time. We continue to shout this, again and again, because it needs to be restated, and unfortunately today there are ever more examples of it happening before our eyes.




EFF’s Submission to Ofcom’s Consultation on Illegal Harms

More than four years after it was first introduced, the Online Safety Act (OSA) was passed by the U.K. Parliament in September 2023. The Act seeks to make the U.K. “the safest place” in the world to be online and provides Ofcom, the country’s communications regulator, with the power to enforce this.

EFF has opposed the Online Safety Act since it was first introduced. It will lead to a more censored, locked-down internet for British users. The Act empowers the U.K. government to undermine not just the privacy and security of U.K. residents, but that of internet users worldwide. We joined civil society organizations, security experts, and tech companies to unequivocally ask for the removal of clauses that require online platforms to use government-approved software to scan for illegal content.

Under the Online Safety Act, websites and apps that host content deemed “harmful” to minors will face heavy penalties; the problem, of course, is that views vary on what type of content is “harmful,” in the U.K. as in all other societies. Soon, U.K. government censors will make that decision.

The Act also requires mandatory age verification, which undermines the free expression of both adults and minors. 

Ofcom recently published the first of four major consultations seeking information on how internet and search services should approach their new duties on illegal content. While we continue to oppose the concept of the Act, we are continuing to engage with Ofcom to limit the damage to our most fundamental rights online. 

EFF recently submitted information to the consultation, reaffirming our call on policymakers in the U.K. to protect speech and privacy online. 

Encryption 

For years, we opposed a clause contained in the then Online Safety Bill allowing Ofcom to serve a notice requiring tech companies to scan their users—all of them—for child abuse content. We are pleased to see that Ofcom’s recent statements note that the Online Safety Act will not apply to end-to-end encrypted messages. Encryption backdoors of any kind are incompatible with privacy and human rights.

However, there are places in Ofcom’s documentation where this commitment can and should be clearer. In our submission, we affirmed the importance of ensuring that people can use and benefit from encryption, regardless of the size and type of the online service. The commitment to not scan encrypted data must be firm, regardless of the size of the service or what encrypted services it provides. For instance, Ofcom has suggested that “file-storage and file-sharing” services may be subject to a different risk profile for mandating scanning. But encrypted “communications” are not significantly different from encrypted “file-storage and file-sharing.”

In this context, Ofcom should also take note of the milestone judgment in PODCHASOV v. RUSSIA (Application no. 33696/19), in which the European Court of Human Rights (ECtHR) ruled that weakening encryption can lead to general and indiscriminate surveillance of the communications of all users and violates the human right to privacy.

Content Moderation

An earlier version of the Online Safety Bill enabled the U.K. government to directly silence user speech and imprison those who publish messages that it doesn’t like. It also empowered Ofcom to levy heavy fines or even block access to sites that offend people. We were happy to see this clause removed from the bill in 2022. But a lot of problems with the OSA remain. Our submission on illegal harms affirmed the importance of ensuring that users have greater control over what content they see and interact with, are equipped with knowledge about how various controls operate and how they can use them to their advantage, and have the right to anonymity and pseudonymity online.

Moderation mechanisms must not interfere with users’ freedom of expression rights, and moderators should receive ample training and materials to ensure cultural and linguistic competence in content moderation. In cases where time-related pressure is placed on moderators to make determinations, companies often remove more than necessary to avoid potential liability, and are incentivized to use automated technologies for content removal and upload filters. These are notoriously inaccurate and prone to overblocking legitimate material. Moreover, the moderation of terrorism-related content is prone to error, and any new mechanism, such as hash matching or URL detection, must be subject to expert oversight.

Next Steps

Throughout this consultation period, EFF will continue contributing to and monitoring Ofcom’s drafting of the regulation. And we will continue to hold the U.K. government accountable to the international and European human rights protections to which it is a signatory.

Read EFF's full submission to Ofcom

Four Voices You Should Hear this International Women’s Day

Around the globe, freedom of expression varies wildly in definition, scope, and level of access. The impact of the digital age on perceptions and censorship of speech has been felt across the political spectrum on a worldwide scale. In the debate over what counts as free expression and how it should work in practice, we often lose sight of how different forms of censorship can have a negative impact on different communities, and especially marginalized or vulnerable ones. This International Women’s Day, spend some time with four stories of hope and inspiration that teach us how to reflect on the past to build a better future.

1. Podcast Episode: Safer Sex Work Makes a Safer Internet

An internet that is safe for sex workers is an internet that is safer for everyone. Though the effects of stigmatization and criminalization run deep, the sex worker community exemplifies how technology can help people reduce harm, share support, and offer experienced analysis to protect each other. Public interest technology lawyer Kendra Albert and sex worker, activist, and researcher Danielle Blunt have been fighting for sex workers’ online rights for years and say that holding online platforms legally responsible for user speech can lead to censorship that hurts us all. They join EFF’s Cindy Cohn and Jason Kelley in this podcast to talk about protecting all of our free speech rights.

2. Speaking Freely: Sandra Ordoñez

Sandra (Sandy) Ordoñez is dedicated to protecting women being harassed online. Sandra is an experienced community engagement specialist, a proud NYC Latina resident of Sunset Park in Brooklyn, and a recipient of Fundación Carolina’s Hispanic Leadership Award. She is also a long-time diversity and inclusion advocate, with extensive experience incubating and creating FLOSS and Internet Freedom community tools. In this interview with EFF’s Jillian C. York, Sandra discusses free speech and how communities that are often the most directly affected are the last consulted.

3. Story: Coded Resistance, the Comic!

From the days of chattel slavery until the modern Black Lives Matter movement, Black communities have developed innovative ways to fight back against oppression. EFF's Director of Engineering, Alexis Hancock, documented this important history of codes, ciphers, underground telecommunications and dance in a blog post that became one of our favorite articles of 2021. In collaboration with The Nib and illustrator Chelsea Saunders, "Coded Resistance" was adapted into comic form to further explore these stories, from the coded songs of Harriet Tubman to Darnella Frazier recording the murder of George Floyd.

4. Speaking Freely: Evan Greer

Evan Greer is many things: a musician, an activist for LGBTQ issues, the Deputy Director of Fight for the Future, and a true believer in the free and open internet. In this interview, EFF’s Jillian C. York spoke with Evan about the state of free expression, and what we should be doing to protect the internet for future activism. Among the many topics discussed was how policies that promote censorship—no matter how well-intentioned—have historically benefited the powerful and harmed vulnerable or marginalized communities. Evan talks about what we as free expression activists should do to get at that tension and find solutions that work for everyone in society.

This blog is part of our International Women’s Day series. Read other articles about the fight for gender justice and equitable digital rights for all.

  1. Four Reasons to Protect the Internet this International Women’s Day
  2. Four Infosec Tools for Resistance this International Women’s Day
  3. Four Actions You Can Take To Protect Digital Rights this International Women’s Day

Four Actions You Can Take To Protect Digital Rights this International Women’s Day

This International Women’s Day, defend free speech, fight surveillance, and support innovation by calling on our elected politicians and private companies to uphold our most fundamental rights—both online and offline.

1. Pass the “My Body, My Data” Act

Privacy fears should never stand in the way of healthcare. That's why this common-sense federal bill, sponsored by U.S. Rep. Sara Jacobs, would require businesses and non-governmental organizations to act responsibly with personal information concerning reproductive health care. Specifically, it restricts them from collecting, using, retaining, or disclosing reproductive health information that isn't essential to providing the service someone asks them for. The protected information includes data related to pregnancy, menstruation, surgery, termination of pregnancy, contraception, basal body temperature, or diagnoses. The bill would protect people who, for example, use fertility or period-tracking apps or are seeking information about reproductive health services. It also lets people take on companies that violate their privacy with a strong private right of action.

2. Ban Government Use of Face Recognition

Study after study shows that facial recognition algorithms are not always reliable, and that error rates spike significantly when involving faces of folks of color, especially Black women, as well as trans and nonbinary people. Because of face recognition errors, a Black woman, Porcha Woodruff, was wrongfully arrested, and another, Lamya Robinson, was wrongfully kicked out of a roller rink.

Yet this technology is widely used by law enforcement for identifying suspects in criminal investigations, including to disparately surveil people of color. At the local, state, and federal level, people across the country are urging politicians to ban the government’s use of face surveillance because it is inherently invasive, discriminatory, and dangerous. Many U.S. cities have done so, including San Francisco and Boston. Now is our chance to end the federal government’s use of this spying technology. 

3. Tell Congress: Don’t Outlaw Encrypted Apps

Advocates of women's equality often face surveillance and repression from powerful interests. That's why they need strong end-to-end encryption. But if the so-called “STOP CSAM Act” passes, it would undermine digital security for all internet users, impacting private messaging and email app providers, social media platforms, cloud storage providers, and many other internet intermediaries and online services. Free speech for women’s rights advocates would also be at risk. STOP CSAM would also create a carveout in Section 230, the law that protects our online speech, exposing platforms to civil lawsuits for merely hosting a platform where part of the illegal conduct occurred. Tell Congress: don't pass this law that would undermine security and free speech online, two critical elements for fighting for equality for all genders.  

4. Tell Facebook: Stop Silencing Palestine

Since Hamas’ attack on Israel on October 7, Meta’s biased moderation tools and practices, as well as its policies on violence and incitement and on dangerous organizations and individuals (DOI), have led to Palestinian content and accounts being removed and banned at an unprecedented scale. As Palestinians and their supporters have taken to social platforms to share images and posts about the situation in the Gaza strip, some have noticed their content suddenly disappear, or had their posts flagged for breaches of the platforms’ terms of use. In some cases, their accounts have been suspended, and in others, features such as liking and commenting have been restricted.

This has an exacerbated impact on the most at-risk groups in Gaza, such as those who are pregnant or need reproductive healthcare support, because sharing information online is both an avenue for communicating the reality on the ground to the world and a way of sharing information with those who need it most.

This blog is part of our International Women’s Day series. Read other articles about the fight for gender justice and equitable digital rights for all.

  1. Four Reasons to Protect the Internet this International Women’s Day
  2. Four Infosec Tools for Resistance this International Women’s Day
  3. Four Voices You Should Hear this International Women’s Day

Four Infosec Tools for Resistance this International Women’s Day 

While online violence is alarmingly common globally, women are often more likely to be the target of mass online attacks, nonconsensual leaks of sensitive information and content, and other forms of online violence. 

This International Women’s Day, visit EFF’s Surveillance Self-Defense (SSD) to learn how to defend yourself and your friends from surveillance. In addition to tutorials for installing and using security-friendly software, SSD walks you through concepts like making a security plan, the importance of strong passwords, and protecting metadata.

1. Make Your Own Security Plan

This IWD, learn what a security plan looks like and how you can build one. Trying to protect your online data—like pictures, private messages, or documents—from everything all the time is impractical and exhausting. But, have no fear! Security is a process, and through thoughtful planning, you can put together a plan that’s best for you. Security isn’t just about the tools you use or the software you download. It begins with understanding the unique threats you face and how you can counter those threats. 

2. Protect Yourself on Social Networks

Depending on your circumstances, you may need to protect yourself against the social network itself, against other users of the site, or both. Social networks are among the most popular websites on the internet. Facebook, TikTok, and Instagram each have over a billion users. Social networks were generally built on the idea of sharing posts, photographs, and personal information. They have also become forums for organizing and speaking. Any of these activities can rely on privacy and pseudonymity. Visit our SSD guide to learn how to protect yourself.

3. Tips for Attending Protests

Keep yourself, your devices, and your community safe while you make your voice heard. Now, more than ever, people must be able to hold those in power accountable and inspire others through the act of protest. Protecting your electronic devices and digital assets before, during, and after a protest is vital to keeping yourself and your information safe, as well as getting your message out. Theft, damage, confiscation, or forced deletion of media can disrupt your ability to publish your experiences, and those engaging in protest may be subject to search or arrest, or have their movements and associations surveilled. 

4. Communicate Securely with Signal or WhatsApp

Everything you say in a chat app should be private, viewable by only you and the person you're talking with. But that's not how all chats or DMs work. Most of those communication tools aren't end-to-end encrypted, and that means that the company that runs the software could view your chats, or hand over transcripts to law enforcement. That's why it's best to use a chat app like Signal any time you can. Signal uses end-to-end encryption, which means that nobody, not even Signal, can see the contents of your chats. Of course, you can't necessarily force everyone you know to use the communication tool of your choice, but thankfully other popular tools, like Apple's Messages, WhatsApp and, more recently, Facebook's Messenger, all use end-to-end encryption too, as long as you're communicating with others on those same platforms. The more people who use these tools, even for innocuous conversations, the better.
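The core idea behind end-to-end encryption is that the two chat participants agree on a shared secret key without ever sending that key through the server, so the server only ever relays ciphertext it cannot read. The toy sketch below illustrates that idea with a Diffie-Hellman-style exchange and a trivial XOR "cipher." It is purely illustrative and is NOT real cryptography (Signal's actual protocol uses X3DH key agreement and the Double Ratchet, with vetted ciphers); every name and parameter here is made up for the demonstration.

```python
import hashlib
import secrets

# Illustrative public parameters (real protocols use standardized groups
# or elliptic curves; this small setup is NOT secure).
P = 2**127 - 1  # a prime modulus
G = 5           # a generator

# Each party generates a private key and derives a public key.
alice_priv = secrets.randbelow(P - 2) + 1
bob_priv = secrets.randbelow(P - 2) + 1
alice_pub = pow(G, alice_priv, P)
bob_pub = pow(G, bob_priv, P)

# Only the PUBLIC keys travel through the server. Each side combines the
# other's public key with its own private key and gets the same secret.
alice_shared = pow(bob_pub, alice_priv, P)
bob_shared = pow(alice_pub, bob_priv, P)
assert alice_shared == bob_shared  # server never saw either private key

# Derive a symmetric key from the shared secret.
key = hashlib.sha256(str(alice_shared).encode()).digest()

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy keystream cipher: XOR is its own inverse, so the same function
    # both encrypts and decrypts. Never use this in practice.
    stream = (key * (len(data) // len(key) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))

ciphertext = xor_cipher(b"meet at noon", key)  # what the server relays
plaintext = xor_cipher(ciphertext, key)        # what the recipient recovers
print(plaintext)  # b'meet at noon'
```

Because the server only ever handles `alice_pub`, `bob_pub`, and `ciphertext`, it has no way to recover the message, which is the property that lets Signal truthfully say it cannot read your chats.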

On International Women’s Day and every day, stay safe out there! Surveillance self-defense can help.

This blog is part of our International Women’s Day series. Read other articles about the fight for gender justice and equitable digital rights for all.

  1. Four Reasons to Protect the Internet this International Women’s Day
  2. Four Voices You Should Hear this International Women’s Day
  3. Four Actions You Can Take To Protect Digital Rights this International Women’s Day

Four Reasons to Protect the Internet this International Women’s Day

Today is International Women’s Day, a day celebrating the achievements of women globally but also a day marking a call to action for accelerating equality and improving the lives of women the world over. 

The internet is a vital tool for women everywhere—provided they have access and are able to use it freely. Here are four reasons why we’re working to protect the free and open internet for women and everyone.

1. The Fight For Reproductive Privacy and Information Access Is Not Over

Data privacy, free expression, and freedom from surveillance intersect with the broader fight for reproductive justice and safe access to abortion. Like so many other aspects of managing our healthcare, these issues are fundamentally tied to our digital lives. With the decision in Dobbs v. Jackson overturning the protections that Roe v. Wade offered for people seeking abortion healthcare in the United States, data that was benign before is now potentially criminal evidence. This expanded threat to digital rights is especially dangerous for BIPOC, lower-income, immigrant, LGBTQ+ people and other traditionally marginalized communities, and the healthcare providers serving these communities. The overturning of Roe created many new dangers for people seeking healthcare. EFF is working hard to protect your rights in two main areas: 1) your data privacy and security, and 2) your online right to free speech.

2. Governments Continue to Cut Internet Access to Quell Political Dissidence   

The internet is an essential service that enables people to build and create communities, shed light on injustices, and acquire vital knowledge that might not otherwise be available. Governments are very aware of their power to cut off access to this crucial lifeline, and frequently undertake targeted initiatives to shut down civilian access to the internet. In Iran, people have suffered internet and social media blackouts on and off for nearly two years, following the activist movement that rose up after the death of Mahsa Amini, a woman murdered in police custody for refusing to wear a hijab. The movement gained global attention, and in response, the Iranian government rushed to control visibility of the injustice. Social media is banned in Iran, and intermittent shutdowns of internet access for the entire population have cost the country millions, all in an effort to control the flow of information and quell political dissidence.

3. People Need to Know When They Are Being Stalked Through Tracking Tech 

At EFF, we’ve been sounding the alarm about the way physical trackers like AirTags and Tiles can be slipped into a target’s bag or car, allowing stalkers and abusers unprecedented access to a person’s location without their knowledge. We’ve also been calling attention to stalkerware, commercially available apps that are designed to be covertly installed on another person’s device for the purpose of monitoring their activity without their knowledge or consent. This is a huge threat to survivors of domestic abuse, as stalkers can track their locations and access sensitive information such as passwords and documents. For example, Imminent Monitor, once installed on a victim’s computer, could turn on the webcam and microphone, let perpetrators view documents, photographs, and other files, and record every keystroke entered. Everyone involved in these industries has a responsibility to build safeguards for people.

4. LGBTQ+ Rights Online Are Being Attacked 

An increase in anti-LGBTQ+ intolerance is harming individuals and communities both online and offline across the globe. Several countries are introducing explicitly anti-LGBTQ+ initiatives to restrict freedom of expression and privacy, which is in turn fuelling offline intolerance against LGBTQ+ people. Across the United States, a growing number of states have prohibited transgender youths from obtaining gender-affirming health care, and some have restricted access for transgender adults. That’s why we’ve worked to pass data sanctuary laws in pro-LGBTQ+ states to shield health records from disclosure to anti-LGBTQ+ states.

The problem is global. In Jordan, the Cybercrime Law of 2023 restricts encryption and anonymity in digital communications. And in Ghana, Parliament just voted to pass the country’s draconian Family Values Bill, which introduces prison sentences for those who partake in LGBTQ+ sexual acts, as well as those who promote the rights of gay, lesbian, or other non-conventional sexual or gender identities. EFF is working to expose and resist laws like these, and we hope you’ll join us!

This blog is part of our International Women’s Day series. Read other articles about the fight for gender justice and equitable digital rights for all.

  1. Four Infosec Tools for Resistance this International Women’s Day
  2. Four Voices You Should Hear this International Women’s Day
  3. Four Actions You Can Take To Protect Digital Rights this International Women’s Day

Ghana's President Must Refuse to Sign the Anti-LGBTQ+ Bill

After three years of political discussions, MPs in Ghana's Parliament voted to pass the country’s draconian Promotion of Proper Human Sexual Rights and Ghanaian Family Values Bill on February 28th. The bill now heads to Ghana’s President Nana Akufo-Addo to be signed into law. 

President Nana Akufo-Addo must protect the human rights of all people in Ghana and refuse to provide assent to the bill.

This anti-LGBTQ+ legislation introduces prison sentences for those who partake in LGBTQ+ sexual acts, as well as those who promote the rights of gay, lesbian or other non-conventional sexual or gender identities. This would effectively ban all speech and activity on and offline that even remotely supports LGBTQ+ rights.

Ghanaian authorities could probe the social media accounts of anyone applying for a visa for pro-LGBTQ+ speech or create lists of pro-LGBTQ+ supporters to be arrested upon entry. They could also require online platforms to suppress content about LGBTQ+ issues, regardless of where it was created. 

Enforcing the bill would criminalize the activity of many major cultural and commercial institutions. If President Akufo-Addo does approve it, musicians, corporations, and other entities that openly support LGBTQ+ rights would be banned in Ghana.

Despite this direct threat to online freedom of expression, tech giants have yet to speak out publicly against the LGBTQ+ persecution in Ghana. Twitter opened its first African office in Accra in April 2021, citing Ghana as “a supporter of free speech, online freedom, and the Open Internet.” Adaora Ikenze, Facebook’s head of Public Policy in Anglophone West Africa, has said: “We want the millions of people in Ghana and around the world who use our services to be able to connect, share and express themselves freely and safely, and will continue to protect their ability to do that on our platforms.” Both companies have essentially dodged the question.

For many countries across Africa, and indeed the world, the codification of anti-LGBTQ+ discourses and beliefs can be traced back to colonial rule, and a recent CNN investigation from December 2023 found alleged links between the drafting of homophobic laws in Africa and a US nonprofit. The group denied those links, despite having hosted a political conference in Accra shortly before an early version of this bill was drafted.

Regardless of its origin, the past three years of political and social discussion have contributed to a decimation of LGBTQ+ rights in Ghana, and the decision by MPs in Ghana’s Parliament to pass this bill creates severe impacts not just for LGBTQ+ people in Ghana, but for the very principle of free expression online and off. President Nana Akufo-Addo must reject it.

EFF and Access Now's Submission to U.N. Expert on Anti-LGBTQ+ Repression 

As part of the United Nations (U.N.) Independent Expert on protection against violence and discrimination based on sexual orientation and gender identity (IE SOGI) report to the U.N. Human Rights Council, EFF and Access Now have submitted information addressing digital rights and SOGI issues across the globe. 

The submission addresses the trends, challenges, and problems that people and civil society organizations face based on their real and perceived sexual orientation, gender identity, and gender expression. Our examples underscore the extensive impact of such legislation on the LGBTQ+ community, and the urgent need for legislative reform at the domestic level.

Read the full submission here.
