
Betting on Your Digital Rights: EFF Benefit Poker Tournament at DEF CON 32

Hacker Summer Camp is almost here... and with it comes the Third Annual EFF Benefit Poker Tournament at DEF CON 32 hosted by security expert Tarah Wheeler.

Please join us at the same place and time as last year: Friday, August 9th, at high noon at the Horseshoe Poker Room. The fees haven’t changed; it’s still $250 to register plus $100 the day of the tournament with unlimited rebuys.

Tarah Wheeler—EFF board member and resident poker expert—has been working hard on the tournament since last year! Not only has she created a custom EFF playing card deck as a gift for each player, but she also recruited Cory Doctorow to emcee this year. Be sure to register today and see Cory in action!

Did we mention there will be Celebrity Bounties? Knock out Jake “MalwareJake” Williams, Deviant Ollam, or Runa Sandvik and get neat EFF swag plus the respect of your peers! As always, knock out Tarah’s dad, Mike, and she will donate $250 to the EFF in your name!

Register Now!

Find Full Event Details and Registration



Anyone who pre-registers and plays will receive a custom EFF playing card deck (if you don’t show up to the tournament by 30 minutes after the start time your deck may be given away).

The winner will receive a treasure chest curated from Tarah’s own collection. The chest is filled with real gems, including emeralds, black pearls, amethysts, diamonds, and more! The winner will also receive our now traditional Jellybean Trophy! 

Have you played some poker before but could use a refresher on rules, strategy, table behavior, and general Vegas slang at the poker table? Tarah will run a poker clinic from 11 am-11:45 am just before the tournament. Even if you know poker pretty well, come a bit early and help out. Just show up and donate anything to EFF. Make it over $50 and Tarah will teach you chip riffling, the three biggest tells, and how to stare blankly and intimidatingly through someone’s soul while they’re trying to decide if you’re bluffing.

Register today and reserve your deck. Be sure to invite your friends to join you!

 

How the FTC Can Make the Internet Safe for Chatbots

No points for guessing the subject of the first question the Wall Street Journal asked FTC Chair Lina Khan: of course it was about AI.

Between the hype, the lawmaking, the saber-rattling, the trillion-dollar market caps, and the predictions of impending civilizational collapse, the AI discussion has become as inevitable, as pro forma, and as content-free as asking how someone is or wishing them a nice day.

But Chair Khan didn’t treat the question as an excuse to launch into the policymaker’s verbal equivalent of a compulsory gymnastics exhibition.

Instead, she injected something genuinely new and exciting into the discussion, by proposing that the labor and privacy controversies in AI could be tackled using her existing regulatory authority under Section 5 of the Federal Trade Commission Act (FTCA5).

Section 5 gives the FTC a broad mandate to prevent “unfair methods of competition” and “unfair or deceptive acts or practices.” Chair Khan has made extensive use of these powers during her first term as chair, for example, by banning noncompetes and taking action on online privacy.

At EFF, we share many of the widespread concerns over privacy, fairness, and labor rights raised by AI. We think that copyright law is the wrong tool to address those concerns, both because of what copyright law does and doesn’t permit, and because establishing copyright as the framework for AI model-training will not address the real privacy and labor issues posed by generative AI. We think that privacy problems should be addressed with privacy policy and that labor issues should be addressed with labor policy.

That’s what made Chair Khan’s remarks so exciting to us: in proposing that Section 5 could be used to regulate AI training, Chair Khan is opening the door to addressing these issues head on. The FTC Act gives the FTC the power to craft specific, fit-for-purpose rules and guidance that can protect Americans’ consumer, privacy, labor and other rights.

Take the problem of AI “hallucinations,” the industry’s term for the seemingly irrepressible propensity of chatbots to respond to questions with incorrect answers, delivered with the blithe confidence of a “bullshitter.”

The question of whether chatbots can be taught not to “hallucinate” is far from settled. Some industry leaders think the problem can never be solved, even as startups publish (technically impressive-sounding, but non-peer reviewed) papers claiming to have solved the problem.

Whether or not the problem can be solved, it’s clear that for the commercial chatbot offerings on the market today, “hallucinations” come with the package. Or, put more simply: today’s chatbots lie, and no one can stop them.

That’s a problem, because companies are already replacing human customer service workers with chatbots that lie to their customers, causing those customers real harm. It’s hard enough to attend your grandmother’s funeral without the added pain of your airline’s chatbot lying to you about the bereavement fare.

Here’s where the FTC’s powers can help the American public:

The FTC should issue guidance declaring that any company that deploys a chatbot that lies to a customer has engaged in an “unfair or deceptive practice” that violates Section 5 of the Federal Trade Commission Act, with all the fines and other penalties that entails.

After all, if a company doesn’t get in trouble when its chatbot lies to a customer, why would they pay extra for a chatbot that has been designed not to lie? And if there’s no reason to pay extra for a chatbot that doesn’t lie, why would anyone invest in solving the “hallucination” problem?

Guidance that promises to punish companies that replace their human workers with lying chatbots will give new companies that invent truthful chatbots an advantage in the marketplace. If you can prove that your chatbot won’t lie to your customers’ users, you can also get an insurance company to write you a policy that will allow you to indemnify your customers against claims arising from your chatbot’s output.

But until someone does figure out how to make a “hallucination”-free chatbot, guidance promising serious consequences for chatbots that deceive users with “hallucinated” lies will push companies to limit the use of chatbots to low-stakes environments, leaving human workers to do their jobs.

The FTC has already started down this path. Earlier this month, FTC Senior Staff Attorney Michael Atleson published an excellent backgrounder laying out some of the agency’s thinking on how companies should present their chatbots to users.

We think that more formal guidance about the consequences for companies that save a buck by putting untrustworthy chatbots on the front line will do a lot to protect the public from irresponsible business decisions – especially if that guidance is backed up with muscular enforcement.

Mississippi Can’t Wall Off Everyone’s Social Media Access to Protect Children

In what is becoming a recurring theme, Mississippi became the latest state to pass a law requiring social media services to verify users’ ages and block lawful speech to young people. Once again, EFF explained to the court why the law is unconstitutional.

Mississippi’s law (House Bill 1126) requires social media services to verify the ages of all users, to obtain parental consent for any minor users, and to block minor users from being exposed to “harmful” material. NetChoice, the trade association that represents some of the largest social media services, filed suit and sought to block the law from going into effect in July.

EFF submitted a friend-of-the-court brief in support of NetChoice’s First Amendment challenge to the statute to explain how invasive and chilling online age verification mandates can be. “Such restrictions frustrate everyone’s ability to use one of the most expressive mediums of our time—the vast democratic forums of the internet that we all use to create art, share photos with loved ones, organize for political change, and speak,” the brief argues.

Online age verification laws are fundamentally different and more burdensome than laws requiring adults to show their identification in physical spaces, EFF’s brief argues:

Unlike in-person age-gates, online age restrictions like Mississippi’s require all users to submit, not just momentarily display, data-rich government-issued identification or other proof-of-age, and in some commercially available methods, a photo.

The differences in online age verification create significant burdens on adults’ ability to access lawful speech online. Most troublingly, age verification requirements can completely block millions of U.S. adults who don’t have government-issued identification or lack IDs that would satisfy Mississippi’s verification requirements, such as by not having an up-to-date address or current legal name.

“Certain demographics are also disproportionately burdened when government-issued ID is used in age verification,” EFF’s brief argues. “Black Americans and Hispanic Americans are disproportionately less likely to have current and up-to-date driver’s licenses. And 30% of Black Americans do not have a driver’s license at all.”

Moreover, relying on financial and credit records to verify adults’ identities can also exclude large numbers of adults. As EFF’s brief recounts, some 20 percent of U.S. households do not have a credit card and 35 percent do not own a home.

The data collection required by age-verification systems can also deter people from using social media entirely, either because they want to remain anonymous online or are concerned about the privacy and security of any data they must turn over. HB 1126 thus burdens people’s First Amendment rights to anonymity and their right to privacy.

Regarding HB 1126’s threat to anonymity, EFF’s brief argued:

The threats to anonymity are real and multilayered. All online data is transmitted through a host of intermediaries. This means that when a website shares identifying information with its third-party age-verification vendor, that data is not only transmitted between the website and the vendor, but also between a series of third parties. Under the plain language of HB 1126, those intermediaries are not required to delete users’ identifying data and, unlike the digital service providers themselves, they are also not restricted from sharing, disclosing, or selling that sensitive data.

Regarding data privacy and security, EFF’s brief argued:

The personal data that HB 1126 requires platforms to collect or purchase is extremely sensitive and often immutable. By exposing this information to a vast web of websites and intermediaries, third-party trackers, and data brokers, HB 1126 poses the same concerns to privacy-concerned internet users as it does to the anonymity-minded users.

Finally, EFF’s brief argues that although HB 1126’s data privacy protections for children are laudable, they cannot be implemented without the state first demanding that every user verify their age so that services can apply those protections to children. As a result, the state cannot enforce those provisions.

EFF’s brief notes, however, that should Mississippi pass “comprehensive data privacy protections, not attached to content-based, speech-infringing, or privacy-undermining schemes,” that law would likely be constitutional.

EFF remains ready to support Mississippi’s efforts to protect all its residents’ privacy. HB 1126, however, unfortunately seeks to provide only children with the privacy protections we all desperately need, while at the same time restricting both adults’ and children’s access to lawful speech on social media.

Victory! Grand Jury Finds Sacramento Cops Illegally Shared Driver Data

For the past year, EFF has been sounding the alarm about police in California illegally sharing drivers' location data with anti-abortion states, putting abortion seekers and providers at risk of prosecution. We thus applaud the Sacramento County Grand Jury for hearing this call and investigating two police agencies that had been unlawfully sharing this data out-of-state.

The grand jury, a body of 19 residents charged with overseeing local government including law enforcement, released their investigative report on Wednesday. In it, they affirmed that the Sacramento County Sheriff's Office and Sacramento Police Department violated state law and "unreasonably risked" aiding the potential prosecution of "women who traveled to California to seek or receive healthcare services."

In May 2023, EFF, along with the American Civil Liberties Union of Northern California and the American Civil Liberties Union of Southern California, sent letters to 71 California police agencies demanding that they stop sharing automated license plate reader (ALPR) data with law enforcement agencies in other states. This sensitive location information can reveal where individuals work, live, worship, and seek medical care—including reproductive health services. Since the Supreme Court overturned Roe v. Wade with its decision in Dobbs v. Jackson Women’s Health Organization, ALPR data has posed particular risks to those who seek or assist abortions that have been criminalized in their home states.

Since 2016, California law has prohibited sharing ALPR data with out-of-state or federal law enforcement agencies. Despite this, dozens of rogue California police agencies continued sharing this information with other states, even after the state's attorney general issued legal guidance in October "reminding" them to stop.

In Sacramento County, both the Sacramento County Sheriff's Office and the Sacramento Police Department have dismissed calls for them to start obeying the law. Last year, the Sheriff's Office even claimed on Twitter that EFF's concerns were part of "a broader agenda to promote lawlessness and prevent criminals from being held accountable." That agency, at least, seems to have had a change of heart: The Sacramento County Grand Jury reports that, after they began investigating police practices, the Sacramento County Sheriff's Office agreed to stop sharing ALPR data with out-of-state law enforcement agencies.

The Sacramento Police Department, however, has continued to share ALPR data with out-of-state agencies. In their report, the grand jury calls for the department to comply with the California Attorney General's legal guidance. The grand jury also recommends that all Sacramento law enforcement agencies make their ALPR policies available to the public in compliance with the law.

As the grand jury's report notes, EFF and California's ACLU affiliates "were among the first" organizations to call attention to police in the state illegally sharing ALPR data. While we are glad that many police departments have since complied with our demands that they stop this practice, we remain committed to bringing attention and pressure to agencies, like the Sacramento Police Department, that have not. In January, for instance, EFF and the ACLU sent a letter urging the California Attorney General to enforce the state's ALPR laws.

For nearly a decade, EFF has been investigating and raising the alarm about the illegal mass-sharing of ALPR data by California law enforcement agencies. The grand jury's report details just the latest in a series of episodes in which Sacramento agencies violated the law with ALPR. In December 2018, the Sacramento County Department of Human Assistance terminated its program after public pressure resulting from EFF's revelation that the agency was accessing ALPR data in violation of the law. The next year, EFF successfully lobbied the state legislature to order an audit of how four agencies, including the Sacramento County Sheriff's Office, use ALPR. The result was a damning report finding that the sheriff had fallen short of many of the basic requirements of state law.

Drone As First Responder Programs Are Swarming Across the United States

Law enforcement wants more drones, and we’ll probably see many more of them overhead as police departments seek to implement a popular project justifying the deployment of unmanned aerial vehicles (UAVs): the “drone as first responder” (DFR).

Police DFR programs involve a fleet of drones, which can range in number from four or five to hundreds. In response to 911 calls and other law enforcement calls for service, a camera-equipped drone is launched from a regular base (like the police station roof) to get to the incident first, giving responding officers a view of the scene before they arrive. In theory and in marketing materials, the advance view from the drone will help officers understand the situation more thoroughly before they get there, better preparing them for the scene and assisting them in things such as locating wanted or missing individuals more quickly. Police call this “situational awareness.”

In practice, law enforcement's desire to get “a view of the scene” becomes a justification for over-surveilling neighborhoods that produce more 911 calls and for collecting information on anyone who happens to be in the drone’s path. For example, a drone responding to a vandalism case may capture video footage of everyone it passes along the way. Also, drones are subject to the same mission-creep issues that already plague other police tools designed to record the public; what is pitched as a solution to violent crime can quickly become a tool for policing homelessness or low-level infractions that otherwise wouldn't merit police resources.

With their bird's-eye view, drones can observe individuals in previously private and constitutionally protected spaces, like their backyards and roofs, and even through home windows. And they can capture crowds of people, like protestors and other peaceful gatherers exercising their First Amendment rights. Drones can be equipped with cameras, thermal imaging, microphones, license plate readers, face recognition, mapping technology, cell-site simulators, weapons, and other payloads. The proliferation of these devices enables state surveillance even for routine operations and in response to innocuous calls — situations unrelated to the terrorism and violent crime concerns originally used to justify their adoption.

Drones are also increasingly tied into other forms of surveillance. More departments — including those in Las Vegas, Louisville, and New York City — are toying with the idea of dispatching drones in response to ShotSpotter gunshot detection alerts, which are known to generate many false positives. This could lead to drone surveillance of communities that happen to have a higher concentration of ShotSpotter microphones or other acoustic gunshot detection technology. Recently revealed data shows that a disproportionate number of these gunshot detection sensors are located in Black communities in the United States. Artificial intelligence is also being added to drone data collection; connecting what's gathered from the sky to what has been gathered on the street and through other methods is a trending part of the police panopticon plan.

A CVPD official explains the DFR program to EFF staff in 2022. Credit: Jason Kelley (EFF)

DFR programs have been growing in popularity since the Chula Vista Police Department launched the first one in 2018. Now there are a few dozen departments with known DFR programs among the approximately 1,500 police departments known to have any drone program at all, according to EFF’s Atlas of Surveillance, the most comprehensive dataset of this kind of information. The Federal Aviation Administration (FAA) regulates the use of drones and is currently mandated to prepare new regulations for how they can be operated beyond the operator’s visual line of sight (BVLOS), the kind of long-distance flight that currently requires a special waiver. All the while, police departments and the companies that sell drones are eager to move forward with more DFR initiatives.

Agency State
Arapahoe County Sheriff's Office CO
Beverly Hills Police Department CA
Brookhaven Police Department GA
Burbank Police Department CA
Chula Vista Police Department CA
Clovis Police Department CA
Commerce City Police Department CO
Daytona Beach Police Department FL
Denver Police Department CO
Elk Grove Police Department CA
Flagler County Sheriff's Office FL
Fort Wayne Police Department IN
Fremont Police Department CA
Gresham Police Department OR
Hawthorne Police Department CA
Hemet Police Department CA
Irvine Police Department CA
Montgomery County Police Department MD
New York City Police Department NY
Oklahoma City Police Department OK
Oswego Police Department NY
Redondo Beach Police Department CA
Santa Monica Police Department CA
West Palm Beach Police Department FL
Yonkers Police Department NY
Schenectady Police Department NY
Queen Creek Police Department AZ
Greenwood Village Police Department CO

Transparency around the acquisition and use of drones will be important to the effort to protect civilians from government and police overreach and abuse as agencies commission more of these flying machines. A recent Wired investigation raised concerns about Chula Vista’s program, finding that roughly one in 10 drone flights lacked a stated purpose, and for nearly 500 of its recent flights, the reason for deployment was an “unknown problem.” That same investigation also found that each average drone flight exposes nearly 5,000 city residents to enhanced surveillance, primarily in predominantly Black and brown neighborhoods.

“For residents we spoke to,” Wired wrote, “the discrepancy raises serious concerns about the accuracy and reliability of the department's transparency efforts—and experts say the use of the drones is a classic case of self-perpetuating mission creep, with their existence both justifying and necessitating their use.”

Chula Vista's "Drone-Related Activity Dashboard" indicates that more than 20 percent of drone flights respond to welfare checks or mental health crises, while only roughly 6 percent respond to assault calls. Chula Vista Police claim that the DFR program lets them avoid potentially dangerous or deadly interactions with members of the public, with drone responses allowing the department to avoid sending a patrol unit in response to 4,303 calls. However, this theory and the supporting data need to be meaningfully evaluated by independent researchers.

This type of analysis is not possible without transparency around the program in Chula Vista, which, to its credit, publishes regular details like the location and reason for each of its deployments. Still, that department has also tried to prevent the public from learning about its program, rejecting California Public Records Act (CPRA) requests for drone footage. This led to a lawsuit in which EFF submitted an amicus brief, and ultimately the California Court of Appeal correctly found that drone footage is not exempt from CPRA requests.

While some might take for granted that the government is not allowed to conduct surveillance — intentional, incidental, or otherwise — on you in spaces like your fenced-in backyard, this is not always the case. It took a lawsuit and a recent Alaska Supreme Court decision to ensure that police in that state must obtain a warrant for drone surveillance in otherwise private areas. While some states do require a warrant to use a drone to violate the privacy of a person’s airspace, Alaska, California, Hawaii, and Vermont are currently the only states where courts have held that warrantless aerial surveillance violates residents’ constitutional protections against unreasonable search and seizure absent specific exceptions.

Clear policies around the use of drones are a valuable part of holding police departments accountable for their drone use. These policies must include rules around why a drone is deployed and guardrails on the kind of footage that is collected, the length of time it is retained, and with whom it can be shared.

A few state legislatures have taken steps toward providing public accountability over growing drone use:

  • In Minnesota, law enforcement agencies are required to report annually on their drone programs' costs and the number of times they deployed drones, including how many of those deployments occurred without a warrant.
  • In Illinois, the Drones as First Responders Act went into effect in June 2023, requiring agencies to report whether they own drones; how many they own; the number of times the drones were deployed, along with the date, location, and reason for each deployment; and whether video was captured and retained from each deployment. Illinois agencies must also share a copy of their latest use policies; drone footage is generally supposed to be deleted after 24 hours; and the use of face recognition technology is prohibited except in certain circumstances.
  • In California, AB 481 — which took effect in May 2022 with the aim of providing public oversight over military-grade police equipment — requires police departments to publicly share a regular inventory of the drones that they use. Under this law, police acquisition of drones and the policies governing their use require approval from local elected officials following an opportunity for public comment, giving communities an important chance to provide feedback.

DFR programs are just one way police are acquiring drones, but law enforcement and UAV manufacturers are interested in deploying drones in other ways, including as part of regular patrols and in response to high-speed vehicle pursuits. These uses also create the risk of law enforcement bypassing important safeguards. Reasonable protections for public privacy, like robust use policies, are not a barrier to public safety but a crucial part of ensuring just and constitutional policing.

Companies are eager to tap this growing market. Police technology company Axon — known for its Tasers and body-worn cameras — recently acquired drone company Dedrone, specifically citing that company’s efforts to push DFR programs as one reason for the acquisition. Axon has since established a partnership with Skydio to expand its DFR sales.

It’s clear that as the skies open up for more drone usage, law enforcement will push to procure more of these flying surveillance tools. But police and lawmakers must exercise far more skepticism over what may ultimately prove to be a flashy trend that wastes resources, infringes on people's rights, and results in unforeseen shifts in policing strategy. The public must be kept aware of how cops are coming for their privacy from above.

Government Has Extremely Heavy Burden to Justify TikTok Ban, EFF Tells Appeals Court

New Law Subject to Strictest Scrutiny Because It Imposes Prior Restraint, Directly Restricts Free Speech, and Singles Out One Platform for Prohibition, Brief Argues

SAN FRANCISCO — The federal ban on TikTok must be put under the finest judicial microscope to determine its constitutionality, the Electronic Frontier Foundation (EFF) and others argued in a friend-of-the-court brief filed Wednesday to the U.S. Court of Appeals for the D.C. Circuit. 

The amicus brief says the Court must review the Protecting Americans from Foreign Adversary Controlled Applications Act — passed by Congress and signed by President Biden in April — with the most demanding legal scrutiny because it imposes a prior restraint that would make it impossible for users to speak, access information, and associate through TikTok. It also directly restricts protected speech and association, and deliberately singles out a particular medium for a blanket prohibition. This demanding First Amendment test must be used even when the government asserts national security concerns. 

The Court should see this law for what it is: “a sweeping ban on free expression that triggers the most exacting scrutiny under the First Amendment,” the brief argues, adding it will be extremely difficult for the government to justify this total ban. 

Joining EFF in this amicus brief are the Freedom of the Press Foundation, TechFreedom, Media Law Resource Center, Center for Democracy and Technology, First Amendment Coalition, and Freedom to Read Foundation. 

TikTok hosts a wide universe of expressive content from musical performances and comedy to politics and current events, the brief notes, and with more than 150 million users in the United States and 1.6 billion users worldwide, the platform hosts enormous national and international communities that most U.S. users cannot readily reach elsewhere. It plays an especially important and outsized role for minority communities seeking to foster solidarity online and to highlight issues vital to them. 

“The First Amendment protects not only TikTok’s US users, but TikTok itself, which posts its own content and makes editorial decisions about what user content to carry and how to curate it for each individual user,” the brief argues.  

Congress’s content-based justifications for the ban make it clear that the government is targeting TikTok because it finds speech that Americans receive from it to be harmful, and simply invoking national security without clearly demonstrating a threat doesn’t overcome the ban’s unconstitutionality, the brief argues. 

“Millions of Americans use TikTok every day to share and receive ideas, information, opinions, and entertainment from other users around the world, and that lies squarely within the protections of the First Amendment,” EFF Civil Liberties Director David Greene said. “By barring all speech on the platform before it can happen, the law effects the kind of prior restraint that the Supreme Court has rejected for the past century as unconstitutional in all but the rarest cases.” 

For the brief: https://www.eff.org/document/06-26-2024-eff-et-al-amicus-brief-tiktok-v-garland

For EFF’s stance on TikTok bans: https://www.eff.org/deeplinks/2023/03/government-hasnt-justified-tiktok-ban 

Contact: David Greene, Civil Liberties Director

The Global Suppression of Online LGBTQ+ Speech Continues

A global increase in anti-LGBTQ+ intolerance is having a significant impact on digital rights. As we wrote last year, censorship of LGBTQ+ websites and online content is on the rise. For many LGBTQ+ individuals the world over, the internet can be a safer space for exploring identity, finding community, and seeking support. But from anti-LGBTQ+ bills that restrict free expression and privacy to content moderation decisions that disproportionately impact LGBTQ+ users, digital spaces that used to seem like safe havens are, for many, no longer so.

EFF's mission is to ensure that technology supports freedom, justice, and innovation for all people of the world, and that includes LGBTQ+ communities, which all too often face threats, censorship, and other risks when they go online. This Pride month—and the rest of the year—we’re highlighting some of those risks, and what we’re doing to help change online spaces for the better.

Worsening threats in the Americas

In the United States, where EFF is headquartered, recent gains in rights have been followed by an uptick in intolerance that has led to legislative efforts, mostly at the state level. In 2024 alone, 523 anti-LGBTQ+ bills have been proposed by state legislatures, many of which restrict freedom of expression. In addition to these bills, a drive in mostly conservative areas to ban books in school libraries—many of which contain LGBTQ themes—is creating an environment in which queer youth feel even more marginalized.

At the national level, an effort to protect children from online harms—the Kids Online Safety Act (KOSA)—risks alienating young people, particularly those from marginalized communities, by restricting their access to certain content on social media. EFF spoke with young people about KOSA, and found that many are concerned that they will lose access to help, education, friendship, and a sense of belonging that they have found online. At a time when many young people have just come out of several years of isolation during the pandemic and reliance on online communities for support, restricting their access could have devastating consequences.

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

Similarly, age-verification bills being put forth by state legislatures often seek to prevent access to material deemed harmful to minors. If passed, these measures would restrict access to vital content, including education and resources that LGBTQ+ youth without local support often rely upon. These bills often contain vague and subjective definitions of “harm” and are all too often another strategy in the broader attack on free expression that includes book bans, censorship of reproductive health information, and attacks on LGBTQ+ youth.

Moving south of the border, in much of South and Central America, legal progress has been made with respect to rights, but violence against LGBTQ+ people is particularly high, and that violence often has online elements to it. In the Caribbean, where a number of countries have strict anti-LGBTQ+ laws on the books, often stemming from the colonial era, online spaces can be risky, and those who express their identities in them often face bullying and doxxing, which can lead to physical harm.

In many other places throughout the world, the situation is even worse. While LGBTQ+ rights have progressed considerably over the past decade in a number of democracies, the sense of freedom and ease that these hard-won freedoms created for many are suffering serious setbacks. And in more authoritarian countries where the internet may have once been a lifeline, crackdowns on expression have coincided with increases in user growth and often explicitly target LGBTQ+ speech.

In Europe, anti-LGBTQ+ violence at a record high

In recent years, legislative efforts aimed at curtailing LGBTQ+ rights have gained momentum in several European countries, largely the result of a rise in right-wing populism and conservatism. In Hungary, for instance, the Orban government has enacted laws that restrict LGBTQ+ rights under the guise of protecting children. In 2021, the country passed a law banning the portrayal or promotion of LGBTQ+ content to minors. In response, the European Commission launched legal cases against Hungary—as well as some regions in Poland—over LGBTQ+ discrimination, with Commission President Ursula von der Leyen labeling the law as "a shame" and asserting that it clearly discriminates against people based on their sexual orientation, contravening the EU's core values of equality and human dignity​.

In Russia, the government has implemented severe restrictions on LGBTQ+ content online. A law initially passed in 2013 banning the promotion of “non-traditional sexual relations” among minors was expanded in 2022 to apply to individuals of all ages, further criminalizing LGBTQ+ content. The law prohibits the mention or display of LGBTQ+ relationships in advertising, books, media, films, and on online platforms, and has created a hostile online environment. Media outlets that break the law can be fined or shut down by the government, while foreigners who break the law can be expelled from the country. 

Among the first victims of the amended law were seven migrant sex workers—all trans women—from Central Asia who were fined and deported in 2023 after they published their profiles on a dating website. Also in 2023, six online streaming platforms were penalized for airing movies with LGBTQ-related scenes. The films included “Bridget Jones: The Edge of Reason”, “Green Book”, and the Italian film “Perfect Strangers.”

Across the continent, with anti-LGBTQ+ violence at a record high, queer communities are often the target of online threats. A 2022 report by the European Digital Media Observatory documented a significant increase in online disinformation campaigns targeting LGBTQ+ communities, which often frame them as threats to traditional family values. 

Across Africa, LGBTQ+ rights under threat

In 30 of the 54 countries on the African continent, homosexuality is prohibited. Nevertheless, there is a growing movement to decriminalize LGBTQ+ identities and push toward achieving greater rights and equality. As in many places, the internet often serves as a safer space for community and organizing, and has therefore become a target for governments seeking to crack down on LGBTQ+ people.

In Tanzania, for instance, where consensual same-sex acts are prohibited under the country’s colonial-era Penal Code, authorities have increased digital censorship of LGBTQ+ content, blocking websites and social media platforms that provide support and information to the LGBTQ+ community. This crackdown is making it increasingly difficult for people to find safe spaces online. As a result of these restrictions, many online groups used by the LGBTQ+ community for networking and support have been forced to disband, driving individuals to riskier public spaces to meet and socialize.

In other countries across the continent, officials are weaponizing legal systems to crack down on LGBTQ+ people and their expression. According to Access Now, a proposed law in Kenya, the Family Protection Bill, seeks to ban a variety of actions, including public displays of affection, engagement in activities that seek to change public opinion on LGBTQ+ issues, and the use of the internet, media, social media platforms, and electronic devices to “promote homosexuality.” Furthermore, the prohibited acts would fall under the country’s Computer Misuse and Cybercrimes Act of 2018, giving law enforcement the power to monitor and intercept private communications during investigations, as provided by Section 36 of the National Intelligence Service Act, 2012. 

A draconian law passed in Uganda in 2023, the Anti-Homosexuality Act, introduced capital punishment for certain acts, while allowing for life imprisonment for others. The law further imposes a 20-year prison sentence for people convicted of “promoting homosexuality,” which includes the publication of LGBTQ+ content, as well as “the use of electronic devices such as the internet, mobile phones or films for the purpose of homosexuality or promoting homosexuality.”

In Ghana, if passed, the anti-LGBTQ+ Promotion of Proper Human Sexual Rights and Ghanaian Family Values Bill would introduce prison sentences for those who engage in LGBTQ+ sexual acts as well as those who promote LGBTQ+ rights. As we’ve previously written, the bill would ban all speech and activity, on and offline, that even remotely supports LGBTQ+ rights. Though the bill passed through parliament in March, the president has said he won’t sign it until the country’s Supreme Court rules on its constitutionality.

And in Egypt and Tunisia, authorities have integrated technology into their policing of LGBTQ+ people, according to a 2023 Human Rights Watch report. In Tunisia, where homosexuality is punishable by up to three years in prison, online harassment and doxxing are common, threatening the safety of LGBTQ+ individuals. Human Rights Watch has documented cases in which social media users, including alleged police officers, have publicly harassed activists, resulting in offline harm.

Egyptian security forces often monitor online LGBTQ+ activity and have used social media platforms as well as Grindr to target and arrest individuals. Although same-sex relations are not explicitly banned by law in the country, authorities use various morality provisions to effectively criminalize homosexual relations. More recently, prosecutors have utilized cybercrime and online morality laws to pursue harsher sentences.

In Asia, cybercrime laws threaten expression

LGBTQ+ rights in Asia vary widely. While homosexual relations are legal in a majority of countries, they are strictly banned in twenty, and same-sex marriage is only legal in three—Taiwan, Nepal, and Thailand. Online threats are also varied, ranging from harassment and self-censorship to the censoring of LGBTQ+ content—such as in Indonesia, Iran, China, Saudi Arabia, the UAE, and Malaysia, among other nations—as well as legal restrictions with often harsh penalties.

The use of cybercrime provisions to target LGBTQ+ expression is on the rise in a number of countries, particularly in the MENA region. In Jordan, the Cybercrime Law of 2023, passed last August, imposes restrictions on freedom of expression, particularly for LGBTQ+ individuals. Articles 13 and 14 of the law impose penalties for producing, distributing, or consuming “pornographic activities or works” and for using information networks to “facilitate, promote, incite, assist, or exhort prostitution and debauchery, or seduce another person, or expose public morals.” Jordan follows in the footsteps of neighboring Egypt, which instituted a similar law in 2018.

The LGBTQ+ movement in Bangladesh is impacted by the Cyber Security Act, quietly passed in 2023. Several provisions of the Act can be used to target LGBTQ+ sites: Section 8 enables the government to shut down websites, while Section 42 grants law enforcement agencies the power to search and seize a person’s hardware, social media accounts, and documents, both online and offline, without a warrant. And Section 25 criminalizes published content that tarnishes the image or reputation of the country.

The online struggle is global

In addition to national-level restrictions, LGBTQ+ individuals often face content suppression on social media platforms. While some of this occurs as the result of government requests, much of it is actually due to platforms’ own policies and practices. A recent GLAAD case study points to specific instances where content promoting or discussing LGBTQ+ issues is disproportionately flagged and removed, compared to non-LGBTQ+ content. The GLAAD Social Media Safety Index also provides numerous examples where platforms inconsistently enforce their policies. For instance, posts that feature LGBTQ+ couples or transgender individuals are sometimes taken down for alleged policy violations, while similar content featuring heterosexual or cisgender individuals remains untouched. This inconsistency suggests a bias in content moderation that EFF has previously documented and leads to the erasure of LGBTQ+ voices in online spaces.

Likewise, the community now faces threats at the global level, in the form of the impending UN Cybercrime Convention, currently in negotiations. As we’ve written, the Convention would expand cross-border surveillance powers, enabling nations to potentially exploit these powers to probe acts they controversially label as crimes based on subjective moral judgements rather than universal standards. This could jeopardize vulnerable groups, including the LGBTQ+ community.

EFF is pushing back to ensure that the Cybercrime Treaty's scope is narrow and that human rights safeguards are a priority. You can read our written and oral interventions and follow our Deeplinks Blog for updates. Earlier this year, along with Access Now, we also submitted comments to the U.N. Independent Expert on protection against violence and discrimination based on sexual orientation and gender identity (IE SOGI) to inform the Independent Expert’s thematic report presented to the U.N. Human Rights Council at its fifty-sixth session.

But just as the struggle for LGBTQ+ rights and recognition is global, so too is the struggle for a safer and freer internet. EFF works year round to highlight that struggle and to ensure LGBTQ+ rights are protected online. We collaborate with allies around the world, and work to ensure that both states and companies protect and respect the rights of LGBTQ+ communities worldwide.

We also want to help LGBTQ+ communities stay safer online. As part of our Surveillance Self-Defense project, we offer a number of guides for safer online communications, including a guide specifically for LGBTQ+ youth.

EFF believes in preserving an internet that is free for everyone. While there are numerous harms online as in the offline world, digital spaces are often a lifeline for queer youth, particularly those living in repressive environments. The freedom of discovery, the sense of community, and the access to information that the internet has provided for so many over the years must be preserved. 



Hack of Age Verification Company Shows Privacy Danger of Social Media Laws

We’ve said it before: online age verification is incompatible with privacy. Companies responsible for storing or processing sensitive documents like drivers’ licenses are likely to encounter data breaches, potentially exposing not only personal data like users’ government-issued ID, but also information about the sites that they visit. 

This threat is not hypothetical. This morning, 404 Media reported that a major identity verification company, AU10TIX, left login credentials exposed online for more than a year, allowing access to this very sensitive user data. 

A researcher gained access to the company’s logging platform, “which in turn contained links to data related to specific people who had uploaded their identity documents,” including “the person’s name, date of birth, nationality, identification number, and the type of document uploaded such as a drivers’ license,” as well as images of those identity documents. Platforms reportedly using AU10TIX for identity verification include TikTok and X, formerly Twitter. 

Lawmakers pushing forward with dangerous age verification laws should stop and consider this report. Proposals like the federal Kids Online Safety Act and California’s Assembly Bill 3080 are moving further toward passage, with lawmakers in the House scheduled to vote on KOSA in a key committee this week, and California's Senate Judiciary Committee set to discuss AB 3080 next week. Several other laws requiring age verification for accessing “adult” content and social media have already passed in states across the country. EFF and others are challenging some of these laws in court. 

In the final analysis, age verification systems are surveillance systems. Mandating them forces websites to require visitors to submit information such as government-issued identification to companies like AU10TIX. Hacks and data breaches of this sensitive information are not a hypothetical concern; it is simply a matter of when the data will be exposed, as this breach shows. 

Data breaches can lead to any number of dangers for users: phishing, blackmail, or identity theft, in addition to the loss of anonymity and privacy. Requiring users to upload government documents—some of the most sensitive user data—will hurt all users. 

According to the news report, the AU10TIX exposure does not appear, so far, to have led to access of user data beyond what the researcher showed was possible. If age verification requirements are passed into law, users will likely find themselves forced to share their private information across networks of third-party companies if they want to continue accessing and sharing online content. Within a year, it wouldn’t be strange to have uploaded your ID to a half-dozen different platforms. 

No matter how vigilant you are, you cannot control what other companies do with your data. If age verification requirements become law, you’ll have to be lucky every time you are forced to share your private information. Hackers will just have to be lucky once. 

EFF Livestream Series Coming to a Platform Near You!

EFF is excited to kick off a new series of livestream events this summer! Please join EFF staff and fellow digital freedom supporters as we dive into three topics near and dear to our hearts.

July 18: The U.S. Supreme Court Takes on the Internet

In the first segment of EFF's livestream series, we'll dive into the impact of the U.S. Supreme Court's recent opinions on technology and civil liberties. Get an expert's look at the court cases making the biggest waves for tech users with our panel featuring EFF Civil Liberties Director David Greene.

RSVP TODAY!


August 28:
Reproductive Justice in the Digital Age

This summer marks the two-year anniversary of the Dobbs decision overturning Roe v. Wade. Join EFF for a livestream discussion about restrictions on reproductive healthcare and the choices people seeking an abortion must face in a digital age where everything is connected and surveillance is rampant. Learn what’s happening across the United States and how you can get involved.

RSVP TODAY!


October 17:
How to Protest with Privacy in Mind

Do you know what to do if you’re subjected to a search or arrest at a protest? Join EFF for a livestream discussion about how to protect your electronic devices and digital assets before, during, and after a demonstration. Learn how you can avoid confiscation or forced deletion of media, and keep your movements and associations private.

RSVP TODAY!


We hope you can join us for all three events! Be sure to share this post with any interested friends and tell them to join us. Thank you for helping EFF spread the word about privacy and free expression online.

We encourage everyone to join us live for these discussions. Please note that they will be recorded. Recordings will be available following each event.

EFF Welcomes Tarah Wheeler to Its Board of Directors

Wheeler Brings Perspectives on Information Security and International Conflict to the Board of Directors

SAN FRANCISCO—The Electronic Frontier Foundation (EFF) is honored to announce today that Tarah Wheeler — a social scientist studying international conflict, an author, and a poker player who is CEO of the cybersecurity compliance company Red Queen Dynamics — has joined EFF’s Board of Directors. 

Wheeler has served on EFF’s advisory board since June 2020. She is the Senior Fellow for Global Cyber Policy at Council on Foreign Relations and was elected to Life Membership at CFR in 2023. She is an inaugural contributing cybersecurity expert for the Washington Post, and a Foreign Policy contributor on cyber warfare. She is the author of the best-selling “Women In Tech: Take Your Career to The Next Level With Practical Advice And Inspiring Stories” (2016). 

“I am very excited to have Tarah bring her judgment, her technical expertise and her enthusiasm to EFF’s Board,” EFF Executive Director Cindy Cohn said. “She has supported us in many ways before now, including creating and hosting the ‘Betting on Your Digital Rights: EFF Benefit Poker Tournament at DEF CON,’ which will have its third year this summer. Now we get to have her in a governance role as well.” 

"I am deeply honored to join the Board of Directors at the Electronic Frontier Foundation,” Wheeler said. “EFF's mission to defend civil liberties in the digital world is more critical than ever, and I am humbled to be invited to serve in this work. EFF has been there for me and other information security researchers when we needed a champion the most. Together, we will continue to fight for the rights and freedoms that ensure a free and open internet for all." 

Wheeler has been a US/UK Fulbright Scholar in Cyber Security and Fulbright Visiting Scholar at the Centre for the Resolution of Intractable Conflict at the University of Oxford, the Brookings Institution’s contributing cybersecurity editor, a Cyber Project Fellow at the Belfer Center for Science and International Affairs at Harvard University‘s Kennedy School of Government, and an International Security Fellow at New America leading a new international cybersecurity capacity building project with the Hewlett Foundation’s Cyber Initiative. She has been Head of Offensive Security & Technical Data Privacy at Splunk & Senior Director of Engineering and Principal Security Advocate at Symantec Website Security. She has led projects at Microsoft Game Studios (Halo and Lips) and architected systems at encrypted mobile communications firm Silent Circle. She has two cashes and $4,722 in lifetime earnings in the World Series of Poker. 

Members of the Board of Directors ensure EFF’s sustainability by adopting sound, ethical, and legal governance and financial management policies so that the organization has adequate resources to advance its mission.  

Shari Steele — who had been on EFF’s Board since 2015 when she ceased being EFF’s Executive Director — has rotated off the Board. Gigi Sohn has been elected Chair of the Board. 

For the full roster of EFF’s Board of Directors: https://www.eff.org/about/board

EFF Statement on Assange Plea Deal

The United States has now, for the first time in the more than 100-year history of the Espionage Act, obtained an Espionage Act conviction for basic journalistic acts. Here, Assange's Criminal Information is for obtaining newsworthy information from a source, communicating it to the public, and expressing an openness to receiving more highly newsworthy information. This sets a dangerous practical precedent, and all those who value a free press should work to make sure that it never happens again. While we are pleased that Assange can now be freed for time served and return to Australia, these charges should never have been brought.

Additional information about this charge: 

EFF Opposes the American Privacy Rights Act

Protecting people's privacy is the first step we should take to create meaningful online regulation. That's why EFF has previously expressed concerns about the American Privacy Rights Act (APRA), which, rather than set up strong protections, instead freezes consumer data privacy protections in place, preempts existing state laws, and would prevent states from creating stronger protections in the future.

While the bill has not yet been formally introduced, subsequent discussion drafts of the bill have not addressed our concerns; in fact, they've only deepened them. So, earlier this month, EFF told Congress that it opposes APRA and signed two letters to reiterate why overriding stronger state laws—and preventing states from passing stronger laws—hurts everyone.

EFF has a clear position on this: federal privacy laws should not roll back state privacy protections. And there is no reason that we must trade strong state laws for weaker national privacy protection. Companies that collect and use data—and have worked to kill strong state privacy bills time and again—want Congress to believe a "patchwork" of state laws is unworkable for data privacy, even though existing federal privacy and civil rights laws operate as regulatory floors and do not prevent states from enacting and enforcing their own stronger statutes. In a letter opposing the preemption sections of the bill, our allies at the American Civil Liberties Union (ACLU) stated it this way: "the soundest approach to avoid the harms from preemption is to set the federal standard as a national baseline for privacy protections — and not a ceiling." Advocates from ten states signed on to the letter warning that APRA, as written, would preempt dozens of stronger state laws. These include laws regulating AI in Colorado, internet privacy in Maine, healthcare and tenant privacy in New York, and biometric privacy in Illinois, just to name a handful. 

APRA would also override a California law passed to rein in data brokers and replace it with weaker protections. EFF last year joined Privacy Rights Clearinghouse (PRC) and others to support and pass the California Delete Act, which gives people an easy way to delete information held by data brokers. In a letter opposing APRA, several organizations that supported California's law highlighted ways that APRA falls short of what's already on the books in California. "By prohibiting authorized agents, omitting robust transparency and audit requirements, removing stipulated fines, and, fundamentally, preempting stronger state laws, the APRA risks leaving consumers vulnerable to ongoing privacy violations and undermining the progress made by trailblazing legislation like the California Delete Act," the letter said.

EFF continues to advocate for strong privacy legislation and encourages APRA's authors to center strong consumer protections in future drafts.

To view the coalition letter on the preemption provisions of APRA, click here: https://www.eff.org/document/aclu-letter-apra-preemption

To view the coalition letter opposing APRA because of its data broker provisions, click here: https://www.eff.org/document/prc-letter-apra-data-broker-provisions

🌜 A voice cries out under the crescent moon...

EFF needs your help to defend privacy and free speech online. Learn why you're crucial to the fight in this edition of campfire tales from our friends, The Encryptids. These cunning critters have come out of hiding to help us celebrate EFF’s summer membership drive for internet freedom.

Through EFF's 34th birthday on July 10, you can be a member for just $20 and receive 2 rare gifts (including a Bigfoot enamel pin!). As a bonus, new recurring monthly or annual donations get a free match! Join us today.

Today’s post comes from international vocal icon Banshee. She may not be a beast like many cryptids, but she is a *BEAST* when it comes to free speech and local activism...

-Aaron Jue
EFF Membership Team

_______________________________________

Banshee in pink floating in a forest saying "Free as in speech!"

What's that saying about being well behaved and making history? Most people picture me shrieking across the Irish countryside. It's a living, but my voice has real power: it can help me speak truth to power, and it can lend support to the people in my communities.

Free expression is a human right, full stop. And it’s tough to get it right on the internet. Just look at messy content moderation from social media giants. Or the way politicians, celebrities, and companies abuse copyright and trademark law to knock their critics offline. And don’t get me started on repressive governments cutting the internet during protests. Censorship hits disempowered groups the hardest. That’s why I raise my voice to prop up the people around me, and why EFF is such an important ally in the fight to protect speech in the modern world.


The things you create, say, and share can change the world, and there’s never been a better megaphone than the internet. A free web carries your voice whether your cause is the environment, workers’ rights, gender equality, or your local parent-teacher group. For all the sewage that people spew online, we must fight back with better ideas and a brighter vision for the future.

EFF’s lawyers, policy analysts, tech experts, and activists know free speech, creativity, and privacy online better than anyone. Hell, EFF even helped establish computer code as legally protected speech back in the 90s. I hope you’ll use your compassion to protect our freedom online with even a small donation to EFF (or even start a monthly donation!).

Join EFF


So the next time someone tells you that you’re being shrill, remind them to STFU because you have something to say. And be grateful that people around the world support EFF to protect our rights online.

Down for the Cause,

Banshee

_______________________________________

EFF is a member-supported U.S. 501(c)(3) organization celebrating TEN YEARS of top ratings from the nonprofit watchdog Charity Navigator! Your donation is tax-deductible as allowed by law.

For The Bragging Rights: EFF’s 16th Annual Cyberlaw Trivia Night

This post was authored by the mysterious Raul Duke.

The weather was unusually cool for a summer night. Just the right amount of bitterness in the air for attorneys from all walks of life to gather in San Francisco’s Mission District for EFF’s 16th annual Cyberlaw Trivia Night.

Inside Public Works, attorneys filled their plates with chicken and waffles, grabbed a fresh tech-inspired cocktail, and found their tables—ready to compete against their colleagues in obscure tech law trivia. The evening started promptly six minutes late, at 7:06 PM PT, with Aaron Jue, EFF's Director of Member Engagement, introducing this year’s trivia tournament.

A lone Quizmaster, Kurt Opsahl, took the stage, noting that his walk-in was missing a key component, until The Blues Brothers started playing, filling the quizmaster with the valor to thank EFF’s intern fund supporters Fenwick and Morrison Foerster. The judges begrudgingly took the stage as the quizmaster reminded them that they have jobs at this event.

One of the judges, EFF’s Civil Liberties Director David Greene, gave some fiduciary advice to the several former EFF interns who were in the crowd. It was anyone’s guess as to whether they had gleaned any inside knowledge about the trivia.

I asked around to find out what the attorneys had to gain by participating in this trivia night. I learned that not only were bragging rights on the table, but teams also had a chance to win champion steins.

The prizes: EFF steins!

With formalities out of the way, the first round of trivia - “General” - started with a possibly rousing question about the right to repair. Round one ended with the eighth question, which included a major typo calling the “Fourth Amendment is Not for Sale Act” the “First Amendment...” The proofreaders responsible for this mistake have been dealt with.

I was particularly struck by the names of each team: “Run DMCA,” “Ineffective Altruists,” “Subpoena Colada,” “JDs not LLMs,” “The little VLOP that could,” and “As a language model, I can't answer that question.” Who knew attorneys could come up with such creative names?

I asked one of the lawyers if he could give me legal advice on a personal matter (I won’t get into the details here, but it concerns both maritime law and equine law). The lawyer gazed at me with the same look one gives a child who has just proudly thrown their food all over the floor. I decided to drop the matter.

Back to the event. It was a close game until the sixth and final round, though we wouldn’t hear the final winners until after the tiebreaker questions.

After several minutes, the tiebreaker was announced. The prompt: which team could get the closest to pi without going over. This sent your intrepid reporter into an existential crisis. Could one really get to the end of pi? I’m told you could get to Pluto with just the first four digits, and I didn’t see any reason to go further than that. During my descent into madness, it was revealed that team “JDs not LLMs” knew 22 digits of pi.

After that shocking revelation, the final results were read, with the winning trivia masterminds being:

1st Place: JDs not LLMs

2nd Place: The Little VLOP That Could

3rd Place: As A Language Model, I Can't Answer That Question

EFF Membership Advocate Christian Romero taking over for Raul Duke.

EFF hosts Cyberlaw Trivia Night to gather those in the legal community who help protect online freedom for tech users. Among the many firms that dedicate their time, talent, and resources to the cause, we would especially like to thank Fenwick and Morrison Foerster for supporting EFF’s Intern Fund!

If you are an attorney working to defend civil liberties in the digital world, consider joining EFF's Cooperating Attorneys list. This network helps EFF connect people with legal help when we are unable to assist them directly.

Are you interested in attending or sponsoring an upcoming EFF Trivia Night? Please reach out to tierney@eff.org for more information.

Be sure to check EFF’s events page and mark your calendar for next year’s 17th annual Cyberlaw Trivia Night.

Opposing a Global Surveillance Disaster | EFFector 36.8

Join EFF on a road trip through the information superhighway! As you choose the perfect playlist for the trip, we'll share our findings about the latest generation of cell-site simulators, offer security tips for protestors at college campuses, and rant about the surveillance abuses that could come from the latest UN Cybercrime Convention draft.

As we reach the end of our road trip, know that you can stay up-to-date on these issues with our EFFector newsletter! You can read the full issue here, or subscribe to get the next one in your inbox automatically! You can also listen to the audio version of the newsletter on the Internet Archive, or by clicking the button below:

LISTEN ON YouTube

EFFECTOR 36.8 - Opposing A Global Surveillance Disaster

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Police are Using Drones More and Spending More For Them

Police in Minnesota are buying and flying more drones than ever before, according to an annual report recently released by the state’s Bureau of Criminal Apprehension (BCA). Minnesota law enforcement flew drones without a warrant 4,326 times in 2023, racking up a state-wide expense of over $1 million. This marks a large 41 percent increase from 2022, when departments across the state used drones 3,076 times and spent $646,531.24 on them. The data show that more was spent on drones last year than in the previous two years combined. The Minneapolis Police Department, the state’s largest, implemented a new drone program at the end of 2022 and reported that its 63 warrantless flights in 2023 cost nearly $100,000.

Since 2020, the state of Minnesota has been obligated to put out a yearly report documenting every time, and the reason, law enforcement agencies in the state — local, county, or state-wide — used unmanned aerial vehicles (UAVs), more commonly known as drones, without a warrant. This is partly because Minnesota law requires a warrant for law enforcement to use drones except in specific situations listed in the statute. The State Court Administrator is also required to provide a public report of the number of warrants issued for the use of UAVs, and the data gathered by them. These regular reports give us a glimpse into how police are actually using these devices and how often. As more and more police departments around the country use drones or experiment with drones as first responders, Minnesota offers an example of what transparency around drone adoption can look like.

You can read our blog about the 2021 Minnesota report here.

According to EFF’s Atlas of Surveillance, 130 of Minnesota’s 408 law enforcement agencies have drones. Of the Minnesota agencies known to have drones prior to this month’s report, 29 of them did not provide the BCA with 2023 use and cost data.

One of the more revealing aspects of drone deployment provided by the report is the purpose for which police are using them. A vast majority of uses, almost three-quarters of all flights, were related either to obtaining an aerial view of incidents involving injuries or death, like car accidents, or to police training and public relations.

Are drones really just a $1 million training tool? We’ve argued many times that tools deployed by police for very specific purposes often find punitive uses that far outstrip their original, possibly more innocuous intention. In the case of Minnesota’s drone usage, that can be seen in the other exceptions to the warrant requirement, such as surveilling a public event where there’s a “heightened risk” to participant security. The warrant requirement is meant to prevent aerial surveillance that violates civil liberties, but these exceptions open the door to surveillance of First Amendment-protected gatherings and demonstrations.

New ALPR Vulnerabilities Prove Mass Surveillance Is a Public Safety Threat

Government officials across the U.S. frequently promote the supposed, and often anecdotal, public safety benefits of automated license plate readers (ALPRs), but rarely do they examine how this very same technology poses public safety risks that may outweigh those of the crimes it is meant to address in the first place. When law enforcement uses ALPRs to document the comings and goings of every driver on the road, regardless of a nexus to a crime, it results in gargantuan databases of sensitive information, and few agencies are equipped, staffed, or trained to harden their systems against quickly evolving cybersecurity threats.

The Cybersecurity and Infrastructure Security Agency (CISA), a component of the U.S. Department of Homeland Security, released an advisory last week that should be a wake-up call to the thousands of local government agencies around the country that use ALPRs to surveil the travel patterns of their residents by scanning their license plates and "fingerprinting" their vehicles. The bulletin outlines seven vulnerabilities in Motorola Solutions' Vigilant ALPRs, including missing encryption and insufficiently protected credentials.

To give a sense of the scale of the data collected with ALPRs, EFF found that just 80 California agencies, using primarily Vigilant technology, collected more than 1.6 billion license plate scans (CSV) in 2022. This data can be used to track people in real time, identify their "pattern of life," and even identify their relations and associates. An EFF analysis from 2021 found that 99.9% of this data is unrelated to any public safety interest when it's collected. If accessed by malicious parties, the information could be used to harass, stalk, or even extort innocent people.

Unlike location data a person shares with, say, GPS-based navigation app Waze, ALPRs collect and store this information without consent and there is very little a person can do to have this information purged from these systems. And while a person can turn off their phone if they are engaging in a sensitive activity, such as visiting a reproductive health facility or attending a protest, tampering with your license plate is a crime in many jurisdictions. Because drivers don't have control over ALPR data, the onus for protecting the data lies with the police and sheriffs who operate the surveillance and the vendors that provide the technology.

It's a general tenet of cybersecurity that you should not collect and retain more personal data than you are capable of protecting. Perhaps ironically, a Motorola Solutions cybersecurity specialist acknowledged in a Police Chief magazine article this month that public safety agencies "are often challenged when it comes to recruiting and retaining experienced cybersecurity personnel," even though "the potential for harm from external factors is substantial."

That partially explains why more than 125 law enforcement agencies reported a data breach or cyberattack between 2012 and 2020, according to research by former EFF intern Madison Vialpando. The Motorola Solutions article claims that ransomware attacks "targeting U.S. public safety organizations increased by 142 percent" in 2023.

Yet the temptation to "collect it all" continues to overshadow the responsibility to "protect it all." What makes the latest CISA disclosure even more outrageous is that it is at least the third time in the last decade that major security vulnerabilities have been found in ALPRs.

In 2015, building off the previous work of University of Arizona researchers, EFF published an investigation that found more than 100 ALPR cameras in Louisiana, California, and Florida connected to the internet unsecured, many with publicly accessible websites that anyone could use to manipulate the cameras' controls or siphon off data. Just by visiting a URL, a malicious actor without any specialized knowledge could view live feeds of the cameras, including one that could be used to spy on college students at the University of Southern California. Some of the agencies involved fixed the problem after being alerted to it. However, 3M, which had recently bought the ALPR manufacturer PIPS Technology (since sold to Neology), claimed zero responsibility, saying instead that it was the agencies' responsibility to manage the devices' cybersecurity. "The security features are clearly explained in our packaging," the company wrote. Four years later, TechCrunch found that the problem still persisted.

In 2019, Customs & Border Protection's vendor providing ALPR technology for Border Patrol checkpoints was breached, with hackers gaining access to 105,000 license plate images, as well as more than 184,000 images of travelers from a face recognition pilot program. Some of those images made it onto the dark web, according to reporting by journalist Joseph Cox.

If there's one positive thing we can say about the latest Vigilant vulnerability disclosures, it's that for once a government agency identified and reported the vulnerabilities before they could do damage. The initial discovery was made by the Michigan State Police's Michigan Cyber Command Center, which passed the information on to CISA, which then worked with Motorola Solutions to address the problems.

The Michigan Cyber Command Center found a total of seven vulnerabilities in Vigilant devices: two of medium severity and five of high severity.

One of the most severe vulnerabilities (given a score of 8.6 out of 10) was that every camera sold by Motorola had a wifi network turned on by default that used the same hardcoded password as every other camera, meaning that if someone found the password to one camera, they could connect to any other camera as long as they were near it.

Someone with physical access to the camera could also easily install a backdoor, which would give them access even if the wifi was turned off. An attacker could even log into the system locally using a default username and password. Once connected to a camera, they could see live video, control the camera, or even disable it. They could also view historical license plate records stored without any kind of encryption, as well as logs containing authentication information that could be used to connect to a back-end server where more information is stored. Motorola says it has mitigated all of these vulnerabilities.
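For agencies auditing their own deployments, the chain described above suggests an obvious first test: does a device still accept a vendor default login? Below is a minimal Python sketch of such a check; the /login endpoint and the credential pairs are hypothetical placeholders, since neither the advisory nor this post publishes the real values.

import requests

# Hypothetical default credential pairs, for illustration only;
# the real values are not published in the CISA advisory.
DEFAULT_CREDENTIALS = [("admin", "admin"), ("admin", "password")]

def accepts_default_login(camera_ip: str) -> bool:
    """Return True if the device's admin interface accepts a known default login.

    Assumes a hypothetical HTTP Basic Auth endpoint at /login; actual
    Vigilant devices may expose a different management interface.
    """
    for username, password in DEFAULT_CREDENTIALS:
        try:
            resp = requests.get(
                f"http://{camera_ip}/login",
                auth=(username, password),
                timeout=5,
            )
        except requests.RequestException:
            return False  # Device unreachable; nothing to flag.
        if resp.status_code == 200:
            return True  # Default credentials still work: remediate immediately.
    return False

Any device that passes a check like this should have its credentials rotated and its default wifi interface disabled.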

When vulnerabilities are found, it's not enough for them to be patched: they must serve as stark warnings for policymakers and the courts. Following EFF's report in 2015, Louisiana Gov. Bobby Jindal spiked a statewide ALPR program, writing in his veto message:

Camera programs such as these that make private information readily available beyond the scope of law enforcement, pose a fundamental risk to personal privacy and create large pools of information belonging to law abiding citizens that unfortunately can be extremely vulnerable to theft or misuse.

In May, a Norfolk Circuit Court Judge reached the same conclusion, writing in an order suppressing the data collected by ALPRs in a criminal case:

The Court cannot ignore the possibility of a potential hacking incident either. For example, a team of computer scientists at the University of Arizona was able to find vulnerable ALPR cameras in Washington, California, Texas, Oklahoma, Louisiana, Mississippi, Alabama, Florida, Virginia, Ohio, and Pennsylvania. (Italics added for emphasis.) … The citizens of Norfolk may be concerned to learn the extent to which the Norfolk Police Department is tracking and maintaining a database of their every movement for 30 days. The Defendant argues “what we have is a dragnet over the entire city” retained for a month and the Court agrees.

But a data breach isn't the only way that ALPR data can be leaked or abused. In 2022, an officer in the Kechi (Kansas) Police Department accessed ALPR data shared with his department by the Wichita Police Department to stalk his wife. Meanwhile, the Orrville (Ohio) Police Department recently released a driver's raw ALPR scans to a total stranger in response to a public records request, 404 Media reported.

Public safety agencies must resist the allure of marketing materials promising surveillance omniscience, and instead collect only the data they need for actual criminal investigations. They must never store more data than they can adequately protect with their limited resources, or they must keep the public safe from data breaches by not collecting the data at all.

California Lawmakers Should Reject Mandatory Internet ID Checks

California lawmakers are debating an ill-advised bill that would require internet users to show their ID in order to look at sexually explicit content. EFF has sent a letter to California legislators encouraging them to oppose Assembly Bill 3080, which would have the result of censoring the internet for all users. 

If you care about a free and open internet for all, and are a California resident, now would be a good time to contact your California Assemblymember and Senator and tell them you oppose A.B. 3080. 

Adults Have The Right To Free And Anonymous Internet Browsing

A.B. 3080 would make it illegal to show websites with one-third or more “sexually explicit content” to minors. These “explicit” websites would join a list of products or services that can’t be legally sold to minors in California, including things like firearms, ammunition, tobacco, and e-cigarettes.

But these things are not the same, and should not be treated the same under state or federal law. Adults have a First Amendment right to look for information online, including sexual content. One of the reasons EFF has opposed mandatory age verification is that there’s no way to check ID online just for minors without drastically harming the rights of adults to read, get information, and speak and browse online anonymously.

As EFF explained in a recent amicus brief on the issue, collecting ID online is fundamentally different from—and more dangerous than—in-person ID checks in the physical world. Online ID checks are not just a momentary display—they require adults “to upload data-rich, government-issued identifying documents to either the website or a third-party verifier” and create a “potentially lasting record” of their visit to the establishment.

The more information a website collects about visitors, the more chances there are for such data to get into the hands of a criminal or other bad actor, a marketing company, or someone who has filed a subpoena for it. So-called “anonymized” data can be reassembled, especially when it consists of data-rich government ID together with browsing data like IP addresses. 

Data breaches are a fact of life. Once governments insist on creating these ID logs for visiting websites with sexual content, those data breaches will become more dangerous. 

This Bill Mandates ID Checks For A Wide Range Of Content 

The bar is set low in this bill. It’s far from clear which websites prosecutors will consider to have one-third content that’s inappropriate for minors, as that can vary widely by community and even family standards. The bill will surely rope in general-use websites that allow some explicit content. A sex education website for high-school seniors, for instance, could be considered “offensive” and lacking in educational value for young minors.

Social media sites, online message forums, and even email lists may have some portion of content that isn’t appropriate for younger minors, alongside a large amount of general-interest content. A bill like California’s, which requires ID checks for any site with 33% content that prosecutors deem explicit, is similar to having Netflix require ID checks at login, whether a user wants to watch a G-rated movie or an R-rated movie.

Adults’ Right To View Websites Of Their Choice Is Settled Law 

U.S. courts have already weighed in numerous times on government efforts to age-gate content, including sexual content. In Reno v. ACLU, the Supreme Court struck down the anti-indecency provisions of the Communications Decency Act, a 1996 law that was intended to keep “obscene or indecent” material away from minors.

The high court considered the issue again in 2004 in Ashcroft v. ACLU, finding that a federal law of that era, which sought to impose age-verification requirements on online sexual content, was likely unconstitutional.

Other States Will Follow 

In the past year, several other state legislatures have passed similar unwise and unconstitutional “online ID check” laws. They are now subject to legal challenges working their way through the courts, including a Texas age-verification law that EFF has asked the Supreme Court to review.

Elected officials in many other states, however, wisely refused to enact mandatory online ID laws, including Minnesota, Illinois, and Wisconsin. In April, Arizona’s governor vetoed a mandatory ID-check bill that was passed along partisan lines in her state, stating that the bill “goes against settled case law” and insisting any future proposal must be bipartisan and also “work within the bounds of the First Amendment.” 

California is not only the largest state, it is the home of many of the nation’s largest creative industries. It has also been a leader in online privacy law. If California passes A.B. 3080, it will be a green light to other states to pass online ID-checking laws that are even worse. 

Tennessee, for instance, recently passed a mandatory ID bill that includes felony penalties for anyone who “publishes or distributes” a website with one-third adult content. Tennessee’s fiscal review committee estimated that the state will incarcerate one person per year under this law, and has budgeted accordingly. 

California lawmakers have a chance to restore some sanity to our national conversation about how to protect minors online. Mandatory ID checks, and fines or incarceration for those who fail to use them, are not the answer. 


How to Clean Up Your Bluesky Feed

In our recent comparison of Mastodon, Bluesky, and Threads, we detail a few of the ways the similar-at-a-glance microblogging social networks differ, and one of the main distinctions is how much control you have over what you see as a user. We’ve detailed how to get your Mastodon feed into shape before, and now it’s time to clean up your Bluesky feed. We’ll do this mostly through its moderation tools.

Currently, Bluesky is mostly a single experience running on one set of flagship services operated by the Bluesky corporation. As the AT Protocol expands and decentralizes, so will the variety of moderation and custom algorithmic feed options. But for the time being, we have Bluesky.

Bluesky’s current moderation filters operate on two levels: the default options built into the Bluesky app, and community-created filters called “labelers”. The company’s default system includes options and company-run labelers that hide the sorts of things we’re all used to having restricted on social networks, like spam or adult content. It also includes defaults that hide other categories, like engagement farming and certain extremist views. Community options use Bluesky’s own moderation tool, Ozone, and are built on exactly the same system as the company’s defaults; the only difference is which ones are built into the app. All this choice ends up being both powerful and overwhelming. So let’s walk through how to use it to make your Bluesky experience as good as possible.

Familiarize Yourself with Bluesky’s Moderation Tools

Bluesky offers several ways to control what appears in your feed: labeling and curation tools to hide (or warn about) the content of a post, and tools to block accounts from your feed entirely. Let’s start with customizing the content you see.

Get to Know Bluesky’s Built-In Settings

By default, Bluesky offers a basic moderation tool that allows you to show, hide, or warn about a range of content related to everything from topics like self-harm, extremist views, or intolerance, to more traditional content moderation like security concerns, scams, or inauthentic accounts.

This build-your-own-filter approach is different from other social networks, which tend to control moderation at the platform level, leaving little up to the end user. It gives you control over what you see in your feed, but it can also be overwhelming to wrap your head around. We suggest popping into the moderation screen to see how it’s set up, and tweaking any options you’d like:

Tap > Settings > Moderation > Bluesky Moderation Service to get to the settings. You can choose from three display options for each type of post: off (you’ll see it), warn (you’ll get a warning before you can view the post), or hide (you won’t see the post at all).

There’s no way currently to entirely opt out of Bluesky’s defaults, though the company does note that any separate client app (i.e., not the official Bluesky app) can set its own rules. However, you can subscribe to custom label sets to layer on top of the Bluesky defaults. These labels are similar to the Block Together tool formerly supported by Twitter, and they allow individual users or communities to create their own moderation filters. As with the default moderation options, you can choose to hide anything that gets labeled or to see a warning when it’s flagged. These custom services can include all sorts of highly specific labels, like whether an image is suspected to be made with AI or whether a post includes content that may trigger phobias (like spiders), and more. There’s currently no way to easily search for these labeling services, but Bluesky notes a few here, and there’s a broad list here.

To enable one of these, search for the account name of a labeler, like “@xblock.aendra.dev” and then subscribe to it. Once you subscribe, you can toggle any labeling filters the account offers. If you decide you no longer want to use the service or you want to change the settings, you can do so on the same moderation page noted above.
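Under the hood, a labeler is just an AT Protocol service that publishes labels, and its output can be inspected directly through the public com.atproto.label.queryLabels XRPC endpoint. Here is a rough Python sketch; the labeler hostname and the DID in the example are placeholders, since each labeler advertises its own service address.

import requests

# Placeholder service address; real labelers advertise theirs in their DID document.
LABELER_HOST = "https://labeler.example.com"

def fetch_labels(uri_pattern: str, limit: int = 50) -> list[dict]:
    """Fetch the labels a labeler has applied to records matching a URI pattern."""
    resp = requests.get(
        f"{LABELER_HOST}/xrpc/com.atproto.label.queryLabels",
        params={"uriPatterns": uri_pattern, "limit": limit},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("labels", [])

# Example: list every label this service has emitted for one account's records.
for label in fetch_labels("at://did:plc:example123/*"):
    print(label.get("uri"), label.get("val"))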

Build Your Own Mute and Block Lists (or Subscribe to Others)

Custom moderation and labels don’t replace one of the most common tools in all of social media: the ability to block accounts entirely. Here, though, Bluesky offers something new alongside the old. Not only can you block and mute users, you can also subscribe to block lists published by other users, similar to tools like Block Party.

To mute or block someone, tap their profile picture to get to their profile, then the three-dot icon, and choose “Mute Account” (they won’t appear in your feed, but they can still see yours) or “Block Account” (they won’t appear in your feed and they can’t view yours). Note that your list of muted accounts is private, while your blocked accounts are public: anyone can see who you’ve blocked, but not who you’ve muted.
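Each of those menu options corresponds to a documented XRPC call, so mutes can also be scripted, for example to carry them over to a new account. A rough Python sketch against the flagship bsky.social service follows; it assumes you log in with an app password rather than your main password.

import requests

PDS = "https://bsky.social"

def create_session(handle: str, app_password: str) -> dict:
    """Log in and return a session dict containing the access token and your DID."""
    resp = requests.post(
        f"{PDS}/xrpc/com.atproto.server.createSession",
        json={"identifier": handle, "password": app_password},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

def mute_actor(session: dict, actor: str) -> None:
    """Mute an account (by handle or DID) so it no longer appears in your feed."""
    resp = requests.post(
        f"{PDS}/xrpc/app.bsky.graph.muteActor",
        headers={"Authorization": f"Bearer {session['accessJwt']}"},
        json={"actor": actor},
        timeout=10,
    )
    resp.raise_for_status()

# session = create_session("you.bsky.social", "your-app-password")
# mute_actor(session, "noisy.example.com")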

You can also use built-in algorithmic tools like muting specific words or phrases. Tap > Settings > Moderation and then tap “Mute words & tags.” Type in any word or phrase you want to mute, select whether to mute it when it appears in “text & tags” or in “tags only,” and it’ll be hidden from your feed.
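For completeness, muted words live in your account preferences rather than behind a dedicated mute endpoint. The sketch below shows the general shape as we understand the app.bsky.actor.defs#mutedWordsPref lexicon; note that putPreferences replaces the entire preferences array, so the code fetches and merges first rather than overwriting blindly.

import requests

PDS = "https://bsky.social"

def add_muted_word(session: dict, word: str) -> None:
    """Append a muted word to the account's preferences (simplified sketch)."""
    headers = {"Authorization": f"Bearer {session['accessJwt']}"}

    # Fetch the current preferences first: putPreferences replaces the
    # whole array, so overwriting blindly would wipe unrelated settings.
    prefs = requests.get(
        f"{PDS}/xrpc/app.bsky.actor.getPreferences",
        headers=headers, timeout=10,
    ).json()["preferences"]

    # Find (or create) the muted-words preference object. The field names
    # here follow the lexicon as we understand it.
    muted = next(
        (p for p in prefs if p.get("$type") == "app.bsky.actor.defs#mutedWordsPref"),
        None,
    )
    if muted is None:
        muted = {"$type": "app.bsky.actor.defs#mutedWordsPref", "items": []}
        prefs.append(muted)
    muted["items"].append({"value": word, "targets": ["content", "tag"]})

    requests.post(
        f"{PDS}/xrpc/app.bsky.actor.putPreferences",
        headers=headers, json={"preferences": prefs}, timeout=10,
    ).raise_for_status()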

Users can also experiment with more elaborate algorithmic curation options, such as using tools like Blacksky to completely reshape their feeds.

If all this manual work makes you tired, then mute lists might be the answer. These are curated lists, made by other Bluesky users, that mass-mute accounts. Unlike individually muted accounts, though, these mute lists are public, so keep that in mind before you create or sign up for one.

As with community-run moderation services, there’s not currently a great way to search for these lists. To sign up for a mute list, you’ll need to know the username of someone who has created a block or mute list that you want to use. Search for their profile, tap the “Lists” option from their profile page, tap the list you’re interested in, then “Subscribe.” Confusingly, from this screen a “List” can be either a feed of posts you want to see (like a list of “people who work at EFF”) or a block or mute list. If it’s referred to as a “user list” and has the option to “Pin to home,” it’s a feed you can follow; otherwise it’s a mute or block list.
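If you would rather subscribe from a script, a mute list is addressed by its at:// URI and subscribed to with a single XRPC call, reusing the create_session helper from the earlier sketch. The list URI shown is a placeholder.

import requests

PDS = "https://bsky.social"

def mute_actor_list(session: dict, list_uri: str) -> None:
    """Subscribe to a mute list: every account on the list is muted for you."""
    resp = requests.post(
        f"{PDS}/xrpc/app.bsky.graph.muteActorList",
        headers={"Authorization": f"Bearer {session['accessJwt']}"},
        json={"list": list_uri},
        timeout=10,
    )
    resp.raise_for_status()

# A real at:// URI names the list creator's DID and the list's record key:
# mute_actor_list(session, "at://did:plc:example123/app.bsky.graph.list/abc123")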

Clean Up Your Timeline

Is there some strange design decision in the app that makes you question why you use it? Perhaps you hate seeing reposts? Bluesky offers a few ways to choose how information is displayed in the app that can make it easier to use. These are essentially custom algorithms, which Bluesky calls “Feeds,” that filter and focus your content however you want.

Subscribe to (or Build Your Own) Custom Feeds

Unlike most social networks, Bluesky gives you control over the algorithm that displays content. By default, you’ll get a chronological feed, but you can pick and choose from other options using custom feeds. These let you tinker with your feed, create entirely new ones, and more. Custom feeds make it so you can look at a feed of very specific types of posts, like only mutuals (people who also follow you back), quiet posters (people who don’t post much), news organizations, or just photos of cats. Here, unlike with some of the other custom tools, Bluesky does at least provide a way to search for feeds to use.

Tap > Settings > Feeds. You’ll find a list of your current feeds here, and if you scroll down you’ll find a search bar to look for new ones. These can range from as broad as “Posters in Japan” to as focused as “Posts about Taylor Swift.” Once you pick a few, these custom feeds will appear at the top of your main timeline. If you ever want to rearrange the order they appear in, head back to the Feeds page, then tap the gear icon in the top right to get to a screen where you can change the order. If you’re still struggling to find useful feeds, this search engine might help.
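Because a feed is itself an addressable record, you can also pull any custom feed's current posts programmatically with app.bsky.feed.getFeed, again reusing create_session from the earlier sketch. The feed URI here is a placeholder, as every feed generator publishes its own.

import requests

PDS = "https://bsky.social"

def get_feed(session: dict, feed_uri: str, limit: int = 30) -> list[dict]:
    """Fetch the latest posts from a custom feed generator."""
    resp = requests.get(
        f"{PDS}/xrpc/app.bsky.feed.getFeed",
        headers={"Authorization": f"Bearer {session['accessJwt']}"},
        params={"feed": feed_uri, "limit": limit},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("feed", [])

# posts = get_feed(session, "at://did:plc:example123/app.bsky.feed.generator/cat-pics")
# for item in posts:
#     print(item["post"]["author"]["handle"])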

Customize How Replies Work, and Other Little Things in Your Feed

Bluesky has one last trick that makes it a little nicer to use than other social networks: the amount of control you get over your main “following” feed. From your feed, tap the controls icon in the top right to get to the “Following Feed Preferences” page.

Here, you can do everything from hiding replies to controlling which replies you do see (like only replies to posts from people you follow, or only replies to posts with more than two replies). You can also hide reposts and quote posts, and even allow posts from some of your custom feeds to be injected into your main feed. For example, if you enable the “Show Posts from My Feeds” option and you have subscribed to “Quiet Posters,” you’ll occasionally get a post from someone you follow outside of strict chronological order.

Final bonus tip: enable two-factor authentication. Bluesky rolled out email-based two-factor authentication well after many people signed up, so if you’ve never looked at your settings, you probably never noticed it was offered. We suggest you turn it on to better secure your account. Head to > Settings, then scroll down to “Require email code to log into your account,” and enable it.

Phew, if that all felt a little overwhelming, that’s because it is. Sure, many people can sign up for Bluesky and never touch any of this stuff, but for those who want a safe, customizable experience, the whole thing feels a bit too crunchy in its current state. And while this sort of user empowerment, with so many levers to control the content, is great, it’s also a lot. The good news is that Bluesky’s defaults are currently good enough to get started. But one of the benefits of community-based moderation like we see on Mastodon or certain subreddits is that volunteers do a lot of this heavy lifting for everyone. The AT Protocol is still new, however, and perhaps as more developers shape its future through new tools and services, these difficulties will be eased.
