
The 2024 U.S. Election is Over. EFF is Ready for What's Next.

By Cindy Cohn
November 6, 2024 at 11:56

The dust of the U.S. election is settling, and we want you to know that EFF is ready for whatever’s next. Our mission to ensure that technology serves you—rather than silencing, tracking, or oppressing you—does not change. Some of what’s to come will be in uncharted territory. But we have been preparing for whatever this future brings for a long time. EFF is at its best when the stakes are high. 

No matter what, EFF will take every opportunity to stand with users. We’ll continue to advance our mission of user privacy, free expression, and innovation, regardless of the obstacles. We will hit the ground running. 

During the previous Trump administration, EFF didn’t just hold the line. We pushed digital rights forward in significant ways, both nationally and locally. We supported those protesting in the streets with expanded Surveillance Self-Defense guides and our Security Education Companion. The first offers information on how to protect yourself while you exercise your First Amendment rights, and the second gives tips on how to help your friends and colleagues be safer.

Along with our allies, we fought government use of face surveillance, passing municipal bans on the dangerous technology. We urged the Supreme Court to expand protections for your cell phone data, and in Carpenter v. United States, they did so—recognizing that location information collected by cell providers creates a “detailed chronicle of a person’s physical presence compiled every day, every moment over years.” Now, police must get a warrant before obtaining a significant amount of this data. 

EFF is at its best when the stakes are high. 

But we also stood our ground when governments and companies tried to take away the hard-fought protections we’d won in previous years. We stopped government attempts to backdoor private messaging with “ghost” and “client-side scanning” measures that obscured their intentions to undermine end-to-end encryption. We defended Section 230, the common sense law that protects Americans’ freedom of expression online by protecting the intermediaries we all rely on. And when the COVID pandemic hit, we carefully analyzed and pushed back against measures that would have gone beyond what was necessary to keep people safe and healthy by invading our privacy and inhibiting our free speech. 

Every time policymakers or private companies tried to undermine your rights online during the last Trump administration, from 2017 to 2021, we were there—just as we continued to be under President Biden. In preparation for the next four years, here’s just some of the groundwork we’ve already laid: 

  • Border Surveillance: For a decade we’ve been revealing how the hundreds of millions of dollars pumped into surveillance technology along the border impact the privacy of those who live, work, or seek refuge there, and thousands of others transiting through our border communities each day. We’ve defended the rights of people whose devices have been searched or seized upon entering the country. We’ve mapped out the network of automated license plate readers installed at checkpoints and land entry points, and the more than 465 surveillance towers along the U.S.-Mexico border. And we’ve advocated for sanctuary data policies restricting how ICE can access criminal justice and surveillance data.  
  • Surveillance Self-Defense: Protecting your private communications will only become more critical, so we’ve been expanding both the content and the translations of our Surveillance Self-Defense guides. We’ve written clear guidance for staying secure that applies to everyone, but is particularly important for journalists, protesters, activists, LGBTQ+ youths, and other vulnerable populations.
  • Reproductive Rights: Long before Roe v. Wade was overturned, EFF was working to minimize the ways that law enforcement can obtain data from tech companies and data brokers. After the Dobbs decision was handed down, we supported multiple laws in California that shield both reproductive and transgender health data privacy, even for people outside of California. But there’s more to do, and we’re working closely with those involved in the reproductive justice movement to make more progress. 
  • Transition Memo: When the next administration takes over, we’ll be sending a lengthy, detailed policy analysis to the incoming administration on everything from competition to AI to intellectual property to surveillance and privacy. We provided a similarly thoughtful set of recommendations on digital rights issues after the last presidential election, helping to guide critical policy discussions. 

We’ve prepared much more too. The road ahead will not be easy, and some of it is not yet mapped out, but one of the reasons EFF is so effective is that we play the long game. We’ll be here when this administration ends and the next one takes over, and we’ll continue to push. Our nonpartisan approach to tech policy works because we work for the user. 

We’re not merely fighting against individual companies or elected officials or even specific administrations.  We are fighting for you. That won’t stop no matter who’s in office. 


The Human Toll of ALPR Errors

November 1, 2024 at 23:17

This post was written by Gowri Nayar, an EFF legal intern.

Imagine driving to get your nails done with your family and all of a sudden, you are pulled over by police officers for allegedly driving a stolen car. You are dragged out of the car and detained at gunpoint. So are your daughter, sister, and nieces. The police handcuff your family, even the children, and force everyone to lie face-down on the pavement, before eventually realizing that they made a mistake. This happened to Brittney Gilliam and her family on a warm Sunday in Aurora, Colorado, in August 2020.

And the error? The police officers who pulled them over were relying on information generated by automated license plate readers (ALPRs). These are high-speed, computer-controlled camera systems that automatically capture all license plate numbers that come into view, upload them to a central server, and compare them to a “hot list” of vehicles sought by police. The ALPR system told the police that Gilliam’s car had the same license plate number as a stolen vehicle. But the stolen vehicle was a motorcycle with Montana plates, while Gilliam’s vehicle was an SUV with Colorado plates.
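The failure mode in these stories is easy to demonstrate: a single misread character is enough to turn an innocent plate into a spurious hot-list hit. The sketch below is purely illustrative (the plate numbers and the confusion table are hypothetical, drawn from the misreads described in this article, not from any real ALPR system):

```python
# Illustrative sketch of ALPR hot-list matching with hypothetical plates.
# A "hot list" is just a set of wanted plate numbers; any OCR misread
# that lands on a listed plate produces a false hit.

HOT_LIST = {"7AB1234"}  # hypothetical stolen-vehicle plate

# Character confusions like those in the incidents above: 3/7, H/M, 2/7.
CONFUSIONS = {"3": "7", "7": "3", "H": "M", "M": "H", "2": "7"}

def possible_misreads(plate):
    """All single-character misreads of a plate under CONFUSIONS."""
    reads = {plate}
    for i, ch in enumerate(plate):
        if ch in CONFUSIONS:
            reads.add(plate[:i] + CONFUSIONS[ch] + plate[i + 1:])
    return reads

def false_hit(true_plate):
    """True if a misread of an innocent plate matches the hot list."""
    if true_plate in HOT_LIST:
        return False  # a genuine hit, not a false one
    return bool(possible_misreads(true_plate) & HOT_LIST)

# An innocent car with plate "3AB1234" can be misread as "7AB1234",
# triggering a false hit unless officers verify make, model, and state.
```

The point of the sketch is that the match itself carries no information about vehicle type or issuing state, which is exactly the verification step officers skipped in the cases below.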

Likewise, Denise Green had a frightening encounter with San Francisco police officers late one night in March of 2009. She had just dropped her sister off at a BART train station when officers pulled her over because their ALPR indicated that she was driving a stolen vehicle. Multiple officers ordered her to exit her vehicle, at gunpoint, and kneel on the ground as she was handcuffed. It wasn’t until roughly 20 minutes later that the officers realized they had made an error and let her go.

It turns out the ALPR had misread a ‘3’ as a ‘7’ on Green’s license plate. But what is even more egregious is that none of the officers bothered to double-check the ALPR tip before acting on it.

In both of these dangerous episodes, the motorists were Black. ALPR technology can exacerbate our already discriminatory policing system, in part because too many police officers react recklessly to information provided by these readers.

Wrongful detentions like these happen all over the country. In Atherton, California, police officers pulled over Jason Burkleo on his way to work, on suspicion of driving a stolen vehicle. They ordered him at gunpoint to lie on his stomach to be handcuffed, only to later realize that their license plate reader had misread an ‘H’ for an ‘M’. In Espanola, New Mexico, law enforcement officials detained Jaclynn Gonzales at gunpoint and placed her 12-year-old sister in the back of a patrol vehicle, before discovering that the reader had mistaken a ‘2’ for a ‘7’ on the license plate. One study found that ALPRs misread the state of 1-in-10 plates (not counting other reading errors).

Other wrongful stops result from police being negligent in maintaining ALPR databases. Contra Costa sheriff’s deputies detained Brian Hofer and his brother on Thanksgiving day in 2019, after an ALPR indicated his car was stolen. But the car had already been recovered. Police had failed to update the ALPR database to take this car off the “hot list” of stolen vehicles for officers to recover.

Police over-reliance on ALPR systems is also a problem. Detroit police knew that the vehicle used in a shooting was a Dodge Charger. Officers then used ALPR cameras to find the license plate numbers of all Dodge Chargers in the area around that time. One such car, observed fully two miles away from the shooting, was owned by Isoke Robinson. Police arrived at her house and handcuffed her, placed her 2-year-old son in the back of their patrol car, and impounded her car for three weeks. None of the officers even bothered to check her car’s fog lights, though the vehicle used for the shooting had a missing fog light.

Officers have also abused ALPR databases to obtain information for their own personal gain, for example, to stalk an ex-wife. Sadly, officer abuse of police databases is a recurring problem.

Many people subjected to wrongful ALPR detentions are filing and winning lawsuits. The city of Aurora settled Brittney Gilliam’s lawsuit for $1.9 million. In Denise Green’s case, the city of San Francisco paid $495,000 for her seizure at gunpoint, constitutional injury, and severe emotional distress. Brian Hofer received a $49,500 settlement.

While the financial costs of such ALPR wrongful detentions are high, the social costs are much higher. Far from making our communities safer, ALPR systems repeatedly endanger the physical safety of innocent people subjected to wrongful detention by gun-wielding officers. They lead to more surveillance, more negligent law enforcement actions, and an environment of suspicion and fear.

Since 2012, EFF has been resisting the threats ALPR technology poses to safety, privacy, and other rights through public records requests, litigation, and legislative advocacy. You can learn more at our Street-Level Surveillance site.

"Is My Phone Listening To Me?"

October 31, 2024 at 13:32

The short answer is no, probably not! But, with EFF’s new site, Digital Rights Bytes, we go in-depth on this question—and many others.

Whether you’re just starting to question some of the effects of technology in your life or you’re the designated tech wizard of your family looking for resources to share, Digital Rights Bytes is here to help answer some common questions that may be bugging you about the devices you use.  

We often hear the question, “Is my phone listening to me?” Generally, the answer is no, but the reason you may think that your phone is listening to you is actually quite complicated. Data brokers and advertisers have some sneaky tactics at their disposal to serve you ads that feel creepy in the moment and may make you think that your device is secretly taking notes on everything you say. 

Watch the short video, featuring a cute little penguin discovering how advertisers collect and track their personal data, and share it with your family and friends who have asked similar questions! Curious to learn more? We also have information about how to mitigate this tracking and what EFF is doing to stop these data brokers from collecting your information. 

Digital Rights Bytes also has answers to other common questions about device repair, ownership of your digital media, and more. Got any additional questions you’d like us to answer in the future? Let us know on your favorite social platform using the hashtag #DigitalRightsBytes so we can find it!

Triumphs, Trials, and Tangles From California's 2024 Legislative Session

California’s 2024 legislative session has officially adjourned, and it’s time to reflect on the wins and losses that have shaped Californians’ digital rights landscape this year.

EFF monitored nearly 100 bills in the state this session alone, addressing a broad range of issues related to privacy, free speech, and innovation. These include proposed standards for Artificial Intelligence (AI) systems used by state agencies, the intersection of AI and copyright, police surveillance practices, and various privacy concerns. While we have seen some significant victories, there are also alarming developments that raise concerns about the future of privacy protection in the state.

Celebrating Our Victories

This legislative session brought some wins for privacy advocates—most notably the defeat of four dangerous bills: A.B. 3080, A.B. 1814, S.B. 1076, and S.B. 1047. These bills posed serious threats to consumer privacy and would have undermined the progress we’ve made in previous years.

First, we commend the California Legislature for not advancing A.B. 3080, “The Parent’s Accountability and Child Protection Act” authored by Assemblymember Juan Alanis (Modesto). The bill would have created powerful incentives for “pornographic internet websites” to use age-verification mechanisms. The bill was not clear on what counts as “sexually explicit content.” Without clear guidelines, it would have further harmed the ability of all youth—particularly LGBTQ+ youth—to access legitimate content online. Different versions of bills requiring age verification have appeared in more than a dozen states. We understand Asm. Alanis’ concerns, but A.B. 3080 would have required broad, privacy-invasive data collection from internet users of all ages. We are grateful that it did not make it to the finish line.

Second, EFF worked with dozens of organizations to defeat A.B. 1814, a facial recognition bill authored by Assemblymember Phil Ting (San Francisco). The bill attempted to expand the use of facial recognition software by police to “match” images from surveillance databases to possible suspects. Those images could then be used to issue arrest warrants or search warrants. The bill merely said that these matches couldn’t be the sole reason for a warrant to be issued—a standard that has already failed to stop false arrests in other states. Police departments and facial recognition companies alike currently maintain that police cannot justify an arrest using only algorithmic matches, so what would this bill really change? The bill only gave the appearance of doing something to address face recognition technology’s harms, while allowing the practice to continue. California should not give law enforcement the green light to mine databases, particularly those where people contributed information without knowledge that it would be accessed by law enforcement. You can read more about this bill here, and we are glad to see the California legislature reject this dangerous bill.

EFF also worked to oppose and defeat S.B. 1076, by Senator Scott Wilk (Lancaster). This bill would have weakened the California Delete Act (S.B. 362). Enacted last year, the Delete Act provides consumers with an easy “one-click” button to request the removal of their personal information held by data brokers registered in California, with that deletion mechanism due by January 1, 2026. S.B. 1076 would have opened loopholes for data brokers to duck compliance. This would have hurt consumer rights and undone oversight on an opaque ecosystem of entities that collect then sell personal information they’ve amassed on individuals. S.B. 1076 would have likely created significant confusion with the development, implementation, and long-term usability of the delete mechanism established in the California Delete Act, particularly as the California Privacy Protection Agency works on regulations for it. 

Lastly, EFF opposed S.B. 1047, the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” authored by Senator Scott Wiener (San Francisco). This bill aimed to regulate AI models that might have “catastrophic” effects, such as attacks on critical infrastructure. Ultimately, we believe focusing on speculative, long-term, catastrophic outcomes from AI (like machines going rogue and taking over the world) pulls attention away from AI-enabled harms that are directly before us. EFF supported parts of the bill, like the creation of a public cloud-computing cluster (CalCompute). However, we also had concerns from the beginning that the bill set an abstract and confusing set of regulations for those developing AI systems and was built on a shaky self-certification mechanism. Those concerns remained about the final version of the bill, as it passed the legislature.

Governor Newsom vetoed S.B. 1047; we encourage lawmakers concerned about the threats unchecked AI may pose to instead consider regulation that focuses on real-world harms.  

Of course, this session wasn’t all sunshine and rainbows, and we had some big setbacks. Here are a few:

The Lost Promise of A.B. 3048

Throughout this session, EFF and our partners supported A.B. 3048, common-sense legislation that would have required browsers to let consumers exercise their protections under the California Consumer Privacy Act (CCPA). California is currently one of approximately a dozen states requiring businesses to honor consumer privacy requests made through opt-out preference signals in their browsers and devices. Yet large companies have often made it difficult for consumers to exercise those rights on their own. The bill would have properly balanced providing consumers with ways to exercise their privacy rights without creating burdensome requirements for developers or hindering innovation.
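The opt-out preference signals referenced here already exist in practice: Global Privacy Control (GPC) is sent by participating browsers as a `Sec-GPC: 1` request header. A minimal server-side check might look like the sketch below (illustrative only; actually complying with the CCPA involves far more than reading one header, and the helper names are our own):

```python
# Sketch: honoring a Global Privacy Control opt-out signal server-side.
# GPC-enabled browsers attach the request header "Sec-GPC: 1".

def gpc_opt_out(headers):
    """Return True if the request carries a GPC opt-out signal.

    `headers` is a dict of HTTP request headers; the lookup is
    case-insensitive, since HTTP field names are case-insensitive.
    """
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("sec-gpc") == "1"

def may_sell_data(headers, user_opted_out=False):
    """A site honoring opt-out signals must not sell or share data when
    either an explicit account-level opt-out or a GPC signal is present."""
    return not (user_opted_out or gpc_opt_out(headers))
```

The design point A.B. 3048 addressed is the other side of this exchange: browsers would have been required to offer users a way to send the signal in the first place.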

Unfortunately, Governor Newsom chose to veto A.B. 3048. His veto letter cited the lack of support from mobile operators, arguing that because “No major mobile OS incorporates an option for an opt-out signal,” it is “best if design questions are first addressed by developers, rather than by regulators.” EFF believes technologists should be involved in the regulatory process and hopes to assist in that process. But Governor Newsom is wrong: we cannot wait for industry players to voluntarily support regulations that protect consumers. Proactive measures are essential to safeguard privacy rights.

This bill would have moved California in the right direction, making California the first state to require browsers to offer consumers the ability to exercise their rights. 

Wrong Solutions to Real Problems

A big theme we saw this legislative session was proposals that claimed to address real problems but would have been ineffective or failed to respect privacy. These included bills intended to address young people’s safety online and deepfakes in elections.

While we defeated many misguided bills that were introduced to address young people’s access to the internet, S.B. 976, authored by Senator Nancy Skinner (Oakland), received Governor Newsom’s signature and takes effect on January 1, 2027. This proposal aims to regulate the “addictive” features of social media, but instead compromises the privacy of consumers in the state. The bill is also likely preempted by federal law and raises considerable First Amendment and privacy concerns. S.B. 976 is unlikely to protect children online; instead, it will harm all online speakers by burdening free speech and diminishing online privacy, since it incentivizes companies to collect more personal information.

It is no secret that deepfakes can be incredibly convincing, and that can have scary consequences, especially during an election year. Two bills that attempted to address this issue are A.B. 2655 and A.B. 2839. Authored by Assemblymember Marc Berman (Palo Alto), A.B. 2655 requires online platforms to develop and implement procedures to block and take down, as well as separately label, digitally manipulated content about candidates and other elections-related subjects that creates a false portrayal about those subjects. We believe A.B. 2655 likely violates the First Amendment and will lead to over-censorship of online speech. The bill is also preempted by Section 230, a federal law that provides partial immunity to online intermediaries for causes of action based on the user-generated content published on their platforms. 

Similarly, A.B. 2839, authored by Assemblymember Gail Pellerin (Santa Cruz), not only bans the distribution of materially deceptive or altered election-related content, but also burdens mere distributors (internet websites, newspapers, etc.) who are unconnected to the creation of the content—regardless of whether they know of the prohibited manipulation. By extending beyond the direct publishers and toward republishers, A.B. 2839 burdens and holds liable republishers of content in a manner that has been found unconstitutional.

There are ways to address the harms of deepfakes without stifling innovation and free speech. We recognize the complex issues raised by potentially harmful, artificially generated election content. But A.B. 2655 and A.B. 2839, as written and passed, likely violate the First Amendment and run afoul of federal law. In fact, less than a month after they were signed, a federal judge put A.B. 2839’s enforcement on pause (via a preliminary injunction) on First Amendment grounds.

Privacy Risks in State Databases

We also saw a troubling trend in the legislature this year that we will be making a priority as we look to 2025. Several bills emerged this session that, in different ways, threatened to weaken privacy protections within state databases. Specifically, A.B. 518 and A.B. 2723, which received Governor Newsom’s signature, are a step backward for data privacy.

A.B. 518 authorizes numerous agencies in California to share, without restriction or consent, personal information with the state Department of Social Services (DSS), exempting this sharing from all state privacy laws. This includes county-level agencies, and people whose information is shared would have no way of knowing or opting out. A.B. 518 is incredibly broad, allowing the sharing of health information, immigration status, education records, employment records, tax records, utility information, children’s information, and even sealed juvenile records—with no requirement that DSS keep this personal information confidential, and no restrictions on what DSS can do with the information.

On the other hand, A.B. 2723 assigns a governing board to the new “Cradle to Career (CTC)” longitudinal education database intended to synthesize student information collected from across the state to enable comprehensive research and analysis. Parents and children provide this information to their schools, but this project means that their information will be used in ways they never expected or consented to. Even worse, as written, this project would be exempt from the following privacy safeguards of the Information Practices Act of 1977 (IPA), which, with respect to state agencies, would otherwise guarantee California parents and students:

  1. the right for subjects whose information is kept in the data system to receive notice that their data is in the system;
  2. the right to consent or, more meaningfully, to withhold consent; and
  3. the right to request correction of erroneous information.

By signing A.B. 2723, Gov. Newsom stripped California parents and students of the rights to even know that this is happening, or agree to this data processing in the first place. 

Moreover, while both of these bills allowed state agencies to trample on Californians’ IPA rights, those IPA rights do not even apply to the county-level agencies affected by A.B. 518 or the local public schools and school districts affected by A.B. 2723—pointing to the need for more guardrails around unfettered data sharing on the local level.

A Call for Comprehensive Local Protections

A.B. 2723 and A.B. 518 reveal a crucial missing piece in Californians' privacy rights: that the privacy rights guaranteed to individuals through California's IPA do not protect them from the ways local agencies collect, share, and process data. The absence of robust privacy protections at the local government level is an ongoing issue that must be addressed.

Now is the time to push for stronger privacy protections, hold our lawmakers accountable, and ensure that California remains a leader in the fight for digital privacy. As always, we want to acknowledge how much your support has helped our advocacy in California this year. Your voices are invaluable, and they truly make a difference.

Let’s not settle for half-measures or weak solutions. Our privacy is worth the fight.

Preemption Playbook: Big Tech’s Blueprint Comes Straight from Big Tobacco

October 16, 2024 at 16:53

Big Tech is borrowing a page from Big Tobacco's playbook to wage war on your privacy, according to Jake Snow of the ACLU of Northern California. We agree.  

In the 1990s, the tobacco industry attempted to use federal law to override a broad swath of existing state laws and prevent states from future action in those areas. For Big Tobacco, it was the “Accommodation Program,” a national campaign ultimately aimed at overriding state indoor smoking laws with a weaker federal law. Big Tech is now attempting this with federal privacy bills, like the American Privacy Rights Act (APRA), that would preempt many state privacy laws.  

In “Big Tech is Trying to Burn Privacy to the Ground–And They’re Using Big Tobacco’s Strategy to Do It,” Snow outlines a three-step process that both industries have used to weaken state laws. Faced with a public relations crisis, the industries:

  1. Muddy the waters by introducing various weak bills in different states.
  2. Complain that those bills are too confusing to comply with.
  3. Ask for “preemption” of grassroots efforts.

“Preemption” is a legal doctrine that allows a higher level of government to supersede the power of a lower level of government (for example, a federal law can preempt a state law, and a state law can preempt a city or county ordinance).  

EFF has a clear position on this: we oppose federal privacy laws that preempt current and future state privacy protections, especially by a lower federal standard.  

Congress should set a nationwide baseline for privacy, but should not take away states’ ability to react in the future to current and unforeseen problems. Earlier this year, EFF joined ACLU and dozens of digital and human rights organizations in opposing APRA’s preemption sections. The letter points out that “the soundest approach to avoid the harms from preemption is to set the federal standard as a national baseline for privacy protections — and not a ceiling.” EFF led a similar coalition effort in 2018.  

Companies that collect and use our data—and have worked to kill strong state privacy bills time and again—want Congress to believe a “patchwork” of state laws is unworkable for data privacy. But many existing federal laws concerning privacy, civil rights, and more operate as regulatory floors and do not prevent states from enacting and enforcing their own stronger statutes. Complaints of this “patchwork” have long been a part of the strategy for both Big Tech and Big Tobacco.  

States have long been the “laboratories of democracy” and have led the way in the development of innovative privacy legislation. Because of this, federal laws should establish a floor and not a ceiling, particularly as new challenges rapidly emerge. Preemption would leave consumers with inadequate protections, and make them worse off than they would be in the absence of federal legislation.  

Congress never preempted states' authority to enact anti-smoking laws, despite Big Tobacco’s strenuous efforts. So there is hope that Big Tech won’t be able to preempt state privacy law, either. EFF will continue advocating against preemption to ensure that states can protect their citizens effectively. 

Read Jake Snow’s article here.

EFF to New York: Age Verification Threatens Everyone's Speech and Privacy

October 15, 2024 at 14:11

Young people have a right to speak and access information online. Legislatures should remember that protecting kids' online safety shouldn't require sweeping online surveillance and censorship.

EFF reminded the New York Attorney General of this important fact in comments responding to the state's recently passed Stop Addictive Feeds Exploitation (SAFE) for Kids Act—which requires platforms to verify the ages of people who visit them. Now that New York's legislature has passed the bill, it is up to the state attorney general's office to write rules to implement it.

We urge the attorney general's office to recognize that age verification requirements are incompatible with privacy and free expression rights for everyone. As we say in our comments:

[O]nline age-verification mandates like that imposed by the New York SAFE For Kids Act are unconstitutional because they block adults from content they have a First Amendment right to access, burden their First Amendment right to browse the internet anonymously, and chill data security- and privacy-minded individuals who are justifiably leery of disclosing intensely personal information to online services. Further, these mandates carry with them broad, inherent burdens on adults’ rights to access lawful speech online. These burdens will not and cannot be remedied by new developments in age-verification technology.

We also noted that none of the methods of age verification listed in the attorney general's call for comments is both privacy-protective and entirely accurate. They each have their own flaws that threaten everyone's privacy and speech rights. "These methods don’t each fit somewhere on a spectrum of 'more safe' and 'less safe,' or 'more accurate' and 'less accurate.' Rather, they each fall on a spectrum of 'dangerous in one way' to 'dangerous in a different way'," we wrote in the comments.

Read the full comments here: https://www.eff.org/document/eff-comments-ny-ag-safe-kids-sept-2024

New IPANDETEC Report Shows Panama’s ISPs Still Lag in Protecting User Data

By Karen Gullo
October 10, 2024 at 14:20

Telecom and internet service providers in Panama are entrusted with the personal data of millions of users, bearing a responsibility to not only protect users’ privacy but also be transparent about their data handling policies. Digital rights organization IPANDETEC has evaluated how well companies have lived up to their responsibilities in ¿Quién Defiende Tus Datos? (“Who Defends Your Data?”) reports released in 2019, 2020, and 2022, which showed persistent deficiencies.

IPANDETEC’s new Panama report, released today, reveals that, with a few notable exceptions, providers in Panama continue to struggle to meet important best practice standards like publishing transparency reports, notifying users about government requests for their data, and requiring authorities to obtain judicial authorization for data requests, among other criteria.

As in its prior reports, IPANDETEC assessed mobile phone operators Más Móvil, Digicel, and Tigo. Claro, assessed in earlier reports, was acquired by Más Móvil in 2021 and was therefore dropped from the assessment. This year’s report also ranked fixed internet service providers InterFast Panama, Celero Fiber, and DBS Networks.

Companies were evaluated in nine categories, including disclosure of data protection policies and transparency reports, data security practices, public promotion of human rights, procedures for authorities seeking user data, publication of services and policies in native languages, and making policies and customer service available to people with disabilities. IPANDETEC also assessed whether mobile operators have opposed mandatory facial recognition for users' activation of their services.

Progress Made

Companies are awarded stars and partial stars for meeting parameters set for each category. Más Móvil scored highest with four stars, while Tigo received two and one-half stars and Digicel one and a half. Celero scored highest among fixed internet providers with one and three-quarters stars. InterFast and DBS received three-fourths of a star and one-half star, respectively.

The report showed progress on a few fronts: Más Móvil and Digicel publish privacy policies for their services, while Más Móvil has committed to follow relevant legal procedures before providing authorities with the content of its users’ communications, a significant improvement compared to 2021.

Tigo maintains its commitment to require judicial authorization or follow established procedures before providing data and to reject requests that don’t comply with legal requirements.

Más Móvil and Tigo also stand out for joining human rights-related initiatives. Más Móvil is a signatory of the United Nations Global Compact and belongs to SUMARSE, an organization that promotes Corporate Social Responsibility (CSR) in Panama.

Tigo, meanwhile, has projects aimed at digital and social transformation, including Conectadas: Empowering Women in the Digital World, Entrepreneurs in Action: Promoting the Success of Micro and Medium-sized Enterprises, and Connected Teachers: The Digital Age for teachers.

All three fixed internet service providers received partial credit for meeting some parameters for digital security.

Companies Lag in Key Areas

Still, the report showed that internet providers in Panama have a long way to go to incorporate best practices in most categories. For instance, no company published transparency reports with detailed quantitative data for Panama.

Neither mobile nor fixed internet providers commit to informing users about requests or orders from authorities to access their personal data, according to the report. Companies have likewise chosen to maintain a passive stance on promoting digital security.

None of the mobile providers have opposed requiring users to undergo facial recognition to register for or access their mobile phone services. As the report underlines, the companies’ acquiescence “marks a significant step backwards and affects human rights, such as the right to privacy, intimacy and the protection of personal data.” Mandating face recognition as a condition of using mobile services is “an abusive intrusion into the privacy of users, setting a worrying precedent with the supposed objective of fighting crime,” the report says.

No company has a website or relevant documents available in native languages. Likewise, no company has a declaration and/or accessibility policy for people with disabilities (in physical and digital environments) or important documents in an accessible format.

But it's worth noting that Más Móvil has alternative channels for people with sensory disabilities and Contact Center services for blind users, as well as remote control with built-in voice commands to improve accessibility.  Tigo, too, stands out for being the only company to have a section on its website about discounts for retired and disabled people.

IPANDETEC’s ¿Quién Defiende Tus Datos? series of reports is part of a region-wide initiative, akin to EFF’s Who Has Your Back project, which tracks and rates ISPs’ privacy policies and commitments in Latin America and Spain.

A Sale of 23andMe’s Data Would Be Bad for Privacy. Here’s What Customers Can Do.

The CEO of 23andMe has recently said she’d consider selling the genetic genealogy testing company, and with it the sensitive DNA data it has collected and stored from many of its 15 million customers. Customers and their relatives are rightly concerned. Research has shown that a majority of white Americans can already be identified from just 1.3 million users of a similar service, GEDMatch, because of the genetic likenesses among relatives, even though GEDMatch has a much smaller database of genetic profiles. 23andMe has about ten times as many customers.

Selling a giant trove of our most sensitive data is a bad idea that the company should avoid at all costs. And for now, the company appears to have backed off its consideration of a third-party buyer. Before 23andMe reconsiders, it should at the very least make a series of privacy commitments to all its users. Those should include: 

  • Do not consider a sale to any company with ties to law enforcement or a history of security failures.
  • Prior to any acquisition, affirmatively ask all users if they would like to delete their information, with an option to download it beforehand.
  • Prior to any acquisition, seek affirmative consent from all users before transferring user data. The consent should give people a real choice to say “no.” It should be separate from the privacy policy, contain the name of the acquiring company, and be free of dark patterns.
  • Prior to any acquisition, require the buyer to make strong privacy and security commitments. That should include a commitment to not let law enforcement indiscriminately search the database, and to prohibit disclosing any person’s genetic data to law enforcement without a particularized warrant. 
  • Reconsider your own data retention and sharing policies. People primarily use the service to obtain a genetic test. A survey of 23andMe customers in 2017 and 2018 showed that over 40% were unaware that data sharing was part of the company’s business model.  

23andMe is already legally required to provide users in certain states with some of these rights. But 23andMe—and any company considering selling such sensitive data—should go beyond current law to assuage users’ real privacy fears. In addition, lawmakers should continue to pass and strengthen protections for genetic privacy. 

Existing users can demand that 23andMe delete their data 

The privacy of personal genetic information collected by companies like 23andMe is always going to be at some level of risk, which is why we suggest consumers think very carefully before using such a service. Genetic data is immutable and can reveal very personal details about you and your family members. Data breaches are a serious concern wherever sensitive data is stored, and last year’s breach of 23andMe exposed personal information from nearly half of its customers. The data can be abused by law enforcement to indiscriminately search for evidence of a crime. Although 23andMe’s policies require a warrant before releasing information to the police, some other companies do not. In addition, the private sector could use your information to discriminate against you. Thankfully, existing law prevents genetic discrimination in health insurance and employment.  

What Happens to My Genetic Data If 23andMe is Sold to Another Company?

In the event of an acquisition or liquidation through bankruptcy, 23andMe must still obtain separate consent from users in about a dozen states before it could transfer their genetic data to an acquiring company. Users in those states could simply refuse. In addition, many people in the United States are legally allowed to access and delete their data either before or after any acquisition. Separately, the buyer of 23andMe would, at a minimum, have to comply with existing genetic privacy laws and 23andMe's current privacy policies. It would be up to regulators to enforce many of these protections. 

Below is a general legal lay of the land, as we understand it.  

  • 23andMe must obtain consent from many users before transferring their data in an acquisition. Those users could simply refuse. At least a dozen states have passed consumer data privacy laws specific to genetic privacy. For example, Montana’s 2023 law would require consent to be separate from other documents and to list the buyer’s name. While the consent requirements vary slightly, similar laws exist in Alabama, Arizona, California, Kentucky, Nebraska, Maryland, Minnesota, Tennessee, Texas, Virginia, Utah, and Wyoming. Notably, Wyoming’s law has a private right of action, which allows consumers to defend their own rights in court. 
  • Many users have the legal right to access and delete their data stored with 23andMe before or after an acquisition. About 19 states have passed comprehensive privacy laws which give users deletion and access rights, but not all have taken effect. Many of those laws also classify genetic data as sensitive and require companies to obtain consent to process it. Unfortunately, most if not all of these laws allow companies like 23andMe to freely transfer user data as part of a merger, acquisition, or bankruptcy. 
  • 23andMe must comply with its own privacy policy. Otherwise, the company could be sanctioned for engaging in deceptive practices. Unfortunately, its current privacy policy allows for transfers of data in the event of a merger, acquisition, or bankruptcy. 
  • Any buyer of 23andMe would likely have to offer existing users privacy rights that are equal or greater to the ones offered now, unless the buyer obtains new consent. The Federal Trade Commission has warned companies not to engage in the unfair practice of quietly reducing privacy protections of user data after an acquisition. The buyer would also have to comply with the web of comprehensive and genetic-specific state privacy laws mentioned above. 
  • The federal Genetic Information Nondiscrimination Act of 2008 prevents genetic-based discrimination by health insurers and employers. 

What Can You Do to Protect Your Genetic Data Now?

Existing users can demand that 23andMe delete their data or revoke some of their past consent to research. 

If you don’t feel comfortable with a potential sale, you can consider downloading a local copy of your information to create a personal archive, and then deleting your 23andMe account. Doing so will remove all your information from 23andMe, and if you haven’t already requested it, the company will also destroy your genetic sample. Deleting your account will also remove any genetic information from future research projects, though there is no way to remove anything that’s already been shared. We’ve put together directions for archiving and deleting your account here. When you get your archived account information, some of your data will be in more readable formats than others. For example, your “Reports Summary” will arrive as a PDF that’s easy to read and includes information about traits and your ancestry report. Other information, like the family tree, arrives in a less readable format, like a JSON file.

You also may be one of the 80% or so of users who consented to having your genetic data analyzed for medical research. You can revoke your consent to future research as well by sending an email. Under this program, third-party researchers who conduct analyses on that data have access to this information, as well as some data from additional surveys and other information you provide. Third-party researchers include non-profits, pharmaceutical companies like GlaxoSmithKline, and research institutions. 23andMe has used this data to publish research on diseases like Parkinson’s. According to the company, this data is deidentified, or stripped of obvious identifying information such as your name and contact information. However, genetic data cannot truly be de-identified. Even if separated from obvious identifiers like name, it is still forever linked to only one person in the world. And at least one study has shown that, when combined with data from GenBank, a National Institutes of Health genetic sequence database, data from some genealogical databases can result in the possibility of re-identification. 
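A toy sketch can make the re-identification risk concrete. The data below is entirely made up, but it shows why removing names is not enough: the genetic data itself acts as a stable join key between a “de-identified” research record and a named genealogy profile.

```python
# Entirely hypothetical data: no real genotypes, names, or studies.
# "De-identified" research records keep the one identifier that never
# changes: the genetic data itself.
research_records = [
    {"genotype": "AGTC-0412-TTGA", "cohort": "disease study"},
    {"genotype": "CCGA-9981-ATTA", "cohort": "control group"},
]

# A public genealogy database mapping genotypes to named profiles.
genealogy_db = {
    "AGTC-0412-TTGA": "Jane Example",
}

# Re-identification reduces to a lookup on the shared key.
for record in research_records:
    name = genealogy_db.get(record["genotype"])
    if name is not None:
        print(f"{name} appears in the {record['cohort']}")
```

In practice, linkage is statistical, matching partial genotypes against relatives, as in the GEDMatch and GenBank studies mentioned above, rather than an exact dictionary lookup, but the underlying logic is the same.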

What Can 23andMe, Regulators, and Lawmakers Do?

Acquisition talk about a company with a giant database of sensitive data should be a wakeup call for lawmakers and regulators to act

As mentioned above, 23andMe must follow existing law. And it should make a series of additional commitments before ever reconsidering a sale. Most importantly, it must give every user a real choice to say “no” to a data transfer and ensure that any buyer makes real privacy commitments. Other consumer genetic genealogy companies should proactively take these steps as well. Companies should be crystal clear about where the information goes and how it’s used, and they should require an individualized warrant before allowing police to comb through their database. 

Government regulators should closely monitor the company’s plans and press the company to explain how it will protect user data in the event of a transfer of ownership, similar to the FTC’s scrutiny of Facebook’s acquisition of WhatsApp. 

Lawmakers should also work to pass stronger comprehensive privacy protections in general and genetic privacy protections in particular. While many of the state-based genetic privacy laws are a good start, they generally lack a private right of action and only protect a slice of the U.S. population. EFF has long advocated for a strong federal privacy law that includes a private right of action. 

Our DNA is quite literally what makes us human. It is inherently personal and deeply revealing, not just of ourselves but our genetic relatives as well, making it deserving of the strongest privacy protections. Acquisition talk about a company with a giant database of sensitive data should be a wakeup call for lawmakers and regulators to act, and when they do, EFF will be ready to support them. 

FTC Findings on Commercial Surveillance Can Lead to Better Alternatives

October 8, 2024 at 13:04

On September 19, the FTC published a staff report following a multi-year investigation of nine social media and video streaming companies. The report found a myriad of privacy harms to consumers, stemming largely from the ad-revenue-based business models of companies including Facebook, YouTube, and X (formerly Twitter), which prompt unbridled consumer surveillance practices. In addition to these findings, the FTC points out various ways in which user data can be weaponized to lock out competitors and dominate these companies’ respective markets.

The report finds that market dominance can be established and expanded by the acquisition and maintenance of user data, creating an unfair advantage and preventing new market entrants from fairly competing. EFF has found that this is true not only for new entrants who wish to compete by similarly siphoning off large amounts of user data, but also for consumer-friendly companies that carve out a niche by refusing to play the game of dominance-through-surveillance. Abusing user data in an anti-competitive manner means users may never even learn of alternatives that have their best interests in mind, rather than the best interests of the company’s advertising partners.

The relationship between privacy violations and anti-competitive behavior is elaborated upon in a section of the report which points out that “data abuse can raise entry barriers and fuel market dominance, and market dominance can, in turn, further enable data abuses and practices that harm consumers in an unvirtuous cycle.” In contrast with the recent United States v. Google LLC (2020) ruling, where Judge Amit P. Mehta found that the data collection practices of Google, though injurious to consumers, were outweighed by an improved user experience, the FTC highlighted a dangerous feedback loop in which privacy abuses beget further privacy abuses. We agree with the FTC and find the identification of this ‘unvirtuous cycle’ a helpful focal point for further antitrust action.

In an interesting segment, the report contrasts US law with the protections the European Union’s General Data Protection Regulation (GDPR) specifies for consumers’ data privacy, explicitly mentioning not only the right of consumers to delete or correct the data held by companies, but also the right to transfer (or port) one’s data to a third party of their choice. This is a right EFF has championed time and again, pointing out that the strength of the early internet came from nascent technologies’ pressing need (and implemented ability) to work with each other in order to make any sense, let alone be remotely usable, to consumers. It is this very concept of interoperability that can now be rediscovered to give users control over their own data, granting them the freedom to frictionlessly pack up their posts, friend connections, and private messages and leave when they are no longer willing to let an entrenched provider abuse them.

We hope and believe that the significance of the FTC staff report comes not only from the abuses it has meticulously documented, but from the policy and technological possibilities that can follow from a willingness to embrace alternatives: alternatives in which corporate surveillance that cements dominant players by selling out their users is not the norm. We look forward to seeing these alternatives emerge and grow.

Germany Rushes to Expand Biometric Surveillance

October 7, 2024 at 16:07

Germany is a leader in privacy and data protection, with many Germans being particularly sensitive to the processing of their personal data – owing to the country’s totalitarian history and the role of surveillance in both Nazi Germany and East Germany.

So, it is disappointing that the German government is trying to push through Parliament, at record speed, a “security package” that would increase biometric surveillance at an unprecedented scale. The proposed measures contravene the government’s own coalition agreement, and undermine European law and the German constitution.

In response to a knife attack in the western German town of Solingen in late August, the government has introduced a so-called “security package” consisting of a bundle of measures to tighten asylum rules and introduce new powers for law enforcement authorities.

Among them, three stand out due to their possibly disastrous effect on fundamental rights online. 

Biometric Surveillance  

The German government wants to allow law enforcement authorities to identify suspects by comparing their biometric data (audio, video, and image data) to all data publicly available on the internet. Beyond the host of harms related to facial recognition software, this would mean that any photos or videos uploaded to the internet would become part of the government’s surveillance infrastructure.

This would include especially sensitive material, such as pictures taken at political protests or other contexts directly connected to the exercise of fundamental rights. This could be abused to track individuals and create nuanced profiles of their everyday activities. Experts have highlighted the many unanswered technical questions in the government’s draft bill. The proposal contradicts the government’s own coalition agreement, which commits to preventing biometric surveillance in Germany.

The proposal also contravenes the recently adopted European AI Act, which bans the use of AI systems that create or expand facial recognition databases. While the AI Act includes exceptions for national security, Member States may ban biometric remote identification systems at the national level. Given the coalition agreement, German civil society groups have been hoping for such a prohibition, rather than the introduction of new powers.

These sweeping new powers would be granted not just to law enforcement authorities: the Federal Office for Migration and Refugees would be allowed to identify asylum seekers who do not carry IDs by comparing their biometric data to “internet data.” Beyond the obvious disproportionality of such powers, it is well documented that facial recognition software is rife with racial biases, performing significantly worse on images of people of color. The draft law does not include any meaningful measures to protect against discriminatory outcomes, nor does it acknowledge the limitations of facial recognition.

Predictive Policing 

Germany also wants to introduce AI-enabled mining of any data held by law enforcement authorities, which is often used for predictive policing. This would include data from anyone who ever filed a complaint, served as a witness, or ended up in a police database for being a victim of a crime. Beyond this obvious overreach, data mining for predictive policing threatens fundamental rights like the right to privacy and has been shown to exacerbate racial discrimination.

The severe negative impacts of data mining by law enforcement authorities have been confirmed by Germany’s highest court, which ruled that the Palantir-enabled practices by two German states are unconstitutional.  Regardless, the draft bill seeks to introduce similar powers across the country.  

Police Access to More User Data 

The government wants to exploit an already-controversial provision of the recently adopted Digital Services Act (DSA). The law, which regulates online platforms in the European Union, has been criticized for requiring providers to proactively share user data with law enforcement authorities in potential cases of violent crime. Because the provision is vaguely defined, it risks undermining freedom of expression online, as providers may feel pressured to share more data rather than less to avoid DSA fines.

Frustrated by the low volume of cases forwarded by providers, the German government now suggests expanding the DSA to include specific criminal offences for which companies must share user data. While it is unrealistic to update European regulations as complex as the DSA so shortly after its adoption, this proposal shows that protecting fundamental rights online is not a priority for this government. 

Next Steps

Meanwhile, thousands have protested the security package in Berlin. Moreover, experts at the parliament’s hearing and German civil society groups are sending a clear signal: the government’s plans undermine fundamental rights, violate European law, and walk back the coalition parties’ own promises. EFF stands with the opponents of these proposals. We must defend fundamental rights more decidedly than ever.  


How to Stop Advertisers From Tracking Your Teen Across the Internet

This post was written by EFF fellow Miranda McClellan.

Teens between the ages of 13 and 17 are being tracked across the internet using identifiers known as Advertising IDs. When children turn 13, they age out of the data protections provided by the Children’s Online Privacy Protection Act (COPPA). They then become targets for data collection by data brokers, which gather their information from social media apps, shopping history, location tracking services, and more. Data brokers then process and sell the data. Deleting the Advertising IDs from your teen’s devices can increase their privacy and stop advertisers from collecting their data.

What is an Advertising ID?

Advertising identifiers – Android’s Advertising ID (AAID) and Apple’s Identifier for Advertisers (IDFA) on iOS – enable third-party advertising by providing device and activity tracking information to advertisers. The advertising ID is a string of letters and numbers that uniquely identifies your phone, tablet, or other smart device.

How Teens Are Left Vulnerable

In most countries, children must be over 13 years old to manage their own Google account without a supervisory parent account through Google Family Link. At 13, children gain the right to manage their account and app downloads without parental supervision, and they also gain an Advertising ID.

At 13, children transition abruptly between two extremes—from potential helicopter parental surveillance to surveillance advertising that connects their online activity and search history to marketers serving targeted ads.

Thirteen is a historically significant age. In the United States, both Facebook and Instagram require users to be at least 13 years old to make an account, though many children pretend to be older. The Children’s Online Privacy Protection Act (COPPA), a federal law, requires companies to obtain “verifiable parental consent” before collecting personal information from children under 13 for commercial purposes.

But this means that teens can lose valuable privacy protections even before becoming adults.

How to Protect Children and Teens from Tracking

Here are a few steps we recommend to protect children and teens from behavioral tracking and other privacy-invasive advertising techniques:

  • Delete advertising IDs for minors aged 13-17.
  • Require schools using Chromebooks, Android tablets, or iPads to educate students and parents about deleting advertising IDs off school devices and accounts to preserve student privacy.
  • Advocate for extended privacy protections for everyone.

How to Delete Advertising IDs

Advertising IDs track devices and activity from connected accounts. Both Android and iOS users can reset or delete their advertising IDs from the device. Removing the advertising ID removes a key component advertisers use to identify audiences for targeted ad delivery. Users will still see ads after resetting or deleting their advertising ID, but those ads will be severed from their previous online behavior and will be less personally targeted.
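As a toy model (not any platform’s actual implementation), you can think of the advertising ID as a random UUID that trackers use as the key tying a device’s activity together into one profile; resetting or deleting it breaks that key:

```python
import uuid

# Hypothetical tracker: profiles are keyed by whatever advertising ID
# the device reports. Real AAID/IDFA values are UUID-formatted strings.
profiles = {}

def record_activity(ad_id, event):
    # Events accumulate under the reported ID.
    profiles.setdefault(ad_id, []).append(event)

ad_id = str(uuid.uuid4())  # ID currently assigned to the device
record_activity(ad_id, "visited shoe store")
record_activity(ad_id, "searched hiking boots")

# After a reset the device reports a fresh ID (after deletion, none at
# all), so new activity can no longer be joined to the old profile.
new_ad_id = str(uuid.uuid4())
record_activity(new_ad_id, "read news article")

assert profiles[ad_id] == ["visited shoe store", "searched hiking boots"]
assert profiles[new_ad_id] == ["read news article"]
```

This is why ads keep appearing after a reset: the mechanism for showing ads survives, but the link to the old behavioral history does not.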

Follow these instructions, updated from a previous EFF blog post:

On Android

With the release of Android 12, Google began allowing users to delete their ad ID permanently. On devices that have this feature enabled, you can open the Settings app and navigate to Security & Privacy > Privacy > Ads. Tap “Delete advertising ID,” then tap it again on the next page to confirm. This will prevent any app on your phone from accessing it in the future.

The Android opt out should be available to most users on Android 12, but may not be available on older versions. If you don't see an option to "delete" your ad ID, you can use the older version of Android's privacy controls to reset it and ask apps not to track you.

On iOS

Apple requires apps to ask permission before they can access your IDFA. When you install a new app, it may ask you for permission to track you.

Select “Ask App Not to Track” to deny it IDFA access.

To see which apps you have previously granted access to, go to Settings > Privacy & Security > Tracking.

In this menu, you can disable tracking for individual apps that have previously received permission. Only apps that have permission to track you will be able to access your IDFA.

You can set the “Allow apps to Request to Track” switch to the “off” position (the slider is to the left and the background is gray). This will prevent apps from asking to track in the future. If you have granted apps permission to track you in the past, this will prompt you to ask those apps to stop tracking as well. You also have the option to grant or revoke tracking access on a per-app basis.

Apple has its own targeted advertising system, separate from the third-party tracking it enables with the IDFA. To disable it, navigate to Settings > Privacy & Security > Apple Advertising and set the “Personalized Ads” switch to the “off” position to disable Apple’s ad targeting.

Miranda McClellan served as a summer fellow at EFF on the Public Interest Technology team. Miranda has a B.S. and M.Eng. in Computer Science from MIT. Before joining EFF, Miranda completed a Fulbright research fellowship in Spain to apply machine learning to 5G networks, worked as a data scientist at Microsoft where she built machine learning models to detect malware, and was a fellow at the Internet Society. In her free time, Miranda enjoys running, hiking, and crochet.

At EFF, Miranda conducted research focused on understanding the data broker ecosystem and enhancing children’s privacy. She received funding from the National Science Policy Network.

FTC Report Confirms: Commercial Surveillance is Out of Control

By Lena Cohen
September 26, 2024 at 10:55

A new Federal Trade Commission (FTC) report confirms what EFF has been warning about for years: tech giants are widely harvesting and sharing your personal information to fuel their online behavioral advertising businesses. This four-year investigation into the data practices of nine social media and video platforms, including Facebook, YouTube, and X (formerly Twitter), demonstrates how commercial surveillance leaves consumers with little control over their privacy. While not every investigated company committed the same privacy violations, the conclusion is clear: companies prioritized profits over privacy. 

While EFF has long warned about these practices, the FTC’s investigation offers detailed evidence of how widespread and invasive commercial surveillance has become. Here are key takeaways from the report:

Companies Collected Personal Data Well Beyond Consumer Expectations

The FTC report confirms that companies collect data in ways that far exceed user expectations. They’re not just tracking activity on their platforms, but also monitoring activity on other websites and apps, gathering data on non-users, and buying personal information from third-party data brokers. Some companies could not, or would not, disclose exactly where their user data came from. 

The FTC found companies gathering detailed personal information, such as the websites you visit, your location data, your demographic information, and your interests, including sensitive interests like “divorce support” and “beer and spirits.” Some companies could only report high-level descriptions of the user attributes they tracked, while others produced spreadsheets with thousands of attributes. 

There’s Unfettered Data Sharing With Third Parties

Once companies collect your personal information, they don’t always keep it to themselves. Most companies reported sharing your personal information with third parties. Some companies shared so widely that they claimed it was impossible to provide a list of all third-party entities they had shared personal information with. For the companies that could identify recipients, the lists included law enforcement and other companies, both inside and outside the United States. 

Alarmingly, most companies had no vetting process for third parties before sharing your data, and none conducted ongoing checks to ensure compliance with data use restrictions. For example, when companies say they’re just sharing your personal information for something that seems unintrusive, like analytics, there's no guarantee your data is only used for the stated purpose. The lack of safeguards around data sharing exposes consumers to significant privacy risks.

Consumers Are Left in the Dark

The FTC report reveals a disturbing lack of transparency surrounding how personal data is collected, shared, and used by these companies. If companies can’t tell the FTC who they share data with, how can you expect them to be honest with you?

Data tracking and sharing happens behind the scenes, leaving users largely unaware of how much privacy they’re giving up on different platforms. These companies don't just collect data from their own platforms—they gather information about non-users and from users' activity across the web. This makes it nearly impossible for individuals to avoid having their personal data swept up into these vast digital surveillance networks. Even when companies offer privacy controls, the controls are often opaque or ineffective. The FTC also found that some companies were not actually deleting user data in response to deletion requests.

The scale and secrecy of commercial surveillance described by the FTC demonstrates why the burden of protecting privacy can’t fall solely on individual consumers.

Surveillance Advertising Business Models Are the Root Cause

The FTC report underscores a fundamental issue: these privacy violations are not just occasional missteps—they’re inherent to the business model of online behavioral advertising. Companies collect vast amounts of data to create detailed user profiles, primarily for targeted advertising. The profits generated from targeting ads based on personal information drive companies to develop increasingly invasive methods of data collection. The FTC found that the business models of most of the companies incentivized privacy violations.

FTC Report Underscores Urgent Need for Legislative Action

Without federal privacy legislation, companies have been able to collect and share billions of users’ personal data with few safeguards. The FTC report confirms that self-regulation has failed: companies’ internal data privacy policies are inconsistent and inadequate, allowing them to prioritize profits over privacy. In the FTC’s own words, “The report leaves no doubt that without significant action, the commercial surveillance ecosystem will only get worse.”

To address this, EFF advocates for federal privacy legislation. It should have many components, but these are key:

  1. Data Minimization and User Rights: Companies should be prohibited from processing a person’s data beyond what’s necessary to provide them what they asked for. Users should have the right to access their data, port it, correct it, and delete it.
  2. Ban on Online Behavioral Advertising: We should tackle the root cause of commercial surveillance by banning behavioral advertising. Otherwise, businesses will always find ways to skirt around privacy laws to keep profiting from intrusive data collection.
  3. Strong Enforcement with Private Right of Action: To give privacy legislation bite, people should have a private right of action to sue companies that violate their privacy. Otherwise, we’ll continue to see widespread violation of privacy laws due to limited government enforcement resources.

Using online services shouldn’t mean surrendering your personal information to countless companies to use as they see fit. When you sign up for an account on a website, you shouldn’t need to worry about random third parties getting your information or about every click being monitored to serve you ads. For now, our Privacy Badger extension can help you block some of the tracking technologies detailed in the FTC report. But the scale of commercial surveillance revealed in this investigation requires significant legislative action. Congress must act now and protect our data from corporate exploitation with a strong federal privacy law.
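For readers curious how a tracker blocker can decide what to block without a hand-curated blocklist, here is a rough, hypothetical sketch of a "seen tracking on N sites" heuristic in the spirit of Privacy Badger's documented approach. The domain names are invented, and the real extension's logic (which inspects cookies, fingerprinting signals, and more) is considerably more involved:

```python
# Simplified sketch of a "block after N sites" tracker heuristic.
# Illustrative only -- not Privacy Badger's actual implementation.

from collections import defaultdict

BLOCK_THRESHOLD = 3  # block a third party observed tracking on 3+ sites


class TrackerHeuristic:
    def __init__(self):
        # tracker domain -> set of first-party sites it was seen tracking on
        self.seen_on = defaultdict(set)

    def observe(self, tracker_domain, first_party_site):
        """Record that tracker_domain set tracking state on first_party_site."""
        self.seen_on[tracker_domain].add(first_party_site)

    def should_block(self, tracker_domain):
        """Block once the tracker appears across enough unrelated sites."""
        return len(self.seen_on[tracker_domain]) >= BLOCK_THRESHOLD


heuristic = TrackerHeuristic()
for site in ["news.example", "shop.example", "blog.example"]:
    heuristic.observe("ads.tracker.example", site)

print(heuristic.should_block("ads.tracker.example"))  # True
print(heuristic.should_block("cdn.example"))          # False
```

The key design idea is that no single site visit is incriminating; it is the cross-site pattern that marks a third party as a tracker.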

Prison Banned Books Week: Being in Jail Shouldn’t Mean Having Nothing to Read

Across the United States, nearly every state’s prison system offers some form of tablet access to incarcerated people, and many of these programs boast sizable libraries of eBooks. Knowing this, one might assume that access to books is on the rise for incarcerated folks. Unfortunately, this is not the case. A combination of predatory pricing, woefully inadequate eBook catalogs, and bad policies restricting access to paper literature has exacerbated an already acute book censorship problem in U.S. prison systems.

New data collected by the Prison Banned Books Week campaign focuses on the widespread use of tablet devices in prison systems, as well as their pricing structure and libraries of eBooks. Through a combination of interviews with incarcerated people and a nationwide FOIA campaign to uncover the details of these tablet programs, this campaign has found that, despite offering access to tens of thousands of eBooks, prisons’ tablet programs actually provide little in the way of valuable reading material. The tablets themselves are heavily restricted and are typically made by one of just two companies: Securus and ViaPath. The campaign also found that the material these programs do provide may not be accessible to many incarcerated individuals.

“We might as well be rummaging the dusty old leftovers in some thrift store or back alley dumpster.”

Limited, Censored Selections at Unreasonable Prices

Many companies that offer tablets to carceral facilities advertise libraries of several thousand books. But the data reveals that a huge proportion of these books are public domain texts taken directly from Project Gutenberg. While Project Gutenberg is itself laudable for collecting freely accessible eBooks, and its library contains many of the “classics” of the Western literary canon, a massive number of its texts are irrelevant and outdated. As Shawn Y., an incarcerated interviewee in Pennsylvania, put it, “Books are available for purchase through the Securus systems, but most of the bookworms here [...] find the selection embarrassingly thin, laughable even. [...] We might as well be rummaging the dusty old leftovers in some thrift store or back alley dumpster.”

These limitations on eBook selections exacerbate the already widespread censorship of physical reading materials, based on a variety of factors including books being deemed “harmful” content, determinations based on the book’s vendor (which, reports indicate, can operate as a ban on publishers), and whether the incarcerated person obtained advance permission from a prison administrator. Such censorial decisionmaking undermines incarcerated individuals’ right to receive information.

These costs are a barrier that deprives those in carceral facilities of the ability to develop and maintain a connection with life outside prison walls.

Some facilities charge $0.99 or more per eBook—despite their often meager, antiquated selections. While this may not seem exorbitant to many people, a recent estimate of average hourly wages for incarcerated people in the US is $0.63 per hour. And these otherwise free eBooks can often cost much more: Larry, an individual incarcerated in Pennsylvania, explains, “[s]ome of the prices for other books [are] extremely outrageous.” In Larry’s facility, “[s]ome of those tablet prices range over twenty dollars and even higher.”

Even if one can afford to rent these eBooks, they may have to pay for the tablets required to read them. For some incarcerated individuals, these costs can be prohibitive: procurement contracts in some states appear to require incarcerated people to pay upwards of $99 to use them. These costs are a barrier that deprives those in carceral facilities of the ability to develop and maintain a connection with life outside prison walls.

Part of a Trend Toward Inadequate Digital Replacements

The trend of eliminating physical books and replacing them with digital copies accessible via tablets is emblematic of a larger shift from physical to digital occurring throughout our carceral system. These digital copies are not adequate substitutes. One of the hallmarks of tangible physical items is access: someone can open a physical book and read it when, how, and where they want. That’s not the case with the tablet systems prisons are adopting, and, worryingly, this trend has also extended to personal items such as incarcerated individuals’ mail.

EFF is actively litigating to defend incarcerated individuals’ rights to access and receive tangible reading materials with our ABO Comix lawsuit. There, we—along with the Knight First Amendment Institute and Social Justice Legal Foundation—are fighting a San Mateo County (California) policy that bans those in San Mateo jails from receiving physical mail. Our complaint explains that San Mateo’s policy requires the friends and families of those jailed in its facilities to send their letters to a private company that scans them, destroys the physical copy, and retains the scan in a searchable database—for at least seven years after the intended recipient leaves the jail’s custody. Incarcerated people can only access the digital copies through a limited number of shared tablets and kiosks in common areas within the jails.

Just as incarcerated peoples’ reading materials are censored, so is their mail when physical letters are replaced with digital facsimiles. Our complaint details how ripping open, scanning, and retaining mail has impeded the ability of those in San Mateo’s facilities to communicate with their loved ones, as well as their ability to receive educational and religious study materials. These digital replacements are inadequate both in and of themselves and because the tablets needed to access them are in short supply and often plagued by technical issues. Along with our free expression allegations, our complaint also alleges that the seizing, searching, and sharing of data from and about their letters violates the rights of both senders and recipients against unreasonable searches and seizures.

Our ABO Comix litigation is ongoing. We are hopeful that the courts will recognize the free expression and privacy harms to incarcerated individuals and those who communicate with them that come from digitizing physical mail. We are also hopeful, on the occasion of this Prison Banned Books Week, for an end to the censorship of incarcerated individuals’ reading materials: restricting what some of us can read harms us all.

Square Peg, Meet Round Hole: Previously Classified TikTok Briefing Shows Error of Ban

19 September 2024 at 16:07

A previously classified transcript reveals Congress knows full well that American TikTok users engage in First Amendment protected speech on the platform and that banning the application is an inadequate way to protect privacy—but it banned TikTok anyway.

The government submitted the partially redacted transcript as part of the ongoing litigation over the federal TikTok ban (which the D.C. Circuit just heard arguments about this week). The transcript indicates that members of Congress and law enforcement recognize that Americans are engaging in First Amendment protected speech—the same recognition a federal district court made when it blocked Montana’s TikTok ban from going into effect. They also agreed that adequately protecting Americans’ data requires comprehensive consumer privacy protections.

Yet, Congress banned TikTok anyway, undermining our rights and failing to protect our privacy.

No Indication of Actual Harm, No New Arguments

The members and officials didn’t make any particularly new points about the dangers of TikTok. Further, they repeatedly characterized their fears as hypothetical. The transcript is replete with references to the possibility of the Chinese government using TikTok to manipulate the content Americans see on the application, including to shape their views on foreign and domestic issues. For example, the official representing the DOJ expressed concern that the public and private data TikTok users generate on the platform is

potentially at risk of going to the Chinese government, [and] being used now or in the future by the Chinese government in ways that could be deeply harmful to tens of millions of young people who might want to pursue careers in government, who might want to pursue careers in the human rights field, and who one day could end up at odds with the Chinese Government’s agenda.  

There is no indication from the unredacted portions of the transcript that this is happening. This DOJ official went on to express concern “with the narratives that are being consumed on the platform,” the Chinese government’s ability to influence those narratives, and the U.S. government’s preference for “responsible ownership” of the platform through divestiture.

At one point, Representative Walberg even suggested that “certain public policy organizations” that oppose the TikTok ban should be investigated for possible ties to ByteDance (the company that owns TikTok). Of course, the right to oppose an ill-conceived ban on a popular platform goes to the very reason the U.S. has a First Amendment.

Congress banned TikTok anyway, undermining our rights and failing to protect our privacy.

Americans’ Speech and Privacy Rights Deserved More

Rather than grandstanding about investigating opponents of the TikTok ban, Congress should spend its time considering the privacy and free speech arguments of those opponents. Judging by the (redacted) transcript, the committee failed to undertake that review here.

First, the First Amendment rightly subjects bans like this one for TikTok to extraordinarily exacting judicial scrutiny. That is true even with foreign propaganda, which Americans have a well-established First Amendment right to receive. And it’s ironic for the DOJ to argue that banning an application which people use for self-expression—a human right—is necessary to protect their ability to advance human rights.

Second, if Congress wants to stop the Chinese government from potentially acquiring data about social media users, it should pass comprehensive consumer privacy legislation that regulates how all social media companies can collect, process, store, and sell Americans’ data. Otherwise, foreign governments and adversaries will still be able to acquire Americans’ data by stealing it, or by using a straw purchaser to buy it.

It’s especially jarring to read that a foreign government’s potential collection of data supposedly justifies banning an application, given Congress’s recent renewal of an authority—Section 702 of the Foreign Intelligence Surveillance Act—under which the U.S. government actually collects massive amounts of Americans’ communications, and which the FBI immediately directed its agents to abuse (yet again).

EFF will continue fighting for TikTok users’ First Amendment rights to express themselves and to receive information on the platform. We will also continue urging Congress to drop these square peg, round hole approaches to Americans’ privacy and online expression and pass comprehensive privacy legislation that offers Americans genuine protection from the invasive ways any company uses data. While Congress did not fully consider the First Amendment and privacy interests of TikTok users, we hope the federal courts will.

Canada’s Leaders Must Reject Overbroad Age Verification Bill

19 September 2024 at 13:14

Canadian lawmakers are considering a bill, S-210, that’s meant to benefit children, but would sacrifice the security, privacy, and free speech of all internet users.

First introduced in 2023, S-210 seeks to prevent young people from encountering sexually explicit material by requiring all commercial internet services that “make available” explicit content to adopt age verification services. Typically, these services will require people to show government-issued ID to get on the internet. According to bill authors, this is needed to prevent harms like the “development of pornography addiction” and “the reinforcement of gender stereotypes and the development of attitudes favorable to harassment and violence…particularly against women.”

The motivation is laudable, but requiring people of all ages to show ID to get online won’t help women or young people. If S-210 isn’t stopped before it reaches the third reading and final vote in the House of Commons, Canadians will be subjected to a repressive and unworkable age verification regime.

Flawed Definitions Would Encompass Nearly the Entire Internet 

The bill’s scope is vast. S-210 creates legal risk not just for those who sell or intentionally distribute sexually explicit materials, but also for those who merely transmit it, knowingly or not.

Internet infrastructure intermediaries, which often do not know the type of content they are transmitting, would also be liable, as would all services from social media sites to search engines and messaging platforms. Each would be required to prevent access by any user whose age is not verified, unless they can claim the material is for a “legitimate purpose related to science, medicine, education or the arts,” or by implementing age verification. 

Basic internet infrastructure shouldn’t be regulating content at all, but S-210 doesn’t make that distinction. When these large services learn they are hosting or transmitting sexually explicit content, most will simply ban or remove it outright, using both automated tools and hasty human decision-making. History shows that when platforms seek to ban sexual content, over-censorship is inevitable.

Rules banning sexual content usually hurt marginalized communities and groups that serve them the most. That includes organizations that provide support and services to victims of trafficking and child abuse, sex workers, and groups and individuals promoting sexual freedom.

Promoting Dangerous Age Verification Methods 

S-210 notes that “online age-verification technology is increasingly sophisticated and can now effectively ascertain the age of users without breaching their privacy rights.”

This premise is just wrong. There is currently no technology that can verify users’ ages while protecting their privacy. The bill does not specify what technology must be used, leaving that for subsequent regulation. But the age verification systems that do exist are deeply problematic. It is far too likely that any such regulation would embrace tools that retain sensitive user data, exposing it to sale or to harms like hacks, and that lack guardrails preventing companies from doing whatever they like with the data once collected.

We’ve said it before: age verification systems are surveillance systems. Users have no way to be certain that the data they hand over will not be retained and used in unexpected ways, or even shared with unknown third parties. The bill asks companies to maintain user privacy and destroy any personal data collected, but doesn’t back up that suggestion with comprehensive penalties. That’s not good enough.

Companies responsible for storing or processing sensitive documents like drivers’ licenses can encounter data breaches, potentially exposing not only personal data about users, but also information about the sites that they visit.

Finally, age-verification systems that depend on government-issued identification exclude altogether Canadians who do not have that kind of ID.

Fundamentally, S-210 leads to the end of anonymous access to the web. Instead, Canadian internet access would become a series of checkpoints that many people simply would not pass, either by choice or because the rules are too onerous.

Dangers for Everyone, But This Can Be Stopped

Canada’s S-210 is part of a wave of proposals worldwide seeking to gate access to sexual content online. Many of these proposals share similar flaws, and Canada’s S-210 is among the worst. Both Australia and France have paused the rollout of age verification systems because both countries found that these systems could not sufficiently protect individuals’ data or, on their own, address online harms. Canada should take note of these concerns.

It's not too late for Canadian lawmakers to drop S-210. It’s what has to be done to protect the future of a free Canadian internet. At the very least, the bill’s broad scope must be significantly narrowed to protect user rights.

Human Rights Claims Against Cisco Can Move Forward (Again)

By Cindy Cohn
18 September 2024 at 18:04

Google and Amazon – You Should Take Note of Your Own Aiding and Abetting Risk 

EFF has long pushed companies that provide powerful surveillance tools to governments to take affirmative steps to avoid aiding and abetting human rights abuses. We have also worked to ensure they face consequences when they do not.

Last week, the U.S. Court of Appeals for the Ninth Circuit helped this cause, by affirming its powerful 2023 decision that aiding and abetting liability in U.S. courts can apply to technology companies that provide sophisticated surveillance systems that are used to facilitate human rights abuses.  

The specific case is against Cisco and arises out of allegations that Cisco custom-built tools as part of the Great Firewall of China to help the Chinese government target members of disfavored groups, including the Falun Gong religious minority.  The case claims that those tools were used to help identify individuals who then faced horrific consequences, including wrongful arrest, detention, torture, and death.  

We did a deep dive analysis of the Ninth Circuit panel decision when it came out in 2023. Last week, the Ninth Circuit rejected an attempt to have that initial decision reconsidered by the full court, called en banc review. While the case has now survived Ninth Circuit review and should otherwise be able to move forward in the trial court, Cisco has indicated that it intends to file a petition for U.S. Supreme Court review. That puts the case on pause again. 

Still, the Ninth Circuit’s decision to uphold the 2023 panel opinion is excellent news for the critical, though slow moving, process of building accountability for companies that aid repressive governments. The 2023 opinion unequivocally rejected many of the arguments that companies use to justify their decision to provide tools and services that are later used to abuse people. For instance, a company only needs to know that its assistance is helping in human rights abuses; it does not need to have a purpose to facilitate abuse. Similarly, the fact that a technology has legitimate law enforcement uses does not immunize the company from liability for knowingly facilitating human rights abuses.

EFF has participated in this case at every level of the courts, and we intend to continue to do so. But a better way forward for everyone would be if Cisco owned up to its actions and took steps to make amends to those injured and their families with an appropriate settlement offer, like Yahoo! did in 2007. It’s not too late to change course, Cisco.

And as EFF noted recently, Cisco isn’t the only company that should take note of this development. Recent reports have revealed the use (and misuse) of Google and Amazon services by the Israeli government to facilitate surveillance and tracking of civilians in Gaza. These reports raise serious questions about whether Google and Amazon are following their own published statements and standards about protecting against the use of their tools for human rights abuses. Unfortunately, it’s all too common for companies to ignore their own human rights policies, as we highlighted in a recent brief about notorious spyware company NSO Group.

The reports about Gaza also raise questions about whether there is potential liability against Google and Amazon for aiding and abetting human rights abuses against Palestinians. The abuses by Israel have now been confirmed by the International Court of Justice, among others, and the longer they continue, the harder it is going to be for the companies to claim that they had no knowledge of the abuses. As the Ninth Circuit confirmed, aiding and abetting liability is possible even though these technologies are also useful for legitimate law enforcement purposes and even if the companies did not intend them to be used to facilitate human rights abuses. 

The stakes are getting higher for companies. We first call on Cisco to change course, acknowledge the victims, and accept responsibility for the human rights abuses it aided and abetted.  

Second, given the current ongoing abuses in Gaza, we renew our call for Google and Amazon to first come clean about their involvement in human rights abuses in Gaza and, where necessary, make appropriate changes to avoid assisting in future abuses.

Finally, for other companies looking to sell surveillance, facial recognition, and other potentially abusive tools to repressive governments – we’ll be watching you, too.   

The New U.S. House Version of KOSA Doesn’t Fix Its Biggest Problems

An amended version of the Kids Online Safety Act (KOSA) that is being considered this week in the U.S. House is still a dangerous online censorship bill that contains many of the same fundamental problems of a similar version the Senate passed in July. The changes to the House bill do not alter that KOSA will coerce the largest social media platforms into blocking or filtering a variety of entirely legal content, and subject a large portion of users to privacy-invasive age verification. They do bring KOSA closer to becoming law, and put us one step closer to giving government officials dangerous and unconstitutional power over what types of content can be shared and read online. 

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

Reframing the Duty of Care Does Not Change Its Dangerous Outcomes

For years now, digital rights groups, LGBTQ+ organizations, and many others have been critical of KOSA's “duty of care.” While the language has been modified slightly, this version of KOSA still creates a duty of care and negligence standard of liability that will allow the Federal Trade Commission to sue apps and websites that don’t take measures to “prevent and mitigate” various harms to minors that are vague enough to chill a significant amount of protected speech.  

The biggest shift to the duty of care is in the description of the harms that platforms must prevent and mitigate. Among other harms, the previous version of KOSA included anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors, “consistent with evidence-informed medical information.” The new version drops this section and replaces it with the “promotion of inherently dangerous acts that are likely to cause serious bodily harm, serious emotional disturbance, or death.” The bill defines “serious emotional disturbance” as “the presence of a diagnosable mental, behavioral, or emotional disorder in the past year, which resulted in functional impairment that substantially interferes with or limits the minor’s role or functioning in family, school, or community activities.”

Despite the new language, this provision is still broad and vague enough that no platform will have any clear indication about what they must do regarding any given piece of content. Its updated list of harms could still encompass a huge swathe of entirely legal (and helpful) content about everything from abortion access and gender-affirming care to drug use, school shootings, and tackle football. It is still likely to exacerbate the risks of children being harmed online because it will place barriers on their ability to access lawful speech—and important resources—about topics like addiction, eating disorders, and bullying. And it will stifle minors who are trying to find their own supportive communities online.  

Kids will, of course, still be able to find harmful content, but the largest platforms—where the most kids are—will face increased liability for letting any discussion about these topics occur. It will be harder for suicide prevention messages to reach kids experiencing acute crises, harder for young people to find sexual health information and gender identity support, and generally, harder for adults who don’t want to risk the privacy- and security-invasion of age verification technology to access that content as well.  

As in the past version, enforcement of KOSA is left up to the FTC, and, to some extent, state attorneys general around the country. Whether you agree with them or not on what encompasses a “diagnosable mental, behavioral, or emotional disorder,” the fact remains that KOSA’s flaws are as much about the threat of liability as about the actual enforcement. As long as these definitions remain vague enough that platforms have no clear guidance on what is likely to cross the line, there will be censorship—even if the officials never actually take action.

The previous House version of the bill stated that “A high impact online company shall exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate the following harms to minors.” The new version slightly modifies this to say that such a company “shall create and implement its design features to reasonably prevent and mitigate the following harms to minors.” These language changes are superficial; this section still imposes a standard that requires platforms to filter user-generated content and imposes liability if they fail to do so “reasonably.”

House KOSA Edges Closer to Harmony with Senate Version 

Some of the latest amendments to the House version of KOSA bring it closer in line with the Senate version which passed a few months ago (not that this improves the bill).  

This version of KOSA lowers the bar, set by the previous House version, that determines which companies would be impacted by KOSA’s duty of care. While the Senate version of KOSA does not have such a limitation (and would affect small and large companies alike), the previous House version created a series of tiers for differently-sized companies. This version has the same set of tiers, but lowers the highest bar from companies earning $2.5 billion in annual revenue, or having 150 million annual users, to companies earning $1 billion in annual revenue, or having 100 million annual users.

This House version also includes the “filter bubble” portion of KOSA which was added to the Senate version a year ago. This requires any “public-facing website, online service, online application, or mobile application that predominantly provides a community forum for user-generated content” to provide users with an algorithm that uses a limited set of information, such as search terms and geolocation, but not search history (for example). This section of KOSA is meant to push users towards a chronological feed. As we’ve said before, there’s nothing wrong with online information being presented chronologically for those who want it. But just as we wouldn’t let politicians rearrange a newspaper in a particular order, we shouldn’t let them rearrange blogs or other websites. It’s a heavy-handed move to stifle the editorial independence of web publishers.   
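To make the distinction the "filter bubble" provision targets concrete, here is a minimal, invented sketch contrasting a chronological feed with an engagement-ranked one. The post data and scoring are hypothetical, purely for illustration:

```python
# Sketch of the two feed orderings at issue: chronological
# (timestamp only) vs. engagement-ranked (platform-chosen score
# derived from user data). Post data here is invented.

posts = [
    {"id": 1, "timestamp": 100, "engagement": 9.5},
    {"id": 2, "timestamp": 300, "engagement": 2.1},
    {"id": 3, "timestamp": 200, "engagement": 7.8},
]

def chronological(posts):
    """Newest first, using nothing but the timestamp."""
    return [p["id"] for p in sorted(posts, key=lambda p: -p["timestamp"])]

def engagement_ranked(posts):
    """Ranked by a score the platform computes about each user and post."""
    return [p["id"] for p in sorted(posts, key=lambda p: -p["engagement"])]

print(chronological(posts))      # [2, 3, 1]
print(engagement_ranked(posts))  # [1, 3, 2]
```

The same three posts come out in different orders; the provision would effectively let the government push platforms toward the first ordering.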

Lastly, the House authors have added language claiming that the bill does not authorize viewpoint-based enforcement. It has no actual effect on how platforms or courts will interpret the law, but it does point directly to the concerns we’ve raised. It states that “a government entity may not enforce this title or a regulation promulgated under this title based upon a specific viewpoint of any speech, expression, or information protected by the First Amendment to the Constitution that may be made available to a user as a result of the operation of a design feature.” Yet KOSA does just that: the FTC will have the power to force platforms to moderate or block certain types of content based entirely on the views described therein.

KOSA Remains an Unconstitutional Censorship Bill 

KOSA remains woefully underinclusive—for example, Google's search results will not be impacted regardless of what they show young people, but Instagram is on the hook for a broad amount of content—while making it harder for young people in distress to find emotional, mental, and sexual health support. This version does only one important thing—it moves KOSA closer to passing in both houses of Congress, and puts us one step closer to enacting an online censorship regime that will hurt free speech and privacy for everyone.

School Monitoring Software Sacrifices Student Privacy for Unproven Promises of Safety

6 September 2024 at 18:12

Imagine your search terms, keystrokes, private chats, and photographs being monitored every time they are sent. Millions of students across the country don’t have to imagine this deep surveillance of their most private communications: it’s a reality that comes with their school districts’ decision to install AI-powered monitoring software such as Gaggle and GoGuardian on students’ school-issued machines and accounts. As we demonstrated with our own Red Flag Machine, however, this software flags and blocks websites for spurious reasons and often disproportionately targets disadvantaged, minority, and LGBTQ youth.
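To see why keyword-style monitoring over-flags, consider this toy sketch. The keyword list and matching logic are hypothetical, since vendors keep their real rules secret (which is part of the problem):

```python
# Toy illustration of keyword-based flagging and its false positives.
# The keyword list is invented -- real products' rules are undisclosed.

FLAGGED_KEYWORDS = {"shoot", "drugs", "suicide"}

def flag(text):
    """Return the keywords that appear anywhere in the text's tokens."""
    tokens = text.lower().split()
    return sorted(k for k in FLAGGED_KEYWORDS
                  if any(k in token for token in tokens))

# A harmless sports question trips the filter:
print(flag("How do I shoot a free throw in basketball?"))  # ['shoot']
# So does history homework:
print(flag("History of the war on drugs for my essay"))    # ['drugs']
# And so would a suicide-prevention resource:
print(flag("National suicide prevention hotline number"))  # ['suicide']
```

Matching on words without understanding context is exactly how a basketball query, an essay topic, or a help hotline ends up flagged as a threat.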

The companies making the software claim it’s all done for the sake of student safety: preventing self-harm, suicide, violence, and drug and alcohol abuse. That is a noble goal, given that suicide is the second leading cause of death among American youth aged 10 to 14, but no comprehensive or independent studies have shown an increase in student safety linked to the use of this software. Quite the contrary: a recent comprehensive RAND research study shows that such AI monitoring software may cause more harm than good.

That study also found that responses to alerts are left to the discretion of the school districts themselves. Lacking mental health resources, schools often refer these alerts to law enforcement officers who are untrained and ill-equipped to handle youth mental health crises. When police respond to young people experiencing such episodes, the resulting encounters can end disastrously. So why are schools still using the software, when a congressional investigation found a need for “federal action to protect students’ civil rights, safety, and privacy”? Why are they trading their students’ privacy for a dubious-at-best marketing claim of safety?

Experts suggest it's because these supposed technical solutions are easier to implement than the effective social measures schools often lack the resources to put in place. I spoke with Isabelle Barbour, a public health consultant with experience helping schools implement mental health supports. She pointed out that there are considerable barriers to families, kids, and youth accessing health care and mental health supports at the community level, as well as a lack of investment in helping schools effectively address student health and well-being. As a result, many students come to school with unmet needs that impair their ability to learn. Although there are clear, proven measures that address the burdens youth face, schools often need support (time, mental health expertise, community partners, and a budget) to implement them. Edtech companies market largely unproven plug-and-play products to educational professionals who are stretched thin and seeking a path forward to help kids. Is it any wonder that schools sign contracts they can easily point to when questioned about what they are doing about the youth mental health epidemic?

One example: in its marketing to school districts, Gaggle claims to have saved 5,790 student lives between 2018 and 2023, a figure based on shaky metrics the company itself designed. All the while, it keeps the inner workings of its AI monitoring secret, making it difficult for outsiders to scrutinize or measure its effectiveness.

We give Gaggle an “F”

Reports keep popping up of errors and of the AI flagging’s inability to understand context. When the Lawrence, Kansas school district signed a $162,000 contract with Gaggle, no one batted an eye: it joined a growing number of school districts (currently about 1,500) nationwide using the software. Then school administrators called in nearly an entire class to explain photographs Gaggle’s AI had labeled as “nudity”—photographs the software wouldn’t show anyone:

“Yet all students involved maintain that none of their photos had nudity in them. Some were even able to determine which images were deleted by comparing backup storage systems to what remained on their school accounts. Still, the photos were deleted from school accounts, so there is no way to verify what Gaggle detected. Even school administrators can’t see the images it flags.”

Young journalists within the school district raised concerns about how Gaggle’s surveillance of students impacted their privacy and free speech rights. As journalist Max McCoy points out in his article for the Kansas Reflector, “newsgathering is a constitutionally protected activity and those in authority shouldn’t have access to a journalist’s notes, photos and other unpublished work.” Despite having renewed Gaggle’s contract, the district removed the surveillance software from the devices of student journalists. Here, a successful awareness campaign resulted in a tangible win for some of the students affected. While ad-hoc protections for journalists are helpful, more is needed to honor all students' fundamental right to privacy against this new front of technological invasions.

Tips for Students to Reclaim their Privacy

Students struggling with the invasiveness of school surveillance AI may find some reprieve by taking measures and forming habits to avoid monitoring. Some considerations:

  • Consider any school-issued device a spying tool. 
  • Don’t try to hack or remove the monitoring software unless specifically allowed by your school: it may result in significant consequences from your school or law enforcement. 
  • Instead, turn school-issued devices completely off when they aren’t being used, especially while at home. This will prevent the devices from activating the camera, microphone, and surveillance software.
  • If you don’t need them, consider leaving school-issued devices in your school locker: this avoids relying on those devices to log in to personal accounts, keeping data from those accounts safe from prying eyes.
  • Don’t log in to personal accounts on a school-issued device if you can avoid it (we understand a school-issued device is sometimes the only computer a student has access to). Rather, use a personal device for all personal communications and accounts (e.g., email, social media). Maybe your personal phone is the only device you have to log in to social media and chat with friends. That’s okay: keeping separate devices for separate purposes reduces the risk that your data is leaked or surveilled.
  • Don’t log in to school-controlled accounts or apps on your personal device: that can be monitored, too. 
  • Instead, create another email address on a service the school doesn’t control which is just for personal communications. Tell your friends to contact you on that email outside of school.

Finally, voice your concern and discomfort with such software being installed on devices you rely on. There are plenty of resources to point to, many linked to in this post, when raising concerns about these technologies. As the young journalists at Lawrence High School have shown, writing about it can be an effective avenue to bring up these issues with school administrators. At the very least, it will send a signal to those in charge that students are uncomfortable trading their right to privacy for an elusive promise of security.

Schools Can Do Better to Protect Students Safety and Privacy

It’s not only the students who are concerned about AI spying in the classroom and beyond. Parents are often unaware of the spyware deployed on school-issued laptops their children bring home. And when using a privately-owned shared computer logged into a school-issued Google Workspace or Microsoft account, a parent’s web search will be available to the monitoring AI as well.

New studies have uncovered some of the harms surveillance inflicts on mental health. Despite this, and despite the array of First Amendment questions these student surveillance technologies raise, schools have rushed to adopt these unproven and invasive tools. As Barbour put it:

“While ballooning class sizes and the elimination of school positions are considerable challenges, we know that a positive school climate helps kids feel safe and supported. This allows kids to talk about what they need with caring adults. Adults can then work with others to identify supports. This type of environment helps not only kids who are suffering with mental health problems, it helps everyone.”

We urge schools to focus on creating that environment, rather than subjecting students to ever-increasing scrutiny through school surveillance AI.

EFF to Tenth Circuit: Protest-Related Arrests Do Not Justify Dragnet Device and Digital Data Searches

The Constitution prohibits dragnet device searches, especially when those searches are designed to uncover political speech, EFF explained in a friend-of-the-court brief filed in the U.S. Court of Appeals for the Tenth Circuit.

The case, Armendariz v. City of Colorado Springs, challenges device and data seizures and searches conducted by the Colorado Springs police after a 2021 housing rights march that the police deemed “illegal.” The plaintiffs in the case, Jacqueline Armendariz and a local organization called the Chinook Center, argue these searches violated their civil rights.

The case details repeated actions by the police to target and try to intimidate plaintiffs and other local civil rights activists solely for their political speech. After the 2021 march, police arrested several protesters, including Ms. Armendariz. Police alleged that Ms. Armendariz “threw” her bike at an officer as he was running, and although the bike never touched the officer, they charged her with attempted simple assault. Police then used that charge to support warrants to seize and search six of her electronic devices—including several phones and laptops. The search warrant authorized police to comb through these devices for all photos, videos, messages, emails, and location data sent or received over a two-month period and to conduct a time-unlimited search of 26 keywords—including terms as broad and sweeping as “officer,” “housing,” “human,” “right,” “celebration,” “protest,” and several common names. Separately, police obtained a warrant to search all of the Chinook Center’s Facebook information and private messages sent and received by the organization for a week, even though the Center was not accused of any crime.
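The breadth of that keyword list is easy to demonstrate. In the hypothetical sketch below (the sample messages are invented; the terms are drawn from the warrant as reported), nearly every ordinary message matches at least one term, which is why such a search functions like a general warrant:

```python
# A few of the broad terms reportedly authorized by the warrant.
BROAD_TERMS = {"officer", "housing", "human", "right", "celebration", "protest"}

# Invented, entirely innocuous messages for illustration.
messages = [
    "Turn right at the library",
    "The housing application is due Monday",
    "Graduation celebration this Saturday",
    "Human resources emailed me back",
    "See you at 6",
]

def matches_warrant(msg):
    """True if any warrant term appears as a word in the message."""
    return bool(BROAD_TERMS & set(msg.lower().split()))

swept = [m for m in messages if matches_warrant(m)]
print(f"{len(swept)} of {len(messages)} innocuous messages would be swept in")
```

Four of the five invented messages match, none of them having anything to do with the alleged crime.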

After Ms. Armendariz and the Chinook Center filed their civil rights suit, represented by the ACLU of Colorado, the defendants filed a motion to dismiss the case, arguing the searches were justified and, in any case, officers were entitled to qualified immunity. The district court agreed and dismissed the case. Ms. Armendariz and the Center appealed to the Tenth Circuit.

As explained in our amicus brief—which was joined by the Center for Democracy & Technology, the Electronic Privacy Information Center, and the Knight First Amendment Institute at Columbia University—the devices searched contain a wealth of personal information. For that reason, and especially where, as here, political speech is implicated, it is imperative that warrants comply with the Fourth Amendment.

The U.S. Supreme Court recognized in Riley v. California that electronic devices such as smartphones “differ in both a quantitative and a qualitative sense” from other objects. Our electronic devices’ immense storage capacities mean that just one type of data can reveal more than previously possible because it can span years’ worth of information. For example, location data can reveal a person’s “familial, political, professional, religious, and sexual associations.” And combined with all of the other available data—including photos, video, and communications—a device such as a smartphone or laptop can store a “digital record of nearly every aspect” of a person’s life, “from the mundane to the intimate.” Social media data can also reveal sensitive, private information, especially with respect to users' private messages.

It’s because our devices and the data they contain can be so revealing that warrants for this information must rigorously adhere to the Fourth Amendment’s requirements of probable cause and particularity.

Those requirements weren’t met here. The police’s warrants failed to establish probable cause that any evidence of the crime they charged Ms. Armendariz with—throwing her bike at an officer—would be found on her devices. And the search warrant, which allowed officers to rifle through months of her private records, was so overbroad and lacking in particularity as to constitute an unconstitutional “general warrant.” Similarly, the warrant for the Chinook Center’s Facebook messages lacked probable cause and was especially invasive given that access to these messages may well have allowed police to map activists who communicated with the Center and about social and political advocacy.

The warrants in this case were especially egregious because they appear designed to uncover First Amendment-protected activity. Where speech is targeted, the Supreme Court has recognized that it’s all the more crucial that warrants apply the Fourth Amendment’s requirements with “scrupulous exactitude” to limit an officer’s discretion in conducting a search. That did not happen here, implicating several of Ms. Armendariz’s and the Chinook Center’s First Amendment rights—including the right to free speech, the right to free association, and the right to receive information.

Warrants that fail to meet the Fourth Amendment’s requirements disproportionately burden disfavored groups. In fact, the Framers adopted the Fourth Amendment to prevent the “use of general warrants as instruments of oppression”—but as legal scholars have noted, law enforcement routinely uses low-level, highly discretionary criminal offenses to impose order on protests. Once arrests are made, they are often later dropped or dismissed—but the damage is done, because protesters are off the streets, and many may be chilled from returning. Protesters undoubtedly will be further chilled if an arrest for a low-level offense then allows police to rifle through their devices and digital data, as happened in this case.

The Tenth Circuit should let this case proceed. Allowing police to conduct a virtual fishing expedition of a protester’s devices, especially when the justification for that search is an arrest for a crime that has no digital nexus, contravenes the Fourth Amendment’s purposes and chills speech. It is unconstitutional and should not be tolerated.

The French Detention: Why We're Watching the Telegram Situation Closely

EFF is closely monitoring the situation in France in which Telegram’s CEO Pavel Durov was charged with having committed criminal offenses, most of them seemingly related to the operation of Telegram. This situation has the potential to pose a serious danger to security, privacy, and freedom of expression for Telegram’s 950 million users.  

On August 24th, French authorities detained Durov when his private plane landed in France. Since then, the French prosecutor has revealed that Durov’s detention was related to an ongoing investigation, begun in July, of an “unnamed person.” The investigation involves complicity in crimes presumably taking place on the Telegram platform, failure to cooperate with law enforcement requests for the interception of communications on the platform, and a variety of charges having to do with failure to comply with  French cryptography import regulations. On August 28, Durov was charged with each of those offenses, among others not related to Telegram, and then released on the condition that he check in regularly with French authorities and not leave France.  

We know very little about the Telegram-related charges, making it difficult to draw conclusions about how serious a threat this investigation poses to privacy, security, or freedom of expression on Telegram, or on online services more broadly. But it has the potential to be quite serious. EFF is monitoring the situation closely.  

There appear to be three categories of Telegram-related charges:  

  • First is the charge based on “the refusal to communicate upon request from authorized authorities, the information or documents necessary for the implementation and operation of legally authorized interceptions.” This seems to indicate that the French authorities sought Telegram’s assistance to intercept communications on Telegram.  
  • The second set of charges relate to “complicité” with crimes that were committed in some respect on or through Telegram. These charges specify “organized distribution of images of minors with a pedopornographic nature, drug trafficking, organized fraud, and conspiracy to commit crimes or offenses,” and “money laundering of crimes or offenses in an organized group.”  
  • The third set of charges all relate to Telegram’s failure to file a declaration required of those who import a cryptographic system into France.  

Now we are left to speculate. 

It is possible that all of the charges derive from the “refusal to communicate.” French authorities may be claiming that Durov is complicit with criminals because Telegram refused to facilitate the “legally authorized interceptions.” Similarly, the charges over the failure to file the encryption declaration likely also derive from those interceptions being encrypted. France very likely knew for many years that Telegram had not filed the required declarations regarding its encryption, yet it did not previously charge the company for that omission.

Refusal to cooperate with a valid legal order for assistance with an interception could be prosecuted similarly in most legal systems, including the United States. EFF has frequently contested the validity of such orders and the gag orders associated with them, and has urged services to contest them in court and pursue all appeals. But once courts have finally validated such orders, they must be complied with. The situation is more difficult in countries that lack a properly functioning judiciary or due process, such as China or Saudi Arabia.

In addition to the refusal to cooperate with the interception, it seems likely that the complicité charges also, or instead, relate to Telegram’s failure to remove posts advancing crimes upon request or knowledge. Specifically, the charges of complicity in “the administration of an online platform to facilitate an illegal transaction” and “organized distribution of images of minors with a pedopornographic nature, drug trafficking, [and] organized fraud” could well be based on a failure to take down posts. An initial statement by Ofmin, the French agency established to investigate threats to child safety online, referred to “lack of moderation” as being at the heart of the investigation. Under French law (Article 323-3-2), it is a crime to knowingly allow the distribution of illegal content or provision of illegal services, or to facilitate payments for either.

It is not yet clear whether Telegram users themselves, or those offering similar services to Telegram, should be concerned.

In particular, this potential “lack of moderation” liability bears watching. If Durov is prosecuted because Telegram simply inadequately removed offending content from the site that it is generally aware of, that could expose most every other online platform to similar liability. It would also be concerning, though more in line with existing law, if the charges relate to an affirmative refusal to address specific posts or accounts, rather than a generalized awareness. And both of these situations are much different from one in which France has evidence that Durov was more directly involved with those using Telegram for criminal purposes. Moreover, France will likely have to prove that Durov himself committed each of these offenses, and not Telegram itself or others at the company. 

EFF has raised serious concerns about Telegram’s behavior both as a social media platform and as a messaging app. In spite of its reputation as a “secure messenger,” only a very small subset of messages on Telegram are protected by end-to-end encryption, which prevents the company from reading the contents of communications. (Only one-to-one messages with the “secret chats” option enabled are end-to-end encrypted.) Even so, cryptographers have questioned the effectiveness of Telegram’s homebrewed cryptography. If the French government’s charges have to do with Telegram’s refusal to moderate or intercept these messages, EFF will oppose this case in the strongest terms possible, just as we have opposed all government threats to end-to-end encryption all over the world.

This arrest marks an alarming escalation by a state’s authorities. 

It is not yet clear whether Telegram users themselves, or those offering similar services to Telegram, should be concerned. French authorities may ask for technical measures that endanger the security and privacy of those users. Durov and Telegram may or may not comply. Those running similar services may have nothing to fear, or these charges may be the canary in the coal mine warning us all that French authorities intend to expand their inspection of messaging and social media platforms. It is simply too soon, and there is too little information, for us to know for sure.

It is not the first time Telegram’s laissez-faire attitude toward content moderation has led to government reprisals. In 2022, the company was fined in Germany for failing to establish a lawful way to report illegal content or to name an entity in Germany to receive official communications. Brazil fined the company in 2023 for failing to suspend accounts of supporters of former President Jair Bolsonaro. Nevertheless, this arrest marks an alarming escalation by a state’s authorities. We are monitoring the situation closely and will continue to do so.
