
The Human Toll of ALPR Errors

November 1, 2024 at 23:17

This post was written by Gowri Nayar, an EFF legal intern.

Imagine driving to get your nails done with your family and all of a sudden, you are pulled over by police officers for allegedly driving a stolen car. You are dragged out of the car and detained at gunpoint. So are your daughter, sister, and nieces. The police handcuff your family, even the children, and force everyone to lie face-down on the pavement, before eventually realizing that they made a mistake. This happened to Brittney Gilliam and her family on a warm Sunday in Aurora, Colorado, in August 2020.

And the error? The police officers who pulled them over were relying on information generated by automated license plate readers (ALPRs). These are high-speed, computer-controlled camera systems that automatically capture all license plate numbers that come into view, upload them to a central server, and compare them to a “hot list” of vehicles sought by police. The ALPR system told the police that Gilliam’s car had the same license plate number as a stolen vehicle. But the stolen vehicle was a motorcycle with Montana plates, while Gilliam’s vehicle was an SUV with Colorado plates.
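
To make the failure mode concrete, here is a minimal sketch, in Python, of the kind of hot-list comparison described above. The plate string, record fields, and matching rule are invented for illustration and real ALPR systems differ, but a system that alerts on plate characters alone will flag a Colorado SUV against a record for a Montana motorcycle.

    # Minimal, illustrative sketch of a hot-list comparison.
    # The data and field names here are hypothetical.
    hot_list = {
        "ABC1234": {"state": "MT", "vehicle": "motorcycle", "reason": "stolen"},
    }

    def check_read(plate: str, state: str, vehicle: str):
        """Return an alert if the camera's read matches a hot-list entry."""
        record = hot_list.get(plate)
        if record is None:
            return None
        # Alerting on the plate characters alone ignores state and vehicle
        # type -- exactly the mismatch in the stop described above.
        mismatch = record["state"] != state or record["vehicle"] != vehicle
        return {"plate": plate, "wanted": record, "possible_false_match": mismatch}

    print(check_read("ABC1234", "CO", "SUV"))
    # {'plate': 'ABC1234', 'wanted': {...}, 'possible_false_match': True}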

Likewise, Denise Green had a frightening encounter with San Francisco police officers late one night in March of 2009. She had just dropped her sister off at a BART train station, when officers pulled her over because their ALPR indicated that she was driving a stolen vehicle. Multiple officers ordered her to exit her vehicle, at gunpoint, and kneel on the ground as she was handcuffed. It wasn’t until roughly 20 minutes later that the officers realized they had made an error and let her go.

Turns out that the ALPR had misread a ‘3’ as a ‘7’ on Green’s license plate. But what is even more egregious is that none of the officers bothered to double-check the ALPR tip before acting on it.

In both of these dangerous episodes, the motorists were Black. ALPR technology can exacerbate our already discriminatory policing system, in part because too many police officers react recklessly to information provided by these readers.

Wrongful detentions like these happen all over the country. In Atherton, California, police officers pulled over Jason Burkleo on his way to work, on suspicion of driving a stolen vehicle. They ordered him at gunpoint to lie on his stomach to be handcuffed, only to later realize that their license plate reader had misread an ‘H’ for an ‘M’. In Espanola, New Mexico, law enforcement officials detained Jaclynn Gonzales at gunpoint and placed her 12-year-old sister in the back of a patrol vehicle, before discovering that the reader had mistaken a ‘2’ for a ‘7’ on their license plates. One study found that ALPRs misread the state of 1-in-10 plates (not counting other reading errors).

Other wrongful stops result from police being negligent in maintaining ALPR databases. Contra Costa sheriff’s deputies detained Brian Hofer and his brother on Thanksgiving day in 2019, after an ALPR indicated his car was stolen. But the car had already been recovered. Police had failed to update the ALPR database to take this car off the “hot list” of stolen vehicles for officers to recover.

Police over-reliance on ALPR systems is also a problem. Detroit police knew that the vehicle used in a shooting was a Dodge Charger. Officers then used ALPR cameras to find the license plate numbers of all Dodge Chargers in the area around that time. One such car, observed fully two miles away from the shooting, was owned by Isoke Robinson. Police arrived at her house and handcuffed her, placed her 2-year-old son in the back of their patrol car, and impounded her car for three weeks. None of the officers even bothered to check her car’s fog lights, though the vehicle used for the shooting had a missing fog light.

Officers have also abused ALPR databases to obtain information for their own personal gain, for example, to stalk an ex-wife. Sadly, officer abuse of police databases is a recurring problem.

Many people subjected to wrongful ALPR detentions are filing and winning lawsuits. The city of Aurora settled Brittney Gilliam’s lawsuit for $1.9 million. In Denise Green’s case, the city of San Francisco paid $495,000 for her seizure at gunpoint, constitutional injury, and severe emotional distress. Brian Hofer received a $49,500 settlement.

While the financial costs of such ALPR wrongful detentions are high, the social costs are much higher. Far from making our communities safer, ALPR systems repeatedly endanger the physical safety of innocent people subjected to wrongful detention by gun-wielding officers. They lead to more surveillance, more negligent law enforcement actions, and an environment of suspicion and fear.

Since 2012, EFF has been fighting the threats that ALPR technology poses to safety, privacy, and other civil liberties through public records requests, litigation, and legislative advocacy. You can learn more at our Street-Level Surveillance site.

"Is My Phone Listening To Me?"

October 31, 2024 at 13:32

The short answer is no, probably not! But, with EFF’s new site, Digital Rights Bytes, we go in-depth on this question—and many others.

Whether you’re just starting to question some of the effects of technology in your life or you’re the designated tech wizard of your family looking for resources to share, Digital Rights Bytes is here to help answer some common questions that may be bugging you about the devices you use.  

We often hear the question, “Is my phone listening to me?” Generally, the answer is no, but the reason you may think that your phone is listening to you is actually quite complicated. Data brokers and advertisers have some sneaky tactics at their disposal to serve you ads that feel creepy in the moment and may make you think that your device is secretly taking notes on everything you say. 

Watch the short video featuring a cute little penguin discovering how advertisers collect and track their personal data, and share it with your family and friends who have asked similar questions! Curious to learn more? We also have information about how to mitigate this tracking and what EFF is doing to stop these data brokers from collecting your information.

Digital Rights Bytes also has answers to other common questions about device repair, ownership of your digital media, and more. Got any additional questions you’d like us to answer in the future? Let us know on your favorite social platform using the hashtag #DigitalRightsBytes so we can find it!

EFF Launches Digital Rights Bytes to Answer Tech Questions that Bug Us All

By Josh Richman
October 31, 2024 at 11:55
New Site Dishes Up Byte-Sized, Yummy, Nutritious Videos and Other Information About Your Online Life

SAN FRANCISCO—The Electronic Frontier Foundation today launched “Digital Rights Bytes,” a new website with short videos offering quick, easily digestible answers to the technology questions that trouble us all. 

“It’s increasingly clear there is no way to separate our digital lives from everything else that we do — the internet is now everybody's hometown. But nobody handed us a map or explained how to navigate safely,” EFF Executive Director Cindy Cohn said. “We hope Digital Rights Bytes will provide easy-to-understand information people can trust, and an entry point for thinking more broadly about digital privacy, freedom of expression, and other civil liberties in our digital world.” 

Initial topics on Digital Rights Bytes include “Is my phone listening to me?”, “Why is device repair so costly?”, “Can the government read my text messages?” and others. More topics will be added over time. 

For each topic, the site provides a brief animated video and a concise, layperson’s explanation of how the technology works. It also provides advice and resources for what users can do to protect themselves and take action on important issues. 

EFF is the leading nonprofit defending civil liberties in the digital world. Founded in 1990, EFF champions user privacy, free expression, and innovation through impact litigation, policy analysis, grassroots activism, and technology development. Its mission is to ensure that technology supports freedom, justice, and innovation for all people of the world.

For the Digital Rights Bytes website: https://www.digitalrightsbytes.org/

Contact:
Jason Kelley, Activism Director

Sorry, Gas Companies - Parody Isn't Infringement (Even If It Creeps You Out)

October 30, 2024 at 17:09

Activism comes in many forms. You might hold a rally, write to Congress, or fly a blimp over the NSA. Or you might use a darkly hilarious parody to make your point, like our client Modest Proposals recently did.

Modest Proposals is an activist collective that uses parody and culture jamming to advance environmental justice and other social causes. As part of a campaign shining a spotlight on the environmental damage and human toll caused by the liquefied natural gas (LNG) industry, Modest Proposals invented a company called Repaer. The fake company’s website offers energy companies the opportunity to purchase “life offsets” that balance the human deaths their activities cause by extending the lives of individuals deemed economically valuable. The website also advertises a “Plasma Pals” program that encourages parents to donate their child’s plasma to wealthy recipients. Scroll down on the homepage a bit, and you’ll see the logos for three (real) LNG companies—Repaer’s “Featured Partners.” 

Believe it or not, the companies didn’t like this. (Shocking!) Two of them—TotalEnergies and Equinor—sent our client stern emails threatening legal action if their names and logos weren’t removed from the website. TotalEnergies also sent a demand to the website’s hosting service, Netlify, that got repaer.earth taken offline. That was our cue to get involved.

We sent letters to both companies, explaining what should be obvious: the website was a noncommercial work of activism, unlikely to confuse any reasonable viewer. Trademark law is about protecting consumers; it’s not a tool for businesses to shut down criticism. We also sent a counternotice to Netlify denying TotalEnergies’ allegations and demanding that repaer.earth be restored. 

We wish this were the first time we’ve had to send letters like that, but EFF regularly helps activists and critics push back on bogus trademark and copyright claims. This incident is also part of a broader and long-standing pattern of the energy industry weaponizing the law to quash dissent by environmental activists; those are just the examples EFF has written about. We’ve been fighting these tactics for a long time, both by representing individual activist groups and through supporting legislative efforts like a federal anti-SLAPP bill.

Frustratingly, Netlify made us go through the full DMCA counternotice process—including a 10-business-day waiting period to have the site restored—even though this was never a DMCA claim. (The DMCA is copyright law, not trademark, and TotalEnergies didn’t even meet the notice requirements that Netlify claims to follow.) Rather than wait around for Netlify to act, Modest Proposals eventually moved the website to a different hosting service. 

Equinor and TotalEnergies, on the other hand, have remained silent. This is a pretty common result when we help push back against bad trademark and copyright claims: the rights owners slink away once they realize their bullying tactics won’t work, without actually admitting they were wrong. We’re glad these companies seem to have backed off regardless, but victims of bogus claims deserve more certainty than this.

The Frightening Stakes of this Halloween’s Net Neutrality Hearing

By Kit Walsh
October 30, 2024 at 16:29

The future of the open internet is in danger this October 31st, not from ghosts and goblins, but from the broadband companies that control internet access in most of the United States.  
 
These companies would love to use their oligopoly power to charge users and websites additional fees for “premium” internet access, which they can create by artificially throttling some connections and prioritizing others. Thanks to public pressure and a coalition of public interest groups, the Federal Communications Commission (FCC) has forbidden such paid prioritization and throttling, as well as outright blocking of websites. These net neutrality protections ensure that ISPs treat all data that travels over their networks fairly, without improper discrimination in favor of particular apps, sites or services. 

But the lure of making more money without investing in better service or infrastructure is hard for broadband providers like Comcast and AT&T to resist. So the big telecom companies have challenged the FCC’s rules in court—and their case has now made its way to the Sixth Circuit Court of Appeals.

A similar challenge was soundly rejected by the D.C. Circuit Court of Appeals in 2016. Unfortunately the FCC, led by a new Chair, repealed those hard-won rules in 2017—despite intense resistance from nonprofits, artists, tech companies large and small, libraries, and millions of regular internet users. A few years later, FCC membership changed again, and the new FCC restored net neutrality protections. As everyone expected, Team Telecom ran back to court, leading to this appeal. 

A few things have changed since 2017, however, and none of them good for Team Internet. For one thing, the case is being heard in the Sixth Circuit, which is not bound by the D.C. Circuit’s earlier reasoning, and which has already signaled its sympathy for Team Telecom in a preliminary ruling. 

And, of course, the makeup of the Supreme Court has changed dramatically. Justice Kavanaugh, in particular, dissented from the D.C. Circuit majority when it reviewed the 2015 order—a dissent that clearly influenced the Sixth Circuit’s initial ruling in the case. That influence may well be felt when this case inevitably makes its way to the Supreme Court.   

The central legal questions are: (1) what Congress meant when it directed the FCC to regulate “telecommunications services” differently from “information services,” and (2) into which category broadband falls. This matters because the rules that we need to preserve the open internet — such as forbidding discrimination against certain applications — require the FCC to treat access providers like “common carriers,” treatment that can only be applied to telecommunications services. If the FCC has to define broadband as an “information service,” it can impose regulations that “promote competition” (good) but it cannot do much to forbid paid prioritization, throttling or blocking (bad).

The answers to those questions will likely depend on whether the Sixth Circuit thinks regulation of the internet is a “major question,” meaning an issue of “vast economic or political significance.” If so, the Supreme Court has said that agencies can only address it if Congress has clearly authorized them to do so.

The “major questions doctrine” is on the rise thanks to a Supreme Court majority that is deeply skeptical of the so-called administrative state. In the past few years, the majority has used it to reject multiple agency actions, such as the CDC’s temporary moratorium on evictions in areas hard-hit by Covid.  

Equally importantly, the Supreme Court recently changed the rules on whether and how courts should defer to plausible agency interpretations of the statutes under which they operate. In the case of Loper Bright Enterprises v. Raimondo, the Court ended an era of judicial deference to agency determinations. Rather than allowing agencies to act according to their own plausible determinations about the scope and meaning of the authorities granted to them by Congress, courts are now instructed to reach those determinations independently.
 
Ironically, under the old rule of deference, in 2003 the Ninth Circuit independently concluded that broadband was a telecommunications service – the most straightforward and correct reading of the statute and the one that provides a sound legal basis for net neutrality protections. In fact, the court said it had been erroneous for the FCC to say otherwise. But the FCC and telecoms successfully argued that the courts should defer to the FCC’s contrary reading, and won at the Supreme Court based on the doctrine of judicial deference that Loper Bright has now overruled. 

Putting these legal threads together, Team Telecom is arguing that the FCC cannot classify current broadband offerings as a telecommunications service, even though that’s the best reading of the statute, because that classification would be a “major question” that only Congress can decide. Team Internet argues that Congress clearly delegated that decision-making power to the FCC, which is one reason the Supreme Court did not treat the issue as a “major question” the last time it looked at the issue. Team Telecom also argues that, after the Loper Bright decision, the court need not defer to the FCC’s interpretation of its own authority. Team Internet explains that, this time, the FCC’s interpretation aligns with the best understanding of the statute and the facts.
 
EFF stands with Team Internet and so should the court. It will likely issue a decision in the first half of 2025, so the specter of uncertainty will be with us for some time. Even when the panel issues an opinion, the losing side will be able to request that the full Sixth Circuit rehear the case, and then the Supreme Court would be the next and final resting place of the matter. 

 

Triumphs, Trials, and Tangles From California's 2024 Legislative Session

California’s 2024 legislative session has officially adjourned, and it’s time to reflect on the wins and losses that have shaped Californians’ digital rights landscape this year.

EFF monitored nearly 100 bills in the state this session alone, addressing a broad range of issues related to privacy, free speech, and innovation. These include proposed standards for Artificial Intelligence (AI) systems used by state agencies, the intersection of AI and copyright, police surveillance practices, and various privacy concerns. While we have seen some significant victories, there are also alarming developments that raise concerns about the future of privacy protection in the state.

Celebrating Our Victories

This legislative session brought some wins for privacy advocates—most notably the defeat of four dangerous bills: A.B. 3080, A.B. 1814, S.B. 1076, and S.B. 1047. These bills posed serious threats to consumer privacy and would have undermined the progress we’ve made in previous years.

First, we commend the California Legislature for not advancing A.B. 3080, “The Parent’s Accountability and Child Protection Act,” authored by Assemblymember Juan Alanis (Modesto). The bill would have created powerful incentives for “pornographic internet websites” to use age-verification mechanisms. The bill was not clear on what counts as “sexually explicit content.” Without clear guidelines, it would have further harmed the ability of all youth—particularly LGBTQ+ youth—to access legitimate content online. Different versions of bills requiring age verification have appeared in more than a dozen states. We understand Asm. Alanis’ concerns, but A.B. 3080 would have required broad, privacy-invasive data collection from internet users of all ages. We are grateful that it did not make it to the finish line.

Second, EFF worked with dozens of organizations to defeat A.B. 1814, a facial recognition bill authored by Assemblymember Phil Ting (San Francisco). The bill attempted to expand the use of facial recognition software by police to “match” images from surveillance databases to possible suspects. Those images could then be used to issue arrest warrants or search warrants. The bill merely said that these matches can’t be the sole reason for a warrant to be issued—a standard that has already failed to stop false arrests in other states. Police departments and facial recognition companies alike currently maintain that police cannot justify an arrest using only algorithmic matches, so what would this bill really change? The bill only gave the appearance of doing something to address face recognition technology’s harms, while allowing the practice to continue. California should not give law enforcement the green light to mine databases, particularly those where people contributed information without knowledge that it would be accessed by law enforcement. You can read more about this bill here, and we are glad to see the California legislature reject this dangerous bill.

EFF also worked to oppose and defeat S.B. 1076, by Senator Scott Wilk (Lancaster). This bill would have weakened the California Delete Act (S.B. 362). Enacted last year, the Delete Act provides consumers, by January 1, 2026, with an easy “one-click” button to request the removal of their personal information held by data brokers registered in California. S.B. 1076 would have opened loopholes for data brokers to duck compliance. This would have hurt consumer rights and undone oversight of an opaque ecosystem of entities that collect and then sell the personal information they’ve amassed on individuals. S.B. 1076 would have likely created significant confusion with the development, implementation, and long-term usability of the delete mechanism established in the California Delete Act, particularly as the California Privacy Protection Agency works on regulations for it.

Lastly, EFF opposed S.B. 1047, the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” authored by Senator Scott Wiener (San Francisco). This bill aimed to regulate AI models that might have "catastrophic" effects, such as attacks on critical infrastructure. Ultimately, we believe focusing on speculative, long-term, catastrophic outcomes from AI (like machines going rogue and taking over the world) pulls attention away from AI-enabled harms that are directly before us. EFF supported parts of the bill, like the creation of a public cloud-computing cluster (CalCompute). However, we also had concerns from the beginning that the bill set an abstract and confusing set of regulations for those developing AI systems and was built on a shaky self-certification mechanism. Those concerns remained about the final version of the bill, as it passed the legislature.

Governor Newsom vetoed S.B. 1047; we encourage lawmakers concerned about the threats unchecked AI may pose to instead consider regulation that focuses on real-world harms.  

Of course, this session wasn’t all sunshine and rainbows, and we had some big setbacks. Here are a few:

The Lost Promise of A.B. 3048

Throughout this session, EFF and our partners supported A.B. 3048, common-sense legislation that would have required browsers to let consumers exercise their protections under the California Consumer Privacy Act (CCPA). California is currently one of approximately a dozen states requiring businesses to honor consumer privacy requests made through opt-out preference signals in their browsers and devices. Yet large companies have often made it difficult for consumers to exercise those rights on their own. The bill would have properly balanced providing consumers with ways to exercise their privacy rights without creating burdensome requirements for developers or hindering innovation.
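
For readers curious how such a signal works mechanically: Global Privacy Control, one widely used opt-out preference signal, is conveyed by the browser as a request header. The Python sketch below shows roughly how a site could detect it and record an opt-out; the helper names and storage object are hypothetical, and this is an illustration under those assumptions, not a compliance recipe.

    # Rough sketch: detecting an opt-out preference signal such as
    # Global Privacy Control, which browsers send as "Sec-GPC: 1".
    # Helper names and the ConsentStore stand-in are hypothetical.

    def wants_opt_out(headers: dict) -> bool:
        """True if the request carries an opt-out preference signal."""
        return headers.get("Sec-GPC", "").strip() == "1"

    def handle_request(headers: dict, user_id: str, store) -> None:
        if wants_opt_out(headers):
            # Treat the signal as a do-not-sell/share request for this user.
            store.record_opt_out(user_id)

    class ConsentStore:  # stand-in for a real datastore
        def record_opt_out(self, user_id: str) -> None:
            print(f"opt-out recorded for {user_id}")

    handle_request({"Sec-GPC": "1"}, "user-123", ConsentStore())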

Unfortunately, Governor Newsom chose to veto A.B. 3048. His veto letter cited the lack of support from mobile operators, arguing that because “No major mobile OS incorporates an option for an opt-out signal,” it is “best if design questions are first addressed by developers, rather than by regulators.” EFF believes technologists should be involved in the regulatory process and hopes to assist in that process. But Governor Newsom is wrong: we cannot wait for industry players to voluntarily support regulations that protect consumers. Proactive measures are essential to safeguard privacy rights.

This bill would have moved California in the right direction, making California the first state to require browsers to offer consumers the ability to exercise their rights. 

Wrong Solutions to Real Problems

A big theme we saw this legislative session was proposals that claimed to address real problems but would have been ineffective or failed to respect privacy. These included bills intended to address young people’s safety online and deepfakes in elections.

While we defeated many misguided bills that were introduced to address young people’s access to the internet, S.B. 976, authored by Senator Nancy Skinner (Oakland), received Governor Newsom’s signature and takes effect on January 1, 2027. This proposal aims to regulate the "addictive" features of social media companies, but instead compromises the privacy of consumers in the state. The bill is also likely preempted by federal law and raises considerable First Amendment and privacy concerns. S.B. 976 is unlikely to protect children online, and will instead harm all online speakers by burdening free speech and diminishing online privacy by incentivizing companies to collect more personal information.

It is no secret that deepfakes can be incredibly convincing, and that can have scary consequences, especially during an election year. Two bills that attempted to address this issue are A.B. 2655 and A.B. 2839. Authored by Assemblymember Marc Berman (Palo Alto), A.B. 2655 requires online platforms to develop and implement procedures to block and take down, as well as separately label, digitally manipulated content about candidates and other elections-related subjects that creates a false portrayal about those subjects. We believe A.B. 2655 likely violates the First Amendment and will lead to over-censorship of online speech. The bill is also preempted by Section 230, a federal law that provides partial immunity to online intermediaries for causes of action based on the user-generated content published on their platforms. 

Similarly, A.B. 2839, authored by Assemblymember Gail Pellerin (Santa Cruz), not only bans the distribution of materially deceptive or altered election-related content, but also burdens mere distributors (internet websites, newspapers, etc.) who are unconnected to the creation of the content—regardless of whether they know of the prohibited manipulation. By extending beyond the direct publishers and toward republishers, A.B. 2839 burdens and holds liable republishers of content in a manner that has been found unconstitutional.

There are ways to address the harms of deepfakes without stifling innovation and free speech. We recognize the complex issues raised by potentially harmful, artificially generated election content. But A.B. 2655 and A.B. 2839, as written and passed, likely violate the First Amendment and run afoul of federal law. In fact, less than a month after they were signed, a federal judge put A.B. 2839’s enforcement on pause (via a preliminary injunction) on First Amendment grounds.

Privacy Risks in State Databases

We also saw a troubling trend in the legislature this year that we will be making a priority as we look to 2025. Several bills emerged this session that, in different ways, threatened to weaken privacy protections within state databases. Specifically, A.B. 518 and A.B. 2723, which received Governor Newsom’s signature, are a step backward for data privacy.

A.B. 518 authorizes numerous agencies in California to share, without restriction or consent, personal information with the state Department of Social Services (DSS), exempting this sharing from all state privacy laws. This includes county-level agencies, and people whose information is shared would have no way of knowing or opting out. A.B. 518 is incredibly broad, allowing the sharing of health information, immigration status, education records, employment records, tax records, utility information, children’s information, and even sealed juvenile records—with no requirement that DSS keep this personal information confidential, and no restrictions on what DSS can do with the information.

On the other hand, A.B. 2723 assigns a governing board to the new “Cradle to Career (CTC)” longitudinal education database intended to synthesize student information collected from across the state to enable comprehensive research and analysis. Parents and children provide this information to their schools, but this project means that their information will be used in ways they never expected or consented to. Even worse, as written, this project would be exempt from the following privacy safeguards of the Information Practices Act of 1977 (IPA), which, with respect to state agencies, would otherwise guarantee California parents and students:

  1. the right for subjects whose information is kept in the data system to receive notice that their data is in the system;
  2. the right to consent or, more meaningfully, to withhold consent; and
  3. the right to request correction of erroneous information.

By signing A.B. 2723, Gov. Newsom stripped California parents and students of the right to even know that this is happening, or to agree to this data processing in the first place.

Moreover, while both of these bills allowed state agencies to trample on Californians’ IPA rights, those IPA rights do not even apply to the county-level agencies affected by A.B. 518 or the local public schools and school districts affected by A.B. 2723—pointing to the need for more guardrails around unfettered data sharing on the local level.

A Call for Comprehensive Local Protections

A.B. 2723 and A.B. 518 reveal a crucial missing piece in Californians' privacy rights: that the privacy rights guaranteed to individuals through California's IPA do not protect them from the ways local agencies collect, share, and process data. The absence of robust privacy protections at the local government level is an ongoing issue that must be addressed.

Now is the time to push for stronger privacy protections, hold our lawmakers accountable, and ensure that California remains a leader in the fight for digital privacy. As always, we want to acknowledge how much your support has helped our advocacy in California this year. Your voices are invaluable, and they truly make a difference.

Let’s not settle for half-measures or weak solutions. Our privacy is worth the fight.

No Matter What the Bank Says, It's YOUR Money, YOUR Data, and YOUR Choice

October 30, 2024 at 08:16

The Consumer Financial Protection Bureau (CFPB) has just finalized a rule that makes it easy and safe for you to figure out which bank will give you the best deal and switch to that bank, with just a couple of clicks.

We love this kind of thing: the coolest thing about a digital world is how easy it is to switch from one product or service to another—in theory. Digital tools are so flexible, anyone who wants your business can write a program to import your data into a new service and forward any messages or interactions that show up at the old service.

That's the theory. But in practice, companies have figured out how to use the law (IP law, cybersecurity law, contract law, trade secrecy law) to effectively criminalize this kind of marvelous digital flexibility, so that it can end up being even harder to switch away from a digital service than it is to hop around among traditional, analog ones.

Companies love lock-in. The harder it is to quit a product or service, the worse a company can treat you without risking your business. Economists call the difficulties you face in leaving one service for another the "switching costs" and businesses go to great lengths to raise the switching costs they can impose on you if you have the temerity to be a disloyal customer. 

So long as it's easier to coerce your loyalty than it is to earn it, companies win and their customers lose. That's where the new CFPB rule comes in.

Under this rule, you can authorize a third party (another bank, a comparison shopping site, a broker, or just your bookkeeping software) to request your account data from your bank. The bank has to give the third party all the data you've authorized. This data can include your transaction history and all the data needed to set up your payees and recurring transactions somewhere else.

That means that—for example—you can authorize a comparison shopping site to access some of your bank details, like how much you pay in overdraft fees and service charges, how much you earn in interest, and what your loans and credit cards are costing you. The service can use this data to figure out which bank will cost you the least and pay you the most. 
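
As a rough illustration of the computation such a comparison service could run once it has your authorized data, here is a small Python sketch. The banks, fees, rates, and field names are made up for the example.

    # Illustrative only: compare banks by interest earned minus fees paid.
    def yearly_net(bank: dict, balance: float) -> float:
        interest = balance * bank["apy"]
        fees = 12 * bank["monthly_fee"] + bank["overdraft_fees_last_year"]
        return interest - fees

    banks = [
        {"name": "Old Bank", "apy": 0.0005, "monthly_fee": 12.0, "overdraft_fees_last_year": 105.0},
        {"name": "New Bank", "apy": 0.0400, "monthly_fee": 0.0, "overdraft_fees_last_year": 0.0},
    ]

    balance = 5_000.0
    for bank in banks:
        print(f"{bank['name']}: net ${yearly_net(bank, balance):+,.2f} per year")
    print("Best deal:", max(banks, key=lambda b: yearly_net(b, balance))["name"])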

Then, once you've opened an account with your new best bank, you can direct it to request all your data from your old bank, and with a few clicks, get fully set up in your new financial home. All your payees transfer over, all your regular payments, all the transaction history you'll rely on at tax time. "Painless" is an admittedly weird adjective to apply to household finances, but this comes pretty darned close.

Americans lose a lot of money to banking fees and low interest rates. How much? Well, CFPB economists, using a very conservative methodology, estimate that this rule will make the American public at least $677 million better off, every year.

Now, that $677 million has to come from somewhere, and it does: it comes from the banks that are currently charging sky-high fees and paying rock-bottom interest. The largest of these banks are suing the CFPB in a bid to block the rule from taking effect.

These banks claim that they are doing this to protect us, their depositors, from a torrent of fraud that would be unleashed if we were allowed to give third parties access to our own financial data. Clearly, this is the only reason a giant bank would want to make it harder for us to change to a competitor (it can't possibly have anything to do with the $677 million we stand to save by switching).

We've heard arguments like these before. While EFF takes a back seat to no one when it comes to defending user security (we practically invented this), we reject the idea that user security is improved when corporations lock us in (and leading security experts agree with us).

This is not to say that a bad data-sharing interoperability rule wouldn't be, you know, bad. A rule that lacked the proper safeguards could indeed enable a wave of fraud and identity theft the likes of which we've never seen.

Thankfully, this is a good interoperability rule! We liked it when it was first proposed, and it got even better through the rulemaking process.

First, the CFPB had the wisdom to know that a federal finance agency probably wasn't the best—or only—group of people to design a data-interchange standard. Rather than telling the banks exactly how they should transmit data when requested by their customers, the CFPB instead said, "These are the data you need to share and these are the characteristics of a good standards body. So long as you use a standard from a good standards body that shares this data, you're in compliance with the rule." This is an approach we've advocated for years, and it's the first time we've seen it in the wild.

The CFPB also instructs the banks to fail safe: any time a bank gets a request to share your data that it thinks might be fraudulent, it has the right to block the process until it can get more information and confirm that everything is on the up-and-up.

The rule also regulates the third parties that can get your data, establishing stringent criteria for which kinds of entities can do this. It also limits how they can use your data (strictly for the purposes you authorize), what they must do with the data once those purposes are fulfilled (delete it forever), and what else they are allowed to do with it (nothing). There's also a mini “click-to-cancel” rule that guarantees that you can instantly revoke any third party's access to your data, for any reason.

The CFPB has had the authority to make a rule like this since its founding in 2010, with the passage of the Consumer Financial Protection Act (CFPA). Back when the CFPA was working its way through Congress, the banks howled that they were being forced to give up "their" data to their competitors.

But it's not their data. It's your data. The decision about who you share it with belongs to you, and you alone.

Court Orders Google (a Monopolist) To Knock It Off With the Monopoly Stuff

October 29, 2024 at 09:24

A federal court recently ordered Google to make it easier for Android users to switch to rival app stores, banned Google from using its vast cash reserves to block competitors, and hit Google with a bundle of thou-shalt-nots and assorted prohibitions.

Each of these measures is well crafted, narrowly tailored, and purpose-built to accomplish something vital: improving competition in mobile app stores.

You love to see it.

Some background: the mobile OS market is a duopoly run by two dominant firms, Google (Android) and Apple (iOS). Both companies distribute software through their app stores (Google's is called "Google Play," Apple's is the "App Store"), and both companies use a combination of market power and legal intimidation to ensure that their users get all their apps from the company's store.

This creates a chokepoint: if you make an app and I want to run it, you have to convince Google (or Apple) to put it in their store first. That means that Google and Apple can demand all kinds of concessions from you, in order to reach me. The most important concession is money, and lots of it. Both Google and Apple demand 30 percent of every dime generated with an app: not just the purchase price of the app, but every transaction that takes place within the app after that. The companies have all kinds of onerous rules blocking app makers from asking their users to buy stuff on their website, instead of in the app, or from offering discounts to users who do so.

For avoidance of doubt: 30 percent is a lot. The "normal" rate for payment processing is more like 2-5 percent, a commission that's gone up 40 percent since COVID hit, a price hike that is itself attributable to monopoly power in the sector. That's bad, but Google and Apple demand ten times that (unless you qualify for their small business discount, in which case, they only charge five times more than the Visa/Mastercard cartel).
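
To put those percentages in concrete terms, here is a quick back-of-the-envelope calculation in Python, using a hypothetical $10 in-app purchase and 3 percent as a midpoint for ordinary payment processing.

    price = 10.00                     # a hypothetical $10 in-app purchase
    app_store_cut = price * 0.30      # 30 percent app-store commission
    processor_cut = price * 0.03      # ~3 percent card-processing fee

    print(f"Through the app store, the developer keeps ${price - app_store_cut:.2f}")
    print(f"With an ordinary processor, the developer keeps ${price - processor_cut:.2f}")
    # $7.00 vs. $9.70 of every $10 the user spends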

Epic Games, the company behind the wildly successful multiplayer game Fortnite, has been chasing Google and Apple through the courts over this for years, and last December it prevailed in its case against Google.

This week's court ruling is the next step in that victory. Having concluded that Google illegally acquired and maintained a monopoly over apps for Android, the court had to decide what to do about it.

It's a great judgment: read it for yourself, or peruse the highlights in this excellent summary from The Verge.

For the next three years, Google must meet the following criteria:

  • Allow third-party app stores for Android, and let those app stores distribute all the same apps as are available in Google Play (app developers can opt out of this);
  • Distribute third-party app stores as apps, so users can switch app stores by downloading a new one from Google Play, in just the same way as they'd install any app;
  • Allow apps to use any payment processor, not just Google's 30 percent money-printing machine;
  • Permit app vendors to tell users about other ways to pay for the things they buy in-app;
  • Permit app vendors to set their own prices.

Google is also prohibited from using its cash to fence out rivals, for example, by:

  • Offering incentives to app vendors to launch first on Google Play, or to be exclusive to Google Play;
  • Offering incentives to app vendors to avoid rival app stores;
  • Offering incentives to hardware makers to pre-install Google Play;
  • Offering incentives to hardware makers not to install rival app stores.

These provisions tie in with Google's other recent loss, in Google v. DoJ, where the company was found to have operated a monopoly over search. That case turned on the fact that Google paid unimaginably vast sums (more than $25 billion per year) to phone makers, browser makers, carriers, and, of course, Apple, to make Google Search the default. That meant that every search box you were likely to encounter would connect to Google, meaning that anyone who came up with a better search engine would have no hope of finding users.

What's so great about these remedies is that they strike at the root of the Google app monopoly. Google locks billions of users into its platform, and that means that software authors are at its mercy. By making it easy for users to switch from one app store to another, and by preventing Google from interfering with that free choice, the court is saying to Google, "You can only remain dominant if you're the best - not because you're holding 3.3 billion Android users hostage."

Interoperability (plugging new features, services and products into existing systems) is digital technology's secret superpower, and it's great to see the courts recognizing how a well-crafted interoperability order can cut through thorny tech problems.

Google has vowed to appeal. They say they're being singled out, because Apple won a similar case earlier this year. It's true that a different court got it wrong with Apple.

But Apple's not off the hook, either: the EU's Digital Markets Act took effect this year, and its provisions broadly mirror the injunction that just landed on Google. Apple responded to the EU by refusing to substantively comply with the law, teeing up another big, hairy battle.

In the meantime, we hope that other courts, lawmakers and regulators continue to explore the possible uses of interoperability to make technology work for its users. This order will have far-reaching implications, and not just for games like Fortnite: the 30 percent app tax is a millstone around the neck of all kinds of institutions, from independent game devs who are dolphins caught in Google's tuna net to the free press itself.

Cop Companies Want All Your Data and Other Takeaways from This Year’s IACP Conference

By Beryl Lipton
October 28, 2024 at 10:52

Artificial intelligence dominated the technology talk on panels, among sponsors, and across the trade floor at this year’s annual conference of the International Association of Chiefs of Police (IACP).

IACP, held Oct. 19-22 in Boston, brings together thousands of police employees with the businesses that want to sell them guns, gadgets, and gear. Across the four-day schedule were presentations on issues like election security and conversations with top brass like Secretary of Homeland Security Alejandro Mayorkas. But the central attraction was clearly the trade show floor.

Hundreds of vendors of police technology spent their days trying to attract new police customers and sell existing ones on their newest products. Event sponsors included big names in consumer services, like Amazon Web Services (AWS) and Verizon, and police technology giants, like Axon. There was a private ZZ Top concert at TD Garden for the 15,000+ attendees. Giveaways — stuffed animals, espresso, beer, challenge coins, and baked goods — appeared alongside Cybertrucks, massage stations, and tables of police supplies: vehicles, cameras, VR training systems, and screens displaying software for recordkeeping and data crunching.

And vendors were selling more ways than ever for police to surveil the public and collect as much personal data as possible. EFF will continue to follow up on what we’ve seen in our research and at IACP.

A partial view of the vendor booths at IACP 2024


Doughnuts provided by police tech vendor Peregrine

“All in On AI” Demands Accountability

Police are pushing full speed ahead on AI.

EFF’s Atlas of Surveillance tracks use of AI-powered equipment like face recognition, automated license plate readers, drones, predictive policing, and gunshot detection. We’ve seen a trend toward the integration of these various data streams, along with private cameras, AI video analysis, and information bought from data brokers. We’ve been following the adoption of real-time crime centers. Recently, we started tracking the rise of what we call Third Party Investigative Platforms, which are AI-powered systems that claim to sort or provide huge swaths of data, personal and public, for investigative use. 

The IACP conference featured companies selling all of these kinds of surveillance. Also, each day contained multiple panels on how AI could be integrated into local police work, including featured speakers like Axon founder Rick Smith, Chula Vista Police Chief Roxana Kennedy, and Fort Collins Police Chief Jeff Swoboda, whose agency was among the first to use Axon’s DraftOne, software using genAI to create police reports. Drone as First Responder (DFR) programs were prominently featured by Skydio, Flock Safety, and Brinc. Clearview AI marketed its face recognition software. Axon offered a whole set of different tools, centering its whole presentation around AxonAI and the computer-driven future. 

The booth for police drone provider Brinc

The policing “solution” du jour is AI, but in reality it demands oversight, skepticism, and, in some cases, total elimination. AI in policing carries a dire list of risks, including extreme privacy violations, bias, false accusations, and the sabotage of our civil liberties. Adoption of such tools at minimum requires community control of whether to acquire them, and if adopted, transparency and clear guardrails. 

The Corporate/Law Enforcement Data Surveillance Venn Diagram Is Basically A Circle

AI cannot exist without data: data to train the algorithms, to analyze even more data, to trawl for trends and generate assumptions. Police have been accruing their own data for years through cases, investigations, and surveillance. Corporations have also been gathering information from us: our behavior online, our purchases, how long we look at an image, what we click on. 

As one vendor employee said to us, “Yeah, it’s scary.” 

Corporate harvesting and monetizing of our data is wildly unregulated. Data brokers have been busily vacuuming up whatever information they can. A whole industry provides law enforcement access to as much information about as many people as possible, and packages police data to “provide insights” and visualizations. At IACP, companies like LexisNexis, Peregrine, DataMinr, and others showed off how their platforms can give police access to ever more data from tens of thousands of sources.

Some Cops Care What the Public Thinks

Cops will move ahead with AI, but they would much rather do it without friction from their constituents. Some law enforcement officials remain shaken up by the global 2020 protests following the police murder of George Floyd. Officers at IACP regularly referred to the “public” or the “activists” who might oppose their use of drones and other equipment. One featured presentation, “Managing the Media's 24-Hour News Cycle and Finding a Reporter You Can Trust,” focused on how police can try to set the narrative that the media tells and the public generally believes. In another talk, Chula Vista showed off professionally produced videos designed to win public favor.

This underlines something important: Community engagement, questions, and advocacy are well worth the effort. While many police officers think privacy is dead, it isn’t. We should have faith that when we push back and exert enough pressure, we can stop law enforcement’s full-scale invasion of our private lives.

Cop Tech is Coming To Every Department

The companies that sell police spy tech, and many departments that use it, would like other departments to use it, too, expanding the sources of data feeding into these networks. In panels like “Revolutionizing Small and Mid-Sized Agency Practices with Artificial Intelligence,” and “Futureproof: Strategies for Implementing New Technology for Public Safety,” police officials and vendors encouraged agencies of all sizes to use AI in their communities. Representatives from state and federal agencies talked about regional information-sharing initiatives and ways smaller departments could be connecting and sharing information even as they work out funding for more advanced technology.

A Cybertruck at the booth for Skyfire AI

“Interoperability” and “collaboration” and “data sharing” are all the buzz. AI tools and surveillance equipment are available to police departments of all sizes, and that’s how companies, state agencies, and the federal government want it. It doesn’t matter if you think your Little Local Police Department doesn’t need or can’t afford this technology. Almost every company wants them as a customer, so they can start vacuuming their data into the company system and then share that data with everyone else. 

We Need Federal Data Privacy Legislation

There isn’t a comprehensive federal data privacy law, and it shows. Police officials and their vendors know that there are no guardrails from Congress preventing use of these new tools, and they’re typically able to navigate around piecemeal state legislation. 

We need real laws against this mass harvesting and marketing of our sensitive personal information — a real line in the sand that limits these data companies from helping police surveil us lest we cede even more of our rapidly dwindling privacy. We need new laws to protect ourselves from complete strangers trying to buy and search data on our lives, so we can explore and create and grow without fear of indefinite retention of every character we type, every icon we click. 

Having a computer, using the internet, or buying a cell phone shouldn’t mean signing away your life and its activities to any random person or company that wants to make a dollar off of it.

EU to Apple: “Let Users Choose Their Software”; Apple: “Nah”

October 28, 2024 at 10:48

This year, a far-reaching, complex new piece of legislation comes into effect in the EU: the Digital Markets Act (DMA), which represents some of the most ambitious tech policy in European history. We don’t love everything in the DMA, but some of its provisions are great, because they center the rights of users of technology, and they do that by taking away some of the control platforms exercise over users, and handing that control back to the public who rely on those platforms.

Our favorite parts of the DMA are the interoperability provisions. IP laws in the EU (and the US) have all but killed the longstanding and honorable tradition of adversarial interoperability: that’s when you can alter a service, program or device you use, without permission from the company that made it. Whether that’s getting your car fixed by a third-party mechanic, using third-party ink in your printer, or choosing which apps run on your phone, you should have the final word. If a company wants you to use its official services, it should make the best services, at the best price, not use the law to force you to respect its business model.

It seems the EU agrees with us, at least on this issue. The DMA includes several provisions that force the giant tech companies that control so much of our online lives (AKA “gatekeeper platforms”) to provide official channels for interoperators. This is a great idea, though, frankly, lawmakers should also restore the right of tinkerers and hackers to reverse-engineer your stuff and let you make it work the way you want.

One of these interop provisions is aimed at app stores for mobile devices. Right now, the only (legal) way to install software on your iPhone is through Apple’s App Store. That’s fine, so long as you trust Apple and you think they’re doing a great job, but pobody’s nerfect, and even if you love Apple, they won’t always get it right – like when they tell you you’re not allowed to have an app that records civilian deaths from US drone strikes, or a game that simulates life in a sweatshop, or a dictionary (because it has swear words!). The final word on which apps you use on your device should be yours.

Which is why the EU ordered Apple to open up iOS devices to rival app stores, something Apple categorically refuses to do. Apple’s “plan” for complying with the DMA is, shall we say, sorely lacking (this is part of a grand tradition of American tech giants wiping their butts with EU laws that protect Europeans from predatory activity, like the years Facebook spent ignoring European privacy laws, manufacturing stupid legal theories to defend the indefensible).

Apple’s plan for opening the App Store is effectively impossible for any competitor to use, but this goes double for anyone hoping to offer free and open source software to iOS users. Without free software – operating systems like GNU/Linux, website tools like WordPress, programming languages like Rust and Python, and so on – the internet would grind to a halt.

Our dear friends at Free Software Foundation Europe (FSFE) have filed an important brief with the European Commission, formally objecting to Apple’s ridiculous plan on the grounds that it effectively bars iOS users from choosing free software for their devices.

FSFE’s brief makes a series of legal arguments, rebutting Apple’s self-serving theories about what the DMA really means. FSFE shoots down Apple’s tired argument that copyrights and patents override any interoperability requirements. U.S. courts have been inconsistent on this issue, but we’re hopeful that the Court of Justice of the E.U. will reject the “intellectual property trump card.” Even more importantly, FSFE makes moral and technical arguments about the importance of safeguarding the technological self-determination of users by letting them choose free software, and about why this is as safe – or safer – than giving Apple a veto over its customers’ software choices.

Apple claims that because you might choose bad software, you shouldn’t be able to choose software, period. They say that if competing app stores are allowed to exist, users won’t be safe or private. We disagree – and so do some of the most respected security experts in the world.

It’s true that Apple can use its power wisely to ensure that you only choose good software. But it’s also used that power to attack its users, like in China, where Apple blocked all working privacy tools from iPhones and then neutered a tool used to organize pro-democracy protests.

It’s not just in China, either. Apple has blanketed the world with billboards celebrating its commitment to its users’ privacy, and they made good on that promise, blocking third-party surveillance (to the $10 billion chagrin of Facebook). But right in the middle of all that, Apple also started secretly spying on iOS users to fuel its own surveillance advertising network, and then lied about it.

Pobody’s nerfect. If you trust Apple with your privacy and security, that’s great. But people who don’t trust Apple to have the final word, people who value software freedom, or privacy (from Apple), or democracy (in China), should have the final say.

We’re so pleased to see the EU making tech policy we can get behind – and we’re grateful to our friends at FSFE for holding Apple’s feet to the fire when they flout that law.

The Real Monsters of Street Level Surveillance

By Rory Mir
October 25, 2024 at 17:37

Safe trick-or-treating this Halloween means being aware of the real monsters of street-level surveillance. You might not always see these menaces, but they are watching you. The real-world harms of these terrors wreak havoc on our communities. Here, we highlight just a few of the beasts. To learn more about all of the street-level surveillance creeps in your community, check out our even-spookier resource, sls.eff.org.

If your blood runs too cold, take a break with our favorite digital rights legends, the Encryptids.

The Face Stealer

 "The Face Stealer" text over illustration of a spider-like monster

Careful where you look. Around any corner may loom the Face Stealer, an arachnid mimic that captures your likeness with just a glance. Is that your mother in the woods? Your roommate down the alley? The Stealer thrives on your dread and confusion, luring you into its web. Everywhere you go, strangers and loved ones alike recoil, convinced you’re something monstrous. Survival means adapting to a world where your face is no longer yours—it’s a lure for the horror that claimed it.

The Real Monster

Face recognition technology (FRT) might not jump out at you, but the impacts of this monster are all too real. EFF wants to banish this monster with a full ban on government use, and prohibit companies from feeding on this data without permission. FRT is a tool for mass surveillance, snooping on protesters, and deepening social inequalities.

Three-eyed Beast

"The Three-eyed Beast" text over illustration of a rectangular face with a large camera as a snout, pinned to a shirt with a badge.

Freeze! In your weakest moment, you may  encounter the Three-Eyed Beast—and you don’t want to make any sudden movements. As it snarls, its third eye cracks open and sends a chill through your soul. This magical gaze illuminates your every move, identifying every flaw and mistake. The rest of the world is shrouded in darkness as its piercing squeals of delight turn you into a spectacle—sometimes calling in foes like the Face Stealer. The real fear sets in when the eye closes once more, leaving you alone in the shadows as you realize its gaze was the last to ever find you. 

The Real Monster

Body-worn cameras are marketed as a fix for police transparency, but instead our communities get another surveillance tool pointed at us. Officers often decide when to record and what happens to the footage, leading to selective use that shields misconduct rather than exposing it. Even worse, these cameras can house other surveillance threats like face recognition technology. Without strict safeguards, and community control over whether to adopt them in the first place, these cameras do more harm than good.

Shrapnel Wraith

"The Shrapnel Wraith" text over illustration of a mechanical vulture dropping gears and bolts

If you spot this whirring abomination, it’s likely too late. The Shrapnel Wraith circles, unleashed on our most under-served and over-terrorized communities. This twisted heap of bolts and gears is puppeted by spiteful spirits into the gestalt form of a vulture. It watches your most private moments, but don’t mistake it for a mere voyeur; it also strikes with lethal force. Its junkyard shrapnel explodes through the air, only for two more vultures to rise from the wreckage. Its shadow swallows the streets, its buzzing sinking through your skin. Danger is circling just overhead.

The Real Monster

Drones and robots give law enforcement constant and often unchecked surveillance power. Frequently equipped with tools like high-definition cameras, heat sensors, and license plate readers, these products can extend surveillance into seemingly private spaces like one’s own backyard. Worse, some can be armed with explosives and other weapons, making them a potentially lethal threat. Drone and robot use must have strong protections for people’s privacy, and we strongly oppose arming them with any weapons.

Doorstep Creep

"The Doorstep Creep" text over illustration of a cloaked figure in front of a door, holding a staff topped with a camera

Candy-seekers, watch which doors you ring this Halloween, as the Doorstep Creep lurks at more and more homes. Slinking by the door, this ghoul fosters fear and mistrust in communities, transforming cozy entryways into fortresses of suspicion. Your visit feels judged and unwanted, cast in a shadow of loathing. As you walk away, slanderous whispers echo in the home and down the street. You are not welcome here. Doors lock, blinds close, and the Creep’s dark eyes remind you of how alone you are.

The Real Monster

Community surveillance apps come in many forms, encouraging the adoption of more home security devices like doorway cameras and smart doorbells, and of more crowd-sourced surveillance apps. People come to these apps out of fear and find only more of the same: greater public paranoia, racial gatekeeping, and even vigilante violence. EFF believes the makers of these platforms should position them away from crime and suspicion and toward community support and mutual aid.

Foggy Gremlin

"The Foggy Fremlin" text over illustration of a little monster with sharp teeth and a long tail, rising a GPS location pin.

Be careful where you step around this scavenger. The Foggy Gremlin sticks to you like a leech and envelops you in a psychedelic mist to draw in large predators. You can run, but you can no longer hide, as the fog spreads and grows denser. Anywhere you go, and anywhere you’ve been, is now a hunting ground. As exhaustion sets in, a world once open and bright has become narrow, dark, and sinister.

The Real Monster

Real-time location tracking is a chilling mechanism that enables law enforcement to monitor individuals through data bought from brokers, often without warrants or oversight. Location data, harvested from mobile apps, can be weaponized to conduct area searches that expose sensitive information about countless individuals, the overwhelming majority of whom are innocent. We oppose this digital dragnet and advocate for legislation like the Fourth Amendment is Not For Sale Act to protect individuals from such tracking.

Street Level Surveillance

Fight the monsters in your community

Disability Rights Are Technology Rights

24 octobre 2024 à 17:57

At EFF, our work always begins from the same place: technological self-determination. That’s the right to decide which technology you use, and how you use it. Technological self-determination is important for every technology user, and it’s especially important for users with disabilities.

Assistive technologies are a crucial aspect of living a full and fulfilling life, which gives people with disabilities motivation to be some of the most skilled, ardent, and consequential technology users in the world. There’s a whole world of high-tech assistive tools and devices out there, with disabled technologists and users intimately involved in the design process. 

The accessibility movement’s slogan, “Nothing about us without us,” has its origins in the first stirrings of European democratic sentiment in the sixteenth (!) century, and it expresses a critical truth: no one can ever know your needs as well as you do. Unless you get a say in how things work, they’ll never work right.

So it’s great to see people with disabilities involved in the design of assistive tech, but that’s where self-determination should start, not end. Every person is different, and the needs of people with disabilities are especially idiosyncratic and fine-grained. Everyone deserves and needs the ability to modify, improve, and reconfigure the assistive technologies they rely on.

Unfortunately, the same tech companies that devote substantial effort to building in assistive features often devote even more effort to ensuring that their gadgets, code and systems can’t be modified by their users.

Take streaming video. Back in 2017, the W3C finalized “Encrypted Media Extensions” (EME), a standard for adding digital rights management (DRM) to web browsers. The EME spec includes numerous accessibility features, including facilities for closed captioning and audio description tracks.

But EME is specifically designed so that anyone who reverse-engineers and modifies it will fall afoul of Section 1201 of the Digital Millennium Copyright Act (DMCA 1201), a 1998 law that provides for five-year prison sentences and $500,000 fines for anyone who distributes tools that can modify DRM. The W3C considered – and rejected – a binding covenant that would protect technologists who added more accessibility features to EME.

The upshot of this is that EME’s accessibility features are limited to the suite that a handful of giant technology companies have decided are important enough to develop, and that suite is hardly comprehensive. You can’t (legally) modify an EME-restricted stream to shift the colors to ones that aren’t affected by your color-blindness. You certainly can’t run code that buffers the video and looks ahead to see if there are any seizure-triggering strobe effects, and dampens them if there are. 
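
To make concrete what that prohibition forecloses, here is a minimal sketch – our own illustration, with hypothetical names, not anything from the EME spec or this article – of the kind of color-shifting filter a color-blind viewer might want to run over a video. It works only on unencrypted video sources; an EME-restricted stream renders through a protected path the page cannot read pixels from, and distributing a tool that works around that restriction is exactly what DMCA 1201 threatens to punish.

```ts
// A sketch of a per-frame color remap for a viewer with red-green color
// blindness. The pixel readback step is what DRM is designed to prevent.
function startColorShift(video: HTMLVideoElement, canvas: HTMLCanvasElement): void {
  const ctx = canvas.getContext("2d", { willReadFrequently: true });
  if (!ctx) throw new Error("2D canvas not supported");

  const draw = () => {
    if (video.videoWidth > 0) {
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      ctx.drawImage(video, 0, 0);

      // Read the frame back and nudge the red/green channels apart.
      // A real tool would apply a daltonization matrix tuned to the viewer.
      const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
      const px = frame.data;
      for (let i = 0; i < px.length; i += 4) {
        px[i] = Math.min(255, px[i] * 1.2);       // red
        px[i + 1] = Math.min(255, px[i + 1] * 0.8); // green
      }
      ctx.putImageData(frame, 0, 0);
    }
    requestAnimationFrame(draw);
  };
  requestAnimationFrame(draw);
}
```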

It’s nice that companies like Apple, Google and Netflix put a lot of thought into making EME video accessible, but it’s unforgivable that they arrogated to themselves the sole right to do so. No one should have that power.

It’s bad enough when DRM infects your video streams, but when it comes for hardware, things get really ugly. Powered wheelchairs – a sector dominated by a cartel of private-equity backed giants that have gobbled up all their competing firms – have a serious DRM problem.

Powered wheelchair users who need even basic repairs are corralled by DRM into using the manufacturer’s authorized depots, often enduring long waits during which they are unable to leave their homes or even their beds. Even small routine adjustments, like changing the wheel torque after adjusting your tire pressure, can require an official service call.

Colorado passed the country’s first powered wheelchair Right to Repair law in 2022. Comparable legislation is now pending in California, and the Federal Trade Commission has signaled that it will crack down on companies that use DRM to block repairs. But the wheels of justice grind slow – and wheelchair users’ own wheels shouldn’t be throttled to match them.

People with disabilities don’t just rely on devices that their bodies go into; gadgets that go into our bodies are increasingly common, and there, too, we have a DRM problem. DRM is common in implants like continuous glucose monitors and insulin pumps, where it is used to lock people with diabetes into a single vendor’s products, as a prelude to gouging them (and their insurers) for parts, service, software updates and medicine.

Even when a manufacturer walks away from its products, DRM creates insurmountable legal risks for third-party technologists who want to continue to support and maintain them. That’s bad enough when it’s your smart speaker that’s been orphaned, but imagine what it’s like to have an orphaned neural implant that no one can support without risking prison time under DRM laws.

Imagine what it’s like to have the bionic eye that is literally wired into your head go dark after the company that made it folds up shop – survived only by the 95-year legal restrictions that DRM law provides for, restrictions that guarantee that no one will provide you with software that will restore your vision.

Every technology user deserves the final say over how the systems they depend on work. In an ideal world, every assistive technology would be designed with this in mind: free software, open-source hardware, and designed for easy repair.

But we’re living in the Bizarro world of assistive tech, where not only are tools for people with disabilities routinely designed without any consideration for the user’s ability to modify the systems they rely on – companies actually dedicate extra engineering effort to creating legal liability for anyone who dares to adapt their technology to suit their own needs.

Even if you’re able-bodied today, you will likely need assistive technology or will benefit from accessibility adaptations. The curb-cuts that accommodate wheelchairs make life easier for kids on scooters, parents with strollers, and shoppers and travelers with rolling bags. The subtitles that make TV accessible to Deaf users allow hearing people to follow along when they can’t hear the speaker (or when the director deliberately chooses to muddle the dialog). Alt tags in online images make life easier when you’re on a slow data connection.

Fighting for the right of disabled people to adapt their technology is fighting for everyone’s rights.

(EFF extends our thanks to Liz Henry for their help with this article.)

The UK Must Act: Alaa Abd El-Fattah Still Imprisoned 25 Days After Release Date

23 octobre 2024 à 13:30

It’s been 25 days since September 29, the day that should have seen British-Egyptian blogger, coder, and activist Alaa Abd El Fattah walk free. Egyptian authorities refused to release him at the end of his sentence, in contradiction of the country's own Criminal Procedure Code, which requires that time served in pretrial detention count toward a prison sentence. In the days since, Alaa’s family has been able to secure meetings with high-level British officials, including Foreign Secretary David Lammy, but as of yet, the Egyptian government still has not released Alaa.

In early October, Alaa was named the 2024 PEN Writer of Courage by PEN Pinter Prize winner Arundhati Roy, who presented the award in a ceremony where it was received by Egyptian publication Mada Masr editor Lina Attalah on Alaa’s behalf.

Alaa’s mother, Laila Soueif, is now on her third week of hunger strike and says that she won’t stop until Alaa is free or she’s taken to the hospital. In recent weeks, Alaa’s mother and sisters have met with several members of Parliament in the hopes of placing more pressure on officials. As the BBC reports, his family are “deeply disappointed with how the current government, and the previous one, have handled his case” and believe that the UK has leverage with Egypt that it is not using.

Alaa deserves to finally return to his family, now in the UK, and to be reunited with his son, Khaled, who is now a teenager. We urge EFF supporters in the UK to write to their MP to place pressure on the UK’s Labour government to use its power to push for Alaa’s release.

In Appreciation of David Burnham

Par : David Sobel
22 octobre 2024 à 16:32

We at EFF have long recognized the threats posed by the unchecked technological prowess of law enforcement and intelligence agencies. Since our founding in 1990, we have been in the forefront of efforts to impose meaningful legal controls and accountability on the secretive activities of those entities, including the National Security Agency (NSA). While the U.S. Senate’s Church Committee hearings and report in the mid-1970s documented the past abuses of government surveillance powers, it could not anticipate the dangers those interception and collection capabilities would bring to a networked environment. As Sen. Frank Church said in 1975 about an unchecked NSA, “No American would have any privacy left, such is the capability to monitor everything: telephone conversations, telegrams, it doesn't matter. There would be no place to hide.” The communications infrastructure was still in a mid-20th century analog mode.

One of the first observers to recognize the impact of NSA’s capabilities in the emerging digital landscape was David Burnham, a pioneering investigative journalist and author who passed away earlier this month at 91 years of age. While the obituary that ran at his old home, The New York Times, rightly emphasized Burnham’s ground-breaking investigations of police corruption and the shoddy safety standards of the nuclear power industry (depicted, respectively, in the films “Serpico” and “Silkwood”), those in the digital rights world are especially appreciative of his prescience when it came to the issues we care about deeply.

In 1983, Burnham published “The Rise of the Computer State,” one of the earliest examinations of the emerging challenges of the digital age. As Walter Cronkite wrote in his foreword to the book, “The same computer that enables us to explore the outer reaches of space and the mysteries of the atom can also be turned into an instrument of tyranny. We must ensure that the rise of the computer state does not also mean the demise of our civil liberties.” Here is what Burnham wrote in a piece for The New York Times Magazine based on the reporting in his book:

With unknown billions of Federal dollars, the [NSA] purchases the most sophisticated communications and computer equipment in the world. But truly to comprehend the growing reach of this formidable organization, it is necessary to recall once again how the computers that power the NSA are also gradually changing lives of Americans - the way they bank, obtain benefits from the Government and communicate with family and friends. Every day, in almost every area of culture and commerce, systems and procedures are being adopted by private companies and organizations...that make it easier for the NSA to dominate American society...

Remember, that was written in 1983. Ten years before the launch of the Mosaic browser and three decades before mobile devices became ubiquitous. But Burnham understood the trajectory of the emerging technology, for both the government and its citizens.

Recognizing the dangers of unchecked surveillance powers, Burnham was a champion of oversight and transparency, and, consequently, he was a skilled and aggressive user of the Freedom of Information Act. In 1989, he partnered with Professor Susan Long to establish the Transactional Records Access Clearinghouse (TRAC) at Syracuse University. TRAC combines sophisticated use of FOIA with data analytics techniques “to develop as comprehensive and detailed a picture as possible about what federal enforcement and regulatory agencies actually do . . . and to organize all of this information to make it readily accessible to the public.” From its FOIA requests, TRAC adds more than 3 billion new records to its database annually. Its work is widely acclaimed by the many academics, journalists and lawyers who make use of its extensive resources. It is a fitting legacy to Burnham’s unwavering belief in the power of information.

As EFF Executive Director Cindy Cohn has said when describing our work, we stand on the shoulders of giants. With his recognition of technology’s challenges to privacy, his insistence on transparency, and his joy in telling truth to power, David Burnham was one of them.

Full disclosure: David was a longtime colleague, client and friend.

How Many U.S. Persons Does Section 702 Spy On? The ODNI Needs to Come Clean.

22 octobre 2024 à 13:05

EFF has joined with 23 other organizations including the ACLU, Restore the Fourth, the Brennan Center for Justice, Access Now, and the Freedom of the Press Foundation to demand that the Office of the Director of National Intelligence (ODNI) furnish the public with an estimate of exactly how many U.S. persons’ communications have been hoovered up, and are now sitting on a government server for law enforcement to unconstitutionally sift through at their leisure.

This letter was motivated by the fact that representatives of the National Security Agency (NSA) have promised in the past to provide the public with an estimate of how many U.S. persons—that is, people on U.S. soil—have had their communications “incidentally” collected through the surveillance authority Section 702 of the FISA Amendments Act. 

As the letter states, “ODNI and NSA cannot expect public trust to be unconditional. If ODNI and NSA continue to renege on pledges to members of Congress, and to withhold information that lawmakers, civil society, academia, and the press have persistently sought over the course of thirteen years, that public trust will be fatally undermined.”

Section 702 allows the government to conduct surveillance of foreigners abroad from inside the United States. It operates, in part, through the cooperation of large and small telecommunications service providers which hand over the digital data and communications they oversee. While Section 702 prohibits the NSA from intentionally targeting Americans with this mass surveillance, these agencies routinely acquire a huge amount of innocent Americans' communications “incidentally” because, as it turns out, people in the United States communicate with people overseas all the time. This means that the U.S. government ends up with a massive pool consisting of the U.S.-side of conversations as well as communications from all over the globe. Domestic law enforcement agencies, including the Federal Bureau of Investigation (FBI), can then conduct backdoor warrantless searches of these “incidentally collected” communications. 

For over 10 years, EFF has fought hard every time Section 702 expires in the hope that we can get some much-needed reforms into any bills that seek to reauthorize the authority. Most recently, in spring 2024, Congress renewed Section 702 for another two years with none of the changes necessary to restore privacy rights.

While we wait for the upcoming opportunity to fight Section 702, joining our allies in signing on to this letter in the fight for transparency will give us a better understanding of the scope of the problem.

You can read the whole letter here.

EFF to Massachusetts’ Highest Court: Pretrial Electronic Monitoring Should Not Eviscerate Privacy Rights

Par : Hannah Zhao
22 octobre 2024 à 11:58

When someone is placed on location monitoring for one purpose, it does not justify law enforcement’s access to that information for a completely different purpose without a proper warrant. 

EFF joined the Committee for Public Counsel Services, ACLU, ACLU of Massachusetts, and the Massachusetts Association of Criminal Defense Lawyers, in filing an amicus brief in the Massachusetts Supreme Judicial Court, in Commonwealth v. Govan, arguing just that. 

In this case, the defendant Anthony Govan was subjected to pretrial electronic monitoring as a condition of release prior to trial. In investigating a completely unrelated crime, the police asked the pretrial electronic monitoring division for the identity and location of “anyone” who was near the location of this latter incident. Mr. Govan’s data was part of the response, and that information was used against him in this unrelated case. 

Our joint amicus brief highlighted the coercive nature of electronic monitoring programs. When the alternative is being locked up, there is no meaningful consent to the collection of information under electronic monitoring. At the same time, as someone on pretrial release, Mr. Govan had a reasonable expectation of privacy in his location information. As courts, including the U.S. Supreme Court, have recognized, location and movement information are incredibly sensitive and revealing. Just because someone is on electronic monitoring, it doesn’t mean they have no expectation of privacy, whether they are going to a political protest, a prayer group, an abortion clinic, a gun show, or their private home. Pretrial electronic monitoring collects this information around the clock—information that otherwise would not have been available to law enforcement through traditional tools.  

The violation of privacy is especially problematic in this case, because Mr. Govan had not been convicted and is still presumed to be innocent. According to current law, those on pretrial release are entitled to far stronger Fourth Amendment protections than those who are on monitored release after a conviction. As argued in the amicus brief, absent a proper warrant, the information gathered by the electronic monitoring program should only be used to make sure Mr. Govan was complying with his pretrial release conditions. 

Lastly, although this case turns on the absence of a warrant or a warrant exception, we argued that the court should provide guidance for future warrants. The Fourth Amendment and its state corollaries prohibit “general warrants,” akin to a fishing expedition, and instead require that warrants meet nexus and particularity requirements. Bulk location data requests like the one in this case cannot meet that standard.

While electronic monitoring is marketed as an alternative to detention, the evidence does not bear this out. Courts should not allow the government to use the information gathered from this expansion of state surveillance to be used beyond its purpose without a warrant.

U.S. Border Surveillance Towers Have Always Been Broken

Par : Dave Maass
21 octobre 2024 à 11:47

A new bombshell scoop from NBC News revealed an internal U.S. Border Patrol memo claiming that 30 percent of camera towers that compose the agency's "Remote Video Surveillance System" (RVSS) program are broken. According to the report, the memo describes "several technical problems" affecting approximately 150 towers.

Except, this isn't a bombshell. What should actually be shocking is that Congressional leaders are acting shocked, like those who recently sent a letter about the towers to Department of Homeland Security (DHS) Secretary Alejandro Mayorkas. These revelations simply reiterate what people who have been watching border technology have known for decades: Surveillance at the U.S.-Mexico border is a wasteful endeavor that is ill-equipped to respond to an ill-defined problem.

Yet, after years of bipartisan recognition that these programs were straight-up boondoggles, there seems to be a competition among political leaders to throw the most money at programs that continue to fail.

Official oversight reports about the failures, repeated breakages, and general ineffectiveness of these camera towers have been public since at least the mid-2000s. So why haven't border security agencies confronted the problem in the last 25 years? One reason is that these cameras are largely political theater; the technology dazzles publicly, then fizzles quietly. Meanwhile, communities that should be thriving at the border are treated like a laboratory for tech companies looking to cash in on often exaggerated—if not fabricated—homeland security threats.

The Acronym Game

A map of the US-Mexico border with multicolored dots representing surveillance towers.

EFF is mapping surveillance at the U.S.-Mexico border.

In fact, the history of camera towers at the border is an ugly cycle. First, Border Patrol introduces a surveillance program with a catchy name and big promises. Then a few years later, oversight bodies, including Congress, conclude it's an abject mess. But rather than abandon the program once and for all, border security officials come up with a new name, slap on a fresh coat of paint, and continue on. A few years later, history repeats.

In the early 2000s, there was the Integrated Surveillance Intelligence System (ISIS), with the installation of RVSS towers in places like Calexico, California and Nogales, Arizona; ISIS later became the America's Shield Initiative (ASI). After those failures, there was Project 28 (P-28), the first stage of the Secure Border Initiative (SBInet). When that program was canceled, there were various new programs like the Arizona Border Surveillance Technology Plan, which became the Southwest Border Technology Plan. Border Patrol introduced the Integrated Fixed Tower (IFT) program and the RVSS Update program, then the Autonomous Surveillance Tower (AST) program. And now we've got a whole slew of new acronyms, including the Integrated Surveillance Tower (IST) program and the Consolidated Towers and Surveillance Equipment (CTSE) program.

Feeling overwhelmed by acronyms? Welcome to the shell game of border surveillance. Here's what happens whenever oversight bodies take a closer look.

ISIS and ASI

A surveillance tower over a home.

An RVSS from the early 2000s in Calexico, California.

Let's start with the Integrated Surveillance Intelligence System (ISIS), a program composed of towers, sensors, and databases originally launched in 1997 by the Immigration and Naturalization Service (INS). A few years later, INS was reorganized into the U.S. Department of Homeland Security (DHS), and ISIS became part of the newly formed Customs & Border Protection (CBP).

It was only a matter of years before the DHS Inspector General concluded that ISIS was a flop: "ISIS remote surveillance technology yielded few apprehensions as a percentage of detection, resulted in needless investigations of legitimate activity, and consumed valuable staff time to perform video analysis or investigate sensor alerts."

During Senate hearings, Sen. Judd Gregg (R-NH), complained about a "total breakdown in the camera structures," and that the U.S. government "bought cameras that didn't work."

Around 2004, ISIS was folded into the new America's Shield Initiative (ASI), which officials claimed would fix those problems. CBP Commissioner Robert Bonner even promoted ASI as a "critical part of CBP’s strategy to build smarter borders." Yet, less than a year later, Bonner stepped down, and the Government Accountability Office (GAO) found the ASI had numerous unresolved issues necessitating a total reevaluation. CBP disputed none of the findings and explained it was dismantling ASI in order to move onto something new that would solve everything: the Secure Border Initiative (SBI).

Reflecting on the ISIS/ASI programs in 2008, Rep. Mike Rogers (R-MI) said, "What we found was a camera and sensor system that was plagued by mismanagement, operational problems, and financial waste. At that time, we put the Department on notice that mistakes of the past should not be repeated in SBInet." 

You can guess what happened next. 

P-28 and SBInet

The subsequent iteration was called Project 28, which then evolved into the Secure Border Initiative's SBInet, starting in the Arizona desert. 

In 2010, the DHS Chief Information Officer summarized its comprehensive review: "'Project 28,' the initial prototype for the SBInet system, did not perform as planned. Project 28 was not scalable to meet the mission requirements for a national comment [sic] and control system, and experienced significant technical difficulties." 

A convoluted graphic illustrating how SBInet surveillance towers fit into the border security plan.

A DHS graphic illustrating the SBInet concept

Meanwhile, bipartisan consensus had emerged about the failure of the program, due to the technical problems as well as contracting irregularities and cost overruns.

As Rep. Christopher Carney (D-PA) said in his prepared statement during Congressional hearings:

P–28 and the larger SBInet program are supposed to be a model of how the Federal Government is leveraging technology to secure our borders, but Project 28, in my mind, has achieved a dubious distinction as a trifecta of bad Government contracting: Poor contract management; poor contractor performance; and a poor final product.

Rep. Rogers' remarks were even more cutting: "You know the history of ISIS and what a disaster that was, and we had hoped to take the lessons from that and do better on this and, apparently, we haven’t done much better."

Perhaps most damning of all was yet another GAO report that found, "SBInet defects have been found, with the number of new defects identified generally increasing faster than the number being fixed—a trend that is not indicative of a system that is maturing." 

In January 2011, DHS Secretary Janet Napolitano canceled the $3-billion program.

IFTs, RVSSs, and ASTs

Following the termination of SBInet, the Christian Science Monitor ran the naive headline, "US cancels 'virtual fence' along Mexican border. What's Plan B?" Three years later, the newspaper answered its own question with another question, "'Virtual' border fence idea revived. Another 'billion dollar boondoggle'?"

Boeing was the main contractor blamed for SBInet's failure, but Border Patrol ultimately awarded one of the biggest new contracts to Elbit Systems, which had been one of Boeing's subcontractors on SBInet. Elbit began installing IFTs (again, that stands for "Integrated Fixed Towers") in many of the exact same places slated for SBInet. In some cases, the equipment was simply swapped out on an existing SBInet tower.

Meanwhile, another contractor, General Dynamics Information Technology, began installing new RVSS towers and upgrading old ones as part of the RVSS-U program. Border Patrol also started installing hundreds of "Autonomous Surveillance Towers" (ASTs) by yet another vendor, Anduril Industries, embracing the new buzz of artificial intelligence.

Two surveillance towers and a Border Patrol vehicle along the Rio Grande

An Autonomous Surveillance Tower and an RVSS tower along the Rio Grande.

In 2017, the GAO complained that Border Patrol's poor data quality made the agency "limited in its ability to determine the mission benefits of its surveillance technologies." In one case, Border Patrol stations in the Rio Grande Valley claimed IFTs assisted in 500 cases in just six months. The problem with that assertion: there are no IFTs in Texas or, in fact, anywhere outside Arizona.

A few years later, the DHS Inspector General issued yet another report indicating not much had improved:

CBP faced additional challenges that reduced the effectiveness of its existing technology. Border Patrol officials stated they had inadequate personnel to fully leverage surveillance technology or maintain current information technology systems and infrastructure on site. Further, we identified security vulnerabilities on some CBP servers and workstations not in compliance due to disagreement about the timeline for implementing DHS configuration management requirements.

CBP is not well-equipped to assess its technology effectiveness to respond to these deficiencies. CBP has been aware of this challenge since at least 2017 but lacks a standard process and accurate data to overcome it.

Overall, these deficiencies have limited CBP’s ability to detect and prevent the illegal entry of noncitizens who may pose threats to national security.

Around that same time, the RAND Corporation published a study funded by DHS that found "strong evidence" the IFT program was having no impact on apprehension levels at the border, and only "weak" and "inconclusive" evidence that the RVSS towers were having any effect on apprehensions.

And yet, border authorities and their supporters in Congress are continuing to promote unproven, AI-driven technologies as the latest remedy for years of failures, including the ones voiced in the memo obtained by NBC News. These systems involve cameras controlled by algorithms that automatically identify and track objects or people of interest. But in an age when algorithmic errors and bias are being identified nearly every day in every sector, including law enforcement, it is unclear how this technology has earned the trust of the government.

History Keeps Repeating

That brings us to today, with reportedly 150 or more towers out of service. So why does Washington keep supporting surveillance at the border? Why are they proposing record-level funding for a system that seems irreparable? Why have they abandoned their duty to scrutinize federal programs?

Well, one reason may be that treating problems at the border as humanitarian crises or pursuing foreign policy or immigration reform measures isn't as politically useful as promoting a phantom "invasion" that requires a military-style response. Another reason may be that tech companies and defense contractors wield immense amounts of influence and stand to make millions, if not billions, profiting off border surveillance. The price is paid by taxpayers, but also in the civil liberties of border communities and the human rights of asylum seekers and migrants.

But perhaps the biggest reason this history keeps repeating itself is that no one is ever really held accountable for wasting potentially billions of dollars on high-tech snake oil.

EFF to Third Circuit: TikTok Has Section 230 Immunity for Video Recommendations

Par : Sophia Cope
18 octobre 2024 à 18:24

UPDATE: On October 23, 2024, the Third Circuit denied TikTok's petition for rehearing en banc.

EFF legal intern Nick Delehanty was the principal author of this post.

EFF filed an amicus brief in the U.S. Court of Appeals for the Third Circuit in support of TikTok’s request that the full court reconsider the case Anderson v. TikTok after a three-judge panel ruled that Section 230 immunity doesn’t apply to TikTok’s recommendations of users’ videos. We argued that the panel was incorrect on the law, and that this case has wide-ranging implications for the internet as we know it today. EFF was joined on the brief by the Center for Democracy & Technology (CDT), the Foundation for Individual Rights and Expression (FIRE), Public Knowledge, Reason Foundation, and the Wikimedia Foundation.

At issue is the panel’s misapplication of First Amendment precedent. The First Amendment protects the editorial decisions of publishers about whether and how to display content, such as the videos TikTok displays to users through its recommendation algorithm.

Additionally, because limited First Amendment protection leaves publishers open to liability under common law for other people’s content that they publish (for example, defamatory letters to the editor in print newspapers), Congress passed Section 230 to protect online platforms from liability for harmful user-generated content.

Section 230 has been pivotal for the growth and diversity of the internet—without it, internet intermediaries would potentially be liable for every piece of content posted by users, making them less likely to offer open platforms for third-party speech.

In this case, the Third Circuit panel erroneously held that since TikTok enjoys protection for editorial choices under the First Amendment, TikTok’s recommendations of user videos amount to TikTok’s first-party speech, making it ineligible for Section 230 immunity. In our brief, we argued that First Amendment protection for editorial choices and Section 230 protection are not mutually exclusive.

We also argued that the panel’s ruling does not align with what every other circuit has found: that Section 230 also immunizes the editorial decisions of internet intermediaries. We made four main points in support of this argument:

  • First, the panel ignored the text of Section 230 in that editorial choices are included in the commonly understood definition of “publisher” in the statute.
  • Second, the panel created a loophole in Section 230 by allowing plaintiffs who were harmed by user-generated content to bypass Section 230 by focusing on an online platform’s editorial decisions about how that content was displayed.
  • Third, it’s crucial that Section 230 protects editorial decisions notwithstanding additional First Amendment protection because Section 230 immunity is not only a defense against liability, it’s also a way to end a lawsuit early. Online platforms might ultimately win lawsuits on First Amendment grounds, but the time and expense of protracted litigation would make them less interested in hosting user-generated content. Section 230’s immunity from suit (as well as immunity from liability) advances Congress’ goal of encouraging speech at scale on the internet.
  • Fourth, TikTok’s recommendations specifically are part of a publisher’s “traditional editorial functions” because recommendations reflect choices around the display of third-party content and so are protected by Section 230.

We also argued that allowing the panel’s decision to stand would harm not only internet intermediaries, but all internet users. If internet intermediaries were liable for recommending or otherwise deciding how to display third-party content posted to their platforms, they would end useful content curation and engage in heavy-handed censorship to remove anything that might be legally problematic from their platforms. These responses to a weakened Section 230 would greatly limit users’ speech on the internet.

The full Third Circuit should recognize the error of the panel’s decision and reverse to preserve free expression online.

A Flourishing Internet Depends on Competition

Antitrust law has long recognized that monopolies stifle innovation and gouge consumers on price. When it comes to Big Tech, harm to innovation—in the form of “kill zones,” where major corporations buy up new entrants to a market before they can compete with them—has been easy to find. Consumer harms have been harder to quantify, since a lot of services the Big Tech companies offer are “free.” This is why we must move beyond price as the major determinant of consumer harm. And once that’s done, it’s easier to see the even greater benefits competition brings to the broader internet ecosystem.

In the decades since the internet entered our lives, it has changed from a wholly new and untested environment to one where a few major players dominate everyone's experience. Policymakers have been slow to adapt and have equated what's good for the whole internet with what is good for those companies. Instead of a balanced ecosystem, we have a monoculture. We need to eliminate the buildup of power around the giants and instead have fertile soil for new growth.

Content Moderation 

In content moderation, for example, it’s basically rote for experts to say that content moderation is impossible at scale. Facebook reports over three billion active users and is available in over 100 languages. However, Facebook is an American company that primarily does its business in English. Communication, in every culture, is heavily dependent on context. Even if Facebook were hiring experts in every language it operates in, which it manifestly is not, the company itself runs on American values. Being able to choose a social media service rooted in your own culture and language is important. It’s not that people have to choose that service, but it’s important that they have the option.

This sometimes happens in smaller fora. For example, the knitting website Ravelry, a central hub for patterns and discussions about yarn, banned all discussions about then-President Donald Trump in 2019, as it was getting toxic. A number of disgruntled users banded together to make their disallowed content available in other places. 

In a competitive landscape, instead of demanding that Facebook, Twitter, or YouTube have the exact content rules you want, you could pick a service with the ones you want. If you want everything protected by the First Amendment, you could find it. If you want an environment with clear rules, consistently enforced, you could find that. Especially since smaller platforms could actually enforce their rules, unlike the current behemoths.

Product Quality 

The same thing applies to product quality and the “enshittification” of platforms. Even if all of Facebook’s users spoke the same language, that’s no guarantee that they share the same values, needs, or wants. But Facebook is an American company, and it conducts its business largely in English and according to American cultural norms. As it is, Facebook’s feeds are designed to maximize user engagement and time on the service. Some people may like the recommendation algorithm, but others may want the traditional chronological feed. There’s no incentive for Facebook to offer the choice because it is not concerned with losing users to a competitor that does. It’s concerned with being able to serve as many ads to as many people as possible. In general, Facebook lacks user controls that would allow people to customize their experience on the site. That includes the ability to reorganize your feed to be chronological, to eliminate posts from anyone you don’t know, etc. There may be people who like the current, ad-focused algorithm, but no one else can get a product they would like.

Another obvious example is how much the experience of googling something has deteriorated. It’s almost hackneyed to complain about it now, but when it started, Google was revolutionary in its ability to a) find exactly what you were searching for and b) allow natural language searching (that is, not requiring you to use boolean searches in order to get the desired result). Google’s secret sauce was, for a long time, the ability to find the right result to a totally unique search query. If you could remember some specific string of words in the thing you were looking for, Google could find it. However, in the endless hunt for “growth,” Google moved away from quality search results and towards quantity. It also clogged the first page of results with ads and sponsored links.

Morals, Privacy, and Security 

There are many individuals and small businesses that would like to avoid using Big Tech services, either because they are bad or because they have ethical and moral concerns. But, the bigger they are, the harder it is to avoid. For example, even if someone decides not to buy products from Amazon.com because they don’t agree with how it treats its workers, they may not be able to avoid patronizing Amazon Web Services (AWS), which funds the commerce side of the business. Netflix, The Guardian, Twitter, and Nordstrom are all companies that pay for Amazon’s services. The Mississippi Department of Employment Security moved its data management to Amazon in 2021. Trying to avoid Amazon entirely is functionally impossible. This means that there is no way for people to “vote with their feet,” withholding their business from companies they disagree with.  

Security and privacy are also at risk without competition. For one thing, it’s easier for a malicious actor or oppressive state to get what they want when it’s all in the hands of a single company—a single point of failure. When a single company controls the tools everyone relies on, an outage cripples the globe. This digital monoculture was on display during this year's Crowdstrike outage, where one badly-thought-out update crashed networks across the world and across industries. The personal danger of digital monoculture shows itself when Facebook messages are used in a criminal investigation against a mother and daughter discussing abortion, and in “geofence warrants” that demand Google turn over information about every device within a certain distance of a crime. For another thing, when everyone is only able to share expression in a few places, it becomes easier for regimes to target certain speech and for gatekeepers to maintain control over creativity.

Another example of the relationship between privacy and competition is Google’s so-called “Privacy Sandbox.” Google has pitched it as removing the “third-party cookies” that track you across the internet. However, the change actually just moved that data into the sole control of Google, helping cement its ad monopoly. Instead of eliminating tracking, the Privacy Sandbox does the tracking within the browser directly, allowing Google to charge advertisers and websites for access to the insights gleaned from your browsing history, rather than letting those companies do the tracking themselves. It’s not more privacy, it’s just concentrated control of data.
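
As an illustration of what “tracking within the browser” looks like in practice, here is a minimal sketch assuming Chrome’s Topics API – one piece of the Privacy Sandbox – and its document.browsingTopics() call as publicly documented. The function name and return shape are that assumption, not something drawn from this article.

```ts
// A sketch of a site or ad script asking the browser for interest "topics"
// the browser itself inferred from the user's browsing history: the tracking
// has moved into the browser rather than disappearing.
async function logAdTargetingTopics(): Promise<void> {
  const doc = document as Document & { browsingTopics?: () => Promise<unknown[]> };
  if (typeof doc.browsingTopics !== "function") {
    console.log("Topics API not available in this browser.");
    return;
  }
  try {
    // Resolves to topic objects such as { topic: 186, taxonomyVersion: "1", ... }.
    const topics = await doc.browsingTopics();
    console.log("Interest topics the browser will share for ad targeting:", topics);
  } catch (err) {
    console.error("Topics call blocked (permissions policy or user settings):", err);
  }
}

logAdTargetingTopics();
```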

You see this same thing at play with Apple’s app store in the saga of Beeper Mini, an app that allowed secure communications through iMessage between Apple and non-Apple phones. In doing so, it eliminated the dreaded “green bubbles” that indicated that messages were not encrypted (i.e., not between two iPhones). While Apple’s design choice was, in theory, meant to flag that your conversation wasn’t secure, in practice it motivated people to get iPhones just to avoid the stigma. Beeper Mini made messages more secure and removed the need to get a whole new phone to get rid of the green bubble. So Apple moved to break Beeper Mini, effectively choosing monopoly over security. If Apple had moved to secure non-iPhone messages on its own, that would be one thing. But it didn’t; it just prevented users from securing them on their own.

Obviously, competition isn’t a panacea. But, like privacy, its prioritization means less emergency firefighting and more fire prevention. Think of it as a controlled burn—removing the dross that smothers new growth and allows fires to rage larger than ever before.  

California Attorney General Issues New Guidance on Military Equipment to Law Enforcement

17 octobre 2024 à 16:04

California law enforcement should take note: the state’s Attorney General has issued a new bulletin advising them on how to comply with AB 481—a state law that regulates how law enforcement agencies can use, purchase, and disclose information about military equipment at their disposal. This important guidance comes in the wake of an exposé showing that despite awareness of AB 481, the San Francisco Police Department (SFPD) flagrantly disregarded the law. EFF applauds the Attorney General’s office for reminding police and sheriff’s departments what the law says and what their obligations are, and urges the state’s top law enforcement officer to monitor agencies’ compliance with the law.

The bulletin emphasizes that law enforcement agencies must seek permission from governing bodies like city councils or boards of supervisors before buying any military equipment, or even applying for grants or soliciting donations to procure that equipment. The bulletin also reminds all California law enforcement agencies and state agencies with law enforcement divisions of their transparency obligations: they must post on their website a military equipment use policy that describes, among other details, the capabilities, purposes and authorized uses, and financial impacts of the equipment, as well as oversight and enforcement mechanisms for violations of the policy. Law enforcement agencies must also publish an annual military equipment report that provides information on how the equipment was used the previous year and the associated costs.

Agencies must cease use of any military equipment, including drones, if they have not sought the proper permission to use them. This is particularly important in San Francisco, where the SFPD has been caught, via public records, purchasing drones without seeking the proper authorization first, over the warnings of the department’s own policy officials.

In a climate where few cities and states have laws governing what technology and equipment police departments can use, Californians are fortunate to have regulations like AB 481 requiring transparency, oversight, and democratic control by elected officials of military equipment. But those regulations are far less effective if there is no accountability mechanism to ensure that police and sheriff’s departments follow them.


The SFPD and all other California law enforcement agencies must re-familiarize themselves with the rules. Police and sheriff’s departments must obtain permission and justify purchases before they buy military equipment, have use policies approved by their local governing body, and provide yearly reports about what they have and how much it costs.
