
No Matter What the Bank Says, It's YOUR Money, YOUR Data, and YOUR Choice

October 30, 2024 at 08:16

The Consumer Financial Protection Bureau (CFPB) has just finalized a rule that makes it easy and safe for you to figure out which bank will give you the best deal, and to switch to that bank with just a couple of clicks.

We love this kind of thing: the coolest thing about a digital world is how easy it is to switch from one product or service to another—in theory. Digital tools are so flexible that anyone who wants your business can write a program to import your data into a new service and forward any messages or interactions that show up at the old service.

That's the theory. But in practice, companies have figured out how to use law—IP law, cybersecurity law, contract law, trade secrecy law—to literally criminalize this kind of marvelous digital flexibility, so that it can end up being even harder to switch away from a digital service than it is to hop around among traditional, analog ones.

Companies love lock-in. The harder it is to quit a product or service, the worse a company can treat you without risking your business. Economists call the difficulties you face in leaving one service for another the "switching costs," and businesses go to great lengths to raise the switching costs they can impose on you if you have the temerity to be a disloyal customer.

So long as it's easier to coerce your loyalty than it is to earn it, companies win and their customers lose. That's where the new CFPB rule comes in.

Under this rule, you can authorize a third party - another bank, a comparison shopping site, a broker, or just your bookkeeping software - to request your account data from your bank. The bank has to give the third party all the data you've authorized. This data can include your transaction history and all the data needed to set up your payees and recurring transactions somewhere else.

That means that—for example—you can authorize a comparison shopping site to access some of your bank details, like how much you pay in overdraft fees and service charges, how much you earn in interest, and what your loans and credit cards are costing you. The service can use this data to figure out which bank will cost you the least and pay you the most. 
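
To make that concrete, here's a minimal sketch (in Python) of the arithmetic a comparison service could run once you've authorized it to see your data. The bank names, rates, and fees are all invented for illustration; a real service would pull these figures through the standardized interfaces the rule requires.

```python
# Minimal sketch: rank banks by what they'd net you in a year, given
# fee and interest data shared with the customer's authorization.
# All names and figures below are hypothetical.

def net_annual_value(balance, interest_rate, annual_fees):
    """Interest earned minus fees paid over one year."""
    return balance * interest_rate - annual_fees

my_balance = 5_000.00  # average balance, in dollars

offers = {
    "Current Bank": {"interest_rate": 0.0001, "annual_fees": 180.00},
    "Bank A":       {"interest_rate": 0.0400, "annual_fees": 0.00},
    "Bank B":       {"interest_rate": 0.0350, "annual_fees": 60.00},
}

# Sort from best deal to worst.
ranked = sorted(
    offers.items(),
    key=lambda item: net_annual_value(my_balance, **item[1]),
    reverse=True,
)

for name, terms in ranked:
    print(f"{name}: ${net_annual_value(my_balance, **terms):+,.2f}/year")
```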

Then, once you've opened an account with your new best bank, you can direct it to request all your data from your old bank, and with a few clicks, get fully set up in your new financial home. All your payees transfer over, all your regular payments, all the transaction history you'll rely on at tax time. "Painless" is an admittedly weird adjective to apply to household finances, but this comes pretty darned close.

Americans lose a lot of money to banking fees and low interest rates. How much? Well, CFPB economists, using a very conservative methodology, estimate that this rule will make the American public at least $677 million better off, every year.

Now, that $677 million has to come from somewhere, and it does: it comes from the banks that are currently charging sky-high fees and paying rock-bottom interest. The largest of these banks are suing the CFPB in a bid to block the rule from taking effect.

These banks claim that they are doing this to protect us, their depositors, from a torrent of fraud that would be unleashed if we were allowed to give third parties access to our own financial data. Clearly, this is the only reason a giant bank would want to make it harder for us to change to a competitor (it can't possibly have anything to do with the $677 million we stand to save by switching).

We've heard arguments like these before. While EFF takes a back seat to no one when it comes to defending user security (we practically invented this), we reject the idea that user security is improved when corporations lock us in (and leading security experts agree with us).

This is not to say that a bad data-sharing interoperability rule wouldn't be, you know, bad. A rule that lacked the proper safeguards could indeed enable a wave of fraud and identity theft the likes of which we've never seen.

Thankfully, this is a good interoperability rule! We liked it when it was first proposed, and it got even better through the rulemaking process.

First, the CFPB had the wisdom to know that a federal finance agency probably wasn't the best—or only—group of people to design a data-interchange standard. Rather than telling the banks exactly how they should transmit data when requested by their customers, the CFPB instead said, "These are the data you need to share and these are the characteristics of a good standards body. So long as you use a standard from a good standards body that shares this data, you're in compliance with the rule." This is an approach we've advocated for years, and it's the first time we've seen it in the wild.

The CFPB also instructs the banks to fail safe: any time a bank gets a request to share your data that it thinks might be fraudulent, it has the right to block the process until it can get more information and confirm that everything is on the up-and-up.

The rule also regulates the third parties that can get your data, establishing stringent criteria for which kinds of entities can do this. It also limits how they can use your data (strictly for the purposes you authorize), what they must do with it once those purposes are fulfilled (delete it forever), and what else they may do with it (nothing). There's also a mini "click-to-cancel" rule that guarantees that you can instantly revoke any third party's access to your data, for any reason.

The CFPB has had the authority to make a rule like this since its founding in 2010, with the passage of the Consumer Financial Protection Act (CFPA). Back when the CFPA was working its way through Congress, the banks howled that they were being forced to give up "their" data to their competitors.

But it's not their data. It's your data. The decision about who you share it with belongs to you, and you alone.

Court Orders Google (a Monopolist) To Knock It Off With the Monopoly Stuff

October 29, 2024 at 09:24

A federal court recently ordered Google to make it easier for Android users to switch to rival app stores, banned Google from using its vast cash reserves to block competitors, and hit Google with a bundle of thou-shalt-nots and assorted prohibitions.

Each of these measures is well crafted, narrowly tailored, and purpose-built to accomplish something vital: improving competition in mobile app stores.

You love to see it.

Some background: the mobile OS market is a duopoly run by two dominant firms, Google (Android) and Apple (iOS). Both companies distribute software through their app stores (Google's is called "Google Play," Apple's is the "App Store"), and both companies use a combination of market power and legal intimidation to ensure that their users get all their apps from the company's store.

This creates a chokepoint: if you make an app and I want to run it, you have to convince Google (or Apple) to put it in their store first. That means that Google and Apple can demand all kinds of concessions from you, in order to reach me. The most important concession is money, and lots of it. Both Google and Apple demand 30 percent of every dime generated with an app - not just the purchase price of the app, but every transaction that takes place within the app after that. The companies have all kinds of onerous rules blocking app makers from asking their users to buy stuff on their website, instead of in the app, or from offering discounts to users who do so.

For avoidance of doubt: 30 percent is a lot. The "normal" rate for payment processing is more like 2-5 percent, a commission that has itself gone up 40 percent since covid hit, a price-hike attributable to monopoly power in the payments sector. That's bad, but Google and Apple demand ten times that (unless you qualify for their small business discount, in which case they only charge 15 percent: five times more than the Visa/Mastercard cartel).

Epic Games - the company behind the wildly successful multiplayer game Fortnite - has been chasing Google and Apple through the courts over this for years, and last December, they prevailed in their case against Google.

This week's court ruling is the next step in that victory. Having concluded that Google illegally acquired and maintained a monopoly over apps for Android, the court had to decide what to do about it.

It's a great judgment: read it for yourself, or peruse the highlights in this excellent summary from The Verge.

For the next three years, Google must meet the following criteria:

  • Allow third-party app stores for Android, and let those app stores distribute all the same apps as are available in Google Play (app developers can opt out of this);
  • Distribute third-party app stores as apps, so users can switch app stores by downloading a new one from Google Play, in just the same way as they'd install any app;
  • Allow apps to use any payment processor, not just Google's 30 percent money-printing machine;
  • Permit app vendors to tell users about other ways to pay for the things they buy in-app;
  • Permit app vendors to set their own prices.

Google is also prohibited from using its cash to fence out rivals, for example, by:

  • Offering incentives to app vendors to launch first on Google Play, or to be exclusive to Google Play;
  • Offering incentives to app vendors to avoid rival app stores;
  • Offering incentives to hardware makers to pre-install Google Play;
  • Offering incentives to hardware makers not to install rival app stores.

These provisions tie in with Google's other recent loss, in the DOJ's search case, where the company was found to have operated a monopoly over search. That case turned on the fact that Google paid unimaginably vast sums - more than $25 billion per year - to phone makers, browser makers, carriers, and, of course, Apple, to make Google Search the default. That meant that every search box you were likely to encounter would connect to Google, meaning that anyone who came up with a better search engine would have no hope of finding users.

What's so great about these remedies is that they strike at the root of the Google app monopoly. Google locks billions of users into its platform, and that means that software authors are at its mercy. By making it easy for users to switch from one app store to another, and by preventing Google from interfering with that free choice, the court is saying to Google, "You can only remain dominant if you're the best - not because you're holding 3.3 billion Android users hostage."

Interoperability - plugging new features, services and products into existing systems - is digital technology's secret superpower, and it's great to see the courts recognizing how a well-crafted interoperability order can cut through thorny tech problems. 

Google has vowed to appeal. They say they're being singled out, because Apple won a similar case earlier this year. It's true, a different court got it wrong with Apple.

But Apple's not off the hook, either: the EU's Digital Markets Act took effect this year, and its provisions broadly mirror the injunction that just landed on Google. Apple responded to the EU by refusing to substantively comply with the law, teeing up another big, hairy battle.

In the meantime, we hope that other courts, lawmakers and regulators continue to explore the possible uses of interoperability to make technology work for its users. This order will have far-reaching implications, and not just for games like Fortnite: the 30 percent app tax is a millstone around the neck of all kinds of institutions, from independent game devs who are dolphins caught in Google's tuna net to the free press itself.

EU to Apple: “Let Users Choose Their Software”; Apple: “Nah”

October 28, 2024 at 10:48

This year, a far-reaching, complex new piece of legislation comes into effect in the EU: the Digital Markets Act (DMA), which represents some of the most ambitious tech policy in European history. We don’t love everything in the DMA, but some of its provisions are great, because they center the rights of users of technology, and they do that by taking away some of the control platforms exercise over users, and handing that control back to the public who rely on those platforms.

Our favorite parts of the DMA are the interoperability provisions. IP laws in the EU (and the US) have all but killed the longstanding and honorable tradition of adversarial interoperability: that’s when you can alter a service, program or device you use, without permission from the company that made it. Whether that’s getting your car fixed by a third-party mechanic, using third-party ink in your printer, or choosing which apps run on your phone, you should have the final word. If a company wants you to use its official services, it should make the best services, at the best price – not use the law to force you to respect its business model.

It seems the EU agrees with us, at least on this issue. The DMA includes several provisions that force the giant tech companies that control so much of our online lives (AKA “gatekeeper platforms”) to provide official channels for interoperators. This is a great idea, though, frankly, lawmakers should also restore the right of tinkerers and hackers to reverse-engineer your stuff and let you make it work the way you want.

One of these interop provisions is aimed at app stores for mobile devices. Right now, the only (legal) way to install software on your iPhone is through Apple’s App Store. That’s fine, so long as you trust Apple and you think they’re doing a great job, but pobody’s nerfect, and even if you love Apple, they won’t always get it right – like when they tell you you’re not allowed to have an app that records civilian deaths from US drone strikes, or a game that simulates life in a sweatshop, or a dictionary (because it has swear words!). The final word on which apps you use on your device should be yours.

Which is why the EU ordered Apple to open up iOS devices to rival app stores, something Apple categorically refuses to do. Apple’s “plan” for complying with the DMA is, shall we say, sorely lacking (this is part of a grand tradition of American tech giants wiping their butts with EU laws that protect Europeans from predatory activity, like the years Facebook spent ignoring European privacy laws, manufacturing stupid legal theories to defend the indefensible).

Apple’s plan for opening the App Store is effectively impossible for any competitor to use, but this goes double for anyone hoping to offer free and open source software to iOS users. Without free software – operating systems like GNU/Linux, website tools like WordPress, programming languages like Rust and Python, and so on – the internet would grind to a halt.

Our dear friends at Free Software Foundation Europe (FSFE) have filed an important brief with the European Commission, formally objecting to Apple’s ridiculous plan on the grounds that it effectively bars iOS users from choosing free software for their devices.

FSFE’s brief makes a series of legal arguments, rebutting Apple’s self-serving theories about what the DMA really means. FSFE shoots down Apple’s tired argument that copyrights and patents override any interoperability requirements. U.S. courts have been inconsistent on this issue, but we’re hopeful that the Court of Justice of the E.U. will reject the “intellectual property trump card.” Even more importantly, FSFE makes moral and technical arguments about the importance of safeguarding the technological self-determination of users by letting them choose free software, and about why this is as safe – or safer – than giving Apple a veto over its customers’ software choices.

Apple claims that because you might choose bad software, you shouldn’t be able to choose software, period. They say that if competing app stores are allowed to exist, users won’t be safe or private. We disagree – and so do some of the most respected security experts in the world.

It’s true that Apple can use its power wisely to ensure that you only choose good software. But it’s also used that power to attack its users, like in China, where Apple blocked all working privacy tools from iPhones and then neutered a tool used to organize pro-democracy protests.

It’s not just in China, either. Apple has blanketed the world with billboards celebrating its commitment to its users’ privacy, and they made good on that promise, blocking third-party surveillance (to the $10 billion chagrin of Facebook). But right in the middle of all that, Apple also started secretly spying on iOS users to fuel its own surveillance advertising network, and then lied about it.

Pobody’s nerfect. If you trust Apple with your privacy and security, that’s great. But people who don’t trust Apple to have the final word – people who value software freedom, or privacy (from Apple), or democracy (in China) – should have the final say.

We’re so pleased to see the EU making tech policy we can get behind – and we’re grateful to our friends at FSFE for holding Apple’s feet to the fire when they flout that law.

Disability Rights Are Technology Rights

October 24, 2024 at 17:57

At EFF, our work always begins from the same place: technological self-determination. That’s the right to decide which technology you use, and how you use it. Technological self-determination is important for every technology user, and it’s especially important for users with disabilities.

Assistive technologies are a crucial aspect of living a full and fulfilling life, which gives people with disabilities motivation to be some of the most skilled, ardent, and consequential technology users in the world. There’s a whole world of high-tech assistive tools and devices out there, with disabled technologists and users intimately involved in the design process. 

The accessibility movement’s slogan, “Nothing about us without us,” has its origins in the first stirrings of European democratic sentiment in the sixteenth (!) century, and it expresses a critical truth: no one can ever know your needs as well as you do. Unless you get a say in how things work, they’ll never work right.

So it’s great to see people with disabilities involved in the design of assistive tech, but that’s where self-determination should start, not end. Every person is different, and the needs of people with disabilities are especially idiosyncratic and fine-grained. Everyone deserves and needs the ability to modify, improve, and reconfigure the assistive technologies they rely on.

Unfortunately, the same tech companies that devote substantial effort to building in assistive features often devote even more effort to ensuring that their gadgets, code and systems can’t be modified by their users.

Take streaming video. Back in 2017, the W3C finalized “Encrypted Media Extensions” (EME), a standard for adding digital rights management (DRM) to web browsers. The EME spec includes numerous accessibility features, including facilities for closed captioning and audio description tracks.

But EME is specifically designed so that anyone who reverse-engineers and modifies it will fall afoul of Section 1201 of the Digital Millennium Copyright Act (DMCA 1201), a 1998 law that provides for five-year prison sentences and $500,000 fines for anyone who distributes tools that can modify DRM. The W3C considered – and rejected – a binding covenant that would protect technologists who added more accessibility features to EME.

The upshot of this is that EME’s accessibility features are limited to the suite that a handful of giant technology companies have decided are important enough to develop, and that suite is hardly comprehensive. You can’t (legally) modify an EME-restricted stream to shift the colors to ones that aren’t affected by your color-blindness. You certainly can’t run code that buffers the video and looks ahead to see if there are any seizure-triggering strobe effects, and dampens them if there are. 
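
None of this is technically hard; what stops it is legal risk. Purely for illustration, here’s a hedged sketch of a strobe dampener – a simplified variant that smooths harsh brightness jumps between successive frames rather than buffering and looking ahead. The frame format and the flash threshold are assumptions, not a real photosensitivity guideline; DMCA 1201, not engineering difficulty, is what makes shipping a tool like this for EME-restricted video legally radioactive.

```python
import numpy as np

# Illustrative only: dampen harsh brightness jumps between frames.
# Frames are assumed to be RGB uint8 numpy arrays; the threshold is
# an arbitrary stand-in for a real photosensitivity guideline.

FLASH_THRESHOLD = 40.0  # max allowed jump in mean luminance (0-255 scale)

def luminance(frame):
    """Mean luminance of an RGB frame, using Rec. 601 weights."""
    return float(np.dot(frame.reshape(-1, 3).mean(axis=0),
                        [0.299, 0.587, 0.114]))

def dampen_strobes(frames):
    """Blend any frame whose brightness jumps too sharply from the
    previous output frame, softening strobe-like transitions."""
    out = []
    for frame in frames:
        if out and abs(luminance(frame) - luminance(out[-1])) > FLASH_THRESHOLD:
            # Average the offending frame with the previous output frame.
            frame = ((frame.astype(np.float32) +
                      out[-1].astype(np.float32)) / 2).astype(np.uint8)
        out.append(frame)
    return out

# Demo on synthetic frames: black, white, black (a harsh strobe).
frames = [np.full((4, 4, 3), v, dtype=np.uint8) for v in (0, 255, 0)]
for i, f in enumerate(dampen_strobes(frames)):
    print(f"frame {i}: mean luminance {luminance(f):.0f}")
```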

It’s nice that companies like Apple, Google and Netflix put a lot of thought into making EME video accessible, but it’s unforgivable that they arrogated to themselves the sole right to do so. No one should have that power.

It’s bad enough when DRM infects your video streams, but when it comes for hardware, things get really ugly. Powered wheelchairs – a sector dominated by a cartel of private-equity backed giants that have gobbled up all their competing firms – have a serious DRM problem.

Powered wheelchair users who need even basic repairs are corralled by DRM into using the manufacturer’s authorized depots, often enduring long waits during which they are unable to leave their homes or even their beds. Even small routine adjustments, like changing the wheel torque after adjusting your tire pressure, can require an official service call.

Colorado passed the country’s first powered wheelchair Right to Repair law in 2022. Comparable legislation is now pending in California, and the Federal Trade Commission has signaled that it will crack down on companies that use DRM to block repairs. But the wheels of justice grind slow – and wheelchair users’ own wheels shouldn’t be throttled to match them.

People with disabilities don’t just rely on devices that their bodies go into; gadgets that go into our bodies are increasingly common, and there, too, we have a DRM problem. DRM is common in implants like continuous glucose monitors and insulin pumps, where it is used to lock people with diabetes into a single vendor’s products, as a prelude to gouging them (and their insurers) for parts, service, software updates and medicine.

Even when a manufacturer walks away from its products, DRM creates insurmountable legal risks for third-party technologists who want to continue to support and maintain them. That’s bad enough when it’s your smart speaker that’s been orphaned, but imagine what it’s like to have an orphaned neural implant that no one can support without risking prison time under DRM laws.

Imagine what it’s like to have the bionic eye that is literally wired into your head go dark after the company that made it folds up shop – survived only by the 95-year legal restrictions that DRM law provides for, restrictions that guarantee that no one will provide you with software that will restore your vision.

Every technology user deserves the final say over how the systems they depend on work. In an ideal world, every assistive technology would be designed with this in mind: free software, open-source hardware, and designed for easy repair.

But we’re living in the Bizarro world of assistive tech, where it’s not only normal for tools for people with disabilities to be designed without any consideration for the user’s ability to modify the systems they rely on – companies actually dedicate extra engineering effort to creating legal liability for anyone who dares to adapt their technology to suit their own needs.

Even if you’re able-bodied today, you will likely need assistive technology or will benefit from accessibility adaptations. The curb-cuts that accommodate wheelchairs make life easier for kids on scooters, parents with strollers, and shoppers and travelers with rolling bags. The subtitles that make TV accessible to Deaf users allow hearing people to follow along when they can’t hear the speaker (or when the director deliberately chooses to muddle the dialog). Alt tags in online images make life easier when you’re on a slow data connection.

Fighting for the right of disabled people to adapt their technology is fighting for everyone’s rights.

(EFF extends our thanks to Liz Henry for their help with this article.)

How the FTC Can Make the Internet Safe for Chatbots

June 28, 2024 at 16:13

No points for guessing the subject of the first question the Wall Street Journal asked FTC Chair Lina Khan: of course it was about AI.

Between the hype, the lawmaking, the saber-rattling, the trillion-dollar market caps, and the predictions of impending civilizational collapse, the AI discussion has become as inevitable, as pro forma, and as content-free as asking how someone is or wishing them a nice day.

But Chair Khan didn’t treat the question as an excuse to launch into the policymaker’s verbal equivalent of a compulsory gymnastics exhibition.

Instead, she injected something genuinely new and exciting into the discussion, by proposing that the labor and privacy controversies in AI could be tackled using her existing regulatory authority under Section 5 of the Federal Trade Commission Act (FTCA5).

Section 5 gives the FTC a broad mandate to prevent “unfair methods of competition” and “unfair or deceptive acts or practices.” Chair Khan has made extensive use of these powers during her first term as chair, for example, by banning noncompetes and taking action on online privacy.

At EFF, we share many of the widespread concerns over privacy, fairness, and labor rights raised by AI. We think that copyright law is the wrong tool to address those concerns, both because of what copyright law does and doesn’t permit, and because establishing copyright as the framework for AI model-training will not address the real privacy and labor issues posed by generative AI. We think that privacy problems should be addressed with privacy policy and that labor issues should be addressed with labor policy.

That’s what made Chair Khan’s remarks so exciting to us: in proposing that Section 5 could be used to regulate AI training, Chair Khan is opening the door to addressing these issues head on. The FTC Act gives the FTC the power to craft specific, fit-for-purpose rules and guidance that can protect Americans’ consumer, privacy, labor and other rights.

Take the problem of AI “hallucinations,” which is the industry’s term for the seemingly irrepressible propensity of chatbots to answer questions with incorrect answers, delivered with the blithe confidence of a “bullshitter.”

The question of whether chatbots can be taught not to “hallucinate” is far from settled. Some industry leaders think the problem can never be solved, even as startups publish (technically impressive-sounding, but non-peer reviewed) papers claiming to have solved the problem.

Whether or not the problem can be solved, it’s clear that for the commercial chatbot offerings in the market today, “hallucinations” come with the package. Or, put more simply: today’s chatbots lie, and no one can stop them.

That’s a problem, because companies are already replacing human customer service workers with chatbots that lie to their customers, causing those customers real harm. It’s hard enough to attend your grandmother’s funeral without the added pain of your airline’s chatbot lying to you about the bereavement fare.

Here’s where the FTC’s powers can help the American public:

The FTC should issue guidance declaring that any company that deploys a chatbot that lies to a customer has engaged in an “unfair and deceptive practice” that violates Section 5 of the Federal Trade Commission Act, with all the fines and other penalties that entails.

After all, if a company doesn’t get in trouble when its chatbot lies to a customer, why would they pay extra for a chatbot that has been designed not to lie? And if there’s no reason to pay extra for a chatbot that doesn’t lie, why would anyone invest in solving the “hallucination” problem?

Guidance that promises to punish companies that replace their human workers with lying chatbots will give new companies that invent truthful chatbots an advantage in the marketplace. If you can prove that your chatbot won’t lie to your customers’ users, you can also get an insurance company to write you a policy that will allow you to indemnify your customers against claims arising from your chatbot’s output.

But until someone does figure out how to make a “hallucination”-free chatbot, guidance promising serious consequences for chatbots that deceive users with “hallucinated” lies will push companies to limit the use of chatbots to low-stakes environments, leaving human workers to do their jobs.

The FTC has already started down this path. Earlier this month, FTC Senior Staff Attorney Michael Atleson published an excellent backgrounder laying out some of the agency’s thinking on how companies should present their chatbots to users.

We think that more formal guidance about the consequences for companies that save a buck by putting untrustworthy chatbots on the front line will do a lot to protect the public from irresponsible business decisions – especially if that guidance is backed up with muscular enforcement.

Wanna Make Big Tech Monopolies Even Worse? Kill Section 230

It’s no fun when your friends ask you to take sides in their disputes. The plans for every dinner party, wedding, and even funeral arrive at a juncture where you find yourself thinking, “Dang, if I invite her, then he won’t come.”

It’s even less fun when you’re running an online community, from a groupchat to a Mastodon server (or someday, a Bluesky server), or any other (increasingly cheap and easy) space where your friends (and their friends) can hang out online, far from the unquenchable dumpster-fires of Big Tech social media.

But there’s a circle of hell that’s infinitely worse than being asked to choose sides in a flamewar: being threatened with a lawsuit for refusing to do so (or even for complying with one side’s request over the other).

Take Action

Tell Congress: Ending Section 230 Will Hurt Users

At EFF, we’ve had decades of direct experience with the, uh, heated rhetoric that attends online disputes (there’s a reason the most famous law about online arguments was coined by the very first person EFF ever hired).

That’s one of the reasons we’re such big fans of Section 230 (47 U.S.C. § 230), a much-maligned, badly misunderstood law that protects people who run online services from being dragged into legal disputes between their users.

Getting sued can profoundly disrupt your life, even if you win. Much of the time, people on the receiving end of legal threats are forced to settle because they can’t afford to defend themselves in court. There's a whole cottage industry of legal bullies who’ll help the thin-skinned, vindictive and deep-pocketed to silence their critics.

That’s why we were so alarmed to see a bill introduced in the House Energy and Commerce Committee that would sunset Section 230 as of December 31, 2025, with no provision to protect online service providers from being conscripted into their users’ online disputes and the legal battles that arise from them.

Homely places on the internet aren’t just a curiosity anymore, nor are they merely a hangover from the Web 1.0 era.

In an age of resurgent anti-monopoly activism, small online communities, either standing on their own, or joined in loose “federations,” are the best chance we have to escape Big Tech’s relentless surveillance and clumsy, unaccountable control.

Look, running online communities is already a thankless task that can convert a generous digital host into a bitter ex-online host.

The alternatives to Big Tech come from individuals, co-ops, nonprofits and startups. These cannot exist in a world where we change the law to make people who offer a space where communities may gather vulnerable to being dragged into lawsuits between their community members.

It’s one thing to volunteer your time and resources to create a hospitable place online; it’s another thing entirely to assume an uninsurable risk that could jeopardize your life’s savings, your home, and your retirement fund. Defending against a single such case can cost hundreds of thousands of dollars.

That’s very bad news indeed, because a world without Section 230 will desperately need alternatives to Big Tech.

Big Tech has deep pockets, which means that even if it creates a system of hair-trigger moderation that takes down anything remotely controversial on sight, it will still attract a staggering number of legal threats.

There’s a useful analogy here to FTX, the disgraced, fraudulent cryptocurrency exchange. Like Big Tech, FTX has some genuinely aggrieved users, but FTX has also been targeted by opportunistic treasure hunters who have laid claims against the company totaling 23.6 quintillion dollars.

We know what Big Tech will do in a post-230 world, because some of us are already living in that world. Donald Trump signed SESTA-FOSTA into law in 2018. The law was billed as a narrowly targeted measure to make platforms liable for failing to intervene in cases where they were aware of human trafficking. In practice, the law has been used to indiscriminately target consensual sex work, placing sex workers in harm’s way (just as we predicted).

Without Section 230, Big Tech will shoot first, ask questions later when it comes to taking down controversial online speech (like #MeToo or Black Lives Matter). For marginalized users with little social power (again, like #MeToo or Black Lives Matter participants), Big Tech takedowns will be permanent, because Big Tech has no incentive to figure out whether it’s worth hosting their speech.

Meanwhile, for the wealthy and powerful, a post-230 world is one where dictators, war criminals, and fraudsters will have a new, powerful tool to silence their critics.

A post-230 world, in other words, is a world where Big Tech is infinitely worse for the users who already suffer most from the large platforms’ moderation failures.

But it’s also a world where it’s infinitely harder to start an alternative to Big Tech’s gigantic walled gardens.

No wonder tech billionaires support getting rid of Section 230: they understand that their overgrown, universally loathed services are vulnerable to real alternatives.

Four years ago, the Biden Administration declared that promoting competition was a whole-of-government priority (and we cheered). Getting rid of Section 230 will do the opposite: freeze the internet in its current, monopolized state, creating a world where the rule of today’s tech barons is never challenged by a more democratic, user-centric internet.

Take Action

Ending Section 230 Will Make Big Tech Monopolies Even Worse

Big Tech to EU: "Drop Dead"

The European Union’s new Digital Markets Act (DMA) is a complex, many-legged beast, but at root, it is a regulation that aims to make it easier for the public to control the technology they use and rely on.  

One DMA rule forces the powerful “gatekeeper” tech companies to allow third-party app stores. That means that you, the owner of a device, can decide who you trust to provide you with software for it.  

Another rule requires those tech gatekeepers to offer interoperable gateways that other platforms can plug into - so you can quit using a chat client, switch to a rival, and still connect with the people you left behind (similar measures may come to social media in the future). 

There’s a rule banning “self-preferencing.” That’s when platforms push their often inferior, in-house products and hide superior products made by their rivals. 

And perhaps best of all, there’s a privacy rule, reinforcing the eight-year-old General Data Protection Regulation, a strong privacy law that has been flouted for too long, especially by the largest tech giants.

In other words, the DMA is meant to push us toward a world where you decide which software runs on your devices, where it’s easy to find the best products and services, where you can leave a platform for a better one without forfeiting your social relationships, and where you can do all of this without getting spied on.

If it works, this will get dangerously close to the better future we’ve spent the past thirty years fighting for.

There’s just one wrinkle: the Big Tech companies don’t want that future, and they’re trying their damnedest to strangle it in its cradle.

Right from the start, it was obvious that the tech giants were going to war against the DMA, and the freedom it promised to their users. Take Apple, whose tight control over which software its customers can install was a major concern of the DMA from its inception.

Apple didn’t invent the idea of a “curated computer” that could only run software that was blessed by its manufacturer, but they certainly perfected it. iOS devices will refuse to run software unless it comes from Apple’s App Store, and that control over Apple’s customers means that Apple can exert tremendous control over app vendors, too. 

Apple charges app vendors a whopping 30 percent commission on most transactions, both the initial price of the app and everything you buy from it thereafter. This is a remarkably high transaction fee—compare it to the credit-card sector, itself the subject of sharp criticism for its high 3-5 percent fees. To maintain those high commissions, Apple also restricts its vendors from informing their customers about the existence of other ways of paying (say, via their website) and at various times has also banned its vendors from offering discounts to customers who complete their purchases without using the app.

Apple is adamant that it needs this control to keep its customers safe, but in theory and in practice, Apple has shown that it can protect you without maintaining this degree of control, and that it uses this control to take away your security when it serves the company’s profits to do so. 

Apple is worth between two and three trillion dollars. Investors prize Apple’s stock in large part due to the tens of billions of dollars it extracts from other businesses that want to reach its customers. 

The DMA is aimed squarely at these practices. It requires the largest app store companies to grant their customers the freedom to choose other app stores. Companies like Apple were given over a year to prepare for the DMA, and were told to produce compliance plans by March of this year. 

But Apple’s compliance plan falls very short of the mark: between a blizzard of confusing junk fees (like the €0.50 per-install “Core Technology Fee” that the most popular apps will have to pay Apple even if they are sold through a rival store) and onerous conditions (app makers who try to sell through a rival app store have their offerings removed from Apple’s store, and are permanently banned from it), the plan in no way satisfies the EU’s goal of fostering competition in app stores.

That’s just scratching the surface of Apple’s absurd proposal: Apple’s customers will have to successfully navigate a maze of deeply buried settings just to try another app store (and there are some pretty cool-sounding app stores in the wings!), and Apple will disable all your third-party apps if you take your phone out of the EU for 30 days.

Apple appears to be playing a high-stakes game of chicken with EU regulators, effectively saying, “Yes, you have 500 million citizens, but we have three trillion dollars, so why should we listen to you?” Apple inaugurated this performance of noncompliance by banning Epic, the company most closely associated with the EU’s decision to require third party app stores, from operating an app store and terminating its developer account (Epic’s account was later reinstated after the EU registered its disapproval). 

It’s not just Apple, of course.  

The DMA includes new enforcement tools to finally apply the General Data Protection Regulation (GDPR) to US tech giants. The GDPR is Europe’s landmark privacy law, but in the eight years since its passage, Europeans have struggled to use it to reform the terrible privacy practices of the largest tech companies.

Meta is one of the worst on privacy, and no wonder: its entire business is grounded in the nonconsensual extraction and mining of billions of dollars’ worth of private information from billions of people all over the world. The GDPR should be requiring Meta to actually secure our willing, informed (and revocable) consent to carry on all this surveillance, and there’s good evidence that more than 95 percent of us would block Facebook spying if we could. 

Meta’s answer to this is a “Pay or Okay” system, in which users who do not consent to Meta’s surveillance will have to pay to use the service, or be blocked from it. Unfortunately for Meta, this is prohibited (privacy is not a luxury good that only the wealthiest should be afforded).  

Just like Apple, Meta is behaving as though the DMA permits it to carry on its worst behavior, with minor cosmetic tweaks around the margins. Just like Apple, Meta is daring the EU to enforce its democratically enacted laws, implicitly promising to pit its billions against Europe’s institutions to preserve its right to spy on us. 

These are high-stakes clashes. As the tech sector grew more concentrated, it also grew less accountable, able to substitute lock-in and regulatory capture for making good products and having their users’ backs. Tech has found new ways to compromise our privacy rights, our labor rights, and our consumer rights - at scale. 

After decades of regulatory indifference to tech monopolization, competition authorities all over the world are taking on Big Tech. The DMA is by far the most muscular and ambitious salvo we’ve seen. 

Seen in that light, it’s no surprise that Big Tech is refusing to comply with the rules. If the EU successfully forces tech to play fair, it will serve as a starting gun for a global race to the top, in which tech’s ill-gotten gains - of data, power and money - will be returned to the users and workers from whom that treasure came. 

The architects of the DMA and DSA foresaw this, of course. They’ve announced investigations into Apple, Google and Meta, threatening fines of 10 percent of the companies’ global income, which will double to 20 percent if the companies don’t toe the line. 

It’s not just Big Tech that’s playing for all the marbles - it’s also the systems of democratic control and accountability. If Apple can sabotage the DMA’s insistence on taking away its veto over its customers’ software choices, that will spill over into the US Department of Justice’s case over the same issue, as well as the cases in Japan and South Korea, and the pending enforcement action in the UK. 

Privacy First and Competition

“Privacy First” is a simple, powerful idea: seeing as so many of today’s technological problems are also privacy problems, why don’t we fix privacy first?

Whether you’re worried about kids’ mental health, or tech’s relationship to journalism, or spying by foreign adversaries, or reproductive rights, or AI deepfakes, or nonconsensual pornography, you’re worried about a problem rooted in the primitive, deplorable state of American privacy law.

It’s really impossible to overstate how bad the state of federal privacy law is in America. The last time the USA got a big, muscular, broadly applicable new consumer privacy law, the year was 1988, and the law was targeted at video-store clerks who leaked your VHS rental history.

It’s been a minute. America is long overdue for a strong, comprehensive privacy law.

A new privacy law will help us with all those issues, and more. It would level the playing field between giants with troves of user data and startups who want to build something better. Such a law would keep competition from becoming a race to the bottom on user privacy.

Importantly, a strong privacy law will go a long way to improving the dismal state of competition in America’s ossified and decaying tech sector.

Take the tech sector’s relationship to the news media. The ad-tech duopoly has rigged the advertising market and takes $0.51 out of every advertising dollar. Without their vast troves of nonconsensually harvested personal data, Meta and Google wouldn’t be able to misappropriate billions from the publishers. Banning surveillance advertising wouldn’t just be good for our privacy - it would give publishers leverage to shift those billions back onto their own balance sheets. 

Undoing market concentration will require interoperability so that users can move from dominant services to new, innovative rivals without losing their data and relationships. The biggest challenge to interoperability? Privacy. Every time a user moves from one service to another, the resulting data-flows create risks for those users and their friends, families, customers and other social connections. Congress knows this, which is why every proposed interoperability law incorporates its own little privacy law. Privacy shouldn’t be an afterthought in a tech regulation. A standalone privacy law would give lawmakers the freedom to promote interoperability without having to work out a new privacy system for each effort.

That’s also true of Right to Repair laws: these laws are routinely opposed by tech monopolists who insist that giving Americans the right to choose their own repair shop or parts exposes them to privacy risks. It’s true that our devices harbor vast troves of sensitive information - but that doesn’t mean we should let Big Tech (or Big Car) monopolize repair. Instead, we should require everyone - both original manufacturers and independent repair shops - to honor your privacy.

America’s legal privacy vacuum is largely the result of the commercial surveillance industry’s lobbying power. Increasing competition in the tech sector won’t just help our privacy: it’ll also weaken tech’s lobbying power, which is a function of the vast profits that can be extracted in the absence of “wasteful competition” and the ease with which a concentrated sector can converge on a common lobbying position. 

That’s why EFF has urged the FTC and DOJ to consider privacy impacts when scrutinizing proposed mergers: not just to protect internet users from the harms of surveillance business models, but to protect democracy from the corrupting influence of surveillance cartels.

Privacy isn’t dead. Far from it. For a quarter of a century, would-be tech monopolists have been insisting that we have no privacy and telling us to “get over it.” The vast majority of the public wants privacy and will take it if offered, and grab it if it’s not.  

Whenever someone tells you that privacy is dead, they’re just wishcasting. What they mean is: “If I can convince you privacy is dead, I can make more money at your expense.”

Monopolists want us to believe that their power over our lives is inevitable and unchangeable, just as the surveillance industry banks on convincing you that the fight for privacy was and always will be a lost cause. But we once had a better internet, and we can get a better internet again. The fight for that better internet starts with privacy, a battle that we all want to win.

Hip Hip Hooray For Hipster Antitrust

February 14, 2024 at 18:58

Don’t believe the hype.

The undeniable fact is that the FTC has racked up a long list of victories over corporate abuses, like busting a nationwide, decades-long fraud that tricked people into paying for “free” tax preparation.

The wheels of justice grind slowly, so many of the actions the FTC has brought are still pending. But these actions are significant. In tandem with the Department of Justice, it is suing over fake apartment listings, blocking noncompete clauses, targeting fake online reviews, and going after gig work platforms for ripping off their workers.

Companies that abuse our privacy and trust are being hit with massive fines: $520 million for Epic’s tricks to get kids to spend money online, $20 million to punish Microsoft for spying on kids who use Xboxes, and a $25 million fine against Amazon for capturing voice recordings of kids and storing kids’ location data.

The FTC is using its authority to investigate many forms of digital deception, from deceptive and fraudulent online ads to the use of cloud computing to lock in business customers to data brokers’ sale of our personal information.

And of course, the FTC is targeting anticompetitive mergers, like Nvidia’s attempted takeover of ARM - which has the immediate effect of preventing an anticompetitive merger and the long-term benefit of deterring future attempts at similar oligopolistic mergers. They’ve also targeted private equity “rollups,” which combine dozens or hundreds of smaller companies into a monopoly with pricing power over its customers and the whip hand over its workers. These kinds of rollups are all too common, and destructive of offline and online services alike.

From Right to Repair to Click to Cancel to fines for deceptive UI (“dark patterns”), the FTC has taken up many of the issues we’ve fought for over the years. So the argument that the FTC is a do-nothing agency wasting our time with grandstanding stunts is just factually wrong. As recently as December 2023, the FTC and DOJ chalked up ten major victories.

But this “win/loss ratio” accounting also misses the point. Even if the outcome isn’t guaranteed, this FTC refuses to turn a blind eye to abuses of the American public.

What’s more, the FTC collaborated with the DOJ on new merger guidelines that spell out what kinds of mergers are likely to be legal. These are the most comprehensive, future-looking guidelines in generations, and they tee up enforcement actions for this FTC and its successors for many years to come.

The FTC is also seeking to revive existing laws that have lain dormant for too long. As John Mark Newman explains, this FTC has cannily filed cases that reassert its right to investigate “competing” companies with interlocking directorates.

Newman also praises the FTC for “supercharging student interest in the field,” with law schools seeing surging interest in antitrust courses and a renaissance in law review articles about antitrust enforcement. 

The FTC is not alone in this. Its colleagues in the DOJ’s antitrust division have their own long list of victories.

But the most important victory for America’s antitrust enforcers is what doesn’t happen. Across the economy and every sector, corporate leaders are backing away from merger-driven growth and predatory pricing, deterred from violating the law by the knowledge that the generations-long period of tolerance for lawless corporate abuse is coming to a close.

Even better, America’s antitrust enforcers don’t stand alone. At long last, it seems that the whole world is reversing decades of tacit support for oligopolies and corporate bullying. 

The Great Interoperability Convergence: 2023 Year in Review

December 21, 2023 at 11:08

It’s easy to feel hopeless about the collapse of the tech sector into a group of monopolistic silos that harvest and exploit our data, hold our communities hostage, gouge us on prices, and steal our wages.

But all over the world and across different government departments, policymakers are converging on a set of muscular, effective solutions to Big Tech dominance.

This convergence spans financial regulators and consumer protection agencies; it’s emerging in Europe, the USA, and the UK. It’s kind of a moment.

How Not To Fix Big Tech 

To understand what’s new in Big Tech regulation, we should talk briefly about what’s old. For many years, policymakers have viewed the problems of Big Tech as tech problems, not big problems. From disinformation to harassment to copyright infringement, the go-to policy response of the past two decades has been to make tech platforms responsible for policing and controlling their users.

This approach starts from the assumption that the problems that occur after hundreds of millions or billions of people are locked inside of a platform’s walled garden are problems of mismanagement, not problems of scale. The thinking goes that the dictators of these platforms aren’t sufficiently benevolent or competent, and they must either be incentivized to do better or be replaced with more suitable autocrats.

This approach has consistently failed - gigantic companies have proved as unperfectable as they are ungovernable. What’s more, deputizing giant companies to police their users has the perverse effect of making them more powerful by creating barriers to entry that clear the field of competitors who might offer superior alternatives for both users and business customers.

Take copyright enforcement: in 2019, the EU passed a rule requiring platforms to intercept and filter all their users’ communications to screen out copyright infringement. These filters are stupendously expensive to build - YouTube’s version of them, the notorious Content ID, has cost Google more than $100 million to build and maintain. Not only is the result an unnavigable, Kafkaesque nightmare for creators, it’s also far short of what the EU rule requires.
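
To see why such filters are both expensive and brittle, consider a toy version, sketched below. (This is emphatically not how Content ID works; it exists only to show the filter designer’s dilemma.) Exact hash matching is cheap, but a one-byte change to a file defeats it, so real filters must do fuzzy matching of audio and video features, which is expensive to build and produces the false positives creators complain about.

```python
import hashlib

# Toy upload filter: block uploads whose exact bytes match a known
# copyrighted work. Cheap, but trivially evaded, which is why real
# filters need fuzzy (expensive, false-positive-prone) matching.

BLOCKLIST = {
    hashlib.sha256(b"famous-song-audio-bytes").hexdigest(),
}

def allowed(upload: bytes) -> bool:
    """Return True if the upload is not an exact copy of a blocked work."""
    return hashlib.sha256(upload).hexdigest() not in BLOCKLIST

print(allowed(b"famous-song-audio-bytes"))    # False: exact copy is blocked
print(allowed(b"famous-song-audio-bytes!"))   # True: one extra byte slips through
```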

Any law that requires every digital service to mobilize the resources of a trillion-dollar multinational will tend to produce an internet run by trillion-dollar multinationals.

A Better Approach

We think that the biggest problem facing the internet today is bigness itself. Very large platforms are every bit as capable of committing errors in judgment or making trade-offs that harm their users as small platforms. The difference is that when very large platforms make even small errors, millions or even billions of users are in harm’s way.

What’s more, if users are trapped inside these platforms - by high switching costs, data lock-in, or digital rights management - they pay a steep price for seeking out superior alternatives. And in a market dominated by large firms who have locked in their users, investors are unwilling to fund those alternatives.

For EFF, the solution to Big Tech is smaller tech: allowing lots of different kinds of organizations (from startups to user groups to nonprofits to local governments to individual tinkerers) to provide interoperable services that all work together. These smaller platforms are closer to their users, and stand a better chance of parsing out the fine-grained nuances in community moderation. Smaller platforms are easier to regulate, too.

Giving users the choice of more, interoperable platforms that are less able to capture their regulators means that if a platform changes the rules in ways you dislike, you can go elsewhere, or simply revert those bad changes with a plugin that makes the system work better for you.

Interoperability From the Top Down and the Bottom Up

Since the earliest days of the internet, interoperability has been a key driver of technological self-determination for users. Sometimes, that interoperability was attained through adherence to formal standards, but often interoperability was hacked into existing, dominant services by upstarts who used careful reverse-engineering, bots, scraping, and other adversarial interoperability techniques to let users leave or modify the products and services they relied on.

Decades of anticompetitive mergers and acquisitions by tech companies have created a highly concentrated internet where companies no longer feel the pressure to interoperate, and where attempts to correct this discrepancy with unauthorized plugins, scraping or other guerrilla tactics give rise to eye-watering legal risks.

The siloing of the internet is the result of both too little tech regulation and too much.

In failing to block anticompetitive mergers, regulators allowed a few companies to buy their way to near-total dominance, and to use that dominance to prevent other forms of regulation and enforcement on issues like privacy, labor and consumer protection.

Meanwhile, legal restrictions on reverse-engineering and on violating terms of service have all but ended the high-tech liberation tactics of an earlier era.

To make the internet better, policymakers need to make it easier for better services to operate, and for users to switch to those services. Policymakers also need to protect users’ privacy, labor, and consumer rights from abuse by today’s giant services and the smaller services that will come next.

Privacy Without Monopoly, Then and Now

Two years ago, we published Privacy Without Monopoly, a detailed analysis of the data-protection issues associated with a transition from a siloed, monopolized internet to a decentralized, interoperable internet.

Dominant platforms, from Apple to Facebook to Google, point to the many times that they step in to protect their users from bad actors, but are conspicuously silent about the many times when their users come to harm when they are targeted by the companies who own the dominant platforms.

In Privacy Without Monopoly, we argue that it’s possible for internet users to have the benefits of being protected by tech platforms, without the risks of being victimized by them. To get the best of both worlds, governments must withdraw tech platforms’ legal right to block interoperators, while simultaneously creating strong privacy protections for users.

That means that tech companies can still take technical actions to block bad actors from abusing their platforms, but if they want to enlist the law to aid them in doing so, they must show that their adversaries are violating their users’ legal rights to privacy.

Under this system, the final word on which privacy rights a platform’s users are entitled to comes from democratically accountable lawmakers who legislate in public - not from shareholder-accountable executives who make policies behind locked boardroom doors.

Convergence, At Last

This past year has been a very good one for this approach. 2023 saw regulators challenging the market power of the largest tech companies and even beginning the long, slow process of restoring a prudent regime of merger scrutiny.

The global resurgence of long-dormant antitrust enforcement is a welcome development, but at EFF, we think that interoperability, backstopped by privacy and other legal protections, offers a more immediate prospect of relief and protection for users.

That’s why we’ve been so glad to see 2023’s other developments, ones that aim to make it easier for users to leave Big Tech and go somewhere smaller and more responsive to their needs.

In Europe, the Digital Markets Act, passed into law in 2022, has made significant progress towards a regime of mandatory interoperability for the largest platforms. In the USA, the bipartisan AMERICA Act could require ad-tech giants to break into interoperable pieces, a key step towards a more secure economic future for the news industry.

The US Consumer Financial Protection Bureau is advancing a rule to force banks to support interoperable standards to facilitate shopping for a better bank and then switching to it. This rule explicitly takes away incumbents’ power to block new market entrants in the name of protecting users’ privacy. Instead, it establishes bright-line rules restricting what the finance sector may do with users’ data. What’s more, this rule acknowledges the importance of adversarial interoperability, by including a framework for scraping user data on behalf of the user (a tactic with a proven track record for getting users a better deal from their bank).

Finally, in the UK, the long-overdue Digital Markets, Competition and Consumers Bill has been introduced. This bill will give the Competition and Markets Authority’s large and exceptionally skilled Digital Markets Unit the enforcement powers it was promised when it was formed in 2021. Among these proposed powers is the ability to impose interoperability mandates on the largest tech companies, something the agency has already investigated in detail.

With lawmakers from different domains and territories all converging on approaches that solve the very real problems of bad platforms by centering user choice and user protections, tech regulation is at a turning point: away from the hopeless task of perfecting Big Tech and towards the necessary work of abolishing Big Tech.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Without Interoperability, Apple Customers Will Never Be Secure

13 December 2023 at 14:18

Every internet user should have the ability to privately communicate with the people that matter to them, in a secure fashion, using the tools and protocols of their choosing.

Apple’s iMessage offers end-to-end encrypted messaging for its customers, but only if those customers want to talk to someone who also has an Apple product. When an Apple customer tries to message an Android user, the data is sent over SMS, a protocol that debuted while Wayne’s World was still in its first theatrical run. SMS is wildly insecure, but when Apple customers ask the company how to protect themselves while exchanging messages with Android users, Apple’s answer is “buy them iPhones.”

That’s an obviously false binary. Computers are all roughly equivalent, so there’s no reason that an Android device couldn’t run an app that could securely send and receive iMessage data. If Apple won’t make that app, then someone else could. 

That’s exactly what Apple did, back when Microsoft refused to make a high-quality MacOS version of Microsoft Office: Apple reverse-engineered Office and released iWork, whose Pages, Numbers, and Keynote could perfectly read and write Microsoft’s Word, Excel, and PowerPoint files.

Back in September, a 16-year-old high school student reverse-engineered iMessage and released Pypush, a free software library that reimplements iMessage so that anyone can send and receive secure iMessage data, maintaining end-to-end encryption, without the need for an Apple ID.

Last week, Beeper, a multiprotocol messaging company, released Beeper Mini, an alternative iMessage app reportedly based on the Pypush code that runs on Android, giving Android users the “blue bubble” that lets Apple customers communicate securely with them. Beeper Mini stands out from earlier attempts by allowing users’ devices to communicate directly with Apple’s servers, rather than breaking end-to-end encryption by having messages decrypted and re-encrypted by servers in a data center.
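To make that security point concrete, here is a minimal sketch of end-to-end encryption using the PyNaCl library. This is not Apple’s or Beeper’s actual protocol, and every name below is illustrative; the point is simply that when devices encrypt directly to each other’s keys, a relay in the middle never needs, and never sees, the plaintext:

```python
# A minimal sketch of end-to-end encryption, using PyNaCl
# (pip install pynacl). Illustrative only; not Apple's or
# Beeper's real protocol.
from nacl.public import PrivateKey, Box

# Each device generates its own keypair; private keys never leave the device.
android_key = PrivateKey.generate()
iphone_key = PrivateKey.generate()

# The sender encrypts directly to the recipient's public key.
sender_box = Box(android_key, iphone_key.public_key)
ciphertext = sender_box.encrypt(b"hello from Android")

# Any relay between the two devices sees only this ciphertext. A design
# that decrypts and re-encrypts in a data center would instead need a
# private key on that server, breaking end-to-end encryption.
receiver_box = Box(iphone_key, android_key.public_key)
assert receiver_box.decrypt(ciphertext) == b"hello from Android"
```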

Beeper Mini is an example of “adversarial interoperability.” That’s when you make something new work with an existing product, without permission from the product’s creator.

(“Adversarial interoperability” is quite a mouthful, so we came up with “competitive compatibility” or “comcom” as an alternative term.)

Comcom is how we get third-party inkjet ink that undercuts HP’s $10,000/gallon cartridges, and it’s how we get independent repair from technicians who perform feats the manufacturer calls “impossible.” Comcom is where iMessage itself comes from: it started life as iChat, with support for existing protocols like XMPP.

Beeper Mini makes life more secure for Apple users in two ways: first, it protects the security of the messages they send to people who don’t use Apple devices; and second, it makes it easier for Apple users to switch to a rival platform if Apple’s management changes direction in ways that deprioritize their privacy.

Apple doesn’t agree. It blocked Beeper Mini users just days after the app’s release. Apple told The Verge’s David Pierce that it had blocked Beeper Mini users because Beeper Mini “posed significant risks to user security and privacy, including the potential for metadata exposure and enabling unwanted messages, spam, and phishing attacks.”

If Beeper Mini indeed posed those risks, then Apple has a right to take action on behalf of its users. The only reason to care about any of this is if it makes users more secure, not because it serves the commercial interests of either Apple or Beeper. 

But Apple’s account of Beeper Mini’s threats does not square with the technical information Beeper has made available. Apple didn’t provide any specifics to bolster its claims. Large tech firms who are challenged by interoperators often smear their products as privacy or security risks, even when those claims are utterly baseless.

The gold standard for security claims is technical proof, not vague accusations. EFF hasn’t audited Beeper Mini and we’d welcome technical details from Apple about these claimed security issues. While Beeper hasn’t published the source code for Beeper Mini, they have offered to submit it for auditing by a third party.

Beeper Mini is back. The company released an update on Monday that restored its functionality. If Beeper Mini does turn out to have security defects, Apple should protect its customers by making it easier for them to connect securely with Android users.

One thing that won’t improve the security of Apple users is for Apple to devote its engineering resources to an arms race with Beeper and other interoperators. In a climate of stepped-up antitrust enforcement, and as regulators around the world start to force interoperability on tech giants, pointing at interoperable products and shouting “Insecure! Insecure!” no longer cuts it.

Apple needs to acknowledge that it isn’t the only entity that can protect Apple customers.

You Wanna Break Up With Your Bank? The CFPB Wants to Help You Do It.

31 October 2023 at 09:14

The Consumer Financial Protection Bureau has proposed a new “Personal Financial Data Rights” rule that would force your bank to make it easy for you to extract your financial data so that you can use it to comparison shop for a better offer, and switch to another bank with just a few clicks.

This is a very good idea, provided it’s done right. Done wrong, it could be a nightmare. Below, we explain what the Bureau should do to avoid the nightmare and realize the dream.

We’ve all heard that “if you’re not paying for the product, you’re the product.” But time and again, companies have proven that they’re not shy about treating you like the product, no matter how much you pay them.

What makes a company treat you like a customer, and not the product? Fear. Companies treat their customers with dignity when they fear losing their business, or when they fear getting punished by regulators. Decades of lax antitrust and consumer protection enforcement have ensured that in most industries, companies don’t need to fear either.

Companies without real competitors have it easy: if you need their services, they can siphon off value from you and give it to themselves, without worrying about you leaving. As the old Lily Tomlin gag goes, “We Don't Care. We Don't Have To. We're the Phone Company.”

But even when companies do have competition they can rig the game so that it’s hard for you to break up with them and fall into a rival’s arms. Companies create high switching costs that lock you into their business. Remember when cellphone companies forced you to throw away your phone and your phone number when you changed carriers? 

When the cost of leaving a company is higher than the cost of staying, you’ll stay. The more costly a company can make your departure, the worse it can treat you before it has to worry about you leaving.

Leaving your bank can be very costly indeed. First, there’s the cost associated with bringing along all your financial data: your account history, the payees you have accounts with, and so on.

Then there’s the cost of figuring out which bank would be better for you. Maybe another bank charges more for checks and less for electronic payments, but has a higher overdraft fee. Given that you don’t write checks at all, but use a lot of electronic payments, and typically get dinged for an overdraft twice per year, should you make the switch?
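A back-of-the-envelope calculation makes this concrete. The fee schedules below are entirely made up, but they show how the right answer depends on your own usage profile rather than on any single headline fee:

```python
# Illustrative only: made-up fee schedules for two hypothetical banks.
def annual_cost(fees, checks=0, e_payments=200, overdrafts=2):
    """Total yearly cost for a given usage profile."""
    return (checks * fees["per_check"]
            + e_payments * fees["per_e_payment"]
            + overdrafts * fees["overdraft"])

current_bank = {"per_check": 0.25, "per_e_payment": 0.50, "overdraft": 25.00}
other_bank   = {"per_check": 0.75, "per_e_payment": 0.10, "overdraft": 35.00}

# No checks, 200 electronic payments, two overdrafts per year:
print(annual_cost(current_bank))  # 150.0
print(annual_cost(other_bank))    # 90.0 -> switching saves $60/year
```

Even though the other bank charges more for checks and overdrafts, this particular customer comes out ahead by switching; a neighbor with a different usage pattern might not.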

The new CFPB proposal takes aim at both of these costs. Under the proposed rules, your bank or other financial institution will have to give you a simple way to export your data in a “machine-readable” format that can be read by comparison shopping sites and other banks. 
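The proposal leaves the exact format to future standard-setting, so the JSON below is purely hypothetical; it just illustrates what “machine-readable” buys you, namely that a program can consume the data directly, with no manual re-entry:

```python
# A hypothetical machine-readable export; the schema is invented for
# illustration and is not prescribed by the CFPB rule.
import json

export = json.loads("""
{
  "account": {"type": "checking", "opened": "2015-03-02"},
  "fees": {"overdraft": 25.00, "monthly_service": 12.00},
  "interest_rate_apy": 0.01,
  "payees": [{"name": "City Utilities", "routing": "XXXXXXXXX"}],
  "transactions": [
    {"date": "2023-09-30", "amount": -42.17, "payee": "City Utilities"}
  ]
}
""")

# A comparison site or a new bank can read this directly.
print(export["fees"]["overdraft"])  # 25.0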

That’ll make it easier for you to figure out which bank is best for you, and to make the switch when you do. Who knows, maybe it’ll even convince your bank to treat you better (and if it doesn’t, well, you can leave).

EFF has always supported “data portability.” Technological self-determination starts with controlling your data: having a copy of your own, and deciding who else gets that copy. But with data portability, the devil is always in the details.

Financial data is some of the most sensitive data around. When your data gets into the wrong hands, you’re at risk of identity theft and fraud, as well as the usual privacy risks associated with your personal data getting spread around online.

For decades, companies have offered to help you get your data out of your bank. In the absence of a formal standard for moving that data around, these companies “scraped” the data from your bank, using your username and password to log in to your bank as you and then slurp up the account data from your bank’s website. 
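For the curious, here is roughly how such a scraper works, reduced to its bare bones. Everything below is hypothetical (there is no example-bank.test, and the form fields and page structure are invented); real scrapers are far messier, but the shape is the same: log in as the user, fetch the pages a human would see, and parse the HTML back into data:

```python
# A bare-bones sketch of a finance scraper. URLs, credentials, and HTML
# structure are all hypothetical, for illustration only.
import requests
from bs4 import BeautifulSoup

session = requests.Session()

# Step 1: log in as the user, with the user's own credentials.
session.post("https://example-bank.test/login",
             data={"username": "alice", "password": "hunter2"})

# Step 2: fetch the same account page the user would see in a browser.
page = session.get("https://example-bank.test/accounts/history")

# Step 3: parse HTML that was meant for human eyes back into data.
soup = BeautifulSoup(page.text, "html.parser")
rows = [[cell.get_text(strip=True) for cell in row.find_all("td")]
        for row in soup.select("table.transactions tr")]
print(rows)
```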

This kind of scraping is a time-honored part of the adversarial interoperability story: when a tech company won’t give you something that you have a right to, you just take it. 

But there are a lot more people who’d like to get their data out of a bank than are able (or willing) to write their own web-scraper. Instead, we’re likely to use a commercial service that promises to do this for us.

That’s fine, too - provided that the service doesn’t also abuse us. Unfortunately, these finance scrapers have a long and dishonorable history of abusing the data they collect on our behalf - selling it, mining it, and leaking it.

No one is quicker to mention this bad behavior than the banks, of course. As they grapple with these companies that seek to make it easier to take your business elsewhere, the banks are adamant that they’re doing it all for you, to protect you from privacy plunderers. The fact that blocking these scrapers helps the banks keep you locked in is just a happy coincidence.

To hear the banks tell it, the only way to stop other companies from abusing your data is to let them decide when and how you’re allowed to share it. The CFPB offers an alternative to this false binary: rather than letting your (conflicted) bank decide the terms on which other companies can get your data, the CFPB has spelled out its own strict proposed rules about what other companies are allowed to do with that data:

Third parties could not collect, use, or retain data to advance their own commercial interests through actions like targeted or behavioral advertising. Instead, third parties would be obligated to limit themselves to what is reasonably necessary to provide the individual’s requested product.

This is a good start. As we wrote previously, the way to limit corporate abuse of internet users is to ban creepy, exploitative and deceptive practices and punish companies that violate the ban. We can’t trust big companies to decide when a competitor is worthy of your trust. They have an unresolvable conflict of interest.

One thing we’d like to see in that final rule: strong assurances that users will still have the right to use scrapers to get at their data, either because their bank is dragging its feet, or because there’s some data that isn’t captured by this rule.

To protect users who choose to scrape their data, we’d want to apply the same privacy, data minimization and use restrictions to scrapers that the rule would apply to companies that get your data in more formal ways.

This is a promising development! The CFPB has identified a real problem and conceived of a solution that empowers the public to escape commercial traps. Its proposal identifies the privacy risks associated with data portability and seeks to mitigate them. The CFPB has also managed to steer clear of the traps that similar rules have fallen into.
