
How the FTC Can Make the Internet Safe for Chatbots

28 June 2024 at 16:13

No points for guessing the subject of the first question the Wall Street Journal asked FTC Chair Lina Khan: of course it was about AI.

Between the hype, the lawmaking, the saber-rattling, the trillion-dollar market caps, and the predictions of impending civilizational collapse, the AI discussion has become as inevitable, as pro forma, and as content-free as asking how someone is or wishing them a nice day.

But Chair Khan didn’t treat the question as an excuse to launch into the policymaker’s verbal equivalent of a compulsory gymnastics exhibition.

Instead, she injected something genuinely new and exciting into the discussion, by proposing that the labor and privacy controversies in AI could be tackled using her existing regulatory authority under Section 5 of the Federal Trade Commission Act (FTCA5).

Section 5 gives the FTC a broad mandate to prevent “unfair methods of competition” and “unfair or deceptive acts or practices.” Chair Khan has made extensive use of these powers during her first term as chair, for example, by banning noncompetes and taking action on online privacy.

At EFF, we share many of the widespread concerns over privacy, fairness, and labor rights raised by AI. We think that copyright law is the wrong tool to address those concerns, both because of what copyright law does and doesn’t permit, and because establishing copyright as the framework for AI model-training will not address the real privacy and labor issues posed by generative AI. We think that privacy problems should be addressed with privacy policy and that labor issues should be addressed with labor policy.

That’s what made Chair Khan’s remarks so exciting to us: in proposing that Section 5 could be used to regulate AI training, Chair Khan is opening the door to addressing these issues head on. The FTC Act gives the FTC the power to craft specific, fit-for-purpose rules and guidance that can protect Americans’ consumer, privacy, labor and other rights.

Take the problem of AI “hallucinations,” which is the industry’s term for the seemingly irrepressible propensity of chatbots to answer questions with incorrect answers, delivered with the blithe confidence of a “bullshitter.”

The question of whether chatbots can be taught not to “hallucinate” is far from settled. Some industry leaders think the problem can never be solved, even as startups publish (technically impressive-sounding, but non-peer reviewed) papers claiming to have solved the problem.

Whether or not the problem can be solved, it’s clear that for the commercial chatbot offerings on the market today, “hallucinations” come with the package. Or, put more simply: today’s chatbots lie, and no one can stop them.

That’s a problem, because companies are already replacing human customer service workers with chatbots that lie to their customers, causing those customers real harm. It’s hard enough to attend your grandmother’s funeral without the added pain of your airline’s chatbot lying to you about the bereavement fare.

Here’s where the FTC’s powers can help the American public:

The FTC should issue guidance declaring that any company that deploys a chatbot that lies to a customer has engaged in an “unfair or deceptive practice” that violates Section 5 of the Federal Trade Commission Act, with all the fines and other penalties that entails.

After all, if a company doesn’t get in trouble when its chatbot lies to a customer, why would they pay extra for a chatbot that has been designed not to lie? And if there’s no reason to pay extra for a chatbot that doesn’t lie, why would anyone invest in solving the “hallucination” problem?

Guidance that promises to punish companies that replace their human workers with lying chatbots will give new companies that invent truthful chatbots an advantage in the marketplace. If you can prove that your chatbot won’t lie to your customers’ users, you can also get an insurance company to write you a policy that will allow you to indemnify your customers against claims arising from your chatbot’s output.

But until someone does figure out how to make a “hallucination”-free chatbot, guidance promising serious consequences for chatbots that deceive users with “hallucinated” lies will push companies to limit the use of chatbots to low-stakes environments, leaving human workers to do their jobs.

The FTC has already started down this path. Earlier this month, FTC Senior Staff Attorney Michael Atleson published an excellent backgrounder laying out some of the agency’s thinking on how companies should present their chatbots to users.

We think that more formal guidance about the consequences for companies that save a buck by putting untrustworthy chatbots on the front line will do a lot to protect the public from irresponsible business decisions – especially if that guidance is backed up with muscular enforcement.

Wanna Make Big Tech Monopolies Even Worse? Kill Section 230

It’s no fun when your friends ask you to take sides in their disputes. The plans for every dinner party, wedding, and even funeral arrive at a juncture where you find yourself thinking, “Dang, if I invite her, then he won’t come.”

It’s even less fun when you’re running an online community, from a groupchat to a Mastodon server (or someday, a Bluesky server), or any other (increasingly cheap and easy) space where your friends (and their friends) can hang out online, far from the unquenchable dumpster-fires of Big Tech social media.

But there’s a circle of hell that’s infinitely worse than being asked to choose sides in a flamewar: being threatened with a lawsuit for refusing to do so (or even for complying with one side’s request over the other).

Take Action

Tell Congress: Ending Section 230 Will Hurt Users

At EFF, we’ve had decades of direct experience with the, uh, heated rhetoric that attends online disputes (there’s a reason the most famous law about online arguments was coined by the very first person EFF ever hired).

That’s one of the reasons we’re such big fans of Section 230 (47 U.S.C. § 230), a much-maligned, badly misunderstood law that protects people who run online services from being dragged into legal disputes between their users.

Getting sued can profoundly disrupt your life, even if you win. Much of the time, people on the receiving end of legal threats are forced to settle because they can’t afford to defend themselves in court. There's a whole cottage industry of legal bullies who’ll help the thin-skinned, vindictive and deep-pocketed to silence their critics.

That’s why we were so alarmed to see a bill introduced in the House Energy and Commerce Committee that would sunset Section 230 as of December 31, 2025, with no provision to protect online service providers from being conscripted into their users’ online disputes and the legal battles that arise from them.

Homely places on the internet aren’t just a curiosity anymore, nor are they merely a hangover from the Web 1.0 era.

In an age of resurgent anti-monopoly activism, small online communities, either standing on their own, or joined in loose “federations,” are the best chance we have to escape Big Tech’s relentless surveillance and clumsy, unaccountable control.

Look, running online communities is already a thankless task that can convert a generous digital host into a bitter ex-host.

The alternatives to Big Tech come from individuals, co-ops, nonprofits and startups. These cannot exist in a world where we change the law to make people who offer a space where communities may gather vulnerable to being dragged into lawsuits between their community members.

It’s one thing to volunteer your time and resources to create a hospitable place online; it’s another thing entirely to assume an uninsurable risk that could jeopardize your life’s savings, your home, and your retirement fund. Defending against a single such case can cost hundreds of thousands of dollars.

That’s very bad news indeed, because a world without Section 230 will desperately need alternatives to Big Tech.

Big Tech has deep pockets, which means that even if it creates a system of hair-trigger moderation that takes down anything remotely controversial on sight, it will still attract a staggering number of legal threats.

There’s a useful analogy here to FTX, the disgraced, fraudulent cryptocurrency exchange. Like Big Tech, FTX has some genuinely aggrieved users, but FTX has also been targeted by opportunistic treasure hunters who have laid claims against the company totaling 23.6 quintillion dollars.

We know what Big Tech will do in a post-230 world, because some of us are already living in that world. Donald Trump signed SESTA-FOSTA into law in 2018. The law was billed as a narrowly targeted measure to make platforms liable for failing to intervene in cases where they were aware of human trafficking. In practice, the law has been used to indiscriminately target consensual sex work, placing sex workers in harm’s way (just as we predicted).

Without Section 230, Big Tech will shoot first, ask questions later when it comes to taking down controversial online speech (like #MeToo or Black Lives Matter). For marginalized users with little social power (again, like #MeToo or Black Lives Matter participants), Big Tech takedowns will be permanent, because Big Tech has no incentive to figure out whether it’s worth hosting their speech.

Meanwhile, for the wealthy and powerful, a post-230 world is one where dictators, war criminals, and fraudsters will have a new, powerful tool to silence their critics.

A post-230 world, in other words, is a world where Big Tech is infinitely worse for the users who already suffer most from the large platforms’ moderation failures.

But it’s also a world where it’s infinitely harder to start an alternative to Big Tech’s gigantic walled gardens.

No wonder tech billionaires support getting rid of Section 230: they understand that their overgrown, universally loathed services are vulnerable to real alternatives.

Four years ago, the Biden Administration declared that promoting competition was a whole-of-government priority (and we cheered). Getting rid of Section 230 will do the opposite: freeze the internet in its current, monopolized state, creating a world where the rule of today’s tech barons is never challenged by a more democratic, user-centric internet.

Take Action

Ending Section 230 Will Make Big Tech Monopolies Even Worse

Big Tech to EU: "Drop Dead"

The European Union’s new Digital Markets Act (DMA) is a complex, many-legged beast, but at root, it is a regulation that aims to make it easier for the public to control the technology they use and rely on.  

One DMA rule forces the powerful “gatekeeper” tech companies to allow third-party app stores. That means that you, the owner of a device, can decide who you trust to provide you with software for it.  

Another rule requires those tech gatekeepers to offer interoperable gateways that other platforms can plug into - so you can quit using a chat client, switch to a rival, and still connect with the people you left behind (similar measures may come to social media in the future). 

There’s a rule banning “self-preferencing.” That’s when platforms push their often inferior, in-house products and hide superior products made by their rivals. 

And perhaps best of all, there’s a privacy rule, reinforcing the eight-year-old General Data Protection Regulation, a strong privacy law that has been flouted for too long, especially by the largest tech giants.

In other words, the DMA is meant to push us toward a world where you decide which software runs on your devices, where it’s easy to find the best products and services, where you can leave a platform for a better one without forfeiting your social relationships, and where you can do all of this without getting spied on.

If it works, this will get dangerously close to the better future we’ve spent the past thirty years fighting for.

There’s just one wrinkle: the Big Tech companies don’t want that future, and they’re trying their damnedest to strangle it in its cradle.

 Right from the start, it was obvious that the tech giants were going to war against the DMA, and the freedom it promised to their users. Take Apple, whose tight control over which software its customers can install was a major concern of the DMA from its inception.

Apple didn’t invent the idea of a “curated computer” that could only run software that was blessed by its manufacturer, but they certainly perfected it. iOS devices will refuse to run software unless it comes from Apple’s App Store, and that control over Apple’s customers means that Apple can exert tremendous control over app vendors, too. 

Apple charges app vendors a whopping 30 percent commission on most transactions, both the initial price of the app and everything you buy from it thereafter. This is a remarkably high transaction fee - compare it to the credit-card sector, itself the subject of sharp criticism for its high 3-5 percent fees. To maintain those high commissions, Apple also restricts its vendors from informing their customers about the existence of other ways of paying (say, via their website) and at various times has also banned its vendors from offering discounts to customers who complete their purchases without using the app.
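To make that gap concrete, here’s a back-of-the-envelope comparison (the 30 percent and 3-5 percent rates come from the paragraph above; the $9.99 sale price is our own illustrative choice):

```python
# Hypothetical $9.99 app sale: Apple's 30% commission vs. a 3-5% card fee.
price = 9.99
apple_cut = price * 0.30                          # ~$3.00 per sale
card_low, card_high = price * 0.03, price * 0.05  # ~$0.30-$0.50 per sale
print(f"Apple: ${apple_cut:.2f}; card network: ${card_low:.2f}-${card_high:.2f}")
# Apple's commission is roughly 6-10x the fee it is most often compared against.
```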

Apple is adamant that it needs this control to keep its customers safe, but in theory and in practice, Apple has shown that it can protect you without maintaining this degree of control, and that it uses this control to take away your security when it serves the company’s profits to do so. 

Apple is worth between two and three trillion dollars. Investors prize Apple’s stock in large part due to the tens of billions of dollars it extracts from other businesses that want to reach its customers. 

The DMA is aimed squarely at these practices. It requires the largest app store companies to grant their customers the freedom to choose other app stores. Companies like Apple were given over a year to prepare for the DMA, and were told to produce compliance plans by March of this year. 

But Apple’s compliance plan falls very short of the mark: between a blizzard of confusing junk fees (like the €0.50 per-install “Core Technology Fee” that the most popular apps will have to pay Apple even if their apps are sold through a rival store) and onerous conditions (app makers who try to sell through a rival app store have their offerings removed from Apple’s store and are permanently banned from it), the plan in no way satisfies the EU’s goal of fostering competition in app stores.

That’s just scratching the surface of Apple’s absurd proposal: Apple’s customers will have to successfully navigate a maze of deeply buried settings just to try another app store (and there are some pretty cool-sounding app stores in the wings!), and Apple will disable all your third-party apps if you take your phone out of the EU for 30 days.

Apple appears to be playing a high-stakes game of chicken with EU regulators, effectively saying, “Yes, you have 500 million citizens, but we have three trillion dollars, so why should we listen to you?” Apple inaugurated this performance of noncompliance by banning Epic, the company most closely associated with the EU’s decision to require third-party app stores, from operating an app store and terminating its developer account (Epic’s account was later reinstated after the EU registered its disapproval).

It’s not just Apple, of course.  

The DMA includes new enforcement tools to finally apply the General Data Protection Regulation (GDPR) to US tech giants. The GDPR is Europe’s landmark privacy law, but in the eight years since its passage, Europeans have struggled to use it to reform the terrible privacy practices of the largest tech companies.

Meta is one of the worst on privacy, and no wonder: its entire business is grounded in the nonconsensual extraction and mining of billions of dollars’ worth of private information from billions of people all over the world. The GDPR should require Meta to actually secure our willing, informed (and revocable) consent before carrying on all this surveillance, and there’s good evidence that more than 95 percent of us would block Facebook spying if we could.

Meta’s answer to this is a “Pay or Okay” system, in which users who do not consent to Meta’s surveillance will have to pay to use the service, or be blocked from it. Unfortunately for Meta, this is prohibited (privacy is not a luxury good that only the wealthiest should be afforded).  

Just like Apple, Meta is behaving as though the DMA permits it to carry on its worst behavior, with minor cosmetic tweaks around the margins. Just like Apple, Meta is daring the EU to enforce its democratically enacted laws, implicitly promising to pit its billions against Europe’s institutions to preserve its right to spy on us. 

These are high-stakes clashes. As the tech sector grew more concentrated, it also grew less accountable, able to substitute lock-in and regulatory capture for making good products and having their users’ backs. Tech has found new ways to compromise our privacy rights, our labor rights, and our consumer rights - at scale. 

After decades of regulatory indifference to tech monopolization, competition authorities all over the world are taking on Big Tech. The DMA is by far the most muscular and ambitious salvo we’ve seen. 

Seen in that light, it’s no surprise that Big Tech is refusing to comply with the rules. If the EU successfully forces tech to play fair, it will serve as a starting gun for a global race to the top, in which tech’s ill-gotten gains - of data, power and money - will be returned to the users and workers from whom that treasure came. 

The architects of the DMA and DSA foresaw this, of course. They’ve announced investigations into Apple, Google and Meta, threatening fines of 10 percent of the companies’ global income, which will double to 20 percent if the companies don’t toe the line. 

It’s not just Big Tech that’s playing for all the marbles - it’s also the systems of democratic control and accountability. If Apple can sabotage the DMA’s insistence on taking away its veto over its customers’ software choices, that will spill over into the US Department of Justice’s case over the same issue, as well as the cases in Japan and South Korea, and the pending enforcement action in the UK. 


Privacy First and Competition

“Privacy First” is a simple, powerful idea: seeing as so many of today’s technological problems are also privacy problems, why don’t we fix privacy first?

Whether you’re worried about kids’ mental health, or tech’s relationship to journalism, or spying by foreign adversaries, or reproductive rights, or AI deepfakes, or nonconsensual pornography, you’re worried about a problem rooted in the primitive, deplorable state of American privacy law.

It’s really impossible to overstate how bad the state of federal privacy law is in America. The last time the USA got a big, muscular, broadly applicable new consumer privacy law, the year was 1988, and the law was targeted at video-store clerks who leaked your VHS rental history.

It’s been a minute. America is long overdue for a strong, comprehensive privacy law.

A new privacy law will help us with all those issues, and more. It would level the playing field between giants with troves of user data and startups who want to build something better. Such a law would keep competition from becoming a race to the bottom on user privacy.

Importantly, a strong privacy law will go a long way to improving the dismal state of competition in America’s ossified and decaying tech sector.

Take the tech sector’s relationship to the news media. The ad-tech duopoly has rigged the advertising market and takes $0.51 out of every advertising dollar. Without their vast troves of nonconsensually harvested personal data, Meta and Google wouldn’t be able to misappropriate billions from the publishers. Banning surveillance advertising wouldn’t just be good for our privacy - it would give publishers leverage to shift those billions back onto their own balance sheets. 

Undoing market concentration will require interoperability so that users can move from dominant services to new, innovative rivals without losing their data and relationships. The biggest challenge to interoperability? Privacy. Every time a user moves from one service to another, the resulting data-flows create risks for those users and their friends, families, customers and other social connections. Congress knows this, which is why every proposed interoperability law incorporates its own little privacy law. Privacy shouldn’t be an afterthought in a tech regulation. A standalone privacy law would give lawmakers the freedom to promote interoperability without having to work out a new privacy system for each effort.

That’s also true of Right to Repair laws: these laws are routinely opposed by tech monopolists who insist that giving Americans the right to choose their own repair shop or parts exposes them to privacy risks. It’s true that our devices harbor vast troves of sensitive information - but that doesn’t mean we should let Big Tech (or Big Car) monopolize repair. Instead, we should require everyone - both original manufacturers and independent repair shops - to honor your privacy.

America’s legal privacy vacuum is largely the result of the commercial surveillance industry’s lobbying power. Increasing competition in the tech sector won’t just help our privacy: it’ll also weaken tech’s lobbying power, which is a function of the vast profits that can be extracted in the absence of “wasteful competition” and the ease with which a concentrated sector can converge on a common lobbying position. 

That’s why EFF has urged the FTC and DOJ to consider privacy impacts when scrutinizing proposed mergers: not just to protect internet users from the harms of surveillance business models, but to protect democracy from the corrupting influence of surveillance cartels.

Privacy isn’t dead. Far from it. For a quarter of a century, would-be tech monopolists have been insisting that we have no privacy and telling us to “get over it.” The vast majority of the public wants privacy and will take it when it’s offered - and grab it when it’s not.

Whenever someone tells you that privacy is dead, they’re just wishcasting. What they mean is: “If I can convince you privacy is dead, I can make more money at your expense.”

Monopolists want us to believe that their power over our lives is inevitable and unchangeable, just as the surveillance industry banks on convincing you that the fight for privacy was and always will be a lost cause. But we once had a better internet, and we can get a better internet again. The fight for that better internet starts with privacy, a battle that we all want to win.




Hip Hip Hooray For Hipster Antitrust

14 February 2024 at 18:58

Don’t believe the hype.

The undeniable fact is that the FTC has racked up a long list of victories over corporate abuses, like busting a nationwide, decades-long fraud that tricked people into paying for “free” tax preparation.

The wheels of justice grind slowly, so many of the actions the FTC has brought are still pending. But these actions are significant. In tandem with the Department of Justice, it is suing over fake apartment listings, blocking noncompete clauses, targeting fake online reviews, and going after gig work platforms for ripping off their workers.

Companies that abuse our privacy and trust are being hit with massive fines: $520 million for Epic’s tricks to get kids to spend money online, $20 million to punish Microsoft for spying on kids who use Xboxes, and a $25 million fine against Amazon for capturing voice recordings of kids and storing kids’ location data.

The FTC is using its authority to investigate many forms of digital deception, from deceptive and fraudulent online ads to the use of cloud computing to lock in business customers to data brokers’ sale of our personal information.

And of course, the FTC is targeting anticompetitive mergers, like Nvidia’s attempted takeover of ARM - which has the immediate effect of preventing an anticompetitive merger and the long-term benefit of deterring future attempts at similar oligopolistic mergers. They’ve also targeted private equity “rollups,” which combine dozens or hundreds of smaller companies into a monopoly with pricing power over its customers and the whip hand over its workers. These kinds of rollups are all too common, and destructive of offline and online services alike.

From Right to Repair to Click to Cancel to fines for deceptive UI (“dark patterns”), the FTC has taken up many of the issues we’ve fought for over the years. So the argument that the FTC is a do-nothing agency wasting our time with grandstanding stunts is just factually wrong. As recently as December 2023, the FTC and DOJ chalked up ten major victories.

But this “win/loss ratio” accounting also misses the point. Even if the outcome isn’t guaranteed, this FTC refuses to turn a blind eye to abuses of the American public.

What’s more, the FTC collaborated with the DOJ on new merger guidelines that spell out what kinds of mergers are likely to be legal. These are the most comprehensive, future-looking guidelines in generations, and they tee up enforcement actions for this FTC and its successors for many years to come.

The FTC is also seeking to revive existing laws that have lain dormant for too long. As John Mark Newman explains, this FTC has cannily filed cases that reassert its right to investigate “competing” companies with interlocking directorates.

Newman also praises the FTC for “supercharging student interest in the field,” with law schools seeing surging interest in antitrust courses and a renaissance in law review articles about antitrust enforcement. 

The FTC is not alone in this. Its colleagues in the DOJ’s antitrust division have their own long list of victories.

But the most important victory for America’s antitrust enforcers is what doesn’t happen. Across the economy and every sector, corporate leaders are backing away from merger-driven growth and predatory pricing, deterred from violating the law by the knowledge that the generations-long period of tolerance for lawless corporate abuse is coming to a close.

Even better, America’s antitrust enforcers don’t stand alone. At long last, it seems that the whole world is reversing decades of tacit support for oligopolies and corporate bullying. 

The Great Interoperability Convergence: 2023 Year in Review

21 December 2023 at 11:08

It’s easy to feel hopeless about the collapse of the tech sector into a group of monopolistic silos that harvest and exploit our data, hold our communities hostage, gouge us on prices, and steal our wages.

But all over the world and across different government departments, policymakers are converging on a set of muscular, effective solutions to Big Tech dominance.

This convergence spans financial regulators and consumer protection agencies; it’s emerging in Europe, the USA, and the UK. It’s kind of a moment.

How Not To Fix Big Tech 

To understand what’s new in Big Tech regulation, we should talk briefly about what’s old. For many years, policymakers have viewed the problems of Big Tech as tech problems, not big problems. From disinformation to harassment to copyright infringement, the go-to policy response of the past two decades has been to make tech platforms responsible for policing and controlling their users.

This approach starts from the assumption that the problems that occur after hundreds of millions or billions of people are locked inside of a platform’s walled garden are problems of mismanagement, not problems of scale. The thinking goes that the dictators of these platforms aren’t sufficiently benevolent or competent, and they must either be incentivized to do better or be replaced with more suitable autocrats.

This approach has consistently failed - gigantic companies have proved as unperfectable as they are ungovernable. What’s more, deputizing giant companies to police their users has the perverse effect of making them more powerful by creating barriers to entry that clear the field of competitors who might offer superior alternatives for both users and business customers.

Take copyright enforcement: in 2019, the EU passed a rule requiring platforms to intercept and filter all their users’ communications to screen out copyright infringement. These filters are stupendously expensive to build - YouTube’s version of them, the notorious Content ID, has cost Google more than $100 million to build and maintain. Not only is the result an unnavigable, Kafkaesque nightmare for creators, it’s also far short of what the EU rule requires.

Any law that requires every digital service to mobilize the resources of a trillion-dollar multinational will tend to produce an internet run by trillion-dollar multinationals.

A Better Approach

We think that the biggest problem facing the internet today is bigness itself. Very large platforms are every bit as capable of committing errors in judgment or making trade-offs that harm their users as small platforms. The difference is that when very large platforms make even small errors, millions or even billions of users are in harm’s way.

What’s more, if users are trapped inside these platforms - by high switching costs, data lock-in, or digital rights management - they pay a steep price for seeking out superior alternatives. And in a market dominated by large firms who have locked in their users, investors are unwilling to fund those alternatives.

For EFF, the solution to Big Tech is smaller tech: allowing lots of different kinds of organizations (from startups to user groups to nonprofits to local governments to individual tinkerers) to provide interoperable services that all work together. These smaller platforms are closer to their users, and stand a better chance of parsing out the fine-grained nuances in community moderation. Smaller platforms are easier to regulate, too.

Giving users the choice of more, interoperable platforms that are less able to capture their regulators means that if a platform changes the rules in ways you dislike, you can go elsewhere, or simply revert those bad changes with a plugin that makes the system work better for you.

Interoperability From the Top Down and the Bottom Up

Since the earliest days of the internet, interoperability has been a key driver of technological self-determination for users. Sometimes, that interoperability was attained through adherence to formal standards, but often interoperability was hacked into existing, dominant services by upstarts who used careful reverse-engineering, bots, scraping, and other adversarial interoperability techniques to let users leave or modify the products and services they relied on.

Decades of anticompetitive mergers and acquisitions by tech companies have created a highly concentrated internet where companies no longer feel the pressure to interoperate, and where attempts to correct this imbalance with unauthorized plugins, scraping or other guerrilla tactics give rise to eye-watering legal risks.

The siloing of the internet is the result of both too little tech regulation and too much regulation.

In failing to block anticompetitive mergers, regulators allowed a few companies to buy their way to near-total dominance, and to use that dominance to prevent other forms of regulation and enforcement on issues like privacy, labor and consumer protection.

Meanwhile, restrictions on reverse-engineering and on violating terms of service have all but ended the high-tech liberation tactics of an earlier era.

To make the internet better, policymakers need to make it easier for better services to operate, and for users to switch to those services. Policymakers also need to protect users’ privacy, labor, and consumer rights from abuse by today’s giant services and the smaller services that will come next.

Privacy Without Monopoly, Then and Now

Two years ago, we published Privacy Without Monopoly, a detailed analysis of the data-protection issues associated with a transition from a siloed, monopolized internet to a decentralized, interoperable internet.

Dominant platforms, from Apple to Facebook to Google, point to the many times that they step in to protect their users from bad actors, but are conspicuously silent about the many times when their users come to harm when they are targeted by the companies who own the dominant platforms.

In Privacy Without Monopoly, we argue that it’s possible for internet users to have the benefits of being protected by tech platforms, without the risks of being victimized by them. To get the best of both worlds, governments must withdraw tech platforms’ legal right to block interoperators, while simultaneously creating strong privacy protections for users.

That means that tech companies can still take technical actions to block bad actors from abusing their platforms, but if they want to enlist the law to aid them in doing so, they must show that their adversaries are violating their users’ legal rights to privacy.

Under this system, the final word on which privacy rights a platform’s users are entitled to comes from democratically accountable lawmakers who legislate in public - not from shareholder-accountable executives who make policies behind locked boardroom doors.

Convergence, At Last

This past year has been a very good one for this approach. 2023 saw regulators challenging the market power of the largest tech companies and even beginning the long, slow process of restoring a prudent regime of merger scrutiny.

The global resurgence of these long-dormant antitrust powers is a welcome development, but at EFF, we think that interoperability, backstopped by privacy and other legal protections, offers a more immediate prospect of relief and protection for users.

That’s why we’ve been so glad to see 2023’s other developments, ones that aim to make it easier for users to leave Big Tech and go somewhere smaller and more responsive to their needs.

In Europe, the Digital Markets Act, passed into law in 2022, has made significant progress towards a regime of mandatory interoperability for the largest platforms. In the USA, the bipartisan AMERICA Act could require ad-tech giants to break into interoperable pieces, a key step towards a more secure economic future for the news industry.

The US Consumer Financial Protection Bureau is advancing a rule to force banks to support interoperable standards to facilitate shopping for a better bank and then switching to it. This rule explicitly takes away incumbents’ power to block new market entrants in the name of protecting users’ privacy. Instead, it establishes bright-line rules restricting what the finance sector may do with users’ data. What’s more, this rule acknowledges the importance of adversarial interoperability, by including a framework for scraping user data on behalf of the user (a tactic with a proven track record for getting users a better deal from their bank).

Finally, in the UK, the long-overdue Digital Markets, Competition and Consumers Bill has been introduced. This bill will give the Competition and Markets Authority’s large and exceptionally skilled Digital Markets Unit the enforcement powers it was promised when it was formed in 2021. Among these proposed powers is the ability to impose interoperability mandates on the largest tech companies, something the agency has already investigated in detail.

With lawmakers from different domains and territories all converging on approaches that solve the very real problems of bad platforms by centering user choice and user protections, tech regulation is at a turning point: away from the hopeless task of perfecting Big Tech and towards the necessary work of abolishing Big Tech.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Without Interoperability, Apple Customers Will Never Be Secure

13 December 2023 at 14:18

Every internet user should have the ability to privately communicate with the people that matter to them, in a secure fashion, using the tools and protocols of their choosing.

Apple’s iMessage offers end-to-end encrypted messaging for its customers, but only if those customers want to talk to someone who also has an Apple product. When an Apple customer tries to message an Android user, the data is sent over SMS, a protocol that debuted while Wayne’s World was still in its first theatrical run. SMS is wildly insecure, but when Apple customers ask the company how to protect themselves while exchanging messages with Android users, Apple’s answer is “buy them iPhones.”

That’s an obviously false binary. Computers are all roughly equivalent, so there’s no reason that an Android device couldn’t run an app that could securely send and receive iMessage data. If Apple won’t make that app, then someone else could. 

That’s exactly what Apple did, back when Microsoft refused to make a high-quality MacOS version of Microsoft Office: Apple reverse-engineered Office and released iWork, whose Pages, Numbers and Keynote could perfectly read and write Microsoft’s Word, Excel and PowerPoint files.

Back in September, a 16-year-old high school student reverse-engineered iMessage and released Pypush, a free software library that reimplements iMessage so that anyone can send and receive secure iMessage data, maintaining end-to-end encryption, without the need for an Apple ID.

Last week, Beeper, a multiprotocol messaging company, released Beeper Mini, an alternative iMessage app reportedly based on the Pypush code that runs on Android, giving Android users the “blue bubble” that allows Apple customers to communicate securely with them. Beeper Mini stands out among earlier attempts at this by allowing users’ devices to directly communicate with Apple’s servers, rather than breaking end-to-end encryption by having messages decrypted and re-encrypted by servers in a data-center.
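To see why that architectural difference matters, here’s a minimal sketch of end-to-end encryption in Python, using the PyNaCl library - this is not Beeper’s or Apple’s actual code, just an illustration of the principle that a relay which never holds key material can never read the messages it carries:

```python
# Minimal end-to-end encryption sketch (pip install pynacl).
# All names are illustrative; this is not the iMessage protocol.
from nacl.public import PrivateKey, Box

# Each endpoint generates its own keypair; private keys never leave the device.
alice_key = PrivateKey.generate()  # the iPhone user
bob_key = PrivateKey.generate()    # the Android user on an interoperable client

# Alice encrypts directly to Bob's public key, on her own device.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"see you at 6")

# The relay (Apple's servers, in this analogy) only ever handles ciphertext;
# it holds no keys and cannot read the message.
relayed = ciphertext

# Bob decrypts on his own device.
assert Box(bob_key, alice_key.public_key).decrypt(relayed) == b"see you at 6"

# A bridge in a data center would instead hold bob_key server-side and
# decrypt/re-encrypt there - which is exactly what breaks end-to-end encryption.
```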

Beeper Mini is an example of “adversarial interoperability.” That’s when you make something new work with an existing product, without permission from the product’s creator.

(“Adversarial interoperability” is quite a mouthful, so we came up with “competitive compatibility” or “comcom” as an alternative term.)

Comcom is how we get third-party inkjet ink that undercuts HP’s $10,000/gallon cartridges, and it’s how we get independent repair from technicians who perform feats the manufacturer calls “impossible.” Comcom is where iMessage itself comes from: it started life as iChat, with support for existing protocols like XMPP.

Beeper Mini makes life more secure for Apple users in two ways: first, it protects the security of the messages they send to people who don’t use Apple devices; and second, it makes it easier for Apple users to switch to a rival platform if Apple has a change of management direction that deprioritizes their privacy.

Apple doesn’t agree. It blocked Beeper Mini users just days after the app’s release. Apple told The Verge’s David Pierce that it had blocked Beeper Mini users because Beeper Mini “posed significant risks to user security and privacy, including the potential for metadata exposure and enabling unwanted messages, spam, and phishing attacks.”

If Beeper Mini indeed posed those risks, then Apple has a right to take action on behalf of its users. The only reason to care about any of this is if it makes users more secure, not because it serves the commercial interests of either Apple or Beeper. 

But Apple’s account of Beeper Mini’s threats does not square with the technical information Beeper has made available. Apple didn’t provide any specifics to bolster its claims. Large tech firms who are challenged by interoperators often smear their products as privacy or security risks, even when those claims are utterly baseless.

The gold standard for security claims is technical proof, not vague accusations. EFF hasn't audited Beeper Mini and we’d welcome technical details from Apple about these claimed security issues. While Beeper hasn’t published the source code for Beeper Mini, they have offered to submit it for auditing by a third party.

Beeper Mini is back. The company released an update on Monday that restored its functionality. If Beeper Mini does turn out to have security defects, Apple should protect its customers by making it easier for them to connect securely with Android users.

One thing that won’t improve the security of Apple users is for Apple to devote its engineering resources to an arms race with Beeper and other interoperators. In a climate of stepped-up antitrust enforcement, and as regulators around the world are starting to force interoperability on tech giants, pointing at interoperable products and shouting “Insecure! Insecure!” no longer cuts it.

Apple needs to acknowledge that it isn’t the only entity that can protect Apple customers.

You Wanna Break Up With Your Bank? The CFPB Wants to Help You Do It.

31 October 2023 at 09:14

The Consumer Financial Protection Bureau has proposed a new “Personal Financial Data Rights” rule that will force your bank to make it easy for you to extract your financial data so that you can use it to comparison shop for a better offer, and switch to another bank with just a few clicks.

This is a very good idea, provided it’s done right. Done wrong, it could be a nightmare. Below, we explain what the Bureau should do to avoid the nightmare and realize the dream.

We’ve all heard that “if you’re not paying for the product, you’re the product.” But time and again, companies have proven that they’re not shy about treating you like the product, no matter how much you pay them.

What makes a company treat you like a customer, and not the product? Fear. Companies treat their customers with dignity when they fear losing their business, or when they fear getting punished by regulators. Decades of lax antitrust and consumer protection enforcement have ensured that in most industries, companies don’t need to fear either.

Companies without real competitors have it easy: if you need their services, they can siphon off value from you and give it to themselves, without worrying about you leaving. As the old Lily Tomlin gag goes, “We Don't Care. We Don't Have To. We're the Phone Company.”

But even when companies do have competition they can rig the game so that it’s hard for you to break up with them and fall into a rival’s arms. Companies create high switching costs that lock you into their business. Remember when cellphone companies forced you to throw away your phone and your phone number when you changed carriers? 

When the cost of leaving a company is higher than the cost of staying, you’ll stay. The more costly a company can make your departure, the worse they can treat you before they have to worry about you leaving.

Leaving your bank can be very costly indeed. First, there’s the cost associated with bringing along all your financial data - your account history, the payees you have on file, and so on.

Then there’s the cost of figuring out which bank would be better for you. Maybe another bank charges more for checks and less for electronic payments, but has a higher overdraft fee. Given that you don’t write checks at all, but use a lot of electronic payments, and typically get dinged for an overdraft twice per year, should you make the switch?
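Here’s that calculation as a toy Python example - every fee in it is made up, but it shows how the right answer depends entirely on your own usage pattern:

```python
# Toy comparison of two hypothetical fee schedules (all numbers invented).
def annual_cost(fees, usage):
    return sum(fees[item] * count for item, count in usage.items())

current_bank = {"check": 0.25, "electronic_payment": 0.50, "overdraft": 35.00}
other_bank   = {"check": 0.75, "electronic_payment": 0.10, "overdraft": 38.00}

# No checks, 300 electronic payments a year, about two overdrafts a year.
usage = {"check": 0, "electronic_payment": 300, "overdraft": 2}

print(f"current bank: ${annual_cost(current_bank, usage):.2f}")  # $220.00
print(f"other bank:   ${annual_cost(other_bank, usage):.2f}")    # $106.00 -> switch
```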

The new CFPB proposal takes aim at both of these costs. Under the proposed rules, your bank or other financial institution will have to give you a simple way to export your data in a “machine-readable” format that can be read by comparison shopping sites and other banks. 
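The proposal doesn’t dictate one specific schema, but a machine-readable export might look something like this hypothetical sketch, which a comparison site or rival bank could ingest programmatically:

```python
# A purely illustrative export format - the CFPB rule does not mandate
# this exact schema.
import json

export = json.loads("""
{
  "account": {"type": "checking", "currency": "USD"},
  "fee_schedule": {"electronic_payment": 0.50, "overdraft": 35.00},
  "transactions": [
    {"date": "2023-09-01", "payee": "Example Utility Co", "amount": -80.00},
    {"date": "2023-09-15", "payee": "Employer Inc", "amount": 2500.00}
  ]
}
""")

# Any tool the user authorizes can read the history without screen-scraping.
for t in export["transactions"]:
    print(t["date"], t["payee"], t["amount"])
```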

That’ll make it easier for you to figure out which bank is best for you, and to make the switch when you do. Who knows, maybe it’ll even convince your bank to treat you better (and if it doesn’t, well, you can leave).

EFF has always supported “data portability.” Technological self-determination starts with controlling your data: having a copy of your own, and deciding who else gets that copy. But with data portability, the devil is always in the details.

Financial data is some of the most sensitive data around. When your data gets into the wrong hands, you’re at risk of identity theft and fraud, as well as the usual privacy risks associated with your personal data getting spread around online.

For decades, companies have offered to help you get your data out of your bank. In the absence of a formal standard for moving that data around, these companies “scraped” the data from your bank, using your username and password to log in to your bank as you and then slurp up the account data from your bank’s website. 
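For illustration, a bare-bones scraper of this kind might look like the sketch below - every URL, form field and CSS selector in it is hypothetical:

```python
# Sketch of old-style bank screen-scraping (pip install requests beautifulsoup4).
# bank.example, the form fields, and the selectors are all made up.
import requests
from bs4 import BeautifulSoup

session = requests.Session()
session.post("https://bank.example/login",
             data={"username": "alice", "password": "hunter2"})

page = session.get("https://bank.example/accounts")
soup = BeautifulSoup(page.text, "html.parser")

# Pull account names and balances out of HTML that was meant for human eyes.
for row in soup.select("table.accounts tr"):
    cells = [c.get_text(strip=True) for c in row.find_all("td")]
    if cells:
        print(cells)  # e.g. ['Checking', '$1,234.56']
```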

This kind of scraping is a time-honored part of the adversarial interoperability story: when a tech company won’t give you something that you have a right to, you just take it. 

But there are a lot more people who’d like to get their data out of a bank than are able (or willing) to write their own web-scraper. Instead, we’re likely to use a commercial service that promises to do this for us.

That’s fine, too - provided that the service doesn’t also abuse us. Unfortunately, these finance scrapers have a long and dishonorable history of abusing the data they collect on our behalf - selling it, mining it, and leaking it.

No one is quicker to mention this bad behavior than the banks, of course. As they grapple with these companies that seek to make it easier to take your business elsewhere, the banks are adamant that they’re doing it all for you, to protect you from privacy plunderers. The fact that blocking these scrapers helps the banks keep you locked in is just a happy coincidence.

To hear the banks tell it, the only way to stop other companies from abusing your data is to let them decide when and how you’re allowed to share it. The CFPB offers an alternative to this false binary: rather than letting your (conflicted) bank decide the terms on which other companies can get your data, the CFPB has spelled out its own strict proposed rules about what other companies are allowed to do with that data:

Third parties could not collect, use, or retain data to advance their own commercial interests through actions like targeted or behavioral advertising. Instead, third parties would be obligated to limit themselves to what is reasonably necessary to provide the individual’s requested product.

This is a good start. As we wrote previously, the way to limit corporate abuse of internet users is to ban creepy, exploitative and deceptive practices and punish companies that violate the ban. We can’t trust big companies to decide when a competitor is worthy of your trust. They have an unresolvable conflict of interest.

One thing we’d like to see in that final rule: strong assurances that users will still have the right to use scrapers to get at their data, either because their bank is dragging its feet, or because there’s some data that isn’t captured by this rule.

To protect users who choose to scrape their data, we’d want to apply the same privacy, data minimization and use restrictions to scrapers that the rule would apply to companies that get your data in more formal ways.

This is a promising development! The CFPB has identified a real problem and conceived of a solution that empowers the public to escape commercial traps. Their proposal identifies the privacy risks associated with data portability and seeks to mitigate them. The CFPB has also managed to steer clear of the traps that similar rules fell into.
