
Saving the Internet in Europe: Fostering Choice, Competition and the Right to Innovate

This is the fourth instalment in a four-part blog series documenting EFF's work in Europe. You can read additional posts here: 

EFF’s mission is to ensure that technology supports freedom, justice, and innovation for all people of the world. While our work has taken us to far corners of the globe, in recent years we have worked to expand our efforts in Europe, building up a policy team with key expertise in the region, and bringing our experience in advocacy and technology to the European fight for digital rights.   

In this blog post series, we will introduce you to the various players involved in that fight, share how we work in Europe, and discuss how what happens in Europe can affect digital rights across the globe.  

EFF’s Approach to Competition  

Market concentration and monopoly power among internet companies and internet access providers affect many of the issues EFF works on, particularly innovation, consumer privacy, net neutrality, and platform censorship. And we have said it many times: Antitrust law and rules on market fairness are powerful tools with the potential either to further cement the hold of established giants over a market or to challenge incumbents and spur innovation and choice that benefit users. Antitrust enforcement must hit monopolists where it hurts: ensuring that anti-competitive behaviors like abuse of dominance by multi-billion-dollar tech giants come at a price high enough to force real change.

The EU has recently shown that it is serious about cracking down on Big Tech companies with its full arsenal of antitrust rules. For example, in a high-stakes appeal in 2022, EU judges largely upheld a record fine of more than €4.1 billion against Google for abusing its dominant position by locking Android users into its search engine (the case is now pending before the Court of Justice).

We believe that with the right dials and knobs, clever competition rules can complement antitrust enforcement and ensure that firms that grow top-heavy and sluggish are displaced by nimbler new competitors. Good competition rules should enable better alternatives that protect users’ privacy and enhance users’ technological self-determination. In the EU, this requires not only proper enforcement of existing rules but also new regulation that tackles gatekeepers’ dominance before harm is done.

The Digital Markets Act  

The DMA will probably turn out to be one of the most impactful pieces of EU tech legislation in history. It’s complex but the overall approach is to place new requirements and restrictions on online “gatekeepers”: the largest tech platforms, which control access to digital markets for other businesses. These requirements are designed to break down the barriers businesses face in competing with the tech giants. 

Let’s break down some of the DMA’s rules. If enforced robustly, the DMA will make it easier for users to switch services, install third-party apps and app stores, and have more power over default settings on their mobile computing devices. Users will no longer be steered into sticking with the defaults embedded in their devices and can choose, for example, their own default browser on Apple’s iOS. The DMA also tackles data collection practices: gatekeepers can no longer combine user data across services or sign users into new services without their explicit consent, and must provide them with a specific choice. A “pay or consent” advertising model as proposed by Meta will probably not cut it.

There are also new data access and sharing requirements that could benefit users, such as the right of end users to request effective portability of their data and to get access to effective tools to that end. One section of the DMA even requires gatekeepers to make their person-to-person messaging systems (like WhatsApp) interoperable with competitors’ systems on request—making it a globally unique ex ante obligation in competition regulation. At EFF, we believe that interoperable platforms can be a driver for technological self-determination and a more open internet. But even though data portability and interoperability are anti-monopoly medicine, they come with challenges: ported data can contain sensitive information about you, and interoperability poses difficult questions about security and governance, especially when it is mandated for encrypted messaging services. Ideally, the DMA will be implemented in a way that offers better protections for users’ privacy and security, new features, new ways of communicating, and better terms of service.
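To make the portability piece more concrete, here is a minimal sketch, in Python, of what a user-initiated portability export from a gatekeeper service might look like. The DMA mandates effective portability but does not prescribe any format; the `PortabilityExport` structure, its field names, and the rule of exporting only the requesting user's own messages are illustrative assumptions, chosen to show how ported data can be kept from leaking other people's information.

```python
# Hypothetical sketch: a user-requested portability export from a gatekeeper
# service. The DMA mandates effective portability but does not prescribe this
# (or any) format -- the schema and filtering rule below are assumptions.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class Message:
    sent_at: str          # ISO 8601 timestamp
    conversation_id: str  # pseudonymous thread identifier
    body: str             # the requesting user's own content


@dataclass
class PortabilityExport:
    requested_by: str     # the requesting user's own ID
    generated_at: str
    messages: list[Message] = field(default_factory=list)


def build_export(user_id: str, raw_messages: list[dict]) -> str:
    """Export only the requesting user's own messages as JSON.

    Messages authored by other people are dropped to illustrate the privacy
    tension described above: ported data can contain sensitive information
    about third parties.
    """
    export = PortabilityExport(
        requested_by=user_id,
        generated_at=datetime.now(timezone.utc).isoformat(),
    )
    for m in raw_messages:
        if m["author"] == user_id:
            export.messages.append(
                Message(m["sent_at"], m["conversation_id"], m["body"])
            )
    return json.dumps(asdict(export), indent=2)


if __name__ == "__main__":
    sample = [
        {"author": "alice", "sent_at": "2024-03-01T10:00:00Z",
         "conversation_id": "t1", "body": "Hi!"},
        {"author": "bob", "sent_at": "2024-03-01T10:01:00Z",
         "conversation_id": "t1", "body": "Hello, Alice."},
    ]
    print(build_export("alice", sample))
```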

There are many more do's and don'ts in the EU's new fairness rulebook, such as the prohibition on platforms favouring their own products and services over those of rivals in ranking, crawling, and indexing (ensuring users a real choice!), along with many other measures. All of these requirements are meant to create more fairness and contestability in digital markets—a laudable objective. If done right, the DMA presents an opportunity for real change for technology users—and a real threat to current abusive or unfair industry practices by Big Tech. But if implemented poorly, it could create more legal uncertainty, restrict free expression, or even legitimize the status quo. It is now up to the European Commission to bring the DMA’s promises to life.

Public Interest 

As the EU’s 2024–2029 mandate is now in full swing, it will be important not to lose sight of the big picture. Fairness rules can only be truly fair if they follow a public-interest approach: empowering users, businesses, and society more broadly, and making it easier for users to control the technology they rely on. And we cannot stop here: the EU must strive to foster a public interest internet and support open-source and decentralized alternatives. Competition and innovation are interconnected forces, and the recent rise of the Fediverse makes this clear. Platforms like Mastodon and Bluesky thrive by filling gaps (and addressing frustrations) left by corporate giants, offering users more control over their experience and ultimately strengthening the resilience of the open internet. The EU should generally support user-controlled alternatives to Big Tech and use smart legislation to foster interoperability for services like social networks. In an ideal world, users are no longer locked into dominant platforms and the ad-tech industry—responsible for pervasive surveillance and other harms—is brought under control.

What we don’t want is a European Union that conflates fairness with protectionist industrial policies or reacts to geopolitical tensions with measures that could backfire on digital openness and fair markets. The enforcement of the DMA and new EU competition and digital rights policies must remain focused on prioritizing user rights and ensuring compliance from Big Tech—not tolerating malicious (non)compliance tactics—and upholding the rule of law rather than politicized interventions. The EU should avoid policies that could lead to a fragmented internet and must remain committed to net neutrality. It should also not hesitate to counter the concentration of power in the emerging AI stack market, where control over infrastructure and technology is increasingly in the hands of a few dominant players. 

EFF will be watching. And we will continue to fight to save the internet in Europe, ensuring that fairness in digital markets remains rooted in choice, competition, and the right to innovate. 

Systemic Risk Reporting: A System in Crisis?

16 January 2025, 12:45

The first batch of reports assessing the so-called “systemic risks” posed by the largest online platforms are in. These reports are a result of the Digital Services Act (DSA), Europe’s new law regulating platforms like Google, Meta, Amazon, or X, and have been eagerly awaited by civil society groups across the globe. In their reports, companies are supposed to assess whether their services contribute to a wide range of barely defined risks. These go beyond the dissemination of illegal content and include vaguely defined categories such as negative effects on the integrity of elections, impediments to the exercise of fundamental rights, or the undermining of civic discourse. We have previously warned that the subjectivity of these categories invites a politicization of the DSA.

In view of a new DSA investigation into TikTok’s potential role in Romania’s presidential election, we take a look at the reports and the framework that has produced them to understand their value and limitations.  

A Short DSA Explainer  

The DSA covers a lot of different services. It regulates online markets like Amazon or Shein, social networks like Instagram and TikTok, search engines like Google and Bing, and even app stores like those run by Apple and Google. Different obligations apply to different services, depending on their type and size. Generally, the lower the degree of control a service provider has over content shared via its product, the fewer obligations it needs to comply with.   

For example, hosting services like cloud computing providers must offer points of contact for government authorities and users and publish basic transparency reports. Online platforms, meaning any service that makes user-generated content available to the public, must meet additional requirements like providing users with detailed information about content moderation decisions and the right to appeal them. They must also comply with additional transparency obligations.
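As a rough illustration of this tiered structure, the sketch below maps service categories to the obligations mentioned in this post, including the stricter tier for very large services discussed further below. The tier names and obligation strings are simplified paraphrases and assumptions for illustration, not the DSA's actual legal taxonomy.

```python
# Hypothetical sketch of the DSA's tiered, cumulative obligations as described
# in this post. Tier names and obligation strings are simplified paraphrases,
# not the DSA's actual legal taxonomy.
CUMULATIVE_TIERS: list[tuple[str, list[str]]] = [
    ("hosting service (e.g. cloud computing)", [
        "points of contact for government authorities and users",
        "basic transparency reporting",
    ]),
    ("online platform (user-generated content made public)", [
        "detailed information about content moderation decisions",
        "right to appeal moderation decisions",
        "additional transparency obligations",
    ]),
    ("VLOP / VLOSE (45M+ EU users, discussed below)", [
        "systemic risk assessment and mitigation",
        "independent audits of risk compliance",
    ]),
]


def obligations_for(tier_index: int) -> list[str]:
    """Obligations stack: a service in a higher tier also carries the
    obligations of every tier below it."""
    duties: list[str] = []
    for _, tier_duties in CUMULATIVE_TIERS[: tier_index + 1]:
        duties.extend(tier_duties)
    return duties


if __name__ == "__main__":
    # An ordinary online platform (tier index 1) inherits the hosting-tier
    # duties plus its own.
    for duty in obligations_for(1):
        print("-", duty)
```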

While the DSA is a necessary update to the EU’s liability rules and improves users’ rights, we have plenty of concerns with the route that it takes:

  • We worry about the powers it gives to authorities to request user data and the obligation on providers to proactively share user data with law enforcement.  
  • We are also concerned about the ways in which trusted flaggers could lead to the over-removal of speech, and  
  • We caution against the misuse of the DSA’s mechanism to deal with emergencies like a pandemic. 

Introducing Systemic Risks 

The most stringent DSA obligations apply to large online platforms and search engines that have more than 45 million users in the EU. The European Commission has so far designated more than 20 services as such “very large online platforms” (VLOPs) or “very large online search engines” (VLOSEs). These companies, which include X, TikTok, Amazon, Google Search, Maps and Play, YouTube, and several porn platforms, must proactively assess and mitigate “systemic risks” related to the design, operation, and use of their services. The DSA’s non-exhaustive list of risks includes four broad categories: 1) the dissemination of illegal content, 2) negative effects on the exercise of fundamental rights, 3) threats to elections, civic discourse, and public safety, and 4) negative effects in relation to gender-based violence, the protection of minors, public health, and a person’s physical and mental wellbeing.

The DSA does not provide much guidance on how VLOPs and VLOSEs are supposed to analyze whether they contribute to this somewhat arbitrary-seeming list of risks. Nor does the law offer clear definitions of how these risks should be understood, leading to concerns that they could be interpreted widely and lead to the extensive removal of lawful but awful content. There is equally little guidance on risk mitigation, as the DSA merely names a few measures that platforms can choose to employ. Some of these recommendations are incredibly broad, such as adapting the design, features, or functioning of a service, or “reinforcing internal processes”. Others, like introducing age verification measures, are much more specific but come with a host of issues and can undermine fundamental rights themselves.

Risk Management Through the Lens of the Romanian Election 

Per the DSA, platforms must annually publish reports detailing how they have analyzed and managed risks. These reports are complemented by separate reports compiled by external auditors, tasked with assessing platforms’ compliance with their obligations to manage risks and other obligations put forward by the DSA.  

To better understand the merits and limitations of these reports, let’s examine the example of the recent Romanian election. In late November 2024, an ultranationalist and pro-Russian candidate, Calin Georgescu, unexpectedly won the first round of Romania’s presidential election. After reports by local civil society groups accusing TikTok of amplifying pro-Georgescu content, and a declassified brief published by Romania’s intelligence services alleging cyberattacks and influence operations, the Romanian constitutional court annulled the results of the election. Shortly after, the European Commission opened formal proceedings against TikTok for insufficiently managing systemic risks related to the integrity of the Romanian election. Specifically, the Commission’s investigation focuses on “TikTok's recommender systems, notably the risks linked to the coordinated inauthentic manipulation or automated exploitation of the service and TikTok's policies on political advertisements and paid-for political content.”

TikTok’s own risk assessment report dedicates eight pages to potential negative effects on elections and civic discourse. Curiously, TikTok’s definition of this particular category of risk focuses on the spread of election misinformation but makes no mention of coordinated inauthentic behavior or the manipulation of its recommender systems. This illustrates the wide margin platforms have to define systemic risks and implement their own mitigation strategies. Leaving it up to platforms to define relevant risks not only makes it impossible to compare the approaches taken by different companies, it can also lead to overly broad or narrow approaches, potentially undermining fundamental rights or running counter to the obligation to effectively deal with risks, as in this example. It should also be noted that mis- and disinformation are terms not defined by international human rights law and are therefore not well suited as a robust basis on which freedom of expression may be restricted.

In its report, TikTok describes the measures taken to mitigate potential risks to elections and civic discourse. This overview broadly describes some election-specific interventions, like labels for content that has not been fact-checked but might contain misinformation, and outlines TikTok’s policies, like its ban on political ads, which is notoriously easy to circumvent. It does not include any indication that the robustness and utility of the measures employed have been documented or tested, nor any benchmarks for when TikTok considers a risk successfully mitigated. It does not, for example, contain figures on how many pieces of content receive certain labels, or how those labels influence users’ interactions with the content in question.

Similarly, the report does not contain any data on the efficacy of TikTok’s enforcement of its political ads ban. TikTok’s “methodology” for risk assessments, also included in the report, does not help answer any of these questions either. And looking at the report compiled by the external auditor, in this case KPMG, we are once again left disappointed: KPMG concluded that it was impossible to assess TikTok’s systemic risk compliance because of two earlier, still-pending European Commission investigations into potential non-compliance with the systemic risk mitigation obligations.

Limitations of the DSA’s Risk Governance Approach 

What, then, is the value of the risk and audit reports, published roughly a year after their finalization? The answer may be very little.

As explained above, companies have a lot of flexibility in how to assess and deal with risks. On the one hand, some degree of flexibility is necessary: every VLOP and VLOSE differs significantly in terms of product logics, policies, user base and design choices. On the other hand, the high degree of flexibility in determining what exactly a systemic risk is can lead to significant inconsistencies and render risk analysis unreliable. It also allows regulators to put forward their own definitions, thereby potentially expanding risk categories as they see fit to deal with emerging or politically salient issues.  

Rather than making sense of diverse and possibly conflicting definitions of risks, companies and regulators should put forward joint benchmarks, and include civil society experts in the process. 

Speaking of benchmarks: there is a critical lack of standardized processes, assessment methodologies, and reporting templates. Most assessment reports contain very little information on how the actual assessments were carried out, and the auditors’ reports are notable for an almost complete lack of insight into the auditing process itself. This information is crucial: it is near impossible to adequately scrutinize the reports without knowing whether auditors were provided the necessary information, whether they ran into roadblocks when looking at specific issues, and how evidence was produced and documented. And without methodologies that are applicable across the board, it will remain very challenging, if not impossible, to compare the approaches taken by different companies.
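To illustrate what a standardized reporting template with benchmarks could look like, here is a minimal sketch. The field names, example values, and pass/fail rule are purely hypothetical assumptions; neither the DSA nor the Commission prescribes such a format, which is precisely the gap described above.

```python
# Hypothetical sketch of a standardized entry in a risk-assessment report.
# A shared template like this would make mitigation measures comparable across
# platforms; all field names, values, and thresholds here are illustrative.
from dataclasses import dataclass


@dataclass
class MitigationEntry:
    risk_category: str     # e.g. "elections, civic discourse and public safety"
    measure: str           # the mitigation measure being reported
    methodology: str       # how the measure's effect was evaluated
    metric_name: str       # the agreed, auditable metric
    observed_value: float  # what the platform measured
    benchmark: float       # threshold agreed with regulators and civil society

    def meets_benchmark(self) -> bool:
        """A risk counts as mitigated only if the observed metric reaches the
        pre-agreed benchmark."""
        return self.observed_value >= self.benchmark


if __name__ == "__main__":
    entry = MitigationEntry(
        risk_category="elections, civic discourse and public safety",
        measure="labels on unverified election-related videos",
        methodology="sampled audit of 10,000 election-tagged videos",
        metric_name="share of qualifying videos actually labelled",
        observed_value=0.62,
        benchmark=0.90,
    )
    status = "met" if entry.meets_benchmark() else "not met"
    print(f"{entry.metric_name}: {entry.observed_value:.0%} "
          f"(benchmark {entry.benchmark:.0%}) -> {status}")
```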

The TikTok example shows that the risk and audit reports do not contain the “smoking gun” some might have hoped for. Besides the shortcomings explained above, this is due to the inherent limitations of the DSA itself. Although the DSA attempts to take a holistic approach to complex societal risks that cut across different but interconnected challenges, its reporting system is forced to consider only the obligations put forward by the DSA itself. Any legal assessment framework will struggle to capture complex societal challenges like the integrity of elections or public safety. In addition, phenomena as complex as electoral processes and civic discourse are shaped by a range of different legal instruments, including European rules on political ads, data protection, cybersecurity, and media pluralism, not to mention countless national laws. Expecting a risk report to deliver a definitive answer on the implications of large online services for complex societal processes will therefore always lead to disappointment.

The Way Forward  

The reports do present a slight improvement in terms of companies’ accountability and transparency. Even if the reports may not include the hard evidence of non-compliance some might have expected, they are a starting point to understanding how platforms attempt to grapple with complex issues taking place on their services. As such, they are, at best, the basis for an iterative approach to compliance. But many of the risks described by the DSA as systemic and their relationships with online services are still poorly understood.  

Instead of relying on platforms or regulators to define how risks should be conceptualized and mitigated, a joint approach is needed, one that builds on the expertise of civil society, academics, and activists, and emphasizes best practices. A collaborative approach would help make sense of these complex challenges and how they can be addressed in ways that strengthen users’ rights and protect fundamental rights.

A Fundamental-Rights Centered EU Digital Policy: EFF’s Recommendations 2024-2029

The European Union (EU) is a hotbed for tech regulation that often has ramifications for users globally.  The focus of our work in Europe is to ensure that EU tech policy is made responsibly and lives up to its potential to protect users everywhere. 

As the new mandate of the European institutions begins – a period when newly elected policymakers set legislative priorities for the coming years – EFF today published recommendations for a European tech policy agenda that centers on fundamental rights, empowers users, and fosters fair competition. These principles will guide our work in the EU over the next five years. Building on our previous work and success in the EU, we will continue to advocate for users and work to ensure that technology supports freedom, justice, and innovation for all people of the world.

Our policy recommendations cover social media platform intermediary liability, competition and interoperability, consumer protection, privacy and surveillance, and AI regulation. Here’s a sneak peek:  

  • The EU must ensure that the enforcement of platform regulation laws like the Digital Services Act and the European Media Freedom Act is centered on the fundamental rights of users in the EU and beyond.
  • The EU must create the conditions for fair digital markets that foster choice, innovation, and fundamental rights. Achieving this requires enforcing the user-rights centered provisions of the Digital Markets Act, promoting app store freedom, user choice, and interoperability, and countering AI monopolies.
  • The EU must adopt a privacy-first approach to fighting online harms like targeted ads and deceptive design, and protect children online without resorting to harmful age verification methods that undermine the fundamental rights of all users.
  • The EU must protect users’ rights to secure, encrypted, and private communication, protect against surveillance everywhere, stay clear of new data retention mandates, and prioritize the rights-respecting enforcement of the AI Act. 

Read on for our full set of recommendations.

Disinformation and Elections: EFF and ARTICLE 19 Submit Key Recommendations to EU Commission

Global Elections and Platform Responsibility

This year is a major one for elections around the world, with pivotal races in the U.S., the UK, the European Union, Russia, and India, to name just a few. Social media platforms play a crucial role in democratic engagement by enabling users to participate in public discourse and by providing access to information, especially as public figures increasingly engage with voters directly. Unfortunately, elections also attract a sometimes dangerous amount of disinformation, filling users' news feeds with ads touting conspiracy theories about candidates, false news stories about stolen elections, and so on.

Online election disinformation and misinformation can have real world consequences in the U.S. and all over the world. The EU Commission and other regulators are therefore formulating measures platforms could take to address disinformation related to elections. 

Given their dominance over the online information space, providers of Very Large Online Platforms (VLOPs), as sites with over 45 million users in the EU are called, have unique power to influence outcomes. Platforms are driven by economic incentives that may not align with democratic values, and that disconnect may be embedded in the design of their systems. For example, features like engagement-driven recommender systems may prioritize and amplify disinformation, divisive content, and incitement to violence. That effect, combined with a significant lack of transparency around targeting techniques, can too easily undermine free, fair, and well-informed electoral processes.

Digital Services Act and EU Commission Guidelines

The EU Digital Services Act (DSA) contains a set of sweeping regulations about online-content governance and responsibility for digital services that make X, Facebook, and other platforms subject in many ways to the European Commission and national authorities. It focuses on content moderation processes on platforms, limits targeted ads, and enhances transparency for users. However, the DSA also grants considerable power to authorities to flag content and investigate anonymous users, powers that they may be tempted to misuse with elections looming. The DSA also obliges VLOPs to assess and mitigate systemic risks, but it is unclear what those obligations mean in practice. Much will depend on how social media platforms interpret their obligations under the DSA, and how European Union authorities enforce the regulation.

We therefore support the initiative by the EU Commission to gather views about what measures the Commission should call on platforms to take to mitigate specific risks linked to disinformation and electoral processes.

Together with ARTICLE 19, we have submitted comments to the EU Commission on future guidelines for platforms. In our response, we recommend that the guidelines prioritize best practices, instead of policing speech. Furthermore, DSA risk assessment and mitigation compliance evaluations should focus primarily on ensuring respect for fundamental rights. 

We further argue against using watermarking of AI content to curb disinformation, and caution against the draft guidelines’ broadly phrased recommendation that platforms should exchange information with national authorities. Any such exchanges should take care to respect human rights, beginning with a transparent process.  We also recommend that the guidelines pay particular attention to attacks against minority groups or online harassment and abuse of female candidates, lest such attacks further silence those parts of the population who are already often denied a voice.

EFF and ARTICLE 19 Submission: https://www.eff.org/document/joint-submission-euelections

EFF And Other Experts Join in Pointing Out Pitfalls of Proposed EU Cyber-Resilience Act

Today we join a set of 56 experts from organizations such as Google, Panasonic, Citizen Lab, Trend Micro, and many others in an open letter calling on the European Commission, European Parliament, and Spain’s Ministry of Economic Affairs and Digital Transformation to reconsider the obligatory vulnerability reporting mechanisms built into Article 11 of the EU’s proposed Cyber-Resilience Act (CRA). As we’ve pointed out before, this reporting obligation raises major cybersecurity concerns. Broadening the knowledge of unpatched vulnerabilities to a larger audience will increase the risk of exploitation, and forcing software publishers to report these vulnerabilities to government regulators introduces the possibility of governments adding them to their offensive arsenals. These aren’t just theoretical threats: vulnerabilities stored on Intelligence Community infrastructure have been breached by hackers before.

Technology companies and others who create, distribute, and patch software are in a tough position. The intention of the CRA is to protect the public from companies who shirk their responsibilities by leaving vulnerabilities unpatched and their customers open to attack. But companies and software publishers who do the right thing by treating security vulnerabilities as well-guarded secrets until a proper fix can be applied and deployed now face an obligation to disclose vulnerabilities to regulators within 24 hours of exploitation. This significantly increases the danger these vulnerabilities present to the public. As the letter points out, the CRA “already requires software publishers to mitigate vulnerabilities without delay” separate from the reporting obligation. The letter also points out that this reporting mechanism may interfere with the collaboration and trusted relationship between companies and security researchers who work with companies to produce a fix.

The letter suggests either removing this requirement entirely or changing the reporting obligation to a 72-hour window after patches are made and deployed. It also calls on European law- and policy-makers to prohibit the use of reported vulnerabilities “for intelligence, surveillance, or offensive purposes.” These changes would go a long way toward ensuring that security vulnerabilities discovered by software publishers don’t wind up being further exploited by falling into the wrong hands.

Separately, EFF (and others) have pointed out the dangers the CRA presents to open-source software developers by making them liable for vulnerabilities in their software if they so much as solicit donations for their efforts. The obligatory reporting mechanism and open-source liability clauses of the CRA must be changed or removed. Otherwise, software publishers and open-source developers who are doing a public service will fall under a burdensome and undue liability.
