
EU Tech Regulation—Good Intentions, Unclear Consequences: 2024 in Review

For a decade, the EU has served as the regulatory frontrunner for online services and new technology. Over the past two EU mandates (terms), the EU Commission introduced a wave of regulations covering many sectors, with Big Tech at the center of its focus. As the EU seeks to regulate the world’s largest tech companies, the world is taking notice, and debates about the landmark Digital Markets Act (DMA) and Digital Services Act (DSA) have spread far beyond Europe.

The DSA’s focus is the governance of online content. It requires increased transparency in content moderation while holding platforms accountable for their role in disseminating illegal content. 

For “very large online platforms” (VLOPs), the DSA imposes a complex challenge: addressing “systemic risks,” meaning those arising from their platforms’ underlying design and rules as well as from how the public uses these services. Measures to address these risks often pull in opposite directions. VLOPs must tackle illegal content and address public security concerns while simultaneously upholding fundamental rights, such as freedom of expression, and while weighing impacts on electoral processes and more nebulous issues like “civic discourse.” Striking this balance is no mean feat, and the role of regulators and civil society in guiding and monitoring this process remains unclear.

As you can see, the DSA is trying to walk a fine line between addressing safety concerns and serving the priorities of the market. It imposes uniform rules on platforms that are meant to ensure fairness for individual users, but without constraining the platforms’ operations so tightly that they can’t innovate and thrive.

The DMA, on the other hand, operates entirely at the macro level: it focuses not on the rights of individual users, but on the obligations of, and restrictions on, the largest, most dominant platforms.

The DMA concerns itself with a group of “gatekeeper” platforms that control other businesses’ access to digital markets. For these gatekeepers, the DMA imposes a set of rules that are supposed to ensure “contestability” (that is, making sure that upstarts can contest gatekeepers’ control and maybe overthrow their power) and “fairness” for digital businesses.  

Together, the DSA and DMA promise a safer, fairer, and more open digital ecosystem. 

As 2024 comes to a close, important questions remain: How effectively have these laws been enforced? Have they delivered actual benefits to users?

Fairness Regulation: Ambition and High-Stakes Clashes 

There’s a lot to like in the DMA’s rules on fairness, privacy and choice...if you’re a technology user. If you’re a tech monopolist, those rules are a nightmare come true. 

Predictably, the DMA was inaugurated with a no-holds-barred dirty fight between the biggest US tech giants and European enforcers.  

Take commercial surveillance giant Meta: the company’s mission is to relentlessly gather, analyze and abuse your personal information, without your consent or even your knowledge. In 2016, the EU passed its landmark privacy law, called the General Data Protection Regulation. The GDPR was clearly intended to halt Facebook’s romp through the most sensitive personal information of every European. 

In response, Facebook simply pretended the GDPR didn’t say what it clearly said, and went on merrily collecting Europeans’ information without their consent. Facebook’s defense was that it was contractually obliged to collect this information: its terms and conditions represented a promise to users to show them surveillance ads, and if it didn’t gather all that information, it would be breaking that promise.

The DMA strengthens the GDPR by clarifying the blindingly obvious point that a privacy law exists to protect your privacy. That means that Meta’s services (Facebook, Instagram, Threads, and its “metaverse,” snicker) are no longer allowed to plunder your private information. They must get your consent.

In response, Meta announced that it would create a new paid tier for people who don’t want to be spied on, and thus anyone who continues to use the service without paying for it is “consenting” to be spied on. The DMA explicitly bans these “Pay or OK” arrangements, but then, the GDPR banned Meta’s spying, too. Zuckerberg and his executives are clearly expecting that they can run the same playbook again. 

Apple, too, is daring the EU to make good on its threats. Ordered to open up its iOS devices (iPhones, iPads and other mobile devices) to third-party app stores, the company cooked up a Kafkaesque maze of junk fees, punitive contractual clauses, and unworkable conditions and declared itself to be in compliance with the DMA.  

For all its intransigence, Apple is getting off extremely lightly. In an absurd turn of events, Apple’s iMessage system was exempted from the DMA’s interoperability requirements (which would have forced Apple to allow other messaging systems to connect to iMessage and vice-versa). The EU Commission decided that Apple’s iMessage, a dominant platform that the company’s CEO openly boasts about as a source of lock-in, was not a “gatekeeper platform.”

Platform regulation: A delicate balance 

For regulators and the public, the growing power of online platforms has sparked concerns: how can we address harmful content while also protecting platforms from being pushed to over-censor, so that freedom of expression isn’t in the firing line?

EFF has advocated for fundamental principles like “transparency,” “openness,” and “technological self-determination.” In our European work, we always emphasize that new legislation should preserve, not undermine, the protections that have served the internet well. Keep what works, fix what is broken.  

In the DSA, the EU got it right, with a focus on platforms’ processes rather than on speech control. The DSA has rules for reporting problematic content, structuring terms of use, and responding to erroneous content removals. That’s the right way to do platform governance! 

But that doesn’t mean we’re not worried about the DSA’s new obligations for tackling illegal content and systemic risks, broad goals that could easily lead to enforcement overreach and censorship. 

In 2024, our fears were realized when the DSA’s ambiguity about how systemic risks should be mitigated created a new, politicized enforcement problem. Then-Commissioner Thierry Breton sent a letter to Twitter saying that, under the DSA, the platform had an obligation to remove content related to far-right xenophobic riots in the UK and to an upcoming meeting between Donald Trump and Elon Musk. The letter sparked widespread concern that the DSA could be used as a tool allowing bureaucrats to decide which political speech could and could not take place online. Breton’s letter sidestepped key safeguards in the DSA: the Commissioner ignored the question of “systemic risks” and instead focused on individual pieces of content, blurred the DSA’s critical line between “illegal” and “harmful” content, and ignored the territorial limits of the DSA by demanding content takedowns that reached outside the EU.

Make no mistake: online election disinformation and misinformation can have serious real-world consequences, both in the U.S. and globally. This is why EFF supported the EU Commission’s initiative to gather input on measures platforms should take to mitigate risks linked to disinformation and electoral processes. Together with ARTICLE 19, we submitted comments to the EU Commission on future guidelines for platforms. In our response, we recommended that the guidelines prioritize best practices instead of policing speech. Additionally, we recommended that DSA risk assessment and mitigation compliance evaluations prioritize ensuring respect for fundamental rights.

The typical way many platforms address organized or harmful disinformation is by removing content that violates community guidelines, a measure trusted by millions of EU users. But despite concerns raised by EFF and other civil society groups, a new EU law, the EU Media Freedom Act, enforces a 24-hour content moderation exemption for media, effectively making platforms host content by force. While EFF successfully pushed for crucial changes and stronger protections, we remain concerned about the real-world challenges of enforcement.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.


A Fundamental-Rights Centered EU Digital Policy: EFF’s Recommendations 2024-2029

The European Union (EU) is a hotbed for tech regulation that often has ramifications for users globally.  The focus of our work in Europe is to ensure that EU tech policy is made responsibly and lives up to its potential to protect users everywhere. 

As the new mandate of the European institutions begins, a period in which newly elected policymakers set legislative priorities for the coming years, EFF today published recommendations for a European tech policy agenda that centers on fundamental rights, empowers users, and fosters fair competition. These principles will guide our work in the EU over the next five years. Building on our previous work and successes in the EU, we will continue to advocate for users and work to ensure that technology supports freedom, justice, and innovation for all people of the world.

Our policy recommendations cover social media platform intermediary liability, competition and interoperability, consumer protection, privacy and surveillance, and AI regulation. Here’s a sneak peek:  

  • The EU must ensure that the enforcement of platform regulation laws like the Digital Services Act and the European Media Freedom Act is centered on the fundamental rights of users in the EU and beyond.
  • The EU must create the conditions for fair digital markets that foster choice, innovation, and fundamental rights. Achieving this requires enforcing the user-rights-centered provisions of the Digital Markets Act, promoting app store freedom, user choice, and interoperability, and countering AI monopolies. 
  • The EU must adopt a privacy-first approach to fighting online harms like targeted ads and deceptive design, and it must protect children online without resorting to harmful age verification methods that undermine the fundamental rights of all users. 
  • The EU must protect users’ rights to secure, encrypted, and private communication, protect against surveillance everywhere, stay clear of new data retention mandates, and prioritize the rights-respecting enforcement of the AI Act. 

Read on for our full set of recommendations.

EFF and Partners to EU Commissioner: Prioritize User Rights, Avoid Politicized Enforcement of DSA Rules

EFF, Access Now, and Article 19 have written to EU Commissioner for Internal Market Thierry Breton calling on him to clarify his understanding of “systemic risks” under the Digital Services Act, and to set a high standard for the protection of fundamental rights, including freedom of expression and of information. The letter was in response to Breton’s own letter addressed to X, in which he urged the platform to take action to ensure compliance with the DSA in the context of far-right riots in the UK as well as the conversation between US presidential candidate Donald Trump and X CEO Elon Musk, which was scheduled to be, and was in fact, live-streamed hours after his letter was posted on X. 

Clarification is necessary because Breton’s letter otherwise reads as a serious overreach of EU authority, and transforms the systemic risks-based approach into a generalized tool for censoring disfavored speech around the world. By specifically referencing the streaming event between Trump and Musk on X, Breton’s letter undermines one of the core principles of the DSA: to ensure fundamental rights protections, including freedom of expression and of information, a principle noted in Breton’s letter itself.

The DSA Must Not Become A Tool For Global Censorship

The letter plays into some of the worst fears of the DSA’s critics: that it would be used by EU regulators as a global censorship tool rather than as a means of addressing societal risks in the EU.

The DSA requires very large online platforms (VLOPs) to assess the systemic risks that stem from “the functioning and use made of their services in the [European] Union.” VLOPs are then also required to adopt “reasonable, proportionate and effective mitigation measures” that are “tailored to the systemic risks identified.” The emphasis on systemic risks was intended, at least in part, to alleviate concerns that the DSA would be used to address individual incidents of dissemination of legal, but concerning, online speech. It was one of the limitations that civil society groups concerned with preserving a free and open internet worked hard to incorporate.

Breton’s letter troublingly states that he is currently monitoring “debates and interviews in the context of elections” for the “potential risks” they may pose in the EU. But such debates and interviews with electoral candidates, including the Trump-Musk interview, are clearly matters of public concern—the types of publication that are deserving of the highest levels of protection under the law. Even if one has concerns about a specific event, dissemination of information that is highly newsworthy, timely, and relevant to public discourse is not in itself a systemic risk.

People seeking information online about elections have a protected right to view it, even through VLOPs. The dissemination of this content should not be within the EU’s enforcement focus under the threat of non-compliance procedures, and risks associated with such events should be analyzed with care. Yet Breton’s letter asserts that such publications are actually under EU scrutiny. And it is entirely unclear what proactive measures a VLOP should take to address a future speech event without resorting to general monitoring and disproportionate content restrictions. 

Moreover, Breton’s letter fails to distinguish between “illegal” and “harmful content” and implies that the Commission favors content-specific restrictions of lawful speech. The European Commission has itself recognized that “harmful content should not be treated in the same way as illegal content.” Breton’s tweet that accompanies his letter refers to the “risk of amplification of potentially harmful content.” His letter seems to use the terms interchangeably. Importantly, this is not just a matter of differences in the legal protections for speech between the EU, the UK, the US, and other legal systems. The distinction, and the protection for legal but harmful speech, is a well-established global freedom of expression principle. 

Lastly, we are concerned that the Commission is reaching beyond its geographic mandate. It is not clear how events that occur outside the EU are linked to risks and societal harm to people who live within the EU, nor what actions the EU Commission expects VLOPs to take to address these risks. The letter itself admits that the assessment is still in progress, and the harm merely a possibility. EFF and partners within the DSA Human Rights Alliance have long advocated for a human rights-centered enforcement of the DSA that also considers the law’s global effects. It is time for the Commission to prioritize its enforcement actions accordingly.

Read the full letter here.

Disinformation and Elections: EFF and ARTICLE 19 Submit Key Recommendations to EU Commission

Global Elections and Platform Responsibility

This year is a major one for elections around the world, with pivotal races in the U.S., the UK, the European Union, Russia, and India, to name just a few. Social media platforms play a crucial role in democratic engagement by enabling users to participate in public discourse and by providing access to information, especially as public figures increasingly engage with voters directly. Unfortunately, elections also attract a sometimes dangerous amount of disinformation, filling users' news feeds with ads touting conspiracy theories about candidates, false news stories about stolen elections, and so on.

Online election disinformation and misinformation can have real-world consequences in the U.S. and all over the world. The EU Commission and other regulators are therefore formulating measures platforms could take to address disinformation related to elections.

Given their dominance over the online information space, providers of Very Large Online Platforms (VLOPs), as sites with over 45 million users in the EU are called, have unique power to influence outcomes. Platforms are driven by economic incentives that may not align with democratic values, and that disconnect may be embedded in the design of their systems. For example, features like engagement-driven recommender systems may prioritize and amplify disinformation, divisive content, and incitement to violence. That effect, combined with a significant lack of transparency and the use of targeting techniques, can too easily undermine free, fair, and well-informed electoral processes.
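To make that mechanism concrete, here is a minimal, purely illustrative sketch of engagement-driven ranking; the weights, field names, and scoring function are hypothetical and are not drawn from any real platform’s recommender system. The point is simply that when the ranking objective is predicted engagement, content that provokes the strongest reactions, including divisive or false material, rises to the top by construction.

```python
# Toy illustration of engagement-driven ranking; the weights and fields are
# hypothetical and not taken from any real platform.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float
    predicted_shares: float
    predicted_watch_seconds: float

def engagement_score(post: Post) -> float:
    # The objective optimizes predicted engagement, not accuracy or civility.
    return (1.0 * post.predicted_clicks
            + 2.0 * post.predicted_shares
            + 0.1 * post.predicted_watch_seconds)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Content that provokes the strongest reactions, including divisive or
    # false material, surfaces first simply because it scores higher.
    return sorted(posts, key=engagement_score, reverse=True)
```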

Digital Services Act and EU Commission Guidelines

The EU Digital Services Act (DSA) contains a set of sweeping regulations about online-content governance and responsibility for digital services that make X, Facebook, and other platforms subject in many ways to the European Commission and national authorities. It focuses on content moderation processes on platforms, limits targeted ads, and enhances transparency for users. However, the DSA also grants considerable power to authorities to flag content and investigate anonymous users, powers that they may be tempted to misuse with elections looming. The DSA also obliges VLOPs to assess and mitigate systemic risks, but it is unclear what those obligations mean in practice. Much will depend on how social media platforms interpret their obligations under the DSA, and how European Union authorities enforce the regulation.

We therefore support the initiative by the EU Commission to gather views about what measures the Commission should call on platforms to take to mitigate specific risks linked to disinformation and electoral processes.

Together with ARTICLE 19, we have submitted comments to the EU Commission on future guidelines for platforms. In our response, we recommend that the guidelines prioritize best practices, instead of policing speech. Furthermore, DSA risk assessment and mitigation compliance evaluations should focus primarily on ensuring respect for fundamental rights. 

We further argue against using watermarking of AI content to curb disinformation, and caution against the draft guidelines’ broadly phrased recommendation that platforms should exchange information with national authorities. Any such exchanges should take care to respect human rights, beginning with a transparent process.  We also recommend that the guidelines pay particular attention to attacks against minority groups or online harassment and abuse of female candidates, lest such attacks further silence those parts of the population who are already often denied a voice.

EFF and ARTICLE 19 Submission: https://www.eff.org/document/joint-submission-euelections

European Court of Human Rights Confirms: Weakening Encryption Violates Fundamental Rights

In a milestone judgment, Podchasov v. Russia, the European Court of Human Rights (ECtHR) has ruled that weakening encryption can lead to general and indiscriminate surveillance of the communications of all users and violates the human right to privacy.

In 2017, the landscape of digital communication in Russia faced a pivotal moment when the government required Telegram Messenger LLP and other “internet communication” providers to store all communication data—and content—for specified durations. These providers were also required to supply law enforcement authorities with users’ data, the content of their communications, as well as any information necessary to decrypt user messages. The FSB (the Russian Federal Security Service) subsequently ordered Telegram to assist in decrypting the communications of specific users suspected of engaging in terrorism-related activities.

Telegram opposed this order on the grounds that it would create a backdoor that would undermine encryption for all of its users. As a result, Russian courts fined Telegram and ordered the blocking of its app within the country. The controversy extended beyond Telegram, drawing in numerous users who contested the disclosure orders in Russian courts. A Russian citizen, Mr Podchasov, escalated the issue to the European Court of Human Rights (ECtHR), arguing that forced decryption of user communication would infringe on the right to private life under Article 8 of the European Convention on Human Rights (ECHR), which reads as follows:

Everyone has the right to respect for his private and family life, his home and his correspondence (Article 8 ECHR, right to respect for private and family life, home and correspondence) 

EFF has always stood against government intrusion into the private lives of users and advocated for strong privacy guarantees, including the right to confidential communication. Encryption not only safeguards users’ privacy but also protects their right to freedom of expression protected under international human rights law. 

In a great victory for privacy advocates, the ECtHR agreed. The Court found that the requirement of continuous, blanket storage of private user data interferes with the right to privacy under the Convention, emphasizing that the possibility for national authorities to access these data is a crucial factor in determining a human rights violation [at 53]. The Court identified the inherent risks of arbitrary government action in secret surveillance in the present case and found again, following its stance in Roman Zakharov v. Russia, that the relevant legislation failed to live up to quality-of-law standards and lacked adequate and effective safeguards against misuse [75]. Turning to a potential justification for such interference, the ECtHR emphasized the need for a careful balancing test that considers the use of modern data storage and processing technologies and weighs the potential benefits against important private-life interests [62-64].

In addressing the State mandate for service providers to submit decryption keys to security services, the Court's deliberations culminated in the following key findings [76-80]:

  1. Encryption is important for protecting the right to private life and other fundamental rights, such as freedom of expression: The ECtHR emphasized the importance of encryption technologies for safeguarding the privacy of online communications. Encryption safeguards and protects the right to private life generally while also supporting the exercise of other fundamental rights, such as freedom of expression.
  2. Encryption as a shield against abuses: The Court emphasized the role of encryption to provide a robust defense against unlawful access and generally “appears to help citizens and businesses to defend themselves against abuses of information technologies, such as hacking, identity and personal data theft, fraud and the improper disclosure of confidential information.” The Court held that this must be given due consideration when assessing measures which could weaken encryption.
  3. Orders to decrypt communications weaken encryption for all users: The ECtHR established that the need to decrypt Telegram's "secret chats" requires the weakening of encryption for all users. Taking note again of the dangers of restricting encryption described by many experts in the field, the Court held that backdoors could be exploited by criminal networks and would seriously compromise the security of all users’ electronic communications. 
  4. Alternatives to decryption: The ECtHR took note of a range of alternative solutions to compelled decryption that would not weaken the protective mechanisms, such as forensics on seized devices and better-resourced policing.  

In light of these findings, the Court held that the mandate to decrypt end-to-end encrypted communications risks weakening the encryption mechanism for all users, which was disproportionate to the legitimate aims pursued.

In summary [80], the Court concluded that the retention of, and unrestricted state access to, internet communication data, coupled with decryption requirements, cannot be regarded as necessary in a democratic society, and are thus unlawful. It emphasized that direct access by authorities to user data on a generalized basis and without sufficient safeguards impairs the very essence of the right to private life under the Convention. The Court also highlighted briefs filed by the European Information Society Institute (EISI) and Privacy International, which provided insight into the workings of end-to-end encryption and explained why mandated backdoors represent an illegal and disproportionate measure.

Impact of the ECtHR ruling on current policy developments 

The ruling is a landmark judgment, which will likely draw new normative lines about human rights standards for private and confidential communication. We are currently supporting Telegram in its parallel complaint to the ECtHR, contending that blocking its app infringes upon fundamental rights. As part of a collaborative effort of international human rights and media freedom organisations, we have submitted a third-party intervention to the ECtHR, arguing that blocking an entire app is a serious and disproportionate restriction on freedom of expression. That case is still pending.

The Podchasov ruling also directly challenges ongoing efforts in Europe to weaken encryption to allow access and scanning of our private messages and pictures.

For example, the UK’s controversial Online Safety Act creates the risk that online platforms will use software to search all users’ photos, files, and messages, scanning for illegal content. We recently submitted comments to the relevant UK regulator (Ofcom) urging it to avoid any weakening of encryption when this law becomes operational.

In the EU, we are concerned that the European Commission’s message-scanning proposal (CSAR) would be a disaster for online privacy. It would allow EU authorities to compel online services to scan users’ private messages, compare users’ photos against law enforcement databases, or use error-prone AI algorithms to detect criminal behavior. Such detection measures will inevitably lead to dangerous and unreliable client-side scanning practices, undermining the essence of end-to-end encryption. As the ECtHR deems general user scanning disproportionate, specifically criticizing measures that weaken existing privacy standards, forcing platforms like WhatsApp or Signal to weaken security by inserting a vulnerability into all users’ devices to enable message scanning must be considered unlawful.
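For readers unfamiliar with the term, here is a minimal, purely conceptual sketch of why client-side scanning is widely considered incompatible with end-to-end encryption. Every name in it (KNOWN_FINGERPRINTS, scan_then_encrypt, report_to_authority) is hypothetical, and it does not describe the CSAR text or any real messenger; real proposals involve perceptual hashing or machine-learning classifiers rather than exact hashes. The essential point is that content is inspected on the user’s device before encryption is applied, so the guarantee that only the sender and recipient can read a message no longer holds.

```python
# Purely illustrative sketch of client-side scanning; not the CSAR proposal
# and not any real messenger's code. Real systems would use perceptual hashes
# or ML classifiers rather than exact SHA-256 matches.
import hashlib
from typing import Callable, Optional

# Hypothetical database of content fingerprints supplied by an authority.
KNOWN_FINGERPRINTS = {"fingerprint-of-known-illegal-image"}

def fingerprint(content: bytes) -> str:
    # Stand-in for a content fingerprint.
    return hashlib.sha256(content).hexdigest()

def report_to_authority(content: bytes) -> None:
    # Placeholder: in a deployed system this would send a report off-device,
    # which is precisely the bypass of end-to-end confidentiality at issue.
    pass

def scan_then_encrypt(plaintext: bytes,
                      encrypt: Callable[[bytes], bytes]) -> Optional[bytes]:
    """Inspect content on the device *before* it is end-to-end encrypted.

    Because the check happens pre-encryption, the encryption no longer
    guarantees that only sender and recipient can see the message; any
    false positive or expanded fingerprint list exposes private content.
    """
    if fingerprint(plaintext) in KNOWN_FINGERPRINTS:
        report_to_authority(plaintext)
        return None  # message blocked or flagged instead of sent
    return encrypt(plaintext)
```

The sketch makes the design point plain: the scanning step sits outside the encrypted channel, so strengthening the encryption itself does nothing to protect users from it.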

The EU regulation proposal is likely to be followed by other proposals to grant law enforcement access to encrypted data and communications. An EU high-level expert group on ‘access to data for effective law enforcement’ is expected to make policy recommendations to the next EU Commission in mid-2024.

We call on lawmakers to take the Court of Human Rights ruling seriously: blanket and indiscriminate scanning of user communication and the general weakening of encryption for users is unacceptable and unlawful. 

Fighting European Threats to Encryption: 2023 Year in Review 

Private communication is a fundamental human right. In the online world, the best tool we have to defend this right is end-to-end encryption. Yet throughout 2023, politicians across Europe attempted to undermine encryption, seeking to access and scan our private messages and pictures. 

But we pushed back in the EU, and so far, we’ve succeeded. EFF spent this year fighting hard against an EU proposal (text) that, if it became law, would have been a disaster for online privacy in the EU and throughout the world. In the name of fighting online child abuse, the European Commission, the EU’s executive body, put forward a draft bill that would allow EU authorities to compel online services to scan user data and check it against law enforcement databases. The proposal would have pressured online services to abandon end-to-end encryption. The Commission even suggested using AI to rifle through peoples’ text messages, leading some opponents to call the proposal “chat control.”

EFF has been opposed to this proposal since it was unveiled last year. We joined together with EU allies and urged people to sign the “Don’t Scan Me” petition. We lobbied EU lawmakers and urged them to protect their constituents’ human right to have a private conversation—backed up by strong encryption. 

Our message broke through. In November, a key EU committee adopted a position that bars mass scanning of messages and protects end-to-end encryption. It also bars mandatory age verification, which would have amounted to a mandate to show ID before you get online; age verification can erode a free and anonymous internet for both kids and adults. 

We’ll continue to monitor the EU proposal as attention shifts to the Council of the EU, the second decision-making body of the EU. Despite several Member States still supporting widespread surveillance of citizens, there are promising signs that such a measure won’t get majority support in the Council. 

Make no mistake—the hard-fought compromise in the European Parliament is a big victory for EFF and our supporters. The governments of the world should understand clearly: mass scanning of peoples’ messages is wrong, and at odds with human rights. 

A Wrong Turn in the U.K.

EFF also opposed the U.K.’s Online Safety Bill (OSB), which passed and became the Online Safety Act (OSA) this October, after more than four years on the British legislative agenda. The stated goal of the OSB was to make the U.K. the world’s “safest place” to use the internet, but the bill’s more than 260 pages actually outline a variety of ways to undermine our privacy and speech. 

The OSA requires platforms to take action to prevent individuals from encountering certain illegal content, which will likely mandate the use of intrusive scanning systems. Even worse, it empowers the British government, in certain situations, to demand that online platforms use government-approved software to scan for illegal content. The U.K. government said that content will only be scanned to check for specific categories of content. In one of the final OSB debates, a representative of the government noted that orders to scan user files “can be issued only where technically feasible,” as determined by the U.K. communications regulator, Ofcom. 

But as we’ve said many times, there is no middle ground to content scanning and no “safe backdoor” if the internet is to remain free and private. Either all content is scanned and all actors—including authoritarian governments and rogue criminals—have access, or no one does. 

Despite our opposition, carried out in close coordination with civil society groups in the UK, the bill passed in September with its anti-encryption measures intact. But the story doesn't end here. The OSA remains vague about what exactly it requires of platforms and users alike. Ofcom must now take the OSA and, over the coming year, draft regulations to operationalize the legislation.

The public understands better than ever that government efforts to “scan it all” will always undermine encryption, and prevent us from having a safe and secure internet. EFF will monitor Ofcom’s drafting of the regulation, and we will continue to hold the UK government accountable to the international and European human rights protections to which it is a signatory.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

The Latest EU Media Freedom Act Agreement Is a Bad Deal for Users

6 December 2023 at 14:23

The European Parliament and Member States’ representatives last week negotiated a controversial special status for media outlets that are active on large online platforms. The EU Media Freedom Act (EMFA), though well-intended, has significant flaws. By creating a special class of privileged self-declared media providers whose content cannot be removed from big tech platforms, the law not only changes company policies but risks harming users in the European Union (EU) and beyond. 

Fostering Media Plurality: Good Intentions 

Last year, the EU Commission presented the EMFA as a way to bolster media pluralism in the EU. It promised increased transparency about media ownership and safeguards against government surveillance and the use of spyware against journalists—real dangers that EFF has warned against for years. Some of these aspects are still in flux and remain up for negotiation, but the political agreement on EMFA’s content moderation provisions could erode public trust in media and jeopardize the integrity of information channels. 

Content Hosting by Force: Bad Consequences 

Millions of EU users trust that online platforms will take care of content that violates community standards. But despite concerns raised by EFF and other civil society groups, Article 17 of the EMFA enforces a 24-hour content moderation exemption for media, effectively making platforms host content by force.

This “must carry” rule prevents large online platforms like X or Meta, owner of Facebook, Instagram, and WhatsApp, from removing or flagging media content that breaches community guidelines. If the deal becomes law, it could undermine equality of speech, fuel disinformation, and threaten marginalized groups. It also poses important concerns about government interference in editorial decisions.

Imagine signing up to a social media platform committed to removing hate speech, only to find that EU regulations prevent platforms from taking any action against it. Platforms must instead create a special communication channel to discuss content restrictions with news providers before any action is taken. This approach not only undermines platforms’ autonomy in enforcing their terms of use but also jeopardizes the safety of marginalized groups, who are often targeted by hate speech and propaganda. This policy could also allow orchestrated disinformation to remain online, undermining one of the EMFA’s core goals: to provide more “reliable sources of information to citizens.”

Bargaining Hell: Platforms and Media Companies Negotiating Content  

Not all media providers will receive this special status. Media actors must self-declare their status on platforms, and demonstrate adherence to recognized editorial standards or affirm compliance with regulatory requirements. Platforms will need to ensure that most of the reported information is publicly accessible. Also, Article 17 is set to include a provision on AI-generated content, with specifics still under discussion. This new mechanism puts online platforms in a powerful yet precarious position of deciding over the status of a wide range of media actors. 

The approach of the EU Media Freedom Act effectively leads to a perplexing bargaining situation where influential media outlets and platforms negotiate over which content remains visible—Christoph Schmon, EFF International Policy Director

It’s likely that the must-carry approach will lead to a perplexing bargaining situation in which influential media outlets and platforms negotiate over which content remains visible. Media outlets have strong pecuniary interests in pursuing a fast-track communication channel and making sure that their content is always visible, potentially at the expense of smaller providers.

Implementation Challenges 

It’s positive that negotiators listened to some of our concerns and added language to safeguard media independence from political parties and governments. However, we remain concerned about the enforcement reality and the potential exploitation of the self-declaration mechanism, which could undermine the equality of free speech and democratic debate. While lawmakers stipulated in Article 17 that the EU Digital Services Act remains intact and that platforms are free to shorten the suspension period in crisis situations, the practical implementation of the EMFA will be a challenge. 
