
Brazil’s Internet Intermediary Liability Rules Under Trial: What Are the Risks?

December 11, 2024 at 09:00

The Brazilian Supreme Court is on the verge of deciding whether digital platforms can be held liable for third-party content even without a judicial order requiring removal. A panel of eleven justices is examining two cases jointly, and one of them directly challenges whether Brazil’s internet intermediary liability regime for user-generated content aligns with the country’s Federal Constitution or fails to meet constitutional standards. The outcome of these cases could seriously undermine important free expression and privacy safeguards if it leads to general content-monitoring obligations or broadly expands notice-and-takedown mandates.

The court’s examination revolves around Article 19 of Brazil’s Civil Rights Framework for the Internet (“Marco Civil da Internet”, Law n. 12.965/2014). The provision establishes that an internet application provider can only be held liable for third-party content if it fails to comply with a judicial order to remove the content. A notice-and-takedown exception to the provision applies in cases of copyright infringement, unauthorized disclosure of private images containing nudity or sexual activity, and content involving child sexual abuse. The first two exceptions are in Marco Civil, while the third one comes from a prior rule included in the Brazilian child protection law.

The court’s decision will set a precedent for lower courts regarding two main topics: whether Marco Civil’s internet intermediary liability regime is aligned with Brazil’s Constitution, and whether internet application providers are obligated to monitor online content they host and remove it when deemed offensive, without judicial intervention. Moreover, it could have a regional and cross-regional impact as lawmakers and courts look across borders at platform regulation trends amid global coordination initiatives.

After a public hearing held last year, the court’s sessions on the cases started in late November and, so far, only Justice Dias Toffoli, who is in charge of the Marco Civil constitutionality case, has concluded the presentation of his vote. The justice declared Article 19 unconstitutional and established the notice-and-takedown regime set in Article 21 of Marco Civil, which relates to unauthorized disclosure of private images, as the general rule for intermediary liability. According to his vote, the determination of liability must consider the activities the internet application provider has actually carried out and the degree of interference of those activities.

However, platforms could be held liable for certain content regardless of notification, leading to a monitoring duty. Examples include content considered criminal offenses, such as crimes against the democratic state, human trafficking, terrorism, racism, and violence against children and women. It also includes the publication of notoriously false or severely miscontextualized facts that lead to violence or have the potential to disrupt the electoral process. If there’s reasonable doubt, the notice-and-takedown rule under Marco Civil’s Article 21 would be the applicable regime.

The court session resumes today, but it’s still uncertain whether all eleven justices will reach a judgment by year’s end.

Some Background About Marco Civil’s Intermediary Liability Regime

The legislative intent back in 2014 to establish Article 19 as the general rule for internet application providers’ liability for user-generated content reflected civil society’s concerns over platform censorship. Faced with the risk of being held liable for user content, internet platforms generally prioritize their economic interests and security over preserving users’ protected expression, over-removing content to avoid legal battles and regulatory scrutiny. The enforcement overreach of copyright rules online was already a problem when the legislative discussion of Marco Civil took place. Lawmakers chose to rely on courts to balance the different rights at stake in removing or keeping user content online. The approval of Marco Civil had wide societal support and was considered a win for advancing users’ rights online.

The provision was in line with the standards of the Special Rapporteurs for Freedom of Expression of the United Nations and the Inter-American Commission on Human Rights (IACHR). In that regard, the then IACHR Special Rapporteur had clearly remarked that a strict liability regime creates strong incentives for private censorship and would run against the State’s duty to favor an institutional framework that protects and guarantees free expression under the American Convention on Human Rights. Notice-and-takedown regimes as the general rule also raised concerns about over-removal and the weaponization of notification mechanisms to censor protected speech.

A lot has happened since 2014. Big Tech platforms have consolidated their dominance, the internet ecosystem is more centralized, and algorithmic mediation of content distribution online has intensified, increasingly relying on a corporate surveillance structure. Nonetheless, the concerns Marco Civil reflects remain relevant, just as the balance struck by its intermediary liability rule remains a sound way of addressing them. As for current challenges, the changes to the liability regime suggested in Dias Toffoli’s vote will likely reinforce, rather than reduce, corporate surveillance, Big Tech’s predominance, and digital platforms’ power over online speech.

The Cases Under Trial and The Reach of the Supreme Court’s Decision

The two individual cases under analysis by the Supreme Court are more than a decade old. Both relate to the right to honor. In the first one, the plaintiff, a high school teacher, sued Google Brasil Internet Ltda to remove an online community created by students to offend her on the now defunct Orkut platform. She asked for the deletion of the community and compensation for moral damages, as the platform didn't remove the community after an extrajudicial notification. Google deleted the community following the decision of the lower court, but the judicial dispute about the compensation continued.

In the second case, the plaintiff sued Facebook after the company didn’t remove an offensive fake account impersonating her. The lawsuit sought to shut down the fake account, identify the account’s IP address, and obtain compensation for moral damages. As Marco Civil had already passed, the judge denied the request for moral compensation. Yet the appeals court found that Facebook could be liable for not removing the fake account after an extrajudicial notification, holding Marco Civil’s intermediary liability regime unconstitutional vis-à-vis Brazil’s constitutional protection of consumers.

Both cases reached the Supreme Court through two separate extraordinary appeals, now examined jointly. For the Supreme Court to analyze extraordinary appeals, it must identify and approve a “general repercussion” issue that unfolds from the individual case. As such, what is under analysis by the Brazilian Supreme Court in these appeals is not only the individual cases, but also the court’s understanding of the general repercussion issues involved. What the court stipulates in this regard will orient lower courts’ decisions in similar cases.

The two general repercussion issues under scrutiny are, then, the constitutionality of Marco Civil’s internet intermediary liability regime and whether internet application providers have the obligation to monitor published content and take it down when considered offensive, without judicial intervention. 

There’s a lot at stake for users’ rights online in the outcomes of these cases. 

The Many Perils and Pitfalls on the Way

Brazil’s platform regulation debate has heated up in the last few years. Concerns over the gigantic power of Big Tech platforms, the negative effects of their attention-driven business model, and revelations of plans and actions by the previous presidential administration to arbitrarily remain in power have inflamed discussions about regulating Big Tech. When the debate’s main legislative vehicle, draft bill 2630 (PL 2630), didn’t move forward in the Brazilian Congress, the Supreme Court’s pending cases gained traction as the available alternative for introducing changes.

We’ve written about intermediary liability trends around the globe, how to move forward, and the risk that changes to safe harbor regimes will end up reshaping intermediaries’ behavior in ways that ultimately harm freedom of expression and other rights for internet users.

One of these risks is relying on strict liability regimes to moderate user expression online. Holding internet application providers liable for user-generated content regardless of a notification means requiring them to put in place systems of content monitoring and filtering, with automated takedowns of potentially infringing content.

While platforms like Facebook, Instagram, X (formerly Twitter), TikTok, and YouTube already use AI tools to moderate and curate the sheer volume of content they receive every minute, the resources they have for doing so are not available to other, smaller internet application providers that host users’ expression. Making automated content monitoring a general obligation will likely intensify the concentration of the online ecosystem in just a handful of large platforms. Strict liability regimes also inhibit, or even endanger the existence of, less-centralized content moderation models, contributing yet again to entrenching Big Tech’s dominance and business model.

But the fact that Big Tech platforms already use AI tools to moderate and restrict content doesn’t mean they do it well. Automated content monitoring is hard at scale, and platforms constantly fail at purging content that violates their rules without sweeping up protected content. In addition to historical issues with AI-based detection of copyright infringement that have deeply undermined fair use rules, automated systems often flag and censor crucial information that should stay online.

Just to give a few examples: during the wave of protests in Chile, internet platforms wrongfully restricted content documenting the police’s harsh repression of demonstrations, having deemed it violent content. In Brazil, we saw similar concerns when Instagram censored images of the 2021 massacre in the Jacarezinho community, the most lethal police operation in Rio de Janeiro’s history. In other geographies, the quest to restrict extremist content has removed videos documenting human rights violations in conflicts in countries like Syria and Ukraine.

These are all examples of content similar to what could fit into Justice Toffoli’s list of speech subject to a strict liability regime. And while this regime shouldn’t apply in cases of reasonable doubt, platform companies won’t likely risk keeping such content up out of concern that a judge later decides it wasn’t a reasonable-doubt situation and orders them to pay damages. Digital platforms thus have a strong incentive to calibrate their AI systems to err on the side of censorship. And depending on how these systems operate, that means a strong incentive to conduct prior censorship potentially affecting protected expression, which defies Article 13 of the American Convention.

Setting the notice-and-takedown regime as the general rule for an intermediary’s liability also poses risks. While the company has the chance to analyze and decide whether to keep content online, again the incentive is to err on the side of taking it down to avoid legal costs.

Brazil’s own experience in courts shows how tricky the issue can be. InternetLab’s research based on rulings involving free expression online indicated that Brazilian courts of appeals denied content removal requests in more than 60% of cases. The Brazilian Association of Investigative Journalism (ABRAJI) has also highlighted data showing that, at some point in judicial proceedings, judges agreed with content removal requests in around half of the cases, some of which were reversed later on. This is especially concerning in honor-related cases. The more influential or powerful the person involved, the higher the chances of arbitrary content removal, flipping the public-interest logic of preserving access to information. We should not forget the companies that thrived by offering reputation management services built upon the use of takedown mechanisms to make critical content disappear online.

It’s important to underline that this ruling comes in the absence of digital procedural justice guarantees. While Justice Toffoli’s vote asserts platforms’ duty to provide specific notification channels, preferably electronic, to receive complaints about infringing content, there are no further specifications to avoid the misuse of notification systems. Article 21 of Marco Civil provides that notices must allow the specific identification of the contested content (generally understood as the URL) and include elements to verify that the complainant is the person offended. Beyond that, there is no further guidance on which details and justifications the notice should contain, or on whether the content’s author would have the opportunity, and a proper mechanism, to respond to or appeal the takedown request.

As we said before, we should not mix platform accountability with reinforcing digital platforms as points of control over people’s online expression and actions. This is a dangerous path considering the power big platforms already have and the increasing intermediation of digital technologies in everything we do. Unfortunately, the Supreme Court seems to be taking a direction that will emphasize such a role and dominant position, while also creating additional hurdles for smaller platforms and decentralized models to compete with the current digital giants.

New EFF Report Provides Guidance to Ensure Human Rights are Protected Amid Government Use of AI in Latin America

October 15, 2024 at 15:48


Governments increasingly rely on algorithmic systems to support consequential assessments and determinations about people’s lives, from judging eligibility for social assistance to trying to predict crime and criminals. Latin America is no exception. With the use of artificial intelligence (AI) posing human rights challenges in the region, EFF today released the report Inter-American Standards and State Use of AI for Rights-Affecting Determinations in Latin America: Human Rights Implications and Operational Framework.

This report draws on international human rights law, particularly standards from the Inter-American Human Rights System, to provide guidance on what state institutions must look out for when assessing whether and how to adopt artificial intelligence (AI) and automated decision-making (ADM) systems for determinations that can affect people’s rights.

We organized the report’s content, along with testimonies on current challenges from civil society experts on the ground, on our project landing page.

AI-based Systems Implicate Human Rights

The report comes amid deployment of AI/ADM-based systems by Latin American state institutions for services and decision-making that affect human rights. Colombians must undergo classification by Sisbén, which measures their degree of poverty and vulnerability, if they want to access social protection programs. News reports in Brazil have once again flagged the problems and perils of Córtex, an algorithm-powered surveillance system that cross-references various state databases with wide reach and poor controls. Risk-assessment systems seeking to predict school dropout, children’s rights violations, or teenage pregnancy have been integrated into government programs in countries like México, Chile, and Argentina. Different courts in the region have also implemented AI-based tools for a varied range of tasks.

EFF’s report aims to address two primary concerns: opacity and lack of human rights protections in state AI-based decision-making. Algorithmic systems are often deployed by state bodies in ways that obscure how decisions are made, leaving affected individuals with little understanding or recourse.

Additionally, these systems can exacerbate existing inequalities, disproportionately impacting marginalized communities without providing adequate avenues for redress. The lack of public participation in the development and implementation of these systems further undermines democratic governance, as affected groups are often excluded from meaningful decision-making processes relating to government adoption and use of these technologies.

This is at odds with the human rights protections most Latin American countries are required to uphold. A majority of states have committed to comply with the American Convention on Human Rights and the Protocol of San Salvador. Under these international instruments, they have the duty to respect human rights and prevent violations from occurring. States’ responsibilities under international human rights law as guarantors of rights, and people and social groups as rights holders—entitled to claim those rights and participate—are two basic tenets that must guide any legitimate use of AI/ADM systems by state institutions for consequential decision-making, as we underscore in the report.

Inter-American Human Rights Framework

Building on extensive research into the Inter-American Commission on Human Rights’ reports and the Inter-American Court of Human Rights’ decisions and advisory opinions, we map the human rights implications of government use of algorithmic systems and devise an operational framework for their due consideration.

We detail what states’ commitments under the Inter-American System mean when state bodies decide to implement AI/ADM technologies for rights-based determinations. We explain why this adoption must fulfill necessary and proportionate principles, and what this entails. We underscore what it means to have a human rights approach to state AI-based policies, including crucial redlines for not moving ahead with their deployment.

We elaborate on what states must observe to ensure critical rights in line with Inter-American standards. We look particularly at political participation, access to information, equality and non-discrimination, due process, privacy and data protection, freedoms of expression, association and assembly, and the right to a dignified life in connection to social, economic, and cultural rights.

Some of them embody principles that must cut across the different stages of AI-based policies or initiatives—from scoping the problem state bodies seek to address and assessing whether algorithmic systems can reliably and effectively contribute to achieving their goals, to continuously monitoring and evaluating their implementation.

These cross-cutting principles integrate the comprehensive operational framework we provide in the report for governments and civil society advocates in the region.

Transparency, Due Process, and Data Privacy Are Vital

Our report’s recommendations reinforce that states must ensure transparency at every stage of AI deployment. Governments must provide clear information about how these systems function, including the categories of data processed, performance metrics, and details of the decision-making flow, including human and machine interaction.

It is also essential to disclose important aspects of how they were designed, such as details on the model’s training and testing datasets. Moreover, decisions based on AI/ADM systems must have a clear, reasoned, and coherent justification. Without such transparency, people cannot effectively understand or challenge the decisions being made about them, and the risk of unchecked rights violations increases.

Leveraging due process guarantees is also covered. The report highlights that decisions made by AI systems often lack the transparency needed for individuals to challenge them. The lack of human oversight in these processes can lead to arbitrary or unjust outcomes. Ensuring that affected individuals have the right to challenge AI-driven decisions through accessible legal mechanisms and meaningful human review is a critical step in aligning AI use with human rights standards.

Transparency and due process relate to ensuring people can fully enjoy the rights that unfold from informational self-determination, including the right to know what data about them are contained in state records, where the data came from, and how it’s being processed.

The Inter-American Court recently recognized informational self-determination as an autonomous right protected by the American Convention. It grants individuals the power to decide when and to what extent aspects of their private life can be revealed, including their personal information. It is intrinsically connected to the free development of one’s personality, and any limitations must be legally established, and necessary and proportionate to achieve a legitimate goal.

Ensuring Meaningful Public Participation

Social participation is another cornerstone of the report’s recommendations. We emphasize that marginalized groups, who are most likely to be negatively affected by AI and ADM systems, must have a voice in how these systems are developed and used. Participatory mechanisms must not be mere box-checking exercises and are vital for ensuring that algorithmic-based initiatives do not reinforce discrimination or violate rights. Human Rights Impact Assessments and independent auditing are important vectors for meaningful participation and should be used during all stages of planning and deployment. 

Robust legal safeguards, appropriate institutional structures, and effective oversight, often neglected, are underlying conditions for any legitimate government use of AI for rights-based determinations. As AI continues to play an increasingly significant role in public life, the findings and recommendations of this report are crucial. Our aim is to make a timely and compelling contribution for a human rights-centric approach to the use of AI/ADM in public decision-making.

We’d like to thank the consultant Rafaela Cavalcanti de Alcântara for her work on this report, and Clarice Tavares, Jamila Venturini, Joan López Solano, Patricia Díaz Charquero, Priscilla Ruiz Guillén, Raquel Rachid, and Tomás Pomar for their insights and feedback to the report.

The full report is here.

In Historic Victory for Human Rights in Colombia, Inter-American Court Finds State Agencies Violated Human Rights of Lawyers Defending Activists

In a landmark ruling for fundamental freedoms in Colombia, the Inter-American Court of Human Rights found that for over two decades the state government harassed, surveilled, and persecuted members of a lawyers’ group that defends human rights defenders, activists, and indigenous people, putting the attorneys’ lives at risk.

The ruling is a major victory for civil rights in Colombia, which has a long history of abuse and violence against human rights defenders, including murders and death threats. The case involved the unlawful and arbitrary surveillance of members of the Jose Alvear Restrepo Lawyers Collective (CAJAR), a Colombian human rights organization defending victims of political persecution and community activists for over 40 years.

The court found that since at least 1999, Colombian authorities carried out a constant campaign of pervasive secret surveillance of CAJAR members and their families. The state violated their rights to life, personal integrity, private life, freedom of expression and association, and more, the court said. It noted the particular impact experienced by women defenders and those who had to leave the country amid threats, attacks, and harassment for representing victims.

The decision is the first by the Inter-American Court to find a State responsible for violating the right to defend human rights. The court is a human rights tribunal that interprets and applies the American Convention on Human Rights, an international treaty ratified by over 20 states in Latin America and the Caribbean. 

In 2022, EFF, Article 19, Fundación Karisma, and Privacy International, represented by Berkeley Law’s International Human Rights Law Clinic, filed an amicus brief in the case. EFF and partners urged the court to rule that Colombia’s legal framework regulating intelligence activity and the surveillance of CAJAR and their families violated a constellation of human rights and forced them to limit their activities, change homes, and go into exile to avoid violence, threats, and harassment. 

Colombia’s intelligence network was behind abusive surveillance practices in violation of the American Convention and did not prevent authorities from unlawfully surveilling, harassing, and attacking CAJAR members, EFF told the court. Even after Colombia enacted a new intelligence law, authorities continued to carry out unlawful communications surveillance against CAJAR members, using an expansive and invasive spying system to target and disrupt the work of not just CAJAR but other human rights defenders and journalists.

In examining Colombia’s intelligence law and surveillance actions, the court elaborated on key Inter-American and other international human rights standards, and advanced significant conclusions for the protection of privacy, freedom of expression, and the right to defend human rights. 

The court delved into criteria for intelligence gathering powers, limitations, and controls. It highlighted the need for independent oversight of intelligence activities and effective remedies against arbitrary actions. It also elaborated on standards for the collection, management, and access to personal data held by intelligence agencies, and recognized the protection of informational self-determination by the American Convention. We highlight some of the most important conclusions below.

Prior Judicial Order for Communications Surveillance and Access to Data

The court noted that actions such as covert surveillance, interception of communications, or collection of personal data constitute undeniable interference with the exercise of human rights, requiring precise regulations and effective controls to prevent abuse by state authorities. Its ruling recalled European Court of Human Rights case law establishing that “the mere existence of legislation allowing for a system of secret monitoring […] constitutes a threat to ‘freedom of communication among users of telecommunications services and thus amounts in itself to an interference with the exercise of rights’.”

Building on its ruling in Escher et al. v. Brazil, the Inter-American Court stated that

“[t]he effective protection of the rights to privacy and freedom of thought and expression, combined with the extreme risk of arbitrariness posed by the use of surveillance techniques […] of communications, especially in light of existing new technologies, leads this Court to conclude that any measure in this regard (including interception, surveillance, and monitoring of all types of communication […]) requires a judicial authority to decide on its merits, while also defining its limits, including the manner, duration, and scope of the authorized measure.” (emphasis added) 

According to the court, judicial authorization is needed when intelligence agencies intend to request personal information from private companies that, for various legitimate reasons, administer or manage this data. Similarly, prior judicial order is required for “surveillance and tracking techniques concerning specific individuals that entail access to non-public databases and information systems that store and process personal data, the tracking of users on the computer network, or the location of electronic devices.”  

The court said that “techniques or methods involving access to sensitive telematic metadata and data, such as email and metadata of OTT applications, location data, IP address, cell tower station, cloud data, GPS and Wi-Fi, also require prior judicial authorization.” Unfortunately, the court missed the opportunity to clearly differentiate between targeted and mass surveillance to explicitly condemn the latter.

The court had already recognized in Escher that the American Convention protects not only the content of communications but also any related information like the origin, duration, and time of the communication. But legislation across the region provides less protection for metadata compared to content. We hope the court's new ruling helps to repeal measures allowing state authorities to access metadata without a previous judicial order.

Indeed, the court emphasized that the need for a prior judicial authorization "is consistent with the role of guarantors of human rights that corresponds to judges in a democratic system, whose necessary independence enables the exercise of objective control, in accordance with the law, over the actions of other organs of public power.” 

To this end, the judicial authority is responsible for evaluating the circumstances around the case and conducting a proportionality assessment. The judicial decision must be well-founded and weigh all constitutional, legal, and conventional requirements to justify granting or denying a surveillance measure. 

Informational Self-Determination Recognized as an Autonomous Human Right 

In a landmark outcome, the court asserted that individuals are entitled to decide when and to what extent aspects of their private life can be revealed, which involves defining what type of information, including their personal data, others may get to know. This relates to the right of informational self-determination, which the court recognized as an autonomous right protected by the American Convention. 

“In the view of the Inter-American Court, the foregoing elements give shape to an autonomous human right: the right to informational self-determination, recognized in various legal systems of the region, and which finds protection in the protective content of the American Convention, particularly stemming from the rights set forth in Articles 11 and 13, and, in the dimension of its judicial protection, in the right ensured by Article 25.”  

The protections that Article 11 grants to human dignity and private life safeguard a person’s autonomy and the free development of their personality. Building on this provision, the court affirmed individuals’ self-determination regarding their personal information. In combination with the right to access information enshrined in Article 13, the court determined that people have the right to access and control their personal data held in databases.

The court has explained that the scope of this right includes several components. First, people have the right to know what data about them are contained in state records, where the data came from, how it got there, the purpose for keeping it, how long it’s been kept, whether and why it’s being shared with outside parties, and how it’s being processed. Next is the right to rectify, modify, or update their data if it is inaccurate, incomplete, or outdated. Third is the right to delete, cancel, and suppress their data in justified circumstances. Fourth is the right to oppose the processing of their data also in justified circumstances, and fifth is the right to data portability as regulated by law. 

According to the court, any exceptions to the right of informational self-determination must be legally established, necessary, and proportionate for intelligence agencies to carry out their mandate. In elaborating on the circumstances for full or partial withholding of records held by intelligence authorities, the court said any restrictions must be compatible with the American Convention. Holding back requested information must always be exceptional, limited in time, and justified according to specific and strict cases set by law. The protection of national security cannot serve as a blanket justification for denying access to personal information. “It is not compatible with Inter-American standards to establish that a document is classified simply because it belongs to an intelligence agency and not on the basis of its content,” the court said.

The court concluded that Colombia violated CAJAR members’ right to informational self-determination by arbitrarily restricting their ability to access and control their personal data within public bodies’ intelligence files.

The Vital Protection of the Right to Defend Human Rights

The court emphasized the autonomous nature of the right to defend human rights, finding that States must ensure people can freely, without limitations or risks of any kind, engage in activities aimed at the promotion, monitoring, dissemination, teaching, defense, advocacy, or protection of universally recognized human rights and fundamental freedoms. The ruling recognized that Colombia violated the CAJAR members' right to defend human rights.

For over a decade, human rights bodies and organizations have raised alarms and documented the deep challenges and perils that human rights defenders constantly face in the Americas. In this ruling, the court importantly reiterated their fundamental role in strengthening democracy. It emphasized that this role justifies a special duty of protection by States, which must establish adequate guarantees and facilitate the necessary means for defenders to freely exercise their activities. 

Therefore, proper respect for human rights requires States’ special attention to actions that limit or obstruct the work of defenders. The court has emphasized that threats and attacks against human rights defenders, as well as the impunity of perpetrators, have not only an individual but also a collective effect, insofar as society is prevented from knowing the truth about human rights violations under the authority of a specific State. 

Colombia’s Intelligence Legal Framework Enabled Arbitrary Surveillance Practices 

In our amicus brief, we argued that Colombian intelligence agents carried out unlawful communications surveillance of CAJAR members under a legal framework that failed to meet international human rights standards. As EFF and allies elaborated a decade ago in the Necessary and Proportionate Principles, international human rights law provides an essential framework for ensuring robust safeguards in the context of State communications surveillance, including intelligence activities. 

In the brief, we bolstered criticism made by CAJAR, Centro por la Justicia y el Derecho Internacional (CEJIL), and the Inter-American Commission on Human Rights, challenging Colombia’s claim that the Intelligence Law enacted in 2013 (Law n. 1621) is clear and precise, fulfills the principles of legality, proportionality, and necessity, and provides sufficient safeguards. EFF and partners highlighted that even after its passage, intelligence agencies have systematically surveilled, harassed, and attacked CAJAR members in violation of their rights. 

As we argued, that didn’t happen despite Colombia’s intelligence legal framework; rather, it was enabled by its flaws. We emphasized that the Intelligence Law gives authorities wide latitude to surveil human rights defenders, lacking provisions for prior, well-founded judicial authorization of specific surveillance measures and for robust independent oversight. We also pointed out that Colombian legislation failed to provide the necessary means for defenders to correct and erase their data unlawfully held in intelligence records. 

The court ruled that, as reparation, Colombia must adjust its intelligence legal framework to reflect Inter-American human rights standards. This means that intelligence norms must be changed to clearly establish the legitimate purposes of intelligence actions, the types of individuals and activities subject to intelligence measures, the level of suspicion needed to trigger surveillance by intelligence agencies, and the duration of surveillance measures. 

The reparations also call for Colombia to keep files and records of all steps of intelligence activities, “including the history of access logs to electronic systems, if applicable,” and deliver periodic reports to oversight entities. The legislation must also subject communications surveillance measures to prior judicial authorization, except in emergency situations. Moreover, Colombia needs to pass regulations for mechanisms ensuring the right to informational self-determination in relation to intelligence files. 

These are just some of the fixes the ruling calls for, and they represent a major win. Still, the court missed the opportunity to vehemently condemn state mass surveillance (which can occur under an ill-defined measure in Colombia’s Intelligence Law enabling spectrum monitoring), although Colombian courts will now have the chance to strike it down.

In all, the court ordered the state to take 16 reparation measures, including implementing a system for collecting data on violence against human rights defenders and investigating acts of violence against victims. The government must also publicly acknowledge responsibility for the violations. 

The Inter-American Court's ruling in the CAJAR case sends an important message to Colombia, and the region, that intelligence powers are only lawful and legitimate when there are solid and effective controls and safeguards in place. Intelligence authorities cannot act as if international human rights law doesn't apply to their practices.  

When they do, violations must be vigorously investigated and punished. The ruling elaborates on crucial standards that States must fulfill to make this happen. Only time will tell how closely Colombia and other States will apply the court's findings to their intelligence activities. What’s certain is the dire need to fix a system that helped Colombia become the deadliest country in the Americas for human rights defenders last year, with 70 murders, more than half of all such murders in Latin America. 

Recent Surveillance Revelations, Enduring Latin American Issues: 2023 Year in Review

December 25, 2023 at 12:39

The challenges in ensuring strong privacy safeguards, proper oversight of surveillance powers, and effective remedies for those arbitrarily affected continued during 2023 in Latin America. Let’s look at a few non-exhaustive examples.

We saw a scandal unveiling that Brazilian intelligence agents monitored the movements of politicians, journalists, lawyers, police officers, and judges. In Perú, leaked documents indicated negotiations between the government and a U.S. vendor of spying technologies. Amidst the Argentinian presidential elections, news of a thorny surveillance scheme broke. In México, media reports highlighted prosecutors’ controversial data requests targeting public figures. New revelations reinforced that Mexico’s change of government didn’t halt the use of Pegasus to spy on human rights defenders, while the trial over Pegasus abuses during the previous administration has finally begun.

Those recent surveillance stories have deep roots in legal and institutional weaknesses, often compounded by an entrenched culture of secrecy. While the challenges cited above are not (at all!) exclusive to Latin America, it remains an essential task to draw attention to the arbitrary surveillance cases that occasionally emerge, allowing broader societal scrutiny. 

The Opacity of Intelligence Activities and Privacy Loopholes

First revealed in March, the use of location-tracking software by intelligence forces in Brazil hit the headlines again in October, when a Federal Police investigation led to 25 search warrants and the arrest of two officials. The newspaper O Globo uncovered that during three years of former President Bolsonaro’s administration, officials of the Brazilian Intelligence Agency (Abin) used First Mile to monitor the movements of up to 10,000 cell phone owners every 12 months without any official protocol. According to O Globo, First Mile, developed by the Israeli company Cognyte, can locate an individual based on the position of devices using 2G, 3G, and 4G networks. By simply entering a person’s phone number, the system lets an operator follow the target's last known position on a map. It also provides targets’ movement histories and "real-time alerts" of their movements. 

News reports indicate that the system likely exploits Signaling System No. 7 (SS7), an international telecommunications protocol standard that defines how network elements in a telephone network exchange information and control signals. It’s by using the SS7 protocol that network operators are able to route telephone calls and SMS messages to the correct recipients. Yet security vulnerabilities in the SS7 protocol also enable attackers to find out a target’s location, among other malicious uses. While telecom companies have access to such data as part of their operations and may disclose it in response to law enforcement requests, tools like First Mile allow intelligence and police agents to skip this step. 

A high-ranking source at Abin told O Globo that the agency claimed to use the tool for "state security" purposes, on the grounds that there was a “legal limbo” around privacy protections for cell phone metadata. The primary issue the case underscores is the lack of robust regulation and oversight of intelligence activities in Brazil. Second, while Brazilian law indeed lacks strong explicit privacy protections for telephone metadata, access to real-time location data enjoys a higher standard, at least for criminal investigations. Moreover, Brazil counts on key constitutional data privacy safeguards and case law that can provide a solid basis to challenge the arbitrary use of tools like First Mile.

The Good and the Bad Guys Cross Paths

We should not disregard how the absence of proper controls, safeguards, and tech security measures opens the door not only to law enforcement and government abuses, but also to actions by malicious third parties, including in their dealings with political powers. 

The Titan software used in Mexico also exploits the SS7 protocol and combines location data with a trove of information it gets from credit bureau, government, telecom, and other databases. Vice News unveiled that Mexican cartels are allegedly piggybacking on police use of this system to track and target their enemies.

In Titan’s case, Vice News reported that by entering a first and last name, or a phone number, the platform gives access to a person’s Mexican ID, “including address, phone number, a log of calls made and received, a security background check showing if the person has an active or past warrant or has been in prison, credit information, and the option to geolocate the phone.” The piece points out there is an underground market of people selling Titan-enabled intel, with prices that can reach up to USD 9,000 per service.   

In turn, the surveillance scheme uncovered in Argentina doesn’t rely on specific software; it may involve hacking and apparently mixes different sources and techniques to spy on persons of interest. The lead character here is a former federal police officer who compiled over 1,000 folders on politicians, judges, journalists, union leaders, and more. Various news reports suggest the former police officer's spying services relate to his possible political ties.

Vulnerabilities on Sale, Rights at Stake

Another critical aspect concerns the current incentives to perpetuate, rather than fix, security vulnerabilities, and governments’ role in them. As we highlighted, “governments must recognize that intelligence agency and law enforcement hostility to device security is dangerous to their own citizens,” and shift their attitude from often facilitating the spread of malicious software to actually supporting security for all of us. Yet, we still have a long way ahead.

In Perú, La Encerrona reported that a U.S.-based vendor, Duality Alliance, offered spying systems to the Intelligence Division of Perú’s Ministry of Interior (DIGIMIN). According to La Encerrona, leaked documents indicated negotiations during 2021 and 2022. Among the offers, La Encerrona highlights the tool ARPON, which the vendor claimed could intercept WhatsApp messages via a zero-click attack able to circumvent security restrictions between the app and the Android operating system. DIGIMIN has assured the news site that the agency didn’t purchase any of the tools Duality Alliance offered.

Recent Mexican experience shows the challenges of putting an end to the arbitrary use of spyware. Despite major public outcry against security forces' use of Pegasus to track journalists, human rights defenders, political opponents, and others, and President López Obrador’s public commitment to halt these abuses, the problem continues. New evidence of spying by the Mexican Armed Forces during López Obrador’s administration emerged in the media in 2023. According to media reports, the military used Pegasus to track the country’s undersecretary for human rights, a human rights defender, and journalists.

The kick-off of the trial in the Mexican Pegasus case is definitely good news. It started in December, already providing key witness insights into the spying operations. According to the Mexican digital rights organization R3D, a trial witness placed former President Enrique Peña Nieto and other high-ranking officials in the chain of command behind the Pegasus infections. As R3D pointed out, this trial must serve as a starting point for investigating the espionage apparatus in Mexico built between public and private actors, which should also consider the most recent cases.

Recurrent Issues, Urgent Needs

On a final but equally important note, The New York Times reported that Mexico City's Attorney General's Office (AGO) and prosecutors in the state of Colima issued controversial data requests to the Mexican telecom company Telcel targeting politicians and public officials. According to The New York Times, Mexico City's AGO denied having requested that information, although other sources confirmed it. The requests didn't require prior judicial authorization because they fell under a legal exception for kidnapping investigations. R3D highlighted how the case relates to deep-seated issues, such as the indiscriminate telecom data retention mandate in Mexican law and the lack of adequate safeguards to prevent and punish arbitrary access to metadata by law enforcement.

Along with R3D and other partners in Latin America, EFF has been furthering the project ¿Quién Defiende Tus Datos? ("Who Defends Your Data?") since 2015 to push for stronger privacy and transparency commitments from Internet Service Providers (ISPs) in the region. In 2023, we released a comparative report building on eight years of findings and challenges. Despite advances, our conclusions show persistent gaps and new concerning trends closely connected to the set of issues this post indicates. Our recommendations aim to reinforce the critical milestones companies and states should embrace to pave the way forward.

During 2023 we kept working to make these recommendations a reality. Among other efforts, we collaborated with partners in Brazil on a draft proposal for ensuring data protection in the context of public security and law enforcement, spoke to Mexican lawmakers about how cybersecurity and strong data privacy rights go hand in hand, and joined policy debates upholding solid data privacy standards. We will keep monitoring privacy's ups and downs in Latin America and contribute to turning the recurring lessons from arbitrary surveillance cases into consistent responses toward robust data privacy and security for all.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

EFF Joins Forces with 20+ Organizations in the Coalition #MigrarSinVigilancia

December 18, 2023 at 10:12

Today, EFF joins more than 25 civil society organizations to launch the Coalition #MigrarSinVigilancia ("To Migrate Without Surveillance"). The Latin American coalition’s aim is to oppose arbitrary and indiscriminate surveillance affecting migrants across the region, and to push for the protection of human rights by safeguarding migrants' privacy and personal data.

On this International Migrants Day (December 18), we join forces with a key group of digital rights and frontline humanitarian organizations to coordinate actions and share resources in pursuit of this significant goal.

Governments are using technologies to monitor migrants, asylum seekers, and others moving across borders with growing frequency and intensity. This intensive surveillance is often framed within the concept of "smart borders" as a more humanitarian approach to streamlining border management, even though its implementation often negatively impacts the migrant population.

EFF has been documenting the magnitude and breadth of such surveillance apparatus, as well as how it grows and impacts communities at the border. We have fought in courts against the arbitrariness of border searches in the U.S. and called out the inherent dangers of amassing migrants' genetic data in law enforcement databases.  

The coalition we launch today stresses that the lack of transparency in surveillance practices and in regional government collaboration violates human rights. This opacity is intertwined with the absence of effective safeguards for migrants to know and decide crucial aspects of how authorities collect and process their data.

The Coalition calls on all states in the Americas, as well as companies and organizations providing them with technologies and services for cross-border monitoring, to take several actions:

  1. Safeguard the human rights of migrants, including but not limited to the rights to migrate and seek asylum, the right to not be separated from their families, due process of law, and consent, by protecting their personal data.
  2. Recognize the mental, emotional, and legal impact that surveillance has on migrants and other people on the move.
  3. Ensure human rights safeguards for monitoring and supervising technologies for migration control.
  4. Conduct a human rights impact assessment of already implemented technologies for migration control.
  5. Refrain from using or prohibit technologies for migration control that present inherent or serious human rights harms.
  6. Strengthen efforts to achieve effective remedies for abuses, accountability, and transparency by authorities and the private sector.

We invite you to learn more about the Coalition #MigrarSinVigilancia and the work of the organizations involved, and to stand with us to safeguard data privacy rights of migrants and asylum seekers—rights that are crucial for their ability to safely build new futures.

Observation Mission Stresses Key Elements of Ola Bini's Case for Upholding Digital Rights

Despite an Ecuadorian court’s unanimous acquittal of security expert Ola Bini in January this year due to complete lack of evidence, Ecuador’s attorney general's office has moved to appeal the decision, perpetuating several years of unjust attacks on Bini’s rights. 

In the context of the Internet Governance Forum 2023 (IGF) held in Japan, the Observation Mission on the Bini case, which includes EFF and various digital and human rights groups, analyzed how advocates can use key elements of the judgment that found Bini not guilty. The Mission released a new statement pointing out these elements. The statement also urges Ecuadorian authorities to clarify Bini's procedural status, as the attorney general's office has been making it difficult for Bini to comply with the precautionary measures still pending against him, particularly the requirement of periodic appearances before the AG's office.  

The full statement in Spanish is available here.

Below we’ve summarized these key elements, which are critical for the protection of digital rights.

Irrelevant Evidence. The court characterized all evidence presented by the attorney general's office as irrelevant or unfit: "None of these elements led to a procedural truth for the purpose of proving any crime." With this decision, the court refused to convict Bini based on stereotyped views of security experts. It refused to apply criminal law based on a person's identity, connections, or activity rather than actual conduct, or based on a "political and arbitrary interpretation of what constitutes the security of the State and who could threaten it." Politically motivated prosecutions like Bini’s receive extensive media coverage, but what is often presented as "suspicious" is neither technically nor legally consistent. Civil society has worked to raise awareness among journalists about what is at stake in such cases, and to prevent judicial authorities from being pressured by publicized political accusations. 

The Importance of Proper Digital Evidence. The court emphasized the necessity of proper evidence to prove that an alleged computer crime occurred, and found that the image of a telnet session presented in Bini’s case is not fit for this purpose. The court explained that graphical representations, which can be altered, do not constitute evidence of a cybercrime, since an image cannot verify whether the commands illustrated in it were actually executed. Building on technical experts' testimonies, the court said that what does not emerge from, or cannot be verified through, digital forensics is not proper digital evidence. The Observation Mission's statement notes this is a key precedent that clarifies the type of evidence considered technically valid for proving alleged computer crimes. 

Unauthorized Access. The court clarified the meaning of unauthorized access, even though no access was proven in Bini's case. According to the court, unauthorized access to a computer system requires the breach of some security system, which the ruling understands as overcoming technical barriers or using access credentials without authorization. In addition, following Ecuador's penal code, the criminal offense of unauthorized access also requires proving an illegitimate purpose or malicious intent. While prosecutors failed to prove that any access had taken place (much less an unauthorized one), this interpretation helps set a precedent for defining unauthorized access in digital rights cases. It's particularly crucial because it ensures that individuals who test systems for vulnerabilities and report them do not face undue criminalization.

In light of these key elements, the Observation Mission's statement stresses that it is essential for Ecuadorian appellate authorities to affirm the lower court’s acquittal of Bini. It's also imperative that authorities clarify his procedural status and the requirement for periodic appearances, as any violation of his fundamental rights raises concerns about the legitimacy of the proceedings.

The Case's Legacy and Global Implications

This verdict has significant implications for digital rights beyond Bini's case. It underscores the importance of incorporating malicious intent into the configuration of computer crimes in legal and public policy discussions, as well as the importance of guarding against politically motivated prosecutions that rely on suspicion and public fear. 

Bini's case serves as a beacon for the defense of digital rights. It establishes critical precedents for the treatment of evidence, the importance of digital forensics, and relevant elements for assessing the offense of unauthorized access. It's a testament to the global fight for digital rights and an opportunity to safeguard the work of those who enhance our privacy, security, and human rights in the digital era.

What’s the Goal and How Do We Get There? Crucial Issues in Brazil’s Take on Saving the News from Big Tech

October 24, 2023 at 10:57

Amidst the global wave of countries looking at Big Tech revenues and how they relate to the growing news media crisis, many are asking whether and how tech companies should compensate publishers for the journalism that circulates on their platforms. This has become another flash point in Brazil’s heated agenda regarding platform regulation.

Draft proposals setting a “remuneration obligation” for digital platforms started to pop up in the Brazilian congress after Australia adopted its own News Media Bargaining Code. The issue gained steam when the rapporteur of PL 2630 (the so-called “Fake News bill”), Orlando Silva, presented a new draft in early 2022 including a press remuneration provision. Subsequent negotiations moved this remuneration proposal to a different draft bill, PL 2370. The remuneration rules are similar to the current version of another draft proposal in Brazil’s Chamber of Deputies (PL 1354).

While the main disputed issues revolve around who should get paid, for what, and how remuneration is measured, there is a baseline implicit question that deserves further analysis: What are the ultimate goals of making digital platforms pay for journalistic content? Responses from those supporting the proposal include redressing Big Tech's unfair exploitation of their relationship with publishers, fixing power asymmetries in the online news distribution market, and preserving public interest journalism as an essential piece of democratic societies.

These are all important priorities. But if what we want in the end is to ensure a vibrant, plural, diverse, and democratic arena for publishing and discussing news and the world, there are fundamental tenets that should guide how we frame and pursue this goal.

These tenets are:

- We want people to widely read, share, comment, and debate news. We also want people to be able to access the documents and information underlying reporting to better reflect on them. We want plural and diverse sources of information to thrive. Access to information and free expression are human and fundamental rights that measures seeking to strengthen journalism must champion, not jeopardize. They are rights intrinsically related to upholding journalism as a key element of democratic societies.

- We want to fortify journalism and a free and diverse media. The overreliance of news outlets on Big Tech is a reality we must change, rather than reinforce. Proper responses should aim at building alternatives to the centralized intermediary role that a few dominant digital platforms play in how information and revenues are distributed. Solutions that entrench this role and further consolidate publishers’ dependency on Big Tech are not fit for purpose. 

But before we discuss solutions that policymakers should embrace, let’s delve a little more into the underlying problems we should tackle.

An Account of Ad-Tech Industry’s Disruption of Journalism Sustainability

We have already written a good chunk on how Big Tech has disrupted the media's traditional business model.

While ad tech's upheaval of how news businesses used to operate affects journalism as a public interest good, even in that earlier era the presence of thriving news players didn’t necessarily mean a plural and diverse media environment. Brazil is sadly, and historically, a compelling example of that. Adopting appropriate structural measures to tackle market concentration would probably have led to a different story. Even if an independent, diverse, and public interest journalism landscape doesn’t automatically follow from a robust news market, fixing asymmetries and distortions in that market does play a critical role in enabling a stronger journalism landscape.

When it comes to the relations between digital platforms and publishers, tech’s intermediation of news distribution poses a series of issues. It starts with platforms' incentives to keep people on their sites rather than clicking through to the actual content, and it goes further. Here we highlight some of these issues:

  • Draining media advertising funds to digital platforms – Tech intermediaries pocket a huge portion of the money advertisers pay to display ads online. It’s not only that digital platforms like Instagram and Google Search compete with news outlets by making “ad spots” up for grabs. Even when an advertiser displays its ad on a news publisher’s website, much of the money paid stays with intermediaries along the way. In the UK, a study by the British advertisers’ association ISBA showed that only half of the ad money spent ultimately reached news publishers. If in the analog era the main intermediary placing ads in media outlets was the advertising agency, nowadays an intricate ad-tech chain lets many different players take their cut. 
  • Complexity and opacity of the ad-tech ecosystem – How much intermediaries get and how the ecosystem operates are not simple questions to answer. The ad-tech ecosystem is both complex and opaque. The ISBA study itself stressed the hurdles of finding consistent, standardized data about the ecosystem's inner workings and the flow of advertising money across the intermediary chain. Yet one critical aspect of this ecosystem has already stood out: the reigning position that Google and Meta enjoy in the ad-tech stack. 
  • Ad-tech stack duopoly and market abuse – As we spelled out here, the ad-tech stack operates through real-time auctions that offer available online spaces for ad display, combined with users’ profiling, in a race for our attention. This stack includes: a “supply-side platform” (SSP), which acts as the publisher’s broker, offering ad spots (usually called “ad inventory”) and the related user eyeballs; a “demand-side platform” (DSP), which represents the advertisers and helps them manage the purchase of ad slots and find the “most effective” impression for their ads based on user data; and a marketplace for ad spots where supply and demand meet. As we noted, many companies offer one or two of these services, but Google and Meta offer all three. Plus, they also compete with publishers by selling ad slots on YouTube or Facebook and Instagram, respectively. Google and Meta represent both buyers and sellers on a marketplace they control, collecting fees at each step of the way and rigging the bidding to their own benefit. They faced investigations into alleged illegal collusion to rig the market in their favor by protecting Google’s dominance in exchange for preferential treatment for Meta. Although authorities decided not to pursue this specific case, other antitrust investigations and actions against their abusive conduct in the ad-tech market are in progress.
  • Making journalism dependent on surveillance advertising – Trading audience attention is not new in how the news market operates. But an integrated and unrelenting system of user tracking, profiling, and targeting did come about in our digital era with the rise of Big Tech’s main way of doing business. A whole behavioral advertising industry has developed, grounded in the promises and perils of delivering more value based on dragnet surveillance of our traits, relations, movements, and inferred interests. Big Tech companies rule this territory and shape it in such a way as to hold publishers hostage to their gimmicks. Making journalism reliant on surveillance advertising is a deal that serves to entrench a few tech players as indispensable ad gatekeepers, since this is not a trivial structure to build and maintain. This structure is also directly abusive to users, who are continuously tracked and profiled, feeding a vicious cycle. We shouldn't need pervasive behavioral surveillance for journalism to thrive.
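To make the fee-compounding dynamic described above concrete, here is a toy sketch of how successive intermediary cuts along a hypothetical ad-tech chain erode the share of ad spend that reaches a publisher. The chain and the individual fee percentages are illustrative assumptions, not measured figures; only the overall finding that roughly half of ad spend reaches publishers comes from the ISBA study cited above.

```python
# Toy illustration (not real market data): how successive intermediary fees
# along a hypothetical ad-tech chain shrink what a publisher receives.
# The individual fee percentages below are illustrative assumptions; the ISBA
# study found only about half of ad spend ultimately reached publishers.

def publisher_share(ad_spend, fees):
    """Apply each intermediary's percentage fee in turn; return what's left."""
    remaining = ad_spend
    for intermediary, fee in fees:
        remaining -= remaining * fee  # each player takes its cut of what's left
    return remaining

# Hypothetical chain: demand-side platform, ad exchange, supply-side platform.
chain = [("DSP", 0.20), ("exchange", 0.15), ("SSP", 0.18)]

received = publisher_share(100.0, chain)
# 100 x 0.80 x 0.85 x 0.82 = 55.76, in the same ballpark as ISBA's ~50% finding
print(f"Publisher receives {received:.2f} of every 100 spent")
```

The point of the sketch is that each intermediary's fee compounds on the remainder, so even moderate individual cuts leave the publisher with little over half of the original spend.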

All these problems relate to Big Tech's unfair exploitation of their relationship with news organizations. But none of them are copyright issues. Copyright is a poor framework for addressing concerns about journalism sustainability. The copyright approach to the fight between tech and news relies on the assumption that journalists and media outlets, as copyright holders, are empowered to license (and thus control and block) quotation and discussion of the news of the day. That logic threatens the first fundamental tenet we presented above, as it would undermine both the free discussion of important reporting and reporting itself. Copyright proposals also purport to create a remuneration dynamic that tracks and measures the “use” of each copyright holder's journalistic content so that each can receive the corresponding compensation. Even when not explicitly attached to copyright law, proposals for journalistic remuneration based on the “use” of news content pose many challenges. Australia’s compensation arrangements are a mixed bag, with several issues deriving from this and other problems we outline below.

Why Brazil Shouldn’t Follow Australia’s Code or Any “Content Use-Based” Models

Australia’s News Media Bargaining Code is a declared inspiration for Brazil’s debate over a remuneration right for publishers, endorsed by Big Media players and decision makers. As per the Code’s model, private remuneration agreements between news businesses and digital platforms result from these platforms making news content available on their services. The law details what “making content available” means, the conditions the Treasurer must follow to designate digital platforms that are bound by the law, the requirements news businesses must meet to benefit from the bargaining rules, the obligations that designated digital platforms have in relation to registered news businesses, and mechanisms for mediation and arbitration in case both parties fail to reach an agreement. 

Although Google and Meta closed more than 30 agreements during the law's first year in force, none of them actually falls under the Code's purview. Both tech giants maneuvered strategically around the new law to avoid any formal designation of digital platforms under the Code's rules (as James Meese notes in "The Decibel" podcast).

So far, the Code has served as a bargaining tool for media players to reach agreements with Google and Meta outside the law’s guarantees. Both due to the Code’s language and the unfolding bargaining practice, the Australian model brings a set of lessons we shouldn’t overlook. Professor Diana Bossio’s analysis points out some of them:

First, the lack of transparency in the agreements has deepened imbalances among media players competing for market share in an already concentrated ecosystem. Smaller, independent organizations, unaware of the higher sums secured by major outlets, have struck deals for very modest amounts and lost key professionals to larger groups that used the new funding source to pay salaries above the usual market rate. Second, the tech platforms used agreements to bolster their own news products, such as "Google News Showcase," according to their own content and business priorities. Third, Google and Meta ultimately determine what counts as public interest journalism and which media outlets producing it get paid. As a result, they are effectively the ones deciding the "winners and losers of the Australian media industry." In sum, Bossio states that

Lack of transparency and designation means the tech platforms have been able to act in the best interests of their own business priorities, rather than in the interest of the code’s stated aim of supporting public-interest journalism.

Canada’s Online News Act sought to address some of the pitfalls of the Australian model but has been struggling with securing its enforcement. Both Google and Meta have said the law is unworkable for their businesses, and Meta has decided to block news content for everyone accessing Facebook and Instagram in Canada. The company argues that people don’t come to Meta’s platforms for news, and that the only way it “can reasonably comply with this legislation is to end news availability for people in Canada.”

By ceasing to make news available on its platforms, Meta dodges Canada's remuneration obligation. This is one of the traps of basing a remuneration arrangement on the "use" of journalistic content by online platforms, as the current draft of PL 2370 in Brazil does. Digital platforms can simply filter news out. If lawmakers respond by compelling them to carry news content in order to avoid such blocking, they fall into yet another trap – that of undermining platforms' ability to remove harmful or otherwise problematic content under their terms of service. But the traps don't end there. The "use" of journalistic content as the basis for remuneration is also bad because:

  • It encourages "clickbait" content.
  • It ends up favoring dominant or sensationalist media players.
  • It fosters and deepens structures for monitoring user sharing of links and content, which poses both data privacy and tech market concentration concerns.
  • It faces clear hurdles in circumscribing what “use” is, measuring such “use” in relation to each news organization, and supervising whether the remuneration is compatible with the amount of content “used.”

What should we do, then?

Which Alternatives Can Pave the Proper Way Forward

Let’s recall our fundamental tenets for achieving the end goal of ensuring a vibrant, plural, diverse, and democratic arena for publishing and discussing news and the world we live in. First, measures aimed at strengthening journalism shouldn’t serve to curb the circulation and discussion of news. Access to information and free expression are human and fundamental rights that these measures must champion, not endanger. Second, fortifying a free, independent, and diverse press entails the creation of alternatives to overcome news outlets’ dependency on Big Tech, instead of reinforcing it.

While PL 2370 and PL 1354 are important vectors for going a step further towards journalism sustainability in Brazil, their current language still fails to properly meet such concerns.

The draft bills follow the model of private agreements between digital platforms and news companies based on the "use" of journalistic content. Defining the kind of "use" that triggers remuneration, as against reasonable-use exceptions, has proven complex and contested. The fear that this approach ends up favoring only the big players, or that the money doesn't reach the journalists actually doing the work, has also driven discussions. Worryingly, the drafts contain no transparency requirements for such remuneration deals. The bills don't address the market distortions we presented earlier. Relatedly, they don't explore alternative approaches to Big Tech's central intermediation role in how information and revenues are distributed. In fact, they may serve to cement the current course of dependency.

By combining structural market measures and a policy decision to strengthen journalism, Brazilian decision makers, including Congress, should instead:

  • Establish restrictions on companies operating in two or more parts of the ad-tech stack. Big Tech firms would have to choose whether to represent the "demand side," the "supply side," or to offer the "marketplace" where both meet. A draft law in the U.S. aims precisely to rein in this abusive situation and can inspire the Brazilian draft legislation. 
  • Ramp up the transparency of the ad-tech ecosystem and the flow of ad spending. For example, by requiring ad-tech platforms to disclose the underlying criteria (including figures) used to calculate ad revenues and viewership, backstopped by independent auditors.
  • Adopt further measures to reduce Big Tech's dominant role as an intermediary for publishers' revenues from ads or subscribers. For example: allow smaller players to participate in real-time bidding, incentivize more competitive solutions in that ecosystem, and open up the app store market. Currently, Google and Apple pocket 30 percent of every in-app subscription or micropayment dollar. As we noted, the EU and the U.S. are taking measures to change that. 
  • Build on Brazil's data protection legal framework to stop surveillance advertising and return to contextual ads, which are based on the context in which they appear: the article they run alongside, or the publication itself. Rather than following users around to target them with ads, contextual advertisers seek out content that is relevant to their messages and place ads alongside that content. This would neutralize the data advantage Big Tech companies enjoy in the ad ecosystem.
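
To make the distinction concrete, contextual matching keys off the page's content rather than a user profile. The sketch below is a deliberately minimal illustration of that idea, not any real ad-tech API; all names are hypothetical:

```python
def contextual_match(article_text, ads):
    """Pick the ad whose keywords best overlap the article's words.
    Only the page content is consulted -- no user tracking data."""
    words = set(article_text.lower().split())

    def overlap(ad):
        return len(words & {k.lower() for k in ad["keywords"]})

    best = max(ads, key=overlap)
    # If no ad shares any vocabulary with the page, serve nothing targeted.
    return best if overlap(best) > 0 else None

ads = [
    {"id": "bike-shop", "keywords": ["cycling", "bike", "helmet"]},
    {"id": "laptop", "keywords": ["laptop", "battery", "keyboard"]},
]
choice = contextual_match("A review of the best bike helmet for city cycling", ads)
```

Here the bike-shop ad wins purely because its keywords appear in the article, which is the whole point: relevance is derived from what the reader is looking at right now, not from who the reader is.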

The measures above could go a long way toward rebalancing the power asymmetries between digital platforms and news outlets, especially for larger media players. However, Brazil's track record suggests that this alone may fail to advance an independent, diverse, and public interest journalism landscape. The proper policy decision to pursue this goal is not to foster private, non-transparent agreements based on how much platforms or people "use" news. There are better approaches, such as establishing public subsidies for advancing journalism sustainability. The policy goal of strengthening journalism as a decisive element of democratic societies translates into a policy decision to financially support its flourishing. In addition to promoting structural market measures, the government should direct resources toward this goal. Given the many funding priorities and budget constraints, a viable and sound path is to use tax revenue collected from ad-tech players to create a fund managed by an independent, multistakeholder committee. The committee and the allocation of funds would abide by strict transparency rules, representativeness criteria, and oversight.

With that, the discussion over who gets paid, for what, and which other initiatives are important to fund to pave a way of less dependency between news organizations and Big Tech could go way beyond bargaining agreements and have this fund as a catalyst based on guidelines set by law. This could also free the remuneration model from the problematic aspiration of tracking the "use" of news content and dispensing payments accordingly.

The idea of creating a fund is not new in Brazilian debates about journalism sustainability. Following global discussions, the Brazilian National Federation of Journalists (FENAJ) has been advocating for a fund considering the model of Brazil’s Audiovisual Sector Fund (FSA), which is part of a consistent policy fostering the audiovisual sector in the country. The idea gained support from Brazil's Digital Journalism Association (AJOR) and other civil society organizations. Brazilian decision makers should look at FSA’s experience to build a sounder path, putting in place, of course, the necessary checks and balances to prevent risks of capture and undue interference. As noted above, the collection of resources should rely on a relevant portion of revenue-related taxation of ad-tech players rather than the use of journalistic content. Moreover, transparency, public oversight, and democratic criteria to allocate the money are among the essential commitments to be set to ensure a participative, multistakeholder, and independent journalism fund.

We hope the crucial issues and alternatives outlined here can help build a stronger way forward in Brazil's effort to uphold journalism in the face of Big Tech companies' dominant role.

The State of Chihuahua Is Building a 20-Story Tower in Ciudad Juarez to Surveil 13 Cities–and Texas Will Also Be Watching

EFF Special Advisor Paul Tepper and EFF intern Michael Rubio contributed research to this report.

Chihuahua state officials and a notorious Mexican security contractor broke ground last summer on the Torre Centinela (Sentinel Tower), an ominous, 20-story high-rise in downtown Ciudad Juarez that will serve as the central node of a new AI-enhanced surveillance regime. With tentacles reaching into 13 Mexican cities and a data pipeline that will channel intelligence all the way to Austin, Texas, the monstrous project will be unlike anything seen before along the U.S.-Mexico border.

And that's saying a lot, considering the last 30-plus years of surging technology on the U.S. side of the border. 

The Torre Centinela will stand in a former parking lot next to the city's famous bullring, a mere half-mile south of where migrants and asylum seekers have camped and protested at the Paso del Norte International Bridge leading to El Paso. But its reach goes much further: the Torre Centinela is just one piece of the Plataforma Centinela (Sentinel Platform), an aggressive new technology strategy developed by Chihuahua's Secretaría de Seguridad Pública Estatal (State Public Security Secretariat, or SSPE) in collaboration with the company Seguritech.

With its sprawling infrastructure, the Plataforma Centinela will create an atmosphere of surveillance and data-streams blanketing the entire region. The plan calls for nearly every cutting-edge technology system marketed at law enforcement: 10,000 surveillance cameras, face recognition, automated license plate recognition, real-time crime analytics, a fleet of mobile surveillance vehicles, drone teams and counter-drone teams, and more.

If the project comes together as advertised in the Avengers-style trailer that SSPE released to influence public opinion, law enforcement personnel on site will be surrounded by wall-to-wall monitors (140 meters of screens per floor), while 2,000 officers in the field will be able to access live intelligence through handheld tablets.

Texas law enforcement will also have "eyes on this side of the border" via the Plataforma Centinela, Chihuahua Governor Maru Campos publicly stated last year. Texas Governor Greg Abbott signed a memorandum of understanding confirming the partnership.

Plataforma Centinela will transform public life and threaten human rights in the borderlands in ways that aren't easy to assess. Regional newspapers and local advocates, especially Norte Digital and the Frente Político Ciudadano para la Defensa de los Derechos Humanos (FPCDDH), have raised significant concerns about the project, pointing to a low likelihood of success and a high potential for waste and abuse.

"It is a myopic approach to security; the full emphasis is placed on situational prevention, while the social causes of crime and violence are not addressed," FPCDDH member and analyst Victor M. Quintana tells EFF, noting that the Plataforma Centinela's budget is significantly higher than what the state devotes to social services. "There are no strategies for the prevention of addiction, neither for rebuilding the fabric of society nor attending to dropouts from school or young people at risk, which are social causes of insecurity."

Instead of providing access to unfiltered information about the project, the State of Chihuahua has launched a public relations blitz. In addition to press conferences and the highly produced cinematic trailer, SSPE recently hosted a "Pabellón Centinela" (Sentinel Pavilion), a family-friendly carnival where the public was invited to check out a camera wall and drones, while children played with paintball guns, drove a toy ATV patrol vehicle around a model city, and colored in illustrations of a data center operator.

Behind that smoke screen, state officials are doing almost everything they can to control the narrative around the project and avoid public scrutiny.

According to news reports, the SSPE and the Secretaría de Hacienda (Finance Secretary) have simultaneously deemed most information about the project classified and left dozens of public records requests unanswered. The Chihuahua State Congress also rejected a proposal to formally declassify the documents and stymied other oversight measures, including a proposed audit. Meanwhile, EFF has submitted public records requests to several Texas agencies, and all have claimed they have no records related to the Plataforma Centinela.

This is all the more troubling considering the relationship between the state and Seguritech, a company whose business practices in 22 other jurisdictions have been called into question by public officials.

What we can be sure of is that the Plataforma Centinela project may serve as proof of concept of the kind of panopticon surveillance governments can get away with in both North America and Latin America.

What Is the Plataforma Centinela?

High-tech surveillance centers are not a new phenomenon on the Mexican side of the border. These facilities tend to use "C" designations to describe their functions and purposes. EFF has mapped out dozens of them across the six Mexican border states.

A screen capture of a Google Map of Mexican C-Centers


They include:

  • C4 (Centro de Comunicación, Cómputo, Control y Comando) (Center for Communication, Computing, Control, and Command), 
  • C5 (Centro de Coordinación Integral, de Control, Comando, Comunicación y Cómputo del Estado) (State Center for Integral Coordination of Control, Command, Communication, and Computing), 
  • C5i (Centro de Control, Comando, Comunicación, Cómputo, Coordinación e Inteligencia) (Center for Control, Command, Communication, Computing, Coordination, and Intelligence).

Typically, these centers function as a cross between a 911 call center and a real-time crime center, with operators handling emergency calls, analyzing crime data, and controlling a network of surveillance cameras via a wall of monitors. In some cases, the Cs may appear in a different order or stand for slightly different words. For example, some C5s alternately stand for "Centros de Comando, Control, Comunicación, Cómputo y Calidad" (Centers for Command, Control, Communication, Computing, and Quality). These facilities also exist in other parts of Mexico. The number of Cs often indicates scale and responsibilities, but more often than not it seems to be a political or marketing designation.

The Plataforma Centinela, however, goes far beyond the scope of previous projects and will in fact be known as the first C7 (Centro de Comando, Cómputo, Control, Coordinación, Contacto Ciudadano, Calidad, Comunicaciones e Inteligencia Artificial) (Center for Command, Computing, Control, Coordination, Citizen Contact, Quality, Communications, and Artificial Intelligence). The Torre Centinela in Ciudad Juarez will serve as the nerve center, with more than a dozen sub-centers throughout the state. 

According to statistics that Gov. Campos disclosed as part of negotiations with Texas and news reports, the Plataforma Centinela will include: 

    • 1,791 automated license plate readers. These are cameras that photograph vehicles and their license plates, then upload that data along with the time and location where the vehicles were seen to a massive searchable database. Law enforcement can also create lists of license plates to track specific vehicles and receive alerts when those vehicles are seen. 
    • 4,800 fixed cameras. These are your run-of-the-mill cameras, positioned to permanently surveil a particular location from one angle.  
    • 3,065 pan-tilt-zoom (PTZ) cameras. These are more sophisticated cameras. While they are affixed to a specific location, such as a street light or a telephone pole, these cameras can be controlled remotely. An operator can swivel the camera around 360-degrees and zoom in on subjects. 
    • 2,000 tablets. Officers in the field will be issued handheld devices for accessing data directly from the Plataforma Centinela. 
    • 102 security arches. This is a common form of surveillance in Mexico, but not the United States. These are structures built over highways and roads to capture data on passing vehicles and their passengers. 
    • 74 drones (Unmanned Aerial Vehicles/UAVs). While the Chihuahua government has not disclosed what surveillance payload will be attached to these drones, it is common for law enforcement drones to deploy video, infrared, and thermal imaging technology.
    • 40 mobile video surveillance trailers. While details on these systems are scant, it is likely these are camera towers that can be towed to and parked at targeted locations. 
    • 15 anti-drone systems. These systems are designed to intercept and disable drones operated by criminal organizations.
    • Face recognition. The project calls for the application of "biometric filters" to be applied to camera feeds "to assist in the capture of cartel leaders," and the collection of migrant biometrics. Such a system would require scanning the faces of the general public.
    • Artificial intelligence. So far, the administration has thrown around the term AI without fully explaining how it will be used. Typically, law enforcement agencies have used this technology to "predict" where crime might occur, identify individuals most likely to be connected to crime, and surface potential connections between suspects that would not have been obvious to a human observer. However, all these technologies have a propensity for making errors or exacerbating existing bias. 
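
The license plate reader workflow described in the list above — photograph every passing plate, log it with time and location in a searchable database, and alert on hotlist matches — can be sketched in miniature. This is a toy model for illustration only, with entirely hypothetical names, not a description of any real ALPR vendor's API:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Sighting:
    plate: str          # normalized plate text from the camera's OCR
    camera_id: str
    seen_at: datetime

class PlateReaderIndex:
    """Toy ALPR backend: every sighting is retained and searchable;
    plates on a hotlist additionally trigger an operator alert."""
    def __init__(self, hotlist):
        self.hotlist = set(hotlist)
        self.log = []   # all sightings are kept, not just hotlist hits

    def record(self, s: Sighting) -> bool:
        self.log.append(s)
        return s.plate in self.hotlist   # True -> alert operators

    def history(self, plate):
        """Retroactive location tracking: every past sighting of one plate."""
        return [s for s in self.log if s.plate == plate]

idx = PlateReaderIndex(hotlist=["ABC123"])
alert = idx.record(Sighting("XYZ789", "cam-42", datetime(2023, 5, 1)))
hit = idx.record(Sighting("ABC123", "cam-07", datetime(2023, 5, 2)))
```

The privacy concern lives in `self.log`: even plates that never match a hotlist are stored indefinitely and can be queried later to reconstruct anyone's movements.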

As of May, 60% of the Plataforma Centinela camera network had been installed, with an expected completion date of December, according to Norte Digital. However, the cameras were already being used in criminal investigations. 

All combined, this technology amounts to an unprecedented expansion of the surveillance state in Latin America, as SSPE brags in its promotional material. The threat to privacy may also be unprecedented: creating cities where people can no longer move freely in their communities without being watched, scanned, and tagged.

But that's assuming the system functions as advertised—and based on the main contractor's history, that's anything but guaranteed. 

Who Is Seguritech?

The Plataforma Centinela project is being built by the megacorporation Seguritech, which has signed deals with more than a dozen government entities throughout Mexico. As of 2018, the company received no-bid contracts in at least 10 Mexican states and cities, which means it was able to sidestep the accountability process that requires companies to compete for projects.

And when it comes to the Plataforma Centinela, the company isn't simply a contractor: It will actually have ownership over the project, the Torre Centinela, and all its related assets, including cameras and drones, until August 2027.

That's what SSPE Secretary Gilberto Loya Chávez told the news organization Norte Digital, but the terms of the agreement between Seguritech and Chihuahua's administration are not public. The SSPE's Transparency Committee decided to classify the information "concerning the procedures for the acquisition of supplies, goods, and technology necessary for the development, implementation, and operation of the Platforma Centinela" for five years.

In spite of the opacity shrouding the project, journalists have surfaced some information about the investment plan. According to statements from government officials, the Plataforma Centinela will cost 4.2 billion pesos, with Chihuahua's administration paying regular installments to the company every three months (Chihuahua's governor had previously said that these would be yearly payments in the amount of 700 million to 1 billion pesos per year). According to news reports, when the payments are completed in 2027, the ownership of the platform's assets and infrastructure are expected to pass from Seguritech to the state of Chihuahua.

The Plataforma Centinela project marks a new pinnacle in Seguritech's trajectory as a Mexican security contractor. Founded in 1995 as a small business selling neighborhood alarms, SeguriTech Privada S.A de C.V. became a highly profitable brand, and currently operates in five areas: security, defense, telecommunications, aeronautics, and construction. According to Zeta Tijuana, Seguritech also secures contracts through its affiliated companies, including Comunicación Segura (focused on telecommunications and security) and Picorp S.A. de C.V. (focused on architecture and construction, including prisons and detention centers). Zeta also identified another SecuriTech company, Tres10 de C.V., as the contractor named in various C5i projects.

Thorough reporting by Mexican outlets such as Proceso, Zeta Tijuana, Norte Digital, and Zona Free paint an unsettling picture of Seguritech's activities over the years.

Former President Felipe Calderón's war on drug trafficking, initiated during his 2006-2012 term, marked an important turning point for surveillance in Mexico. As Proceso reported, Seguritech began to secure major government contracts beginning in 2007, receiving its first billion-peso deal in 2011 with Sinaloa's state government. In 2013, avoiding the bidding process, the company secured a 6-billion peso contract assigned by Eruviel Ávila, then governor of the state of México (or Edomex, not to be confused with the country of Mexico). During Enrique Peña Nieto's years as Edomex's governor, and especially later, as Mexico's president, Seguritech secured its status among Mexico's top technology contractors.

According to Zeta Tijuana, during the six years that Peña Nieto served as president (2012-2018), the company monopolized contracts for the country's main surveillance and intelligence projects, specifically the C5i centers. As Zeta Tijuana writes:

"More than 10 C5i units were opened or began construction during Peña Nieto's six-year term. Federal entities committed budgets in the millions, amid opacity, violating parliamentary processes and administrative requirements. The purchase of obsolete technological equipment was authorized at an overpriced rate, hiding information under the pretext of protecting national security."

Zeta Tijuana further cites records from the Mexican Institute of Industrial Property showing that Seguritech registered the term "C5i" as its own brand, an apparent attempt to make it more difficult for other surveillance contractors to provide services under that name to the government.

Despite promises from government officials that these huge investments in surveillance would improve public safety, the country’s number of violent deaths increased during Peña Nieto's term in office.

"What is most shocking is how ineffective Seguritech's system is," says Quintana, the spokesperson for FPCDDH. By his analysis, Quintana says, "In five out of six states where Seguritech entered into contracts and provided security services, the annual crime rate shot up in proportions ranging from 11% to 85%."

Seguritech has also been criticized for inflated prices, technical failures, and deploying obsolete equipment. According to Norte Digital, only 17% of surveillance cameras were working by the end of the company's contract with Sinaloa's state government. Proceso notes the rise of complaints about the malfunctioning of cameras in Cuauhtémoc Delegation (a borough of Mexico City) in 2016. Zeta Tijuana reported on the disproportionate amount the company charged for installing 200 obsolete 2-megapixel cameras in 2018.

Seguritech's track record led to formal complaints and judicial cases against the company. The company has responded to this negative attention by hiring services to take down and censor critical stories about its activities published online, according to investigative reports published as part of the Global Investigative Journalism Network's Forbidden Stories project.

Yet, none of this information dissuaded Chihuahua's governor, Maru Campos, from closing a new no-bid contract with Seguritech to develop the Plataforma Centinela project. 

A Cross-Border Collaboration

The Plataforma Centinela project presents a troubling escalation in cross-border partnerships between states, one that cuts out each nation's respective federal governments.  In April 2022, the states of Texas and Chihuahua signed a memorandum of understanding to collaborate on reducing "cartels' human trafficking and smuggling of deadly fentanyl and other drugs" and to "stop the flow of migrants from over 100 countries who illegally enter Texas through Chihuahua."

A slide describing the "New Border Model"

While much of the agreement centers around cargo at the points of entry, the document also specifically calls out the various technologies that make up the Plataforma Centinela. In attachments to the agreement, Gov. Campos promises Chihuahua is "willing to share that information with Texas State authorities and commercial partners directly."

During a press conference announcing the MOU, Gov. Abbott declared, “Governor Campos has provided me with the best border security plan that I have seen from any governor from Mexico.” He held up a three-page outline and a slide, which were also provided to the public, but also referenced the existence of "a much more extensive detailed memo that explains in nuance" all the aspects of the program.

Abbott went on to read out a summary of Plataforma Centinela, adding, "This is a demonstration of commitment from a strong governor who is working collaboratively with the state of Texas."

Then Campos, in response to a reporter's question, added: "We are talking about sharing information and intelligence among states, which means the state of Texas will have eyes on this side of the border." She added that the data collected through the Plataforma Centinela will be analyzed by both the states of Chihuahua and Texas.

Abbott provided an example of one way the collaboration will work: "We will identify hotspots where there will be an increase in the number of migrants showing up because it's a location chosen by cartels to try to put people across the border at that particular location. The Chihuahua officials will work in collaboration with the Texas Department of Public Safety, where DPS has identified that hotspot and the Chihuahua side will work from a law enforcement side to disrupt that hotspot."

In order to learn more about the scope of the project, EFF sent public records requests to several Texas agencies, including the Governor's Office, the Texas Department of Public Safety, the Texas Attorney General's Office, the El Paso County Sheriff, and the El Paso Police Department. Not one of the agencies produced records related to the Plataforma Centinela project.

Meanwhile, Texas is further beefing up its efforts to use technology at the border, including by enacting new laws that formally allow the Texas National Guard and State Guard to deploy drones at the border and authorize the governor to enter compacts with other states to share intelligence and resources to build "a comprehensive technological surveillance system" on state land to deter illegal activity at the border. In addition to the MOU with Chihuahua, Abbott also signed similar agreements with the states of Nuevo León and Coahuila in 2022. 

Two Sides, One Border

The Plataforma Centinela has enormous potential to violate the rights of one of the largest cross-border populations along the U.S.-Mexico border. But while law enforcement officials are eager to collaborate and traffic data back and forth, advocacy efforts around surveillance too often are confined to their respective sides.

The Spanish-language press in Mexico has devoted significant resources to investigating the Plataforma Centinela and raising the alarm over its lack of transparency and accountability, as well as its potential for corruption. Yet, the project has received virtually no attention or scrutiny in the United States. 

Fighting back against surveillance of cross-border communities requires cross-border efforts. EFF supports the efforts of advocacy groups in Ciudad Juarez and other regions of Chihuahua to expose the mistakes the Chihuahua government is making with the Plataforma Centinela and to call out its mammoth surveillance approach for failing to address the root social issues. We also salute the efforts of local journalists to hold the government accountable. However, U.S.-based journalists, activists, and policymakers—many of whom have done an excellent job surfacing criticism of Customs and Border Protection's so-called virtual wall—must also turn their attention to the massive surveillance apparatus building up on the Mexican side.

In reality, there is no separate Mexican surveillance and U.S. surveillance. It’s one massive surveillance monster that, ironically, in the name of border enforcement, recognizes no borders itself. 
