
Germany Rushes to Expand Biometric Surveillance

October 7, 2024 at 16:07

Germany is a leader in privacy and data protection, with many Germans being particularly sensitive to the processing of their personal data – owing to the country’s totalitarian history and the role of surveillance in both Nazi Germany and East Germany.

So, it is disappointing that the German government is trying to push through Parliament, at record speed, a “security package” that would increase biometric surveillance at an unprecedented scale. The proposed measures contravene the government’s own coalition agreement, and undermine European law and the German constitution.

In response to a knife attack in the western German town of Solingen in late August, the government has introduced a so-called “security package” consisting of a bouquet of measures to tighten asylum rules and introduce new powers for law enforcement authorities.

Among them, three stand out for their potentially disastrous effects on fundamental rights online.

Biometric Surveillance  

The German government wants to allow law enforcement authorities to identify suspects by comparing their biometric data (audio, video, and image data) to all data publicly available on the internet. Beyond the host of harms related to facial recognition software, this would mean that any photos or videos uploaded to the internet would become part of the government’s surveillance infrastructure.
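To make concrete what “comparing biometric data to all data publicly available on the internet” entails, here is a minimal, purely hypothetical sketch of 1:N face matching against a gallery built from scraped public photos. The embeddings, file names, and threshold below are invented for illustration; real systems use high-dimensional vectors produced by trained face-recognition models.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical gallery: embeddings extracted from scraped public photos.
# Under the proposal, any photo uploaded to the internet could end up here.
scraped_gallery = {
    "protest_photo_001.jpg": [0.9, 0.1, 0.2],
    "social_media_042.jpg":  [0.1, 0.8, 0.3],
    "news_site_107.jpg":     [0.85, 0.15, 0.25],
}

suspect_embedding = [0.88, 0.12, 0.22]
THRESHOLD = 0.99  # invented; real systems tune this, trading false matches for misses

# 1:N search: the suspect's embedding is compared against every public photo.
matches = sorted(
    ((cosine_similarity(suspect_embedding, emb), name)
     for name, emb in scraped_gallery.items()),
    reverse=True,
)
for score, name in matches:
    flag = "MATCH" if score >= THRESHOLD else "below threshold"
    print(f"{name}: {score:.3f} ({flag})")
```

Note that two different photos clear the invented threshold in this toy run, a small illustration of the false-match risk that grows with the size of the comparison set.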

This would include especially sensitive material, such as pictures taken at political protests or other contexts directly connected to the exercise of fundamental rights. This could be abused to track individuals and create nuanced profiles of their everyday activities. Experts have highlighted the many unanswered technical questions in the government’s draft bill. The proposal contradicts the government’s own coalition agreement, which commits to preventing biometric surveillance in Germany.

The proposal also contravenes the recently adopted European AI Act, which bans the use of AI systems that create or expand facial recognition databases. While the AI Act includes exceptions for national security, Member States may ban biometric remote identification systems at the national level. Given the coalition agreement, German civil society groups have been hoping for such a prohibition, rather than the introduction of new powers.

These sweeping new powers would not be granted to law enforcement authorities alone: the Federal Office for Migration and Asylum would be allowed to identify asylum seekers who do not carry IDs by comparing their biometric data to “internet data.” Beyond the obvious disproportionality of such powers, it is well documented that facial recognition software is rife with racial biases, performing significantly worse on images of people of color. The draft law does not include any meaningful measures to protect against discriminatory outcomes, nor does it acknowledge the limitations of facial recognition.

Predictive Policing 

Germany also wants to introduce AI-enabled mining of any data held by law enforcement authorities, which is often used for predictive policing. This would include data from anyone who ever filed a complaint, served as a witness, or ended up in a police database for being a victim of a crime. Beyond this obvious overreach, data mining for predictive policing threatens fundamental rights like the right to privacy and has been shown to exacerbate racial discrimination.

The severe negative impacts of data mining by law enforcement authorities have been confirmed by Germany’s highest court, which ruled that the Palantir-enabled practices by two German states are unconstitutional.  Regardless, the draft bill seeks to introduce similar powers across the country.  

Police Access to More User Data 

The government wants to exploit an already-controversial provision of the recently adopted Digital Services Act (DSA). The law, which regulates online platforms in the European Union, has been criticized for requiring providers to proactively share user data with law enforcement authorities in potential cases of violent crime. Because the provision is vaguely defined, it risks undermining freedom of expression online, as providers might be pressured to share more rather than less data to avoid DSA fines.

Frustrated by the low volume of cases forwarded by providers, the German government now suggests expanding the DSA to include specific criminal offences for which companies must share user data. While it is unrealistic to update European regulations as complex as the DSA so shortly after its adoption, this proposal shows that protecting fundamental rights online is not a priority for this government. 

Next Steps

Meanwhile, thousands have protested the security package in Berlin. Moreover, experts at the parliament’s hearing and German civil society groups are sending a clear signal: the government’s plans undermine fundamental rights, violate European law, and walk back the coalition parties’ own promises. EFF stands with the opponents of these proposals. We must defend fundamental rights more resolutely than ever.

 

Human Rights Claims Against Cisco Can Move Forward (Again)

By: Cindy Cohn
September 18, 2024 at 18:04

Google and Amazon – You Should Take Note of Your Own Aiding and Abetting Risk 

EFF has long pushed companies that provide powerful surveillance tools to governments to take affirmative steps to avoid aiding and abetting human rights abuses. We have also worked to ensure they face consequences when they do not.

Last week, the U.S. Court of Appeals for the Ninth Circuit helped this cause by affirming its powerful 2023 decision that aiding and abetting liability in U.S. courts can apply to technology companies that provide sophisticated surveillance systems used to facilitate human rights abuses.

The specific case is against Cisco and arises out of allegations that Cisco custom-built tools as part of the Great Firewall of China to help the Chinese government target members of disfavored groups, including the Falun Gong religious minority.  The case claims that those tools were used to help identify individuals who then faced horrific consequences, including wrongful arrest, detention, torture, and death.  

We did a deep dive analysis of the Ninth Circuit panel decision when it came out in 2023. Last week, the Ninth Circuit rejected an attempt to have that initial decision reconsidered by the full court, called en banc review. While the case has now survived Ninth Circuit review and should otherwise be able to move forward in the trial court, Cisco has indicated that it intends to file a petition for U.S. Supreme Court review. That puts the case on pause again. 

Still, the Ninth Circuit’s decision to uphold the 2023 panel opinion is excellent news for the critical, though slow moving, process of building accountability for companies that aid repressive governments. The 2023 opinion unequivocally rejected many of the arguments that companies use to justify their decision to provide tools and services that are later used to abuse people. For instance, a company only needs to know that its assistance is helping in human rights abuses; it does not need to have a purpose to facilitate abuse. Similarly, the fact that a technology has legitimate law enforcement uses does not immunize the company from liability for knowingly facilitating human rights abuses.

EFF has participated in this case at every level of the courts, and we intend to continue to do so. But a better way forward for everyone would be if Cisco owned up to its actions and took steps to make amends to those injured and their families with an appropriate settlement offer, like Yahoo! did in 2007. It’s not too late to change course, Cisco.

And as EFF noted recently, Cisco isn’t the only company that should take note of this development. Recent reports have revealed the use (and misuse) of Google and Amazon services by the Israeli government to facilitate surveillance and tracking of civilians in Gaza. These reports raise serious questions about whether Google and Amazon  are following their own published statements and standards about protecting against the use of their tools for human rights abuses. Unfortunately, it’s all too common for companies to ignore their own human rights policies, as we highlighted in a recent brief about notorious spyware company NSO Group.

The reports about Gaza also raise questions about whether there is potential liability against Google and Amazon for aiding and abetting human rights abuses against Palestinians. The abuses by Israel have now been confirmed by the International Court of Justice, among others, and the longer they continue, the harder it is going to be for the companies to claim that they had no knowledge of the abuses. As the Ninth Circuit confirmed, aiding and abetting liability is possible even though these technologies are also useful for legitimate law enforcement purposes and even if the companies did not intend them to be used to facilitate human rights abuses. 

The stakes are getting higher for companies. We first call on Cisco to change course, acknowledge the victims, and accept responsibility for the human rights abuses it aided and abetted.  

Second, given the current ongoing abuses in Gaza, we renew our call for Google and Amazon to first come clean about their involvement in human rights abuses in Gaza and, where necessary, make appropriate changes to avoid assisting in future abuses.

Finally, for other companies looking to sell surveillance, facial recognition, and other potentially abusive tools to repressive governments – we’ll be watching you, too.   


EFF to Ninth Circuit: Don’t Shield Foreign Spyware Company from Human Rights Accountability in U.S. Court

Legal intern Danya Hajjaji was the lead author of this post.

EFF filed an amicus brief in the U.S. Court of Appeals for the Ninth Circuit supporting a group of journalists in their lawsuit against Israeli spyware company NSO Group. In our amicus brief backing the plaintiffs’ appeal, we argued that victims of human rights abuses enabled by powerful surveillance technologies must be able to seek redress through U.S. courts against both foreign and domestic corporations. 

NSO Group notoriously manufactures “Pegasus” spyware, which enables full remote control of a target’s smartphone. Pegasus attacks are stealthy and sophisticated: the spyware embeds itself into phones without an owner having to click anything (such as an email or text message). A Pegasus-infected phone allows government operatives to intercept personal data on a device as well as cloud-based data connected to the device.

Our brief highlights multiple examples of Pegasus spyware having been used by governmental bodies around the world to spy on targets such as journalists, human rights defenders, dissidents, and their families. For example, the Saudi Arabian government was found to have deployed Pegasus against Washington Post columnist Jamal Khashoggi, who was murdered at the Saudi consulate in Istanbul, Turkey.

In the present case, Dada v. NSO Group, the plaintiffs are affiliated with El Faro, a prominent independent news outlet based in El Salvador, and were targeted with Pegasus through their iPhones. The attacks on El Faro journalists coincided with their investigative reporting into the Salvadorian government.

The plaintiffs sued NSO Group in California because NSO Group, in deploying Pegasus against iPhones, abused the services of Apple, a California-based company. However, the district court dismissed the case on a forum non conveniens theory, holding that California is an inconvenient forum for NSO Group. The court thus concluded that exercising jurisdiction over the foreign corporation was inappropriate and that the case would be better considered by a court in Israel or elsewhere.

However, as we argued in our brief, NSO Group is already defending two other lawsuits in California brought by both Apple and WhatsApp. And the company is unlikely to face legal accountability in its home country—the Israeli Ministry of Defense provides an export license to NSO Group, and its technology has been used against citizens within Israel.

That's why this case is critical—victims of powerful, increasingly-common surveillance technologies like Pegasus spyware must not be barred from U.S. courts.

As we explained in our brief, the private spyware industry is worth an estimated $12 billion and is largely bankrolled by repressive governments. These parties widely fail to comport with the United Nations’ Guiding Principles on Business and Human Rights, which caution against creating a situation where victims of human rights abuses “face a denial of justice in a host State and cannot access home State courts regardless of the merits of the claim.”

The U.S. government has endorsed the Guiding Principles as applied to U.S. companies selling surveillance technologies to foreign governments, and has also sought to address the issue of spyware facilitating state-sponsored human rights violations. In 2021, for example, the Biden Administration recognized NSO Group as engaging in such practices by placing it on a list of entities prohibited from receiving U.S. exports of hardware or software.

Unfortunately, the Guiding Principles expressly avoid creating any “new international law obligations,” thus leaving accountability to either domestic law or voluntary mechanisms.

Yet voluntary enforcement mechanisms are wholly inadequate for human rights accountability. The weakness of voluntary enforcement is best illustrated by NSO Group supposedly implementing its own human rights policies, all the while acting as a facilitator of human rights abuses.

Restraining the use of the forum non conveniens doctrine and opening courthouse doors to victims of human rights violations wrought by surveillance technologies would bind companies like NSO Group through judicial liability.

But this would not mean that U.S. courts have unfettered discretion over foreign corporations. The reach of courts is limited by rules of personal jurisdiction and plaintiffs must still prove the specific required elements of their legal claims.

The Ninth Circuit must give the El Faro plaintiffs the chance to vindicate their rights in federal court. Shielding spyware companies like NSO Group from legal accountability does not only diminish digital civil liberties like privacy and freedom of speech—it paves the way for the worst of the worst human rights abuses, including physical apprehensions, unlawful detentions, torture, and even summary executions by the governments that use the spyware.

Broad Scope Will Authorize Cross-Border Spying for Acts of Expression: Why You Should Oppose Draft UN Cybercrime Treaty

By: Karen Gullo
August 1, 2024 at 10:08

The draft UN Cybercrime Convention was supposed to help tackle serious online threats like ransomware attacks, which cost billions of dollars in damages every year.

But, after two and a half years of negotiations among UN Member States, the draft treaty’s broad rules for collecting evidence across borders may turn it into a tool for spying on people. In other words, an extensive surveillance pact.

It permits countries to collect evidence on individuals for actions classified as serious crimes, defined as offenses punishable by four years of imprisonment or more. This could include protected speech activities, like criticizing a government or posting a rainbow flag, if these actions are considered serious crimes under local laws.

Here’s an example illustrating why this is a problem:

If you’re an activist in Country A tweeting about human rights atrocities in Country B, and criticizing government officials or the king is considered a serious crime in both countries under vague cybercrime laws, the UN Cybercrime Treaty could allow Country A to spy on you for Country B. This means Country A could access your email or track your location without prior judicial authorization and keep this information secret, even when it no longer impacts the investigation.

Criticizing the government is a far cry from launching a phishing attack or causing a data breach. But since it involves using a computer and is a serious crime as defined by national law, it falls within the scope of the treaty’s cross-border spying powers, as currently written.
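The penalty-based test can be sketched concretely. In this purely hypothetical example, every offense name and sentence length is invented; the point is that a four-year threshold captures whatever a national penal code chooses to punish harshly, including speech.

```python
# Hypothetical national penal code: offense -> maximum sentence in years.
# Under the draft's "serious crime" test (punishable by four years or more),
# speech offenses in some countries qualify alongside actual cybercrime.
penal_code_country_b = {
    "ransomware_deployment": 10,
    "data_breach": 6,
    "insulting_the_king_online": 5,   # a speech offense, but >= 4 years
    "posting_rainbow_flag": 4,        # likewise
    "jaywalking": 0.5,
}

SERIOUS_THRESHOLD_YEARS = 4

# The treaty's test looks only at the sentence length, not at whether the
# conduct is a genuine crime against computer systems, data, or networks.
serious_crimes = {offense for offense, years in penal_code_country_b.items()
                  if years >= SERIOUS_THRESHOLD_YEARS}
print(sorted(serious_crimes))
```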

This isn’t hyperbole. In countries like Russia and China, serious “cybercrime” has become a catchall term for any activity the government disapproves of if it involves a computer. This broad and vague definition of serious crimes allows these governments to target political dissidents and suppress free speech under the guise of cybercrime enforcement.

Posting a rainbow flag on social media could be considered a serious cybercrime in countries outlawing LGBTQ+ rights. Journalists publishing articles based on leaked data about human rights atrocities and digital activists organizing protests through social media could be accused of committing cybercrimes under the draft convention.

The text’s broad scope could allow governments to misuse the convention’s cross border spying powers to gather “evidence” on political dissidents and suppress free speech and privacy under the pretext of enforcing cybercrime laws.

Canada said it best at a negotiating session earlier this year: “Criticizing a leader, innocently dancing on social media, being born a certain way, or simply saying a single word, all far exceed the definition of serious crime in some States. These acts will all come under the scope of this UN treaty in the current draft.”

The UN Cybercrime Treaty’s broad scope must be limited to core cybercrimes. Otherwise it risks authorizing cross-border spying and extensive surveillance, and enabling Russia, China, and other countries to collaborate in targeting and spying on activists, journalists, and marginalized communities for protected speech.

It is crucial to exclude such overreach from the scope of the treaty to genuinely protect human rights and ensure comprehensive mandatory safeguards to prevent abuse. Additionally, the definition of serious crimes must be revised to include those involving death, injury, or other grave harms to further limit the scope of the treaty.

For a more in-depth discussion about the flawed treaty, read here, here, and here.

Security Researchers and Journalists at Risk: Why You Should Hate the Proposed UN Cybercrime Treaty

By: Karen Gullo
July 31, 2024 at 10:53

The proposed UN Cybercrime Treaty puts security researchers and journalists at risk of being criminally prosecuted for their work identifying and reporting computer system vulnerabilities, work that keeps the digital ecosystem safer for everyone.

The proposed text fails to exempt security research from the expansive scope of its cybercrime prohibitions, and does not provide mandatory safeguards to protect researchers’ and journalists’ rights.

Instead, the draft text includes weak wording that criminalizes accessing a computer “without right.” This could allow authorities to prosecute security researchers and investigative journalists who, for example, independently find and publish information about holes in computer networks.

These vulnerabilities could be exploited to spread malware, cause data breaches, and get access to sensitive information of millions of people. This would undermine the very purpose of the draft treaty: to protect individuals and our institutions from cybercrime.

What's more, the draft treaty's overbroad scope, extensive secret surveillance provisions, and weak safeguards risk making the convention a tool for state abuse. Journalists reporting on government corruption, protests, public dissent, and other issues states don't like can and do become targets for surveillance, location tracking, and private data collection.

Without clear protections, the convention, if adopted, will deter critical activities that enhance cybersecurity and press freedom. For instance, the text does not make it mandatory to distinguish between unauthorized access and bypassing effective security measures, which would protect researchers and journalists.

By not requiring malicious or dishonest intent when accessing computers “without right,” the draft convention threatens to penalize researchers and journalists for actions that are fundamental to safeguarding the digital ecosystem or to reporting on issues of public interest, such as government transparency, corporate misconduct, and cybersecurity flaws.

For an in-depth analysis, please read further.

Calls Mount—from Principal UN Human Rights Official, Business, and Tech Groups—To Address Dangerous Flaws in Draft UN Surveillance Treaty

By: Karen Gullo
July 30, 2024 at 18:44

As UN delegates sat down in New York this week to restart negotiations, calls are mounting from all corners—from the United Nations High Commissioner for Human Rights (OHCHR) to Big Tech—to add critical human rights protections to, and fix other major flaws in, the proposed UN surveillance treaty, which as written will jeopardize fundamental rights for people across the globe.

Six influential organizations representing the UN itself, cybersecurity companies, civil society, and internet service providers have in recent days weighed in on the flawed treaty ahead of the two-week negotiating session that began today.

The message is clear and unambiguous: the proposed UN treaty is highly flawed and dangerous and must be fixed.

The groups have raised many points EFF has raised over the last two and a half years, including whether the treaty is necessary at all, the risks it poses to journalists and security researchers, and an overbroad scope that criminalizes offenses beyond core cybercrimes—crimes against computer systems, data, and networks. We have summarized our concerns here.

Some delegates meeting in New York are showing enthusiasm to approve the draft treaty, despite its numerous flaws. We question whether UN Member States, including the U.S., will take the lead over the next two weeks to push for significant changes in the text. So, we applaud the six organizations cited here for speaking out at this crucial time.

“The concluding session is a pivotal moment for human rights in the digital age,” the OHCHR said in comments on the new draft. Many of its provisions fail to meet international human rights standards, the commissioner said.

“These shortcomings are particularly problematic against the backdrop of an already expansive use of existing cybercrime laws in some jurisdictions to unduly restrict freedom of expression, target dissenting voices and arbitrarily interfere with the privacy and anonymity of communications.”

The OHCHR recommends including in the draft an explicit reference to specific human rights instruments, in particular the International Covenant on Civil and Political Rights; narrowing the treaty’s scope; explicitly requiring that crimes covered by the treaty be committed with “criminal intent”; and several other changes.

The proposed treaty should comprehensively integrate human rights throughout the text, OHCHR said. Without that, the convention “could jeopardize the protection of human rights of people world-wide, undermine the functionality of the internet infrastructure, create new security risks and undercut business opportunities and economic well-being.”

EFF has called on delegates to oppose the treaty if it’s not significantly improved, and we are not alone in this stance.

The Global Network Initiative (GNI), a multistakeholder organization that sets standards for responsible business conduct based on human rights, raised concerns about the draft’s implications for the liability of online platforms for offenses committed by their users, warning that online intermediaries could be held liable for user-generated content they are unaware of.

“This could lead to excessively broad content moderation and removal of legitimate, protected speech by platforms, thereby negatively impacting freedom of expression,” GNI said.

“Countries committed to human rights and the rule of law must unite to demand stronger data protection and human rights safeguards. Without these they should refuse to agree to the draft Convention.”

Human Rights Watch (HRW), a close EFF ally on the convention, called out the draft’s article on offenses related to online child sexual abuse or child sexual exploitation material (CSAM), which could lead to criminal liability for service providers acting as mere conduits. Moreover, it risks criminalizing content and conduct that has evidentiary, scientific, or artistic value, and it doesn’t sufficiently decriminalize consensual conduct between older children in relationships.

This is particularly dangerous for rights organizations that investigate child abuse and collect material depicting children subjected to torture or other abuses, including material that is sexual in nature. The draft text isn’t clear on whether legitimate use of this material is excluded from criminalization, thereby jeopardizing the safety of survivors to report CSAM activity to law enforcement or platforms.

HRW recommends adding language that excludes material manifestly artistic, among other uses, and conduct that is carried out for legitimate purposes related to documentation of human rights abuses or the administration of justice.

The Cybersecurity Tech Accord, which represents over 150 companies, raised concerns in a statement today that aspects of the draft treaty allow cooperation between states to be kept confidential or secret, without mandating any procedural legal protections.

The convention will result in more private user information being shared with more governments around the world, with no transparency or accountability. The statement provides specific examples of national security risks that could result from abuse of the convention’s powers.

The International Chamber of Commerce, a proponent of international trade for businesses in 170 countries, said the current draft would make it difficult for service providers to challenge overbroad or extraterritorial requests for data from law enforcement, potentially jeopardizing the safety and freedom of tech company employees in places where they could face arrest “as accessories to the crime for which that data is being sought.”

Further, unchecked data collection, especially from traveling employees, government officials, or government contractors, could lead to sensitive information being exposed or misused, increasing risks of security breaches or unauthorized access to critical data, the group said.

The Global Initiative Against Transnational Organized Crime, a network of law enforcement, governance, and development officials, raised concerns in a recent analysis about the draft treaty’s new title, which says the convention is against both cybercrime and, more broadly, crimes committed through the use of an information or communications technology (ICT) system.

“Through this formulation, it not only privileges Russia’s preferred terminology but also effectively redefines cybercrime,” the analysis said. With this title, the UN effectively “redefines computer systems (and the crimes committed using them) as ICT—a broader term with a wider remit.”

 

Why You Should Hate the Proposed UN Cybercrime Treaty

By: Karen Gullo
July 29, 2024 at 13:00

International UN treaties aren’t usually on users’ radar. They are debated, often over the course of many years, by diplomats and government functionaries in Vienna or New York, and their significance is often overlooked or lost in the flood of information and news we process every day, even when they expand police powers and threaten the fundamental rights of people all over the world.

Such is the case with the proposed UN Cybercrime Treaty. For more than two years, EFF and its international civil society partners have been deeply involved in spreading the word about, and fighting to fix, seriously dangerous flaws in the draft convention. In the coming days we will publish a series of short posts that cut through the draft’s dense, highly technical text explaining the real-world effects of the convention.

The proposed treaty, pushed by Russia and shepherded by the UN Office on Drugs and Crime, is a proposed agreement between nations purportedly aimed at strengthening cross border investigations and prosecutions of cybercriminals who spread malware, steal data for ransom, and cause data breaches, among other offenses.

The problem is that, as currently written, the treaty gives governments massive surveillance and data collection powers to go after not just cybercrime, but any offense they define as serious that involves the use of a computer or communications system. In some countries, that includes criticizing the government in a social media post, expressing support online for LGBTQ+ rights, or publishing news about protests or massacres.

Tech companies and their overseas staff, under certain treaty provisions, would be compelled to help governments in their pursuit of people’s data, locations, and communications, subject to domestic jurisdictions, many of which establish draconian fines.

We have called the draft convention a blank check for surveillance abuse that can be used as a tool for human rights violations and transnational repression. It’s an international treaty that everyone should know and care about because it threatens the rights and freedoms of people across the globe. Keep an eye out for our posts explaining how.

For our key concerns, read our three-pager:

Car Makers Shouldn’t Be Selling Our Driving History to Data Brokers and Insurance Companies

You accelerated multiple times on your way to Yosemite for the weekend. You braked when driving to a doctor appointment. If your car has internet capabilities, GPS tracking or OnStar, your car knows your driving history.

And now we know: your car insurance carrier might know it, too.

In a recent New York Times article, Kashmir Hill reported how everyday moments in your car like these create a data footprint of your driving habits and routine that is, in some cases, being sold to insurance companies. Collection often happens through so-called “safe driving” programs pre-installed in your vehicle through an internet-connected service on your car or a connected car app. Real-time location tracking often starts when you download an app on your phone or tap “agree” on the dash screen before you drive your car away from the dealership lot.

Technological advancements in cars have come a long way since General Motors launched OnStar in 1996. From the influx of mobile data facilitating in-car navigation, to the rise of telematics in the 2010s, cars today are more internet-connected than ever. This enables, for example, delivery of emergency warnings, notice of when you need an oil change, and software updates. Recent research predicts that by 2030, more than 95% of new passenger cars will contain some form of internet-connected service and surveillance.

Car manufacturers including General Motors, Kia, Subaru, and Mitsubishi have some form of services or apps that collect, maintain, and distribute your connected car data to insurance companies. Insurance companies spend thousands of dollars purchasing your car data to factor in these “select insights” about your driving behavior. Those insights are then factored into your “risk score,” which can potentially spike your insurance premiums.
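How telematics events might be distilled into a “risk score” can be sketched as follows. The event names, weights, and scoring formula here are invented purely for illustration; actual insurer scoring models are proprietary and undisclosed.

```python
# Hypothetical telematics events like those a "safe driving" program might log.
trip_events = [
    {"type": "hard_brake", "count": 3},
    {"type": "rapid_acceleration", "count": 5},
    {"type": "late_night_driving_minutes", "count": 40},
]

# Invented weights: each logged behavior contributes to a single number.
EVENT_WEIGHTS = {
    "hard_brake": 2.0,
    "rapid_acceleration": 1.5,
    "late_night_driving_minutes": 0.1,
}

def risk_score(events):
    """Weighted sum of logged driving events -> one 'risk' number."""
    return sum(EVENT_WEIGHTS.get(e["type"], 0.0) * e["count"] for e in events)

score = risk_score(trip_events)
print(f"risk score: {score:.1f}")  # a higher score could mean a higher premium
```

The point of the sketch is that ordinary moments like braking for a doctor’s appointment become inputs to a number you never see, computed by a formula you never agreed to.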

As Hill reported, the OnStar Smart Driver program is one example of an internet-connected service that collects driver data and sends it to car manufacturers. They then sell this digital driving profile to third-party data brokers, like LexisNexis or Verisk. From there, data brokers generally sell information to anyone with the money to buy it. After Hill’s report, GM announced it would stop sharing data with these brokers.

Manufacturers and car dealerships subvert consumers’ authentic choice to participate in the collection and sharing of their driving data. This is where consumers should be extremely wary, and where we need stronger data privacy laws. As Hill reported, a salesperson at the dealership may enroll you without your even realizing it, in pursuit of an enrollment bonus. All of this is further muddied by car manufacturers’ lack of clear, detailed, and transparent “terms and conditions” disclosure forms, which are often too long to read and filled with technical legal jargon, especially when all you want is to drive your new car home. Even the rare consumers who take the time to read the privacy disclosures, as researcher Jen Caltrider of the Mozilla Foundation noted in Hill’s article, “have little idea about what they are consenting to when it comes to data collection.”

Better Solutions

This whole process puts people in a rough situation. We are unknowingly surveilled to generate a digital footprint that companies later monetize, including details about many parts of daily life, from how we eat to how long we spend on social media. And now that includes how we drive and the places we visit in our cars.

That's why EFF supports comprehensive consumer data privacy legislation with strong data minimization rules and requirements for clear, opt-in consent.

If there were clear data minimization guardrails in place, it would curb overzealous processing of our automotive data. General Motors would only have authority to collect, maintain, use, and disclose our data to provide a service that we asked for. For example, through the OnStar program, drivers may want to provide their GPS location data to assist rescue efforts, or to automatically call 911 if they’ve been in an accident. Any car data beyond what is needed to provide services people asked for should not be collected. And it certainly shouldn't be sold to data brokers—who then sell it to your car insurance carriers.

Hill’s article shines a light on another part of daily life permeated by technological advancements with no clear privacy guardrails. Consumers do not actually know how companies are processing their data, much less exercise control over that processing.

That’s why we need opt-in consent rules: companies must be forbidden from processing our data, unless they first obtain our genuine opt-in consent. This consent must be informed and specific, meaning companies cannot hide the request in legal jargon buried under pages of fine print. Moreover, this consent cannot be the product of deceptively designed user interfaces (sometimes called “dark patterns”) that impair autonomy and choice. Further, this consent must be voluntary, meaning among other things it cannot be coerced with pay-for-privacy schemes. Finally, the default must be no data processing until the driver gives permission (“opt-in consent”), as opposed to processing until the driver objects (“opt-out consent”).

But today, consumers do not control, or often even know, to whom car manufacturers are selling their data. Is it car insurers, law enforcement agencies, advertisers?

Finally, if you want to figure out what your car knows about you, and opt out of sharing when you can, check out our instructions here.

Shots Fired: Congressional Letter Questions DHS Funding of ShotSpotter

There is a growing pile of evidence that cities should drop ShotSpotter, the notorious surveillance system that purportedly uses acoustic sensors to detect gunshots, due to its inaccuracies and the danger it creates in communities where it’s installed. In yet another blow to the product and the surveillance company behind it—SoundThinking—Congress members have sent a letter calling on the Department of Homeland Security to investigate how it provides funding to local police to deploy the product.

The seven-page letter, from Senators Ed Markey, Ron Wyden, and Elizabeth Warren, and Representative Ayanna Pressley, begins by questioning the “accuracy and effectiveness” of ShotSpotter, then outlines some of the latest evidence of its abysmal performance, including multiple studies showing false positive rates (i.e., incorrectly classifying non-gunshot sounds as gunshots) of 70% or higher. In addition to its ineffectiveness, the Congress members voiced serious concerns about ShotSpotter’s contribution to discrimination, civil rights violations, and poor policing practices, as most ShotSpotter sensors are installed in overwhelmingly “Black, Brown and Latin[e] communities” at the request of local law enforcement. Together, the technology’s inefficacy and its placement can result in police being deployed to what they expect to be a dangerous situation with guns drawn, increasing the chances of all-too-common police violence against civilians in the area.

In light of the grave concerns raised by the use of ShotSpotter, the lawmakers are demanding that DHS investigate its funding, and whether it’s an appropriate use of taxpayer dollars. We agree: DHS should investigate, and should end its program of offering grants to local law enforcement agencies to contract with SoundThinking. 

The letter can be read in its entirety here.

EFF to Court: Electronic Ankle Monitoring Is Bad. Sharing That Data Is Even Worse.

The government violates the privacy rights of individuals on pretrial release when it continuously tracks, retains, and shares their location, EFF explained in a friend-of-the-court brief filed in the Ninth Circuit Court of Appeals.

In the case, Simon v. San Francisco, individuals on pretrial release are challenging the City and County of San Francisco’s electronic ankle monitoring program. The lower court ruled the program likely violates the California and federal constitutions. We—along with Professor Kate Weisburd and the Cato Institute—urge the Ninth Circuit to do the same.

Under the program, the San Francisco County Sheriff collects and indefinitely retains geolocation data from people on pretrial release and turns it over to other law enforcement entities without suspicion or a warrant. The Sheriff shares both comprehensive geolocation data collected from individuals and the results of invasive reverse location searches of all program participants’ location data to determine whether an individual on pretrial release was near a specified location at a specified time.

Electronic monitoring transforms individuals’ homes, workplaces, and neighborhoods into digital prisons, in which devices physically attached to people follow their every movement. All location data can reveal sensitive, private information about individuals, such as whether they were at an office, union hall, or house of worship. This is especially true for the GPS data at issue in Simon, given its high degree of accuracy and precision. Both federal and state courts recognize that location data is sensitive, revealing information in which one has a reasonable expectation of privacy. And, as EFF’s brief explains, the Simon plaintiffs do not relinquish this reasonable expectation of privacy in their location information merely because they are on pretrial release—to the contrary, their privacy interests remain substantial.

Moreover, as EFF explains in its brief, this electronic monitoring is not only invasive, but ineffective and (contrary to its portrayal as a detention alternative) an expansion of government surveillance. Studies have not found significant relationships between electronic monitoring of individuals on pretrial release and their court appearance rates or likelihood of arrest. Nor do studies show that law enforcement is employing electronic monitoring with individuals they would otherwise put in jail. To the contrary, studies indicate that law enforcement is using electronic monitoring to surveil and constrain the liberty of those who wouldn’t otherwise be detained.

We hope the Ninth Circuit affirms the trial court and recognizes the rights of individuals on pretrial release against invasive electronic monitoring.

Virtual Reality and the 'Virtual Wall'

By Dave Maass
April 10, 2024, 18:32

When EFF set out to map surveillance technology along the U.S.-Mexico border, we weren't exactly sure how to do it. We started with public records—procurement documents, environmental assessments, and the like—which allowed us to find the GPS coordinates of scores of towers. During a series of in-person trips, we were able to find even more. Yet virtual reality ended up being one of the key tools in not only discovering surveillance at the border, but also in educating people about Customs & Border Protection's so-called "virtual wall" through VR tours.

EFF Director of Investigations Dave Maass recently gave a lightning talk at University of Nevada, Reno's annual XR Meetup explaining how virtual reality, perhaps ironically, has allowed us to better understand the reality of border surveillance.


In Historic Victory for Human Rights in Colombia, Inter-American Court Finds State Agencies Violated Human Rights of Lawyers Defending Activists

In a landmark ruling for fundamental freedoms in Colombia, the Inter-American Court of Human Rights found that for over two decades the state government harassed, surveilled, and persecuted members of a lawyer’s group that defends human rights defenders, activists, and indigenous people, putting the attorneys’ lives at risk. 

The ruling is a major victory for civil rights in Colombia, which has a long history of abuse and violence against human rights defenders, including murders and death threats. The case involved the unlawful and arbitrary surveillance of members of the Jose Alvear Restrepo Lawyers Collective (CAJAR), a Colombian human rights organization that has defended victims of political persecution and community activists for over 40 years.

The court found that since at least 1999, Colombian authorities carried out a constant campaign of pervasive secret surveillance of CAJAR members and their families. The state thereby violated their rights to life, personal integrity, private life, freedom of expression and association, and more, the Court said. It noted the particular impact experienced by women defenders and by those who had to leave the country amid threats, attacks, and harassment for representing victims.

The decision is the first by the Inter-American Court to find a State responsible for violating the right to defend human rights. The court is a human rights tribunal that interprets and applies the American Convention on Human Rights, an international treaty ratified by over 20 states in Latin America and the Caribbean. 

In 2022, EFF, Article 19, Fundación Karisma, and Privacy International, represented by Berkeley Law’s International Human Rights Law Clinic, filed an amicus brief in the case. EFF and partners urged the court to rule that Colombia’s legal framework regulating intelligence activity and the surveillance of CAJAR and their families violated a constellation of human rights and forced them to limit their activities, change homes, and go into exile to avoid violence, threats, and harassment. 

Colombia's intelligence network was behind abusive surveillance practices in violation of the American Convention and did not prevent authorities from unlawfully surveilling, harassing, and attacking CAJAR members, EFF told the court. Even after Colombia enacted a new intelligence law, authorities continued to carry out unlawful communications surveillance against CAJAR members, using an expansive and invasive spying system to target and disrupt the work of not just CAJAR but other human rights defenders and journalists as well.

In examining Colombia’s intelligence law and surveillance actions, the court elaborated on key Inter-American and other international human rights standards, and advanced significant conclusions for the protection of privacy, freedom of expression, and the right to defend human rights. 

The court delved into criteria for intelligence gathering powers, limitations, and controls. It highlighted the need for independent oversight of intelligence activities and effective remedies against arbitrary actions. It also elaborated on standards for the collection, management, and access to personal data held by intelligence agencies, and recognized the protection of informational self-determination by the American Convention. We highlight some of the most important conclusions below.

Prior Judicial Order for Communications Surveillance and Access to Data

The court noted that actions such as covert surveillance, interception of communications, or collection of personal data constitute undeniable interference with the exercise of human rights, requiring precise regulations and effective controls to prevent abuse by state authorities. Its ruling recalled European Court of Human Rights case law establishing that “the mere existence of legislation allowing for a system of secret monitoring […] constitutes a threat to ‘freedom of communication among users of telecommunications services and thus amounts in itself to an interference with the exercise of rights’.”

Building on its ruling in Escher et al. v. Brazil, the Inter-American Court stated that

“[t]he effective protection of the rights to privacy and freedom of thought and expression, combined with the extreme risk of arbitrariness posed by the use of surveillance techniques […] of communications, especially in light of existing new technologies, leads this Court to conclude that any measure in this regard (including interception, surveillance, and monitoring of all types of communication […]) requires a judicial authority to decide on its merits, while also defining its limits, including the manner, duration, and scope of the authorized measure.” (emphasis added) 

According to the court, judicial authorization is needed when intelligence agencies intend to request personal information from private companies that, for various legitimate reasons, administer or manage this data. Similarly, prior judicial order is required for “surveillance and tracking techniques concerning specific individuals that entail access to non-public databases and information systems that store and process personal data, the tracking of users on the computer network, or the location of electronic devices.”  

The court said that “techniques or methods involving access to sensitive telematic metadata and data, such as email and metadata of OTT applications, location data, IP address, cell tower station, cloud data, GPS and Wi-Fi, also require prior judicial authorization.” Unfortunately, the court missed the opportunity to clearly differentiate between targeted and mass surveillance to explicitly condemn the latter.

The court had already recognized in Escher that the American Convention protects not only the content of communications but also any related information like the origin, duration, and time of the communication. But legislation across the region provides less protection for metadata compared to content. We hope the court's new ruling helps to repeal measures allowing state authorities to access metadata without a previous judicial order.

Indeed, the court emphasized that the need for a prior judicial authorization "is consistent with the role of guarantors of human rights that corresponds to judges in a democratic system, whose necessary independence enables the exercise of objective control, in accordance with the law, over the actions of other organs of public power.” 

To this end, the judicial authority is responsible for evaluating the circumstances around the case and conducting a proportionality assessment. The judicial decision must be well-founded and weigh all constitutional, legal, and conventional requirements to justify granting or denying a surveillance measure. 

Informational Self-Determination Recognized as an Autonomous Human Right 

In a landmark outcome, the court asserted that individuals are entitled to decide when and to what extent aspects of their private life can be revealed, which involves defining what type of information, including their personal data, others may get to know. This relates to the right of informational self-determination, which the court recognized as an autonomous right protected by the American Convention. 

“In the view of the Inter-American Court, the foregoing elements give shape to an autonomous human right: the right to informational self-determination, recognized in various legal systems of the region, and which finds protection in the protective content of the American Convention, particularly stemming from the rights set forth in Articles 11 and 13, and, in the dimension of its judicial protection, in the right ensured by Article 25.”  

The protections that Article 11 grant to human dignity and private life safeguard a person's autonomy and the free development of their personality. Building on this provision, the court affirmed individuals’ self-determination regarding their personal information. In combination with the right to access information enshrined in Article 13, the court determined that people have the right to access and control their personal data held in databases. 

The court has explained that the scope of this right includes several components. First, people have the right to know what data about them are contained in state records, where the data came from, how it got there, the purpose for keeping it, how long it’s been kept, whether and why it’s being shared with outside parties, and how it’s being processed. Next is the right to rectify, modify, or update their data if it is inaccurate, incomplete, or outdated. Third is the right to delete, cancel, and suppress their data in justified circumstances. Fourth is the right to oppose the processing of their data also in justified circumstances, and fifth is the right to data portability as regulated by law. 

According to the court, any exceptions to the right of informational self-determination must be legally established, necessary, and proportionate for intelligence agencies to carry out their mandate. In elaborating on the circumstances for full or partial withholding of records held by intelligence authorities, the court said any restrictions must be compatible with the American Convention. Holding back requested information is always exceptional, limited in time, and justified according to specific and strict cases set by law. The protection of national security cannot serve as a blanket justification for denying access to personal information. “It is not compatible with Inter-American standards to establish that a document is classified simply because it belongs to an intelligence agency and not on the basis of its content,” the court said.  

The court concluded that Colombia violated CAJAR members’ right to informational self-determination by arbitrarily restricting their ability to access and control their personal data within public bodies’ intelligence files.

The Vital Protection of the Right to Defend Human Rights

The court emphasized the autonomous nature of the right to defend human rights, finding that States must ensure people can freely, without limitations or risks of any kind, engage in activities aimed at the promotion, monitoring, dissemination, teaching, defense, advocacy, or protection of universally recognized human rights and fundamental freedoms. The ruling recognized that Colombia violated the CAJAR members' right to defend human rights.

For over a decade, human rights bodies and organizations have raised alarms and documented the deep challenges and perils that human rights defenders constantly face in the Americas. In this ruling, the court importantly reiterated their fundamental role in strengthening democracy. It emphasized that this role justifies a special duty of protection by States, which must establish adequate guarantees and facilitate the necessary means for defenders to freely exercise their activities. 

Therefore, proper respect for human rights requires States’ special attention to actions that limit or obstruct the work of defenders. The court has emphasized that threats and attacks against human rights defenders, as well as the impunity of perpetrators, have not only an individual but also a collective effect, insofar as society is prevented from knowing the truth about human rights violations under the authority of a specific State. 

Colombia’s Intelligence Legal Framework Enabled Arbitrary Surveillance Practices 

In our amicus brief, we argued that Colombian intelligence agents carried out unlawful communications surveillance of CAJAR members under a legal framework that failed to meet international human rights standards. As EFF and allies elaborated a decade ago on the Necessary and Proportionate principles, international human rights law provides an essential framework for ensuring robust safeguards in the context of State communications surveillance, including intelligence activities. 

In the brief, we bolstered criticism made by CAJAR, Centro por la Justicia y el Derecho Internacional (CEJIL), and the Inter-American Commission on Human Rights, challenging Colombia’s claim that the Intelligence Law enacted in 2013 (Law n. 1621) is clear and precise, fulfills the principles of legality, proportionality, and necessity, and provides sufficient safeguards. EFF and partners highlighted that even after its passage, intelligence agencies have systematically surveilled, harassed, and attacked CAJAR members in violation of their rights. 

As we argued, that didn’t happen despite Colombia’s intelligence legal framework, rather it was enabled by its flaws. We emphasized that the Intelligence Law gives authorities wide latitude to surveil human rights defenders, lacking provisions for prior, well-founded, judicial authorization for specific surveillance measures, and robust independent oversight. We also pointed out that Colombian legislation failed to provide the necessary means for defenders to correct and erase their data unlawfully held in intelligence records. 

The court ruled that, as reparation, Colombia must adjust its intelligence legal framework to reflect Inter-American human rights standards. This means that intelligence norms must be changed to clearly establish the legitimate purposes of intelligence actions, the types of individuals and activities subject to intelligence measures, the level of suspicion needed to trigger surveillance by intelligence agencies, and the duration of surveillance measures. 

The reparations also call for Colombia to keep files and records of all steps of intelligence activities, “including the history of access logs to electronic systems, if applicable,” and deliver periodic reports to oversight entities. The legislation must also subject communications surveillance measures to prior judicial authorization, except in emergency situations. Moreover, Colombia needs to pass regulations for mechanisms ensuring the right to informational self-determination in relation to intelligence files. 

These are just some of the fixes the ruling calls for, and they represent a major win. Still, the court missed the opportunity to vehemently condemn state mass surveillance (which can occur under an ill-defined measure in Colombia’s Intelligence Law enabling spectrum monitoring), although Colombian courts will now have the chance to strike it down.

In all, the court ordered the state to take 16 reparation measures, including implementing a system for collecting data on violence against human rights defenders and investigating acts of violence against victims. The government must also publicly acknowledge responsibility for the violations. 

The Inter-American Court's ruling in the CAJAR case sends an important message to Colombia, and the region, that intelligence powers are only lawful and legitimate when there are solid and effective controls and safeguards in place. Intelligence authorities cannot act as if international human rights law doesn't apply to their practices.  

When they do, violations must be fiercely investigated and punished. The ruling elaborates on crucial standards that States must fulfill to make this happen. Only time will tell how closely Colombia and other States will apply the court's findings to their intelligence activities. What’s certain is the dire need to fix a system that helped Colombia become the deadliest country in the Americas for human rights defenders last year, with 70 murders, more than half of all such murders in Latin America. 

Draft UN Cybercrime Treaty Could Make Security Research a Crime, Leading 124 Experts to Call on UN Delegates to Fix Flawed Provisions that Weaken Everyone’s Security

By Karen Gullo
February 7, 2024, 10:56

Security researchers’ work discovering and reporting vulnerabilities in software, firmware, networks, and devices protects people, businesses, and governments around the world from malware, theft of critical data, and other cyberattacks. The internet and the digital ecosystem are safer because of their work.

The UN Cybercrime Treaty, which is in the final stages of drafting in New York this week, risks criminalizing this vitally important work. This is appalling and wrong, and must be fixed.

One hundred twenty-four prominent security researchers and cybersecurity organizations from around the world voiced their concern today about the draft and called on UN delegates to modify flawed language in the text that would hinder researchers’ efforts to enhance global security and prevent the actual criminal activity the treaty is meant to rein in.

Time is running out—the final negotiations over the treaty end Feb. 9. The talks are the culmination of two years of negotiations; EFF and its international partners have raised concerns over the treaty’s flaws since the beginning. If approved as is, the treaty will substantially impact criminal laws around the world and grant expansive new police powers for both domestic and international criminal investigations.

Experts who work globally to find and fix vulnerabilities before real criminals can exploit them said in a statement today that vague language and overbroad provisions in the draft increase the risk that researchers could face prosecution. The draft fails to protect the good faith work of security researchers who may bypass security measures and gain access to computer systems in identifying vulnerabilities, the letter says.

The draft threatens security researchers because it doesn’t specify that access to computer systems with no malicious intent to cause harm, steal, or infect with malware should not be subject to prosecution. If left unchanged, the treaty would be a major blow to cybersecurity around the world.

Specifically, security researchers seek changes to Article 6, which risks criminalizing essential activities, including accessing systems without prior authorization to identify vulnerabilities. The current text also includes the ambiguous term “without right” as a basis for establishing criminal liability for unauthorized access. Clarifying this vague language, as well as requiring that unauthorized access be done with malicious intent, is needed to protect security research.

The signers also called out Article 28(4), which empowers States to force “any individual” with knowledge of computer systems to turn over any information necessary to conduct searches and seizures of computer systems. This dangerous paragraph must be removed and replaced with language specifying that custodians must only comply with lawful orders to the extent of their ability.

There are many other problems with the draft treaty—it lacks human rights safeguards, gives States powers to reach across borders to surveil and collect personal information of people in other States, and forces tech companies to collude with law enforcement in alleged cybercrime investigations.

EFF and its international partners have been and are pressing hard for human rights safeguards and other fixes to ensure that the fight against cybercrime does not require sacrificing fundamental rights. We stand with security researchers in demanding amendments to ensure the treaty is not used as a tool to threaten, intimidate, or prosecute them, software engineers, security teams, and developers.

 For the statement:
https://www.eff.org/deeplinks/2024/02/protect-good-faith-security-research-globally-proposed-un-cybercrime-treaty

For more on the treaty:
https://ahc.derechosdigitales.org/en/

In Final Talks on Proposed UN Cybercrime Treaty, EFF Calls on Delegates to Incorporate Protections Against Spying and Restrict Overcriminalization or Reject Convention

By Karen Gullo
January 29, 2024, 12:42

UN Member States are meeting in New York this week to conclude negotiations over the final text of the UN Cybercrime Treaty, which—despite warnings from hundreds of civil society organizations across the globe, security researchers, media rights defenders, and the world’s largest tech companies—will, in its present form, endanger human rights and make the cyber ecosystem less secure for everyone.

EFF and its international partners are going into this last session with a unified message: without meaningful changes to limit surveillance powers for electronic evidence gathering across borders, and the addition of robust minimum human rights safeguards that apply across borders, the convention should be rejected by state delegations and not advance to the UN General Assembly in February for adoption.

EFF and its partners have for months warned that enforcement of such a treaty would have dire consequences for human rights. On a practical level, it will impede free expression and endanger activists, journalists, dissenters, and everyday people.

Under the draft treaty's current provisions on accessing personal data for criminal investigations across borders, each country is allowed to define what constitutes a "serious crime." Such definitions can be excessively broad and violate international human rights standards. States where it’s a crime to criticize political leaders (Thailand), upload videos of yourself dancing (Iran), or wave a rainbow flag in support of LGBTQ+ rights (Egypt) can, under this UN-sanctioned treaty, require one country to conduct surveillance to aid another, in accordance with the data disclosure standards of the requesting country. This includes surveilling individuals under investigation for these offenses, with the expectation that technology companies will assist. Such assistance involves turning over personal information, location data, and private communications secretly, without any guardrails, in jurisdictions lacking robust legal protections.

The final 10-day negotiating session in New York will conclude a series of talks that started in 2022 to create a treaty to prevent and combat core computer-enabled crimes, like distribution of malware, data interception and theft, and money laundering. From the beginning, Member States failed to reach consensus on the treaty’s scope, the inclusion of human rights safeguards, and even the definition of “cybercrime.” The scope of the entire treaty was too broad from the very beginning; Member States eventually dropped some of these offenses, limiting the scope of the criminalization section, but not the evidence-gathering provisions that hand States dangerous surveillance powers. What was supposed to be an international accord to combat core cybercrime morphed into a global surveillance agreement covering any and all crimes conceived by Member States.

The latest draft, released last November, blatantly disregards our calls to narrow the scope, strengthen human rights safeguards, and tighten loopholes enabling countries to assist each other in spying on people. It also retains a controversial provision allowing states to compel engineers or tech employees to undermine security measures, posing a threat to encryption. Absent from the draft are protections for good-faith cybersecurity researchers and others acting in the public interest.

This is unacceptable. In a Jan. 23 joint
statement to delegates participating in this final session, EFF and 110 organizations outlined non-negotiable redlines for the draft that will emerge from this session, which ends Feb. 8. These include:

  • Narrowing the scope of the entire Convention to cyber-dependent crimes specifically defined within its text.
  • Including provisions to ensure that security researchers, whistleblowers, journalists, and human rights defenders are not prosecuted for their legitimate activities and that other public interest activities are protected. 
  • Guaranteeing explicit data protection and human rights standards like legitimate purpose, nondiscrimination, prior judicial authorization, necessity and proportionality apply to the entire Convention.
  • Mainstreaming gender across the Convention as a whole and throughout each article in efforts to prevent and combat cybercrime.

It’s been a long fight pushing for a treaty that combats cybercrime without undermining basic human rights. Without these improvements, the risks of this treaty far outweigh its potential benefits. States must stand firm and reject the treaty if our redlines can’t be met. We cannot and will not support or recommend a draft that will make everyone less, instead of more, secure.

Recent Surveillance Revelations, Enduring Latin American Issues: 2023 Year in Review

December 25, 2023, 12:39

The challenges in ensuring strong privacy safeguards, proper oversight of surveillance powers, and effective remedies for those arbitrarily affected continued during 2023 in Latin America. Let's look at a few non-exhaustive examples.

We saw a scandal unveiling that Brazilian intelligence agents monitored the movements of politicians, journalists, lawyers, police officers, and judges. In Perú, leaked documents indicated negotiations between the government and a U.S. vendor of spying technologies. Amid the Argentinian presidential elections, a thorny surveillance scheme made the news. In México, media reports highlighted prosecutors' controversial data requests targeting public figures. New revelations reinforced that the change in Mexico's government didn't halt the use of Pegasus to spy on human rights defenders, while the trial over Pegasus abuses during the previous administration has finally begun.

Those recent surveillance stories have deep roots in legal and institutional weaknesses, often compounded by an entrenched culture of secrecy. While the challenges cited above are by no means exclusive to Latin America, it remains essential to draw attention to the arbitrary surveillance cases that come to light, enabling broader societal scrutiny.

The Opacity of Intelligence Activities and Privacy Loopholes

First revealed in March, the use of location-tracking software by intelligence forces in Brazil hit the headlines again in October, when a Federal Police investigation led to 25 search warrants and the arrest of two officials. The newspaper O Globo uncovered that during three years of former President Bolsonaro's administration, officials of Brazil's intelligence agency (Abin) used First Mile to monitor the movements of up to 10,000 cell phone owners every 12 months, without any official protocol. According to O Globo, First Mile, developed by the Israeli company Cognyte, can locate an individual based on the position of devices using 2G, 3G, and 4G networks. By simply entering a person's phone number, the system lets its operator follow the target's last known position on a map. It also provides records of targets' movements and "real-time alerts" when they move.

News reports indicate that the system likely exploits Signaling System No. 7 (SS7), an international telecommunication protocol standard that defines how the elements of a telephone network exchange information and control signals. It is by using the SS7 protocol that network operators are able to route telephone calls and SMS messages to the correct recipients. Yet security vulnerabilities in SS7 also enable attackers to find out the location of a target, among other malicious uses. While telecom companies have access to such data as part of their operations and may disclose it in response to law enforcement requests, tools like First Mile allow intelligence and police agents to skip this step.
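The gap described above, in which carriers hold location data behind legal process while a party with direct SS7 signaling access queries the network itself, can be sketched as a toy model. Everything here (phone numbers, records, function names) is hypothetical; real SS7 location attacks issue signaling queries such as MAP AnyTimeInterrogation against live networks, not Python calls:

```python
# Toy model of why direct SS7 access bypasses the carrier's legal-process
# gate. All numbers, records, and function names are hypothetical.

# The carrier's internal registry: phone number -> last cell the device
# registered on (in real networks, the HLR/VLR hold this state).
CARRIER_LOCATION_DB = {
    "+5511000000001": {"cell_id": "724-10-4521", "area": "São Paulo"},
}

def carrier_disclosure(msisdn: str, has_court_order: bool) -> dict:
    """Lawful path: the carrier discloses location only under legal process."""
    if not has_court_order:
        raise PermissionError("carrier requires legal process")
    return CARRIER_LOCATION_DB[msisdn]

def ss7_location_query(msisdn: str) -> dict:
    """Exploit path: a party with signaling access queries the network
    directly, so the legal-process check above is never consulted."""
    return CARRIER_LOCATION_DB[msisdn]
```

Both paths return the same record; the difference is solely which checks stand in front of it. That is why tools like First Mile matter even though carriers already hold the data.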

A high-ranking source at Abin told O Globo that the agency claimed to use the tool for "state security" purposes, on the grounds that there was a "legal limbo" around privacy protections for cell phone metadata. The primary issue the case underscores is the lack of robust regulation and oversight of intelligence activities in Brazil. Second, while Brazilian law indeed lacks strong explicit privacy protections for telephone metadata, access to real-time location data enjoys a higher standard, at least for criminal investigations. Moreover, Brazil has key constitutional data privacy safeguards and case law that can provide a solid basis to challenge the arbitrary use of tools like First Mile.

The Good and the Bad Guys Cross Paths

We should not disregard how the absence of proper controls, safeguards, and technical security measures opens the door not only to law enforcement and government abuses but also to actions by malicious third parties, including through their relations with political powers.

The Titan software used in Mexico also exploits the SS7 protocol, and combines location data with a trove of information it pulls from credit bureau, government, telecom, and other databases. Vice News unveiled that Mexican cartels are allegedly piggybacking on police use of this system to track and target their enemies.

In Titan's case, Vice News reported that by entering a first and last name, or a phone number, the platform gives access to a person's Mexican ID, "including address, phone number, a log of calls made and received, a security background check showing if the person has an active or past warrant or has been in prison, credit information, and the option to geolocate the phone." The piece points out there is an underground market of people selling Titan-enabled intel, with prices that can reach up to USD 9,000 per service.

In turn, the surveillance scheme uncovered in Argentina doesn't rely on specific software; it may involve hacking, and apparently mixes different sources and techniques to spy on persons of interest. The lead character here is a former federal police officer who compiled over 1,000 folders on politicians, judges, journalists, union leaders, and more. Various news reports suggest links between the former police officer's spying services and his possible political ties.

Vulnerabilities on Sale, Rights at Stake

Another critical aspect concerns the current incentives to perpetuate, rather than fix, security vulnerabilities – and governments' role in them. As we have highlighted, "governments must recognize that intelligence agency and law enforcement hostility to device security is dangerous to their own citizens," and shift their attitude from often facilitating the spread of malicious software to actually supporting security for all of us. Yet we still have a long way to go.

In Perú, La Encerrona reported that a U.S.-based vendor, Duality Alliance, offered spying systems to the Intelligence Division of Perú's Ministry of Interior (DIGIMIN). According to La Encerrona, leaked documents indicated negotiations during 2021 and 2022. Among the offers, La Encerrona highlights ARPON, a tool the vendor claimed could intercept WhatsApp messages through a zero-click attack able to circumvent the security restrictions between the app and the Android operating system. DIGIMIN has assured the news site that the agency didn't purchase any of the tools Duality Alliance offered.

Recent Mexican experience shows the challenge of putting an end to the arbitrary use of spyware. Despite major public outcry against security forces' use of Pegasus to track journalists, human rights defenders, and political opponents, among others, and President López Obrador's public commitment to halt these abuses, the issue continues. New evidence of spying by the Mexican Armed Forces during López Obrador's administration burst into the media in 2023. According to media reports, the military used Pegasus to track the country's undersecretary for human rights, a human rights defender, and journalists.

The kick-off of the trial in the Mexican Pegasus case is definitely good news. It started in December, already providing key witness insights into the spying operations. According to the Mexican digital rights organization R3D, a trial witness placed former President Enrique Peña Nieto and other high-ranking officials in the chain of command behind the Pegasus infections. As R3D pointed out, this trial must serve as a starting point for investigating the espionage apparatus built in Mexico between public and private actors, which should also cover the most recent cases.

Recurrent Issues, Urgent Needs

On a final but equally important note, The New York Times reported that Mexico City's Attorney General's Office (AGO) and prosecutors in the state of Colima issued controversial data requests to the Mexican telecom company Telcel targeting politicians and public officials. According to The New York Times, Mexico City's AGO denied having requested that information, although other sources confirmed it. The requests didn't require prior judicial authorization, as they fell under a legal exception for kidnapping investigations. R3D highlighted how the case relates to deep-seated issues, such as the indiscriminate telecom data retention obligation set in Mexican law and the lack of adequate safeguards to prevent and punish arbitrary access to metadata by law enforcement.

Along with R3D and other partners in Latin America, EFF has been furthering the project ¿Quién Defiende Tus Datos? ("Who Defends Your Data?") since 2015 to push for stronger privacy and transparency commitments from Internet Service Providers (ISPs) in the region. In 2023, we released a comparative report building on eight years of findings and challenges. Despite advances, our conclusions show persistent gaps and concerning new trends closely connected to the issues this post describes. Our recommendations reinforce critical milestones companies and states should embrace to pave the way forward.

During 2023, we continued working to make these recommendations a reality. Among other things, we collaborated with partners in Brazil on a draft proposal for ensuring data protection in the context of public security and law enforcement, spoke to Mexican lawmakers about how cybersecurity and strong data privacy rights go hand in hand, and joined policy debates upholding solid data privacy standards. We will keep monitoring privacy's ups and downs in Latin America, and contribute to turning the recurring lessons from arbitrary surveillance cases into consistent responses toward robust data privacy and security for all.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

EFF Joins Forces with 20+ Organizations in the Coalition #MigrarSinVigilancia

December 18, 2023, 10:12

Today, EFF joins more than 25 civil society organizations to launch the Coalition #MigrarSinVigilancia ("To Migrate Without Surveillance"). The Latin American coalition’s aim is to oppose arbitrary and indiscriminate surveillance affecting migrants across the region, and to push for the protection of human rights by safeguarding migrants' privacy and personal data.

On this International Migrants Day (December 18), we join forces with a key group of digital rights and frontline humanitarian organizations to coordinate actions and share resources in pursuit of this significant goal.

Governments are using technologies to monitor migrants, asylum seekers, and others moving across borders with growing frequency and intensity. This intensive surveillance is often framed within the concept of "smart borders" as a more humanitarian approach to address and streamline border management, even though its implementation often negatively impacts the migrant population.

EFF has been documenting the magnitude and breadth of such surveillance apparatus, as well as how it grows and impacts communities at the border. We have fought in courts against the arbitrariness of border searches in the U.S. and called out the inherent dangers of amassing migrants' genetic data in law enforcement databases.  

The coalition we launch today stresses that the lack of transparency in surveillance practices and regional government collaboration violates human rights. This opacity is intertwined with the absence of effective safeguards for migrants to know and decide crucial aspects of how authorities collect and process their data.

The Coalition calls on all states in the Americas, as well as companies and organizations providing them with technologies and services for cross-border monitoring, to take several actions:

  1. Safeguard the human rights of migrants, including but not limited to the rights to migrate and seek asylum, the right to not be separated from their families, due process of law, and consent, by protecting their personal data.
  2. Recognize the mental, emotional, and legal impact that surveillance has on migrants and other people on the move.
  3. Ensure human rights safeguards for monitoring and supervising technologies for migration control.
  4. Conduct a human rights impact assessment of already implemented technologies for migration control.
  5. Refrain from using or prohibit technologies for migration control that present inherent or serious human rights harms.
  6. Strengthen efforts to achieve effective remedies for abuses, accountability, and transparency by authorities and the private sector.

We invite you to learn more about the Coalition #MigrarSinVigilancia and the work of the organizations involved, and to stand with us to safeguard data privacy rights of migrants and asylum seekers—rights that are crucial for their ability to safely build new futures.

The Growing Threat of Cybercrime Law Abuse: LGBTQ+ Rights in MENA and the UN Cybercrime Draft Convention

This is Part II  of a series examining the proposed UN Cybercrime Treaty in the context of LGBTQ+ communities. Part I looks at the draft Convention’s potential implications for LGBTQ+ rights. Part II provides a closer look at how cybercrime laws might specifically impact the LGBTQ+ community and activists in the Middle East and North Africa (MENA) region.

In the digital age, the rights of the LGBTQ+ community in the Middle East and North Africa (MENA) are gravely threatened by expansive cybercrime and surveillance legislation. This reality leads to systemic suppression of LGBTQ+ identities, compelling individuals to censor themselves for fear of severe reprisal. This looming threat becomes even more pronounced in countries like Iran, where same-sex conduct is punishable by death, and Egypt, where merely raising a rainbow flag can lead to being arrested and tortured.

Enter the proposed UN Cybercrime Convention. If ratified in its present state, the convention might not only bolster certain countries' domestic surveillance powers to probe actions that some nations mislabel as crimes, but it could also strengthen and validate international collaboration grounded in these powers. Such a UN endorsement could establish a perilous precedent, authorizing surveillance measures for acts that are in stark contradiction with international human rights law. Even more concerning, it might tempt certain countries to formulate or increase their restrictive criminal laws, eager to tap into the broader pool of cross-border surveillance cooperation that the proposed convention offers. 

The draft convention, in Article 35, permits each country to define its own crimes under domestic laws when requesting assistance from other nations in cross-border policing and evidence collection. In certain countries, many of these criminal laws might be based on subjective moral judgments that suppress what is considered free expression in other nations, rather than adhering to universally accepted standards.

Under the draft, international cooperation is permissible for crimes that carry a penalty of four years of imprisonment or more; there is a concerning move afoot to reduce this threshold to merely three years. This applies whether or not the alleged offense is cyber-related. Such provisions could result in heightened cross-border monitoring and repercussions for individuals, up to and including torture or the death penalty in some jurisdictions.

While some countries may believe they can sidestep these pitfalls by not collaborating with countries that have controversial laws, this confidence may be misplaced. The draft treaty allows a country to refuse a request if the activity in question is not a crime under its domestic law (the principle of "dual criminality"). However, given the current strain on the MLAT system, there is an increasing likelihood that requests, even from countries with contentious laws, could slip through the checks. This opens the door for nations to inadvertently assist in operations that contradict global human rights norms. And where countries do share the same subjective values and problematically criminalize the same conduct, this draft treaty seemingly provides a justification for their cooperation.
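The two rules at work here, the penalty threshold for cooperation and the dual-criminality ground for refusal, can be condensed into a toy decision sketch. The state labels, offenses, and penalty figures below are invented for illustration and are not drawn from the treaty text:

```python
# Hypothetical sketch of the draft's cooperation logic: a request qualifies
# when the requesting state's own definition of the offense meets the
# penalty threshold, and the requested state may (but need not) refuse
# when the conduct is not a crime under its own law (dual criminality).

PENALTY_THRESHOLD_YEARS = 4  # draft threshold; a proposal would lower it to 3

# Each state's criminal code: offense -> maximum penalty in years (invented).
CRIMINAL_CODES = {
    "State A": {"insulting officials online": 5, "malware distribution": 6},
    "State B": {"malware distribution": 6},  # criminalizes no speech offenses
}

def qualifies_for_cooperation(offense: str, requesting_state: str) -> bool:
    """The requesting state's own penalty determines eligibility."""
    years = CRIMINAL_CODES[requesting_state].get(offense, 0)
    return years >= PENALTY_THRESHOLD_YEARS

def may_refuse(offense: str, requested_state: str) -> bool:
    """Dual criminality: refusal is available only when the conduct is
    not criminalized domestically. Note it is permitted, not mandatory."""
    return offense not in CRIMINAL_CODES[requested_state]
```

A speech offense carrying five years in State A qualifies for cooperation even though State B may refuse; the concern raised above is precisely that refusal is discretionary and, under a strained MLAT system, not guaranteed to happen.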

One of the more recently introduced pieces of legislation that exemplifies these issues is the Cybercrime Law of 2023 in Jordan. Introduced as part of King Abdullah II’s modernization reforms to increase political participation across Jordan, this law was issued hastily and without sufficient examination of its legal aspects, social implications, and impact on human rights. In addition to this new law, the pre-existing cybercrime law in Jordan has already been used against LGBTQ+ people, and this new law expands its capacity to do so. This law, with its overly broad and vaguely defined terms, will severely restrict individual human rights across that country and will become a tool for prosecuting innocent individuals for their online speech. 

Article 13 of the Jordan law expansively criminalizes a wide set of actions tied to online content branded as “pornographic,” from its creation to distribution. The ambiguity in defining what is pornographic could inadvertently suppress content that merely expresses various sexualities, mistakenly deeming them as inappropriate. This goes beyond regulating explicit material; it can suppress genuine expressions of identity. The penalty for such actions entails a period of no less than six months of imprisonment. 

Meanwhile, the nebulous wording in Article 14 of Jordan's law—terms like “expose public morals,” “debauchery,” and “seduction”—is equally concerning. Such vague language is ripe for misuse, potentially curbing LGBTQ+ content by erroneously associating diverse sexual orientations with immorality. Both articles, in their current form, cast shadows on free expression and are stark reminders that such provisions can lead to over-policing of online content that is not harmful at all. During debates on the bill in the Jordanian Parliament, some MPs claimed that the new cybercrime law could be used to criminalize LGBTQ+ individuals and content online. The Deputy Leader of the Opposition, Saleh al Armouti, went further, claiming that “Jordan will become a big jail.”

Additionally, the law imposes restrictions on encryption and anonymity in digital communications, preventing individuals from safeguarding their rights to freedom of expression and privacy. Article 12 of the Cybercrime Law prohibits the use of Virtual Private Networks (VPNs) and other proxies, with at least six months imprisonment or a fine for violations. 

This will force people in Jordan to choose between engaging in free online expression or keeping their personal identity private. More specifically, this will negatively impact LGBTQ+ people and human rights defenders in Jordan who particularly rely on VPNs and anonymity to protect themselves online. The impact of Article 12 is exacerbated by the fact that there is no comprehensive data privacy legislation in Jordan to protect people’s rights during cyber attacks and data breaches.  

This is not the first time Jordan has limited access to information and content online. In December 2022, Jordanian authorities blocked TikTok to prevent the dissemination of live updates and information during the workers’ protests in the country's south, and authorities had previously blocked Clubhouse as well.

This crackdown on free speech has particularly impacted journalists, such as the recent arrest of Jordanian journalist Heba Abu Taha for criticizing Jordan’s King over his connections with Israel. Given that online platforms like TikTok and Twitter are essential for activists, organizers, journalists, and everyday people around the world to speak truth to power and fight for social justice, the restrictions placed on free speech by Jordan’s new Cybercrime Law will have a detrimental impact on political activism and community building across Jordan.

People across Jordan have protested the law, and the European Union has expressed concern about how it could limit freedom of expression online and offline. In August, EFF and 18 other civil society organizations wrote to the King of Jordan, calling for the rejection of the country’s draft cybercrime legislation. With the law now in effect, we urge Jordan to repeal the Cybercrime Law 2023.

Jordan’s Cybercrime Law has been said to be a “true copy” of the United Arab Emirates (UAE) Federal Decree Law No. 34 of 2021 on Combatting Rumors and Cybercrimes. This law replaced its predecessor, which had been used to stifle expression critical of the government or its policies—and was used to sentence human rights defender Ahmed Mansoor to 10 years in prison. 

The UAE’s new cybercrime law further restricts the already heavily-monitored online space and makes it harder for ordinary citizens, as well as journalists and activists, to share information online. More specifically, Article 22 mandates prison sentences of between three and 15 years for those who use the internet to share “information not authorized for publishing or circulating liable to harm state interests or damage its reputation, stature, or status.” 

In September 2022, Tunisia passed its new cybercrime law in Decree-Law No. 54 on “combating offenses relating to information and communication systems.” The wide-ranging decree has been used to stifle opposition free speech, and mandates a five-year prison sentence and a fine for the dissemination of “false news” or information that harms “public security.” In the year since Decree-Law 54 was enacted, authorities in Tunisia have prosecuted media outlets and individuals for their opposition to government policies or officials. 

The first criminal investigation under Decree-Law 54 saw the arrest of student Ahmed Hamada in October 2022 for operating a Facebook page that reported on clashes between law enforcement and residents of a neighborhood in Tunisia. 

Similar tactics are being used in Egypt, where the 2018 cybercrime law, Law No. 175/2018, contains broad and vague provisions to silence dissent, restrict privacy rights, and target LGBTQ+ individuals. More specifically, authorities have used Articles 25 and 26 to crack down on content that allegedly violates “family values.”

Since the law's enactment, these provisions have also been used to target LGBTQ+ individuals across Egypt, particularly for the publication or sending of pornography under Article 8, as well as illegal access to an information network under Article 3. For example, in March 2022 a court in Egypt charged singers Omar Kamal and Hamo Beeka with “violating family values” for dancing and singing in a video uploaded to YouTube. In another example, police have used cybercrime laws to prosecute LGBTQ+ individuals for using dating apps such as Grindr.

And in Saudi Arabia, national authorities have used cybercrime regulations and counterterrorism legislation to prosecute online activism and stifle dissenting opinions. Between 2011 and 2015, at least 39 individuals were jailed under the pretense of counterterrorism for expressing themselves online—for composing a tweet, liking a Facebook post, or writing a blog post. And while Saudi Arabia has no specific law concerning gender identity and sexual orientation, authorities have used the 2007 Anti-Cyber Crime Law to criminalize online content and activity that is considered to impinge on “public order, religious values, public morals, and privacy.” 

These provisions have been used to prosecute individuals for peaceful actions, particularly since the Arab Spring in 2011. More recently, in August 2022, Salma al-Shehab was sentenced to 34 years in prison with a subsequent 34-year travel ban for her alleged “crime” of sharing content in support of prisoners of conscience and women human rights defenders.

These cybercrime laws demonstrate that if the proposed UN Cybercrime Convention is ratified in its current form, its broad scope would authorize domestic surveillance for the investigation of any offenses, such as those in Articles 12, 13, and 14 of Jordan's law. Additionally, the convention could authorize international cooperation for investigation of crimes penalized with three or four years of imprisonment, as seen in countries such as the UAE, Tunisia, Egypt, and Saudi Arabia.

As Canada warned (at minute 01:56) at the recent negotiation session, these expansive provisions in the Convention permit states to unilaterally define and broaden the scope of criminal conduct, potentially paving the way for abuse and transnational repression. While the Convention may incorporate some procedural safeguards, its far-reaching scope raises profound questions about its compatibility with the key tenets of human rights law and the principles enshrined in the UN Charter.

The root problem lies not in the severity of penalties, but in the fact that some countries criminalize behaviors and expression that are protected under international human rights law and the UN Charter. This is alarming, given that numerous laws affecting the LGBTQ+ community carry penalties within these ranges, making the potential for misuse of such cooperation profound.

In a nutshell, the proposed UN treaty amplifies the existing threats to the LGBTQ+ community. It endorses a framework where nations can surveil benign activities such as sharing LGBTQ+ content, potentially intensifying the already-precarious situation for this community in many regions.

Online, the lack of legal protection of subscriber data threatens the anonymity of the community, making them vulnerable to identification and subsequent persecution. The mere act of engaging in virtual communities, sharing personal anecdotes, or openly expressing relationships could lead to their identities being disclosed, putting them at significant risk.

Offline, the implications intensify with amplified hesitancy to participate in public events, showcase LGBTQ+ symbols, or even undertake daily routines that risk revealing their identity. The draft convention's potential to bolster digital surveillance capabilities means that even private communications, like discussions about same-sex relationships or plans for LGBTQ+ gatherings, could be intercepted and turned against them. 

To all member states: This is a pivotal moment. This is our opportunity to ensure the digital future is one where rights are championed, not compromised. Pledge to protect the rights of all, especially those communities like the LGBTQ+ that are most vulnerable. The international community must unite in its commitment to ensure that the proposed convention serves as an instrument of protection, not persecution.


