
Ola Bini Faces Ecuadorian Prosecutors Seeking to Overturn Acquittal of Cybercrime Charge

By Karen Gullo
April 1, 2024 at 12:21

Ola Bini, the software developer acquitted last year of cybercrime charges in a unanimous verdict in Ecuador, was back in court last week in Quito as prosecutors, using the same evidence that helped clear him, asked an appeals court to overturn the decision with bogus allegations of unauthorized access of a telecommunications system.

Armed with a grainy image of a telnet session—which the lower court already ruled was not proof of criminal activity—and the testimony of an expert witness to the lower court—who never had access to the devices and systems involved in the alleged intrusion—prosecutors presented the theory that, by connecting to a router, Bini gained partial unauthorized access in an attempt to break into a system provided by Ecuador’s national telecommunications company (CNT) to the presidency’s contingency center.

If this all sounds familiar, that’s because it is. In an unfounded criminal case plagued by irregularities, delays, and due process violations, Ecuadorian prosecutors have for the last five years sought to prove Bini violated the law by allegedly accessing an information system without authorization.

Bini, who resides in Ecuador, was arrested at the Quito airport in 2019 without being told why. He first learned about the charges from a TV news report depicting him as a criminal trying to destabilize the country. He spent 70 days in jail and cannot leave Ecuador or use his bank accounts.

Bini prevailed in a trial last year before a three-judge panel. The core evidence the Prosecutor’s Office and CNT’s lawyer presented to support the accusation of unauthorized access to a computer, telematic, or telecommunications system was a printed image of a telnet session allegedly taken from Bini’s mobile phone.

The image shows the user requesting a telnet connection to an open server using their computer’s command line. The open server warns that unauthorized access is prohibited and asks for a username. No username is entered. The connection then times out and closes. Rather than demonstrating that Bini intruded into the Ecuadorian telephone network system, it shows the trail of someone who paid a visit to a publicly accessible server—and then politely obeyed the server's warnings about usage and access.
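The sequence described above (connect, receive a warning banner, type nothing, let the session time out) can be reproduced with a short sketch. This is a hypothetical illustration using Python's `socket` module against a throwaway local server, not an analysis of the actual CNT system; the banner text and timeout are made up for the demo.

```python
import socket
import threading

BANNER = b"Unauthorized access is prohibited.\r\nUsername: "

def run_server(sock):
    # Accept one connection, send a warning banner, then wait briefly
    # for credentials; if none arrive, hang up. This mimics a telnet
    # service that times out an idle, unauthenticated session.
    conn, _ = sock.accept()
    conn.sendall(BANNER)
    conn.settimeout(1.0)  # short timeout for the demo
    try:
        data = conn.recv(1024)  # would receive a username, if one were sent
    except socket.timeout:
        data = b""  # no credentials entered: server closes the session
    conn.close()
    return data

# Stand-in for a publicly reachable server.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # ephemeral port on localhost
server.listen(1)
port = server.getsockname()[1]

received = []
t = threading.Thread(target=lambda: received.append(run_server(server)))
t.start()

# The "visitor": opens a TCP connection (as telnet would), reads the
# warning banner, enters nothing, and waits for the server to hang up.
client = socket.create_connection(("127.0.0.1", port))
banner = client.recv(1024)
print(banner.decode())          # the warning is all the visitor ever sees
remainder = client.recv(1024)   # blocks until the server times out and closes
client.close()
t.join()
server.close()

print("credentials sent:", received[0] or "none")
print("connection closed by server:", remainder == b"")
```

Run end to end, the transcript shows only a banner, no username, and a closed connection—the same trail the printed image shows.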

Bini’s acquittal was a major victory for him and the work of security researchers. By assessing the evidence presented, the court concluded that both the Prosecutor’s Office and CNT failed to demonstrate a crime had occurred. There was no evidence that unauthorized access had ever happened, nor anything to sustain the malicious intent that article 234 of Ecuador’s Penal Code requires to characterize the offense of unauthorized access.

The court emphasized the necessity of proper evidence to prove that an alleged computer crime occurred and found that the image of a telnet session presented in Bini’s case is not fit for this purpose. The court explained that graphical representations, which can be altered, do not constitute evidence of cybercrime since an image cannot verify whether the commands illustrated in it were actually executed. Building on technical experts' testimonies, the court said that what does not emerge, or what can't be verified from digital forensics, is not proper digital evidence.

Prosecutors appealed the verdict and are back in court using the same image that didn’t prove any crime was committed. At the March 26 hearing, prosecutors said their expert witness’s analysis of the telnet image shows there was connectivity to the router. The witness compared it to entering the yard of someone’s property to see if the gate to the property is open or closed. Entering the yard is analogous to connecting to the router, the witness said.

Actually, no. Our interpretation of the image, which was leaked to the media before Bini’s trial, is that it’s the internet equivalent of seeing an open gate, walking up to it, seeing a “NO TRESPASSING” sign, and walking away. If this image proves anything, it is that no unauthorized access happened.

Yet no expert analysis was conducted on the systems allegedly affected. The expert witness’s testimony was based on his analysis of a CNT report—he didn’t have access to the CNT router to verify its configuration. He didn’t digitally validate whether what was shown in the report actually happened, and he was never asked to verify the existence of an IP address owned or managed by CNT.

That’s not the only problem with the appeal proceedings. Deciding the appeal is a panel of three judges, two of whom ruled to keep Bini in detention after his arrest in 2019 because there were allegedly sufficient elements to establish a suspicion against him. The detention was later considered illegal and arbitrary because of a lack of such elements. Bini filed a lawsuit against the Ecuadorian state, including the two judges, for violating his rights. Bini’s defense team has sought to remove these two judges from the appeals case, but his requests were denied.

The appeals court panel is expected to issue a final ruling in the coming days.  

Four Voices You Should Hear this International Women’s Day

Around the globe, freedom of expression varies wildly in definition, scope, and level of access. The impact of the digital age on perceptions and censorship of speech has been felt across the political spectrum on a worldwide scale. In the debate over what counts as free expression and how it should work in practice, we often lose sight of how different forms of censorship can have a negative impact on different communities, and especially marginalized or vulnerable ones. This International Women’s Day, spend some time with four stories of hope and inspiration that teach us how to reflect on the past to build a better future.

1. Podcast Episode: Safer Sex Work Makes a Safer Internet

An internet that is safe for sex workers is an internet that is safer for everyone. Though the effects of stigmatization and criminalization run deep, the sex worker community exemplifies how technology can help people reduce harm, share support, and offer experienced analysis to protect each other. Public interest technology lawyer Kendra Albert and sex worker, activist, and researcher Danielle Blunt have been fighting for sex workers’ online rights for years and say that holding online platforms legally responsible for user speech can lead to censorship that hurts us all. They join EFF’s Cindy Cohn and Jason Kelley in this podcast to talk about protecting all of our free speech rights.

2. Speaking Freely: Sandra Ordoñez

Sandra (Sandy) Ordoñez is dedicated to protecting women being harassed online. Sandra is an experienced community engagement specialist, a proud NYC Latina resident of Sunset Park in Brooklyn, and a recipient of Fundación Carolina’s Hispanic Leadership Award. She is also a long-time diversity and inclusion advocate, with extensive experience incubating and creating FLOSS and Internet Freedom community tools. In this interview with EFF’s Jillian C. York, Sandra discusses free speech and how communities that are often the most directly affected are the last consulted.

3. Story: Coded Resistance, the Comic!

From the days of chattel slavery until the modern Black Lives Matter movement, Black communities have developed innovative ways to fight back against oppression. EFF's Director of Engineering, Alexis Hancock, documented this important history of codes, ciphers, underground telecommunications and dance in a blog post that became one of our favorite articles of 2021. In collaboration with The Nib and illustrator Chelsea Saunders, "Coded Resistance" was adapted into comic form to further explore these stories, from the coded songs of Harriet Tubman to Darnella Frazier recording the murder of George Floyd.

4. Speaking Freely: Evan Greer

Evan Greer is many things: a musician, an activist for LGBTQ issues, the Deputy Director of Fight for the Future, and a true believer in the free and open internet. In this interview, EFF’s Jillian C. York spoke with Evan about the state of free expression, and what we should be doing to protect the internet for future activism. Among the many topics discussed was how policies that promote censorship—no matter how well-intentioned—have historically benefited the powerful and harmed vulnerable or marginalized communities. Evan talks about what we as free expression activists should do to get at that tension and find solutions that work for everyone in society.

This blog is part of our International Women’s Day series. Read other articles about the fight for gender justice and equitable digital rights for all.

  1. Four Reasons to Protect the Internet this International Women’s Day
  2. Four Infosec Tools for Resistance this International Women’s Day
  3. Four Actions You Can Take To Protect Digital Rights this International Women’s Day

Four Infosec Tools for Resistance this International Women’s Day 

While online violence is alarmingly common globally, women are often more likely to be the target of mass online attacks, nonconsensual leaks of sensitive information and content, and other forms of online violence. 

This International Women’s Day, visit EFF’s Surveillance Self-Defense (SSD) to learn how to defend yourself and your friends from surveillance. In addition to tutorials for installing and using security-friendly software, SSD walks you through concepts like making a security plan, the importance of strong passwords, and protecting metadata.

1. Make Your Own Security Plan

This IWD, learn what a security plan looks like and how you can build one. Trying to protect your online data—like pictures, private messages, or documents—from everything all the time is impractical and exhausting. But, have no fear! Security is a process, and through thoughtful planning, you can put together a plan that’s best for you. Security isn’t just about the tools you use or the software you download. It begins with understanding the unique threats you face and how you can counter those threats. 

2. Protect Yourself on Social Networks

Depending on your circumstances, you may need to protect yourself against the social network itself, against other users of the site, or both. Social networks are among the most popular websites on the internet. Facebook, TikTok, and Instagram each have over a billion users. Social networks were generally built on the idea of sharing posts, photographs, and personal information. They have also become forums for organizing and speaking. Any of these activities can rely on privacy and pseudonymity. Visit our SSD guide to learn how to protect yourself.

3. Tips for Attending Protests

Keep yourself, your devices, and your community safe while you make your voice heard. Now, more than ever, people must be able to hold those in power accountable and inspire others through the act of protest. Protecting your electronic devices and digital assets before, during, and after a protest is vital to keeping yourself and your information safe, as well as getting your message out. Theft, damage, confiscation, or forced deletion of media can disrupt your ability to publish your experiences, and those engaging in protest may be subject to search or arrest, or have their movements and associations surveilled. 

4. Communicate Securely with Signal or WhatsApp

Everything you say in a chat app should be private, viewable only by you and the person you're talking with. But that's not how all chats or DMs work. Most of those communication tools aren't end-to-end encrypted, which means that the company that runs the software could view your chats, or hand over transcripts to law enforcement. That's why it's best to use a chat app like Signal any time you can. Signal uses end-to-end encryption, which means that nobody, not even Signal, can see the contents of your chats. Of course, you can't necessarily force everyone you know to use the communication tool of your choice, but thankfully other popular tools, like Apple's Messages, WhatsApp and, more recently, Facebook's Messenger, all use end-to-end encryption too, as long as you're communicating with others on those same platforms. The more people who use these tools, even for innocuous conversations, the better.
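The guarantee described above—that the relaying server never holds a key that can decrypt your messages—can be illustrated with a toy Diffie-Hellman key agreement in Python's standard library. This is a deliberately insecure sketch for intuition only: Signal's real protocol uses X3DH key agreement and the Double Ratchet over elliptic curves, and the XOR "cipher" below is not a real cipher.

```python
import hashlib
import secrets

# Toy parameters: a Mersenne prime and a small generator. Real protocols
# (Signal uses X25519) rely on vetted elliptic-curve groups, not these.
p = 2**127 - 1
g = 5

# Each endpoint picks a private key and publishes g^key mod p.
alice_priv = secrets.randbelow(p - 2) + 1
bob_priv = secrets.randbelow(p - 2) + 1
alice_pub = pow(g, alice_priv, p)
bob_pub = pow(g, bob_priv, p)

# The relay (the company's server) sees only the public values in transit.
relay_sees = (alice_pub, bob_pub)

# Both endpoints derive the same secret; the relay cannot, because it
# lacks either private key (recovering one is the discrete-log problem).
alice_secret = pow(bob_pub, alice_priv, p)
bob_secret = pow(alice_pub, bob_priv, p)
assert alice_secret == bob_secret

key = hashlib.sha256(str(alice_secret).encode()).digest()

def xor_cipher(key: bytes, msg: bytes) -> bytes:
    # Toy stream cipher: XOR the message with a SHA-256-derived keystream.
    # Applying it twice with the same key recovers the plaintext.
    stream = b""
    counter = 0
    while len(stream) < len(msg):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(m ^ s for m, s in zip(msg, stream))

ciphertext = xor_cipher(key, b"meet at the protest at noon")
# The relay stores and forwards only ciphertext, unreadable without `key`.
plaintext = xor_cipher(key, ciphertext)
print(plaintext)
```

The point of the sketch is the trust boundary: the key material lives only at the two endpoints, so the party in the middle can deliver messages it cannot read.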

On International Women’s Day and every day, stay safe out there! Surveillance self-defense can help.

This blog is part of our International Women’s Day series. Read other articles about the fight for gender justice and equitable digital rights for all.

  1. Four Reasons to Protect the Internet this International Women’s Day
  2. Four Voices You Should Hear this International Women’s Day
  3. Four Actions You Can Take To Protect Digital Rights this International Women’s Day

Protect Good Faith Security Research Globally in Proposed UN Cybercrime Treaty

By Karen Gullo
February 7, 2024 at 10:57

Statement submitted to the UN Ad Hoc Committee Secretariat by the Electronic Frontier Foundation, accredited under operative paragraph No. 9 of UN General Assembly Resolution 75/282, on behalf of 124 signatories.

We, the undersigned, representing a broad spectrum of the global security research community, write to express our serious concerns about the UN Cybercrime Treaty drafts released during the sixth session and the most recent one. These drafts pose substantial risks to global cybersecurity and significantly impact the rights and activities of good faith cybersecurity researchers.

Our community, which includes good faith security researchers in academia and cybersecurity companies, as well as those working independently, plays a critical role in safeguarding information technology systems. We identify vulnerabilities that, if left unchecked, can spread malware, cause data breaches, and give criminals access to sensitive information of millions of people. We rely on the freedom to openly discuss, analyze, and test these systems, free of legal threats.

The nature of our work is to research, discover, and report vulnerabilities in networks, operating systems, devices, firmware, and software. However, several provisions in the draft treaty risk hindering our work by categorizing much of it as criminal activity. If adopted in its current form, the proposed treaty would increase the risk that good faith security researchers could face prosecution, even when our goal is to enhance technological safety and educate the public on cybersecurity matters. It is critical that legal frameworks support our efforts to find and disclose technological weaknesses to make everyone more secure, rather than penalize us, and chill the very research and disclosure needed to keep us safe. This support is essential to improving the security and safety of technology for everyone across the world.

Equally important is our ability to differentiate our legitimate security research activities from malicious exploitation of security flaws. Current laws focusing on “unauthorized access” can be misapplied to good faith security researchers, leading to unnecessary legal challenges. In addressing this, we must consider two potential obstacles to our vital work. Broad, undefined rules for prior authorization risk deterring good faith security researchers, as they may not understand when or under what circumstances they need permission. This lack of clarity could ultimately weaken everyone's online safety and security. Moreover, our work often involves uncovering unknown vulnerabilities. These are security weaknesses that no one, including the system's owners, knows about until we discover them. We cannot be certain what vulnerabilities we might find. Therefore, requiring us to obtain prior authorization for each potential discovery is impractical and overlooks the essence of our work.

The unique strength of the security research community lies in its global focus, which prioritizes safeguarding infrastructure and protecting users worldwide, often putting aside geopolitical interests. Our work, particularly the open publication of research, minimizes and prevents harm that could impact people globally, transcending particular jurisdictions. The proposed treaty’s failure to exempt good faith security research from the expansive scope of its cybercrime prohibitions, and to make the safeguards and limitations in Articles 6-10 mandatory, leaves the door wide open for states to suppress or control the flow of security-related information. This would undermine the universal benefit of openly shared cybersecurity knowledge, and ultimately the safety and security of the digital environment.

We urge states to recognize the vital role the security research community plays in defending our digital ecosystem against cybercriminals, and call on delegations to ensure that the treaty supports, rather than hinders, our efforts to enhance global cybersecurity and prevent cybercrime. Specifically:

Article 6 (Illegal Access): This article risks criminalizing essential activities in security research, particularly where researchers access systems without prior authorization to identify vulnerabilities. A clearer distinction is needed between malicious unauthorized access “without right” and “good faith” security research activities; safeguards for legitimate activities should be mandatory. A malicious intent requirement (including an intent to cause damage, defraud, or harm) is needed to avoid criminal liability for accidental or unintended access to a computer system, as well as for good faith security testing.

Article 6 should not use the ambiguous term “without right” as a basis for establishing criminal liability for unauthorized access. Apart from potentially criminalizing security research, similar provisions have also been misconstrued to attach criminal liability to minor violations committed deliberately or accidentally by authorized users. For example, violation of private terms of service (TOS), a minor infraction ordinarily considered a civil issue, could be elevated into a criminal offense category via this treaty on a global scale.

Additionally, the treaty currently gives states the option to define unauthorized access in national law as the bypassing of security measures. This should not be optional, but rather a mandatory safeguard, to avoid criminalizing routine behavior such as changing one’s IP address, inspecting website code, and accessing unpublished URLs. Furthermore, it is crucial to specify that the bypassed security measures must be actually "effective." This distinction is important because it ensures that criminalization is precise and scoped to activities that cause harm. For instance, bypassing basic measures like geoblocking, which can be done innocently simply by changing location, should not be treated the same as overcoming robust security barriers with the intention to cause harm.

By adopting this safeguard and ensuring that security measures are indeed effective, the proposed treaty would shield researchers from arbitrary criminal sanctions for good faith security research.

These changes would clarify unauthorized access, more clearly differentiating malicious hacking from legitimate cybersecurity practices like security research and vulnerability testing. Adopting these amendments would enhance protection for cybersecurity efforts and more effectively address concerns about harmful or fraudulent unauthorized intrusions.

Article 7 (Illegal Interception): Analysis of network traffic is also a common practice in cybersecurity; this article currently risks criminalizing such analysis and should similarly be narrowed to require criminal intent (mens rea) to harm or defraud.

Article 8 (Interference with Data) and Article 9 (Interference with Computer Systems): These articles may inadvertently criminalize acts of security research, which often involve testing the robustness of systems by simulating attacks through interferences. As with prior articles, criminal intent to cause harm or defraud is not mandated, and a requirement that the activity cause serious harm is absent from Article 9 and optional in Article 8. These safeguards should be mandatory.

Article 10 (Misuse of Devices): The broad scope of this article could criminalize the legitimate use of tools employed in cybersecurity research, thereby affecting the development and use of these tools. Under the current draft, Article 10(2) specifically addresses the misuse of cybersecurity tools. It criminalizes obtaining, producing, or distributing these tools only if they are intended for committing cybercrimes as defined in Articles 6 to 9 (which cover illegal access, interception, data interference, and system interference). However, this also raises a concern. If Articles 6 to 9 do not explicitly protect activities like security testing, Article 10(2) may inadvertently criminalize security researchers. These researchers often use similar tools for legitimate purposes, like testing and enhancing systems security. Without narrow scope and clear safeguards in Articles 6-9, these well-intentioned activities could fall under legal scrutiny, despite not being aligned with the criminal malicious intent (mens rea) targeted by Article 10(2).

Article 22 (Jurisdiction): In combination with other provisions about measures that may be inappropriately used to punish or deter good-faith security researchers, the overly broad jurisdictional scope outlined in Article 22 also raises significant concerns. Under the article's provisions, security researchers discovering or disclosing vulnerabilities to keep the digital ecosystem secure could be subject to criminal prosecution simultaneously across multiple jurisdictions. This would have a chilling effect on essential security research globally and hinder researchers' ability to contribute to global cybersecurity. To mitigate this, we suggest revising Article 22(5) to prioritize “determining the most appropriate jurisdiction for prosecution” rather than “coordinating actions.” This shift could prevent the redundant prosecution of security researchers. Additionally, deleting Article 17 and limiting the scope of procedural and international cooperation measures to crimes defined in Articles 6 to 16 would further clarify and protect against overreach.

Article 28(4): This article is gravely concerning from a cybersecurity perspective. It empowers authorities to compel “any individual” with knowledge of computer systems to provide any “necessary information” for conducting searches and seizures of computer systems. This provision can be abused to force security experts, software engineers and/or tech employees to expose sensitive or proprietary information. It could also encourage authorities to bypass normal channels within companies and coerce individual employees, under the threat of criminal prosecution, to provide assistance in subverting technical access controls such as credentials, encryption, and just-in-time approvals without their employers’ knowledge. This dangerous paragraph must be removed in favor of the general duty for custodians of information to comply with lawful orders to the extent of their ability.

Security researchers, whether within organizations or independent, discover, report, and assist in fixing tens of thousands of the critical Common Vulnerabilities and Exposures (CVEs) reported over the lifetime of the National Vulnerability Database. Our work is a crucial part of the security landscape, yet it often faces serious legal risk from overbroad cybercrime legislation.

While the proposed UN Cybercrime Treaty's core cybercrime provisions closely mirror the Council of Europe’s Budapest Convention, the impact of cybercrime regimes on security research has evolved considerably in the two decades since that treaty was adopted in 2001. In that time, good faith cybersecurity researchers have faced significant repercussions for responsibly identifying security flaws. Concurrently, a number of countries have enacted legislative or other measures to protect the critical line of defense this type of research provides. The UN Treaty should learn from these past experiences by explicitly exempting good faith cybersecurity research from the scope of the treaty. It should also make existing safeguards and limitations mandatory. This change is essential to protect the crucial work of good faith security researchers and ensure the treaty remains effective against current and future cybersecurity challenges.

Since these negotiations began, we had hoped that governments would adopt a treaty that strengthens global computer security and enhances our ability to combat cybercrime. Unfortunately, the draft text, as written, would have the opposite effect. The current text would weaken cybersecurity and make it easier for malicious actors to create or exploit weaknesses in the digital ecosystem by subjecting us to criminal prosecution for good faith work that keeps us all safer. Such an outcome would undermine the very purpose of the treaty: to protect individuals and our institutions from cybercrime.

To be submitted by the Electronic Frontier Foundation, accredited under operative paragraph No. 9 of UN General Assembly Resolution 75/282 on behalf of 124 signatories.

Individual Signatories
Jobert Abma, Co-Founder, HackerOne (United States)
Martin Albrecht, Chair of Cryptography, King's College London (Global)
Nicholas Allegra (United States)
Ross Anderson, Universities of Edinburgh and Cambridge (United Kingdom)
Diego F. Aranha, Associate Professor, Aarhus University (Denmark)
Kevin Beaumont, Security researcher (Global)
Steven Becker (Global)
Janik Besendorf, Security Researcher (Global)
Wietse Boonstra (Global)
Juan Brodersen, Cybersecurity Reporter, Clarin (Argentina)
Sven Bugiel, Faculty, CISPA Helmholtz Center for Information Security (Germany)
Jon Callas, Founder and Distinguished Engineer, Zatik Security (Global)
Lorenzo Cavallaro, Professor of Computer Science, University College London (Global)
Joel Cardella, Cybersecurity Researcher (Global)
Inti De Ceukelaire (Belgium)
Enrique Chaparro, Information Security Researcher (Global)
David Choffnes, Associate Professor and Executive Director of the Cybersecurity and Privacy Institute at Northeastern University (United States/Global)
Gabriella Coleman, Full Professor Harvard University (United States/Europe)
Cas Cremers, Professor and Faculty, CISPA Helmholtz Center for Information Security (Global)
Daniel Cuthbert (Europe, Middle East, Africa)
Ron Deibert, Professor and Director, the Citizen Lab at the University of Toronto's Munk School (Canada)
Domingo, Security Incident Handler, Access Now (Global)
Stephane Duguin, CEO, CyberPeace Institute (Global)
Zakir Durumeric, Assistant Professor of Computer Science, Stanford University; Chief Scientist, Censys (United States)
James Eaton-Lee, CISO, NetHope (Global)
Serge Egelman, University of California, Berkeley; Co-Founder and Chief Scientist, AppCensus (United States/Global)
Jen Ellis, Founder, NextJenSecurity (United Kingdom/Global)
Chris Evans, Chief Hacking Officer @ HackerOne; Founder @ Google Project Zero (United States)
Dra. Johanna Caterina Faliero, PhD; Professor, Faculty of Law, University of Buenos Aires; Professor, University of National Defence (Argentina/Global)
Dr. Ali Farooq, University of Strathclyde, United Kingdom (Global)
Victor Gevers, co-founder of the Dutch Institute for Vulnerability Disclosure (Netherlands)
Abir Ghattas (Global)
Ian Goldberg, Professor and Canada Research Chair in Privacy Enhancing Technologies, University of Waterloo (Canada)
Matthew D. Green, Associate Professor, Johns Hopkins University (United States)
Harry Grobbelaar, Chief Customer Officer, Intigriti (Global)
Juan Andrés Guerrero-Saade, Associate Vice President of Research, SentinelOne (United States/Global)
Mudit Gupta, Chief Information Security Officer, Polygon (Global)
Hamed Haddadi, Professor of Human-Centred Systems at Imperial College London; Chief Scientist at Brave Software (Global)
J. Alex Halderman, Professor of Computer Science & Engineering and Director of the Center for Computer Security & Society, University of Michigan (United States)
Joseph Lorenzo Hall, PhD, Distinguished Technologist, The Internet Society
Dr. Ryan Henry, Assistant Professor and Director of Masters of Information Security and Privacy Program, University of Calgary (Canada)
Thorsten Holz, Professor and Faculty, CISPA Helmholtz Center for Information Security, Germany (Global)
Joran Honig, Security Researcher (Global)
Wouter Honselaar, MSc student security; hosting engineer & volunteer, Dutch Institute for Vulnerability Disclosure (DIVD)(Netherlands)
Prof. Dr. Jaap-Henk Hoepman (Europe)
Christian “fukami” Horchert (Germany / Global)
Andrew 'bunnie' Huang, Researcher (Global)
Dr. Rodrigo Iglesias, Information Security, Lawyer (Argentina)
Hudson Jameson, Co-Founder - Security Alliance (SEAL)(Global)
Stijn Jans, CEO of Intigriti (Global)
Gerard Janssen, Dutch Institute for Vulnerability Disclosure (DIVD)(Netherlands)
JoyCfTw, Hacktivist (United States/Argentina/Global)
Doña Keating, President and CEO, Professional Options LLC (Global)

Olaf Kolkman, Principal, Internet Society (Global)
Federico Kirschbaum, Co-Founder & CEO of Faraday Security, Co-Founder of Ekoparty Security Conference (Argentina/Global)
Xavier Knol, Cybersecurity Analyst and Researcher (Global)
Micah Lee, Director of Information Security, The Intercept (United States)
Jan Los (Europe/Global)
Matthias Marx, Hacker (Global)
Keane Matthews, CISSP (United States)
René Mayrhofer, Full Professor and Head of Institute of Networks and Security, Johannes Kepler University Linz, Austria (Austria/Global)
Ron Mélotte (Netherlands)
Hans Meuris (Global)
Marten Mickos, CEO, HackerOne (United States)
Adam Molnar, Assistant Professor, Sociology and Legal Studies, University of Waterloo (Canada/Global)
Jeff Moss, Founder of the information security conferences DEF CON and Black Hat (United States)
Katie Moussouris, Founder and CEO of Luta Security; coauthor of ISO standards on vulnerability disclosure and handling processes (Global)
Alec Muffett, Security Researcher (United Kingdom)
Kurt Opsahl, Associate General Counsel for Cybersecurity and Civil Liberties Policy, Filecoin Foundation; President, Security Researcher Legal Defense Fund (Global)
Ivan "HacKan" Barrera Oro (Argentina)
Chris Palmer, Security Engineer (Global)
Yanna Papadodimitraki, University of Cambridge (United Kingdom/European Union/Global)
Sunoo Park, New York University (United States)
Mathias Payer, Associate Professor, École Polytechnique Fédérale de Lausanne (EPFL)(Global)
Giancarlo Pellegrino, Faculty, CISPA Helmholtz Center for Information Security, Germany (Global)
Fabio Pierazzi, King’s College London (Global)
Bart Preneel, full professor, University of Leuven, Belgium (Global)
Michiel Prins, Founder @ HackerOne (United States)
Joel Reardon, Professor of Computer Science, University of Calgary, Canada; Co-Founder of AppCensus (Global)
Alex Rice, Co-Founder & CTO, HackerOne (United States)
René Rehme, rehme.infosec (Germany)
Tyler Robinson, Offensive Security Researcher (United States)
Michael Roland, Security Researcher and Lecturer, Institute of Networks and Security, Johannes Kepler University Linz; Member, SIGFLAG - Verein zur (Austria/Europe/Global)
Christian Rossow, Professor and Faculty, CISPA Helmholtz Center for Information Security, Germany (Global)
Pilar Sáenz, Coordinator Digital Security and Privacy Lab, Fundación Karisma (Colombia)
Runa Sandvik, Founder, Granitt (United States/Global)
Koen Schagen (Netherlands)
Sebastian Schinzel, Professor at University of Applied Sciences Münster and Fraunhofer SIT (Germany)
Bruce Schneier, Fellow and Lecturer, Harvard Kennedy School (United States)
HFJ Schokkenbroek (hp197), IFCAT board member (Netherlands)
Javier Smaldone, Security Researcher (Argentina)
Guillermo Suarez-Tangil, Assistant Professor, IMDEA Networks Institute (Global)
Juan Tapiador, Universidad Carlos III de Madrid, Spain (Global)
Dr Daniel R. Thomas, University of Strathclyde, StrathCyber, Computer & Information Sciences (United Kingdom)
Cris Thomas (Space Rogue), IBM X-Force (United States/Global)
Carmela Troncoso, Assistant Professor, École Polytechnique Fédérale de Lausanne (EPFL) (Global)
Narseo Vallina-Rodriguez, Research Professor at IMDEA Networks/Co-founder AppCensus Inc (Global)
Jeroen van der Broek, IT Security Engineer (Netherlands)
Jeroen van der Ham-de Vos, Associate Professor, University of Twente, The Netherlands (Global)
Charl van der Walt, Head of Security Research, Orange Cyberdefense (a division of Orange Networks) (South Africa/France/Global)
Chris van 't Hof, Managing Director DIVD, Dutch Institute for Vulnerability Disclosure (Global)
Dimitri Verhoeven (Global)
Tarah Wheeler, CEO Red Queen Dynamics & Senior Fellow Global Cyber Policy, Council on Foreign Relations (United States)
Dominic White, Ethical Hacking Director, Orange Cyberdefense (a division of Orange Networks)(South Africa/Europe)
Eddy Willems, Security Evangelist (Global)
Christo Wilson, Associate Professor, Northeastern University (United States)
Robin Wilton, IT Consultant (Global)
Tom Wolters (Netherlands)
Mehdi Zerouali, Co-founder & Director, Sigma Prime (Australia/Global)

Organizational Signatories
Dutch Institute for Vulnerability Disclosure (DIVD)(Netherlands)
Fundación Vía Libre (Argentina)
Good Faith Cybersecurity Researchers Coalition (European Union)
Access Now (Global)
Chaos Computer Club (CCC)(Europe)
HackerOne (Global)
Hacking Policy Council (United States)
HINAC (Hacking is not a Crime)(United States/Argentina/Global)
Intigriti (Global)
Jolo Secure (Latin America)
K+LAB, Digital security and privacy Lab, Fundación Karisma (Colombia)
Luta Security (Global)
OpenZeppelin (United States)
Professional Options LLC (Global)
Stichting International Festivals for Creative Application of Technology Foundation

Draft UN Cybercrime Treaty Could Make Security Research a Crime, Leading 124 Experts to Call on UN Delegates to Fix Flawed Provisions that Weaken Everyone’s Security

Par : Karen Gullo
7 février 2024 à 10:56

Security researchers’ work discovering and reporting vulnerabilities in software, firmware, networks, and devices protects people, businesses, and governments around the world from malware, theft of critical data, and other cyberattacks. The internet and the digital ecosystem are safer because of their work.

The UN Cybercrime Treaty, which is in the final stages of drafting in New York this week, risks criminalizing this vitally important work. This is appalling and wrong, and must be fixed.

One hundred and twenty-four prominent security researchers and cybersecurity organizations from around the world voiced their concern today about the draft and called on UN delegates to modify flawed language in the text that would hinder researchers’ efforts to enhance global security and prevent the actual criminal activity the treaty is meant to rein in.

Time is running out—the final negotiations over the treaty end Feb. 9. The talks are the culmination of two years of negotiations; EFF and its international partners have raised concerns over the treaty’s flaws since the beginning. If approved as is, the treaty will substantially impact criminal laws around the world and grant expansive new police powers for both domestic and international criminal investigations.

Experts who work globally to find and fix vulnerabilities before real criminals can exploit them said in a statement today that vague language and overbroad provisions in the draft increase the risk that researchers could face prosecution. The draft fails to protect the good faith work of security researchers who may bypass security measures and gain access to computer systems in identifying vulnerabilities, the letter says.

The draft threatens security researchers because it doesn’t specify that access to computer systems with no malicious intent to cause harm, steal, or infect with malware should not be subject to prosecution. If left unchanged, the treaty would be a major blow to cybersecurity around the world.

Specifically, security researchers seek changes to Article 6, which risks criminalizing essential activities, including accessing systems without prior authorization to identify vulnerabilities. The current text also includes the ambiguous term “without right” as a basis for establishing criminal liability for unauthorized access. Clarification of this vague language, as well as a requirement that unauthorized access be done with malicious intent, is needed to protect security research.

The signers also called out Article 28(4), which empowers States to force “any individual” with knowledge of computer systems to turn over any information necessary to conduct searches and seizures of computer systems. This dangerous paragraph must be removed and replaced with language specifying that custodians must only comply with lawful orders to the extent of their ability.

There are many other problems with the draft treaty—it lacks human rights safeguards, gives States’ powers to reach across borders to surveil and collect personal information of people in other States, and forces tech companies to collude with law enforcement in alleged cybercrime investigations.

EFF and its international partners have been and are pressing hard for human rights safeguards and other fixes to ensure that the fight against cybercrime does not require sacrificing fundamental rights. We stand with security researchers in demanding amendments to ensure the treaty is not used as a tool to threaten, intimidate, or prosecute them, software engineers, security teams, and developers.

For the statement:
https://www.eff.org/deeplinks/2024/02/protect-good-faith-security-research-globally-proposed-un-cybercrime-treaty

For more on the treaty:
https://ahc.derechosdigitales.org/en/

Worried About AI Voice Clone Scams? Create a Family Password

31 janvier 2024 à 19:42

Your grandfather receives a call late at night from a person pretending to be you. The caller says that you are in jail or have been kidnapped and that they need money urgently to get you out of trouble. Perhaps they then bring on a fake police officer or kidnapper to heighten the tension. The money, of course, should be wired right away to an unfamiliar account at an unfamiliar bank. 

It’s a classic and common scam, and like many scams it relies on a scary, urgent scenario to override the victim’s common sense and make them more likely to send money. Now, scammers are reportedly experimenting with a way to further heighten that panic by playing a simulated recording of “your” voice. Fortunately, there’s an easy and old-school trick you can use to preempt the scammers: creating a shared verbal password with your family.

The ability to create audio deepfakes of people’s voices using machine learning and just minutes of them speaking has become relatively cheap and easy to acquire. There are myriad websites that will let you make voice clones. Some offer a variety of celebrity voices that will say anything you want, while others let you upload a recording of a new person’s voice to create a clone of anyone. Scammers have figured out that they can use this to clone the voices of regular people. Suddenly your relative isn’t talking to someone who sounds like a complete stranger; they are hearing your own voice. This makes the scam much more concerning.

Voice generation scams aren’t widespread yet, but they do seem to be happening. There have been news stories and even congressional testimony from people who have been targets of voice impersonation scams, and voice cloning is also being used in political disinformation campaigns. It’s impossible for us to know what kind of technology these scammers used, or whether they were just really good impersonators. But the scams will likely grow more prevalent as the technology gets cheaper and more ubiquitous. For now, the novelty of these scams, and the use of machine learning and deepfakes (technologies that are raising concerns across many sectors of society), seems to be driving a lot of the coverage.

The family password is a decades-old, low tech solution to this modern high tech problem. 

The first step is to agree with your family on a password you can all remember and use. The most important thing is that it should be easy to remember in a panic, hard to forget, and not public information. You could use the name of a well-known person or object in your family, an inside joke, a family meme, or any word you can all remember easily. Despite the name, this doesn’t need to be limited to your family; it can be a chosen family, workplace, anarchist witch coven, etc. Any group of people you associate with can benefit from having a password.

Then, when someone calls you, or calls someone who trusts you (or emails or texts), with an urgent request for money (or iTunes gift cards), you simply ask them for the password. If they can’t tell it to you, they might be a fake. You could, of course, further verify this with other questions, like “What is my cat’s name?” or “When was the last time we saw each other?” These sorts of questions work even if you haven’t previously set up a passphrase in your family or friend group. But keep in mind that people tend to forget basic things when they have experienced trauma or are in a panic. It might be helpful, especially for people with less robust memories, to write down the password in case you forget it. After all, it’s not likely that the scammer will break into your house to find the family password.
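At its core this is just a shared-secret check, the same pattern code uses to verify a password. A minimal Python sketch (the password and the helper function are invented for illustration):

```python
import hmac

# Hypothetical example secret -- pick your own memorable, non-public phrase.
FAMILY_PASSWORD = "rutabaga moonpie"

def caller_is_verified(spoken_password: str) -> bool:
    # hmac.compare_digest is the standard way to compare shared secrets
    # in code (it resists timing attacks); overkill for a phone call,
    # but the logic is identical: no matching secret, no trust.
    return hmac.compare_digest(spoken_password, FAMILY_PASSWORD)
```

The human version works the same way: the caller either produces the secret or they don’t, and urgency is no substitute.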

These techniques can be useful against other scams which haven’t been invented yet, but which may come around as deepfakes become more prevalent, such as machine-generated video or photo avatars for “proof.” Or should you ever find yourself in a hackneyed sci-fi situation where there are two identical copies of your friend and you aren’t sure which one is the evil clone and which one is the original. 

An image of Spider-Man pointing at another Spider-Man who is pointing at him. A classic meme.

Spider-Man hopes the Avengers haven't forgotten their secret password!

The added benefit of this technique is that it gives you a minute to step back, breathe, and engage in some critical thinking. Many scams of this nature rely on panic to keep you in your lower brain; by asking for the passphrase, you also take a minute to think. Is your kid really in Mexico right now? Can you call them back at their phone number to be sure it’s them?

So, go make a family password and a friend password to keep your family and friends from getting scammed by AI impostors (or evil clones).

In Final Talks on Proposed UN Cybercrime Treaty, EFF Calls on Delegates to Incorporate Protections Against Spying and Restrict Overcriminalization or Reject Convention

Par : Karen Gullo
29 janvier 2024 à 12:42

UN Member States are meeting in New York this week to conclude negotiations over the final text of the UN Cybercrime Treaty, which—despite warnings from hundreds of civil society organizations across the globe, security researchers, media rights defenders, and the world’s largest tech companies—will, in its present form, endanger human rights and make the cyber ecosystem less secure for everyone.

EFF and its international partners are going into this last session with a unified message: without meaningful changes to limit surveillance powers for electronic evidence gathering across borders, and without robust minimum human rights safeguards that apply across borders, the convention should be rejected by state delegations and not advance to the UN General Assembly in February for adoption.

EFF and its partners have for months warned that enforcement of such a treaty would have dire consequences for human rights. On a practical level, it will impede free expression and endanger activists, journalists, dissenters, and everyday people.

Under the draft treaty's current provisions on accessing personal data for criminal investigations across borders, each country is allowed to define what constitutes a "serious crime." Such definitions can be excessively broad and violate international human rights standards. States where it’s a crime to criticize political leaders (Thailand), upload videos of yourself dancing (Iran), or wave a rainbow flag in support of LGBTQ+ rights (Egypt) can, under this UN-sanctioned treaty, require one country to conduct surveillance to aid another, in accordance with the data disclosure standards of the requesting country. This includes surveilling individuals under investigation for these offenses, with the expectation that technology companies will assist. Such assistance involves turning over personal information, location data, and private communications secretly, without any guardrails, in jurisdictions lacking robust legal protections.

The final 10-day negotiating session in New York will conclude a series of talks that started in 2022 to create a treaty to prevent and combat core computer-enabled crimes, like distribution of malware, data interception and theft, and money laundering. From the beginning, Member States failed to reach consensus on the treaty’s scope, the inclusion of human rights safeguards, and even the definition of “cybercrime.” The scope of the entire treaty was too broad from the very beginning; Member States eventually dropped some of these offenses, limiting the scope of the criminalization section, but not the evidence-gathering provisions that hand States dangerous surveillance powers. What was supposed to be an international accord to combat core cybercrime morphed into a global surveillance agreement covering any and all crimes conceived by Member States.

The latest draft, released last November, blatantly disregards our calls to narrow the scope, strengthen human rights safeguards, and tighten loopholes enabling countries to assist each other in spying on people. It also retains a controversial provision allowing states to compel engineers or tech employees to undermine security measures, posing a threat to encryption. Absent from the draft are protections for good-faith cybersecurity researchers and others acting in the public interest.

This is unacceptable. In a Jan. 23 joint statement to delegates participating in this final session, EFF and 110 organizations outlined non-negotiable redlines for the draft that will emerge from this session, which ends Feb. 8. These include:

  • Narrowing the scope of the entire Convention to cyber-dependent crimes specifically defined within its text.
  • Including provisions to ensure that security researchers, whistleblowers, journalists, and human rights defenders are not prosecuted for their legitimate activities and that other public interest activities are protected. 
  • Guaranteeing explicit data protection and human rights standards like legitimate purpose, nondiscrimination, prior judicial authorization, necessity and proportionality apply to the entire Convention.
  • Mainstreaming gender across the Convention as a whole and throughout each article in efforts to prevent and combat cybercrime.

It’s been a long fight pushing for a treaty that combats cybercrime without undermining basic human rights. Without these improvements, the risks of this treaty far outweigh its potential benefits. States must stand firm and reject the treaty if our redlines can’t be met. We cannot and will not support or recommend a draft that will make everyone less, instead of more, secure.

EFF’s 2024 In/Out List

19 janvier 2024 à 09:46

Since EFF was formed in 1990, we’ve been working hard to protect digital rights for all. And as each year passes, we’ve come to understand the challenges and opportunities a little better, as well as what we’re not willing to accept. 

Accordingly, here’s what we’d like to see a lot more of, and a lot less of, in 2024.


IN

1. Affordable and future-proof internet access for all

EFF has long advocated for affordable, accessible, and future-proof internet access for all. We cannot accept a future where the quality of our internet access is determined by geographic, socioeconomic, or otherwise divided lines. As the online aspects of our work, health, education, entertainment, and social lives increase, EFF will continue to fight for a future where the speed of your internet connection doesn’t stand in the way of these crucial parts of life.

2. A privacy first agenda to prevent mass collection of our personal information

Many of the ills of today’s internet have a single thing in common: they are built on a system of corporate surveillance. Vast numbers of companies collect data about who we are, where we go, what we do, what we read, who we communicate with, and so on. They use our data in thousands of ways and often sell it to anyone who wants it—including law enforcement. So whatever online harms we want to alleviate, we can do it better, with a broader impact, if we do privacy first.

3. Decentralized social media platforms to ensure full user control over what we see online

While the internet began as a loose affiliation of universities and government bodies, the digital commons has been privatized and consolidated into a handful of walled gardens. But in the past few years, there's been an accelerating swing back toward decentralization as users are fed up with the concentration of power, and the prevalence of privacy and free expression violations. So, many people are fleeing to smaller, independently operated projects. We will continue walking users through decentralized services in 2024.

4. End-to-end encrypted messaging services, turned on by default and available always

Private communication is a fundamental human right. In the online world, the best tool we have to defend this right is end-to-end encryption. But governments across the world are trying to erode this by scanning for all content all the time. As we’ve said many times, there is no middle ground to content scanning, and no “safe backdoor” if the internet is to remain free and private. Mass scanning of peoples’ messages is wrong, and at odds with human rights. 

5. The right to free expression online with minimal barriers and without borders

New technologies and widespread internet access have radically enhanced our ability to express ourselves, criticize those in power, gather and report the news, and make, adapt, and share creative works. Vulnerable communities have also found space to safely meet, grow, and make themselves heard without being drowned out by the powerful. No government or corporation should have the power to decide who gets to speak and who doesn’t. 

OUT

1. Use of artificial intelligence and automated systems for policing and surveillance

Predictive policing algorithms perpetuate historic inequalities, hurt neighborhoods already subject to intense amounts of surveillance and policing, and quite simply don’t work. EFF has long called for a ban on predictive policing, and we’ll continue to monitor the rapid rise of law enforcement use of machine learning. This includes harvesting the data other “autonomous” devices collect and automating important decision-making processes that guide policing and dictate people’s futures in the criminal justice system.

2. Ad surveillance based on the tracking of our online behaviors 

Our phones and other devices process vast amounts of highly sensitive personal information that corporations collect and sell for astonishing profits. This incentivizes online actors to collect as much of our behavioral information as possible. In some circumstances, every mouse click and screen swipe is tracked and then sold to ad tech companies and the data brokers that service them. This often impacts marginalized communities the most. Data surveillance is a civil rights problem, and legislation to protect data privacy can help protect civil rights. 

3. Speech and privacy restrictions under the guise of "protecting the children"

For years, government officials have raised concerns that online services don’t do enough to tackle illegal content, particularly child sexual abuse material. Their solution? Bills that ostensibly seek to make the internet safer, but instead achieve the exact opposite by requiring websites and apps to proactively prevent harmful content from appearing on messaging services. This leads to the universal scanning of all user content, all the time, and functions as a 21st-century form of prior restraint—violating the very essence of free speech.

4. Unchecked cross-border data sharing disguised as cybercrime protections 

Personal data must be safeguarded against exploitation by any government to prevent abuse of power and transnational repression. Yet, the broad scope of the proposed UN Cybercrime Treaty could be exploited for covert surveillance of human rights defenders, journalists, and security researchers. As the Treaty negotiations approach their conclusion, we are advocating against granting broad cross-border surveillance powers for investigating any alleged crime, ensuring it doesn't empower regimes to surveil individuals in countries where criticizing the government or other speech-related activities are wrongfully deemed criminal.

5. Internet access being used as a bargaining chip in conflicts and geopolitical battles

Given the proliferation of the internet and its use in pivotal social and political moments, governments are very aware of their power in cutting off that access. The internet enables the flow of information to remain active and alert to new realities. In wartime, being able to communicate may ultimately mean the difference between life and death. Shutting down access aids state violence and stifles free speech. Access to the internet shouldn't be used as a bargaining chip in geopolitical battles.

Sketchy and Dangerous Android Children’s Tablets and TV Set-Top Boxes: 2023 in Review

You may want to save your receipts if you gifted any low-end Android TV set-top boxes or children's tablets to a friend or loved one this holiday season. In a series of investigations this year, EFF researchers confirmed the existence of dangerous malware on set-top boxes manufactured by AllWinner and RockChip, and discovered sketchyware on a tablet marketed for kids from the manufacturer Dragon Touch. 

Though more reputable Android devices are available for watching TV and keeping the little ones occupied, they come with a higher price tag as well. This means that those who can afford such devices get more assurance in the security and privacy of these devices, while those who can only afford cheaper devices by little-known manufacturers are put at greater risk.

The digital divide could not be more apparent. Without a clear warning label, consumers who cannot afford devices from well-known brands such as Apple, Amazon, or Google are being sold devices which come out-of-the-box ready to spy on their children. This malware opens their home internet connection as a proxy to unknown users, and exposes them to legal risks. 

Traditionally, if a device like a vacuum cleaner were found to be defective or dangerous, we would expect resellers to pull it from the department store floor and, to the best of their ability, notify customers who had already bought it and brought it into their homes. Yet we observed that the devices in question continued to be sold by online vendors months after news of their defects circulated widely.

After our investigation of the set-top boxes, we urged the FTC to take action against the vendors who sell devices known to be riddled with malware. Amazon and AliExpress were named in the letter, though more vendors are undoubtedly still selling these devices. Not to spoil the holiday cheer, but if you have received one of these devices, you may want to ask for another gift and have the item refunded.

In the case of the Dragon Touch tablets, it was apparent that this issue went beyond just Android TV boxes and even encompassed budget Android devices specifically marketed for children. The tablet we investigated had an outdated pre-installed parental controls app that was labeled as adware, leftover remnants of malware, and sketchy update software. It’s clear this issue reached a wide variety of Android devices and it should not be left up to the consumer to figure this out. Even for devices on the market that are “normal,” there still needs to be work done by the consumer just to properly set up devices for their kids and themselves. But there’s no total consumer-side solution for pre-installed malware and there shouldn’t have to be.

Compared with the products of yesteryear, our “smart” and IoT devices carry a new set of risks to our security and privacy. Yet we feel confident that better digital product testing, along with regulatory oversight, can go a long way in mitigating these dangers. We applaud efforts such as Mozilla’s Privacy Not Included to catalog just how well our devices protect our data, since as it currently stands it is up to us as consumers to assess the risks ourselves and take appropriate steps.

Spritely and Veilid: Exciting Projects Building the Peer-to-Peer Web

13 décembre 2023 à 12:49

While there is a surge in federated social media sites, like Bluesky and Mastodon, some technologists are hoping to take things further than this model of decentralization with fully peer-to-peer applications. Two leading projects, Spritely and Veilid, hint at what this could look like.

There are many technologies used behind the scenes to create decentralized tools and platforms. There has been a lot of attention lately, for example, around interoperable and federated social media sites using ActivityPub, such as Mastodon, as well as platforms like Bluesky using a similar protocol. These types of services require most individuals to sign up with an intermediary service host in order to participate, but they are decentralized in so far as any user has a choice of intermediary, and can run one of those services themselves while participating in the larger network.

Another model for decentralized communications does away with the intermediary services altogether in favor of a directly peer-to-peer model. This model is technically much more challenging to implement, particularly in cases where privacy and security are crucial, but it does result in a system that gives individuals even more control over their data and their online experience. Fortunately, there are a few projects being developed that are aiming to make purely peer-to-peer applications achievable and easy for developers to create. Two leading projects in this effort are Spritely and Veilid.

Spritely

Spritely is worth keeping an eye on. Developed by the institute of the same name, Spritely is a framework for building distributed apps that don’t even have to know that they’re distributed. The project is spearheaded by Christine Lemmer-Webber, who was one of the co-authors of the ActivityPub spec that drives the fediverse. She is taking the lessons learned from that work, combining them with security- and privacy-minded object-capability models, and mixing it all up into a model for peer-to-peer computation that could pave the way for a generation of new decentralized tools.

Spritely is so promising because it is tackling one of the hard questions of decentralized technology: how do we protect privacy and ensure security in a system where data is passing directly between people on the network? Our best practices in this area have been shaped by many years of centralized services, and tackling the challenges of a new paradigm will be important.

One of the interesting techniques that Spritely is bringing to bear on the problem is the concept of object capabilities. OCap is a framework for software design that only gives processes the ability to view and manipulate data that they’ve been given access to. That sounds like common sense, but it is in contrast to the way that most of our computers work, in which the game Minesweeper (just to pick one example) has full access to your entire home directory once you start it up. That isn’t to say that it or any other program is actually reading all your documents, but it has the ability to, which means that a security flaw in that program could exploit that ability.
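The contrast can be sketched in a few lines of Python. This is an illustration of the object-capability style in general, not Spritely's actual API; the class and function names are invented:

```python
# Object-capability style: code can only touch resources it was
# explicitly handed, instead of having ambient access to everything.

class ReadCapability:
    """Grants read access to exactly one file, and nothing else."""

    def __init__(self, path: str):
        self._path = path

    def read(self) -> str:
        with open(self._path) as f:
            return f.read()

def word_count(doc: ReadCapability) -> int:
    # This function receives a single capability. Unlike Minesweeper
    # with full home-directory access, it has no authority to open
    # arbitrary paths -- none was passed in.
    return len(doc.read().split())
```

A security flaw in `word_count` could at worst leak that one document, because the capability it holds is the full extent of its authority.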

The Spritely Institute is combining OCap with a message passing protocol that doesn’t care if the other party it's communicating with is on the same device, on another device in the same room, or on the other side of the world. And to top things off they’re working on the protocol in the open, with a handful of other dedicated organizations. We’re looking forward to seeing what the Spritely team creates and what their work enables in the future.

Veilid

Another leading project in the push for fully p2p apps was announced just a few months ago. The Veilid project was released at DEF CON 31 in August and has a number of promising features that could make it a fundamental tool in future decentralized systems. Described as a cross between Tor and the InterPlanetary File System (IPFS), Veilid is a framework and protocol that offers two complementary tools. The first is private routing, which, much like Tor, can construct an encrypted private tunnel over the public internet, allowing two devices to communicate with each other without anyone else on the network knowing who is talking to whom.

The second tool that Veilid offers is a distributed hash table (DHT), which lets anyone look up a bit of data associated with a specific key, wherever that data lives on the network. DHTs go back to BitTorrent’s trackerless mode, where they direct users to other nodes in the network that have the chunk of a file they need, and they form the backbone of IPFS’s system. Veilid’s DHT is particularly intriguing because it is “multi-writer.” In most DHTs, only one party can set the value stored at a particular key, but in Veilid the creator of a DHT key can choose to share the writing capability with others, creating a system where nodes can communicate by leaving notes for each other in the DHT. Veilid has created an early alpha of a chat program, VeilidChat, based on exactly this feature.
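A toy, single-process stand-in can show the shape of a multi-writer record store. All names here are invented for illustration; Veilid's real API differs and spreads records across many nodes:

```python
import hashlib

class ToyMultiWriterDHT:
    """Single-process sketch of a multi-writer DHT record store."""

    def __init__(self):
        self._records = {}

    def _record_id(self, key: str) -> str:
        # Hashing keys spreads records evenly; in a real DHT this id
        # also determines which nodes are responsible for storing it.
        return hashlib.sha256(key.encode()).hexdigest()

    def put(self, key: str, writer: str, value: str) -> None:
        # "Multi-writer": each authorized writer gets its own slot
        # under the record, so several parties can leave notes at
        # the same key -- the basis of a VeilidChat-style channel.
        self._records.setdefault(self._record_id(key), {})[writer] = value

    def get(self, key: str) -> dict:
        return dict(self._records.get(self._record_id(key), {}))
```

Two chat participants writing to one shared key, each in their own slot, is exactly the communication pattern described above; the hard part Veilid solves is doing this across untrusted nodes with cryptographic write permissions.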

Both of these features are even more valuable because Veilid is a very mobile-friendly framework. The library is available for a number of platforms and programming languages, including the cross-platform Flutter framework, which means it is easy to build iOS and Android apps that use it. Mobile has been a difficult platform to build peer-to-peer apps on for a variety of reasons, so having a turn-key solution in the form of Veilid could be a game changer for decentralization in the next couple years. We’re excited to see what gets built on top of it.

Public interest in decentralized tools and services is growing, as people realize that there are downsides to centralized control over the platforms that connect us all. The past year has seen interest in networks like the fediverse and Bluesky explode and there’s no reason to expect that to change. Projects like Spritely and Veilid are pushing the boundaries of how we might build apps and services in the future. The things that they are making possible may well form the foundation of social communication on the internet in the next decade, making our lives online more free, secure, and resilient.

Meta Announces End-to-End Encryption by Default in Messenger

Yesterday Meta announced that they have begun rolling out default end-to-end encryption for one-to-one messages and voice calls on Messenger and Facebook. While there remain some privacy concerns around backups and metadata, we applaud this decision. It will bring strong encryption to over one billion people, protecting them from dragnet surveillance of the contents of their Facebook messages. 

Governments are continuing to attack encryption with laws designed to weaken it. With authoritarianism on the rise around the world, encryption is more important with each passing day. Strong default encryption, sooner, might have prevented a woman in Nebraska from being prosecuted for an abortion based primarily on evidence from her Facebook messages. This update couldn’t have come at a more important time. This introduction of end-to-end encryption on Messenger means that the two most popular messaging platforms in the world, both owned by Meta, will now include strong encryption by default. 

For now this change will only apply to one-to-one chats and voice calls, and will be rolled out to all users over the next few months, with default encryption of group messages and Instagram messages to come later. Regardless, this rollout is a huge win for user privacy across the world. Users will also have many more options for messaging security and privacy, including how to back up their encrypted messages safely, turn off “read receipts,” and enable “disappearing” messages. Choosing between these options is important for your privacy and security model, and we encourage users to think about what they expect from their secure messenger.

Backing up securely: the devil is in the (Labyrinthian) details

The technology behind Messenger’s end-to-end encryption will continue to be a slightly modified version of the Signal protocol (the same as WhatsApp). When it comes to building secure messengers, or in this case, porting a billion users onto secure messaging, the details are the most important part. Here, the encrypted backup options provided by Meta are the biggest detail: in addressing backups, how do they balance security with usability and availability?

Backups are important for users who expect to log into their account from any device and retrieve their message history by default. From an encryption standpoint, how backups are handled can break certain guarantees of end-to-end encryption. WhatsApp, Meta’s other messaging service, introduced the option of end-to-end encrypted backups only a few years ago. Meta is also rolling out an end-to-end encrypted backup system for Messenger, which they call Labyrinth.

Encrypted backups mean your backed-up messages will be encrypted on Facebook servers and won’t be readable without your private key. Enabling encrypted backups (necessarily) breaks forward secrecy, in exchange for usability. If an app is forward-secret, then you could delete all your messages and hand someone else your phone, and they would not be able to recover them. Weighing this tradeoff is another factor to consider when choosing how to use secure messengers that give you the option.

If you elect to use encrypted backups, you can set a 6-digit PIN to secure your private key, or back up your private key to cloud storage such as iCloud or Google Cloud. If you back up keys to a third party, those keys are available to that service provider and could be retrieved by law enforcement with a warrant, unless that cloud account is also encrypted. The 6-digit PIN provides a bit more security than the cloud backup option, but at the cost of usability for users who might not be able to remember a PIN. 

Choosing the right secure messenger for your use case

There are still significant concerns about metadata in Messenger. By design, Meta has access to a lot of unencrypted metadata, such as who sends messages to whom, when those messages were sent, and data about you, your account, and your social contacts. None of that will change with the introduction of default encryption. For that reason we recommend that anyone concerned with their privacy or security consider their options carefully when choosing a secure messenger.

Low Budget Should Not Mean High Risk: Kids' Tablet Came Preloaded with Sketchyware

November 14, 2023 at 17:04

It’s easy to get Android devices from online vendors like Amazon at different price points. Unfortunately, at the lower price points it is also easy to end up with an Android device carrying malware. Several factors contribute to this: multiple devices manufactured in the same facility, a lack of security standards when choosing components, and a lack of quality assurance and scrutiny by the vendors that sell these devices. We investigated a tablet with potential malware on it, bought from the online vendor Amazon: a Dragon Touch KidzPad Y88X 10 kid’s tablet. As of this post, the tablet in question is no longer listed on Amazon, although it was available for the majority of this year.


Dragon Touch KidzPad Y88X 10

It turns out malware was present, with an added bonus of pre-installed riskware and a very outdated parental control app. This is a major concern since this is a tablet marketed for kids.

Parents have plenty of worry and concern about how their kids use technology as it is. Ongoing conversations and negotiations about the time spent on devices happen in many households. Potential malware or riskware should not be a part of these concerns just because you purchased a budget Android tablet for your child. It just so happens that some of the parents at EFF conduct security research. But this is not what it should take to keep your kid safe.

“Stock Android”

To understand this issue better, it's useful to know what “stock Android” means and how manufacturers approach choosing an OS. The Android operating system is open-sourced by Google and officially known as the “Android Open Source Project,” or AOSP. The source code is stripped down and doesn't even include Google apps or the Google Play Store. Most phones or tablets you purchase with Android are AOSP with layers of customization, or a “skinned” version of AOSP. Even the current Google flagship phone, the Pixel, does not come with stock Android.

Even though custom Android distributions, or ROMs, can come with useful features, others can come with “bloatware” or unwanted apps. For example, in 2019 when Samsung pre-installed the Facebook app on their phones, the only option was to “disable” the app. Worse, in some cases custom ROMs can come with pre-installed malware. Android OEMs (original equipment manufacturers) can pre-install apps that have high-level privileges and may not be as obvious as an icon you can see on your home screen. It's not just apps, though. New features provided with AOSP may be severely delayed in custom ROMs if the device manufacturer isn't diligent about porting them in. This could be because of hardware limitations or because updates simply aren't prioritized.

Screen Time for Sketchyware

Similar to an Android TV box we looked into earlier this year, we found the now-notorious Corejava malware directories on the Dragon Touch tablet. Unlike the Android TV box we saw, this tablet didn’t come rooted. However, we could see that the directories /data/system/Corejava and /data/system/Corejava/node were present on the device. This indicates Corejava was active in this tablet’s firmware.

We originally didn’t suspect this malware’s presence until links to other manufacturers and odd requests made from the tablet prompted us to take a look. We first booted up this Dragon Touch tablet in May 2023, after the Command and Control (C2) servers that Corejava depends on were taken down, so any attempts to download malicious payloads, if active, wouldn't work (for now). Given the lack of “noise” from the device, we suspect that this malware indicator is, at minimum, a leftover remnant of “copied homework” from hasty production, or at worst, left in place for possible future activity.
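Checks like this boil down to capturing a directory listing over `adb shell` and looking for known indicator paths. A minimal Python sketch of that triage step, using the two Corejava paths named above as the indicator list (the captured listing here is a made-up example):

```python
# Minimal sketch: flag known Corejava indicator paths in output
# captured from e.g. `adb shell ls -R /data/system`.
COREJAVA_INDICATORS = (
    "/data/system/Corejava",
    "/data/system/Corejava/node",
)

def find_indicators(listing: str) -> list[str]:
    """Return which known indicator paths appear in a directory listing."""
    found = []
    for path in COREJAVA_INDICATORS:
        if path in listing:
            found.append(path)
    return found

# Example captured listing (invented for illustration):
captured = "/data/system/Corejava\n/data/system/Corejava/node\n/data/system/users"
print(find_indicators(captured))
```

Presence of these paths is an indicator rather than proof of active infection, which is why we treated it as a prompt for deeper analysis rather than a verdict on its own.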

The tablet also came preloaded with Adups (which was also found on the Android TV boxes) in the form of “firmware over the air” (FOTA) update software, shipped as an application called “Wireless Update.”

App list that contains the app "Wireless Update"

Adups has a history of being malware, but “clean” versions exist, and one of those “clean” versions was on this tablet. Thanks to its history and its extensive system-level permission to download whatever application it wants from the Adups servers, it still poses a concern. Adups comes preinstalled with this Dragon Touch OEM image; if you factory reset the device, the app will return. There’s no way to uninstall or disable this variant of Adups without technical knowledge and comfort with the command line. Using OTA software with such a fraught history is a very questionable decision for a children’s tablet.

Connecting the Dots

The connection between the infected Dragon Touch and the Android TV box we previously investigated was closer than we initially thought. After seeing a customer review for an Android TV box for a company at the same U.S. address as Dragon Touch, we discovered Dragon Touch is owned and trademarked by one company that also owns and distributes other products under different brand names.

This group that registered multiple brands, and shared an address with Dragon Touch, sold the same tablet we looked at in other online markets, like Walmart. This same entity apparently once sold the T95Z model of Android TV boxes under the brand name “Tablet Express,” along with devices like the Dragon Touch tablet. The T95Z was in the family of TV boxes investigated after researchers started taking a closer look at these types of devices.

With the widespread use of these devices, it’s safe to say that any Android devices attached to these sellers should be met with scrutiny.

Privacy Issues

The Dragon Touch tablet also came with a very outdated version of the KIDOZ app pre-installed. This app touts being “COPPA Certified” and “turns phones & tablets into kids friendly devices for playing and learning with the best kids’ apps, videos and online content.” This version operates as a kind of mini operating system where you can download games and apps and configure parental controls within the app.

We noticed the referrer for this app was “ANDROID_V4_TABLET_EXPRESS_PRO_GO.” “Tablet Express” is no longer an operational company, so it appears Dragon Touch repurposed an older version of the KIDOZ app. KIDOZ only distributes its app to device manufacturers to preload on devices for kids; it's not in the Google Play Store.

This version of the app still collects and sends data to “kidoz.net” on usage and physical attributes of the device. This includes information like device model, brand, country, timezone, screen size, view events, click events, logtime of events, and a unique “KID ID.” In an email, KIDOZ told us that the “calls remain unused even though they are 100% certified (COPPA)” in reference to the information sent to their servers from the app. The older version still has an app store of very outdated apps as well. For example, we found a drawing app, “Kids Paint FREE,” attempting to send exact GPS coordinates to an ad server. The ad server this app calls no longer exists, but some of these apps in the KIDOZ store are still operational despite having deprecated code. This leakage of device-specific information over primarily HTTP (insecure) web requests can be targeted by bad actors who want to siphon information, either on the device or by acquiring these defunct domains.
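Leaks like this tend to surface as plain `http://` requests in device logs or network captures. A minimal sketch of the kind of filter one might run over captured log lines to surface them (the sample lines and domains below are invented for illustration):

```python
import re

# Flag insecure (plain-HTTP) requests in captured log lines, the kind
# of traffic that can expose device details or GPS coordinates in
# cleartext. The pattern deliberately matches http:// but not https://.
INSECURE = re.compile(r"http://[^\s\"']+")

def insecure_urls(log_lines):
    """Return every plain-HTTP URL found in the given log lines."""
    hits = []
    for line in log_lines:
        hits.extend(INSECURE.findall(line))
    return hits

# Hypothetical captured lines, not real traffic from the device:
sample = [
    "GET http://ads.example-defunct.net/track?lat=37.42&lon=-122.08",
    "GET https://cdn.example.com/game.apk",
]
print(insecure_urls(sample))  # only the plain-HTTP tracker URL
```

In practice we fed requests through a DNS sinkhole and log capture (see the tools list below); a filter like this is just the first pass before inspecting what each cleartext request actually carries.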

Several security vendors have labeled the version of the KIDOZ app we reviewed as adware. The current version of KIDOZ is less of an issue since the internal app store was removed, so it's no longer labeled as adware. Thankfully, you can uninstall this version of KIDOZ. KIDOZ does offer the latest version of their app to OEM manufacturers, so ultimately the responsibility lies with Dragon Touch. When we reached out to KIDOZ, they said they would follow up with various OEMs to offer the latest version of the app.


Simple racing games from the old KIDOZ app store asking for location and contacts.

Malware and riskware come in many different forms. Placing the burden of remedying pre-installed malware and sketchyware on consumers is absolutely unacceptable. We'd like to see some basic improvements in how devices marketed for children are sold and made:

  • There should be better security benchmarks for devices sold in large online markets, especially devices packaged to appear safe for kids.
  • If security researchers find malware on a device, there should be a more effective path to remove these devices from the market and alert customers.
  • There should be a minimum standard for Android OEM builds, requiring a baseline of the security and privacy features available in AOSP. For instance, this Dragon Touch kid’s tablet is running Android 9, which is now five years old; Android 14 is the latest stable OS at the time of this report.

Devices with software with a malicious history and out-of-date apps that leak children’s data create a larger scope of privacy and security problems that should be watched with closer scrutiny than they are now. It took over 25 hours to assess all the issues with this one tablet. Since this was a custom Android OEM, the only possible source of documentation was from the company, and there wasn’t much. We were left to look at the breadcrumbs left on the image instead, such as custom system-level apps, chip-processor-specific quirks, and pre-installed applications. In this case, following the breadcrumbs allowed us to make the needed connections about how this device was made and the circumstances that led to the sketchyware on it. Most parents aren't security researchers and do not have the time, will, or energy to think about these types of problems, let alone fix them. Online vendors like Amazon and Walmart should start proactively catching these issues and invest in better quality and source checks on the many consumer electronics on their markets.

Investigated Apps, Logs, and Tools List:

APKs (Apps):

Logs:

Tools:

  • Android Debug Bridge (adb) and Android Studio for shell and emulation.
  • Logcat for app activity on device.
  • MOBSF for initial APK scans.
  • JADX GUI for static analysis of APKs.
  • PiHole for DNS requests from devices.
  • VirusTotal for graphing connections to suspicious domains and APKs.

EFF Director of Investigations Dave Maass contributed research to this report.

What the !#@% is a Passkey?

This is part 1 of our series on passkeys. Part 2, on privacy, is here.

A new login technique is becoming available in 2023: the passkey. The passkey promises to solve phishing and prevent password reuse. But lots of smart and security-oriented folks are confused about what exactly a passkey is. There’s a good reason for that. A passkey is in some sense one of two (or three) different things, depending on how it’s stored.

First off: is a passkey one of those little plastic things you stick in your USB port for two-factor authentication? No, that’s a security key. More on security keys in a minute. A passkey is also not something you can type in; it’s not a password, passcode, passphrase, or a PIN.

A passkey is approximately 100-1400 bytes of random data[1], generated on your device (like your phone, laptop, or security key) for the purpose of logging in on a specific website. Once the passkey is generated, your browser registers it with the website and it gets stored somewhere safe (for instance, your password manager). From then on, you can use that passkey to log in to that website without entering a password. When you go to a website’s login page, you’ll have the option to “Sign in with a passkey.” If you choose that option you’ll get a confirmation prompt from your password manager, and will be logged in after confirming. For all this to work, there needs to be passkey support in the website, your browser, your password manager, and usually also your operating system.
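Under the hood, registration and login form a challenge-response protocol: at registration the site stores the passkey's verifying key, and at each login your device signs a fresh challenge from the site. The sketch below mimics that flow. Note the loud caveat: a real passkey is an asymmetric key pair and the server holds only the public half; here an HMAC over a shared key stands in for the signature scheme so the example runs with just the Python standard library. This is illustrative, not the WebAuthn API:

```python
import hashlib
import hmac
import secrets

# Illustrative only: in real WebAuthn, register() would produce a
# public/private key pair and the server would store the public key.
server_db = {}  # credential_id -> verification key

def register():
    """Device creates a credential; server stores the verifying key."""
    credential_id = secrets.token_hex(8)
    device_key = secrets.token_bytes(32)   # stays on the device
    server_db[credential_id] = device_key  # would be the *public* key
    return credential_id, device_key

def login(credential_id: str, device_key: bytes) -> bool:
    """Server issues a challenge; device 'signs' it; server verifies."""
    challenge = secrets.token_bytes(16)    # fresh per login attempt
    signature = hmac.new(device_key, challenge, hashlib.sha256).digest()
    expected = hmac.new(server_db[credential_id], challenge,
                        hashlib.sha256).digest()
    return hmac.compare_digest(signature, expected)

cred_id, key = register()
print(login(cred_id, key))  # True
```

Because the challenge is fresh every time, nothing reusable (like a password) ever crosses the wire, which is what makes the scheme resistant to replay and credential stuffing.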

You can create many passkeys: each passkey unlocks a single account on a single website. For multiple accounts on a single website, you can have multiple passkeys for that website. For instance, if you have a social media account for personal use and one for business, you would have different passkeys for each account.

You can usually have both a password and a passkey on your account[2], and can log in with either. Logging in with a passkey is generally faster, since your password manager will offer to do it in a single click, instead of the multiple clicks that logging in with a password usually takes. Also, logging in with a passkey typically lets you skip traditional two-factor authentication (SMS, authenticator app, or security key). 

Why is it safe for passkeys to skip traditional two-factor authentication? Passkeys build in a second factor. Each time you use the passkey to log in, your browser or operating system may ask you to re-enter your device unlock PIN. If you use a fingerprint or facial recognition to unlock your device, your browser might instead request you re-enter your fingerprint or show your face, to confirm that it’s really you asking to log in. That gives two factors of authentication: the device that stores your passkey is something you have, and it’s accompanied by something you know (the PIN) or something you are (a fingerprint or a face).

Storage and Backup

A passkey stored on just one computer or phone isn’t that useful. What if you want to log in from a different device? What if your device falls in the toilet? There are at least three solutions here and they’re very different, which is part of why passkeys are in practice three very different things.

  • Solution 1: Passkeys are stored in the password manager, which encrypts them, backs them up to the cloud, and helps you copy them onto all of your devices.
  • Solution 2a: Passkeys are created and stored in a physical security key that you plug in via USB[3]. To log in on a different device, you plug in the security key when prompted. Passkeys created this way can’t be copied. Only recently-made security keys support this.
  • Solution 2b: Passkeys are created and stored on a high-security chip built into your computer or phone (for instance, a TPM or Secure Enclave, available on most devices made in the last few years). Like solution 2a, these passkeys can’t be copied.

Solutions 2a and 2b are less convenient (and solution 2a costs a little bit of money, to buy a security key). But they offer a higher level of security against someone stealing your devices. With solution 1, someone who steals your computer might be able to copy the passkeys if your password manager is unlocked.

Also, solutions 2a and 2b don’t really solve the “device falls in toilet” problem. If you’re using one of those solutions, you should have multiple passkeys stored on different devices as backup. Alternatively you may wind up relying on email-based account recovery.

If you’re using solution 1, you trust your password manager to keep your passkeys secure. Also note that password managers generally won’t let you export a copy of your passkeys for offline backup.

How do passkeys prevent phishing?

Each passkey contains a record of which domain name the passkey was created for. If someone sends you a link to a login page on a lookalike domain name, you may be fooled but your browser will not, since browsers can easily check for an exact match. So your browser will not send the passkey to the lookalike domain name and you’ll be safe.

However, so long as you still have a memorized password in addition to your passkey, a lookalike site could tell you your passkey isn't working and you need to enter the password instead. If you do enter the password, the phishing attack will succeed. So phishing is still possible, but someone who typically logs in on a given site with a passkey is more likely to get suspicious when asked to enter a password instead, which provides some protection even if it’s not complete protection.
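The exact-match rule is the whole trick. A plain-Python sketch (not actual browser internals) of why a lookalike domain never receives the passkey:

```python
# Each stored passkey records the exact domain it was created for, and
# the browser releases it only on an exact match. Unlike a human, the
# comparison is never fooled by visually similar characters.
passkeys = {"example.com": "passkey-for-example"}  # hypothetical store

def passkey_for(login_page_domain: str):
    """Return the passkey bound to this exact domain, or None."""
    return passkeys.get(login_page_domain)

print(passkey_for("example.com"))   # passkey-for-example
print(passkey_for("examp1e.com"))   # None: the lookalike gets nothing
```

The real mechanism binds credentials to a "relying party ID" checked by the browser during the WebAuthn ceremony, but the essential property is the same: an exact string comparison that a phishing page cannot satisfy.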

Should I use passkeys?

Like all security and privacy topics, the answer is “it depends.” But for most people, passkeys are a good idea. If you’re already using a password manager, generating long unique passwords for each website, and always using the autofill features to log in (i.e. not copy-pasting passwords), passkeys will provide a slightly higher level of security with significantly more convenience.

If you’re not already using a password manager, passkeys will be a tremendous increase in security (and will also require you to start using a password manager).

For sites where you are using two factor authentication (2FA), passkeys will be much more convenient, and may be more secure. SMS or authenticator app 2FA methods are vulnerable to phishing attacks, since a fake site can ask you for the one-time code and pass it along to the real site along with your phished password. Passkeys are more secure than SMS or authenticator app 2FA because they aren’t vulnerable to phishing; your browser knows exactly which site goes with which passkey, and isn’t tricked by fake websites.

Security key 2FA also isn’t vulnerable to phishing, so switching from security key 2FA to a passkey is mainly a matter of convenience; it means one less step during login, and one less password to remember. If you store your passkeys on a security key (protected with a PIN or biometric), you’ll achieve similar results as security key 2FA. If you store your passkeys in a password manager instead, that’s slightly less safe, because anyone who gains access to your password manager can use your passkeys, without needing physical access to your security key.

As of late 2023, passkey support is very uneven, particularly for syncing. For instance, Adam Langley says “Windows Hello doesn’t sync at all, Google Password Manager can only sync between Android devices, and iCloud Keychain only works on Apple devices.” Even once those problems are solved, cross-ecosystem syncing (for instance between iOS and Windows) will remain a big problem. Third-party password managers 1Password, Bitwarden, and Dashlane have passkey support and can sync across ecosystems. But they don’t necessarily support all platforms yet (for instance, 1Password doesn’t fully support passkeys on Android as of October 2023). If you want to try out passkeys on a throwaway account, you can create one on passkeys.io or webauthn.io.

If you like being an early adopter, go ahead and give passkeys a try. You may run into stumbling blocks along the way and have to fall back to that embattled ancient tool, the password.

More about passkeys in part 2, Passkeys and Privacy.

  • 1. A cryptographic public/private key pair.
  • 2. This is true in 2023, though if passkeys see wide adoption, some websites might let you sign up by generating a passkey, and never have a password at all.
  • 3. Or, less commonly, connect via NFC or Bluetooth Low-Energy (BLE)

EFF And Other Experts Join in Pointing Out Pitfalls of Proposed EU Cyber-Resilience Act

Today we join a set of 56 experts from organizations such as Google, Panasonic, Citizen Lab, Trend Micro and many others in an open letter calling on the European Commission, European Parliament, and Spain’s Ministry of Economic Affairs and Digital Transformation to reconsider the obligatory vulnerability reporting mechanisms built into Article 11 of the EU’s proposed Cyber-Resilience Act (CRA). As we’ve pointed out before, this reporting obligation raises major cybersecurity concerns. Broadening the knowledge of unpatched vulnerabilities to a larger audience will increase the risk of exploitation, and software publishers being forced to report these vulnerabilities to government regulators introduces the possibility of governments adding it to their offensive arsenals. These aren’t just theoretical threats: vulnerabilities stored on Intelligence Community infrastructure have been breached by hackers before.

Technology companies and others who create, distribute, and patch software are in a tough position. The intention of the CRA is to protect the public from companies who shirk their responsibilities by leaving vulnerabilities unpatched and their customers open to attack. But companies and software publishers who do the right thing by treating security vulnerabilities as well-guarded secrets until a proper fix can be applied and deployed now face an obligation to disclose vulnerabilities to regulators within 24 hours of exploitation. This significantly increases the danger these vulnerabilities present to the public. As the letter points out, the CRA “already requires software publishers to mitigate vulnerabilities without delay” separate from the reporting obligation. The letter also points out that this reporting mechanism may interfere with the collaboration and trusted relationship between companies and security researchers who work with companies to produce a fix.

The letter suggests either removing this requirement entirely or changing the reporting obligation to a 72-hour window after patches are made and deployed. It also calls on European law- and policy-makers to prohibit use of reported vulnerabilities “for intelligence, surveillance, or offensive purposes.” These changes would go a long way in ensuring security vulnerabilities discovered by software publishers don’t wind up being further exploited by falling into the wrong hands.

Separately, EFF (and others) have pointed out the dangers the CRA presents to open-source software developers by making them liable for vulnerabilities in their software if they so much as solicit donations for their efforts. The obligatory reporting mechanism and open-source liability clauses of the CRA must be changed or removed. Otherwise, software publishers and open-source developers who are doing a public service will fall under a burdensome and undue liability.
