
Security Theater REALized and Flying without REAL ID

After multiple delays of the REAL ID Act of 2005 and its updated counterpart, the REAL ID Modernization Act, the May 7 deadline for REAL ID enforcement in the United States has finally arrived. Does this move our security forward in the skies? The last 20 years suggest we got along fine without it. REAL ID does impose burdens on everyday people, such as potential additional costs and rigid documentation requirements, even for those who already hold a state-issued ID. While the TSA states this is not a national ID or a federal database, but a set of minimum standards required for federal use, we remain watchful of how these mechanisms could pivot into privacy problems as digital IDs expand.

But you don’t need a REAL ID just to fly domestically. There are alternatives.

The most common alternatives are passports or passport cards. You can use either instead of a REAL ID, which might save you an immediate trip to the DMV. And the additional money for a passport at least provides you the extra benefit of international travel.

Passports and passport cards are not the only alternatives to REAL ID. The TSA also accepts additional documentation (this list is subject to change by the TSA):

  • REAL ID-compliant driver's licenses or other state photo identity cards issued by a Department of Motor Vehicles or equivalent (this excludes temporary driver's licenses)
  • State-issued Enhanced Driver's License (EDL) or Enhanced ID (EID)
  • U.S. passport
  • U.S. passport card
  • DHS trusted traveler cards (Global Entry, NEXUS, SENTRI, FAST)
  • U.S. Department of Defense ID, including IDs issued to dependents
  • Permanent resident card
  • Border crossing card
  • An acceptable photo ID issued by a federally recognized Tribal Nation/Indian Tribe, including Enhanced Tribal Cards (ETCs)
  • HSPD-12 PIV card
  • Foreign government-issued passport
  • Canadian provincial driver's license or Indian and Northern Affairs Canada card
  • Transportation Worker Identification Credential (TWIC)
  • U.S. Citizenship and Immigration Services Employment Authorization Card (I-766)
  • U.S. Merchant Mariner Credential
  • Veteran Health Identification Card (VHIC)

Foreign government-issued passports are on this list. However, using one may increase your chances of closer scrutiny at the security gate. REAL ID and other federally accepted documents are supposed to be about verifying your identity, not your citizenship status. Realistically, though, secondary screening and interactions with law enforcement are not out of the realm of possibility for non-citizens. The power dynamics of the border have now been brought to domestic flying thanks to REAL ID, and the question of who can and can't fly has become more fraught.

Mobile Driver’s Licenses (mDLs)

Many states have rolled out the option of a Mobile Driver's License, which acts as a form of your state-issued ID on your phone and is supposed to come with an exception to REAL ID compliance. This is something we asked for, since mDLs appear to satisfy their fears of forgery and cloning. But the catch is that states had to apply for this waiver:

“The final rule, effective November 25, 2024, allows states to apply to TSA for a temporary waiver of certain REAL ID requirements written in the REAL ID regulations.”

TSA stated they would publish the list of states with this waiver. But we do not see it on the website where they stated it would be. This bureaucratic hurdle appears to have rendered this exception useless, which is disappointing considering the TSA pushed for mDLs to be used first in their context.

Google ID Pass

Another exception appears to bypass state-issued waivers: Google Wallet's "ID Pass". If a state has partnered with Google to issue mDLs, or if you have a passport, then an ID Pass is acceptable to TSA. This is a large leap for the mDL ecosystem, expanding past state scrutiny to a direct partnership with a private company to provide acceptable forms of ID for TSA. There's much to be said about our worries with digital IDs and their rapid expansion outside of the airport context. This is another gateway that highlights how ID is being shaped and accepted in the digital sense.

With both ID Pass and mDLs, the presentation flow lets you tap your phone without unlocking it. That is a bonus, but it is not clear whether TSA has the technology to read these IDs at all airports nationwide, and travelers are still encouraged to bring a physical ID for additional verification.

A lot of the privilege dynamics of flying appear through types of ID you can obtain, whether your shoes stay on, how long you wait in line, etc. This is mostly tied to how much you can spend on traveling and how much preliminary information you establish with TSA ahead of time. The end result is that less wealthy people are subjected to the most security mechanisms at the security gate. For now, you can technically still fly without a REAL ID, but that means being subject to additional screening to verify who you are.

REAL ID enforcement leaves some leg room for those who do not want, or can't get, a REAL ID. But we are keeping watch on the progression of digital ID, which continues to be presented as the solution to worries about fraud and forgery. Governments and private corporations alike are pushing major efforts for rapid digital ID deployments and more frequent presentation of one's ID attributes. Your government ID is one of the narrowest, most static verifications of who you are as a person. Making sure that information is not used to create a centralized system of information was as important yesterday with REAL ID as it is today with digital IDs.

Age Verification in the European Union: The Commission's Age Verification App

This is the second part of a three-part series about age verification in the European Union. In this blog post, we take a deep dive into the age verification app solicited by the European Commission, based on digital identities. Part one gives an overview of the political debate around age verification in the EU and part three explores measures to keep all users safe that do not require age checks. 

In part one of this series on age verification in the European Union, we gave an overview of the state of the debate in the EU and introduced an age verification app, or mini-wallet, that the European Commission has commissioned. In this post, we will take a more detailed look at the app, how it will work and what some of its shortcomings are.

According to the original tender and the app’s recently published specifications, the Commission is soliciting the creation of a mobile application that will act as a digital wallet by storing a proof of age to enable users to verify their ages and access age-restricted content.

After downloading the app, a user would request proof of their age. For this crucial step, the Commission foresees users relying on a variety of age verification methods, including national eID schemes, physical ID cards, linking the app to another app that contains information about a user’s age, like a banking app, or age assessment through third parties like banks or notaries. 

In the next step, the age verification app would generate a proof of age. When the user accesses a website that restricts content for certain age cohorts, the platform would request proof of the user's age through the app. The app would then present that proof, allowing the online service to verify the age attestation, and the user would gain access to the age-restricted website or content in question. The goal is to build an app that aligns and integrates with the architecture of the upcoming EU Digital Identity Wallet.

The user journey of the European Commission's age verification app
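The attestation flow described above can be sketched in a few lines. This is an illustrative model only: the names, data shapes, and trust model are our assumptions, not the Commission's actual API, and real proofs would be cryptographic rather than plain booleans.

```python
# Sketch of the issue -> request -> present -> verify flow described above.
# All names here are hypothetical; ages are approximated from birth year.
from dataclasses import dataclass

@dataclass
class AgeProof:
    over_18: bool        # only the yes/no claim, not the birth date
    issuer: str          # e.g. a national eID scheme or a bank

def issue_proof(birth_year: int, current_year: int, issuer: str) -> AgeProof:
    """Step 1: the wallet obtains a minimal proof of age from an issuer."""
    return AgeProof(over_18=(current_year - birth_year >= 18), issuer=issuer)

def verify(proof: AgeProof, trusted_issuers: set[str]) -> bool:
    """Step 2: the website checks the attestation, never seeing the birth date."""
    return proof.issuer in trusted_issuers and proof.over_18

proof = issue_proof(2000, 2025, issuer="national-eid")
assert verify(proof, trusted_issuers={"national-eid"})
```

The key property the Commission is aiming for is visible even in this toy version: the verifier only ever sees the yes/no claim and the issuer, never the underlying identity document.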

Review of the Commission’s Specifications for an Age Verification Mini-ID Wallet 

According to the specifications for the app, interoperability, privacy and security are key concerns for the Commission in designing the main requirements of the app. It acknowledges that the development of the app is far from finished, but an iterative process, and that key areas require feedback from stakeholders across industry and civil society.

The specifications consider important principles to ensure the security and privacy of users verifying their age through the app, including data minimization, unlinkability (to ensure that only the identifiers required for specific linkable transactions are disclosed), storage limitations, transparency and measures to secure user data and prevent the unauthorized interception of personal data. 

However, taking a closer look at the specifications, many of the mechanisms envisioned to protect users' privacy are not hard requirements, but optional. For example, the app should implement salted hashes and Zero Knowledge Proofs (ZKPs), but is not required to do so. Indeed, the app's specifications seem to rely heavily on ZKPs, while simultaneously acknowledging that no compatible ZKP solution is currently available. This warrants a closer inspection of what ZKPs are and why they may not be the final answer to protecting users' privacy in the context of age verification.
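To make the optional salted-hash measure concrete, here is a minimal sketch of the principle, assuming a fresh random salt per presentation (the specifications do not pin down the exact construction): hashing the same identifier with a different salt each time yields values that two verifiers cannot correlate.

```python
import hashlib
import secrets

def salted_id(user_id: str) -> str:
    """Derive a one-time identifier: a fresh 16-byte salt per presentation
    means two presentations of the same user cannot be linked by this value."""
    salt = secrets.token_bytes(16)
    return hashlib.sha256(salt + user_id.encode()).hexdigest()

# The same user yields unlinkable identifiers across presentations:
a, b = salted_id("alice"), salted_id("alice")
assert a != b
```

Note the flip side: because the values are unlinkable, the verifier learns nothing stable about the user, which is exactly why the specifications treat this as a privacy measure rather than an authentication one.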

A Closer Look at Zero Knowledge Proofs

Zero Knowledge Proofs provide a cryptographic way to prove something about a piece of information, like your exact date of birth, without giving the information itself away. They can offer a "yes-or-no" claim (like above or below 18) to a verifier enforcing a legal age threshold. Two properties of ZKPs are "soundness" and "zero knowledge." Soundness appeals to verifiers and governments because it makes it hard for a prover to present forged information. Zero knowledge benefits the holder, because they don't have to share explicit information, just proof that said information exists. This is objectively more secure than uploading a picture of your ID to multiple sites or applications, but it still requires an initial ID upload for activation, as mentioned above.

This scheme makes several questionable assumptions: first, that frequently used ZKPs will avoid privacy concerns, and second, that verifiers won't combine this data with existing information, such as account data, profiles, or interests, for other purposes, such as advertising. The European Commission plans to test these assumptions with extremely sensitive data: government-issued IDs. Though ZKPs are a better approach, this is a brand new system affecting millions of people, who will be asked to provide an age proof with potentially higher frequency than ever before. That rolls the dice on the resiliency of these privacy measures over time. Furthermore, not all ZKP systems are the same, and while there is research on their use on mobile devices, this rush to implementation before the research matures puts users at risk.
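The soundness and zero-knowledge properties can be illustrated with a classic Schnorr-style proof of knowledge. This is a textbook example with deliberately toy-sized, insecure parameters, not the (still unspecified) ZKP scheme the Commission would adopt: the prover convinces the verifier it knows a secret x behind the public value y, without ever revealing x.

```python
import hashlib
import secrets

# Toy Schnorr proof (Fiat-Shamir, non-interactive). Demo parameters only.
p = 2**127 - 1          # small Mersenne prime as the group modulus
q = p - 1               # exponents are taken mod p - 1
g = 3                   # generator-like base element

def prove(secret_x: int) -> tuple[int, int, int]:
    y = pow(g, secret_x, p)                  # public value the verifier knows
    r = secrets.randbelow(q)
    t = pow(g, r, p)                         # commitment
    c = int.from_bytes(hashlib.sha256(f"{t}{y}".encode()).digest(), "big") % q
    s = (r + c * secret_x) % q               # response; reveals nothing about x
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    c = int.from_bytes(hashlib.sha256(f"{t}{y}".encode()).digest(), "big") % q
    # Soundness check: g^s == t * y^c holds only if the prover knew x
    return pow(g, s, p) == (t * pow(y, c, p)) % p

y, t, s = prove(secret_x=123456789)
assert verify(y, t, s)   # verifier learns that a valid x exists, not x itself
```

An age-over-18 ZKP works on the same principle but proves a statement about a signed credential attribute rather than a bare secret, which is precisely the part for which, as noted above, no compatible production-ready solution currently exists.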

Who Can Ask for Proof of Your Age?

Regulation of verifiers (the service providers asking for age attestations) and of what they can ask for is just as important, to limit a potential flood of verifiers that didn't previously need age verification. This is especially true for non-Know-Your-Customer (KYC) cases, in which service providers are not required to perform due diligence on their users. Equally important are rules that determine the consequences when verifiers violate those regulations. Until recently, the eIDAS framework, whose technical implementation is still being negotiated, required registration certificates for verifiers across all EU member states. By forcing verifiers to register the data categories they intend to ask for, issues like illegal data requests were supposed to be mitigated. But this requirement has now been rolled back, and the Commission's planned mini-ID wallet will not require it in the beginning. Users will be asked to prove how old they are without the restraint on verifiers that protects against request abuse. Without verifier accountability, or at least a determined scope for industry-level data categories, users are being asked to enter an imbalanced relationship. An earlier mock-up gave some hope for empowered selective disclosure, where a user could toggle individual pieces of information on and off at the time of the verifier's request. It would be more proactive still to offer that setting in the holder's wallet settings, before a relying party ever makes a request.

Privacy tech is offered in this system as a concession to users forced to share information even more frequently, rather than as an additional way to bring equity to existing interactions with those who hold power through mediating access to information, loans, jobs, and public benefits. Words mean things, and ZKPs are not the solution, but part of one. Most ZKP systems are more focused on making proof and verification times more efficient than they are on privacy itself. The latest research on digital credentials has produced more privacy-oriented ways to share information. But at this scale, we will need regulation and added measures against aggressive verification to fulfill the promise of better privacy for eID use.

Who Will Have Access to the Mini-ID Wallet, and Who Will Be Left Out?

Beyond its technical specifications, the proposed app raises a number of accessibility and participation issues. At its heart, the mini-ID wallet will rely on the verification of a user's age through a proof of age. According to the tender, the wallet should support four methods for issuing and proving a user's age.

Different age verification methods foreseen by the app

The first option is national eID schemes, an obvious choice: many Member States are currently working on (or have already notified) national eID schemes in the context of eIDAS, Europe's eID framework. The goal is to allow the mini-ID wallet to integrate with the eIDAS node operated by the European Commission to verify a user's age. But although many Member States are working on national eID schemes, uptake of eIDs has so far been slow, and it's questionable whether an EU-wide rollout of eIDs will be successful.

But even if an EU-wide roll out was achievable, many will not be able to participate. Those who are not in possession of ID cards, passports, residence permits, or documents like birth certificates will not be able to attain an eID and will be at risk of losing access to knowledge, information, and services. This is especially relevant for already marginalized groups like refugees or unhoused people who may lose access to critical resources. But also many children and teenagers will not be able to participate in eID schemes. There are no EU-wide rules on when children need to have government-issued IDs, and while some countries, like Germany, mandate that every citizen above the age of 16 possess an ID, others, like Sweden, don’t require their citizens to have an ID or passport. In most EU Member States, the minimum age at which children can apply for an ID without parental consent is 18. So even in cases where children and teenagers may have a legal option to get an ID, their parents might withhold consent, thereby making it impossible for a child to verify their age in order to access information or services online.

The second option is so-called smartcards: physical eID cards such as national ID cards, e-passports or other trustworthy physical eID cards. The same limitations as for eIDs apply. Additionally, the Commission's tender suggests the mini-ID wallet will rely on biometric recognition software to compare a user to the physical ID card they are using to verify their age. This raises a host of questions regarding the processing and storing of sensitive biometric data. A recent study by the National Institute of Standards and Technology compared different age estimation algorithms based on biometric data and found that certain ethnicities are still underrepresented in training data sets, exacerbating the risk that age estimation systems discriminate against people of color. The study also reports higher error rates for female faces than for male faces, and that overall accuracy is strongly influenced by factors people have no control over, including "sex, image quality, region-of-birth, age itself, and interactions between those factors." Other studies on the accuracy of biometric recognition software have reported higher error rates for people with disabilities as well as trans and non-binary people.

The third option foresees a procedure to allow for the verification of a user’s identity through institutions like a bank, a notary, or a citizen service center. It is encouraging that the Commission’s tender foresees an option for different, non-state institutions to verify a user’s age. But neither banks nor notary offices are especially accessible for people who are undocumented, unhoused, don’t speak a Member State’s official language, or are otherwise marginalized or discriminated against. Banks and notaries also often require a physical ID in order to verify a client’s identity, so the fundamental access issues outlined above persist.

Finally, the specification suggests that third party apps that already have verified a user's identity, like banking apps or mobile network operators, could provide age verification signals. In many European countries, however, showing an ID is a necessary prerequisite for opening a bank account, setting up a phone contract, or even buying a SIM card. 

In summary, none of the options the Commission considers to allow for proving someone’s age accounts for the obstacles faced by different marginalized groups, leaving potentially millions of people across the EU unable to access crucial services and information, thereby undermining their fundamental rights. 

The question of which institutions will be able to verify ages is only one dimension of the ramifications of approaches like the mini-ID wallet for accessibility and participation. Although often forgotten in policy discussions, not everyone has access to a personal device. Age verification methods like the mini-ID wallet, which are device dependent, can be a real obstacle for people who share devices, or users who access the internet through libraries, schools, or internet cafés that do not accommodate the use of personal age verification apps. The average number of devices per household has been found to correlate strongly with income and education levels, further underscoring the point that it is often those already on the margins of society who are at risk of being left behind by age verification mandates based on digital identities.

This is why we need to push back against age verification mandates. Not because child safety is not a concern – it is. But because age verification mandates risk undermining crucial access to digital services, eroding privacy and data protection, and limiting the freedom of expression. Instead, we must ensure that the internet remains a space where all voices can be heard, free from discrimination, and where we do not have to share sensitive personal data to access information and connect with each other.

Digital Identities and the Future of Age Verification in Europe

This is the first part of a three-part series about age verification in the European Union. In this blog post, we give an overview of the political debate around age verification and explore the age verification proposal introduced by the European Commission, based on digital identities. Part two takes a closer look at the European Commission’s age verification app, and part three explores measures to keep all users safe that do not require age checks. 

As governments across the world pass laws to “keep children safe online,” more often than not, notions of safety rest on platforms, websites, and online entities being able to discern users by age. This legislative trend has also arrived in the European Union, where online child safety is becoming one of the issues that will define European tech policy for years to come.

Like many policymakers elsewhere, European regulators are increasingly focused on a range of online harms they believe are associated with online platforms, such as compulsive design and the effects of social media consumption on children’s and teenagers’ mental health. Many of these concerns lack robust scientific evidence; studies have drawn a far more complex and nuanced picture of how social media and young people’s mental health interact. Still, calls for mandatory age verification have become as ubiquitous as they have become trendy. Heads of state in France and Denmark have recently called for banning under-15-year-olds from social media Europe-wide, while Germany, Greece and Spain are working on their own age verification pilots.

EFF has been fighting age verification mandates because they undermine the free expression rights of adults and young people alike, create new barriers to internet access, and put at risk all internet users’ privacy, anonymity, and security. We do not think that requiring service providers to verify users’ age is the right approach to protecting people online. 

Policy makers frame age verification as a necessary tool to prevent children from accessing content deemed unsuitable, to be able to design online services appropriate for children and teenagers, and to enable minors to participate online in age appropriate ways. Rarely is it acknowledged that age verification undermines the privacy and free expression rights of all users, routinely blocks access to resources that can be life saving, and undermines the development of media literacy. Rare, too, are critical conversations about the specific rights of young users: The UN Convention on the Rights of the Child clearly expresses that minors have rights to freedom of expression and access to information online, as well as the right to privacy. These rights are reflected in the European Charter of Fundamental Rights, which establishes the rights to privacy, data protection and free expression for all European citizens, including children. These rights would be steamrolled by age verification requirements. And rarer still are policy discussions of ways to improve these rights for young people.

Implicitly Mandatory Age Verification

Currently, there is no legal obligation to verify users’ age in the EU. However, different European legal acts that recently entered into force or are being discussed implicitly require providers to know users’ ages or suggest age assessments as a measure to mitigate risks for minors online. At EFF, we consider these proposals akin to mandates because there is often no alternative method to comply except to introduce age verification. 

Under the General Data Protection Regulation (GDPR), in practice, providers will often need to implement some form of age verification or age assurance (depending on the type of service and risks involved): Article 8 stipulates that the processing of personal data of children under the age of 16 requires parental consent. Thus, service providers are implicitly required to make reasonable efforts to assess users’ ages – although the law doesn’t specify what “reasonable efforts” entails. 

Another example is the child safety article (Article 28) of the Digital Services Act (DSA), the EU’s recently adopted new legal framework for online platforms. It requires online platforms to take appropriate and proportionate measures to ensure a high level of safety, privacy and security of minors on their services. The article also prohibits targeting minors with personalized ads. The DSA acknowledges that there is an inherent tension between ensuring a minor’s privacy, and taking measures to protect minors specifically, but it's presently unclear which measures providers must take to comply with these obligations. Recital 71 of the DSA states that service providers should not be incentivized to collect the age of their users, and Article 28(3) makes a point of not requiring service providers to collect and process additional data to assess whether a user is underage. The European Commission is currently working on guidelines for the implementation of Article 28 and may come up with criteria for what they believe would be effective and privacy-preserving age verification. 

The DSA does explicitly name age verification as one measure the largest platforms – so called Very Large Online Platforms (VLOPs) that have more than 45 million monthly users in the EU – can choose to mitigate systemic risks related to their services. Those risks, while poorly defined, include negative impacts on the protection of minors and users’ physical and mental wellbeing. While this is also not an explicit obligation, the European Commission seems to expect adult content platforms to adopt age verification to comply with their risk mitigation obligations under the DSA. 

Adding another layer of complexity, age verification is a major element of the dangerous European Commission proposal to fight child sexual abuse material through mandatory scanning of private and encrypted communication. While the negotiations of this bill have largely stalled, the Commission’s original proposal puts an obligation on app stores and interpersonal communication services (think messaging apps or email) to implement age verification. While the European Parliament has followed the advice of civil society organizations and experts and has rejected the notion of mandatory age verification in its position on the proposal, the Council, the institution representing member states, is still considering mandatory age verification. 

Digital Identities and Age Verification 

Leaving aside the various policy work streams that implicitly or explicitly consider whether age verification should be introduced across the EU, the European Commission seems to have decided on the how: Digital identities.

In 2024, the EU adopted the updated version of the so-called eIDAS Regulation, which sets out a legal framework for digital identities and authentication in Europe. Member States are now working on national identity wallets, with the goal of rolling out digital identities across the EU by 2026.

Despite the imminent roll out of digital identities in 2026, which could facilitate age verification, the European Commission clearly felt pressure to act sooner than that. That’s why, in the fall of 2024, the Commission published a tender for a “mini-ID wallet”, offering four million euros in exchange for the development of an “age verification solution” by the second quarter of 2025 to appease Member States anxious to introduce age verification today. 

Favoring digital identities for age verification follows an overarching trend of pushing age assessment obligations ever further down the stack – from apps to app stores to operating system providers. Dealing with age verification at the app store, device, or operating system level is also a demand long made by providers of social media and dating apps seeking to avoid liability for insufficient age verification. Embedding age verification at the device level will make it more ubiquitous and harder to avoid. This is a dangerous direction; digital identity systems raise serious concerns about privacy and equity.

This approach will likely also lead to mission creep: While the Commission limits its tender to age verification for 18+ services (specifically adult content websites), it is made abundantly clear that once available, age verification could be extended to “allow age-appropriate access whatever the age-restriction (13 or over, 16 or over, 65 or over, under 18 etc)”. Extending age verification is even more likely when digital identity wallets don’t come in the shape of an app, but are baked into operating systems. 

In the next post of this series, we will be taking a closer look at the age verification app the European Commission has been working on.

Certbot 4.0: Long Live Short-Lived Certs!

April 10, 2025, 18:50

When Let’s Encrypt, a free certificate authority, started issuing 90-day TLS certificates for websites, it was considered a bold move that helped push the ecosystem towards shorter certificate lifetimes. Before that, certificate authorities normally issued certificates with lifetimes of a year or more. With 4.0, Certbot now supports Let’s Encrypt’s new capability for six-day certificates through ACME profiles, plus dynamic renewal at:

  • 1/3rd of lifetime left
  • 1/2 of lifetime left, if the lifetime is shorter than 10 days
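The renewal rule above is easy to model. This is a sketch of the described thresholds, not Certbot's actual renewal code:

```python
from datetime import datetime, timedelta

def renewal_time(not_before: datetime, not_after: datetime) -> datetime:
    """Renew with 1/3 of the lifetime left, or 1/2 of the lifetime left
    if the certificate lives less than 10 days (per the rule above)."""
    lifetime = not_after - not_before
    fraction = 1 / 2 if lifetime < timedelta(days=10) else 1 / 3
    return not_after - lifetime * fraction

issued = datetime(2025, 4, 10)
# A 90-day certificate is renewed 30 days before expiry:
assert renewal_time(issued, issued + timedelta(days=90)) == issued + timedelta(days=60)
# A 6-day certificate is renewed 3 days before expiry:
assert renewal_time(issued, issued + timedelta(days=6)) == issued + timedelta(days=3)
```

In practice this means a six-day certificate is renewed roughly every three days, so an outage in the renewal pipeline surfaces quickly instead of at the end of a 90-day window.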

There are a few significant reasons why shorter lifetimes are better:

  • If a certificate's private key is compromised, that compromise can't last as long.
  • Shorter certificate lifespans encourage automation, which facilitates robust web server security.
  • Certificate revocation is historically flaky. Lifetimes of 10 days and under avoid the need to invoke the revocation process when dealing with a compromised key.

There is debate on how short these lifetimes should be, but with ACME profiles you can keep the default or “classic” Let’s Encrypt experience (90 days) or start actively using other profile types through Certbot with the --preferred-profile and --required-profile flags. For six-day certificates, choose the “shortlived” profile.
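Putting the flags from the post together, a request for a six-day certificate might look like the following (domain is a placeholder; exact plugin flags depend on your setup):

```shell
# Request a six-day certificate via Let's Encrypt's "shortlived" ACME profile.
# --preferred-profile falls back to the server default if the profile is
# unavailable; use --required-profile instead to fail rather than fall back.
certbot certonly --preferred-profile shortlived -d example.com
```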

These new options are just the beginning of the modern features the ecosystem can support and we are glad to have dynamic renewal times to start leveraging a more agile web that facilitates better security and flexible options for everyone. Thank you to the community and the Certbot team for making this happen!

UPDATE (05/02/2025): To clear up any confusion, Certbot offers support for these profiles but Let's Encrypt plans to have this feature fully available by the end of this year.

Love ♥️ Certbot as much as us? Donate today to support this work.


Simple Phish Bait: EFF Is Not Investigating Your Albion Online Forum Account

We recently learned that users of the Albion Online gaming forum have received direct messages purporting to be from us. That message, which leverages the fear of an account ban, is a phishing attempt.

If you’re an Albion Online forum user and receive a message that claims to be from “the EFF team,” don’t click the link, and be sure to use the in-forum reporting tool to report the message and the user who sent it to the moderators.

A screenshot of the message shared by a user of the forums.

The message itself has some of the usual hallmarks of a phishing attempt, including tactics like creating a sense of fear that your account may be suspended, leveraging the name of a reputable group, and further raising your heart rate with claims that the message needs a quick response. The goal appears to be to get users to download a PDF file designed to deliver malware. That PDF even uses our branding and typefaces (mostly) correctly.

A full walkthrough of this malware and what it does was published by the Hunt team. The PDF is a trojan (malware disguised as a non-malicious file or program) with an embedded script that calls out to an attacker server. The attacker server then sends a “stage 2” payload that installs itself onto the user’s device. The attack structure was identified as the Pyramid C2 framework. In this case, the malware targets the Windows operating system. It takes a variety of actions, like writing and modifying files on the victim’s physical drive. But the most worrisome discovery is that it appears to connect the user’s device to a malicious botnet and has potential access to the “VaultSvc” service, which securely stores user credentials such as usernames and passwords.

File-based IoCs:
act-7wbq8j3peso0qc1.pages[.]dev/819768.pdf
Hash: 4674dec0a36530544d79aa9815f2ce6545781466ac21ae3563e77755307e0020
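If you suspect you already downloaded a file like this, one quick first check is to compare its SHA-256 digest against published IoCs such as the hash above. A minimal sketch (the file path you pass in is your own; nothing here is specific to this campaign):

```python
import hashlib

# SHA-256 hash published in the IoC list above
KNOWN_BAD = "4674dec0a36530544d79aa9815f2ce6545781466ac21ae3563e77755307e0020"

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known_bad(path: str) -> bool:
    """True if the file matches the published IoC hash."""
    return sha256_of(path) == KNOWN_BAD
```

Note that a hash only catches this exact file; repacked variants will hash differently, which is why hash checks complement, rather than replace, the habits described below.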

This incident is a good reminder that often, the best ways to avoid malware and phishing attempts are the same: avoid clicking strange links in unsolicited emails, keep your computer’s software updated, and always scrutinize messages claiming to come from computer support or fraud detection. If a message seems suspect, try to verify its authenticity through other channels—in this case, poking around on the forum and asking other users before clicking on anything. If you ever absolutely must open a file, do so in an online document reader, like Google Drive, or try sending the link through a tool like VirusTotal, but try to avoid opening suspicious files whenever possible.

For more information to help protect yourself, check out our guides for protecting yourself from malware and avoiding phishing attacks.

One Down, Many to Go with Pre-Installed Malware on Android

November 27, 2024 at 17:56

Last year, we investigated a Dragon Touch children’s tablet (KidzPad Y88X 10) and confirmed that it was linked to a string of fully compromised Android TV boxes that also had multiple reports of malware, adware, and a sketchy firmware update channel. Since then, Google has removed the (now former) tablet distributor from its list of Play Protect certified phones and tablets. The burden of catching this type of threat should not be placed on the consumer. Due diligence by manufacturers, distributors, and resellers is the only way to keep pre-installed compromised devices out of the hands of unknowing customers, and regulation and transparency need to be part of that strategy.

As of October, Dragon Touch is no longer selling tablets on its website. However, lingering inventory is still out there at retailers like Amazon and Newegg. Storefronts sometimes exist only on reseller sites for better customer reach, but considering Dragon Touch also wiped its blog of any mention of its tablets, we assume a little more than a strategy shift happened here.

We wrote a guide to help parents set up their kid’s Android devices safely, but it’s difficult to choose which device to purchase in the first place. Advising people to simply buy a more expensive iPad or Amazon Fire Tablet doesn’t change the fact that people are going to purchase low-budget devices. Lower-budget devices could be just as reputable if the ecosystem provided a path for better accountability.

Who is Responsible?

There are some tools in development for consumer education, like the newly developed, voluntary Cyber Trust Mark from the FCC. This label aims to inform consumers of an IoT device’s capabilities and guarantee that minimum security standards were met. However, expecting the consumer to carry the burden of checking for pre-installed malware is absolutely ridiculous. Responsibility should fall to regulators, manufacturers, distributors, and resellers to check for this kind of threat.

More often than not, you can search for low-budget Android devices on retailers like Amazon or Newegg and find storefront pages with little transparency about who runs the store and whether the devices come from a reputable distributor. This is true for more than just Android devices, but considering how many products are created for and with the Android ecosystem, working on this problem could mean better security for thousands of products.

Yes, it is difficult to track hundreds to thousands of distributors and all of their products. It is hard to keep up with rapidly developing threats in the supply chain. You can’t possibly know of every threat out there.

With all due respect to giant resellers, especially the multi-billion dollar ones: tough luck. This is what you inherit when you want to “sell everything.” You also inherit the responsibility and risk of each market you encroach on or supplant.

Possible Remedy: Firmware Transparency

Thankfully, there is hope on the horizon and tools exist to monitor compromised firmware.

Last year, Google presented Android Binary Transparency in response to pre-installed malware. It would help track compromised firmware using two components:

  • An append-only log of firmware information that is immutable, globally observable, consistent, and auditable, with these properties assured cryptographically.
  • A network of participants that invest in witnesses, log health, and standardization.

Google is not the first to think of this concept; it largely draws on the success of Certificate Transparency. Still, better support directly from the Android ecosystem for Android images would definitely help. It would create an ecosystem of transparency, allowing manufacturers and developers that build on the Android Open Source Project (AOSP) to be just as trusted as higher-priced brands.
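The data structure at the heart of such a log, in Certificate Transparency and in Google's proposal alike, is a Merkle tree: each firmware entry is hashed into a leaf, and the tree's root commits to the entire history, so any rewrite of past entries is detectable. A toy sketch of the idea (not Google's implementation; only the 0x00/0x01 domain-separation prefixes are borrowed from RFC 6962):

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class AppendOnlyLog:
    """Toy Merkle log: the root changes on every append, and rewriting
    any past entry produces a different root, so tampering is detectable."""

    def __init__(self):
        self.leaves = []

    def append(self, firmware_info: bytes) -> None:
        # 0x00 prefix separates leaf hashes from interior-node hashes
        self.leaves.append(_h(b"\x00" + firmware_info))

    def root(self) -> bytes:
        nodes = list(self.leaves)
        if not nodes:
            return _h(b"")
        while len(nodes) > 1:
            paired = []
            for i in range(0, len(nodes) - 1, 2):
                # 0x01 prefix marks an interior node
                paired.append(_h(b"\x01" + nodes[i] + nodes[i + 1]))
            if len(nodes) % 2:
                paired.append(nodes[-1])  # odd node carries up unchanged
            nodes = paired
        return nodes[0]
```

A verifier who stores yesterday's root can detect if a log operator quietly swaps out a previously published firmware entry, which is exactly the property pre-installed-malware auditing needs.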

We love open source here at EFF and would like to continue to see innovation and availability in devices that aren’t necessarily created by bigger, more expensive names. But there needs to be an accountable ecosystem for these products so that pre-installed malware can be more easily detected and doesn’t land in consumer hands so easily. Right now you can verify your Pixel device if you have a little technical skill. We would like verification to be done by regulators and/or distributors instead of asking consumers to break out their command lines to verify devices themselves.

It would be ideal to see existing programs like Android Play Protect certified run a log like this with open-source log implementations, like Trillian. This way, security researchers, resellers, and regulating bodies could begin to monitor and query information on different Android Original Equipment Manufacturers (OEMs).

There are tools that exist to verify firmware, but right now this ecosystem is a wishlist of sorts. At EFF, we like to imagine what could be better. While a hosted comprehensive log of Android OEMs doesn’t currently exist, the tools to create it do. Some early participants for accountability in the Android realm include F-Droid’s Android SDK Transparency Log and the Guardian Project’s (Tor) Binary Transparency Log.

Time would be better spent solving this problem systemically than on researching whether every new electronic evil rectangle or IoT device has malware.

A complementary solution to binary transparency is the Software Bill of Materials (SBOM). Think of this as a “list of ingredients” that make up software. The idea is not new, but it has gathered more institutional and government support. The components listed in an SBOM can highlight issues or vulnerabilities reported against particular parts of a piece of software. Without binary transparency, though, researchers, verifiers, auditors, and others could still be left attempting to extract firmware from devices whose makers haven’t published their images. If manufacturers readily provided these images, SBOMs could be generated more easily, helping create a less opaque market for electronics, low budget or not.
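As a sketch of the idea, a minimal SBOM just enumerates a build's components and versions so they can be matched against vulnerability reports. The fragment below is loosely modeled on the CycloneDX JSON format; the component names are made up for illustration:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {
      "type": "library",
      "name": "example-bootloader",
      "version": "2.1.0"
    },
    {
      "type": "application",
      "name": "vendor-ota-updater",
      "version": "0.9.3"
    }
  ]
}
```

With a list like this published per firmware image, a reseller or auditor can mechanically check each listed component against known-vulnerable or known-malicious versions instead of reverse-engineering the device.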

We are glad to see some movement from last year’s investigations, just in time for Black Friday. More can be done, and we hope to see not only devices taken down more swiftly when reported, especially those with shady components, but also better support for proactive detection. Regardless of how much someone can spend, everyone deserves a safe, secure device that doesn’t have malware crammed into it.

Digital ID Isn't for Everybody, and That's Okay

September 25, 2024 at 18:57

How many times do you pull out your driver’s license a week? Maybe two to four times to purchase age restricted items, pick up prescriptions, or go to a bar. If you get a mobile driver’s license (mDL) or other forms of digital identification (ID) being offered in Google and Apple wallets, you may have to share this information much more often than before, because this new technology may expand the scope of scenarios demanding your ID.

These mDLs and digital IDs are being deployed faster than states can draft privacy protections, even as they enable presenting your ID to more third parties than ever before. While proponents of these digital schemes emphasize a convenience factor, these IDs can easily expand into new territories like controversial age verification bills that censor everyone. Moreover, digital ID is simultaneously being tested in sensitive situations and expanded into a potential regime of unprecedented data tracking.

In the digital ID space, the question of “how can we do this right?” often crowds out the more pertinent question of “should we do this at all?” While there are highly recommended safeguards for these new technologies, we must always support each person’s right to keep using physical documentation instead of going digital. We must also do more to bring understanding of, and decision power over, these technologies to everyone, rather than overzealously promoting them as a potential equalizer.

What’s in Your Wallet?

With modern hardware, phones can now safely store more sensitive data and credentials with higher levels of security. This enables functionalities like Google and Apple Pay exchanging transaction data online with e-commerce sites. While there’s platform-specific terminology, the general term to know is “Trusted Platform Module” (TPM). This hardware enables “Trusted Execution Environments” (TEEs), isolated areas in which sensitive data can be processed. Most modern phones, tablets, and laptops come with TPMs.

Digital IDs are treated at a higher level of security within the Google and Apple wallets (as they should be). So if you have an mDL provisioned on a device, the contents of the mDL are not “synced to the cloud.” Instead, they stay on that device, and you have the option to remotely wipe the credential if the device is stolen or lost.

Moving away from the digital wallets already common on most phones, some states have their own wallet apps for mDLs that must be downloaded from an app store. The security of these applications can vary, along with the data they can and can’t see. Different private partners have been making wallet/ID apps for different states, including IDEMIA, Thales, and Spruce ID, to name a few. Digital identity frameworks, like Europe’s eIDAS, have been creating language and provisions for “open wallets,” where you don’t necessarily have to rely on big tech for a safe and secure wallet.

However, privacy and security need to be paramount. If privacy is an afterthought, digital IDs can quickly become yet another gold mine of breaches for data brokers and bad actors.

New Announcements, New Scope

Digital ID has been moving fast this summer.

Proponents of digital ID frequently present the “over 21” example, which is often described like this:

You go to the bar, you present a claim from your phone that you are over 21, and a bouncer confirms the claim with a reader device for a QR code or a tap via NFC. Very private. Very secure. Said bouncer will never know your address or other information. Not even your name. This is called an “abstract claim”: instead of the more-sensitive information, only a less-sensitive attestation is exchanged with the verifier, like an age threshold rather than your date of birth and name.
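Mechanically, an attestation like this can be modeled as an issuer-signed statement that contains only the claim, not the underlying license data. A toy sketch follows, using an HMAC in place of the real mdoc signature scheme purely for illustration; real deployments use public-key signatures, so the verifier never shares a secret with the issuer, and the key and field names here are invented:

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"dmv-secret-key"  # stand-in for the issuer's real signing key

def issue_age_claim(over_21: bool) -> dict:
    """Issuer signs only the abstract claim: no name, address, or birth date."""
    claim = {"age_over_21": over_21}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify_age_claim(token: dict) -> bool:
    """Verifier checks the signature; it never sees the full license data."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"]) and token["claim"]["age_over_21"]
```

The point of the sketch is the shape of the data: the token the bouncer's reader sees carries a single boolean plus a signature, nothing else.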

But there is a high privacy price to pay for this marginal privacy benefit. mDLs will not just swap in as a one-to-one replacement for your physical ID. Rather, they are likely to expand the scenarios where businesses and government agencies demand that you prove your identity before entering physical and digital spaces or accessing goods and services. Our personal data would be passed around more frequently than ever, with identity verified online several times a day or week across multiple parties. This privacy menace far surpasses the minor danger of a bar bouncer collecting, storing, and using your name and address after glancing at your birth date on your plastic ID for five seconds in passing. Even in cases where bars do scan IDs, we’re being asked to trade one potential privacy risk for a far larger one created by digital ID presentation across the internet.

While there are efforts to enable private businesses to read mDLs, these credentials today are mainly being used with the TSA. In contracts and agreements we have seen with Apple, the company largely controls the marketing and visibility of mDLs.

In another push to boost adoption, Android allows you to create a digital passport ID for domestic travel. This development must be seen through the lens of the federal government’s 20-year effort to impose “REAL ID” on state-issued identification systems. REAL ID is an objective failure of a program that pushes for regimes that strip privacy from everyone and further marginalize undocumented people. While federal-level use of digital identity is so far limited to TSA, this use can easily expand. TSA wants to propose rules for mDLs in an attempt (the agency says) to “allow innovation” by states while it contemplates uniform rules for everyone. This is concerning, as the scope of TSA—and its parent agency, the Department of Homeland Security—is very wide. Whatever they decide now for digital ID will have implications well beyond the airport.

Equity First > Digital First

We are seeing new digital ID plans being discussed for the most vulnerable of us. Digital ID must be designed for equity (as well as for privacy).

With Google’s Digital Credential API and Apple’s IP&V Platform (as named from the agreement with California), these two major companies are going to be in direct competition with current age verification platforms. This alarmingly sets up the capacity for anyone to ask for your ID online. This can spread beyond content that is commonly age-gated today. Different states and countries may try to label additional content as harmful to children (such as LGBTQIA content or abortion resources), and require online platforms to conduct age verification to access that content.

For many of us, opening a bank account is routine, and digital ID sounds like a way to make it more convenient. But millions of working-class people are currently unbanked, and digital IDs won’t solve their problems. Many people can’t get simple services and documentation for a variety of reasons that come with having a low income, and millions of people in this country don’t have identification at all. We shouldn’t deploy age verification regimes against people who often face barriers to compliance, such as license suspension for unpaid fines unrelated to traffic safety. Without regulation that accounts for nuanced lives, a new technical system that strips the friction out of age verification will simply deliver an expedited, automated “NO.”

Another issue is that many people lack a smartphone or an up-to-date smartphone, or may share one with their family. Many proponents of “digital first” solutions assume a fixed ratio of one smartphone per person. While this assumption may work for some, others will need humans to talk to on the phone or face-to-face to access vital services. In the case of an mDL, you still need to upload your physical ID just to obtain one, and you still need to carry a physical ID on your person. Digital ID cannot bypass the problem that some people don’t have physical ID at all. Failure to account for this is a rush to perceived solutions over real problems.

Inevitable?

No, digital identity shouldn’t be inevitable for everyone: many people don’t want it or lack the resources to get it. The dangers posed by digital identity don’t have to be inevitable either, if states legislate protections for people. It would also be great (for the nth time) to have a comprehensive federal privacy law. Illinois recently passed a law that at least attempts to address mDL scenarios with law enforcement. At the very minimum, law enforcement should be prohibited from using consent for mDL scans to conduct illegal searches. Florida completely removed its mDL app from app stores and asked residents who had it to delete it; it is good the state did not simply keep the app around for the sake of pushing digital ID without addressing a clear issue.

State and federal embrace of digital ID is based on claims of faster access, fraud prevention, and convenience. But with digital ID being proposed as a means of online verification, it is just as likely to block claims of public assistance as to facilitate them. That’s why legal protections are at least as important as the digital IDs themselves.

Lawmakers should ensure better access for people with or without a digital ID.

 

A Wider View on TunnelVision and VPN Advice

If you listen to any podcast long enough, you will almost certainly hear an advertisement for a Virtual Private Network (VPN). These advertisements usually assert that a VPN is the only tool you need to stop cyber criminals, malware, government surveillance, and online tracking. But these advertisements vastly oversell the benefits of VPNs. The reality is that VPNs are mainly useful for one thing: routing your network connection through a different network. Many people, including EFF, thought that VPNs were also a useful tool for encrypting your traffic when you don’t trust the network you’re on, such as at a coffee shop, university, or hacker conference. But new research from Leviathan Security serves as a reminder that this may not be the case, and it highlights the limited use cases for VPNs.

TunnelVision is a recently published attack method that can allow an attacker on a local network to force internet traffic to bypass your VPN and route over an attacker-controlled channel instead. This allows the attacker to see any unencrypted traffic (such as what websites you are visiting). Traditionally, corporations deploy VPNs for employees to access private company sites from other networks. Today, many people use a VPN in situations where they don't trust their local network. But the TunnelVision exploit makes it clear that a VPN will not always protect you from an attacker on your local network, so “untrusted network” is not always an appropriate threat model for a VPN.

TunnelVision exploits the Dynamic Host Configuration Protocol (DHCP) to reroute traffic outside of a VPN connection. The VPN connection itself is preserved, not broken, but the attacker is able to view the unencrypted traffic. Think of DHCP as giving you a nametag when you enter the room at a networking event. The host knows at least 50 guests will be in attendance and has allocated 50 blank nametags. Some nametags may be reserved for VIP guests, but the rest can be handed out to any guest who properly RSVPed to the event. When you arrive, they check your name and then assign you a nametag. You may now properly enter the room and be identified as "Agent Smith." In the case of computers, this “name” is the IP address DHCP assigns to devices on the network. This is normally done automatically by a DHCP server, though an administrator could also hand out addresses manually.

TunnelVision abuses one of the configuration options in DHCP, called Option 121, which lets a DHCP server on the local network push routes onto a targeted device, steering its traffic. There have been past attacks with similar methods, like TunnelCrack, and chances are that if a VPN provider addressed TunnelCrack, they are working on verifying mitigations for TunnelVision as well.
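Option 121 (defined in RFC 3442) packs classless static routes as a prefix length, the significant bytes of the destination, and a four-byte router address. A minimal decoder sketch shows how few bytes it takes to hand a device a route, including a default route (0.0.0.0/0) that covers all traffic:

```python
def decode_option_121(data: bytes) -> list:
    """Decode DHCP Option 121 (RFC 3442) classless static routes.

    Each route is encoded as: 1 byte prefix length, then only the
    significant bytes of the destination (ceil(prefix/8) of them),
    then 4 bytes for the router (next hop).
    """
    routes, i = [], 0
    while i < len(data):
        prefix_len = data[i]
        i += 1
        n = (prefix_len + 7) // 8                   # significant dest bytes
        dest = list(data[i:i + n]) + [0] * (4 - n)  # pad to 4 octets
        i += n
        router = data[i:i + 4]
        i += 4
        routes.append((
            "%d.%d.%d.%d/%d" % (*dest, prefix_len),
            "%d.%d.%d.%d" % tuple(router),
        ))
    return routes
```

A rogue DHCP server only has to include an option like this in its reply; the victim's operating system installs the routes, and traffic matching them flows to the attacker's next hop outside the VPN tunnel.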

In the words of the security researchers who published this attack method:

“There’s a big difference between protecting your data in transit and protecting against all LAN attacks. VPNs were not designed to mitigate LAN attacks on the physical network and to promise otherwise is dangerous.”

Rather than lament the many ways public, untrusted networks can render someone vulnerable, it is worth noting the many protections now provided by default. The internet was not originally built with security in mind, and many have been working hard to rectify this. Today we have many other tools in our toolbox to deal with these problems. For example, web traffic is mostly encrypted with HTTPS. This does not change your IP address like a VPN could, but it does encrypt the contents of the web pages you visit and secure your connection to a website. The Domain Name System (DNS), which is consulted before the HTTPS connection is made, has also been a vector for surveillance and abuse, since the domain of the website you request is exposed at this level. There have been wide efforts to secure and encrypt DNS as well: encrypted DNS and HTTPS by default are now available in every major browser, closing possible attack vectors for snoops on the same network as you. Lastly, major browsers have implemented support for Encrypted Client Hello (ECH), which encrypts the initial connection to a website, sealing off metadata that was previously left in cleartext.

TunnelVision is a reminder that we need to clarify what tools can and cannot do. A VPN does not provide anonymity online and neither can encrypted DNS or HTTPS (Tor can though). These are all separate tools that handle similar issues. Thankfully, HTTPS, encrypted DNS, and encrypted messengers are completely free and usable without a subscription service and can provide you basic protections on an untrusted network. VPNs—at least from providers who've worked to mitigate TunnelVision—remain useful for routing your network connection through a different network, but they should not be treated as a security multi-tool.

Restricting Flipper is a Zero Accountability Approach to Security: Canadian Government Response to Car Hacking

On February 8, François-Philippe Champagne, the Canadian Minister of Innovation, Science and Industry, announced Canada would ban devices used in keyless car theft. The only device mentioned by name was the Flipper Zero—the multitool device that can be used to test, explore, and debug different wireless protocols such as RFID, NFC, infrared, and Bluetooth.


While it is useful as a penetration testing device, the Flipper Zero is impractical compared to other, more specialized devices for car theft. It’s possible social media hype around the Flipper Zero has led people to believe that this device offers easier hacking opportunities for car thieves*. But government officials are consuming that hype as well, and it leads to policies that don’t secure systems but rather impede important research that exposes potential vulnerabilities the industry should fix. Even with Canada walking back its original statement outright banning the devices, restricting devices and sales in order to “move forward with measures to restrict the use of such devices to legitimate actors only” is troublesome for security researchers.

This is not the first government seeking to limit access to Flipper Zero, and we have explained before why this approach is not only harmful to security researchers but also leaves the general population more vulnerable to attacks. Security researchers may not have the specialized tools car thieves use at their disposal, so more general tools come in handy for catching and protecting against vulnerabilities. Broad purpose devices such as the Flipper have a wide range of uses: penetration testing to facilitate hardening of a home network or organizational infrastructure, hardware research, security research, protocol development, use by radio hobbyists, and many more. Restricting access to these devices will hamper development of strong, secure technologies.

When Brazil’s national telecoms regulator Anatel refused to certify the Flipper Zero and as a result prevented the national postal service from delivering the devices, they were responding to media hype. With a display and controls reminiscent of portable video game consoles, the compact form-factor and range of hardware (including an infrared transceiver, RFID reader/emulator, SDR and Bluetooth LE module) made the device an easy target to demonize. While conjuring imagery of point-and-click car theft was easy, citing examples of this actually occurring proved impossible. Over a year later, you’d be hard-pressed to find a single instance of a car being stolen with the device. The number of cars stolen with the Flipper seems to amount to, well, zero (pun intended). It is the same media hype and pure speculation that has led Canadian regulators to err in their judgment to ban these devices.

Still worse, law enforcement in other countries have signaled their own intentions to place owners of the device under greater scrutiny. The Brisbane Times quotes police in Queensland, Australia: “We’re aware it can be used for criminal means, so if you’re caught with this device we’ll be asking some serious questions about why you have this device and what you are using it for.” We assume other tools with similar capabilities, as well as Swiss Army Knives and Sharpie markers, all of which “can be used for criminal means,” will not face this same level of scrutiny. Just owning this device, whether as a hobbyist or professional—or even just as a curious customer—should not make one the subject of overzealous police suspicions.

It wasn’t too long ago that proficiency with the command line was seen as a dangerous skill that warranted intervention by authorities. And just as with those fears of decades past, the small grain of truth embedded in the hype and fears gives it an outsized power. Can the command line be used to do bad things? Of course. Can the Flipper Zero assist criminal activity? Yes. Can it be used to steal cars? Not nearly as well as many other (and better, from the criminals’ perspective) tools. Does that mean it should be banned, and that those with this device should be placed under criminal suspicion? Absolutely not.

We hope Canada wises up to this logic, and comes to view the device as just one of many in the toolbox that can be used for good or evil, but mostly for good.

*Though concerns have been raised about Flipper Devices' connection to the Russian state apparatus, no unexpected data has been observed escaping to Flipper Devices' servers, and much of the dedicated security and pen-testing hardware which hasn't been banned also suffers from similar problems.

Decoding the California DMV's Mobile Driver's License

March 18, 2024 at 21:16

The State of California is currently rolling out a “mobile driver’s license” (mDL), a form of digital identification that raises significant privacy and equity concerns. This post explains the new smartphone application, explores the risks, and calls on the state and its vendor to focus more on protection of the users. 

What is the California DMV Wallet? 

The California DMV Wallet app came out in app stores last year as a pilot, offering the ability to store and display your mDL on your smartphone, without needing to carry and present a traditional physical document. Several features in this app replicate how we currently present the physical document with key information about our identity—like address, age, birthday, driver class, etc. 

However, other features in the app provide new ways to present the data on your driver’s license. Right now, we only take out our driver’s license occasionally throughout the week. But with the app’s QR code and “add-on” features, the incentive to present it more often may grow. This concerns us, given the rise of age verification laws that burden everyone’s access to the internet, and the lack of comprehensive consumer data privacy laws that keep businesses from harvesting and selling identifying information and sensitive personal information.

For now, you can use the California DMV Wallet app with TSA in airports, and with select stores that have opted in to an age verification feature called TruAge. That feature generates a separate QR Code for age verification on age-restricted items in stores, like alcohol and tobacco. This is not simply a one-to-one exchange of going from a physical document to an mDL. Rather, this presents a wider scope of possible usage of mDLs that needs expanded protections for those who use them. While California is not the first state to do this, this app will be used as an example to explain the current landscape.

What’s the QR Code? 

There are two ways to present your information on the mDL: 1) a human readable presentation, or 2) a QR code. 

Scanned with a normal QR code reader, the QR code displays an alphanumeric string of text that starts with “mdoc:”. For example:

 “mdoc:owBjMS4wAY..." [shortened for brevity]

This “mobile document” (mdoc) format is defined by the International Organization for Standardization’s ISO/IEC 18013-5. The string of text that follows contains driver’s license data that has been signed by the issuer (i.e., the California DMV), encrypted, and encoded. The data sequence draws on several technical specifications and standards, both open and closed.

In the digital identity space, including mDLs, the most referenced and utilized standards are the ISO standard above, the American Association of Motor Vehicle Administrators (AAMVA) standard, and the W3C’s Verifiable Credentials (VCs). These standards are often not siloed, but rather used together, since they offer directions on data formats, security, and methods of presentation that aren’t completely covered by any one of them. However, ISO and AAMVA are not open standards and are decided internally. VCs were created for digital credentials generally, not just for mDLs. These standards are relatively new and still need time to mature to address potential gaps.

The decrypted data could possibly look like this JSON blob:

         {"family_name":"Doe",
          "given_name":"John",
          "birth_date":"1980-10-10",
          "issue_date":"2020-08-10",
          "expiry_date":"2030-10-30",
          "issuing_country":"US",
          "issuing_authority":"CA DMV",
          "document_number":"I12345678",
          "portrait":"../../../../test/issuance/portrait.b64",
          "driving_privileges":[
            {
               "vehicle_category_code":"A",
               "issue_date":"2022-08-09",
               "expiry_date":"2030-10-20"
            },
            {
               "vehicle_category_code":"B",
               "issue_date":"2022-08-09",
               "expiry_date":"2030-10-20"
            }
          ],
          "un_distinguishing_sign":"USA",
          {
          "weight":70,
          "eye_colour":"hazel",
          "hair_colour":"red",
          "birth_place":"California",
          "resident_address":"2415 1st Avenue",
          "portrait_capture_date":"2020-08-10T12:00:00Z",
          "age_in_years":42,
          "age_birth_year":1980,
          "age_over_18":true,
          "age_over_21":true,
          "issuing_jurisdiction":"US-CA",
          "nationality":"US",
          "resident_city":"Sacramento",
          "resident_state":"California",
          "resident_postal_code":"95818",
          "resident_country": "US"}
}
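The first steps of unwrapping such a string can be sketched with the standard library: strip the "mdoc:" scheme and base64url-decode the remainder, which yields CBOR bytes. (A full parse of the signed structure needs a CBOR library such as cbor2, which we only note here; the example below uses the truncated string from above, so only the first few bytes are available.)

```python
import base64

def mdoc_payload(uri: str) -> bytes:
    """Strip the mdoc: scheme and base64url-decode the rest to raw CBOR bytes."""
    assert uri.startswith("mdoc:")
    b64 = uri[len("mdoc:"):]
    b64 += "=" * (-len(b64) % 4)  # restore stripped base64 padding
    return base64.urlsafe_b64decode(b64)
```

Decoding the truncated example yields bytes beginning with 0xa3, which in CBOR marks a three-entry map, followed by the short text string "1.0" (a version field), hinting at the structured document inside.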

Application Approach and Scope Problems 

California decided to contract a vendor to build a wallet app rather than use Google Wallet or Apple Wallet (not to be confused with Google and Apple Pay). A handful of other states use Google and Apple, perhaps because many people already have one or the other. There are concerns about large companies being contracted by states to deliver mDLs to the public, such as their control over the public image of digital identity and over device compatibility.

This isn’t the first time a state contracted with a vendor to build a digital credential application without much public input or consensus. For example, New York State contracted with IBM to roll out the Excelsior app at the beginning of COVID-19 vaccination availability. At the time, EFF raised privacy and other concerns about this form of digital proof of vaccination. The state ultimately paid the vendor a staggering $64 million. While initially proprietary, the application later opened to the SMART Health Card standard, which is based on the W3C’s VCs. The app was sunset last year. It’s not clear what effect it had on public health, but it’s good that it wound down as social distancing measures relaxed. The infrastructure should be dismantled, and the persistent data should be discarded. If another health crisis emerges, at least a law in New York now partially protects the privacy of this kind of data. The NY state legislature is currently working on a bill around mDLs after a round-table on their potential pilot. However, the New York DMV has already entered into a $1.75 million contract with the digital identity vendor IDEMIA. It will be a race to see whether protections are established before pilot deployment.

Scope is also a concern with California’s mDL. The state contracted with Spruce ID to build this app. The company states that its purpose is to empower “organizations to manage the entire lifecycle of digital credentials, such as mobile driver’s licenses, software audit statements, professional certifications, and more.” In the “add-ons” section of the app, TruAge’s age verification QR code is available.  

Another issue is selective disclosure: the technical ability of the credential holder to choose which information to disclose to a person or entity asking for information from their credential. This is a long-time promise from enthusiasts of digital identity. The most-used example is verifying that the holder is over 21 without showing anything else, such as the name and address that appear on the face of a traditional driver’s license. But the California DMV wallet app offers few options for selective disclosure:

  • The holder has to agree to TruAge’s terms of service and generate a separate TruAge QR code.
  • There is already an mDL reader option for age verification for the QR Code of an mDL. 
  • There is no current option for the holder to use selective disclosure for their mDL. But it is planned for future release, according to the California DMV via email. 
  • Lastly, if selective disclosure is coming, this makes the TruAge add-on redundant. 

The over-21 example is only as meaningful as its implementation, including the convenience, privacy, and choice given to the mDL holder.
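To make the idea concrete, here is a simplified sketch of how selective disclosure can work using salted hash commitments (similar in spirit to schemes like SD-JWT; this is an illustration, not the actual ISO 18013-5 mDL protocol, and the attribute values are made up):

```python
import hashlib
import secrets

def commit(name, value, salt):
    # A hash commitment binds an attribute value without revealing it.
    return hashlib.sha256(f"{salt}:{name}:{value}".encode()).hexdigest()

# Issuer side: commit to every attribute. The signed credential would
# carry only these digests (the signature itself is omitted here).
attributes = {"family_name": "Doe", "resident_city": "Sacramento", "age_over_21": True}
salts = {name: secrets.token_hex(16) for name in attributes}
digests = {name: commit(name, value, salts[name]) for name, value in attributes.items()}

# Holder side: disclose only the over-21 claim, along with its salt.
disclosure = ("age_over_21", True, salts["age_over_21"])

# Verifier side: recompute the digest and match it against the signed set.
name, value, salt = disclosure
assert commit(name, value, salt) == digests[name]
# The verifier learns nothing about name or address beyond their digests.
```

The point of the sketch is that the holder, not the verifier, decides which attributes leave the device; everything else stays hidden behind its digest.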

TruAge appears to be piloting its product in at least 6 states. With “add-ons”, the scope of the wallet app indicates expansion beyond simply presenting your driver’s license. According to the California DMV’s Office of Public Affairs via email: 

“The DMV is exploring the possibility of offering additional services including disabled person parking placard ID, registration card, vehicle ownership and occupational license in the add-ons in the coming months.”

This clearly displays how the scope of this pilot may expand and how the mDL could eventually be housed within an entire ecosystem of identity documentation. There are privacy-preserving ways to present mDLs, like unlinkable proofs. These mechanisms help prevent colluding verifiers and issuers from establishing whether the holder presented their mDL in different places.

Privacy and Equity First 

At the time of this post, about 325,000 California residents have the pilot app. We urge states to take their time with creating mDLs, and even to wait for more privacy-protective verification methods to mature. Deploying mDLs should prioritize holder control, privacy, and transparency. The speed of these pilots is possibly influenced by other factors, like the push for mDLs from the U.S. Department of Homeland Security.

Digital wallet initiatives like eIDAS in the European Union are forging conversations on what user control mechanisms might look like. These might include, for example, “bringing your own wallet” and using an “open wallet” that is secure, private, interoperable, and portable. 

We also need governance that properly limits law enforcement access to information collected by mDLs, and to other information in the smartphones where holders place their mDLs. Further, we need safeguards against these state-created wallets being wedged into problematic realms like age verification mandates as a condition of accessing the internet. 

We should be speedrunning privacy and providing better access for all to public services and government-issued documentation. That includes a right to stick with traditional paper or plastic identification, and accommodation of cases where a phone may not be accessible.

We urge the state to implement selective disclosure and other privacy-preserving tools. The app is not required anywhere, and it should remain that way no matter how cryptographically secure the system purports to be, or how robust the privacy policies. We also urge all governments to remain transparent and cautious about how they sign on vendors during pilot programs. If a contract takes away the public’s input on future protections, that is a bad start. If a state builds a pilot without much patience for privacy and public input, that is also turbulent ground for protecting users going forward.

Just because digital identity may feel inevitable doesn’t mean the dangers have to be.

The Last Mile of Encrypting the Web: 2023 Year in Review

December 25, 2023 at 12:21

At the start of 2023, we sunsetted the HTTPS Everywhere web extension. It encrypted browser communications with websites and made sure users benefited from the protection of HTTPS wherever possible. HTTPS Everywhere ended because all major browsers now offer the functionality to make HTTPS the default. This is due to the grand efforts of the many technologists and advocates involved with Let’s Encrypt, HTTPS Everywhere, and Certbot over the last 10 years.

The immense impact of this “Encrypt the Web” initiative has translated into default “security for everybody,” without each user having to take on the burden of finding out how to enable encryption. The “hacker in a cafe” threat is no longer as dangerous as it once was, when the low technical bar of passive network sniffing of unencrypted public WiFi let bad actors see much of the online activity of people at the next table. Police have to work harder as well to inspect user traffic. While VPNs still serve a purpose, they are no longer necessary just to encrypt your traffic on the web.

“The Last Mile”

Firefox reports that over 80% of the web is encrypted, and Google reports 95% across all of its services. The remaining 5%-20% exists for several reasons:

  • Some websites are old and abandoned.
  • A small percentage of websites intentionally left their sites at HTTP.
  • Some mobile ecosystems do not use HTTPS by default.
  • HTTPS may still be difficult to obtain for accessibility reasons.

Plot of Encrypted traffic

To the last point, tools like Certbot could be more accessible. For places where censors might be blocking it, we now have a Tor-accessible .onion address available for certbot.eff.org. (We’ve done the same for eff.org and ssd.eff.org, EFF’s guides for individuals and organizations to protect themselves from surveillance and other security threats.)

Let’s Encrypt made much of this possible by serving as a free and easily supported Certificate Authority (CA) that has issued TLS certificates to 363 million websites. Let’s Encrypt differs from other prominent CAs. For example, Let’s Encrypt from the start encouraged short-lived certificates that were valid for 90 days, while other CAs were issuing certificates with lifespans of two years. Shorter lifespans encouraged server administrators to automate, which in turn encouraged encryption that is consistent, agile, and fast. The CA/B Forum, a voluntary consortium of CAs, browser companies, and other partners that maintains public key infrastructure (PKI), adopted ballot SC-063, which allows 10-day certificates and, in 2026, will allow 7-day certificates. This pivotal change will make the ecosystem safer, reduce the toll on the partners that manage certificate metadata, encourage automation, and push the ecosystem to encrypt faster, with less overhead and better tools.
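Shorter lifespans only work because clients renew on a schedule rather than by hand. A minimal sketch of that renewal-window logic in Python (the 30-day threshold mirrors certbot's default for 90-day certificates; treat the exact numbers as illustrative):

```python
from datetime import datetime, timedelta

def should_renew(not_after: datetime, now: datetime,
                 window: timedelta = timedelta(days=30)) -> bool:
    # Renew once we are inside the window before the certificate expires.
    return not_after - now <= window

issued = datetime(2023, 11, 1)
not_after = issued + timedelta(days=90)  # a 90-day, Let's Encrypt-style cert

assert not should_renew(not_after, issued + timedelta(days=30))  # 60 days left
assert should_renew(not_after, issued + timedelta(days=61))      # 29 days left
```

With 10-day or 7-day certificates, the same check simply fires on a much tighter cycle, which is only practical when renewal is fully automated.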

Chrome will require CAs in its root store (a trusted list of CAs allowed to secure traffic) to support the Automatic Certificate Management Environment (ACME) protocol. While Google steers this shift with ACME, the protocol is not a Google product or part of the company’s corporate agenda. Rather, ACME is a beneficial protocol that every CA should adopt, even without a “big tech” mandate to do so.

Chrome also expanded its HTTPS-First Mode to all users by default. We are glad to see the continued push for HTTPS by default, without users needing to turn it on themselves. HTTPS “out of the box” is the ideal to strive for, far better than the current fragmented approach of requiring users to activate “enable HTTPS” settings in each major browser.

While this year marks a major victory for the “Encrypt the Web” initiative, we still need to make sure the backbone infrastructure for HTTPS continues to work in the interest of users. So for two years we have been monitoring eIDAS, the European Union’s digital identity framework. Its Article 45 requires browsers to display website identity with Qualified Web Authentication Certificates (QWACs) issued by government-mandated root Certificate Authorities. These measures hinder browsers from responding if one of these CAs acts inappropriately or has bad practices around issuing certificates. Final votes on eIDAS will occur in the upcoming weeks. While some of the proposal’s recitals suggest that browsers should be able to respond to a security event, that is not strong enough to overcome our concerns about the proposal’s most troubling text. This framework enables EU governments to snoop on their residents’ web traffic, which would roll back many of the web security and privacy gains of the past decade to a new, yet unfortunately familiar, fragmented state. We will fight to make sure HTTPS is not set up for failure in the EU.

In the movement to make HTTPS the default for everyone, we also need to be vigilant about how mobile devices handle web traffic. Too often, mobile apps are still sending clear text (insecure HTTP). So the next fight for “HTTPS Everywhere” should be HTTPS by default for app requests, without users needing to install a VPN.
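In the meantime, individual app developers don't have to wait for a platform mandate: since Android 7.0, an app can ship a Network Security Configuration that refuses cleartext traffic outright. A minimal example (the filename is the conventional one; the file is referenced from the app manifest via the android:networkSecurityConfig attribute):

```xml
<!-- res/xml/network_security_config.xml -->
<network-security-config>
    <!-- Refuse all cleartext (plain HTTP) connections app-wide. -->
    <base-config cleartextTrafficPermitted="false" />
</network-security-config>
```

Apps targeting Android 9 (API 28) and later get this behavior by default, but setting it explicitly protects users on older targets too.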

The last stretch to 100% encryption will make the web ecosystem agile and bold enough to (1) ensure HTTPS as much as possible, and (2) block HTTP by default. Reaching 100% is possible and attainable from here, even if a few people out there intentionally interact with an HTTP-only site once or twice a session.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Sketchy and Dangerous Android Children’s Tablets and TV Set-Top Boxes: 2023 in Review

You may want to save your receipts if you gifted any low-end Android TV set-top boxes or children's tablets to a friend or loved one this holiday season. In a series of investigations this year, EFF researchers confirmed the existence of dangerous malware on set-top boxes manufactured by AllWinner and RockChip, and discovered sketchyware on a tablet marketed for kids from the manufacturer Dragon Touch. 

Though more reputable Android devices are available for watching TV and keeping the little ones occupied, they come with a higher price tag. This means that those who can afford such devices get more assurance of their security and privacy, while those who can only afford cheaper devices from little-known manufacturers are put at greater risk.

The digital divide could not be more apparent. Without a clear warning label, consumers who cannot afford devices from well-known brands such as Apple, Amazon, or Google are being sold devices which come out-of-the-box ready to spy on their children. This malware opens their home internet connection as a proxy to unknown users, and exposes them to legal risks. 

Traditionally, if a device like a vacuum cleaner was found to be defective or dangerous, we would expect resellers to pull it from the department store floor and, to the best of their ability, notify customers who had already bought the item and brought it into their homes. Yet we observed that the devices in question continued to be sold by online vendors months after widely circulated news of their defects.

After our investigation of the set-top boxes, we urged the FTC to take action against the vendors who sell devices known to be riddled with malware. Amazon and AliExpress were named in the letter, though more vendors are undoubtedly still selling these devices. Not to spoil the holiday cheer, but if you have received one of these devices, you may want to ask for another gift and have the item refunded.

In the case of the Dragon Touch tablets, it was apparent that this issue went beyond just Android TV boxes and even encompassed budget Android devices specifically marketed for children. The tablet we investigated had an outdated pre-installed parental controls app that was labeled as adware, leftover remnants of malware, and sketchy update software. It’s clear this issue reached a wide variety of Android devices and it should not be left up to the consumer to figure this out. Even for devices on the market that are “normal,” there still needs to be work done by the consumer just to properly set up devices for their kids and themselves. But there’s no total consumer-side solution for pre-installed malware and there shouldn’t have to be.

Compared with the products of yesteryear, our “smart” and IoT devices carry a new set of risks to our security and privacy. Yet we feel confident that better digital product testing, along with regulatory oversight, can go a long way in mitigating these dangers. We applaud efforts such as Mozilla’s Privacy Not Included to catalog just how much our devices are protecting our data, since as it currently stands it is up to us as consumers to assess the risks ourselves and take appropriate steps.

How to Secure Your Kid's Android Device

December 4, 2023 at 16:40

After finding risky software on an Android (Google’s mobile operating system) device marketed for kids, we wanted to put together some tips to help better secure your kid's Android device (and even your own). Despite the dangers that exist, there are many things that can be done to at least mitigate harm and assist parents and children. There are also safety tools that your child can use at their own discretion.

There's a handful of different tools, settings, and apps that can help better secure your kid’s device, depending on their needs. We've broken them down into four categories: Parental Monitoring, Security, Safety, and Privacy.

Note: If you do not see these settings in your Android device, it may be out of date or a heavily modified Android distribution. This is based on Android 14’s features.

Parental Monitoring

Google has a free app for parental controls called Family Link, which gives you tools to establish screen time limits, manage app installs, and more. There’s no need to install a third-party application. Family Link sometimes comes pre-installed on devices marketed for children, but it is also available in the Google Play Store. This is helpful given that some third-party parental safety apps have been caught selling children’s data and have been involved in major data leaks. Also, having a discussion with your child about these controls can provide something that technology can’t: trust and understanding.

Security

There are a few basic security steps you can take on both your own Google account and your child’s device to improve their security.

  • If you control your child's Google account with your own, you should lock down your own account as best as possible. Setting up two-factor authentication is a simple thing you can do to avoid malicious access to your child’s account via yours.
  • Encrypt their device with a passcode (if you have Android 6 or later).

Safety

You can also enable safety measures your child can use if they are traveling around with their device.

  • Safety Check allows a device user to automatically reach out to established emergency contacts if they feel like they are in an unsafe situation. If they do not mark themselves “safe” after the safety check duration ends, emergency location sharing with emergency contacts will commence. The safety check reason and duration (up to 24 hours) is set by the device user. 
  • Emergency SOS assists in triggering emergency actions like calling 911, sharing your location with your emergency contacts, and recording video.
  • If the "Unknown tracker alerts" setting is enabled, a notification will trigger on the user's device if an unknown AirTag is moving with them (this feature only works with AirTags currently, but Google says it will expand to other trackers in the future). Bluetooth must be turned on for this feature to function properly.

Privacy

There are also some settings you can configure to deter tracking of your child’s activities online by ad networks and data brokers.

  • Delete the device’s advertising ID (ad ID).
  • Install a more privacy-preserving browser like Firefox, DuckDuckGo, or Brave. While Chrome is the default on Android and has decent security measures, it does not allow web extensions on its mobile browser, preventing the use of helpful extensions like Privacy Badger that block ad tracking.
  • Review the privacy permissions on the device to ensure no apps are accessing important features like the camera, microphone, or location without your knowledge.

For more technically savvy parents, Pi-hole (DNS software) is very useful for automatically blocking ad-related network requests. Using major ad lists, it blocked most of the shady requests from the malware we saw during our investigation of a kid’s tablet. The added benefit is that you can configure many devices to use one Pi-hole setup.
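At its core, this kind of DNS filtering is a lookup: before a domain is resolved, it and its parent domains are checked against a blocklist, so subdomains of a blocked domain are caught too. A minimal sketch of the matching logic in Python (the blocklist entries here are hypothetical; Pi-hole itself uses large curated lists and handles full DNS resolution):

```python
BLOCKLIST = {"ads.example.net", "tracker.example.com"}  # hypothetical entries

def is_blocked(domain: str) -> bool:
    # Walk up the label hierarchy so "a.b.tracker.example.com" matches
    # a blocklist entry for "tracker.example.com".
    labels = domain.lower().rstrip(".").split(".")
    return any(".".join(labels[i:]) in BLOCKLIST for i in range(len(labels)))

assert is_blocked("ads.example.net")
assert is_blocked("metrics.tracker.example.com")
assert not is_blocked("eff.org")
```

Because the check happens at the DNS layer, it works for every app and device pointed at the filter, which is why one Pi-hole can cover a whole household.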

DuckDuckGo’s App Tracking Protection is an alternative to Pi-hole that doesn’t require as much technical overhead. However, since it inspects all network traffic coming from the device, it will ask to be set up as a VPN profile when enabled. Android forces any app that inspects traffic in this manner to be set up like a VPN, and it allows only one VPN connection at a time.

It can be a source of stress to set up a new device for your child. However, taking some time to set up privacy and security settings can help you and your child discuss technology from a more informed perspective for the both of you.

Low Budget Should Not Mean High Risk: Kids' Tablet Came Preloaded with Sketchyware

November 14, 2023 at 17:04

It’s easy to get Android devices from online vendors like Amazon at different price points. Unfortunately, at the lower end of those budgets, it is also easy to end up with an Android device carrying malware. Several factors contribute to this: multiple devices manufactured in the same facility, a lack of security standards when choosing components, and a lack of quality assurance and scrutiny by the vendors that sell these devices. We investigated a tablet bought from the online vendor Amazon that had potential malware on it: a Dragon Touch KidzPad Y88X 10 kids’ tablet. As of this post, the tablet in question is no longer listed on Amazon, although it was available for most of this year.

Blue Box that says KidzPad Y88X 10

Dragon Touch KidzPad Y88X 10

It turns out malware was present, with the added bonus of pre-installed riskware and a very outdated parental control app. This is a major concern, since this is a tablet marketed for kids.

Parents have plenty of worry and concern about how their kids use technology as it is. Ongoing conversations and negotiations about the time spent on devices happen in many households. Potential malware or riskware should not be a part of these concerns just because you purchased a budget Android tablet for your child. It just so happens that some of the parents at EFF conduct security research. But this is not what it should take to keep your kid safe.

“Stock Android”

To understand this issue better, it’s useful to know what “stock Android” means and how manufacturers approach choosing an OS. The Android operating system is open sourced by Google and officially known as the “Android Open Source Project,” or AOSP. The source code is stripped down and doesn’t even include Google apps or the Google Play Store. Most Android phones or tablets you purchase run AOSP with layers of customization, or a “skinned” version of AOSP. Even Google’s current flagship phone, the Pixel, does not come with stock Android.

Even though custom Android distributions or ROMs (Android Read-Only Memory) can come with useful features, others can come with “bloatware” or unwanted apps. For example, in 2019 when Samsung pre-installed the Facebook app on its phones, the only option was to “disable” the app. Worse, in some cases custom ROMs can come with pre-installed malware. Android OEMs (original equipment manufacturers) can pre-install apps that have high-level privileges and may not be as obvious as an icon you can see on your home screen. It's not just apps, though. New features in AOSP may be severely delayed in custom OEM builds if the device manufacturer isn't diligent about porting them in. This could be for reasons like hardware limitations or not prioritizing updates.

Screen Time for Sketchyware

Similar to the Android TV box we looked into earlier this year, we found the now-notorious Corejava malware directories on the Dragon Touch tablet. Unlike that Android TV box, this tablet didn’t come rooted. However, we could see that the directories /data/system/Corejava and /data/system/Corejava/node were present on the device. This indicates Corejava was active in this tablet’s firmware.

We didn’t originally suspect this malware’s presence until we saw links to other manufacturers and odd requests made from the tablet, prompting us to take a look. We first booted up this Dragon Touch tablet in May 2023, after the Command and Control (C2) servers that Corejava depends on were taken down, so any attempts to download malicious payloads, if active, wouldn’t work (for now). Given the lack of “noise” from the device, we suspect that this malware indicator is, at minimum, a leftover remnant of “copied homework” from hasty production, or at worst, left in place for possible future activity.

The tablet also came preloaded with Adups (which was also found on the Android TV boxes) in the form of “firmware over the air” (FOTA) update software, in an application called “Wireless Update.”

App list that contains the app "Wireless Update"

Adups has a history of being malware, but “clean” versions exist, and one of those “clean” versions was on this tablet. Thanks to its history and its extensive system-level permissions to download whatever application it wants from the Adups servers, it still poses a concern. Adups comes preinstalled with this Dragon Touch OEM build; if you factory reset the device, the app will return. There’s no way to uninstall or disable this variant of Adups without technical knowledge and comfort with the command line. Using OTA software with such a fraught history is a very questionable decision for a children’s tablet.

Connecting the Dots

The connection between the infected Dragon Touch and the Android TV box we previously investigated was closer than we initially thought. After seeing a customer review for an Android TV box sold by a company at the same U.S. address as Dragon Touch, we discovered Dragon Touch is owned and trademarked by one company that also owns and distributes other products under different brand names.

This group, which registered multiple brands and shares an address with Dragon Touch, sold the same tablet we looked at in other online markets, like Walmart. This same entity apparently once sold the T95Z model of Android TV boxes under the brand name “Tablet Express,” along with devices like the Dragon Touch tablet. The T95Z was in the family of TV boxes investigated after researchers started taking a closer look at these types of devices.

With the widespread use of these devices, it’s safe to say that any Android devices attached to these sellers should be met with scrutiny.

Privacy Issues

The Dragon Touch tablet also came with a very outdated version of the KIDOZ app pre-installed. This app touts being “COPPA Certified” and “turns phones & tablets into kids friendly devices for playing and learning with the best kids’ apps, videos and online content.” This version operates as a kind of mini operating system, where you can download games and apps and configure parental controls within the app.

We noticed the referrer for this app was “ANDROID_V4_TABLET_EXPRESS_PRO_GO.” “Tablet Express” is no longer an operational company, so it appears Dragon Touch repurposed an older version of the KIDOZ app. KIDOZ only distributes its app to device manufacturers to preload on devices for kids; it’s not in the Google Play Store.

This version of the app still collects and sends data to “kidoz.net” on usage and physical attributes of the device. This includes information like device model, brand, country, timezone, screen size, view events, click events, log time of events, and a unique “KID ID.” In an email, KIDOZ told us that the “calls remain unused even though they are 100% certified (COPPA),” in reference to the information sent to their servers from the app. The older version also still has an app store of very outdated apps. For example, we found a drawing app, “Kids Paint FREE,” attempting to send exact GPS coordinates to an ad server. The ad server this app calls no longer exists, but some of the apps in the KIDOZ store are still operational despite having deprecated code. This leakage of device-specific information, mostly over insecure HTTP requests, can be targeted by bad actors who want to siphon information either on the device or by obtaining these defunct domains.

Several security vendors have labeled the version of the KIDOZ app we reviewed as adware. The current version of KIDOZ is less of an issue since the internal app store was removed, so it's no longer labeled as adware. Thankfully, you can uninstall this version of KIDOZ. KIDOZ does offer the latest version of their app to OEM manufacturers, so ultimately the responsibility lies with Dragon Touch. When we reached out to KIDOZ, they said they would follow up with various OEMs to offer the latest version of the app.

KIDOZ apps asking for excessive permissions

Simple racing games from the old KIDOZ app store asking for location and contacts.

Malware and riskware come in many different forms. The burden of remedy for pre-installed malware and sketchyware falling to consumers is absolutely unacceptable. We'd like to see some basic improvements for how these devices marketed for children are sold and made:

  • There should be better security benchmarks for devices sold in large online markets. Especially devices packaged to appear safe for kids.
  • If security researchers find malware on a device, there should be a more effective path to remove these devices from the market and alert customers.
  • There should be a minimum standard requiring Android OEM builds to offer a baseline of the security and privacy features available in AOSP. For instance, this Dragon Touch kid’s tablet is running Android 9, which is now five years old; Android 14 is the latest stable OS at the time of this report.

Devices with software with a malicious history and out-of-date apps that leak children’s data create a larger scope of privacy and security problems that deserve closer scrutiny than they currently receive. It took over 25 hours to assess all the issues with this one tablet. Since this was a custom Android OEM build, the only possible source of documentation was the company, and there wasn’t much. We were left to look at the breadcrumbs left on the image instead, such as custom system-level apps, chip-specific quirks, and pre-installed applications. In this case, following the breadcrumbs allowed us to connect how this device was made with the circumstances that led to the sketchyware on it. Most parents aren’t security researchers and don’t have the time, will, or energy to think about these types of problems, let alone fix them. Online vendors like Amazon and Walmart should start proactively catching these issues and invest in better quality and source checks on the many consumer electronics in their markets.

Investigated Apps, Logs, and Tools List:

APKs (Apps):

Logs:

Tools:

  • Android Debug Bridge (adb) and Android Studio for shell and emulation.
  • Logcat for app activity on device.
  • MOBSF for initial APK scans.
  • JADX GUI for static analysis of APKs.
  • Pi-hole for DNS requests from devices.
  • VirusTotal for graphing connections to suspicious domains and APKs.

EFF Director of Investigations Dave Maass contributed research to this report.

Privacy Advocates to TSA: Slow Down Plans for mDLs

October 18, 2023 at 17:08

A digital form of identification should have the same privacy and security protections as physical ones, or more, because the standards governing them are so new and untested. This is at the heart of comments EFF and others submitted recently. Why now? Well, in 2021 the Department of Homeland Security (DHS) issued a call for comments on mobile driver’s licenses (mDLs). Since then, the Transportation Security Administration (TSA) has taken up a process of making mDLs an acceptable form of identification at airports, and more states have adopted mDLs with either a state-sponsored app or Apple and Google Wallet.

With the TSA’s proposed mDL rules, we ask: what’s the hurry? The agency’s rush to mDLs is ill-advised. For example, many mDL privacy guards are not yet well thought out, the standards referenced are not generally accessible to the public, and the scope for mDLs will reach beyond the context of an airport security line.

And so, EFF submitted comments with the American Civil Liberties Union (ACLU), Center for Democracy & Technology (CDT), and Electronic Privacy Information Center (EPIC) to the TSA. We object to the agency’s proposed rules for waiving current REAL ID regulations for mobile driver’s licenses. Such premature federal action can undermine privacy, information security, democratic control, and transparency in the rollout of mDLs and other digital identification.

Even though standards bodies like the International Organization for Standardization (ISO) have frameworks for mDLs, they do not address various issues, such as an mDL potentially “phoning home” every time it is scanned. The privacy guards are still lacking, and it is left up to each state to implement them in its own way. With the TSA’s proposed waiver process, mDL development will likely be even more fractured, with some implementations better than others, as happened with digital vaccine credentials.

Another concern is that the standards referenced in the TSA’s proposed rules are developed by private, closed-off groups like the American Association of Motor Vehicle Administrators (AAMVA), and by the ISO process that generated its specification 18013-5:2021. These standards have not been informed by enough transparency and public scrutiny. Moreover, there are other, more openly discussed standards that could open up interoperability. The lack of guidance around provisioning, storage, and privacy-preserving approaches is also a major cause for concern. Privacy should not be an afterthought, and we should not follow the “fail fast” model with such sensitive information.

Considering the mission and methods of the TSA, that agency should not be at the helm of creating nationwide mDL rules. That could lead to a national digital identity system, which EFF has long opposed, and an overreach of the agency’s role far beyond the airport.

Well-meaning intentions to let states “innovate” aside, mDLs done slowly and right are a bigger win than mDLs done fast and potentially harmfully. Privacy safeguards need innovation, too, and the privacy risk is immense when it comes to digital documentation.
