
Digital ID Isn't for Everybody, and That's Okay

September 25, 2024, 6:57 PM

How many times do you pull out your driver’s license a week? Maybe two to four times, to purchase age-restricted items, pick up prescriptions, or go to a bar. If you get a mobile driver’s license (mDL) or another form of digital identification (ID) offered in Google and Apple wallets, you may end up sharing this information much more often than before, because this new technology may expand the scope of scenarios demanding your ID.

mDLs and digital IDs are being deployed faster than states can draft privacy protections, even as these credentials are presented to more third parties than ever before. While proponents of these digital schemes emphasize convenience, these IDs can easily expand into new territory, like the controversial age verification bills that censor everyone. Moreover, digital ID is simultaneously being tested in sensitive situations and expanded into a potential regime of unprecedented data tracking.

In the digital ID space, the question of “how can we do this right?” often crowds out the more pertinent question of “should we do this at all?” While there are strongly recommended safeguards for these new technologies, we must always support each person’s right to keep using physical documentation instead of going digital. We must also do more to bring understanding of, and decision-making power over, these technologies to everyone, rather than zealously promoting them as a potential equalizer.

What’s in Your Wallet?

With modern hardware, phones can now safely store more sensitive data and credentials with higher levels of security. This enables functionality like Google and Apple Pay exchanging transaction data online with e-commerce sites. While there’s platform-specific terminology, the general term to know is “Trusted Platform Module” (TPM). This hardware enables “Trusted Execution Environments” (TEEs), where sensitive data can be processed in isolation from the rest of the system. Most modern phones, tablets, and laptops come with TPMs.

Digital IDs are treated at a higher level of security within the Google and Apple wallets (as they should be). So if you have an mDL provisioned on a device, the contents of the mDL are not “synced to the cloud.” Instead, they stay on that device, and you have the option to remotely wipe the credential if the device is stolen or lost.

Beyond the digital wallets already common on most phones, some states have their own wallet apps for mDLs that must be downloaded from an app store. The security of these applications can vary, along with the data they can and can’t see. Different private partners have been making wallet/ID apps for different states, including IDEMIA, Thales, and Spruce ID, to name a few. Digital identity frameworks, like Europe’s eIDAS, have been creating language and provisions for “open wallets,” where you don’t necessarily have to rely on big tech for a safe and secure wallet.

However, privacy and security need to be paramount. If privacy is an afterthought, digital IDs can quickly become yet another gold mine of breaches for data brokers and bad actors.

New Announcements, New Scope

Digital ID has been moving fast this summer.

Proponents of digital ID frequently present the “over 21” example, which is often described like this:

You go to the bar, you present a claim from your phone that you are over 21, and a bouncer confirms the claim with a reader device for a QR code or a tap via NFC. Very private. Very secure. Said bouncer will never know your address or other information. Not even your name. This is called an “abstract claim”: instead of exchanging the more sensitive information itself, the phone presents only a less sensitive attestation to the verifier, such as an age threshold rather than your date of birth and name.
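The flow above can be sketched as follows. This is a toy illustration, not the real ISO 18013-5 protocol (which uses issuer public-key signatures rather than the shared-secret HMAC used here, and the key name is invented for the example): the issuer signs only a minimal attestation, so the verifier never sees the holder’s name or exact date of birth.

```python
import hmac, hashlib, json

ISSUER_KEY = b"demo-issuer-secret"  # hypothetical key, for illustration only

def issue_attestation(full_record: dict, claim: str) -> dict:
    """Issuer derives a single yes/no claim from the full record and signs it."""
    payload = json.dumps({claim: full_record[claim]}, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": {claim: full_record[claim]}, "sig": sig}

def verify_attestation(att: dict) -> bool:
    """Verifier checks the issuer's signature; it learns only the claim itself."""
    payload = json.dumps(att["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(att["sig"], expected)

record = {"name": "John Doe", "birth_date": "1980-10-10", "age_over_21": True}
att = issue_attestation(record, "age_over_21")
assert verify_attestation(att)
assert "name" not in att["claim"]  # only the age claim travels to the bouncer
```

The point of the design is that the signed payload contains nothing but the threshold claim, so a verifier cannot recover the underlying record from it.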

But there is a high privacy price to pay for this marginal privacy benefit. mDLs will not simply swap in as a one-to-one replacement for your physical ID. Rather, they are likely to expand the scenarios where businesses and government agencies demand that you prove your identity before entering physical and digital spaces or accessing goods and services. Our personal data will be passed around more frequently than ever, with identity verified online multiple times per day or week with multiple parties. This privacy menace far surpasses the minor danger of a bar bouncer collecting, storing, and using your name and address after glancing at your birth date on your plastic ID for five seconds in passing. Even in cases where bars do scan IDs, we’re being asked to trade one contained privacy risk for a far more expansive one: digital ID presentation across the internet.

While there are efforts to enable private businesses to read mDLs, these credentials today are mainly being used with the TSA. In contracts and agreements we have seen with Apple, the company largely controls the marketing and visibility of mDLs.

In another push to boost adoption, Android allows you to create a digital passport ID for domestic travel. This development must be seen through the lens of the federal government’s 20-year effort to impose “REAL ID” on state-issued identification systems. REAL ID is an objective failure of a program that pushes for regimes that strip privacy from everyone and further marginalize undocumented people. While federal-level use of digital identity so far is limited to TSA, this use can easily expand. TSA wants to propose rules for mDLs in an attempt (the agency says) to “allow innovation” by states, while they contemplate uniform rules for everyone. This is concerning, as the scope of TSA—and its parent agency, the Department of Homeland Security—is very wide. Whatever they decide now for digital ID will have implications way beyond the airport.

Equity First > Digital First

We are seeing new digital ID plans being discussed for the most vulnerable of us. Digital ID must be designed for equity (as well as for privacy).

With Google’s Digital Credential API and Apple’s IP&V Platform (as named in the agreement with California), these two major companies are going to be in direct competition with current age verification platforms. This alarmingly sets up the capacity for anyone to ask for your ID online, and it can spread beyond content that is commonly age-gated today. Different states and countries may try to label additional content as harmful to children (such as LGBTQIA content or abortion resources), and require online platforms to conduct age verification to access that content.

For many of us, opening a bank account is routine, and digital ID sounds like a way to make it more convenient. But millions of working-class people are currently unbanked, and digital IDs won’t solve their problems. Many people can’t get simple services and documentation for a variety of reasons that come with having a low income, and millions of people in this country don’t have identification at all. We shouldn’t impose regimes that use age verification technology against people who often face barriers to compliance, such as license suspension over unpaid fines unrelated to traffic safety. Without regulation that accounts for nuanced lives, a new technical system with far less friction for verifying age will lead to an expedited, automated “NO” from digital verification.

Another issue is that many people lack a smartphone, or an up-to-date smartphone, or share one with their family. Many proponents of “digital first” solutions assume a fixed ratio of one smartphone per person. While this assumption may work for some, others will need humans to talk to on a phone or face-to-face to access vital services. In the case of an mDL, you still need to upload your physical ID just to obtain the mDL, and you still need to carry the physical card on your person. Digital ID cannot bypass the problem that some people don’t have physical ID at all. Failing to account for this is rushing to perceived solutions over real problems.

Inevitable?

No, digital identity shouldn’t be inevitable for everyone: many people don’t want it, or lack the resources to get it. The dangers posed by digital identity don’t have to be inevitable, either, if states legislate protections for people. It would also be great (for the nth time) to have a comprehensive federal privacy law. Illinois recently passed a law that at least attempts to address mDL scenarios with law enforcement. At the very minimum, law enforcement should be prohibited from using consent for mDL scans to conduct illegal searches. Florida completely removed its mDL app from app stores and asked residents who had it to delete it; it is good the state did not simply keep the app around for the sake of pushing digital ID without addressing a clear issue.

State and federal embrace of digital ID is based on claims of faster access, fraud prevention, and convenience. But with digital ID being proposed as a means of online verification, it is just as likely to block claims of public assistance as facilitate them. That’s why legal protections are at least as important as the digital IDs themselves.

Lawmakers should ensure better access for people with or without a digital ID.

 

A Wider View on TunnelVision and VPN Advice

If you listen to any podcast long enough, you will almost certainly hear an advertisement for a Virtual Private Network (VPN). These advertisements usually assert that a VPN is the only tool you need to stop cyber criminals, malware, government surveillance, and online tracking. But these advertisements vastly oversell the benefits of VPNs. The reality is that VPNs are mainly useful for one thing: routing your network connection through a different network. Many people, including EFF, thought that VPNs were also a useful tool for encrypting your traffic when you didn’t trust the network you were on, such as at a coffee shop, university, or hacker conference. But new research from Leviathan Security serves as a reminder that this may not be the case, and highlights the limited use cases for VPNs.

TunnelVision is a recently published attack method that can allow an attacker on a local network to force internet traffic to bypass your VPN and route over an attacker-controlled channel instead. This allows the attacker to see any unencrypted traffic (such as what websites you are visiting). Traditionally, corporations deploy VPNs for employees to access private company sites from other networks. Today, many people use a VPN in situations where they don’t trust their local network. But the TunnelVision exploit makes it clear that “untrusted local network” is not a threat model VPNs reliably address: they will not always protect you if you can’t trust the network you’re on.

TunnelVision exploits the Dynamic Host Configuration Protocol (DHCP) to reroute traffic outside of a VPN connection. The VPN connection is preserved, not broken, but the attacker is able to view the unencrypted traffic. Think of DHCP as giving you a nametag when you enter the room at a networking event. The host knows at least 50 guests will be in attendance and has allocated 50 blank nametags. Some nametags may be reserved for VIP guests, but the rest can be allocated to any guest who properly RSVPed. When you arrive, the host checks your name and assigns you a nametag. You may now enter the room and be identified as "Agent Smith." In the case of computers, this “nametag” is the IP address the DHCP server automatically assigns to each device on the network.
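The nametag analogy maps onto a very simplified address pool like the one below. This is a toy sketch of the allocation idea only (the class name and addresses are invented for illustration; real DHCP involves lease timers, a four-message handshake, and broadcast discovery):

```python
from ipaddress import IPv4Address

class TinyDhcpPool:
    """Toy DHCP-style pool: a fixed block of addresses, some reserved,
    the rest leased out one per requesting device."""
    def __init__(self, first: str, count: int, reserved=()):
        start = IPv4Address(first)
        self.free = [start + i for i in range(count)
                     if str(start + i) not in reserved]
        self.leases = {}  # MAC address -> leased IP

    def request(self, mac: str) -> IPv4Address:
        if mac not in self.leases:       # returning guests keep their nametag
            self.leases[mac] = self.free.pop(0)
        return self.leases[mac]

pool = TinyDhcpPool("192.168.1.100", 50, reserved=["192.168.1.101"])
ip = pool.request("aa:bb:cc:dd:ee:ff")
assert str(ip) == "192.168.1.100"
assert pool.request("aa:bb:cc:dd:ee:ff") == ip   # same device, same lease
```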

TunnelVision abuses one of the configuration options in DHCP, called Option 121, which lets a DHCP server push static routes to a targeted device; an attacker running a rogue DHCP server on the local network can use it to redirect the victim’s traffic. There have been past attacks with similar methods, like TunnelCrack, and chances are that if a VPN provider addressed TunnelCrack, they are working on verifying mitigations for TunnelVision as well.
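To make the mechanism concrete, Option 121 carries classless static routes in a compact wire format defined by RFC 3442: a one-byte prefix length, only the significant octets of the destination, then the gateway. A route covering all traffic (0.0.0.0/0, or a pair of /1 routes) pointed at an attacker’s gateway is what pulls packets out from under the VPN. A minimal sketch of the encoding (the gateway address is a made-up example):

```python
import ipaddress

def encode_option_121(routes):
    """Encode (cidr, gateway) pairs into an RFC 3442 Option 121 payload."""
    out = bytearray()
    for cidr, gateway in routes:
        net = ipaddress.ip_network(cidr)
        out.append(net.prefixlen)                        # 1 byte: prefix length
        significant = (net.prefixlen + 7) // 8           # only meaningful octets
        out += net.network_address.packed[:significant]  # compressed destination
        out += ipaddress.ip_address(gateway).packed      # 4 bytes: next hop
    return bytes(out)

# A default route via a hypothetical attacker-controlled gateway:
payload = encode_option_121([("0.0.0.0/0", "192.168.1.254")])
assert payload == bytes([0, 192, 168, 1, 254])  # /0 needs zero address octets
```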

In the words of the security researchers who published this attack method:

“There’s a big difference between protecting your data in transit and protecting against all LAN attacks. VPNs were not designed to mitigate LAN attacks on the physical network and to promise otherwise is dangerous.”

Rather than lament the many ways public, untrusted networks can leave someone vulnerable, it’s worth remembering the protections now provided by default. The internet was not originally built with security in mind, and many people have worked hard to rectify this. Today we have many other tools in our toolbox to deal with these problems. For example, web traffic is mostly encrypted with HTTPS. This does not change your IP address the way a VPN can, but it does encrypt the contents of the web pages you visit and secure your connection to a website. DNS lookups (which happen before the HTTPS connection is made) have also been a vector for surveillance and abuse, since the requested domain is still exposed at that level, and there have been wide efforts to secure and encrypt DNS as well. Encrypted DNS and HTTPS by default are now available in every major browser, closing attack vectors for snoops on the same network as you. Lastly, major browsers have implemented support for Encrypted Client Hello (ECH), which encrypts your initial connection to a website, sealing off metadata that was previously sent in cleartext.
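A rough way to reason about these layers is to ask, for a given URL, what an on-path snoop can still see. The sketch below is illustrative only (the function and its flags are invented for this example, and it deliberately ignores details like traffic analysis): HTTPS hides the path and query, encrypted DNS hides the hostname lookup, and ECH hides the hostname in the TLS handshake.

```python
from urllib.parse import urlsplit

def wire_visibility(url: str, encrypted_dns=False, ech=False):
    """Toy model: what a snoop on your network sees for an HTTPS URL."""
    parts = urlsplit(url)
    visible = {
        "destination IP": True,                  # routing always exposes this
        "hostname via DNS": not encrypted_dns,   # plaintext DNS leaks the name
        "hostname via SNI": not ech,             # ClientHello leaks it again
    }
    encrypted = {"path": parts.path, "query": parts.query}  # hidden by HTTPS
    return visible, encrypted

visible, encrypted = wire_visibility(
    "https://example.org/account/settings?token=abc",
    encrypted_dns=True, ech=True)
assert encrypted["path"] == "/account/settings"  # HTTPS hides path and query
assert not visible["hostname via DNS"] and not visible["hostname via SNI"]
```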

TunnelVision is a reminder that we need to clarify what tools can and cannot do. A VPN does not provide anonymity online and neither can encrypted DNS or HTTPS (Tor can though). These are all separate tools that handle similar issues. Thankfully, HTTPS, encrypted DNS, and encrypted messengers are completely free and usable without a subscription service and can provide you basic protections on an untrusted network. VPNs—at least from providers who've worked to mitigate TunnelVision—remain useful for routing your network connection through a different network, but they should not be treated as a security multi-tool.

Restricting Flipper is a Zero Accountability Approach to Security: Canadian Government Response to Car Hacking

On February 8, François-Philippe Champagne, the Canadian Minister of Innovation, Science and Industry, announced Canada would ban devices used in keyless car theft. The only device mentioned by name was the Flipper Zero—the multitool device that can be used to test, explore, and debug different wireless protocols such as RFID, NFC, infrared, and Bluetooth.


While it is useful as a penetration testing device, the Flipper Zero is impractical compared to other, more specialized devices for car theft. It’s possible that social media hype around the Flipper Zero has led people to believe the device offers easy hacking opportunities for car thieves*. But government officials are also consuming such hype, which leads to policies that don’t secure systems but rather impede important research exposing vulnerabilities the industry should fix. Even with Canada walking back its original outright ban, the plan to “move forward with measures to restrict the use of such devices to legitimate actors only” is troublesome for security researchers.

This is not the first government seeking to limit access to Flipper Zero, and we have explained before why this approach is not only harmful to security researchers but also leaves the general population more vulnerable to attacks. Security researchers may not have the specialized tools car thieves use at their disposal, so more general tools come in handy for catching and protecting against vulnerabilities. Broad purpose devices such as the Flipper have a wide range of uses: penetration testing to facilitate hardening of a home network or organizational infrastructure, hardware research, security research, protocol development, use by radio hobbyists, and many more. Restricting access to these devices will hamper development of strong, secure technologies.

When Brazil’s national telecoms regulator Anatel refused to certify the Flipper Zero and as a result prevented the national postal service from delivering the devices, they were responding to media hype. With a display and controls reminiscent of portable video game consoles, the compact form-factor and range of hardware (including an infrared transceiver, RFID reader/emulator, SDR and Bluetooth LE module) made the device an easy target to demonize. While conjuring imagery of point-and-click car theft was easy, citing examples of this actually occurring proved impossible. Over a year later, you’d be hard-pressed to find a single instance of a car being stolen with the device. The number of cars stolen with the Flipper seems to amount to, well, zero (pun intended). It is the same media hype and pure speculation that has led Canadian regulators to err in their judgment to ban these devices.

Still worse, law enforcement in other countries have signaled their own intentions to place owners of the device under greater scrutiny. The Brisbane Times quotes police in Queensland, Australia: “We’re aware it can be used for criminal means, so if you’re caught with this device we’ll be asking some serious questions about why you have this device and what you are using it for.” We assume other tools with similar capabilities, as well as Swiss Army Knives and Sharpie markers, all of which “can be used for criminal means,” will not face this same level of scrutiny. Just owning this device, whether as a hobbyist or professional—or even just as a curious customer—should not make one the subject of overzealous police suspicions.

It wasn’t too long ago that proficiency with the command line was seen as a dangerous skill that warranted intervention by authorities. And just as with those fears of decades past, the small grain of truth embedded in the hype and fears gives it an outsized power. Can the command line be used to do bad things? Of course. Can the Flipper Zero assist criminal activity? Yes. Can it be used to steal cars? Not nearly as well as many other (and better, from the criminals’ perspective) tools. Does that mean it should be banned, and that those with this device should be placed under criminal suspicion? Absolutely not.

We hope Canada wises up to this logic, and comes to view the device as just one of many in the toolbox that can be used for good or evil, but mostly for good.

*Though concerns have been raised about Flipper Devices' connection to the Russian state apparatus, no unexpected data has been observed escaping to Flipper Devices' servers, and much of the dedicated security and pen-testing hardware which hasn't been banned also suffers from similar problems.

Decoding the California DMV's Mobile Driver's License

March 18, 2024, 9:16 PM

The State of California is currently rolling out a “mobile driver’s license” (mDL), a form of digital identification that raises significant privacy and equity concerns. This post explains the new smartphone application, explores the risks, and calls on the state and its vendor to focus more on protection of the users. 

What is the California DMV Wallet? 

The California DMV Wallet app came out in app stores last year as a pilot, offering the ability to store and display your mDL on your smartphone, without needing to carry and present a traditional physical document. Several features in this app replicate how we currently present the physical document with key information about our identity—like address, age, birthday, driver class, etc. 

However, other features in the app provide new ways to present the data on your driver’s license. Right now, we only take out our driver’s license occasionally throughout the week. However, with the app’s QR Code and “add-on” features, the incentive for frequency may grow. This concerns us, given the rise of age verification laws that burden everyone’s access to the internet, and the lack of comprehensive consumer data privacy laws that keep businesses from harvesting and selling identifying information and sensitive personal information. 

For now, you can use the California DMV Wallet app with TSA in airports, and with select stores that have opted in to an age verification feature called TruAge. That feature generates a separate QR Code for age verification on age-restricted items in stores, like alcohol and tobacco. This is not simply a one-to-one exchange of going from a physical document to an mDL. Rather, this presents a wider scope of possible usage of mDLs that needs expanded protections for those who use them. While California is not the first state to do this, this app will be used as an example to explain the current landscape.

What’s the QR Code? 

There are two ways to present your information on the mDL: 1) a human readable presentation, or 2) a QR code. 

Scanning the QR code with a normal QR code scanner displays an alphanumeric string of text that starts with “mdoc:”. For example:

 “mdoc:owBjMS4wAY..." [shortened for brevity]

This “mobile document” (mdoc) format is defined by the International Organization for Standardization’s ISO/IEC 18013-5. The rest of the string encodes driver’s license data that has been signed by the issuer (i.e., the California DMV), encrypted, and encoded, following a mix of open and closed technical specifications and standards.
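The payload after “mdoc:” is base64-encoded CBOR. Decoding just the prefix shown in the example above (the full string is truncated in the post, so only these first bytes can be examined) reveals the start of the ISO 18013-5 structure: 0xA3 opens a three-entry CBOR map, and key 0 carries the version string "1.0".

```python
import base64

prefix = "owBjMS4w"       # the first 8 characters after "mdoc:" in the example
raw = base64.b64decode(prefix)
assert raw[0] == 0xA3     # CBOR: map with 3 entries
assert raw[1] == 0x00     # key 0 (the version field)
assert raw[2] == 0x63     # CBOR: text string of length 3
assert raw[3:6] == b"1.0" # the version value itself
```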

In the digital identity space, including mDLs, the most referenced and utilized standards are the ISO standard above, the American Association of Motor Vehicle Administrators (AAMVA) standard, and the W3C’s Verifiable Credentials (VC). These standards are often not siloed, but rather used together, since they offer directions on data formats, security, and methods of presentation that aren’t completely covered by any one of them. However, ISO and AAMVA are not open standards and are decided internally. VCs were created for digital credentials generally, not just for mDLs. These standards are relatively new and still need time to mature to address potential gaps.

The decrypted data could possibly look like this JSON blob:

         {"family_name":"Doe",
          "given_name":"John",
          "birth_date":"1980-10-10",
          "issue_date":"2020-08-10",
          "expiry_date":"2030-10-30",
          "issuing_country":"US",
          "issuing_authority":"CA DMV",
          "document_number":"I12345678",
          "portrait":"../../../../test/issuance/portrait.b64",
          "driving_privileges":[
            {
               "vehicle_category_code":"A",
               "issue_date":"2022-08-09",
               "expiry_date":"2030-10-20"
            },
            {
               "vehicle_category_code":"B",
               "issue_date":"2022-08-09",
               "expiry_date":"2030-10-20"
            }
          ],
          "un_distinguishing_sign":"USA",
          "weight":70,
          "eye_colour":"hazel",
          "hair_colour":"red",
          "birth_place":"California",
          "resident_address":"2415 1st Avenue",
          "portrait_capture_date":"2020-08-10T12:00:00Z",
          "age_in_years":42,
          "age_birth_year":1980,
          "age_over_18":true,
          "age_over_21":true,
          "issuing_jurisdiction":"US-CA",
          "nationality":"US",
          "resident_city":"Sacramento",
          "resident_state":"California",
          "resident_postal_code":"95818",
          "resident_country":"US"}

Application Approach and Scope Problems 

California decided to contract a vendor to build a wallet app rather than use Google Wallet or Apple Wallet (not to be conflated with Google and Apple Pay). A handful of other states use Google and Apple, perhaps because many people have one or the other. There are concerns about large companies being contracted by the states to deliver mDLs to the public, such as their controlling the public image of digital identity and device compatibility.  

This isn’t the first time a state contracted with a vendor to build a digital credential application without much public input or consensus. For example, New York State contracted with IBM to roll out the Excelsior app at the beginning of COVID-19 vaccination availability. At the time, EFF raised privacy and other concerns about this form of digital proof of vaccination. The state ultimately paid the vendor a staggering $64 million. While initially proprietary, the application later opened to the SMART Health Card standard, which is based on the W3C’s VCs. The app was sunset last year. It’s not clear what effect it had on public health, but it’s good that it wound down as social distancing measures relaxed. The infrastructure should be dismantled, and the persistent data should be discarded. If another health crisis emerges, at least a law in New York now partially protects the privacy of this kind of data. The New York State Legislature is currently working on a bill around mDLs after a round-table on the state’s potential pilot. However, the New York DMV has already entered into a $1.75 million contract with the digital identity vendor IDEMIA. It will be a race to see whether protections are established before pilot deployment.

Scope is also a concern with California’s mDL. The state contracted with Spruce ID to build this app. The company states that its purpose is to empower “organizations to manage the entire lifecycle of digital credentials, such as mobile driver’s licenses, software audit statements, professional certifications, and more.” In the “add-ons” section of the app, TruAge’s age verification QR code is available.  

Another issue is selective disclosure: the technical ability for the identity credential holder to choose which information to disclose to a person or entity asking for information from their credential. This is a long-time promise from enthusiasts of digital identity. The most used example is verification that the credential holder is over 21, without showing anything else about the holder, such as the name and address that appear on the face of a traditional driver’s license. But the California DMV Wallet app offers few options for selective disclosure:

  • The holder has to agree to TruAge’s terms and service and generate a separate TruAge QR Code.  
  • There is already an mDL reader option for age verification for the QR Code of an mDL. 
  • There is no current option for the holder to use selective disclosure for their mDL. But it is planned for future release, according to the California DMV via email. 
  • Lastly, if selective disclosure is coming, this makes the TruAge add-on redundant. 

The over-21 example is only as meaningful as its implementation; including the convenience, privacy, and choice given to the mDL holder. 
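At its simplest, selective disclosure over the decrypted mDL data shown earlier is a filtering step: the holder releases only the fields a verifier asks for, instead of the whole record. The sketch below shows just that filtering (real mdocs sign each field separately so a disclosed subset can still be cryptographically verified; the field values are from the example blob above, abbreviated):

```python
FULL_MDL = {
    "family_name": "Doe", "given_name": "John",
    "birth_date": "1980-10-10", "resident_address": "2415 1st Avenue",
    "age_over_21": True,
}

def disclose(record: dict, requested: list) -> dict:
    """Return only the requested fields the holder agrees to share."""
    return {k: record[k] for k in requested if k in record}

# A bar's verifier asks only for the age threshold:
bar_presentation = disclose(FULL_MDL, ["age_over_21"])
assert bar_presentation == {"age_over_21": True}
assert "resident_address" not in bar_presentation  # nothing else leaves the phone
```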

TruAge appears to be piloting its product in at least 6 states. With “add-ons”, the scope of the wallet app indicates expansion beyond simply presenting your driver’s license. According to the California DMV’s Office of Public Affairs via email: 

“The DMV is exploring the possibility of offering additional services including disabled person parking placard ID, registration card, vehicle ownership and occupational license in the add-ons in the coming months.” 

This clearly displays how the scope of this pilot may expand and how the mDL could eventually be housed within an entire ecosystem of identity documentation. There are privacy preserving ways to present mDLs, like unlinkable proofs. These mechanisms help mitigate verifier-issuer collusion from establishing if the holder was in different places with their mDL. 

Privacy and Equity First 

At the time of this post, about 325,000 California residents have the pilot app. We urge states to take their time with creating mDLs, and even wait for more privacy-preserving verification methods to mature. Deploying mDLs should prioritize holder control, privacy, and transparency. The speed of these pilots is possibly influenced by other factors, like the push for mDLs from the U.S. Department of Homeland Security.

Digital wallet initiatives like eIDAS in the European Union are forging conversations on what user control mechanisms might look like. These might include, for example, “bringing your own wallet” and using an “open wallet” that is secure, private, interoperable, and portable. 

We also need governance that properly limits law enforcement access to information collected by mDLs, and to other information in the smartphones where holders place their mDLs. Further, we need safeguards against these state-created wallets being wedged into problematic realms like age verification mandates as a condition of accessing the internet. 

We should be speed running privacy and provide better access for all to public services and government-issued documentation. That includes a right to stick with traditional paper or plastic identification, and accommodation of cases where a phone may not be accessible.  

We urge the state to implement selective disclosure and other privacy preserving tools. The app is not required anywhere. It should remain that way no matter how cryptographically secure the system purports to be, or how robust the privacy policies. We also urge all governments to remain transparent and cautious about how they sign on vendors during pilot programs. If a contract takes away the public’s input on future protections, then that is a bad start. If a state builds a pilot without much patience for privacy and public input, then that is also turbulent ground for protecting users going forward.  

Just because digital identity may feel inevitable, doesn’t mean the dangers have to be. 

The Last Mile of Encrypting the Web: 2023 Year in Review

December 25, 2023, 12:21 PM

At the start of 2023, we sunsetted the HTTPS Everywhere web extension. It encrypted browser communications with websites and made sure users benefited from the protection of HTTPS wherever possible. HTTPS Everywhere ended because all major browsers now offer the functionality to make HTTPS the default. This is due to the grand efforts of the many technologists and advocates involved with Let’s Encrypt, HTTPS Everywhere, and Certbot over the last 10 years.

The immense impact of this “Encrypt the Web” initiative has translated into default “security for everybody,” without each user having to take on the burden of finding out how to enable encryption. The “hacker in a cafe” threat is no longer as dangerous as it once was, when the low technical bar of passive network sniffing of unencrypted public WiFi let bad actors see much of the online activity of people at the next table. Police have to work harder as well to inspect user traffic. While VPNs still serve a purpose, they are no longer necessary just to encrypt your traffic on the web.

“The Last Mile”

Firefox reports that over 80% of the web is encrypted, and Google reports 95% over all of its services. The last 5%-20% exists for several reasons:

  • Some websites are old and abandoned.
  • A small percentage of websites intentionally left their sites at HTTP.
  • Some mobile ecosystems do not use HTTPS by default.
  • HTTPS may still be difficult to obtain for accessibility reasons.

[Figure: plot of the share of encrypted web traffic over time]

To the last point, tools like Certbot could be more accessible. For places where censors might be blocking it, we now have a Tor-accessible .onion address available for certbot.eff.org. (We’ve done the same for eff.org and ssd.eff.org, EFF’s guides for individuals and organizations to protect themselves from surveillance and other security threats.)

Let’s Encrypt made much of this possible by serving as a free and easily supported Certificate Authority (CA) that has issued TLS certificates to 363 million websites. Let’s Encrypt differs from other prominent CAs. For example, Let’s Encrypt from the start encouraged short-lived certificates that were valid for 90 days, while other CAs were issuing certificates with lifespans of two years. Shorter lifespans encouraged server administrators to automate, which in turn encouraged encryption that is consistent, agile, and fast. The CA/B Forum, a voluntary consortium of CAs, browser companies, and other partners that maintain public key infrastructure (PKI), adopted ballot SC-063, which allows 10-day certificates and, beginning in 2026, 7-day certificates. This pivotal change will make the ecosystem safer, reduce the burden on partners that manage revocation metadata, encourage automation, and push the ecosystem to encrypt faster, with less overhead and better tools.
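The value of shorter lifetimes is partly simple arithmetic: a compromised or mis-issued certificate stays trusted at most until it expires, and automated clients renew well before that. The sketch below uses Certbot’s well-known default of renewing when 30 days remain on a 90-day certificate, generalized to a renew-at-two-thirds rule for other lifetimes (that generalization is our assumption for illustration, not a documented policy):

```python
def renewal_schedule(lifetime_days: int) -> dict:
    """Toy model: renew when roughly one third of the lifetime remains."""
    renew_at = lifetime_days - lifetime_days // 3
    return {"lifetime": lifetime_days,
            "renew_after_days": renew_at,
            "max_exposure_days": lifetime_days}  # worst case: trusted until expiry

for days in (730, 90, 10, 7):   # two-year certs down to the 7-day proposal
    s = renewal_schedule(days)
    assert s["renew_after_days"] < s["lifetime"]

# The familiar Certbot cadence: 90-day certs renewed after 60 days.
assert renewal_schedule(90)["renew_after_days"] == 60
```

Note how the worst-case exposure window shrinks from two years to a week as lifetimes drop, which is the safety argument behind SC-063.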

Chrome will require CAs in its root store (a trusted list of CAs allowed to secure traffic) to support the Automatic Certificate Management Environment (ACME) protocol. While Google steers this shift with ACME, the protocol is not a Google product or part of the company’s corporate agenda. Rather, ACME is a beneficial protocol that every CA should adopt, even without a “big tech” mandate to do so.

Chrome also expanded its HTTPS-First Mode to all users by default. We are glad to see the continued push for HTTPS by default, without users needing to turn it on themselves. HTTPS “out of the box” is the ideal to strive for, far better than the current fragmented approach of requiring users to activate “enable HTTPS” settings in each major browser.

While this year marks a major victory for the “Encrypt the Web” initiative, we still need to make sure the backbone infrastructure for HTTPS continues to work in the interest of users. So for two years we have been monitoring eIDAS, the European Union’s digital identity framework. Its Article 45 requires browsers to display website identity using Qualified Web Authentication Certificates (QWACs) issued by government-mandated Root Certificate Authorities. These measures hinder browsers from responding if one of these CAs acts inappropriately or has bad practices around issuing certificates. Final votes on eIDAS will occur in the upcoming weeks. While some of the proposal’s recitals suggest that browsers should be able to respond to a security event, that language is not strong enough to allay our concerns about the proposal’s most troubling text. This framework enables EU governments to snoop on their residents’ web traffic, which would roll back many of the web security and privacy gains of the past decade to a new, yet unfortunately familiar, fragmented state. We will fight to make sure HTTPS is not set up for failure in the EU.

In the movement to make HTTPS the default for everyone, we also need to be vigilant about how mobile devices handle web traffic. Too often, mobile apps still send cleartext (insecure HTTP) requests. So the next fight for “HTTPS Everywhere” should be HTTPS by default for app requests, without users needing to install a VPN.
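Part of this fight is already in app developers’ hands: since Android 9, apps targeting modern API levels block cleartext traffic by default, and a developer can enforce the same policy explicitly with Android’s network security configuration. A minimal sketch of such a config file (the file name and placement follow Android’s documented convention):

```xml
<!-- res/xml/network_security_config.xml -->
<network-security-config>
    <!-- Refuse all cleartext (plain HTTP) traffic app-wide. -->
    <base-config cleartextTrafficPermitted="false" />
</network-security-config>
```

The config takes effect once the app’s manifest references it via the `android:networkSecurityConfig="@xml/network_security_config"` attribute on the `<application>` element.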

The last stretch to 100% encryption will make the web ecosystem agile and bold enough to (1) ensure HTTPS as much as possible, and (2) block HTTP by default. Reaching 100% is possible and attainable from here, even if a few people out there intentionally interact with an HTTP-only site once or twice a session.

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Sketchy and Dangerous Android Children’s Tablets and TV Set-Top Boxes: 2023 in Review

You may want to save your receipts if you gifted any low-end Android TV set-top boxes or children's tablets to a friend or loved one this holiday season. In a series of investigations this year, EFF researchers confirmed the existence of dangerous malware on set-top boxes manufactured by AllWinner and RockChip, and discovered sketchyware on a tablet marketed for kids from the manufacturer Dragon Touch. 

Though more reputable Android devices are available for watching TV and keeping the little ones occupied, they come with a higher price tag. This means that those who can afford such devices get more assurance of their security and privacy, while those who can only afford cheaper devices from little-known manufacturers are put at greater risk.

The digital divide could not be more apparent. Without a clear warning label, consumers who cannot afford devices from well-known brands such as Apple, Amazon, or Google are being sold devices which come out-of-the-box ready to spy on their children. This malware opens their home internet connection as a proxy to unknown users, and exposes them to legal risks. 

Traditionally, if a device like a vacuum cleaner were found to be defective or dangerous, we would expect resellers to pull it from the department store floor and, to the best of their ability, notify customers who had already bought the item and brought it into their homes. Yet we observed that the devices in question continued to be sold by online vendors months after news of their defects was widely circulated.

After our investigation of the set-top boxes, we urged the FTC to take action against the vendors who sell devices known to be riddled with malware. Amazon and AliExpress were named in the letter, though more vendors are undoubtedly still selling these devices. Not to spoil the holiday cheer, but if you have received one of these devices, you may want to ask for another gift and have the item refunded.

In the case of the Dragon Touch tablets, it was apparent that this issue went beyond just Android TV boxes and even encompassed budget Android devices specifically marketed for children. The tablet we investigated had an outdated pre-installed parental controls app that was labeled as adware, leftover remnants of malware, and sketchy update software. It’s clear this issue reached a wide variety of Android devices and it should not be left up to the consumer to figure this out. Even for devices on the market that are “normal,” there still needs to be work done by the consumer just to properly set up devices for their kids and themselves. But there’s no total consumer-side solution for pre-installed malware and there shouldn’t have to be.

Compared with the products of yesteryear, our “smart” and IoT devices carry a new set of risks to our security and privacy. Yet we are confident that better digital product testing, along with regulatory oversight, can go a long way toward mitigating these dangers. We applaud efforts such as Mozilla’s Privacy Not Included, which catalogs just how well our devices protect our data, since as it currently stands it is up to us as consumers to assess the risks ourselves and take appropriate steps.

How to Secure Your Kid's Android Device

4 December 2023 at 16:40

After finding risky software on an Android (Google’s mobile operating system) device marketed for kids, we wanted to put together some tips to help better secure your kid's Android device (and even your own). Despite the dangers that exist, there are many things that can be done to at least mitigate harm and assist parents and children. There are also safety tools that your child can use at their own discretion.

There's a handful of different tools, settings, and apps that can help better secure your kid’s device, depending on their needs. We've broken them down into four categories: Parental Monitoring, Security, Safety, and Privacy.

Note: If you do not see these settings in your Android device, it may be out of date or a heavily modified Android distribution. This is based on Android 14’s features.

Parental Monitoring

Google has a free parental controls app called Family Link, which gives you tools to establish screen time limits, approve app installs, and more; there’s no need to install a third-party application. Family Link sometimes comes pre-installed on devices marketed for children, but it is also available in the Google Play Store. This is helpful given that some third-party parental safety apps have been caught selling children’s data and have been involved in major data leaks. Also, having a discussion with your child about these controls can provide something technology can’t: trust and understanding.

Security

There are a few basic security steps you can take on both your own Google account and your child’s device to improve their security.

  • If you control your child's Google account with your own, you should lock down your own account as best as possible. Setting up two-factor authentication is a simple thing you can do to avoid malicious access to your child’s account via yours.
  • Encrypt their device by setting a passcode (supported on Android 6 and later).

Safety

You can also enable safety measures your child can use if they are traveling around with their device.

  • Safety Check allows a device user to automatically reach out to established emergency contacts if they feel they are in an unsafe situation. If they do not mark themselves “safe” after the safety check duration ends, emergency location sharing with their emergency contacts will commence. The safety check reason and duration (up to 24 hours) are set by the device user.
  • Emergency SOS assists in triggering emergency actions like calling 911, sharing your location with your emergency contacts, and recording video.
  • If the "Unknown tracker alerts" setting is enabled, a notification will trigger on the user's device if an unknown AirTag is moving with them (this feature only works with AirTags currently, but Google says it will expand to other trackers in the future). Bluetooth must be turned on for this feature to function properly.

Privacy

There are also some settings you can configure to deter tracking of your child’s online activities by ad networks and data brokers.

  • Delete the device’s advertising ID.
  • Install a more privacy-preserving browser like Firefox, DuckDuckGo, or Brave. While Chrome is the default on Android and has decent security measures, it does not allow web extensions in its mobile browser, preventing the use of helpful tools like Privacy Badger that block ad tracking.
  • Review the privacy permissions on the device to ensure no apps are accessing important features like the camera, microphone, or location without your knowledge.

For more technically savvy parents, Pi-hole (DNS filtering software) is very useful for automatically blocking ad-related network requests. During our investigation of a kid’s tablet, it blocked most of the shady requests the malware made to domains on major ad blocklists. An added benefit is that you can point many devices at a single Pi-hole setup.
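To illustrate the idea, here is a minimal Python sketch of the blocklist-style DNS matching that tools like Pi-hole perform; the blocklist entries are hypothetical examples of ours, while real deployments match against large curated lists of ad and tracker domains.

```python
# Conceptual sketch of blocklist-style DNS filtering: a domain is
# blocked if it, or any parent domain, appears on the blocklist.
# These entries are hypothetical, for illustration only.
BLOCKLIST = {"ads.example.net", "tracker.example.com"}

def is_blocked(domain: str) -> bool:
    labels = domain.lower().rstrip(".").split(".")
    # Check the domain itself and every parent: a.b.c -> a.b.c, b.c, c
    return any(".".join(labels[i:]) in BLOCKLIST for i in range(len(labels)))

print(is_blocked("cdn.ads.example.net"))  # parent domain is listed
print(is_blocked("example.org"))
```

Because every device on the network resolves DNS through the filter, this blocks ad and tracker requests without installing anything on the devices themselves.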

DuckDuckGo’s App Tracking Protection is an alternative to Pi-hole that doesn’t require as much technical overhead. However, because it inspects all network traffic coming from the device, it will ask to be set up as a VPN profile when enabled. Android requires any app that inspects traffic this way to be set up like a VPN, and it only allows one VPN connection at a time.

It can be a source of stress to set up a new device for your child. However, taking some time to set up privacy and security settings can help you and your child discuss technology from a more informed perspective for the both of you.

Low Budget Should Not Mean High Risk: Kids' Tablet Came Preloaded with Sketchyware

14 November 2023 at 17:04

It’s easy to get Android devices from online vendors like Amazon at different price points. Unfortunately, at the lower budgets it is also easy to end up with an Android device carrying malware. Several factors contribute to this: multiple devices manufactured in the same facility, a lack of security standards when choosing components, and a lack of quality assurance and scrutiny by the vendors that sell these devices. We investigated a tablet bought from the online vendor Amazon that we suspected of carrying malware: a Dragon Touch KidzPad Y88X 10 kid’s tablet. As of this post, the tablet in question is no longer listed on Amazon, although it was available for the majority of this year.

Dragon Touch KidzPad Y88X 10

It turns out malware was present, with an added bonus of pre-installed riskware and a very outdated parental control app. This is a major concern since this is a tablet marketed for kids.

Parents have plenty of worry and concern about how their kids use technology as it is. Ongoing conversations and negotiations about the time spent on devices happen in many households. Potential malware or riskware should not be a part of these concerns just because you purchased a budget Android tablet for your child. It just so happens that some of the parents at EFF conduct security research. But this is not what it should take to keep your kid safe.

“Stock Android”

To understand this issue better, it’s useful to know what “stock Android” means and how manufacturers approach choosing an OS. The Android operating system is open sourced by Google and officially known as the Android Open Source Project (AOSP). The source code is stripped down and doesn’t even include Google apps or the Google Play Store. Most Android phones or tablets you purchase run AOSP with layers of customization, or a “skinned” version of AOSP. Even Google’s current flagship phone, the Pixel, does not come with stock Android.

Even though some custom Android distributions, or ROMs, come with useful features, others come with “bloatware” or unwanted apps. For example, in 2019 when Samsung pre-installed the Facebook app on its phones, the only option was to “disable” the app. Worse, in some cases custom ROMs can come with pre-installed malware. Android OEMs (original equipment manufacturers) can pre-install apps that have high-level privileges and may not be as obvious as an icon you can see on your home screen. It’s not just apps, though: new features provided with AOSP may be severely delayed in custom builds if the device manufacturer isn’t diligent about porting them in, whether because of hardware limitations or because updates simply aren’t a priority.

Screen Time for Sketchyware

Similar to an Android TV box we looked into earlier this year, we found the now notorious Corejava malware directories on the Dragon Touch tablet. Unlike that Android TV box, this tablet didn’t come rooted. However, we could see that the directories /data/system/Corejava and /data/system/Corejava/node were present on the device, indicating Corejava was active in this tablet’s firmware.
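For readers who capture their own device listings (for example, with adb), a minimal Python sketch of this kind of indicator check might look like the following; the helper function and sample listing are our own illustration, not a tool used in this investigation.

```python
# Hypothetical helper: given a capture of a device's filesystem listing
# saved to a string, flag the Corejava indicator paths described above.
CORE_JAVA_INDICATORS = (
    "/data/system/Corejava",
    "/data/system/Corejava/node",
)

def find_corejava_indicators(listing: str) -> list[str]:
    # Treat each line of the capture as one path entry.
    present = set(line.strip() for line in listing.splitlines())
    return [path for path in CORE_JAVA_INDICATORS if path in present]

sample = "/data/system/Corejava\n/data/system/Corejava/node\n/data/system/users"
print(find_corejava_indicators(sample))
```

The presence of either path is only an indicator, not proof of active malware, which is why we also watched the tablet’s network behavior.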

We didn’t originally suspect this malware’s presence until we saw links to other manufacturers and odd requests made from the tablet, which prompted us to take a closer look. We first booted up this Dragon Touch tablet in May 2023, after the Command and Control (C2) servers that Corejava depends on were taken down, so any attempts to download malicious payloads, if active, wouldn’t work (for now). Given the lack of “noise” from the device, we suspect that this malware indicator is, at minimum, a leftover remnant of “copied homework” from hasty production, or at worst, left in place for possible future activity.

The tablet also came preloaded with Adups (which was also found on the Android TV boxes) in the form of “firmware over the air” (FOTA) update software, via an application called “Wireless Update.”

App list that contains the app "Wireless Update"

Adups has a history of being malware, but “clean” versions exist, and one of those was on this tablet. Given its history and its extensive system-level permissions to download whatever application it wants from the Adups servers, it still poses a concern. Adups comes preinstalled in this Dragon Touch firmware, so if you factory reset the device, the app will return. There’s no way to uninstall or disable this variant of Adups without technical knowledge and comfort with the command line. Using OTA software with such a fraught history is a very questionable decision for a children’s tablet.

Connecting the Dots

The connection between the infected Dragon Touch and the Android TV box we previously investigated was closer than we initially thought. After seeing a customer review for an Android TV box for a company at the same U.S. address as Dragon Touch, we discovered Dragon Touch is owned and trademarked by one company that also owns and distributes other products under different brand names.

This group, which registered multiple brands and shared an address with Dragon Touch, sold the same tablet we looked at in other online markets, like Walmart. The same entity apparently once sold the T95Z model of Android TV boxes under the brand name “Tablet Express,” along with devices like the Dragon Touch tablet. The T95Z was in the family of TV boxes investigated after researchers started taking a closer look at these types of devices.

With the widespread use of these devices, it’s safe to say that any Android devices attached to these sellers should be met with scrutiny.

Privacy Issues

The Dragon Touch tablet also came with a very outdated version of the KIDOZ app pre-installed. This app touts being “COPPA Certified” and claims it “turns phones & tablets into kids friendly devices for playing and learning with the best kids’ apps, videos and online content.” This version operates as a kind of mini operating system where you can download games and apps and configure parental controls within the app.

We noticed the referrer for this app was “ANDROID_V4_TABLET_EXPRESS_PRO_GO.” “Tablet Express” is no longer an operational company, so it appears Dragon Touch repurposed an older version of the KIDOZ app. KIDOZ only distributes its app to device manufacturers to preload on devices for kids; it’s not in the Google Play Store.

This version of the app still collects and sends data to “kidoz.net” on usage and the physical attributes of the device. This includes information like device model, brand, country, timezone, screen size, view events, click events, log time of events, and a unique “KID ID.” In an email, KIDOZ told us that the “calls remain unused even though they are 100% certified (COPPA),” in reference to the information sent to their servers from the app. The older version still has an app store of very outdated apps as well. For example, we found a drawing app, “Kids Paint FREE,” attempting to send exact GPS coordinates to an ad server. The ad server this app calls no longer exists, but some of the apps in the KIDOZ store are still operational despite having deprecated code. This leakage of device-specific information, primarily over insecure HTTP requests, can be targeted by bad actors who want to siphon information either on the device or by obtaining these defunct domains.
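As an illustration of the kind of triage involved, here is a minimal Python sketch that flags cleartext requests in a list of captured URLs; the URLs shown are hypothetical placeholders, not the actual endpoints we observed.

```python
from urllib.parse import urlparse

# Sketch: flag cleartext (plain HTTP) requests in a list of captured
# request URLs. The example URLs are hypothetical placeholders.
def insecure_requests(urls: list[str]) -> list[str]:
    return [u for u in urls if urlparse(u).scheme == "http"]

captured = [
    "http://example-ad-server.invalid/track?kid_id=123",
    "https://example.com/api/ok",
]
print(insecure_requests(captured))
```

Any request in the flagged list is readable and tamperable by anyone on the network path, which is exactly why HTTP leakage of a child’s device details is so concerning.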

Several security vendors have labeled the version of the KIDOZ app we reviewed as adware. The current version of KIDOZ is less of an issue since the internal app store was removed, so it's no longer labeled as adware. Thankfully, you can uninstall this version of KIDOZ. KIDOZ does offer the latest version of their app to OEM manufacturers, so ultimately the responsibility lies with Dragon Touch. When we reached out to KIDOZ, they said they would follow up with various OEMs to offer the latest version of the app.

KIDOZ apps asking for excessive permissions

Simple racing games from the old KIDOZ app store asking for location and contacts.

Malware and riskware come in many different forms. The burden of remedy for pre-installed malware and sketchyware falling to consumers is absolutely unacceptable. We'd like to see some basic improvements for how these devices marketed for children are sold and made:

  • There should be better security benchmarks for devices sold in large online markets, especially devices packaged to appear safe for kids.
  • If security researchers find malware on a device, there should be a more effective path to remove these devices from the market and alert customers.
  • There should be a minimum standard for Android OEM devices sold, requiring a baseline of the security and privacy features available in AOSP. For instance, this Dragon Touch kid’s tablet is running Android 9, which is now five years old; Android 14 is the latest stable OS at the time of this report.

Devices with software that has a malicious history, and out-of-date apps that leak children’s data, create a larger scope of privacy and security problems that deserve closer scrutiny than they get now. It took over 25 hours to assess all the issues with this one tablet. Since this was a custom Android OEM build, the only possible source of documentation was the company, and there wasn’t much. We were left to look instead at the breadcrumbs left on the firmware image, such as custom system-level apps, chip-specific quirks, and pre-installed applications. In this case, following the breadcrumbs allowed us to make the needed connections about how this device was made and the circumstances that led to the sketchyware on it. Most parents aren’t security researchers and do not have the time, will, or energy to think about these types of problems, let alone fix them. Online vendors like Amazon and Walmart should start proactively catching these issues and invest in better quality and source checks for the many consumer electronics on their marketplaces.

Investigated Apps, Logs, and Tools List:

Tools:

  • Android Debug Bridge (adb) and Android Studio for shell and emulation.
  • Logcat for app activity on device.
  • MOBSF for initial APK scans.
  • JADX GUI for static analysis of APKs.
  • Pi-hole for DNS requests from devices.
  • VirusTotal for graphing connections to suspicious domains and APKs.

EFF Director of Investigations Dave Maass contributed research to this report.

Privacy Advocates to TSA: Slow Down Plans for mDLs

18 October 2023 at 17:08

A digital form of identification should have the same privacy and security protections as physical ones, and then some, because the standards governing them are so new and untested. This is at the heart of comments EFF and others submitted recently. Why now? In 2021, the DHS issued a call for comments on mobile driver’s licenses (mDLs). Since then, the Transportation Security Administration (TSA) has begun a process to make mDLs an acceptable form of identification at airports, and more states have adopted mDLs, with either a state-sponsored app or Apple and Google Wallet.

With the TSA’s proposed mDL rules, we ask: what’s the hurry? The agency’s rush to mDLs is ill-advised. For example, many mDL privacy safeguards are not yet well thought out, the standards referenced are not generally accessible to the public, and the scope of mDLs will reach beyond the context of an airport security line.

And so, EFF submitted comments with the American Civil Liberties Union (ACLU), Center for Democracy & Technology (CDT), and Electronic Privacy Information Center (EPIC) to the TSA. We object to the agency’s proposed rules for waiving current REAL ID regulations for mobile driver’s licenses. Such premature federal action can undermine privacy, information security, democratic control, and transparency in the rollout of mDLs and other digital identification.

Even though standards bodies like the International Organization for Standardization (ISO) have frameworks for mDLs, they do not address various issues, such as an mDL potentially “phoning home” every time it is scanned. Privacy safeguards are still lacking, and it is left up to each state to implement them in its own way. With the TSA’s proposed waiver process, mDL development will likely be even more fractured, with some implementations better than others, as happened with digital vaccine credentials.

Another concern is that the standards referenced in the TSA’s proposed rules were developed by private, closed-off groups like the American Association of Motor Vehicle Administrators (AAMVA) and the ISO process that generated its specification 18013-5:2021. These standards have not been informed by enough transparency and public scrutiny. Moreover, there are other, more openly discussed standards that could improve interoperability. The lack of guidance around provisioning, storage, and privacy-preserving approaches is also a major cause for concern. Privacy should not be an afterthought, and we should not follow the “fail fast” model with such sensitive information.

Considering the mission and methods of the TSA, the agency should not be at the helm of creating nationwide mDL rules. That could lead to a national digital identity system, which EFF has long opposed, and would extend the agency’s reach far outside the airport.

Well-meaning intentions to let states “innovate” aside, mDLs done slowly and right are a bigger win than mDLs done fast and potentially harmfully. Privacy safeguards need innovation, too, and the privacy risk is immense when it comes to digital documentation.
