
Creators of This Police Location Tracking Tool Aren't Vetting Buyers. Here's How To Protect Yourself

November 8, 2024 at 20:13

404 Media, along with Haaretz, Notus, and Krebs On Security recently reported on a company that captures smartphone location data from a variety of sources and collates that data into an easy-to-use tool to track devices’ (and, by proxy, individuals’) locations. The dangers that this tool presents are especially grave for those traveling to or from out-of-state reproductive health clinics, places of worship, and the border.

The tool, called Locate X, is run by a company called Babel Street. Locate X is designed for law enforcement, but an investigator working with Atlas Privacy, a data removal service, was able to gain access to Locate X by simply asserting that they planned to work with law enforcement in the future.

With an incoming administration adversarial to those most at risk from location tracking using tools like Locate X, the time is ripe to bolster our digital defenses. Now more than ever, attorneys general in states hostile to reproductive choice will be emboldened to use every tool at their disposal to incriminate those exerting their bodily autonomy. Locate X is a powerful tool they can use to do this. So here are some timely tips to help protect your location privacy.

First, a short disclaimer: these tips provide some level of protection to mobile device-based tracking. This is not an exhaustive list of techniques, devices, or technologies that can help restore one’s location privacy. Your security plan should reflect how specifically targeted you are for surveillance. Additional steps, such as researching and mitigating the on-board devices included with your car, or sweeping for physical GPS trackers, may be prudent steps which are outside the scope of this post. Likewise, more advanced techniques such as flashing your device with a custom-built privacy- or security-focused operating system may provide additional protections which are not covered here. The intent is to give some basic tips for protecting yourself from mobile device location tracking services.

Disable Mobile Advertising Identifiers

Services like Locate X are built atop an online advertising ecosystem that incentivizes collecting troves of information from your device and delivering it to platforms to micro-target you with ads based on your online behavior. One linchpin of this system is the unique identifier, such as the mobile advertising identifier (MAID), that connects the information (in this case, location) delivered to one app or website at a certain point in time with the information delivered to a different app or website at the next. Essentially, MAIDs allow advertising platforms and the data brokers they sell to to “connect the dots” between an otherwise disconnected scatterplot of points on a map, resulting in a cohesive picture of the movement of a device through space and time.
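To make the “connect the dots” role of a MAID concrete, here is a small illustrative Python sketch. The data, field layout, and the trajectories helper are hypothetical, not any broker’s actual schema or code; the point is only how a shared identifier turns scattered pings into a timeline:

```python
from collections import defaultdict

# Hypothetical location "pings" as a broker might receive them from
# many different apps: (advertising ID, unix timestamp, lat, lon).
pings = [
    ("maid-1234", 1700000300, 40.7413, -73.9891),
    ("maid-5678", 1700000100, 34.0522, -118.2437),
    ("maid-1234", 1700000100, 40.7128, -74.0060),
    ("maid-1234", 1700000500, 40.7580, -73.9855),
]

def trajectories(pings):
    """Group pings by advertising ID and sort each group by time,
    turning a scatterplot of points into per-device movement paths."""
    by_device = defaultdict(list)
    for maid, ts, lat, lon in pings:
        by_device[maid].append((ts, lat, lon))
    for path in by_device.values():
        path.sort()  # chronological order
    return dict(by_device)

paths = trajectories(pings)
# The three "maid-1234" pings, reported by different apps at different
# times, now read as one device moving through the city over time.
```

With the identifier removed or reset, the same pings collapse back into unlinked points, which is exactly why disabling the MAID is worthwhile.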

As a result of significant pushback by privacy advocates, both Android and iOS now provide ways to prevent advertising identifiers from being delivered to third parties. As we described in a recent post, you can do this on Android by following these steps:

With the release of Android 12, Google began allowing users to delete their ad ID permanently. On devices that have this feature enabled, you can open the Settings app and navigate to Security & Privacy > Privacy > Ads. Tap “Delete advertising ID,” then tap it again on the next page to confirm. This will prevent any app on your phone from accessing it in the future.

The Android opt out should be available to most users on Android 12, but may not be available on older versions. If you don’t see an option to “delete” your ad ID, you can use the older version of Android’s privacy controls to reset it and ask apps not to track you.

And on iOS:

Apple requires apps to ask permission before they can access your IDFA. When you install a new app, it may ask you for permission to track you.

Select “Ask App Not to Track” to deny it IDFA access.

To see which apps you have previously granted access to, go to Settings > Privacy & Security > Tracking.

In this menu, you can disable tracking for individual apps that have previously received permission. Only apps that have permission to track you will be able to access your IDFA.

You can set the “Allow Apps to Request to Track” switch to the “off” position (the slider is to the left and the background is gray). This will prevent apps from asking to track you in the future. If you have granted apps permission to track you in the past, this will prompt you to ask those apps to stop tracking as well. You also have the option to grant or revoke tracking access on a per-app basis.

Apple has its own targeted advertising system, separate from the third-party tracking it enables with IDFA. To disable it, navigate to Settings > Privacy > Apple Advertising and set the “Personalized Ads” switch to the “off” position to disable Apple’s ad targeting.

Audit Your Apps’ Trackers and Permissions

In general, the more apps you have, the more intractable your digital footprint becomes. A separate app you’ve downloaded for flashlight functionality may also come pre-packaged with trackers delivering your sensitive details to third parties. That’s why it’s advisable to limit the number of apps you download and instead use your pre-existing apps or operating system to, say, find the bathroom light switch at night. It isn’t just good for your privacy: every new app you download also increases your “attack surface,” the set of possible paths hackers might use to compromise your device.

We get it though. Some apps you just can’t live without. For these, you can at least audit what trackers the app communicates with and what permissions it asks for. Both Android and iOS have a page in their Settings apps where you can review permissions you've granted apps. Not all of these are only “on” or “off.” Some, like photos, location, and contacts, offer more nuanced permissions. It’s worth going through each of these to make sure you still want that app to have that permission. If not, revoke or dial back the permission. To get to these pages:

On Android: Open Settings > Privacy & Security > Privacy Controls > Permission Manager

On iPhone: Open Settings > Privacy & Security.

If you’re inclined to do so, there are tricks for further research. For example, you can look up the trackers embedded in Android apps using an excellent service called Exodus Privacy. As of iOS 15, you can check on the device itself by turning on the system-level app privacy report in Settings > Privacy > App Privacy Report. From that point on, browsing to that menu will allow you to see exactly what permissions an app uses, how often it uses them, and what domains it communicates with. You can investigate any given domain by pasting it into a search engine and seeing what’s been reported about it. Pro tip: to exclude results from that domain itself and only include what other sites say about it, many search engines, including Google, support the syntax -site:www.example.com.
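For the technically inclined, an Android permission audit can also be scripted over adb: adb shell dumpsys package <package> prints, among much else, each runtime permission with a granted flag. The exact output format varies across Android versions and OEMs, so the parser below is a hedged sketch run against a sample of that output, not a guaranteed-stable interface:

```python
import re

# Sample of the "runtime permissions" section roughly as dumpsys
# prints it; real output varies by Android version and OEM.
SAMPLE = """\
    runtime permissions:
      android.permission.ACCESS_FINE_LOCATION: granted=true
      android.permission.CAMERA: granted=false
      android.permission.READ_CONTACTS: granted=true
"""

def granted_permissions(dumpsys_output):
    """Return the permissions reported as granted=true."""
    pattern = re.compile(r"(\S+): granted=(true|false)")
    return sorted(
        name
        for name, state in pattern.findall(dumpsys_output)
        if state == "true"
    )

# On a real device you would feed it live output instead, e.g.:
#   out = subprocess.run(["adb", "shell", "dumpsys", "package", pkg],
#                        capture_output=True, text=True).stdout
print(granted_permissions(SAMPLE))
```

Anything in that list you don’t recognize or don’t want is a candidate for revocation in the Settings screens described above.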

Disable Real-Time Tracking with Airplane Mode

To prevent an app from having network connectivity and sending out your location in real-time, you can put your phone into airplane mode. Although it won’t prevent an app from storing your location and delivering it to a tracker sometime later, most apps (even those filled with trackers) won’t bother with this extra complication. It is important to keep in mind that this will also prevent you from reaching out to friends and using most apps and services that you depend on. Because of these trade-offs, you likely will not want to keep Airplane Mode enabled all the time, but it may be useful when you are traveling to a particularly sensitive location.

Some apps are designed to allow you to navigate even in airplane mode. Tapping your profile picture in Google Maps will drop down a menu with Offline maps. Tapping this will allow you to draw a boundary box and pre-download an entire region, which you can do even without connectivity. As of iOS 18, you can do this on Apple Maps too: tap your profile picture, then “Offline Maps,” and “Download New Map.”

Other apps, such as Organic Maps, allow you to download large maps in advance. Since GPS itself determines your location passively (no transmissions need be sent, only received), connectivity is not needed for your device to determine its location and keep it updated on a map stored locally.
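Because the position math happens entirely on the device, a short sketch shows why no connectivity is required: given a GPS fix and a destination stored in an offline map, the remaining distance is just local arithmetic. The coordinates below are arbitrary examples, and this illustrates the principle rather than any app’s actual code:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two GPS fixes,
    computed entirely on-device with no network round trip."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ~6371 km

# Distance from a current GPS fix to a saved destination
# (both coordinate pairs are arbitrary examples).
d = haversine_km(40.7128, -74.0060, 40.7580, -73.9855)
```

Everything the device needs to redraw your position on a locally stored map is a formula like this plus the passively received satellite signal.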

Keep in mind that you don’t need to be in airplane mode the entire time you’re navigating to a sensitive site. One strategy is to navigate to some place near your sensitive endpoint, then switch airplane mode on, and use offline maps for the last leg of the journey.

Separate Devices for Separate Purposes

Finally, you may want to bring a separate, clean device with you when you’re traveling to a sensitive location. We know this isn’t an option available to everyone: not everyone can afford to purchase a separate device just for those times they may have heightened privacy concerns. If possible, though, this can provide some level of protection.

A separate device doesn’t necessarily mean a separate data plan: navigating offline as described in the previous step may bring you to a place you know has Wi-Fi. A separate device also means any persistent identifiers (such as the MAID described above) are different, along with device characteristics that won’t be tied to your normal personal smartphone. Keeping this phone’s apps, permissions, and browsing to an absolute minimum will avoid an instance where that random sketchy game you keep on your normal device to kill time sends your location to its servers every 10 seconds.

One good (though more onerous) practice that removes persistent identifiers like long-lasting cookies or MAIDs is resetting your purpose-specific smartphone to factory settings after each visit to a sensitive location. Just remember to re-download your offline maps and re-apply your privacy settings afterwards.

Further Reading

Our own Surveillance Self-Defense site, as well as many other resources, are available to provide more guidance in protecting your digital privacy. Often, general privacy tips are applicable in protecting your location data from being divulged, as well.

The underlying situation that makes invasive tools like Locate X possible is the online advertising industry, which incentivizes a massive siphoning of user data to micro-target audiences. Earlier this year, the FTC showed some appetite to pursue enforcement action against companies brokering the mobile location data of users. We applauded this enforcement, and hope it will continue into the next administration. But regulatory authorities only have the statutory mandate and ability to punish the worst examples of abuse of consumer data. A piecemeal solution is limited in its ability to protect citizens from the vast array of data brokers and advertising services profiting off of surveilling us all.

Only a federal privacy law with a strong private right of action which allows ordinary people to sue companies that broker their sensitive data, and which does not preempt states from enacting even stronger privacy protections for their own citizens, will have enough teeth to start to rein in the data broker industry. In the meantime, consumers are left to their own devices (pun not intended) in order to protect their most sensitive data, such as location. It’s up to us to protect ourselves, so let’s make it happen!


FTC Findings on Commercial Surveillance Can Lead to Better Alternatives

October 8, 2024 at 13:04

On September 19, the FTC published a staff report following a multi-year investigation of nine social media and video streaming companies. The report found a myriad of privacy violations against consumers, stemming largely from the ad-revenue-based business models of companies including Facebook, YouTube, and X (formerly Twitter), which prompt unbridled consumer surveillance practices. In addition to these findings, the FTC points out various ways in which user data can be weaponized to lock out competitors and dominate these companies’ respective markets.

The report finds that market dominance can be established and expanded by the acquisition and maintenance of user data, creating an unfair advantage and preventing new market entrants from fairly competing. EFF has found that this is true not only for new entrants who wish to compete by similarly siphoning off large amounts of user data, but also for consumer-friendly companies that carve out a niche by refusing to play the game of dominance-through-surveillance. Abusing user data in an anti-competitive manner means users may never even learn of alternatives that have their best interests, rather than the best interests of a company’s advertising partners, in mind.

The relationship between privacy violations and anti-competitive behavior is elaborated upon in a section of the report which points out that “data abuse can raise entry barriers and fuel market dominance, and market dominance can, in turn, further enable data abuses and practices that harm consumers in an unvirtuous cycle.” In contrast with the recent United States v. Google LLC (2020) ruling, where Judge Amit P. Mehta found that the data collection practices of Google, though injurious to consumers, were outweighed by an improved user experience, the FTC highlighted a dangerous feedback loop in which privacy abuses beget further privacy abuses. We agree with the FTC and find the identification of this ‘unvirtuous cycle’ a helpful focal point for further antitrust action.

In an interesting segment focusing on the protections the European Union’s General Data Protection Regulation (GDPR) specifies for consumers’ data privacy rights, which the US lacks, the report explicitly mentions not only the right of consumers to delete or correct the data held by companies, but, importantly, also the right to transfer (or port) one’s data to the third party of their choice. This is a right EFF has championed time and again, pointing out that the strength of the early internet came from nascent technologies’ imminent need (and implemented ability) to play nicely with each other in order to make any sense—let alone be remotely usable—to consumers. It is this very concept of interoperability that can now be re-discovered to give users control over their own data, granting them the freedom to frictionlessly pack up their posts, friend connections, and private messages and leave when they are no longer willing to let an entrenched provider abuse them.

We hope and believe that the significance of the FTC staff report lies not only in the abuses it has meticulously documented, but in the policy and technological possibilities that can follow from a willingness to embrace alternatives: alternatives where the norm is no longer corporate surveillance that cements dominant players by selling out their users. We look forward to seeing these alternatives emerge and grow.

Strong End-to-End Encryption Comes to Discord Calls

We’re happy to see that Discord will soon start offering a form of end-to-end encryption dubbed “DAVE” for its voice and video chats. This puts some of Discord’s audio and video offerings in line with Zoom, and separates it from tools like Slack and Microsoft Teams, which do not offer end-to-end encryption for video, voice, or any other communications on those apps. This is a strong step forward, and Discord can do even more to protect its users’ communications.

End-to-end encryption is used by many chat apps for both text and video offerings, including WhatsApp, iMessage, Signal, and Facebook Messenger. But Discord operates differently than most of those, since alongside private and group text, video, and audio chats, it also encompasses large scale public channels on individual servers operated by Discord. Going forward, audio and video will be end-to-end encrypted, but text, including both group channels and private messages, will not.

When a call is end-to-end encrypted, you’ll see a green lock icon. While it’s not required to use the service, Discord also offers a way to optionally verify that the strong encryption a call is using is not being tampered with or eavesdropped on. During a call, one person can pull up the “Voice Privacy Code” and send it over to everyone else on the line—preferably in a different chat app, like Signal—to confirm no one is compromising participants’ use of end-to-end encryption. This helps ensure that no one is impersonating a participant or listening in on the conversation.
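The idea behind such a code can be sketched in a few lines: every participant locally derives a short fingerprint from the call’s key material, and comparing fingerprints over a separate channel exposes any key substitution. This is an illustration of the general concept only; Discord’s actual DAVE derivation is specified in its whitepaper and differs from this toy:

```python
import hashlib

def privacy_code(call_key_material):
    """Derive a short human-comparable code from key material.
    Everyone on the call computes this locally; comparing the codes
    over a separate channel detects a tampered or swapped key."""
    digest = hashlib.sha256(call_key_material).hexdigest()
    # Group the first 20 hex digits for easier reading aloud.
    return " ".join(digest[i:i + 5] for i in range(0, 20, 5))

genuine = privacy_code(b"shared-call-keys")
tampered = privacy_code(b"attacker-swapped-keys")
# genuine != tampered: an attacker who substituted keys in transit
# cannot produce a code matching what the other participants see.
```

Because the code is derived deterministically from the keys, honest participants always agree, and a machine-in-the-middle cannot fake agreement without breaking the hash.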

By default, you have to do this every time you initiate a call if you wish to verify the communication has strong security. There is an option to enable persistent verification keys, which means your chat partners only have to verify you on each device you own (e.g. if you sometimes call from a phone and sometimes from a computer, they’ll want to verify for each).

Key management is a hard problem in both the design and implementation of cryptographic protocols. Making sure the same encryption keys are shared across multiple devices in a secure way, as well as reliably discovered in a secure way by conversation partners, is no trivial task. Other apps such as Signal require some manual user interaction to ensure the sharing of key-material across multiple devices is done in a secure way. Discord has chosen to avoid this process for the sake of usability, so that even if you do choose to enable persistent verification keys, the keys on separate devices you own will be different.

While this is an understandable trade-off, we hope Discord takes an extra step to allow users who have heightened security concerns the ability to share their persistent keys across devices. For the sake of usability, they could by default generate separate keys for each device while making sharing keys across them an extra step. This will avoid the associated risk of your conversation partners seeing you’re using the same device across multiple calls. We believe making the use of persistent keys easier and cross-device will make things safer for users as well: they will only have to verify the key for their conversation partners once, instead of for every call they make.

Discord has performed the protocol design and implementation of DAVE in a solidly transparent way: publishing the protocol whitepaper and the open-source library, commissioning an audit from well-regarded outside researchers, and expanding its bug-bounty program to reward any security researcher who reports a vulnerability in the DAVE protocol. This is the sort of transparency we feel is required when rolling out encryption like this, and we applaud the approach.

But we’re disappointed that, citing the need for content moderation, Discord has decided not to extend end-to-end encryption offerings to include private messages or group chats. In a statement to TechCrunch, they reiterated they have no further plans to roll out encryption in direct messages or group chats.

End-to-end encrypted video and audio chat is a good step forward—one that too many messaging apps lack. But because protection of our text conversations is important and because partial encryption is always confusing for users, Discord should move to enable end-to-end encryption on private text chats as well. This is not an easy task, but it’s one worth doing.

School Monitoring Software Sacrifices Student Privacy for Unproven Promises of Safety

September 6, 2024 at 18:12

Imagine your search terms, keystrokes, private chats, and photographs being monitored every time they are sent. Millions of students across the country don’t have to imagine this deep surveillance of their most private communications: it’s a reality that comes with their school districts’ decision to install AI-powered monitoring software such as Gaggle and GoGuardian on students’ school-issued machines and accounts. As we demonstrated with our own Red Flag Machine, however, this software flags and blocks websites for spurious reasons and often disproportionately targets disadvantaged, minority, and LGBTQ youth.
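The context-blindness behind those spurious flags is easy to demonstrate. The keyword list and matcher below are hypothetical stand-ins (real products keep their lists and models secret), but they show how flag-based matching treats a literature essay the same as a message that might warrant real concern:

```python
# Hypothetical keyword list in the spirit of flag-based monitoring
# software; real products keep their word lists and models secret.
FLAG_WORDS = {"suicide", "overdose", "shoot", "drugs"}

def flags(text):
    """Naive keyword matcher with no sense of context or intent."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return sorted(words & FLAG_WORDS)

# An essay on Romeo and Juliet and a health-class question are
# flagged exactly like a message that might warrant real concern.
essay = flags("Romeo and Juliet both die by suicide in Act 5")
question = flags("Can you overdose on vitamin D?")
concern = flags("I keep thinking about suicide")
```

All three inputs produce an identical alert, leaving it to overstretched humans, or the police, to sort schoolwork from crisis.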

The companies making the software claim it’s all done for the sake of student safety: preventing self-harm, suicide, violence, and drug and alcohol abuse. While that is a noble goal (suicide is the second leading cause of death among American youth aged 10 to 14), no comprehensive or independent studies have shown an increase in student safety linked to the use of this software. Quite the contrary: a recent comprehensive RAND research study shows that such AI monitoring software may cause more harm than good.

That study also found that how to respond to alerts is left to the discretion of the school districts themselves. Lacking the resources to deal with mental health in-house, schools often refer these alerts to law enforcement officers who are untrained and ill-equipped to deal with youth mental health crises. When police respond to youth who are having such episodes, the resulting encounters can end in disaster. So why are schools still using the software, when a congressional investigation found a need for “federal action to protect students’ civil rights, safety, and privacy”? Why are they trading their students’ privacy for a dubious-at-best marketing claim of safety?

Experts suggest it’s because these supposed technical solutions are easier to implement than the effective social measures that schools often lack the resources to implement. I spoke with Isabelle Barbour, a public health consultant who has experience working with schools to implement mental health supports. She pointed out that there are considerable barriers to families, kids, and youth accessing health care and mental health supports at a community level, as well as a lack of investment in supporting schools to effectively address student health and well-being. This leads to a situation where many students come to school with unmet needs, and those needs impact their ability to learn. Although there are clear and proven measures that address the burdens youth face, schools often need support (time, mental health expertise, community partners, and a budget) to implement them. Edtech companies market largely unproven plug-and-play products to educational professionals who are stretched thin and seeking a path forward to help kids. Is it any wonder that schools sign contracts which are easy to point to when questioned about what they are doing about the youth mental health epidemic?

One example: in marketing to school districts, Gaggle claims to have saved 5,790 student lives between 2018 and 2023, according to shaky metrics it designed itself. All the while, it keeps the inner workings of its AI monitoring secret, making it difficult for outsiders to scrutinize and measure its effectiveness.

We give Gaggle an “F”

Reports of errors and of the AI flagging’s inability to understand context keep popping up. When the Lawrence, Kansas school district signed a $162,000 contract with Gaggle, no one batted an eye: it joined a growing number of school districts (currently ~1,500) nationwide using the software. Then school administrators called in nearly an entire class to explain photographs Gaggle’s AI had labeled as “nudity” because the software wouldn’t tell them:

“Yet all students involved maintain that none of their photos had nudity in them. Some were even able to determine which images were deleted by comparing backup storage systems to what remained on their school accounts. Still, the photos were deleted from school accounts, so there is no way to verify what Gaggle detected. Even school administrators can’t see the images it flags.”

Young journalists within the school district raised concerns about how Gaggle’s surveillance of students impacted their privacy and free speech rights. As journalist Max McCoy points out in his article for the Kansas Reflector, “newsgathering is a constitutionally protected activity and those in authority shouldn’t have access to a journalist’s notes, photos and other unpublished work.” Despite having renewed Gaggle’s contract, the district removed the surveillance software from the devices of student journalists. Here, a successful awareness campaign resulted in a tangible win for some of the students affected. While ad-hoc protections for journalists are helpful, more is needed to honor all students' fundamental right to privacy against this new front of technological invasions.

Tips for Students to Reclaim their Privacy

Students struggling with the invasiveness of school surveillance AI may find some reprieve by taking measures and forming habits to avoid monitoring. Some considerations:

  • Consider any school-issued device a spying tool. 
  • Don’t try to hack or remove the monitoring software unless specifically allowed by your school: doing so may result in significant consequences from your school or law enforcement. 
  • Instead, turn school-issued devices completely off when they aren’t being used, especially while at home. This will prevent the devices from activating the camera, microphone, and surveillance software.
  • If not needed, consider leaving school-issued devices in your school locker: this will avoid depending on these devices to log in to personal accounts, which will keep data from those accounts safe from prying eyes.
  • Don’t log in to personal accounts on a school-issued device (if you can avoid it; we understand a school-issued device is sometimes the only computer a student has access to). Rather, use a personal device for all personal communications and accounts (e.g., email, social media). Maybe your personal phone is the only device you have to log in to social media and chat with friends. That’s okay: keeping separate devices for separate purposes will reduce the risk that your data is leaked or surveilled. 
  • Don’t log in to school-controlled accounts or apps on your personal device: that can be monitored, too. 
  • Instead, create another email address on a service the school doesn’t control which is just for personal communications. Tell your friends to contact you on that email outside of school.

Finally, voice your concern and discomfort with such software being installed on devices you rely on. There are plenty of resources to point to, many linked to in this post, when raising concerns about these technologies. As the young journalists at Lawrence High School have shown, writing about it can be an effective avenue to bring up these issues with school administrators. At the very least, it will send a signal to those in charge that students are uncomfortable trading their right to privacy for an elusive promise of security.

Schools Can Do Better to Protect Students’ Safety and Privacy

It’s not only the students who are concerned about AI spying in the classroom and beyond. Parents are often unaware of the spyware deployed on school-issued laptops their children bring home. And when using a privately-owned shared computer logged into a school-issued Google Workspace or Microsoft account, a parent’s web search will be available to the monitoring AI as well.

New studies have uncovered some of the mental detriments that surveillance causes. Despite this and the array of First Amendment questions these student surveillance technologies raise, schools have rushed to adopt these unproven and invasive technologies. As Barbour put it: 

“While ballooning class sizes and the elimination of school positions are considerable challenges, we know that a positive school climate helps kids feel safe and supported. This allows kids to talk about what they need with caring adults. Adults can then work with others to identify supports. This type of environment helps not only kids who are suffering with mental health problems, it helps everyone.”

We urge schools to focus on creating that environment, rather than subjecting students to ever-increasing scrutiny through school surveillance AI.

Georgia Prosecutors Stoke Fears over Use of Encrypted Messengers and Tor

In an indictment against Defend the Atlanta Forest activists in Georgia, state prosecutors are citing the use of encrypted communications to fearmonger. Alleging that the defendants—who include journalists and lawyers in addition to activists—were responsible for a number of crimes related to the Stop Cop City campaign, the state Attorney General’s prosecutors cast suspicion on the defendants’ use of Signal, Telegram, Tor, and other everyday data-protecting technologies.

“Indeed, communication among the Defend the Atlanta Forest members is often cloaked in secrecy using sophisticated technology aimed at preventing law enforcement from viewing their communication and preventing recovery of the information,” the indictment reads. “Members often use the dark web via Tor, use end-to-end encrypted messaging app Signal or Telegram.”

The secure messaging app Signal is used by tens of millions of people and has hundreds of millions of downloads worldwide. In 2021, users moved to the nonprofit-run private messenger en masse as concerns were raised about the data-hungry business models of big tech. In January of that year, Elon Musk, then the world’s richest man, tweeted simply “Use Signal.” And world-famous NSA whistleblower Edward Snowden tweeted in 2016 what would become a meme and truism in information security circles: “Use Tor. Use Signal.”

Despite what the bombastic language would have readers believe, installing and using Signal and Tor is not an initiation rite into a dark cult of lawbreaking. The “sophisticated technology” being used here consists of apps that are free, popular, openly distributed, and accessible to anyone with an internet connection. Going further, the indictment ascribes to those using the apps the sole intention of obstructing law enforcement surveillance. Taking this assertion at face value, any judge or reporter reading the indictment is led to believe everyone using the apps simply wants to evade the police. In fact, these apps make it harder for law enforcement to access communications precisely because the encryption protocol protects messages from everyone not intended to receive them—including the users’ ISP, local network hackers, or the Signal nonprofit itself.

Elsewhere, the indictment homes in on the use of anti-surveillance techniques to further its tenuous attempts to malign the defendants: “Most ‘Forest Defenders’ are aware that they are preparing to break the law, and this is demonstrated by premeditation of attacks.” Among a laundry list of other techniques, this preparation is supposedly marked by “using technology avoidance devices such as Faraday bags and burner phones.” Stoking fears around the use of anti-surveillance technologies sets a dangerous precedent for all people who simply don’t want to be tracked wherever they go. In protest situations, carrying a prepaid disposable phone can be a powerful defense against being persecuted for participating in First Amendment-protected activities. Vilifying such activities as the acts of wrongdoers would befit totalitarian societies, not ones in which speech is allegedly a universal right.

To be clear, prosecutors have apparently not sought court orders to compel either the defendants or the companies named to enter passwords or otherwise open devices or apps. But vilifying the defendants’ use of common-sense encryption is a dangerous step in a case that the DeKalb County District Attorney has already dropped out of, citing “different prosecutorial philosophies.”

Using messengers that protect user communications, browsers that protect user anonymity, and anti-surveillance techniques when out and about are all useful strategies in a range of situations. Whether you are researching a sensitive medical condition, visiting a reproductive health clinic that offers the option of terminating a pregnancy, protecting trade secrets from a competitor, avoiding stalkers or abusive domestic partners, safeguarding attorney-client exchanges, or simply keeping your communications, browsing, and location history private, these techniques can come in handy. It is their very effectiveness that has led to the widespread adoption of privacy-protective technologies and techniques. When state prosecutors spread fear around the use of these powerful techniques, they set us down a dangerous path where citizens are more vulnerable and at risk.

Restricting Flipper is a Zero Accountability Approach to Security: Canadian Government Response to Car Hacking

On February 8, François-Philippe Champagne, the Canadian Minister of Innovation, Science and Industry, announced Canada would ban devices used in keyless car theft. The only device mentioned by name was the Flipper Zero—the multitool device that can be used to test, explore, and debug different wireless protocols such as RFID, NFC, infrared, and Bluetooth.


While it is useful as a penetration testing device, the Flipper Zero is impractical for car theft compared to other, more specialized devices. It’s possible that social media hype around the Flipper Zero has led people to believe the device offers easier hacking opportunities for car thieves*. But government officials are consuming that hype too, and it leads to policies that don’t secure systems but rather impede important research exposing vulnerabilities the industry should fix. Even though Canada has walked back its original statement outright banning the devices, the plan to “move forward with measures to restrict the use of such devices to legitimate actors only” remains troubling for security researchers.

This is not the first government seeking to limit access to the Flipper Zero, and we have explained before why this approach is not only harmful to security researchers but also leaves the general population more vulnerable to attacks. Security researchers may not have the specialized tools car thieves use at their disposal, so more general tools come in handy for catching and protecting against vulnerabilities. Broad-purpose devices such as the Flipper have a wide range of uses: penetration testing to harden a home network or organizational infrastructure, hardware research, security research, protocol development, and use by radio hobbyists, among many others. Restricting access to these devices will hamper the development of strong, secure technologies.

When Brazil’s national telecoms regulator Anatel refused to certify the Flipper Zero, and as a result prevented the national postal service from delivering the devices, it was responding to media hype. With a display and controls reminiscent of portable video game consoles, the compact form factor and range of hardware (including an infrared transceiver, RFID reader/emulator, SDR, and Bluetooth LE module) made the device an easy target to demonize. While conjuring imagery of point-and-click car theft was easy, citing examples of it actually occurring proved impossible. Over a year later, you’d be hard-pressed to find a single instance of a car being stolen with the device. The number of cars stolen with the Flipper seems to amount to, well, zero (pun intended). The same media hype and pure speculation have now led Canadian regulators to err in banning these devices.

Still worse, law enforcement agencies in other countries have signaled their intention to place owners of the device under greater scrutiny. The Brisbane Times quotes police in Queensland, Australia: “We’re aware it can be used for criminal means, so if you’re caught with this device we’ll be asking some serious questions about why you have this device and what you are using it for.” We assume other tools with similar capabilities, as well as Swiss Army Knives and Sharpie markers, all of which “can be used for criminal means,” will not face the same level of scrutiny. Just owning this device, whether as a hobbyist or professional—or even just as a curious customer—should not make one the subject of overzealous police suspicion.

It wasn’t too long ago that proficiency with the command line was seen as a dangerous skill that warranted intervention by authorities. And just as with those fears of decades past, the small grain of truth embedded in the hype and fears gives it an outsized power. Can the command line be used to do bad things? Of course. Can the Flipper Zero assist criminal activity? Yes. Can it be used to steal cars? Not nearly as well as many other (and better, from the criminals’ perspective) tools. Does that mean it should be banned, and that those with this device should be placed under criminal suspicion? Absolutely not.

We hope Canada wises up to this logic, and comes to view the device as just one of many in the toolbox that can be used for good or evil, but mostly for good.

*Though concerns have been raised about Flipper Devices’ connections to the Russian state apparatus, no unexpected data has been observed escaping to Flipper Devices’ servers, and much of the dedicated security and pen-testing hardware that hasn’t been banned raises similar concerns.

Sketchy and Dangerous Android Children’s Tablets and TV Set-Top Boxes: 2023 in Review

You may want to save your receipts if you gifted any low-end Android TV set-top boxes or children's tablets to a friend or loved one this holiday season. In a series of investigations this year, EFF researchers confirmed the existence of dangerous malware on set-top boxes manufactured by AllWinner and RockChip, and discovered sketchyware on a tablet marketed for kids from the manufacturer Dragon Touch. 

Though more reputable Android devices are available for watching TV and keeping the little ones occupied, they come with a higher price tag. This means those who can afford such devices get greater assurance of their security and privacy, while those who can only afford cheaper devices from little-known manufacturers are put at greater risk.

The digital divide could not be more apparent. Without a clear warning label, consumers who cannot afford devices from well-known brands such as Apple, Amazon, or Google are being sold devices which come out-of-the-box ready to spy on their children. This malware opens their home internet connection as a proxy to unknown users, and exposes them to legal risks. 
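As an illustration of what it means for malware to quietly open a device as a proxy, here is a minimal sketch (not EFF’s actual investigative methodology, and the LAN address in the comment is hypothetical) of how one might check a device for unexpected listening TCP services; a real triage would use tools like nmap and traffic capture:

```python
# Minimal sketch: find which TCP ports on a host accept connections,
# e.g. to spot an unexpected proxy service on a set-top box at a
# hypothetical LAN address like 192.168.1.50. Illustration only.
import socket

def open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found

# Demo against a listener we control, standing in for a suspect device.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # OS picks a free port
listener.listen(1)
port = listener.getsockname()[1]
assert port in open_ports("127.0.0.1", [port])
listener.close()
```

A port that answers when no legitimate service should be running there is a starting point for investigation, not proof of malware on its own.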

Traditionally, if a device like a vacuum cleaner were found to be defective or dangerous, we would expect resellers to pull it from the department store floor and, to the best of their ability, notify customers who had already bought the item and brought it into their homes. Yet we observed that the devices in question continued to be sold by online vendors months after news of their defects was widely circulated.

After our investigation of the set-top boxes, we urged the FTC to take action against the vendors who sell devices known to be riddled with malware. Amazon and AliExpress were named in the letter, though more vendors are undoubtedly still selling these devices. Not to spoil the holiday cheer, but if you have received one of these devices, you may want to ask for another gift and have the item refunded.

In the case of the Dragon Touch tablets, it was apparent that this issue went beyond Android TV boxes and extended to budget Android devices marketed specifically for children. The tablet we investigated had an outdated pre-installed parental controls app that was labeled as adware, remnants of malware, and sketchy update software. It’s clear this issue reaches a wide variety of Android devices, and it should not be left up to the consumer to figure this out. Even for “normal” devices on the market, consumers still have to do work just to properly set up devices for their kids and themselves. But there is no complete consumer-side solution for pre-installed malware, and there shouldn’t have to be.

Compared with the products of yesteryear, our “smart” and IoT devices carry a new set of risks to our security and privacy. Yet we feel confident that better digital product testing, along with regulatory oversight, can go a long way toward mitigating these dangers. We applaud efforts such as Mozilla’s Privacy Not Included to catalog just how well our devices protect our data, since as it currently stands it is up to us as consumers to assess the risks ourselves and take appropriate steps.

EFF And Other Experts Join in Pointing Out Pitfalls of Proposed EU Cyber-Resilience Act

Today we join 56 experts from organizations such as Google, Panasonic, Citizen Lab, Trend Micro, and many others in an open letter calling on the European Commission, European Parliament, and Spain’s Ministry of Economic Affairs and Digital Transformation to reconsider the obligatory vulnerability reporting mechanisms built into Article 11 of the EU’s proposed Cyber-Resilience Act (CRA). As we’ve pointed out before, this reporting obligation raises major cybersecurity concerns. Broadening knowledge of unpatched vulnerabilities to a larger audience will increase the risk of exploitation, and forcing software publishers to report these vulnerabilities to government regulators introduces the possibility of governments adding them to their offensive arsenals. These aren’t just theoretical threats: caches of vulnerabilities stored on Intelligence Community infrastructure have been stolen by hackers before.

Technology companies and others who create, distribute, and patch software are in a tough position. The intention of the CRA is to protect the public from companies who shirk their responsibilities by leaving vulnerabilities unpatched and their customers open to attack. But companies and software publishers who do the right thing by treating security vulnerabilities as well-guarded secrets until a proper fix can be applied and deployed now face an obligation to disclose vulnerabilities to regulators within 24 hours of exploitation. This significantly increases the danger these vulnerabilities present to the public. As the letter points out, the CRA “already requires software publishers to mitigate vulnerabilities without delay” separate from the reporting obligation. The letter also points out that this reporting mechanism may interfere with the collaboration and trusted relationship between companies and security researchers who work with companies to produce a fix.

The letter suggests either removing this requirement entirely or changing the reporting obligation to a 72-hour window after patches are made and deployed. It also calls on European lawmakers and policymakers to prohibit use of reported vulnerabilities “for intelligence, surveillance, or offensive purposes.” These changes would go a long way toward ensuring that security vulnerabilities discovered by software publishers don’t wind up being further exploited by falling into the wrong hands.

Separately, EFF (and others) have pointed out the dangers the CRA presents to open-source software developers by making them liable for vulnerabilities in their software if they so much as solicit donations for their efforts. The obligatory reporting mechanism and open-source liability clauses of the CRA must be changed or removed. Otherwise, software publishers and open-source developers who are doing a public service will fall under a burdensome and undue liability.
