
FTC Rightfully Acts Against So-Called “AI Weapon Detection” Company Evolv

The Federal Trade Commission has entered a settlement with self-styled “weapon detection” company Evolv, to resolve the FTC’s claim that the company “knowingly” and “repeatedly” engaged in “unlawful” acts of misleading claims about its technology. Essentially, Evolv’s technology, which is deployed in schools, subways, and stadiums, does far less than the company has been claiming.

The FTC alleged in their complaint that despite the lofty claims made by Evolv, the technology is fundamentally no different from a metal detector: “The company has insisted publicly and repeatedly that Express is a ‘weapons detection’ system and not a ‘metal detector.’ This representation is solely a marketing distinction, in that the only things that Express scanners detect are metallic and its alarms can be set off by metallic objects that are not weapons.” A typical contract for Evolv costs tens of thousands of dollars per year—five times the cost of traditional metal detectors. One district in Kentucky spent $17 million to outfit its schools with the software. 

The settlement requires Evolv to notify the many schools that use this technology to keep weapons out of classrooms that they are allowed to cancel their contracts. It also blocks the company from making any representations about its technology’s:

  • ability to detect weapons
  • ability to ignore harmless personal items
  • ability to detect weapons while ignoring harmless personal items
  • ability to ignore harmless personal items without requiring visitors to remove any such items from pockets or bags

The company also is prohibited from making statements regarding: 

  • Weapons detection accuracy, including in comparison to the use of metal detectors
  • False alarm rates, including comparisons to the use of metal detectors
  • The speed at which visitors can be screened, as compared to the use of metal detectors
  • Labor costs, including comparisons to the use of metal detectors 
  • Testing, or the results of any testing
  • Any material aspect of its performance, efficacy, nature, or central characteristics, including, but not limited to, the use of algorithms, artificial intelligence, or other automated systems or tools.

If the company can’t say these things anymore…then what do they even have left to sell? 

There’s a reason so many people accuse artificial intelligence of being “snake oil.” Time and again, a company takes public money in order to power “AI” surveillance, only for taxpayers to learn it does no such thing. “Just walk out” stores actually required people watching you on camera to determine what you purchased. Gunshot detection software that relies on a combination of artificial intelligence and human “acoustic experts” to purportedly identify and locate gunshots “rarely produces evidence of a gun-related crime.” There’s a lot of well-justified suspicion about what’s really going on within the black box of corporate secrecy in which artificial intelligence so often operates.

Even when artificial intelligence used by the government isn’t “snake oil,” it often does more harm than good. AI systems can introduce or exacerbate harmful biases that have massive negative impacts on people’s lives. AI systems have been implicated in falsely accusing people of welfare fraud, increasing racial bias in jail sentencing as well as in policing and crime prediction, and falsely identifying people as suspects based on facial recognition.

Now politicians, schools, police departments, and private venues have been duped again. This time it was Evolv, a company that purports to sell “weapon detection technology,” claiming its product uses AI to scan people entering a stadium, school, or museum and alert authorities if it recognizes the shape of a weapon on a person.

Even before the new FTC action, there were indications that this technology was not an effective solution to weapon-based violence. From July to October, New York City ran a trial of Evolv technology in 20 subway stations in an attempt to keep people from bringing weapons onto the transit system. Out of 2,749 scans there were 118 false positives. Twelve knives and no guns were recovered.
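Those trial numbers are worth putting in perspective. A quick back-of-the-envelope calculation, using only the figures reported above, shows how weak the signal is:

```python
# Back-of-the-envelope math on the NYC subway trial figures cited above.
scans = 2749            # total scans, July to October
false_positives = 118   # alarms with no weapon present
knives = 12             # actual weapons recovered (no guns)

false_positive_rate = false_positives / scans   # share of scans that falsely alarmed
alarms_per_weapon = false_positives / knives    # false alarms per real weapon found

print(f"False positive rate: {false_positive_rate:.1%}")
print(f"False alarms per weapon found: {alarms_per_weapon:.1f}")
```

Roughly 4.3% of all scans produced a false alarm, and for every knife actually recovered there were nearly ten false alerts.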

Make no mistake: false positives are dangerous. Falsely telling officers to expect an armed individual is a recipe for an unarmed person to be injured or even killed.

Cities, performance venues, schools, and transit systems are understandably eager to do something about violence, but throwing money at the problem by buying unproven technology is not the answer. It takes resources and funding away from more proven, systematic approaches. We applaud the FTC for standing up to the lucrative security-theater technology industry.

Top Ten EFF Digital Security Resources for People Concerned About the Incoming Trump Administration

In the wake of the 2024 election in the United States, many people are concerned about tightening up their digital privacy and security practices. As always, we recommend that people start making their security plan by understanding their risks. For most people in the U.S., the threats that they face and the methods by which they are likely to be surveilled or harassed have not changed, but the consequences of digital privacy or security failures may become much more serious, especially for vulnerable populations such as journalists, activists, LGBTQ+ people, people seeking or providing abortion-related care, Black or Indigenous people, and undocumented immigrants.

EFF has decades of experience in providing digital privacy and security resources, particularly for vulnerable people. We’ve written a lot of resources over the years and here are the top ten that we think are most useful right now:

1. Surveillance Self-Defense

https://ssd.eff.org/

Our Surveillance Self-Defense guides are a great place to start your journey of securing yourself against digital threats. We know that it can be a bit overwhelming, so we recommend starting with our guide on making a security plan so you can familiarize yourself with the basics and decide on your specific needs. Or, if you’re planning to head out to a protest soon and want to know the most important ways to protect yourself, check out our guide to Attending a Protest. Many people in the groups most likely to be targeted in the upcoming months will need advice tailored to their specific threat models, and for that we recommend the Security Scenarios module as a quick way to find the right information for your particular situation. 

2. Street-Level Surveillance

https://sls.eff.org/ 

If you are creating your security plan for the first time, it’s helpful to know which technologies might realistically be used to spy on you. If you’re going to be out on the streets protesting or even just existing in public, it’s important to identify which threats to take seriously. Our Street-Level Surveillance team has spent years studying the technologies that law enforcement uses and has made this handy website where you can find information about technologies including drones, face recognition, license plate readers, stingrays, and more.

3. Atlas Of Surveillance

https://atlasofsurveillance.org/ 

Once you have learned about the different types of surveillance technologies police can acquire from our Street-Level Surveillance guides, you might want to know which technologies your local police department has already bought. You can find that in our Atlas of Surveillance, a crowd-sourced map of police surveillance technologies in the United States.

4. Doxxing: Tips To Protect Yourself Online & How to Minimize Harm

https://www.eff.org/deeplinks/2020/12/doxxing-tips-protect-yourself-online-how-minimize-harm

Surveillance by governments and law enforcement is far from the only kind of threat that people face online. We expect to see an increase in doxxing and harassment of vulnerable populations by vigilantes, emboldened by the incoming administration’s threatened policies. This guide covers the precautions you may want to take if you are likely to be doxxed, and how to minimize the harm if you’ve been doxxed already.

5. Using Your Phone in Times of Crisis

https://www.eff.org/deeplinks/2022/03/using-your-phone-times-crisis

Using your phone in general can be a cause for anxiety for many people. We have a short guide on what considerations you should make when you are using your phone in times of crisis. This guide is specifically written for people in war zones, but may also be useful more generally. 

6. Surveillance Self-Defense for Campus Protests

https://www.eff.org/deeplinks/2024/06/surveillance-defense-campus-protests 

One prediction we can safely make for 2025 is that campus protests will continue to be important. This blog post is our latest thinking about how to put together your security plan before you attend a protest on campus.

7. Security Education Companion

https://www.securityeducationcompanion.org/

For those who are already comfortable with Surveillance Self-Defense, you may be getting questions from your family, friends, or community about what to do now. You may even consider giving a digital security training session to people in your community, and for that you will need guidance and training materials. The Security Education Companion has everything you need to get started putting together a training plan for your community, from recommended lesson plans and materials to guides on effective teaching.

8. Police Location Tracking

https://www.eff.org/deeplinks/2024/11/creators-police-location-tracking-tool-arent-vetting-buyers-heres-how-protect 

One police surveillance technology we are especially concerned about is location tracking services. These are data brokers that get your phone's location, usually through the same invasive ad networks that are baked into almost every app, and sell that information to law enforcement. This can include historical maps of where a specific device has been, or a list of all the phones that were at a specific location, such as a protest or abortion clinic. This blog post goes into more detail on the problem and provides a guide on how to protect yourself and keep your location private.

9. Should You Really Delete Your Period Tracking App?

https://www.eff.org/deeplinks/2022/06/should-you-really-delete-your-period-tracking-app

As soon as the Supreme Court overturned Roe v. Wade, one of the most popular bits of advice going around the internet was to “delete your period tracking app.” Deleting your period tracking app may feel like an effective countermeasure in a world where seeking abortion care is increasingly risky and criminalized, but it’s not advice that is grounded in the reality of the ways in which governments and law enforcement currently gather evidence against people who are prosecuted for their pregnancy outcomes. This blog post provides some more effective ways of protecting your privacy and sensitive information. 

10. Why We Can’t Just Tell You Which Messenger App to Use

https://www.eff.org/deeplinks/2018/03/why-we-cant-give-you-recommendation

People are always asking us to give them a recommendation for the best end-to-end encrypted messaging app. Unfortunately, this is asking for a simple answer to an extremely nuanced question. While the short answer is “probably Signal most of the time,” the long answer goes into why that is not always the case. Since we wrote this in 2018, some companies have come and gone, but our thinking on this topic hasn’t changed much.

Bonus external guide

https://digitaldefensefund.org/learn

Our friends at the Digital Defense Fund have put together an excellent collection of guides aimed at particularly vulnerable people who are thinking about digital security for the first time. They have a comprehensive collection of links to other external guides as well.

***

EFF is committed to keeping our privacy and security advice accurate and up-to-date, reflecting the needs of a variety of vulnerable populations. We hope these resources will help you keep yourself and your community safe in dangerous times.

New Email Scam Includes Pictures of Your House. Don’t Fall For It.

September 27, 2024 at 15:36

You may have arrived at this post because you received an email with an attached PDF from a purported hacker who is demanding payment or else they will send compromising information—such as pictures of a sexual nature—to all your friends and family. You’re searching for what to do in this frightening situation, and how to respond to an apparently personalized threat that even includes your actual “LastNameFirstName.pdf” and a picture of your house.

Don’t panic. Contrary to the claims in your email, you probably haven't been hacked (or at least, that's not what prompted that email). This is merely a new variation on an old scam, part of a whole category of scams called "sextortion." This is a type of online phishing that targets people around the world and preys on digital-age fears. It generally uses publicly available information or information from data breaches, not information obtained by hacking the recipients of the emails specifically, so it is very unlikely the sender has any "incriminating" photos or has actually hacked your accounts or devices.

We’ll talk about a few steps to take to protect yourself, but the first and foremost piece of advice we have: do not pay the ransom.

We have pasted an example of this email scam at the bottom of this post. The general gist is that a hacker claims to have compromised your computer and says they will release embarrassing information—such as images of you captured through your web camera or your pornographic browsing history—to your friends, family, and co-workers. The hacker promises to go away if you send them thousands of dollars, usually in bitcoin. This is different from a separate sextortion scam in which a stranger befriends and convinces a user to exchange sexual content, then demands payment for secrecy; that is a much more perilous situation which requires a more careful response.

What makes the email especially alarming is that, to prove their authenticity, they begin the emails showing you your address, full name, and possibly a picture of your house. 

Again, this still doesn't mean you've been hacked. The scammers in this case likely found a data breach which contained a list of names, emails, and home addresses and are sending this email out to potentially millions of people, hoping that enough of them will be worried enough to pay up to make the scam profitable.

Here are some quick answers to the questions many people ask after receiving these emails.

They Have My Address and Phone Number! How Did They Get a Picture of My House?

Rest assured that the scammers were not in fact outside your house taking pictures. For better or worse, pictures of our houses are all over the internet. From Google Street View to real estate websites, finding a picture of someone’s house is trivial if you have their address. While public data on your home may be nerve-wracking, similar data about government property can have transparency benefits.

Unfortunately, in the modern age, data breaches are common, and massive sets of peoples’ personal information often make their way to the criminal corners of the Internet. Scammers likely obtained such a list or multiple lists including email addresses, names, phone numbers, and addresses for the express purpose of including a kernel of truth in an otherwise boilerplate mass email.

It’s harder to change your address and phone number than it is to change your password. The best thing you can do here is be aware that your information is out there and be careful of future scams using this information. Since this information (along with other leaked info such as your social security number) can be used for identity theft, it's a good idea to freeze your credit.

And of course, you should always change your password when you’re alerted that your information has been leaked in a breach. You can also use a service like Have I Been Pwned to check whether you have been part of one of the more well-known password dumps.
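If you want to run that kind of check programmatically, the Have I Been Pwned “Pwned Passwords” range API lets you test a password without ever sending the password itself: you send only the first five characters of its SHA-1 hash and match the returned suffixes locally (the k-anonymity model). A minimal sketch in Python using only the standard library; the endpoint URL is HIBP’s public API, while the function names are our own illustration:

```python
import hashlib
import urllib.request

def sha1_prefix_suffix(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into the 5-char prefix
    sent to the API and the 35-char suffix matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def pwned_count(password: str) -> int:
    """Return how many times a password appears in known breaches.
    Only the 5-character hash prefix ever leaves your machine."""
    prefix, suffix = sha1_prefix_suffix(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0  # not found in any known breach

# Example (requires network access):
# pwned_count("password")  # a very large breach count
```

Because the server only ever sees a 5-character hash prefix shared by hundreds of other passwords, the check reveals essentially nothing about which password you looked up.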

Should I Respond to the Email?

Absolutely not. With this type of scam, the perpetrator relies on the likelihood that a small number of people will respond out of a batch of potentially millions. Fundamentally this isn't that much different from the old Nigerian prince scam, just with a different hook. By default they expect most people will not even open the email, let alone read it. But once they get a response—and a conversation is initiated—they will likely move into a more advanced stage of the scam. It’s better to not respond at all.

So, I Shouldn’t Pay the Ransom?

You should not pay the ransom. If you pay the ransom, you’re not only losing money, but you’re encouraging the scammers to continue phishing other people. If you do pay, then the scammers may also use that as a pressure point to continue to blackmail you, knowing that you’re susceptible.

What Should I Do Instead?

Unfortunately there isn’t much you can do. But there are a few basic security hygiene steps you can take that are always a good idea. Use a password manager to keep your passwords strong and unique. Moving forward, you should make sure to enable two-factor authentication whenever that is an option on your online accounts. You can also check out our Surveillance Self-Defense guide for more tips on how to protect your security and privacy online.

One other thing to do to protect yourself is apply a cover over your computer’s camera. We offer some through our store, but a small strip of electrical tape will do. This can help ease your mind if you're worried that a rogue app may be turning your camera on, or that you left it on yourself—unlikely, but possible scenarios. 

We know this experience isn't fun, but it's also not the end of the world. Just ignore the scammers' empty threats and practice good security hygiene going forward!

Overall this isn’t an issue that is up to consumers to fix. The root of the problem is that data brokers and nearly every other company have been allowed to store too much information about us for too long. Inevitably this data gets breached and makes its way into criminal markets, where it is sold, traded, and used for scams like this one. The most effective way to combat this would be with comprehensive federal privacy laws. After all, if the data doesn’t exist, it can’t be leaked. The best thing for you to do is advocate for such a law in Congress, or at the state level.

Below are real examples of the scam that were sent to EFF employees. The scam text is similar across many different victims.

Example 1

[Name],

I know that calling [Phone Number] or visiting [your address] would be a convenient way to contact you in case you don't act. Don't even try to escape from this. You've no idea what I'm capable of in [Your City].

I suggest you read this message carefully. Take a moment to chill, breathe, and analyze it thoroughly. 'Cause we're about to discuss a deal between you and me, and I don't play games. You do not know me but I know you very well and right now, you are wondering how, right? Well, you've been treading on thin ice with your browsing habits, scrolling through those videos and clicking on links, stumbling upon some not-so-safe sites. I placed a Malware on a porn website & you visited it to watch(you get my drift). While you were watching those videos, your smartphone began working as a RDP (Remote Control) which provided me complete control over your device. I can peep at everything on your display, flick on your camera and mic, and you wouldn't even suspect a thing. Oh, and I have got access to all your emails, contacts, and social media accounts too.

Been keeping tabs on your pathetic life for a while now. It's simply your bad luck that I accessed your misdemeanor. I gave in more time than I should have looking into your personal life. Extracted quite a bit of juicy info from your system. and I've seen it all. Yeah, Yeah, I've got footage of you doing filthy things in your room (nice setup, by the way). I then developed videos and screenshots where on one side of the screen, there's whatever garbage you were enjoying, and on the other half, its your vacant face. With simply a single click, I can send this video to every single of your contacts.

I see you are getting anxious, but let's get real. Actually, I want to wipe the slate clean, and allow you to get on with your daily life and wipe your slate clean. I will present you two alternatives. First Alternative is to disregard this email. Let us see what is going to happen if you take this path. Your video will get sent to all your contacts. The video was lit, and I can't even fathom the humiliation you'll endure when your colleagues, friends, and fam check it out. But hey, that's life, ain't it? Don't be playing the victim here.

Option 2 is to pay me, and be confidential about it. We will name it my “privacy charges”. let me tell you what will happen if you opt this option. Your secret remains private. I will destroy all the data and evidence once you come through with the payment. You'll transfer the payment via Bitcoin only.

Pay attention, I'm telling you straight: 'We gotta make a deal'. I want you to know I'm coming at you with good intentions. My word is my bond.

Required Amount: $1950

BITCOIN ADDRESS: [REDACTED]

Let me tell ya, it's peanuts for your tranquility.

Notice: You now have one day in order to make the payment and I will only accept Bitcoins (I have a special pixel within this message, and now I know that you have read through this message). My system will catch that Bitcoin payment and wipe out all the dirt I got on you. Don't even think about replying to this or negotiating, it's pointless. The email and wallet are custom-made for you, untraceable. If I suspect that you've shared or discussed this email with anyone else, the garbage will instantly start getting sent to your contacts. And don't even think about turning off your phone or resetting it to factory settings. It's pointless. I don't make mistakes, [Name].

[A picture of the EFF offices, in the style often used in this scam.]

Can you notice something here?

Honestly, those online tips about covering your camera aren't as useless as they seem. I am waiting for my payment…

Example 2

[NAME],
Is visiting [ADDRESS] a better way to contact in case you don't act
Beautiful neighborhood btw
It's important you pay attention to this message right now. Take a moment to chill, breathe, and analyze it thoroughly. We're talking about something serious here, and I ain't playing games. You do not know anything about me but I know you very well and right now, you are thinking how, correct?
Well, You've been treading on thin ice with your browsing habits, scrolling through those filthy videos and clicking on links, stumbling upon some not-so-safe sites. I installed a Spyware called "Pegasus" on a app you frequently use. Pegasus is a spyware that is designed to be covertly and remotely installed on mobile phones running iOS and Android. While you were busy watching videos, your device started out working as a RDP (Remote Protocol) which gave me total control over your device. I can peep at everything on your display, flick on your cam and mic, and you wouldn't even notice. Oh, and I've got access to all your emails, contacts, and social media accounts too.
What I want?
Been keeping tabs on your pathetic existence for a while now. It's just your hard luck that I accessed your misdemeanor. I invested in more time than I probably should've looking into your personal life. Extracted quite a bit of juicy info from your system. and I've seen it all. Yeah, Yeah, I've got footage of you doing embarrassing things in your room (nice setup, by the way). I then developed videos and screenshots where on one side of the screen, there's whatever garbage you were enjoying, and on the other part, it is your vacant face. With just a click, I can send this filth to all of your contacts.
What can you do?
I see you are getting anxious, but let's get real. Wholeheartedly, I am willing to wipe the slate clean, and let you move on with your regular life and wipe your slate clean. I am about to present you two alternatives. Either turn a blind eye to this warning (bad for you and your family) or pay me a small amount to finish this mattter forever. Let us understand those 2 options in details.
First Option is to ignore this email. Let us see what will happen if you select this path. I will send your video to your contacts. The video was straight fire, and I can't even fathom the embarrasement you'll endure when your colleagues, friends, and fam check it out. But hey, that's life, ain't it? Don't be playing the victim here.
Other Option is to pay me, and be confidential about it. We will name it my “privacy fee”. let me tell you what happens when you go with this choice. Your filthy secret will remain private. I will wipe everything clean once you send payment. You'll transfer the payment through Bitcoin only. I want you to know I'm aiming for a win-win here. I'm a person of integrity.
Transfer Amount: USD 2000
My Bitcoin Address: [BITCOIN ADDRESS]
Or, (Here is your Bitcoin QR code, you can scan it):
[IMAGE OF A QR CODE]
Once you pay up, you'll sleep like a baby. I keep my word.
Important: You now have one day to sort this out. (I've a special pixel in this message, and now I know that you've read through this mail). My system will catch that Bitcoin payment and wipe out all the dirt I got on you. Don't even think about replying to this, it's pointless. The email and wallet are custom-made for you, untraceable. I don't make mistakes, [NAME]. If I notice that you've shared or discussed this mail with anyone else, your garbage will instantly start getting sent to your contacts. And don't even think about turning off your phone or resetting it to factory settings. It's pointless.
Honestly, those online tips about covering your camera aren't as useless as they seem.
Don't dwell on it. Take it as a little lesson and keep your guard up in the future.

 

EFF to FCC: SS7 is Vulnerable, and Telecoms Must Acknowledge That

It’s unlikely you’ve heard of Signaling System 7 (SS7), but every phone network in the world is connected to it, and if you have ever roamed networks internationally or sent an SMS message overseas you have used it. SS7 is a set of telecommunication protocols that cellular network operators use to exchange information and route phone calls, text messages, and other communications between each other on 2G and 3G networks (4G and 5G networks instead use the Diameter signaling system). When a person travels outside their home network's coverage area (roaming) and uses their phone on a 2G or 3G network, SS7 plays a crucial role in registering the phone to the network and routing their communications to the right destination. On May 28, 2024, EFF submitted comments to the Federal Communications Commission demanding investigation of SS7 and Diameter security and transparency into how the telecoms handle the security of these networks.

What Is SS7, and Why Does It Matter?

When you roam onto different 2G or 3G networks, or send an SMS message internationally, the SS7 system works behind the scenes to seamlessly route your calls and SMS messages. SS7 identifies the country code, locates the specific cell tower that your phone is using, and facilitates the connection. This intricate process involves multiple networks and enables you to communicate across borders, making international roaming and text messages possible. But even if you don’t roam internationally, send SMS messages, or use legacy 2G/3G networks, you may still be vulnerable to SS7 attacks, because most telecommunications providers are still connected to it to support international roaming, even if they have turned off their own 2G and 3G networks. SS7 was not built with any security protocols, such as authentication or encryption, and has been exploited by governments, cyber mercenaries, and criminals to intercept and read SMS messages. As a result, many network operators have placed firewalls in order to protect users. However, there are no mandates or security requirements placed on the operators, so there is no mechanism to ensure that the public is safe.

Many companies treat your ownership of your phone number as a primary security authentication mechanism, or as a secondary one through SMS two-factor authentication. An attacker could use SS7 attacks to intercept text messages and then gain access to your bank account, medical records, and other important accounts. Nefarious actors can also use SS7 attacks to track a target’s precise location anywhere in the world.

These vulnerabilities make SS7 a public safety issue. EFF strongly believes that it is in the best interest of the public for telecommunications companies to secure their SS7 networks and publicly audit them, while also moving to more secure technologies as soon as possible.

Why SS7 Isn’t Secure

SS7 was standardized in the late 1970s and early 1980s, at a time when communication relied primarily on landline phones. During that era, the telecommunications industry was predominantly controlled by corporate monopolies. Because the large telecoms all trusted each other, there was no incentive to focus on the security of the network. SS7 was developed when modern encryption and authentication methods were not in widespread use.

In the 1990s and 2000s new protocols were introduced by the European Telecommunication Standards Institute (ETSI) and the telecom standards bodies to support mobile phones with services they need, such as roaming, SMS, and data. However, security was still not a concern at the time. As a result, SS7 presents significant cybersecurity vulnerabilities that demand our attention. 

SS7 can be accessed through telecommunications companies and roaming hubs. To access SS7, companies (or nefarious actors) must have a “Global Title,” which is a phone number that uniquely identifies a piece of equipment on the SS7 network. Each phone company that runs its own network has multiple global titles. Some telecommunications companies lease their global titles, which is how malicious actors gain access to the SS7 network. 

Concerns about potential SS7 exploits are primarily discussed within the mobile security industry and are not given much attention in broader discussions about communication security. Currently, there is no way for end users to detect SS7 exploitation. The best way to safeguard against SS7 exploitation is for telecoms to use firewalls and other security measures. 

With the rapid expansion of the mobile industry, there is no transparency around any efforts to secure our communications. The fact that any government can potentially access data through SS7 without encountering significant security obstacles poses a significant risk to dissenting voices, particularly under authoritarian regimes.

Some people in the telecommunications industry argue that SS7 exploits are mainly a concern for 2G and 3G networks. It’s true that 4G and 5G don’t use SS7—they use the Diameter protocol—but Diameter has many of the same security concerns as SS7, such as location tracking. What’s more, as soon as you roam onto a 3G or 2G network, or if you are communicating with someone on an older network, your communications once again go over SS7. 

FCC Requests Comments on SS7 Security 

Recently, the FCC issued a request for comments on the security of SS7 and Diameter networks within the U.S. The FCC asked whether the security efforts of telecoms were working, and whether auditing or intervention was needed. The three large US telecoms (Verizon, T-Mobile, and AT&T) and their industry lobbying group (CTIA) all responded with comments stating that their SS7 and Diameter firewalls were working perfectly, and that there was no need to audit the phone companies’ security measures or force them to report specific success rates to the government. However, one dissenting comment came from Kevin Briggs, an employee of the Cybersecurity and Infrastructure Security Agency (CISA).

We found the comments by Briggs, CISA’s top expert on telecom network vulnerabilities, to be concerning and compelling. Briggs believes that there have been successful, unauthorized attempts to access network user location data from U.S. providers using SS7 and Diameter exploits. He provides two examples of reports involving specific persons that he had seen: the tracking of a person in the United States using Provide Subscriber Information (PSI) exploitation (March 2022); and the tracking of three subscribers in the United States using Send Routing Information (SRI) packets (April 2022).  

This is consistent with reporting by Gary Miller and Citizen Lab in 2023, where they state: “we also observed numerous requests sent from networks in Saudi Arabia to geolocate the phones of Saudi users as they were traveling in the United States. Millions of these requests targeting the international mobile subscriber identity (IMSI), a number that identifies a unique user on a mobile network, were sent over several months, and several times per hour on a daily basis to each individual user.”

Briggs added that he had seen information describing how in May 2022, several thousand suspicious SS7 messages were detected, which could have masked a range of attacks—and that he had additional information on the above exploits as well as others that go beyond location tracking, such as the monitoring of message content, the delivery of spyware to targeted devices, and text-message-based election interference.

As a senior CISA official focused on telecom cybersecurity, Briggs has access to information that the general public is not aware of. His comments should therefore be taken seriously, particularly in light of the concerns expressed by Senator Wyden in his letter to the President, which referenced a non-public, independent expert report commissioned by CISA and alleged that CISA was “actively hiding information about [SS7 threats] from the American people.” The FCC should investigate these claims, and keep Congress and the public informed about exploitable weaknesses in the telecommunications networks we all use.

These warnings should be taken seriously and their claims should be investigated. The telecoms should submit the results of their audits to the FCC and CISA so that the public can have some reassurance that their security measures are working as they say they are. If the telecoms’ security measures aren’t enough, as Briggs and Miller suggest, then the FCC must step in and secure our national telecommunications network. 

For The Bragging Rights: EFF’s 16th Annual Cyberlaw Trivia Night

This post was authored by the mysterious Raul Duke.

The weather was unusually cool for a summer night. Just the right amount of bitterness in the air for attorneys from all walks of life to gather in San Francisco’s Mission District for EFF’s 16th annual Cyberlaw Trivia Night.

Inside Public Works, attorneys filled their plates with chicken and waffles, grabbed a fresh tech-inspired cocktail, and found their tables—ready to compete against their colleagues in obscure tech law trivia. The evening started promptly six minutes late, at 7:06 PM PT, with Aaron Jue, EFF's Director of Member Engagement, introducing this year’s trivia tournament.

A lone Quizmaster, Kurt Opsahl, took the stage, noting that his walk-in was missing a key component, until The Blues Brothers started playing, filling the quizmaster with the valor to thank EFF’s intern fund supporters Fenwick and Morrison Foerster. The judges begrudgingly took the stage as the quizmaster reminded them that they have jobs at this event.

One of the judges, EFF’s Civil Liberties Director David Greene, gave some fiduciary advice to the several former EFF interns that were in the crowd. It was anyone’s guess as to whether they had gleaned any inside knowledge about the trivia.

I asked around as to what the attorneys had to gain by participating in this trivia night. I learned that not only were bragging rights on the table, but teams also had a chance to win champion steins.

The prizes: EFF steins!

With formalities out of the way, the first round of trivia, “General,” started with a possibly rousing question about the right to repair. Round one ended with the eighth question, which included a major typo calling the “Fourth Amendment is Not for Sale Act” the “First Amendment...” The proofreaders responsible for this mistake have been dealt with.

I was particularly struck by the names of each team: “Run DMCA,” “Ineffective Altruists,” “Subpoena Colada,” “JDs not LLMs,” “The little VLOP that could,” and “As a language model, I can't answer that question.” Who knew attorneys could come up with such creative names?

I asked one of the lawyers if he could give me legal advice on a personal matter (I won’t get into the details here, but it concerns both maritime law and equine law). The lawyer gazed at me with the same look one gives a child who has just proudly thrown their food all over the floor. I decided to drop the matter.

Back to the event. It was a close game until the sixth and final round, though we wouldn’t hear the final winners until after the tiebreaker questions.

After several minutes, the tiebreaker was announced. The prompt: which team could get the closest to pi without going over. This sent your intrepid reporter into an existential crisis. Could one really get to the end of pi? I’m told you could get to Pluto with just the first four digits and saw no reason to go further than that. During my descent into madness, it was revealed that team “JDs not LLMs” knew 22 digits of pi.

After that shocking revelation, the final results were read, with the winning trivia masterminds being:

1st Place: JDs not LLMs

2nd Place: The Little VLOP That Could

3rd Place: As A Language Model, I Can't Answer That Question

EFF Membership Advocate Christian Romero taking over for Raul Duke.

EFF hosts Cyberlaw Trivia Night to gather those in the legal community who help protect online freedom for tech users. Among the many firms that dedicate their time, talent, and resources to the cause, we would especially like to thank Fenwick and Morrison Foerster for supporting EFF’s Intern Fund!

If you are an attorney working to defend civil liberties in the digital world, consider joining EFF's Cooperating Attorneys list. This network helps EFF connect people to legal assistance when we are unable to assist.

Are you interested in attending or sponsoring an upcoming EFF Trivia Night? Please reach out to tierney@eff.org for more information.

Be sure to check EFF’s events page and mark your calendar for next year’s 17th annual Cyberlaw Trivia Night.

New ALPR Vulnerabilities Prove Mass Surveillance Is a Public Safety Threat

Government officials across the U.S. frequently promote the supposed, and often anecdotal, public safety benefits of automated license plate readers (ALPRs), but rarely do they examine how this very same technology poses risks to public safety that may outweigh the crimes they are attempting to address in the first place. When law enforcement uses ALPRs to document the comings and goings of every driver on the road, regardless of a nexus to a crime, it results in gargantuan databases of sensitive information, and few agencies are equipped, staffed, or trained to harden their systems against quickly evolving cybersecurity threats.

The Cybersecurity and Infrastructure Security Agency (CISA), a component of the U.S. Department of Homeland Security, released an advisory last week that should be a wake-up call to the thousands of local government agencies around the country that use ALPRs to surveil the travel patterns of their residents by scanning their license plates and "fingerprinting" their vehicles. The bulletin outlines seven vulnerabilities in Motorola Solutions' Vigilant ALPRs, including missing encryption and insufficiently protected credentials.

To give a sense of the scale of the data collected with ALPRs, EFF found that just 80 agencies in California, using primarily Vigilant technology, collected more than 1.6 billion license plate scans (CSV) in 2022. This data can be used to track people in real time, identify their "pattern of life," and even identify their relations and associates. An EFF analysis from 2021 found that 99.9% of this data is unrelated to any public safety interest when it's collected. If accessed by malicious parties, the information could be used to harass, stalk, or even extort innocent people.

Unlike location data a person shares with, say, GPS-based navigation app Waze, ALPRs collect and store this information without consent and there is very little a person can do to have this information purged from these systems. And while a person can turn off their phone if they are engaging in a sensitive activity, such as visiting a reproductive health facility or attending a protest, tampering with your license plate is a crime in many jurisdictions. Because drivers don't have control over ALPR data, the onus for protecting the data lies with the police and sheriffs who operate the surveillance and the vendors that provide the technology.

It's a general tenet of cybersecurity that you should not collect and retain more personal data than you are capable of protecting. Perhaps ironically, a Motorola Solutions cybersecurity specialist wrote an article in Police Chief magazine this month acknowledging that public safety agencies "are often challenged when it comes to recruiting and retaining experienced cybersecurity personnel," even though "the potential for harm from external factors is substantial."

That partially explains why more than 125 law enforcement agencies reported a data breach or cyberattack between 2012 and 2020, according to research by former EFF intern Madison Vialpando. The Motorola Solutions article claims that ransomware attacks "targeting U.S. public safety organizations increased by 142 percent" in 2023.

Yet the temptation to "collect it all" continues to overshadow the responsibility to "protect it all." What makes the latest CISA disclosure even more outrageous is that it is at least the third time in the last decade that major security vulnerabilities have been found in ALPRs.

In 2015, building off the previous work of University of Arizona researchers, EFF published an investigation that found more than 100 ALPR cameras in Louisiana, California, and Florida were connected to the internet unsecured, many with publicly accessible websites that anyone could use to manipulate the controls of the cameras or siphon off data. Just by visiting a URL, a malicious actor without any specialized knowledge could view live feeds of the cameras, including one that could be used to spy on college students at the University of Southern California. Some of the agencies involved fixed the problem after being alerted. However, 3M, which had recently bought the ALPR manufacturer PIPS Technology (which has since been sold to Neology), claimed zero responsibility for the problem, saying instead that it was the agencies' responsibility to manage the devices' cybersecurity. "The security features are clearly explained in our packaging," they wrote. Four years later, TechCrunch found that the problem still persisted.

In 2019, Customs & Border Protections' vendor providing ALPR technology for Border Patrol checkpoints was breached, with hackers gaining access to 105,000 license plate images, as well as more than 184,000 images of travelers from a face recognition pilot program. Some of those images made it onto the dark web, according to reporting by journalist Joseph Cox.

If there's one positive thing we can say about the latest Vigilant vulnerability disclosures, it's that for once a government agency identified and reported the vulnerabilities before they could do damage. The initial discovery was made by the Michigan State Police Michigan Cyber Command Center, which passed the information onto CISA, which then worked with Motorola Solutions to address the problems.

The Michigan Cyber Command Center found a total of seven vulnerabilities in Vigilant devices: two of medium severity and five of high severity.

One of the most severe vulnerabilities (given a score of 8.6 out of 10) was that every camera sold by Motorola had a wifi network turned on by default that used the same hardcoded password as every other camera, meaning that anyone who found the password for one camera could connect to any other camera, as long as they were near it.

Someone with physical access to the camera could also easily install a backdoor, which would allow them access to the camera even if the wifi was turned off. An attacker could even log into the system locally using a default username and password. Once connected to a camera, they would be able to see live video and control the camera, or even disable it. They could also view historic recordings of license plate data stored without any kind of encryption, as well as logs containing authentication information that could be used to connect to a back-end server where more information is stored. Motorola claims that it has mitigated all of these vulnerabilities.
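To make the default-credential risk concrete, here is a hypothetical sketch of the kind of check a security auditor might run against a device fleet. The credential list and device names below are invented for illustration; they are not from the CISA advisory or Motorola's products.

```python
# Hypothetical audit sketch: flag devices that still accept a known
# factory-default credential pair. All data here is made up.

DEFAULT_CREDS = {("admin", "admin"), ("admin", "password"), ("root", "root")}

def flag_default_credentials(devices):
    """Return the names of devices whose login matches a known default pair."""
    return [d["name"] for d in devices
            if (d["user"], d["password"]) in DEFAULT_CREDS]

fleet = [
    {"name": "camera-01", "user": "admin", "password": "admin"},        # default!
    {"name": "camera-02", "user": "ops", "password": "s0mething-long"},
]
print(flag_default_credentials(fleet))  # → ['camera-01']
```

A shared hardcoded password is worse still than a per-device default: one leaked credential compromises every unit shipped, which is why that Vigilant finding scored so high.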

When vulnerabilities are found, it's not enough for them to be patched: they must serve as stark warnings for policymakers and the courts. Following EFF's report in 2015, Louisiana Gov. Bobby Jindal spiked a statewide ALPR program, writing in his veto message:

Camera programs such as these that make private information readily available beyond the scope of law enforcement, pose a fundamental risk to personal privacy and create large pools of information belonging to law abiding citizens that unfortunately can be extremely vulnerable to theft or misuse.

In May, a Norfolk Circuit Court Judge reached the same conclusion, writing in an order suppressing the data collected by ALPRs in a criminal case:

The Court cannot ignore the possibility of a potential hacking incident either. For example, a team of computer scientists at the University of Arizona was able to find vulnerable ALPR cameras in Washington, California, Texas, Oklahoma, Louisiana, Mississippi, Alabama, Florida, Virginia, Ohio, and Pennsylvania. (Italics added for emphasis.) … The citizens of Norfolk may be concerned to learn the extent to which the Norfolk Police Department is tracking and maintaining a database of their every movement for 30 days. The Defendant argues “what we have is a dragnet over the entire city” retained for a month and the Court agrees.

But a data breach isn't the only way that ALPR data can be leaked or abused. In 2022, an officer in the Kechi (Kansas) Police Department accessed ALPR data shared with his department by the Wichita Police Department to stalk his wife. Meanwhile, recently the Orrville (Ohio) Police Department released a driver's raw ALPR scans to a total stranger in response to a public records request, 404 Media reported.

Public safety agencies must resist the allure of marketing materials promising surveillance omniscience, and instead collect only the data they need for actual criminal investigations. They must never store more data than they can adequately protect with their limited resources, or they must keep the public safe from data breaches by not collecting the data at all.

The Next Generation of Cell-Site Simulators is Here. Here’s What We Know.

Dozens of policing agencies are currently using cell-site simulators (CSS) by Jacobs Technology and its Engineering Integration Group (EIG), according to newly-available documents on how that company provides CSS capabilities to local law enforcement. 

A proposal document from Jacobs Technology, provided to the Massachusetts State Police (MSP) and first spotted by the Boston Institute for Nonprofit Journalism (BINJ), outlines elements of the company’s CSS services, which include discreet integration of the CSS system into a Chevrolet Silverado and lifetime technical support. The proposal document is part of a winning bid Jacobs submitted to MSP earlier this year for a nearly $1-million contract to provide CSS services, representing the latest customer for one of the largest providers of CSS equipment.

An image of the Jacobs CSS system as integrated into a Chevrolet Silverado for the Virginia State Police.

An image of the Jacobs CSS system as integrated into a Chevrolet Silverado for the Virginia State Police. Source: 2024 Jacobs Proposal Response

The proposal document from Jacobs provides some of the most comprehensive information about modern CSS that the public has had access to in years. It confirms that law enforcement has access to CSS capable of operating on 5G as well as older cellular standards. It also gives us our first look at modern CSS hardware. The Jacobs system runs on at least nine software-defined radios that simulate cellular network protocols on multiple frequencies and can also gather wifi intelligence. As these documents describe, these CSS are meant to be concealed within a common vehicle. Antennas are hidden under a false roof so nothing can be seen outside the vehicles, which is a shift from the more visible antennas and cargo van-sized deployments we’ve seen before.  The system also comes with a TRACHEA2+ and JUGULAR2+ for direction finding and mobile direction finding. 

The Jacobs 5G CSS base station system.

The Jacobs 5G CSS base station system. Source: 2024 Jacobs Proposal Response

CSS, also known as IMSI catchers, are among law enforcement’s most closely-guarded secret surveillance tools. They act like real cell phone towers, “tricking” mobile devices into connecting to them and intercepting the information that phones send and receive, like the location of the user and metadata for phone calls, text messages, and other app traffic. CSS are highly invasive and used discreetly. In the past, law enforcement used a technique called “parallel construction”—collecting evidence in a different way to reach an existing conclusion in order to avoid disclosing how law enforcement originally collected it—to circumvent public disclosure of location findings made through CSS. In Massachusetts, agencies are expected to get a warrant before conducting any cell-based location tracking. The City of Boston is also known to own a CSS.

This technology is like a dragging fishing net, rather than a single focused hook in the water. Every phone in the vicinity connects with the device; even people completely unrelated to an investigation get wrapped up in the surveillance. CSS, like other surveillance technologies, subject civilians to widespread data collection, even those who have not been involved with a crime, and have been used against protesters and other protected groups, undermining their civil liberties. Their adoption should require public disclosure, but this rarely occurs. These new records provide insight into the continued adoption of this technology. It remains unclear whether MSP has policies to govern its use. CSS may also interfere with the ability to call emergency services, especially for people who rely on accessibility technologies, such as those who cannot hear.

Important to the MSP contract is the modification of a Chevrolet Silverado with the CSS system. This includes both the surreptitious installation of the CSS hardware into the truck and the integration of its software user interface into the navigation system of the vehicle. According to Jacobs, this is the kind of installation with which they have a lot of experience.

Jacobs has built its CSS business on military and intelligence community relationships forged in foreign warzones in the years after September 11, 2001, which now inform the development of a tool used in domestic communities. Harris Corporation, later L3Harris Technologies, Inc., was the largest provider of CSS technology to domestic law enforcement but stopped selling to non-federal agencies in 2020. Once Harris stopped selling to local law enforcement, the market was open to several competitors, one of the largest of which was KeyW Corporation. Following Jacobs’s 2019 acquisition of The KeyW Corporation and its Engineering Integration Group (EIG), Jacobs is now a leading provider of CSS to police, and it claims to have more than 300 current CSS deployments globally. EIG’s CSS engineers have experience with the tool dating to late 2001, and they now provide the full spectrum of CSS-related services to clients, including integration into vehicles, training, and maintenance, according to the document. Jacobs CSS equipment is operational in 35 state and local police departments, according to the documents.

EFF has been able to identify 13 agencies using the Jacobs equipment, and, according to EFF’s Atlas of Surveillance, more than 70 police departments have been known to use CSS. Our team is currently investigating possible acquisitions in California, Massachusetts, Michigan, and Virginia. 

An image of the Jacobs CSS system interface integrated into the factory-provided vehicle navigation system.

An image of the Jacobs CSS system interface integrated into the factory-provided vehicle navigation system. Source: 2024 Jacobs Proposal Response

The proposal also includes details on other agencies’ use of the tool, including that of the Fontana, CA Police Department, which it says deployed its CSS more than 300 times between 2022 and 2023, and the Prince George’s County (MD) Sheriff, which has also had a Chevrolet Silverado outfitted with CSS.

Jacobs isn’t without competitors in the domestic CSS market. Cognyte Software and Tactical Support Equipment, Inc. also bid on the MSP contract, and last month, the City of Albuquerque closed a call for a cell-site simulator that it awarded to Cognyte Software Ltd.

A Wider View on TunnelVision and VPN Advice

If you listen to any podcast long enough, you will almost certainly hear an advertisement for a Virtual Private Network (VPN). These advertisements usually assert that a VPN is the only tool you need to stop cyber criminals, malware, government surveillance, and online tracking. But these advertisements vastly oversell the benefits of VPNs. The reality is that VPNs are mainly useful for one thing: routing your network connection through a different network. Many people, including EFF, thought that VPNs were also a useful tool for encrypting your traffic in the scenario that you didn’t trust the network you were on, such as at a coffee shop, university, or hacker conference. But new research from Leviathan Security serves as a reminder that this may not be the case and highlights the limited use cases for VPNs.

TunnelVision is a recently published attack method that can allow an attacker on a local network to force internet traffic to bypass your VPN and route over an attacker-controlled channel instead. This allows the attacker to see any unencrypted traffic (such as what websites you are visiting). Traditionally, corporations deploy VPNs for employees to access private company sites from other networks. Today, many people use a VPN in situations where they don't trust their local network. But the TunnelVision exploit makes it clear that a VPN cannot be counted on to protect you when you can't trust your local network.

TunnelVision exploits the Dynamic Host Configuration Protocol (DHCP) to reroute traffic outside of a VPN connection. This preserves the VPN connection and does not break it, but an attacker is able to view unencrypted traffic. Think of DHCP as giving you a nametag when you enter the room at a networking event. The host knows at least 50 guests will be in attendance and has allocated 50 blank nametags. Some nametags may be reserved for VIP guests, but the rest can be allocated to guests if you properly RSVP to the event. When you arrive, they check your name and then assign you a nametag. You may now properly enter the room and be identified as "Agent Smith." In the case of computers, this “name” is the IP address DHCP assigns to devices on the network. This is normally done by a DHCP server but one could manually try it by way of clothespins in a server room.
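The nametag analogy above can be sketched as a toy address pool. This is not a real DHCP implementation; the network range, reserved address, and MAC addresses are illustrative placeholders.

```python
# Toy DHCP-style lease pool, illustrating the "nametag" analogy:
# a fixed stack of nametags (IPs), some reserved for VIPs (the router),
# handed out one per guest (device) as they arrive.
import ipaddress

class LeasePool:
    def __init__(self, network, reserved=()):
        net = ipaddress.ip_network(network)
        self.free = [ip for ip in net.hosts() if str(ip) not in reserved]
        self.leases = {}  # MAC address -> assigned IP

    def request(self, mac):
        # A returning guest keeps the same nametag.
        if mac in self.leases:
            return self.leases[mac]
        if not self.free:
            raise RuntimeError("address pool exhausted")
        ip = self.free.pop(0)
        self.leases[mac] = ip
        return ip

pool = LeasePool("192.0.2.0/28", reserved={"192.0.2.1"})  # .1 held for the router
print(pool.request("aa:bb:cc:dd:ee:ff"))  # → 192.0.2.2, the first free address
```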

TunnelVision abuses one of the configuration options in DHCP, called Option 121, where an attacker on the network can assign a “lease” of IPs to a targeted device. There have been attacks in the past like TunnelCrack that had similar attack methods, and chances are if a VPN provider addressed TunnelCrack, they are working on verifying mitigations for TunnelVision as well.
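For the curious, Option 121 carries "classless static routes" encoded per RFC 3442: a one-byte prefix length, only the significant octets of the destination, then the four-byte gateway. The sketch below builds such a payload locally; the gateway address is a documentation-range placeholder, and the two /1 routes (which together cover all IPv4 space and beat a less-specific default route) are the kind of routes the TunnelVision write-up describes an attacker pushing.

```python
# Sketch of RFC 3442 "Classless Static Route" (DHCP option 121) encoding.
# Addresses below are illustrative documentation-range placeholders.
import ipaddress

def encode_option_121(routes):
    """routes: list of (destination_cidr, gateway) pairs -> option payload bytes."""
    payload = bytearray()
    for dest, gateway in routes:
        net = ipaddress.ip_network(dest)
        plen = net.prefixlen
        payload.append(plen)                   # 1 byte: prefix length
        sig = (plen + 7) // 8                  # only the significant octets follow
        payload += net.network_address.packed[:sig]
        payload += ipaddress.ip_address(gateway).packed  # 4-byte next hop
    return bytes(payload)

# Two /1 routes cover all of IPv4 and are more specific than the
# default route a VPN typically installs:
opt = encode_option_121([("0.0.0.0/1", "198.51.100.1"),
                         ("128.0.0.0/1", "198.51.100.1")])
print(opt.hex())  # → 0100c63364010180c6336401
```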

In the words of the security researchers who published this attack method:

“There’s a big difference between protecting your data in transit and protecting against all LAN attacks. VPNs were not designed to mitigate LAN attacks on the physical network and to promise otherwise is dangerous.”

Rather than lament the many ways public, untrusted networks can render someone vulnerable, it is worth remembering the protections now provided by default. Originally, the internet was not built with security in mind, and many have been working hard to rectify this. Today, we have many other tools in our toolbox to deal with these problems. For example, web traffic is mostly encrypted with HTTPS. This does not change your IP address like a VPN could, but it does encrypt the contents of the web pages you visit and secure your connection to a website. Domain Name System (DNS) lookups, which happen before an HTTPS connection is made, have also been a vector for surveillance and abuse, since the requested domain of the website is still exposed at this level. There have been wide efforts to secure and encrypt this as well: encrypted DNS and HTTPS by default are now available in every major browser, closing possible attack vectors for snoops on the same network as you. Lastly, major browsers have implemented support for Encrypted Client Hello (ECH), which encrypts your initial website connection, sealing off metadata that was previously left in cleartext.
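As an aside, encrypted DNS in its DNS-over-HTTPS form (RFC 8484) simply carries the ordinary DNS wire-format message inside an HTTPS request, so a snoop on your network sees only a TLS connection to the resolver. The sketch below builds such a query locally without sending any traffic; the resolver hostname is a placeholder, not a real service.

```python
# Minimal sketch of a DNS-over-HTTPS (RFC 8484) GET request: a standard
# DNS wire-format query, base64url-encoded into the URL. Nothing is sent.
import base64
import struct

def dns_query(name, qtype=1):  # qtype 1 = A record
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)  # ID 0, RD=1, 1 question
    qname = b"".join(bytes([len(label)]) + label.encode()
                     for label in name.split(".")) + b"\x00"
    return header + qname + struct.pack("!HH", qtype, 1)    # class IN

def doh_url(resolver, name):
    # RFC 8484 GET form: base64url with padding stripped
    q = base64.urlsafe_b64encode(dns_query(name)).rstrip(b"=").decode()
    return f"https://{resolver}/dns-query?dns={q}"

print(doh_url("dns.example.net", "eff.org"))
```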

TunnelVision is a reminder that we need to clarify what tools can and cannot do. A VPN does not provide anonymity online and neither can encrypted DNS or HTTPS (Tor can though). These are all separate tools that handle similar issues. Thankfully, HTTPS, encrypted DNS, and encrypted messengers are completely free and usable without a subscription service and can provide you basic protections on an untrusted network. VPNs—at least from providers who've worked to mitigate TunnelVision—remain useful for routing your network connection through a different network, but they should not be treated as a security multi-tool.

Add Bluetooth to the Long List of Border Surveillance Technologies

A new report from news outlet NOTUS shows that at least two Texas counties along the U.S.-Mexico border have purchased a product that would allow law enforcement to track devices that emit Bluetooth signals, including cell phones, smartwatches, wireless earbuds, and car entertainment systems. This incredibly personal mode of tracking is the latest layer of surveillance infrastructure along the U.S.-Mexico border, where communities are exposed to a tremendous amount of constant monitoring and which also serves as a laboratory where law enforcement agencies at all levels of government test new technologies.

The product now being deployed in Texas, called TraffiCatch, can detect wifi and Bluetooth signals in moving cars to track them. Webb County, which includes Laredo, has had TraffiCatch technology since at least 2019, according to GovSpend procurement data. Val Verde County, which includes Del Rio, approved the technology in 2022. 

This data collection is possible because all Bluetooth devices regularly broadcast a Bluetooth Device Address. This address can be either a public address or a random address. Public addresses don’t change for the lifetime of the device, making them the easiest to track. Random addresses are more common and offer multiple levels of privacy, but for the most part they change regularly (as is the case with most modern smartphones and products like AirTags). Bluetooth products with random addresses would be hard to track for a device that hasn’t paired with them. But if the tracked person is also carrying a Bluetooth device that has a public address, or if tracking devices are placed close enough together that a device is seen multiple times before it changes its address, random addresses could be correlated with that person over long periods of time.
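For readers who want the mechanics of those "multiple levels of privacy": per the Bluetooth Core specification, the sub-type of a random address is signaled by its two most significant bits (static random = 0b11, resolvable private = 0b01, non-resolvable private = 0b00). Whether an address is public or random at all is signaled separately at the link layer. A minimal sketch, with made-up example addresses:

```python
# Classify the sub-type of a Bluetooth LE *random* device address by its
# two most significant bits, per the Bluetooth Core specification.
# Example addresses below are invented for illustration.

def random_address_subtype(addr):
    top = int(addr.split(":")[0], 16) >> 6  # two MSBs of the 48-bit address
    return {0b11: "static random",              # fixed until power cycle/reset
            0b01: "resolvable private",         # rotates; resolvable by paired peers
            0b00: "non-resolvable private",     # rotates; resolvable by no one
            }.get(top, "reserved")

print(random_address_subtype("F4:12:34:56:78:9A"))  # 0xF4 -> 0b11 -> static random
```

Static random addresses are nearly as trackable as public ones; the rotating private sub-types are what make correlation hard for a tracker that has never paired with the device.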

It is unclear whether TraffiCatch is doing this sort of advanced analysis and correlation, and how effective it would be at tracking most modern Bluetooth devices.

According to TraffiCatch’s manufacturer, Jenoptik, this data derived from Bluetooth is also combined with data collected from automated license plate readers, another form of vehicle tracking technology placed along roads and highways by federal, state, and local law enforcement throughout the Texas border. ALPRs are well understood technology for vehicle tracking, but the addition of Bluetooth tracking may allow law enforcement to track individuals even if they are using different vehicles.

This mirrors what we already know about how Immigration and Customs Enforcement (ICE) has been using cell-site simulators (CSSs). Also known as Stingrays or IMSI catchers, CSS are devices that masquerade as legitimate cell-phone towers, tricking phones within a certain radius into connecting to the device rather than a tower. In 2023, the Department of Homeland Security’s Inspector General released a troubling report detailing how federal agencies like ICE, its subcomponent Homeland Security Investigations (HSI), and the Secret Service have conducted surveillance using CSSs without proper authorization and in violation of the law. Specifically, the Inspector General found that these agencies did not adhere to federal privacy policy governing the use of CSS and failed to obtain special orders required before using these types of surveillance devices.

Law enforcement agencies along the border can pour money into overlapping systems of surveillance that monitor entire communities living along the border thanks in part to Operation Stonegarden (OPSG), a Department of Homeland Security (DHS) grant program, which rewards state and local police for collaborating in border security initiatives. DHS doled out $90 million in OPSG funding in 2023, $37 million of which went to Texas agencies. These programs are especially alarming to human rights advocates due to recent legislation passed in Texas to allow local and state law enforcement to take immigration enforcement into their own hands.

As a ubiquitous wireless interface to many of our personal devices and even our vehicles, Bluetooth is a large and notoriously insecure attack surface for hacks and exploits. And as TraffiCatch demonstrates, even when your device’s Bluetooth tech isn’t being actively hacked, it can broadcast uniquely identifiable information that makes you a target for tracking. This is one of the many ways surveillance, and the distrust it breeds in the public over technology and tech companies, hinders progress. Hands-free communication in cars is a fantastic modern innovation. But the fact that it comes at the cost of opening a whole society up to surveillance is a detriment to all.

Internet Service Providers Plan to Subvert Net Neutrality. Don’t Let Them

In the absence of strong net neutrality protections, internet service providers (ISPs) have made all sorts of plans that would allow them to capitalize on something called "network slicing." While this technology has all sorts of promise, what the ISPs have planned would subvert net neutrality—the principle that all data be treated equally by your service provider—by allowing them to recreate the kinds of “fast lanes” we've already agreed should not be allowed. If their plans succeed, then the new proposed net neutrality protections will end up doing far less for consumers than the old rules did.

The FCC released draft rules to reinstate net neutrality, with a vote on adopting them scheduled for April 25. Overall, the order is a great step for net neutrality. However, to be truly effective, the rules must not preempt states from protecting their residents with stronger laws, and they must clearly find that creating “fast lanes” via positive discrimination and unpaid prioritization of specific applications or services violates net neutrality.

Fast Lanes and How They Could Harm Competition

Since “fast lanes” aren’t a technical term, what do we mean when we are talking about a fast lane? To understand, it is helpful to think about data traffic and internet networking infrastructure like car traffic and public road systems. As roads connect people, goods, and services across distances, so does network infrastructure allow for data traffic to flow from one place to another. And just as a road with more capacity in the way of more lanes theoretically means the road can support more traffic moving at speed1, internet infrastructure with more “lanes” (i.e. bandwidth) should mean that a network can better support applications like streaming services and online gaming.

Every ISP’s network has a maximum capacity, and speed, of internet traffic it can handle. To continue the analogy, the road leading to your neighborhood has a set number of lanes. This is why the speed of your internet may change throughout the day: at peak hours your service may slow down because too much requested traffic is clogging up the lanes.

It’s not inherently a bad thing to have specific lanes for certain types of traffic. Actual fast lanes on freeways can ease congestion by not making faster-moving vehicles compete for space with slower-moving traffic, and exit and entry lanes let cars perform specialized maneuvers without impeding other traffic. A lane only for buses isn’t a bad thing, as long as every bus gets equal access to that lane and everyone has equal access to riding those buses. This becomes a problem when there is a special lane only for Google buses, or only for consuming entertainment content rather than participating in video calls. In these scenarios you would be improving certain bus rides at the expense of degraded service for everyone else on the road.

An internet “fast lane” would be the designation of part of the network, with more bandwidth and/or lower latency, to be used only for certain services. On a technical level, the physical network infrastructure would be split, using network slicing, into several software-defined networks with different use cases. One network might be optimized for high-bandwidth applications such as video streaming, another might be optimized for applications needing low latency (for example, by minimizing the distance between client and server), and another might be optimized for IoT devices. The maximum physical network capacity is split among these slices. To continue our tortured metaphor, your original six-lane general road is now a four-lane general road with two lanes reserved for, say, a select list of streaming services. Think dedicated high-speed lanes for Disney+, HBO, and Netflix, but those services only. In a network-neutral construction of the infrastructure, all internet traffic shares all lanes, and no specific app or service is unfairly sped up or slowed down. This isn’t to say that we are inherently against network management techniques like quality of service or network slicing. But it’s important that quality of service efforts be undertaken, as much as possible, in an application-agnostic manner.
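To put rough numbers on the road metaphor, here is a toy calculation of what reserving slices does to everyone outside them. All capacities and user counts here are invented purely for illustration:

```python
# Toy model of network slicing vs. a neutral network.
# Every number here is invented for illustration only.

TOTAL_CAPACITY_MBPS = 600  # the whole "six lane road"

def per_user_share(capacity_mbps: float, users: int) -> float:
    """Evenly split a pool of capacity among the users competing for it."""
    return capacity_mbps / users

# Neutral network: all 30 active users share all 600 Mbps.
neutral = per_user_share(TOTAL_CAPACITY_MBPS, 30)        # 20.0 Mbps each

# Sliced network: 200 Mbps is reserved for 5 users of the favored
# services, so the other 25 users squeeze into the remaining 400 Mbps.
favored = per_user_share(200, 5)                         # 40.0 Mbps each
everyone_else = per_user_share(400, 25)                  # 16.0 Mbps each

print(neutral, favored, everyone_else)
```

The favored users get faster service not because the network grew, but because everyone else’s share shrank, which is exactly the zero-sum effect the road metaphor is meant to capture.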

The fast lanes metaphor isn’t ideal. On the road, having fast lanes is a good thing: it can protect slower, more cautious drivers from dangerous driving and improve the flow of traffic. Bike lanes are a good thing because they make cyclists safer and allow cars to drive more quickly without having to navigate around them. But with traffic lanes it’s the driver, not the road, that decides which lane they belong in (with penalties for obviously bad-faith behavior, such as driving in the bike lane).

Internet service providers are already testing their ability to create these network slices. They already have plans to create market offerings in which certain applications and services, chosen by them, get exclusive reserved fast lanes while the rest of the internet must shoulder its way through what is left. This kind of network slicing is a violation of net neutrality. We aren’t against network slicing as a technology: it could be useful for things like remote surgery or vehicle-to-vehicle communication, which require low-latency connections and are in the public interest, and which are separate offerings rather than part of the broadband services covered in the draft order. We are against network slicing being used as a loophole to circumvent the principles of net neutrality.

Fast Lanes Are a Clear Violation of Net Neutrality

Net neutrality is the principle that all ISPs should treat all legitimate traffic coming over their networks equally; discriminating between certain applications or types of traffic is a clear violation of that principle. When fast lanes speed up certain applications or certain classes of applications, they cannot do so without having a negative impact on other internet traffic, even if only by comparison. This is throttling, plain and simple.

Further, because ISPs choose which applications or types of services get to be in the fast lane, they choose winners and losers within the internet, which has clear harms to both speech and competition. Whether your access to Disney+ is faster than your access to Indieflix because Disney+ is sped up or because Indieflix is slowed down doesn’t matter because the end result is the same: Disney+ is faster than Indieflix and so you are incentivized to use Disney+ over Indieflix.

ISPs should not be able to harm competition even by deciding to prioritize incumbent services over new ones, or that one political party’s website is faster than another’s. It is the consumer who should be in charge of what they do online. Fast lanes have no place in a network neutral internet.

  • 1. Urban studies research shows that this isn’t actually the case, but it remains the popular wisdom among politicians and urban planners.

EFF Helps News Organizations Push Back Against Legal Bullying from Cyber Mercenary Group

Cyber mercenaries present a grave threat to human rights and freedom of expression. They have been implicated in surveillance, torture, and even murder of human rights defenders, political candidates, and journalists. One of the most effective ways that the human rights community pushes back against the threat of targeted surveillance and cyber mercenaries is to investigate and expose these companies and their owners and customers. 

But over the last several months, a campaign of bullying and censorship has emerged, seeking to wipe out stories about the mercenary hacking campaigns of a less well-known company, Appin Technology, in general, and of the company’s cofounder, Rajat Khare, in particular. These efforts follow a familiar pattern: obtain a court order in a friendly international jurisdiction, then misrepresent the force and substance of that order to bully publishers around the world into removing their stories.

We are helping to push back on that effort, which seeks to transform a very limited and preliminary Indian court ruling into a global takedown order. We are representing Techdirt and MuckRock Foundation, two of the news entities asked to remove Appin-related content from their sites. On their behalf, we challenged the assertions that the Indian court either found the Reuters reporting to be inaccurate or that the order requires any entities other than Reuters and Google to do anything. We requested a response; so far, we have received nothing.

Background

If you worked in cybersecurity in the early 2010s, chances are that you remember Appin Technology, an Indian company offering information security education and training with a sideline in (at least according to many technical reports) hacking-for-hire.

On November 16th, 2023, Reuters published an extensively-researched story titled “How an Indian Startup Hacked the World” about Appin Technology and its cofounder Rajat Khare. The story detailed hacking operations carried out by Appin against private and government targets all over the world while Khare was still involved with the company. The story was well-sourced, based on over 70 original documents and interviews with primary sources from inside Appin. But within just days of publication, the story—and many others covering the issue—disappeared from most of the web.

On December 4th, an Indian court preliminarily ordered Reuters to take down its story about Appin Technology and Khare while a case filed against them remains pending in the court. Reuters complied with the order and took the story offline. Since then, dozens of other journalists have written about the original story and about the takedown that followed.

At the time of this writing, more than 20 of those stories have been taken down by their respective publications, many at the request of an entity called “Association of Appin Training Centers (AOATC).” Khare’s lawyers have also sent letters to news sites in multiple countries demanding they remove his name from investigative reports. Khare’s lawyers also succeeded in getting Swiss courts to issue an injunction against reporting from Swiss public television, forcing them to remove his name from a story about Qatar hiring hackers to spy on FIFA officials in preparation for the World Cup. Original stories, cybersecurity reports naming Appin, stories about the Reuters story, and even stories about the takedown have all been taken down. Even the archived version of the Reuters story was taken down from archive.org in response to letters sent by the Association of Appin Training Centers.

One of the letters sent by AOATC to Ron Deibert, the founder and director of Citizen Lab, reads:

A letter from the Association of Appin Training Centers to Citizen Lab, asking the latter to take down its story.

Ron Deibert had the following response:

 "The #SLAPP story killers from India 🇮🇳 looking to silence @Reuters  @Bing_Chris  @razhael  & colleagues are coming after me too!  I received the following 👇  "takedown" notice from the "Association of Appin Training Centers" to which I say:  🖕🖕🖕🖕🖕🖕🖕"

Not everyone has been as confident as Ron Deibert. Some of the stories that were taken down have been replaced with a note explaining the takedown, while others were redacted into illegibility, such as the story from Lawfare:

 On Dec. 28, 2023, Lawfare received a letter notifying us that the Reuters story summarized in this article had been taken down pursuant to court order in response to allegations that it is false and defamatory. The letter demanded that we retract this post as well. The article in question has, indeed, been removed from the Reuters web site, replac

It is not clear who is behind The Association of Appin Training Centers, but according to documents surfaced by Reuters, the organization didn’t exist until after the lawsuit was filed against Reuters in Indian court. Khare’s lawyers have denied any connection between Khare and the training center organization. Even if this is true, it is clear that the goals of both parties are fundamentally aligned in silencing any negative press covering Appin or Rajat Khare.  

Regardless of who is behind the Association of Appin Training Centers, the links between Khare and Appin Technology are extensive and clear. Khare continues to claim that he left Appin in 2013, before any hacking-for-hire took place. However, Indian corporate records demonstrate that he stayed involved with Appin long after that time. 

Khare has also been the subject of multiple criminal investigations. Reuters published a sworn 2016 affidavit by Israeli private investigator Aviram Halevi in which he admits hiring Appin to steal emails from a Korean businessman. It also published a 2012 Dominican prosecutor’s filing which described Khare as part of an alleged hacker’s “international criminal network.” A publicly available criminal complaint filed with India’s Central Bureau of Investigation shows that Khare is accused, with others, of embezzling nearly $100 million from an Indian education technology company. A Times of India story from 2013 notes that Appin was investigated by an unnamed Indian intelligence agency over alleged “wrongdoings.”

Response to AOATC

EFF is helping two news organizations stand up to the Association of Appin Training Centers’ bullying—Techdirt and Muckrock Foundation. 

Techdirt received a request similar to the one Ron Deibert received after it published an article about the Reuters takedown, but then also received the following emails:

Dear Sir/Madam,

I am writing to you on behalf of Association of Appin Training Centers in regards to the removal of a defamatory article running on https://www.techdirt.com/ that refers to Reuters story, titled: “How An Indian Startup Hacked The World” published on 16th November 2023.

As you must be aware, Reuters has withdrawn the story, respecting the order of a Delhi court. The article made allegations without providing substantive evidence and was based solely on interviews conducted with several people.

In light of the same, we request you to kindly remove the story as it is damaging to us.

Please find the URL mentioned below.

https://www.techdirt.com/2023/12/07/indian-court-orders-reuters-to-take-down-investigative-report-regarding-a-hack-for-hire-company/

Thanks & Regards

Association of Appin Training Centers

And received the following email twice, roughly two weeks apart:

Hi Sir/Madam

This mail is regarding an article published on your website,

URL : https://www.techdirt.com/2023/12/07/indian-court-orders-reuters-to-take-down-investigative-report-regarding-a-hack-for-hire-company/

dated on 7th Dec. 23 .

As you have stated in your article, the Reuters story was declared defamatory by the Indian Court which was subsequently removed from their website.

However, It is pertinent to mention here that you extracted a portion of your article from the same defamatory article which itself is a violation of an Indian Court Order, thereby making you also liable under Contempt of Courts Act, 1971.

You are advised to remove this article from your website with immediate effect.

 

Thanks & Regards

Association of Appin Training Centers

We responded to AOATC on behalf of Techdirt and MuckRock Foundation to the “requests for assistance” which were sent to them, challenging AOATC’s assertions about the substance and effect of the Indian court interim order. We pointed out that the Indian court order is only interim and not a final judgment that Reuters’ reporting was false, and that it only requires Reuters and Google to do anything. Furthermore, we explained that even if the court order applied to MuckRock and Techdirt, the order is inconsistent with the First Amendment and would be unenforceable in US courts pursuant to the SPEECH Act:

To the Association of Appin Training Centers:

We represent and write on behalf of Techdirt and MuckRock Foundation (which runs the DocumentCloud hosting services), each of which received correspondence from you making certain assertions about the legal significance of an interim court order in the matter of Vinay Pandey v. Raphael Satter & Ors. Please direct any future correspondence about this matter to me.

We are concerned with two issues you raise in your correspondence.

First, you refer to the Reuters article as containing defamatory materials as determined by the court. However, the court’s order by its very terms is an interim order, that indicates that the defendants’ evidence has not yet been considered, and that a final determination of the defamatory character of the article has not been made. The order itself states “this is only a prima-facie opinion and the defendants shall have sufficient opportunity to express their views through reply, contest in the main suit etc. and the final decision shall be taken subsequently.”

Second, you assert that reporting by others of the disputed statements made in the Reuters article “itself is a violation of an Indian Court Order, thereby making you also liable under Contempt of Courts Act, 1971.” But, again by its plain terms, the court’s interim order applies only to Reuters and to Google. The order does not require any other person or entity to depublish their articles or other pertinent materials. And the order does not address its effect on those outside the jurisdiction of Indian courts. The order is in no way the global takedown order your correspondence represents it to be. Moreover, both Techdirt and MuckRock Foundation are U.S. entities. Thus, even if the court’s order could apply beyond the parties named within it, it will be unenforceable in U.S. courts to the extent it and Indian defamation law is inconsistent with the First Amendment to the U.S. Constitution and 47 U.S.C. § 230, pursuant to the SPEECH Act, 28 U.S.C. § 4102. Since the First Amendment would not permit an interim depublication order in a defamation case, the Pandey order is unenforceable.

If you disagree, please provide us with legal authority so we can assess those arguments. Unless we hear from you otherwise, we will assume that you concede that the order binds only Reuters and Google and that you will cease asserting otherwise to our clients or to anyone else.

We have not yet received any response from AOATC. We hope that others who have received takedown requests and demands from AOATC will examine their assertions with a critical eye.  

If a relatively obscure company like AOATC or an oligarch like Rajat Khare can succeed in keeping their name out of the public discourse with strategic lawsuits, it sets a dangerous precedent for other larger, better-resourced, and more well-known companies such as Dark Matter or NSO Group to do the same. This would be a disaster for civil society, a disaster for security research, and a disaster for freedom of expression.

Worried About AI Voice Clone Scams? Create a Family Password

January 31, 2024 at 19:42

Your grandfather receives a call late at night from a person pretending to be you. The caller says that you are in jail or have been kidnapped and that they need money urgently to get you out of trouble. Perhaps they then bring on a fake police officer or kidnapper to heighten the tension. The money, of course, should be wired right away to an unfamiliar account at an unfamiliar bank. 

It’s a classic and common scam, and like many scams it relies on a scary, urgent scenario to override the victim’s common sense and make them more likely to send money. Now, scammers are reportedly experimenting with a way to further heighten that panic by playing a simulated recording of “your” voice. Fortunately, there’s an easy and old-school trick you can use to preempt the scammers: creating a shared verbal password with your family.

The ability to create audio deepfakes of people’s voices using machine learning and just minutes of them speaking has become relatively cheap and easy to acquire. There are myriad websites that will let you make voice clones. Some will let you make a variety of celebrity voices say anything you want, while others will let you upload a recording to create a voice clone of anyone. Scammers have figured out that they can use this to clone the voices of regular people. Suddenly your relative isn’t talking to someone who sounds like a complete stranger; they are hearing your own voice. This makes the scam much more concerning.

Voice generation scams aren’t widespread yet, but they do seem to be happening. There have been news stories and even congressional testimony from people who have been the targets of voice impersonation scams, and voice cloning is being used in political disinformation campaigns as well. It’s impossible for us to know what kind of technology these scammers used, or whether they were just really good impersonators. But the scams will likely grow more prevalent as the technology gets cheaper and more ubiquitous. For now, the novelty of these scams, and the use of machine learning and deepfakes (technologies that are raising concerns across many sectors of society), seems to be driving a lot of the coverage.

The family password is a decades-old, low tech solution to this modern high tech problem. 

The first step is to agree with your family on a password you can all remember and use. The most important thing is that it should be easy to remember in a panic, hard to forget, and not public information. You could use the name of a well-known person or object in your family, an inside joke, a family meme, or any word you can all remember easily. Despite the name, this doesn’t need to be limited to your family: it can be a chosen family, workplace, anarchist witch coven, etc. Any group of people you associate with can benefit from having a password.

Then, when someone calls you or someone who trusts you (or emails or texts you) with an urgent request for money (or iTunes gift cards), you simply ask them for the password. If they can’t tell it to you, they might be a fake. You could of course verify further with other questions, like “what is my cat’s name?” or “when was the last time we saw each other?” These sorts of questions work even if you haven’t previously set up a passphrase in your family or friend group. But keep in mind that people tend to forget basic things when they have experienced trauma or are in a panic. It might be helpful, especially for people with less robust memories, to write the password down in case you forget it. After all, it’s not likely that a scammer will break into your house to find the family password.
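For the playfully inclined, the verification logic is simple enough to sketch in a few lines. This is purely illustrative: the password here is a made-up example, and the idea of normalizing away a panicked caller’s stray capitals and spaces is our own embellishment, not part of any standard:

```python
# Sketch of a family-password check. The idea: normalize both sides so
# that a panicked "  MR. Whiskers " still matches "mr. whiskers".
# The password below is an invented example.
import unicodedata

FAMILY_PASSWORD = "mr. whiskers"

def normalize(phrase: str) -> str:
    # Lowercase, trim, unify unicode forms, and collapse whitespace.
    folded = unicodedata.normalize("NFKC", phrase).casefold().strip()
    return " ".join(folded.split())

def caller_is_verified(answer: str) -> bool:
    return normalize(answer) == normalize(FAMILY_PASSWORD)

print(caller_is_verified("  MR. Whiskers "))  # True
print(caller_is_verified("fluffy"))           # False
```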

These techniques can be useful against other scams which haven’t been invented yet, but which may come around as deepfakes become more prevalent, such as machine-generated video or photo avatars for “proof.” Or should you ever find yourself in a hackneyed sci-fi situation where there are two identical copies of your friend and you aren’t sure which one is the evil clone and which one is the original. 

An image of Spider-Man pointing at another Spider-Man, who is pointing back at him. A classic meme.

Spider-man hopes The Avengers haven't forgotten their secret password!

The added benefit of this technique is that it gives you a moment to step back, breathe, and engage in some critical thinking. Many scams of this nature rely on panic to keep you in your lower brain; by asking for the passphrase, you also buy yourself a minute to think. Is your kid really in Mexico right now? Can you call them back at their own number to be sure it’s them?

So, go make a family password and a friend password to keep your family and friends from getting scammed by AI impostors (or evil clones).

What Apple's Promise to Support RCS Means for Text Messaging

January 31, 2024 at 16:51

You may have heard recently that Apple is planning to implement Rich Communication Services (RCS) on iPhones, once again igniting the green versus blue bubble debate. RCS will thankfully bring a number of long-missing features to those green bubble conversations in Messages, but Apple's proposed implementation has a murkier future when it comes to security. 

The RCS standard will replace SMS, the protocol behind basic everyday text messages, and MMS, the protocol for sending pictures in text messages. RCS offers a number of improvements over SMS, including longer messages, high-quality pictures, read receipts, typing indicators, GIFs, location sharing, the ability to send and receive messages over Wi-Fi, and improved group messaging. Basically, it’s a modern messaging standard with the features people have grown to expect.

The RCS standard is being worked on by the same standards body (GSMA) that wrote the standard for SMS and many other core mobile functions. It has been in the works since 2007 and has been supported by Google since 2019. Apple had previously said it wouldn’t support RCS, but recently came around and declared that it will support sending and receiving RCS messages starting sometime in 2024. This is a win for user experience and interoperability, since iPhone and Android users will now be able to send each other rich, modern text messages using their phones’ default messaging apps.

But is it a win for security? 

On its own, the core RCS protocol is currently no more secure than SMS. The protocol is not encrypted by default, meaning that anyone at your phone company, or any law enforcement agent (ordinarily with a warrant), can see the contents and metadata of your RCS messages. The RCS protocol by itself does not specify or recommend any type of end-to-end encryption; the only encryption is the incidental transport encryption between your phone and the cell tower. This is the same way SMS works.

But what’s exciting about RCS is its native support for extensions. Google has taken advantage of this ability to implement its own plan for encryption on top of RCS using a version of the Signal protocol. As of now, this only works when both users are using Google’s default messaging app (Google Messages) and their phone companies support RCS messaging (the big three in the U.S. all do, as do a majority around the world). If encryption is not supported on either end, the conversation falls back to the default unencrypted version. A user’s phone company could also actively block encrypted RCS in a specific region, for a specific user, or for a specific pair of users by pretending it doesn’t support RCS. In that case the user will be given the option of resending the messages unencrypted, but can choose not to send the message over the unencrypted channel. Google’s implementation of encrypted RCS also doesn’t hide any metadata about your messages, so law enforcement could still get a record of who you conversed with, how many messages were sent, at what times, and how big the messages were. It’s a significant security improvement over SMS, but people with heightened risk profiles should still consider apps that leak less metadata, like Signal. Despite those caveats, this is a good step by Google toward a fully encrypted text messaging future.
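To see why end-to-end encryption keeps the carrier out even though the carrier relays every byte, consider a textbook Diffie-Hellman key agreement in miniature. This is a deliberately toy sketch (tiny parameters, no authentication), not the Signal protocol itself, which builds elliptic-curve key agreement and ratcheting on top of the same core idea:

```python
# Toy Diffie-Hellman key agreement. The modulus is far too small to be
# secure; real protocols (Signal's included) use elliptic curves and
# add authentication and ratcheting. Illustrative only.
import secrets

P = 4294967291  # a small prime modulus (2**32 - 5), illustrative only
G = 5           # generator

# Each phone picks a private value and transmits only G**x mod P.
alice_private = secrets.randbelow(P - 2) + 1
bob_private = secrets.randbelow(P - 2) + 1
alice_public = pow(G, alice_private, P)
bob_public = pow(G, bob_private, P)

# The carrier sees alice_public and bob_public go by, nothing else.
# Each side combines the other's public value with its own secret:
alice_key = pow(bob_public, alice_private, P)
bob_key = pow(alice_public, bob_private, P)

assert alice_key == bob_key  # same key on both ends, never transmitted
```

The carrier, holding only the two public values, cannot feasibly recover the shared key; that asymmetry is the whole point of end-to-end encryption.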

Apple stated it will not use any type of proprietary end-to-end encryption (presumably referring to Google’s approach), but did say it would work to make end-to-end encryption part of the RCS standard. Avoiding a discordant ecosystem with a different encryption protocol for each company is a desirable goal. Ideally, Apple and Google will work together on standardizing end-to-end encryption in RCS so that the solution is guaranteed to work with both companies’ products from the outset. Hopefully encryption will be part of the RCS standard by the time Apple officially ships support for it; otherwise users will be left with the status quo of having to use third-party apps for interoperable encrypted messaging.

We hope that the GSMA members will agree on a standard soon, that the standard will use modern cryptographic techniques, and that it will do more than the current implementation of encrypted RCS to protect metadata and resist downgrade attacks. We urge Google and Apple to work with the GSMA to finalize and adopt such a standard quickly. Interoperable, encrypted text messaging by default can’t come soon enough.

Meta Announces End-to-End Encryption by Default in Messenger

Yesterday Meta announced that they have begun rolling out default end-to-end encryption for one-to-one messages and voice calls on Messenger and Facebook. While there remain some privacy concerns around backups and metadata, we applaud this decision. It will bring strong encryption to over one billion people, protecting them from dragnet surveillance of the contents of their Facebook messages. 

Governments are continuing to attack encryption with laws designed to weaken it. With authoritarianism on the rise around the world, encryption is more important with each passing day. Strong default encryption, sooner, might have prevented a woman in Nebraska from being prosecuted for an abortion based primarily on evidence from her Facebook messages. This update couldn’t have come at a more important time. This introduction of end-to-end encryption on Messenger means that the two most popular messaging platforms in the world, both owned by Meta, will now include strong encryption by default. 

For now this change will only apply to one-to-one chats and voice calls, and will be rolled out to all users over the next few months, with default encryption of group messages and Instagram messages to come later. Regardless, this rollout is a huge win for user privacy across the world. Users will also have many more options for messaging security and privacy, including how to back up their encrypted messages safely, turning off “read receipts,” and enabling “disappearing” messages. Choosing between these options matters for your privacy and security model, and we encourage users to think about what they expect from their secure messenger.

Backing up securely: the devil is in the (Labyrinthian) details

The technology behind Messenger’s end-to-end encryption will continue to be a slightly modified version of the Signal protocol (the same protocol WhatsApp uses). When it comes to building secure messengers, or in this case porting a billion users onto secure messaging, the details are the most important part. Here, the encrypted backup options provided by Meta are the biggest detail: in addressing backups, how do they balance security with usability and availability?

Backups are important for users who expect to log into their account from any device and retrieve their message history by default. From an encryption standpoint, how backups are handled can break certain guarantees of end-to-end encryption. WhatsApp, Meta’s other messaging service, only began offering end-to-end encrypted backups a few years ago. Meta is now rolling out an end-to-end encrypted backup system for Messenger as well, which it calls Labyrinth.

Encrypted backups mean your backed-up messages will be encrypted on Facebook’s servers and won’t be readable without your private key. Enabling encrypted backups (necessarily) trades away forward secrecy in exchange for usability: if an app is forward-secret, you could delete all your messages, hand someone else your phone, and they would not be able to recover them. Weighing this tradeoff is another factor to consider when choosing how to use secure messengers that give you the option.
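A minimal sketch of how forward secrecy is commonly achieved, and why backups cut against it. This hash ratchet is illustrative only; real protocols, Signal’s included, derive keys with HKDF and mix in fresh Diffie-Hellman values. The point is simply that keys march forward through a one-way function and old ones get erased:

```python
# Minimal "ratchet": each message key is a one-way hash of the previous
# one, and the old key is discarded after use. Illustrative only.
import hashlib

def ratchet(key: bytes) -> bytes:
    """Derive the next message key from the current one (one-way)."""
    return hashlib.sha256(key).digest()

key = b"initial shared secret (example)"
history = []
for _ in range(3):
    key = ratchet(key)       # step the key forward...
    history.append(key)
    # ...and in a forward-secret app, the previous key is erased here.

# Knowing today's key doesn't reveal yesterday's, because SHA-256 can't
# be run backwards. But a backup that preserves `history` (or the
# initial secret) lets anyone who obtains it re-derive every key.
print(len(set(history)))  # 3 distinct keys
```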

If you elect to use encrypted backups, you can secure your private key with a 6-digit PIN, or back the key up to cloud storage such as iCloud or Google Cloud. If you back the key up to a third party, it is available to that service provider and could be retrieved by law enforcement with a warrant, unless the cloud account is itself encrypted. The 6-digit PIN provides a bit more security than the cloud backup option, but at the cost of usability for users who might not remember a PIN.
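Some back-of-the-envelope arithmetic (ours, not Meta’s threat model) on why a 6-digit PIN only holds up if guesses are strictly rate-limited or confined to secure hardware:

```python
# Back-of-the-envelope keyspace math for a 6-digit PIN.
# The guess rate below is an arbitrary illustrative figure.
pin_space = 10 ** 6  # 000000 through 999999

# With no rate limiting, even a leisurely 100 guesses per second
# exhausts the entire keyspace quickly:
seconds_to_exhaust = pin_space / 100
hours = seconds_to_exhaust / 3600

print(pin_space)        # 1000000
print(round(hours, 1))  # 2.8
```

This is why schemes like this typically pair the PIN with server-side or hardware-enforced guess limits rather than relying on the PIN’s entropy alone.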

Choosing the right secure messenger for your use case

There are still significant concerns about metadata in Messenger. By design, Meta has access to a lot of unencrypted metadata, such as who sends messages to whom, when those messages were sent, and data about you, your account, and your social contacts. None of that will change with the introduction of default encryption. For that reason we recommend that anyone concerned with their privacy or security consider their options carefully when choosing a secure messenger.

Tor University Challenge: First Semester Report Card

December 4, 2023 at 13:29

In August of 2023 EFF announced the Tor University Challenge, a campaign to get more universities around the world to operate Tor relays. The primary goal of this campaign is to strengthen the Tor network by creating more high-bandwidth, reliable Tor nodes. We hope this will also make the Tor network more resilient to censorship, since any country or smaller network that cuts off access to Tor would also be cutting itself off from a large swath of universities, academic knowledge, and collaborations.

We started the campaign with thirteen institutions:

  • Technical University Berlin (Germany)
  • Boston University (US)
  • University of Cambridge (England)
  • Carnegie Mellon University (US)
  • University College London (England)
  • Georgetown University (US)
  • Johannes Kepler Universität Linz (Austria)
  • Karlstad University (Sweden)
  • KU Leuven (Belgium)
  • University of Michigan (US)
  • University of Minnesota (US)
  • Massachusetts Institute of Technology (US)
  • University of Waterloo (Canada)

People at each of these institutions have been running Tor relays for over a year and are contributing significantly to the Tor network.

Since August, we've spent much of our time discovering and making contact with existing relay operators. People at these institutions were already accomplishing the campaign's goals, but hadn't made it into the launch:

  • University of North Carolina (US)
  • Universidad Nacional Autónoma de México (Mexico)
  • University of the Philippines (Philippines)
  • University of Bremen (Germany)
  • University of Twente (Netherlands)
  • Karlsruhe Institute of Technology (Germany)
  • Universitatea Politehnica Timișoara (Romania)

In addition, two of the institutions in the original launch list have started public relays. University of Michigan used to run only a Snowflake back-end bridge, and now they're running a new exit relay too. Georgetown University used to run only a default obfs4 bridge, and now they're running a non-exit relay as well.
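For institutions weighing what running a relay involves, the configuration side is short. A minimal non-exit relay torrc might look like the following sketch (the nickname and contact address are placeholders):

```
# Minimal non-exit Tor relay configuration (illustrative values)
Nickname myUniRelay
ContactInfo tor-admin@university.example
ORPort 9001
ExitRelay 0    # non-exit: never the last hop to the open internet
SocksPort 0    # run as a relay only, with no local client port
```

The harder part, as the next paragraph describes, is usually institutional rather than technical.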

Setting up new relays at educational institutions can be a lengthy process, because it can involve getting buy-in and agreement from many different parts of the organization. Five new institutions are in the middle of this process, and we're hopeful we'll be able to announce them soon. For many of the institutions on our list we were able to reaffirm their commitment to running Tor relays or help provide the impetus needed to make the relay more permanent. In some cases we were also able to provide answers to technical questions or other concerns.

In Europe, we are realizing that relationship-building with the per-country National Research and Education Network organizations (NRENs) is key to sustainability. In the United States each university buys its own internet connection however it likes, but in Europe each university gets its internet from its nation's NREN. That means relays running in the NRENs themselves—while not technically in a university—are even more important because they represent Tor support at the national level. Our next step is to make better contact with the NRENs that appear especially Tor-supportive: Switzerland, Sweden, the Netherlands, and Greece.

Now that we have fostered connections with many of the existing institutions running relays, we want to get new institutions on board! We need more institutions to step up and start running Tor relays, whether as part of your computer science or cybersecurity department, or in any other department where you can establish a group of people to maintain a relay. But you don’t have to be a CS or engineering student or professor to join us! Political science, international relations, journalism, and any other department can join in on the fun and be a part of making a more censorship-resistant internet! We also welcome universities from anywhere in the world. For now, universities in the US and EU make up the bulk of the relays, and we would love to see more universities from the global south join our coalition.

If you need help convincing people at your university, our website has many helpful technical, legal, and policy arguments for why your university should run a Tor relay.

And don’t forget about the prizes! Any university that keeps a Tor relay up for more than a year will receive these fantastic custom-designed challenge coins, one for each member of your Tor team!

The beautiful challenge coins you can get for participating in the Tor University Challenge

If you have already started a relay at your university, and want help or a prize LET US KNOW.
