
New Email Scam Includes Pictures of Your House. Don’t Fall For It.

September 27, 2024 at 15:36

You may have arrived at this post because you received an email with an attached PDF from a purported hacker who is demanding payment or else they will send compromising information—such as pictures of a sexual nature—to all your friends and family. You’re searching for what to do in this frightening situation, and how to respond to an apparently personalized threat that even includes your actual “LastNameFirstName.pdf” and a picture of your house.

Don’t panic. Contrary to the claims in your email, you probably haven't been hacked (or at least, that's not what prompted that email). This is merely a new variation on an old scam—actually, a whole category of scams called "sextortion." This is a type of online phishing that targets people around the world and preys on digital-age fears. It generally uses publicly available information or information from data breaches, not information obtained from hacking the recipients of the emails specifically, and therefore it is very unlikely the sender has any "incriminating" photos or has actually hacked your accounts or devices.


We’ll talk about a few steps to take to protect yourself, but the first and foremost piece of advice we have: do not pay the ransom.

We have pasted an example of this email scam at the bottom of this post. The general gist is that a hacker claims to have compromised your computer and says they will release embarrassing information—such as images of you captured through your web camera or your pornographic browsing history—to your friends, family, and co-workers. The hacker promises to go away if you send them thousands of dollars, usually in bitcoin. This is different from a separate sextortion scam in which a stranger befriends a user and convinces them to exchange sexual content, then demands payment for secrecy—a much more perilous situation that requires a more careful response.

What makes the email especially alarming is that, to prove their authenticity, they begin the emails showing you your address, full name, and possibly a picture of your house. 

Again, this still doesn't mean you've been hacked. The scammers in this case likely found a data breach which contained a list of names, emails, and home addresses and are sending this email out to potentially millions of people, hoping that enough of them would be worried enough to pay for the scam to become profitable.

Here are some quick answers to the questions many people ask after receiving these emails.

They Have My Address and Phone Number! How Did They Get a Picture of My House?

Rest assured that the scammers were not in fact outside your house taking pictures. For better or worse, pictures of our houses are all over the internet. From Google Street View to real estate websites, finding a picture of someone’s house is trivial if you have their address. While public data on your home may be nerve-wracking, similar data about government property can have transparency benefits.

Unfortunately, in the modern age, data breaches are common, and massive sets of people’s personal information often make their way to the criminal corners of the Internet. Scammers likely obtained such a list, or multiple lists, including email addresses, names, phone numbers, and addresses for the express purpose of including a kernel of truth in an otherwise boilerplate mass email.

It’s harder to change your address and phone number than it is to change your password. The best thing you can do here is be aware that your information is out there and be careful of future scams using this information. Since this information (along with other leaked info such as your social security number) can be used for identity theft, it's a good idea to freeze your credit.

And of course, you should always change your password when you’re alerted that your information has been leaked in a breach. You can also use a service like Have I Been Pwned to check whether you have been part of one of the more well-known password dumps.
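
If you’d rather not paste a password into a website, the same check can be done against Have I Been Pwned’s public Pwned Passwords range API, which uses k-anonymity: only the first five hex characters of the password’s SHA-1 hash are ever sent, so the full hash never leaves your machine. Below is a minimal sketch in Python; the endpoint is real, but the example password and the minimal error handling are ours:

```python
# A minimal sketch (not an official client) of a k-anonymity lookup
# against the public Pwned Passwords range API. Only the first five
# hex characters of the password's SHA-1 hash are transmitted.
# Requires the third-party `requests` package.
import hashlib
import requests

def pwned_count(password: str) -> int:
    """Return how many times `password` appears in known breach corpora."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get(
        f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10
    )
    resp.raise_for_status()
    # The API returns lines of the form "HASHSUFFIX:COUNT".
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    hits = pwned_count("password123")  # hypothetical example password
    print(f"Seen {hits} times in breaches" if hits else "Not found")
```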

Should I Respond to the Email?

Absolutely not. With this type of scam, the perpetrator relies on the likelihood that a small number of people will respond out of a batch of potentially millions. Fundamentally this isn't that much different from the old Nigerian prince scam, just with a different hook. By default they expect most people will not even open the email, let alone read it. But once they get a response—and a conversation is initiated—they will likely move into a more advanced stage of the scam. It’s better to not respond at all.

So, I Shouldn’t Pay the Ransom?

You should not pay the ransom. If you pay the ransom, you’re not only losing money, but you’re encouraging the scammers to continue phishing other people. If you do pay, then the scammers may also use that as a pressure point to continue to blackmail you, knowing that you’re susceptible.

What Should I Do Instead?

Unfortunately there isn’t much you can do. But there are a few basic security hygiene steps you can take that are always a good idea. Use a password manager to keep your passwords strong and unique. Moving forward, you should make sure to enable two-factor authentication whenever that is an option on your online accounts. You can also check out our Surveillance Self-Defense guide for more tips on how to protect your security and privacy online.

One other thing you can do to protect yourself is apply a cover over your computer’s camera. We offer some through our store, but a small strip of electrical tape will do. This can help ease your mind if you’re worried that a rogue app may be turning your camera on, or that you left it on yourself—unlikely, but possible scenarios.

We know this experience isn't fun, but it's also not the end of the world. Just ignore the scammers' empty threats and practice good security hygiene going forward!

Overall, this isn’t an issue that consumers can fix on their own. The root of the problem is that data brokers and nearly every other company have been allowed to store too much information about us for too long. Inevitably this data gets breached and makes its way into criminal markets, where it is sold, traded, and used for scams like this one. The most effective way to combat this would be comprehensive federal privacy laws, because if the data doesn’t exist, it can’t be leaked. The best thing for you to do is advocate for such a law in Congress, or at the state level.

Below is a real example of the scam that was sent to an EFF employee. The scam text is the same across many different victims.

Example 1

[Name],

I know that calling [Phone Number] or visiting [your address] would be a convenient way to contact you in case you don't act. Don't even try to escape from this. You've no idea what I'm capable of in [Your City].

I suggest you read this message carefully. Take a moment to chill, breathe, and analyze it thoroughly. 'Cause we're about to discuss a deal between you and me, and I don't play games. You do not know me but I know you very well and right now, you are wondering how, right? Well, you've been treading on thin ice with your browsing habits, scrolling through those videos and clicking on links, stumbling upon some not-so-safe sites. I placed a Malware on a porn website & you visited it to watch(you get my drift). While you were watching those videos, your smartphone began working as a RDP (Remote Control) which provided me complete control over your device. I can peep at everything on your display, flick on your camera and mic, and you wouldn't even suspect a thing. Oh, and I have got access to all your emails, contacts, and social media accounts too.

Been keeping tabs on your pathetic life for a while now. It's simply your bad luck that I accessed your misdemeanor. I gave in more time than I should have looking into your personal life. Extracted quite a bit of juicy info from your system. and I've seen it all. Yeah, Yeah, I've got footage of you doing filthy things in your room (nice setup, by the way). I then developed videos and screenshots where on one side of the screen, there's whatever garbage you were enjoying, and on the other half, its your vacant face. With simply a single click, I can send this video to every single of your contacts.

I see you are getting anxious, but let's get real. Actually, I want to wipe the slate clean, and allow you to get on with your daily life and wipe your slate clean. I will present you two alternatives. First Alternative is to disregard this email. Let us see what is going to happen if you take this path. Your video will get sent to all your contacts. The video was lit, and I can't even fathom the humiliation you'll endure when your colleagues, friends, and fam check it out. But hey, that's life, ain't it? Don't be playing the victim here.

Option 2 is to pay me, and be confidential about it. We will name it my “privacy charges”. let me tell you what will happen if you opt this option. Your secret remains private. I will destroy all the data and evidence once you come through with the payment. You'll transfer the payment via Bitcoin only.

Pay attention, I'm telling you straight: 'We gotta make a deal'. I want you to know I'm coming at you with good intentions. My word is my bond.

Required Amount: $1950

BITCOIN ADDRESS: [REDACTED]

Let me tell ya, it's peanuts for your tranquility.

Notice: You now have one day in order to make the payment and I will only accept Bitcoins (I have a special pixel within this message, and now I know that you have read through this message). My system will catch that Bitcoin payment and wipe out all the dirt I got on you. Don't even think about replying to this or negotiating, it's pointless. The email and wallet are custom-made for you, untraceable. If I suspect that you've shared or discussed this email with anyone else, the garbage will instantly start getting sent to your contacts. And don't even think about turning off your phone or resetting it to factory settings. It's pointless. I don't make mistakes, [Name].

A picture of the EFF offices, in the style often used in this scam.

Can you notice something here?

Honestly, those online tips about covering your camera aren't as useless as they seem. I am waiting for my payment…

Example 2

[NAME],
Is visiting [ADDRESS] a better way to contact in case you don't act
Beautiful neighborhood btw
It's important you pay attention to this message right now. Take a moment to chill, breathe, and analyze it thoroughly. We're talking about something serious here, and I ain't playing games. You do not know anything about me but I know you very well and right now, you are thinking how, correct?
Well, You've been treading on thin ice with your browsing habits, scrolling through those filthy videos and clicking on links, stumbling upon some not-so-safe sites. I installed a Spyware called "Pegasus" on a app you frequently use. Pegasus is a spyware that is designed to be covertly and remotely installed on mobile phones running iOS and Android. While you were busy watching videos, your device started out working as a RDP (Remote Protocol) which gave me total control over your device. I can peep at everything on your display, flick on your cam and mic, and you wouldn't even notice. Oh, and I've got access to all your emails, contacts, and social media accounts too.
What I want?
Been keeping tabs on your pathetic existence for a while now. It's just your hard luck that I accessed your misdemeanor. I invested in more time than I probably should've looking into your personal life. Extracted quite a bit of juicy info from your system. and I've seen it all. Yeah, Yeah, I've got footage of you doing embarrassing things in your room (nice setup, by the way). I then developed videos and screenshots where on one side of the screen, there's whatever garbage you were enjoying, and on the other part, it is your vacant face. With just a click, I can send this filth to all of your contacts.
What can you do?
I see you are getting anxious, but let's get real. Wholeheartedly, I am willing to wipe the slate clean, and let you move on with your regular life and wipe your slate clean. I am about to present you two alternatives. Either turn a blind eye to this warning (bad for you and your family) or pay me a small amount to finish this mattter forever. Let us understand those 2 options in details.
First Option is to ignore this email. Let us see what will happen if you select this path. I will send your video to your contacts. The video was straight fire, and I can't even fathom the embarrasement you'll endure when your colleagues, friends, and fam check it out. But hey, that's life, ain't it? Don't be playing the victim here.
Other Option is to pay me, and be confidential about it. We will name it my “privacy fee”. let me tell you what happens when you go with this choice. Your filthy secret will remain private. I will wipe everything clean once you send payment. You'll transfer the payment through Bitcoin only. I want you to know I'm aiming for a win-win here. I'm a person of integrity.
Transfer Amount: USD 2000
My Bitcoin Address: [BITCOIN ADDRESS]
Or, (Here is your Bitcoin QR code, you can scan it):
[IMAGE OF A QR CODE]
Once you pay up, you'll sleep like a baby. I keep my word.
Important: You now have one day to sort this out. (I've a special pixel in this message, and now I know that you've read through this mail). My system will catch that Bitcoin payment and wipe out all the dirt I got on you. Don't even think about replying to this, it's pointless. The email and wallet are custom-made for you, untraceable. I don't make mistakes, [NAME]. If I notice that you've shared or discussed this mail with anyone else, your garbage will instantly start getting sent to your contacts. And don't even think about turning off your phone or resetting it to factory settings. It's pointless.
Honestly, those online tips about covering your camera aren't as useless as they seem.
Don't dwell on it. Take it as a little lesson and keep your guard up in the future.


FTC Report Confirms: Commercial Surveillance is Out of Control

By Lena Cohen
September 26, 2024 at 10:55

A new Federal Trade Commission (FTC) report confirms what EFF has been warning about for years: tech giants are widely harvesting and sharing your personal information to fuel their online behavioral advertising businesses. This four-year investigation into the data practices of nine social media and video platforms, including Facebook, YouTube, and X (formerly Twitter), demonstrates how commercial surveillance leaves consumers with little control over their privacy. While not every investigated company committed the same privacy violations, the conclusion is clear: companies prioritized profits over privacy.

While EFF has long warned about these practices, the FTC’s investigation offers detailed evidence of how widespread and invasive commercial surveillance has become. Here are key takeaways from the report:

Companies Collected Personal Data Well Beyond Consumer Expectations

The FTC report confirms that companies collect data in ways that far exceed user expectations. They’re not just tracking activity on their platforms, but also monitoring activity on other websites and apps, gathering data on non-users, and buying personal information from third-party data brokers. Some companies could not, or would not, disclose exactly where their user data came from. 

The FTC found companies gathering detailed personal information, such as the websites you visit, your location data, your demographic information, and your interests, including sensitive interests like “divorce support” and “beer and spirits.” Some companies could only report high-level descriptions of the user attributes they tracked, while others produced spreadsheets with thousands of attributes. 

There’s Unfettered Data Sharing With Third Parties

Once companies collect your personal information, they don’t always keep it to themselves. Most companies reported sharing your personal information with third parties. Some companies shared so widely that they claimed it was impossible to provide a list of all third-party entities they had shared personal information with. For the companies that could identify recipients, the lists included law enforcement and other companies, both inside and outside the United States. 

Alarmingly, most companies had no vetting process for third parties before sharing your data, and none conducted ongoing checks to ensure compliance with data use restrictions. For example, when companies say they’re just sharing your personal information for something that seems unintrusive, like analytics, there's no guarantee your data is only used for the stated purpose. The lack of safeguards around data sharing exposes consumers to significant privacy risks.

Consumers Are Left in the Dark

The FTC report reveals a disturbing lack of transparency surrounding how personal data is collected, shared, and used by these companies. If companies can’t tell the FTC who they share data with, how can you expect them to be honest with you?

Data tracking and sharing happens behind the scenes, leaving users largely unaware of how much privacy they’re giving up on different platforms. These companies don't just collect data from their own platforms—they gather information about non-users and from users' activity across the web. This makes it nearly impossible for individuals to avoid having their personal data swept up into these vast digital surveillance networks. Even when companies offer privacy controls, the controls are often opaque or ineffective. The FTC also found that some companies were not actually deleting user data in response to deletion requests.

The scale and secrecy of commercial surveillance described by the FTC demonstrates why the burden of protecting privacy can’t fall solely on individual consumers.

Surveillance Advertising Business Models Are the Root Cause

The FTC report underscores a fundamental issue: these privacy violations are not just occasional missteps—they’re inherent to the business model of online behavioral advertising. Companies collect vast amounts of data to create detailed user profiles, primarily for targeted advertising. The profits generated from targeting ads based on personal information drive companies to develop increasingly invasive methods of data collection. The FTC found that the business models of most of the companies incentivized privacy violations.

FTC Report Underscores Urgent Need for Legislative Action

Without federal privacy legislation, companies have been able to collect and share billions of users’ personal data with few safeguards. The FTC report confirms that self-regulation has failed: companies’ internal data privacy policies are inconsistent and inadequate, allowing them to prioritize profits over privacy. In the FTC’s own words, “The report leaves no doubt that without significant action, the commercial surveillance ecosystem will only get worse.”

To address this, the EFF advocates for federal privacy legislation. It should have many components, but these are key:

  1. Data Minimization and User Rights: Companies should be prohibited from processing a person’s data beyond what’s necessary to provide them what they asked for. Users should have the right to access their data, port it, correct it, and delete it.
  2. Ban on Online Behavioral Advertising: We should tackle the root cause of commercial surveillance by banning behavioral advertising. Otherwise, businesses will always find ways to skirt around privacy laws to keep profiting from intrusive data collection.
  3. Strong Enforcement with Private Right of Action: To give privacy legislation bite, people should have a private right of action to sue companies that violate their privacy. Otherwise, we’ll continue to see widespread violation of privacy laws due to limited government enforcement resources. 

Using online services shouldn’t mean surrendering your personal information to countless companies to use as they see fit. When you sign up for an account on a website, you shouldn’t need to worry about random third parties getting your information or every click being monitored to serve you ads. For now, our Privacy Badger extension can help you block some of the tracking technologies detailed in the FTC report. But the scale of commercial surveillance revealed in this investigation requires significant legislative action. Congress must act now to protect our data from corporate exploitation with a strong federal privacy law.

Digital ID Isn't for Everybody, and That's Okay

September 25, 2024 at 18:57

How many times do you pull out your driver’s license a week? Maybe two to four times, to purchase age-restricted items, pick up prescriptions, or go to a bar. If you get a mobile driver’s license (mDL) or another form of digital identification (ID) now being offered in Google and Apple wallets, you may have to share this information much more often than before, because this new technology may expand the scope of scenarios demanding your ID.

mDLs and digital IDs are being deployed faster than states can draft privacy protections, including protections around presenting your ID to more third parties than ever before. While proponents of these digital schemes emphasize a convenience factor, these IDs can easily expand into new territories like controversial age verification bills that censor everyone. Moreover, digital ID is simultaneously being tested in sensitive situations and expanded into a potential regime of unprecedented data tracking.

In the digital ID space, the question of “how can we do this right?” often overshadows the more pertinent question of “should we do this at all?” While there are highly recommended safeguards for these new technologies, we must always support each person’s right to keep using physical documentation instead of going digital. And we must do more to bring understanding of, and decision power over, these technologies to everyone, rather than zealously promoting them as a potential equalizer.

What’s in Your Wallet?

With modern hardware, phones can now store sensitive data and credentials at higher levels of security. This enables functionality like Google and Apple Pay exchanging transaction data online with e-commerce sites. While there is platform-specific terminology, the general term to know is “Trusted Platform Module” (TPM): hardware that enables “Trusted Execution Environments” (TEEs), isolated environments in which sensitive data can be processed. Most modern phones, tablets, and laptops come with TPMs.

Digital IDs are held at a higher level of security within the Google and Apple wallets (as they should be). So if you have an mDL provisioned on your device, the contents of the mDL are not “synced to the cloud.” Instead, they stay on that device, and you have the option to remotely wipe the credential if the device is stolen or lost.

Moving beyond the digital wallets already common on most phones, some states have their own wallet apps for mDLs that must be downloaded from an app store. The security of these applications can vary, along with the data they can and can’t see. Different private partners have been making wallet/ID apps for different states, including IDEMIA, Thales, and Spruce ID, to name a few. Digital identity frameworks, like Europe’s eIDAS, have been creating language and provisions for “open wallets,” where you don’t necessarily have to rely on big tech for a safe and secure wallet.

However, privacy and security need to be paramount. If privacy is an afterthought, digital IDs can quickly become yet another gold mine of breaches for data brokers and bad actors.

New Announcements, New Scope

Digital ID has been moving fast this summer.

Proponents of digital ID frequently present the “over 21” example, which is often described like this:

You go to the bar, you present a claim from your phone that you are over 21, and a bouncer confirms the claim with a reader device via a QR code or a tap over NFC. Very private. Very secure. Said bouncer will never know your address or other information—not even your name. This is called an “abstract claim”: more-sensitive information is never exchanged; the verifier receives only a less-sensitive attestation, such as an age threshold rather than your date of birth and name.
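
To illustrate the idea, here is a toy sketch of an abstract-claim check. Real mDL presentations use the ISO/IEC 18013-5 mdoc format (CBOR data signed with COSE and bound to device keys); everything below, from the bare JSON claim to the hypothetical issuer key, is a simplification meant only to show that a verifier can authenticate a single boolean without ever seeing a name or birth date:

```python
# Toy illustration of an "abstract claim" (NOT the real mDL format).
# A hypothetical issuer signs a bare age_over_21 boolean; the verifier
# authenticates it while learning nothing else about the holder.
# Requires the `cryptography` package.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# Issuance (e.g., by a DMV): sign the minimal claim once, at provisioning.
issuer_key = Ed25519PrivateKey.generate()
claim = json.dumps({"age_over_21": True}).encode()
signature = issuer_key.sign(claim)

def verify_over_21(claim: bytes, sig: bytes, issuer: Ed25519PublicKey) -> bool:
    """Check the issuer's signature, then read only the age threshold."""
    try:
        issuer.verify(sig, claim)
    except InvalidSignature:
        return False
    return bool(json.loads(claim).get("age_over_21", False))

# Presentation: the holder hands over (claim, signature). No name,
# date of birth, or address is ever transmitted to the verifier.
print(verify_over_21(claim, signature, issuer_key.public_key()))  # True
```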

But there is a high privacy price to pay for this marginal privacy benefit. mDLs will not simply swap in as a one-to-one replacement for your physical ID. Rather, they are likely to expand the scenarios in which businesses and government agencies demand that you prove your identity before entering physical and digital spaces or accessing goods and services. Our personal data will be passed along more often than ever, through online identity verification with multiple parties every day or week. This privacy menace far surpasses the minor danger of a bar bouncer collecting, storing, and using your name and address after glancing at your birth date on your plastic ID for five seconds in passing. Even in cases where bars do scan IDs, we are being asked to trade one contained privacy risk for a far more expansive one: digital ID presentation across the internet.

While there are efforts to enable private businesses to read mDLs, these credentials today are mainly being used with the TSA. In contracts and agreements we have seen with Apple, the company largely controls the marketing and visibility of mDLs.

In another push to boost adoption, Android allows you to create a digital passport ID for domestic travel. This development must be seen through the lens of the federal government’s 20-year effort to impose “REAL ID” on state-issued identification systems. REAL ID is an objective failure of a program that pushes for regimes that strip privacy from everyone and further marginalize undocumented people. While federal-level use of digital identity is so far limited to TSA, this use can easily expand. TSA wants to propose rules for mDLs in an attempt (the agency says) to “allow innovation” by states while it contemplates uniform rules for everyone. This is concerning, as the scope of TSA—and its parent agency, the Department of Homeland Security—is very wide. Whatever they decide now for digital ID will have implications well beyond the airport.

Equity First > Digital First

We are seeing new digital ID plans being discussed for the most vulnerable of us. Digital ID must be designed for equity (as well as for privacy).

With Google’s Digital Credential API and Apple’s IP&V Platform (as named in the agreement with California), these two major companies are going to be in direct competition with current age verification platforms. This alarmingly sets up the capacity for anyone to ask for your ID online, and it can spread beyond content that is commonly age-gated today. Different states and countries may try to label additional content as harmful to children (such as LGBTQIA content or abortion resources) and require online platforms to conduct age verification before granting access to that content.

For many of us, opening a bank account is routine, and digital ID sounds like a way to make it more convenient. But millions of working-class people are currently unbanked, and digital IDs won’t solve their problems. Many people can’t get simple services and documentation for a variety of reasons that come with having a low income, and millions of people in this country don’t have identification at all. We shouldn’t apply regimes that rely on age verification technology against people who often face barriers to compliance, such as license suspension for unpaid fines unrelated to traffic safety. Without regulation that accounts for nuanced lives, a new technical system that attempts to verify age with far less friction will simply deliver an expedited, automated “NO.”

Another issue is that many people lack a smartphone or an up-to-date smartphone, or share one with their family. Many proponents of “digital first” solutions assume a fixed ratio of one smartphone per person. While this assumption may work for some, others will need a human to talk to, on the phone or face-to-face, to access vital services. And you still need to upload your physical ID to obtain an mDL in the first place, and to carry a physical ID on your person. Digital ID cannot bypass the problem that some people don’t have physical ID at all. Failure to account for this is a rush to perceived solutions over real problems.

Inevitable?

No, digital identity shouldn’t be inevitable for everyone: many people don’t want it or lack the resources to get it. The dangers posed by digital identity don’t have to be inevitable, either—if states legislate protections for people. It would also be great (for the nth time) to have a comprehensive federal privacy law. Illinois recently passed a law that at least attempts to address mDL scenarios with law enforcement. At the very minimum, law enforcement should be prohibited from using consent for mDL scans to conduct illegal searches. Florida completely removed its mDL app from app stores and asked residents who had it to delete it; it is good the state did not simply keep the app around for the sake of pushing digital ID without addressing a clear issue.

State and federal embrace of digital ID is based on claims of faster access, fraud prevention, and convenience. But with digital ID being proposed as a means of online verification, it is just as likely to block claims of public assistance as facilitate them. That’s why legal protections are at least as important as the digital IDs themselves.

Lawmakers should ensure better access for people with or without a digital ID.


Calls to Scrap Jordan's Cybercrime Law Echo Calls to Reject Cybercrime Treaty

In a number of countries around the world, communities—and particularly those that are already vulnerable—are threatened by expansive cybercrime and surveillance legislation. One of those countries is Jordan, where a cybercrime law enacted in 2023 has been used against LGBTQ+ people, journalists, human rights defenders, and those criticizing the government.

We’ve criticized this law before, noting how it was issued hastily and without sufficient examination of its legal aspects, social implications, and impact on human rights. It broadly criminalizes online content labeled as “pornographic” or deemed to “expose public morals,” and prohibits the use of Virtual Private Networks (VPNs) and other proxies. Now, EFF has joined thirteen digital rights and free expression organizations in calling once again for Jordan to scrap the controversial cybercrime law.

The open letter, organized by Article 19, calls upon Jordanian authorities to cease use of the cybercrime law to target and punish dissenting voices and stop the crackdown on freedom of expression. The letter also reads: “We also urge the new Parliament to repeal or substantially amend the Cybercrime Law and any other laws that violate the right to freedom of expression and bring them in line with international human rights law.”

Jordan’s law is a troubling example of how overbroad cybercrime legislation can be misused to target marginalized communities and suppress dissent. This is the type of legislation that the U.N. General Assembly has expressed concern about, including in 2019 and 2021, when it warned against cybercrime laws being used to target human rights defenders. These concerns are echoed by years of reports from U.N. human rights experts on how abusive cybercrime laws facilitate human rights abuses.

The U.N. Cybercrime Treaty also poses serious threats to free expression. Far from protecting against cybercrime, this treaty risks becoming a vehicle for repressive cross-border surveillance practices. By allowing broad international cooperation in surveillance for any crime deemed “serious” under national laws—defined as offenses punishable by at least four years of imprisonment—without robust mandatory safeguards or detailed operational requirements to ensure “no suppression” of expression, the treaty risks being exploited by governments to suppress dissent and target marginalized communities, as seen with Jordan’s overbroad 2023 cybercrime law. The fate of the U.N. Cybercrime Treaty now lies in the hands of member states, who will decide on its adoption later this year.

Patient Rights and Consumer Groups Join EFF In Opposing Two Extreme Patent Bills

By Joe Mullin
September 25, 2024 at 12:54

Update 9/26/24: The hearing and scheduled committee vote on PERA and PREVAIL were canceled. Supporters can continue to register their opposition via our action, as these bills may still be scheduled for a vote later in 2024.

The U.S. Senate Judiciary Committee is set to vote this Thursday on two bills that could significantly empower patent trolls. The Patent Eligibility Restoration Act (PERA) would bring back many of the abstract computer patents that have been barred for the past 10 years under Supreme Court precedent. Meanwhile, the PREVAIL Act would severely limit how the public can challenge wrongly granted patents at the patent office. 

Take Action

Tell Congress: No New Bills For Patent Trolls

EFF has sent letters to the Senate Judiciary Committee opposing both of these bills. The letters are co-signed by a wide variety of civil society groups, think tanks, startups, and business groups that oppose these misguided bills. Our letter on PERA states: 

Under PERA, any business method, methods of practicing medicine, legal agreement, media content, or even games and entertainment could be patented so long as the invention requires some use of computers or electronic communications… It is hard to overstate just how extreme and far-reaching such a change would be.

If enacted, PERA could revive some of the most problematic patents used by patent trolls, including: 

  • The Alice Corp. patent, which claimed the idea of clearing financial transactions through a third party via a computer. 
  • The Ameranth patent, which covered the use of mobile devices to order food at restaurants. This patent was used to sue over 100 restaurants, hotels, and fast-food chains merely for using off-the-shelf technology.
  • A patent owned by Hawk Technology Systems LLC, which claimed generic video technology to view surveillance videos, and was used to sue over 200 hospitals, schools, charities, grocery stores, and other businesses. 


A separate letter signed by 17 professors of IP law cautions that PERA would cloud the legal landscape on patent eligibility, which the Supreme Court clarified in its 10-year-old Alice v. CLS Bank case. “PERA would overturn centuries of jurisprudence that prevents patent law from effectively restricting the public domain of science, nature, and abstract ideas that benefits all of society,” the professors write.

The U.S. Public Interest Research Group also opposes both PERA and PREVAIL, and points out in its opposition letter that patent application misuse has improperly prevented generic drugs from coming onto the market, even years after the original patent has expired. They warn:

“The changes proposed in PERA open the door to patent compounds that exist in nature which nobody invented, but are newly discovered,” the group writes. “This dramatic change could have devastating effects on drug pricing by expanding the universe of items that can have a patent, meaning it will be easier than ever for drug companies to build patent thickets which keep competitors off the market.” 

Patients’ rights advocacy groups have also weighed in. They argue that PREVAIL “seriously undermines citizens’ ability to promote competition by challenging patents,” while PERA “opens the door to allow an individual or corporation to acquire exclusive rights to aspects of nature and information about our own bodies.” 

Generic drug makers share these concerns. “PREVAIL will make it more difficult for generic and biosimilar manufacturers to challenge expensive brand-name drug patent thickets and bring lower-cost medicines to patients, and PERA will enable brand-name drug manufacturers to build even larger thickets and charge higher prices,” an industry group stated earlier this month. 

We urge the Senate to heed the voices of this broad coalition of civil society groups and businesses opposing these bills. Passing them would create a more unbalanced and easily exploitable patent system. The public interest must come before the loud voices of patent trolls and a few powerful patent holders.

Take Action

Tell Congress to Reject PERA and PREVAIL


EFF to Federal Trial Court: Section 230’s Little-Known Third Immunity for User-Empowerment Tools Covers Unfollow Everything 2.0

EFF along with the ACLU of Northern California and the Center for Democracy & Technology filed an amicus brief in a federal trial court in California in support of a college professor who fears being sued by Meta for developing a tool that allows Facebook users to easily clear out their News Feed.

Ethan Zuckerman, a professor at the University of Massachusetts Amherst, is in the process of developing Unfollow Everything 2.0, a browser extension that would allow Facebook users to automate their ability to unfollow friends, groups, or pages, thereby limiting the content they see in their News Feed.

This type of tool would greatly benefit Facebook users who want more control over their Facebook experience. The unfollowing process is tedious: you must go profile by profile—but automation makes this process a breeze. Unfollowing all friends, groups, and pages makes the News Feed blank, but this allows you to curate your News Feed by refollowing people and organizations you want regular updates on. Importantly, unfollowing isn’t the same thing as unfriending—unfollowing takes your friends’ content out of your News Feed, but you’re still connected to them and can proactively navigate to their profiles.

As Louis Barclay, the developer of Unfollow Everything 1.0, explained:

I still remember the feeling of unfollowing everything for the first time. It was near-miraculous. I had lost nothing, since I could still see my favorite friends and groups by going to them directly. But I had gained a staggering amount of control. I was no longer tempted to scroll down an infinite feed of content. The time I spent on Facebook decreased dramatically. Overnight, my Facebook addiction became manageable.

Prof. Zuckerman fears being sued by Meta, Facebook’s parent company, because the company previously sent Louis Barclay a cease-and-desist letter. Prof. Zuckerman, with the help of the Knight First Amendment Institute at Columbia University, preemptively sued Meta, asking the court to conclude that he has immunity under Section 230(c)(2)(B), Section 230’s little-known third immunity for developers of user-empowerment tools.

In our amicus brief, we explained to the court that Section 230(c)(2)(B) is unique among the immunities of Section 230, and that Section 230’s legislative history supports granting immunity in this case.

The other two immunities—Section 230(c)(1) and Section 230(c)(2)(A)—provide direct protection for internet intermediaries that host user-generated content, moderate that content, and incorporate blocking and filtering software into their systems. As we’ve argued many times before, these immunities give legal breathing room to the online platforms we use every day and ensure that those companies continue to operate, to the benefit of all internet users. 

But it’s Section 230(c)(2)(B) that empowers people to have control over their online experiences outside of corporate or government oversight, by providing immunity to the developers of blocking and filtering tools that users can deploy in conjunction with the online platforms they already use.

Our brief further explained that the legislative history of Section 230 shows that Congress clearly intended to provide immunity for user-empowerment tools like Unfollow Everything 2.0.

Section 230(b)(3) states, for example, that the statute was meant to “encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet and other interactive computer services,” while Section 230(b)(4) states that the statute was intended to “remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children’s access to objectionable or inappropriate online material.” Rep. Chris Cox, a co-author of Section 230, noted prior to passage that new technology was “quickly becoming available” that would help enable people to “tailor what we see to our own tastes.”

Our brief also explained the more specific benefits of Section 230(c)(2)(B). The statute incentivizes the development of a wide variety of user-empowerment tools, from traditional content filtering to more modern social media tailoring. The law also helps people protect their privacy by incentivizing the tools that block methods of unwanted corporate tracking such as advertising cookies, and block stalkerware deployed by malicious actors.

We hope the district court will declare that Prof. Zuckerman has Section 230(c)(2)(B) immunity so that he can release Unfollow Everything 2.0 to the benefit of Facebook users who desire more control over how they experience the platform.

EFF to Supreme Court: Strike Down Texas’ Unconstitutional Age Verification Law

By Hudson Hongo
September 23, 2024 at 14:30
New Tech Doesn’t Solve Old Problems With Age-Gating the Internet

WASHINGTON, D.C.—The Electronic Frontier Foundation (EFF), the Woodhull Freedom Foundation, and TechFreedom urged the Supreme Court today to strike down HB 1181, a Texas law that unconstitutionally restricts adults’ access to sexual content online by requiring them to verify their age. 

Under HB 1181, signed into law last year, any website that Texas decides is composed of “one-third” or more of “sexual material harmful to minors” is forced to collect age-verifying personal information from all visitors. When the Supreme Court reviews a case challenging the law in its next term, its ruling could have major consequences for the freedom of adults to safely and anonymously access protected speech online. 

"Texas’ age verification law robs internet users of anonymity, exposes them to privacy and security risks, and blocks some adults entirely from accessing sexual content that’s protected under the First Amendment,” said EFF Staff Attorney Lisa Femia. “Applying longstanding Supreme Court precedents, other courts have consistently held that similar age verification laws are unconstitutional. To protect freedom of speech online, the Supreme Court should clearly reaffirm those correct decisions here.”  

In a flawed ruling last year, the Fifth Circuit Court of Appeals upheld the Texas law, diverging from decades of legal precedent that correctly recognized online ID mandates as imposing greater burdens on our First Amendment rights than in-person age checks. As EFF explains in its friend-of-the-court brief, nothing about HB 1181 or advances in technology has lessened the harms the law’s age verification mandate imposes on adults wishing to exercise their constitutional rights.

First, the Texas law forces adults to submit personal information over the internet to access entire websites, not just specific sexual materials. Second, compliance with the law will require websites to retain this information, exposing their users to a variety of anonymity, privacy, and security risks not present when briefly flashing an ID card to a cashier. Third, while sharing many of the same burdens as document-based age verification, newer technologies like “age estimation” introduce their own problems—and are unlikely to satisfy the requirements of HB 1181 anyway. 

"Sexual freedom is a fundamental human right critical to human dignity and liberty," said Ricci Levy, CEO of the Woodhull Freedom Foundation. "By requiring invasive age verification, this law chills protected speech and violates the rights of consenting adults to access lawful sexual content online.” 

Today’s friend-of-the-court brief is only the latest entry in EFF’s long history of fighting for freedom of speech online. In 1997, EFF participated as both plaintiff and co-counsel in ACLU v. Reno, the landmark Supreme Court case that established speech on the internet as meriting the highest standard of constitutional protection. And in the last year alone, EFF has urged courts to reject state censorship, throw out a sweeping ban on free expression, and stop the government from making editorial decisions about content on social media. 

For the brief: https://www.eff.org/document/fsc-v-paxton-eff-amicus-brief

For more on HB 1181: https://www.eff.org/deeplinks/2024/05/eff-urges-supreme-court-reject-texas-speech-chilling-age-verification-law

Contact: Lisa Femia, Staff Attorney

Prison Banned Books Week: Being in Jail Shouldn’t Mean Having Nothing to Read

Across the United States, nearly every state’s prison system offers some form of tablet access to incarcerated people, many of which boast of sizable libraries of eBooks. Knowing this, one might assume that access to books is on the rise for incarcerated folks. Unfortunately, this is not the case. A combination of predatory pricing, woefully inadequate eBook catalogs, and bad policies restricting access to paper literature has exacerbated an already acute book censorship problem in U.S. prison systems.

New data collected by the Prison Banned Books Week campaign focuses on the widespread use of tablet devices in prison systems, as well as their pricing structures and eBook libraries. Through a combination of interviews with incarcerated people and a nationwide FOIA campaign to uncover the details of these tablet programs, the campaign has found that, despite offering access to tens of thousands of eBooks, prisons’ tablet programs actually provide little in the way of valuable reading material. The tablets themselves are heavily restricted and typically made by one of just two companies: Securus and ViaPath. The campaign also found that the material these programs do provide may not be accessible to many incarcerated individuals.


Limited, Censored Selections at Unreasonable Prices

Many companies that offer tablets to carceral facilities advertise libraries of several thousand books. But the data reveals that a huge proportion of these books are public domain texts taken directly from Project Gutenberg. While Project Gutenberg is itself laudable for collecting freely accessible eBooks, and its library contains many of the “classics” of Western literary canon, a massive number of its texts are irrelevant and outdated. As Shawn Y., an incarcerated interviewee in Pennsylvania put it, “Books are available for purchase through the Securus systems, but most of the bookworms here [...] find the selection embarrassingly thin, laughable even. [...] We might as well be rummaging the dusty old leftovers in some thrift store or back alley dumpster.”

These limitations on eBook selections exacerbate the already widespread censorship of physical reading materials, based on a variety of factors including books being deemed “harmful” content, determinations based on the book’s vendor (which, reports indicate, can operate as a ban on publishers), and whether the incarcerated person obtained advance permission from a prison administrator. Such censorial decisionmaking undermines incarcerated individuals’ right to receive information.


Some facilities charge $0.99 or more per eBook—despite their often meager, antiquated selections. While this may not seem exorbitant to many people, a recent estimate puts average hourly wages for incarcerated people in the U.S. at $0.63 per hour. And these otherwise free eBooks can often cost much more: Larry, an individual incarcerated in Pennsylvania, explains that “[s]ome of the prices for other books [are] extremely outrageous.” In Larry’s facility, “[s]ome of those tablet prices range over twenty dollars and even higher.”
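
To put those prices in perspective: at $0.63 an hour, a $0.99 eBook costs more than an hour and a half of labor, and a twenty-dollar eBook costs roughly 32 hours, nearly a full workweek at prison wages.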

Even if people can afford to rent these eBooks, they may also have to pay for the tablets required to read them. For some incarcerated individuals, these costs can be prohibitive: procurement contracts in some states appear to require incarcerated people to pay upwards of $99 to use the tablets. These costs are a barrier that deprives those in carceral facilities of developing and maintaining a connection with life outside prison walls.

Part of a Trend Toward Inadequate Digital Replacements

The trend of eliminating physical books and replacing them with digital copies accessible via tablets is emblematic of a larger shift from physical to digital occurring throughout our carceral system. These digital copies are not adequate substitutes. One of the hallmarks of a tangible physical item is access: someone can open a physical book and read it when, how, and where they want. That’s not the case with the tablet systems prisons are adopting, and worryingly, this trend has also extended to personal items such as incarcerated individuals’ mail.

EFF is actively litigating to defend incarcerated individuals’ rights to access and receive tangible reading materials with our ABO Comix lawsuit. There, we—along with the Knight First Amendment Institute and Social Justice Legal Foundation—are fighting a San Mateo County (California) policy that bans those in San Mateo jails from receiving physical mail. Our complaint explains that San Mateo’s policy requires the friends and families of those jailed in its facilities to send their letters to a private company that scans them, destroys the physical copy, and retains the scan in a searchable database—for at least seven years after the intended recipient leaves the jail’s custody. Incarcerated people can only access the digital copies through a limited number of shared tablets and kiosks in common areas within the jails.

Just as incarcerated peoples’ reading materials are censored, so is their mail when physical letters are replaced with digital facsimiles. Our complaint details how ripping open, scanning, and retaining mail has impeded the ability of those in San Mateo’s facilities to communicate with their loved ones, as well as their ability to receive educational and religious study materials. These digital replacements are inadequate both in and of themselves and because the tablets needed to access them are in short supply and often plagued by technical issues. Along with our free expression allegations, our complaint also alleges that the seizing, searching, and sharing of data from and about their letters violates the rights of both senders and recipients against unreasonable searches and seizures.

Our ABO Comix litigation is ongoing. We are hopeful that the courts will recognize the free expression and privacy harms to incarcerated individuals and those who communicate with them that come from digitizing physical mail. We are also hopeful, on the occasion of this Prison Banned Books Week, for an end to the censorship of incarcerated individuals’ reading materials: restricting what some of us can read harms us all.

Square Peg, Meet Round Hole: Previously Classified TikTok Briefing Shows Error of Ban

September 19, 2024 at 16:07

A previously classified transcript reveals Congress knows full well that American TikTok users engage in First Amendment protected speech on the platform and that banning the application is an inadequate way to protect privacy—but it banned TikTok anyway.

The government submitted the partially redacted transcript as part of the ongoing litigation over the federal TikTok ban (which the D.C. Circuit just heard arguments about this week). The transcript indicates that members of Congress and law enforcement recognize that Americans are engaging in First Amendment protected speech—the same recognition a federal district court made when it blocked Montana’s TikTok ban from going into effect. They also agreed that adequately protecting Americans’ data requires comprehensive consumer privacy protections.

Yet, Congress banned TikTok anyway, undermining our rights and failing to protect our privacy.

No Indication of Actual Harm, No New Arguments

The members and officials didn’t make any particularly new points about the dangers of TikTok. Further, they repeatedly characterized their fears as hypothetical. The transcript is replete with references to the possibility of the Chinese government using TikTok to manipulate the content Americans see on the application, including to shape their views on foreign and domestic issues. For example, the official representing the DOJ expressed concern that the public and private data TikTok users generate on the platform is

potentially at risk of going to the Chinese government, [and] being used now or in the future by the Chinese government in ways that could be deeply harmful to tens of millions of young people who might want to pursue careers in government, who might want to pursue careers in the human rights field, and who one day could end up at odds with the Chinese Government’s agenda.  

There is no indication from the unredacted portions of the transcript that this is happening. This DOJ official went on to express concern “with the narratives that are being consumed on the platform,” the Chinese government’s ability to influence those narratives, and the U.S. government’s preference for “responsible ownership” of the platform through divestiture.

At one point, Representative Walberg even suggested that “certain public policy organizations” that oppose the TikTok ban should be investigated for possible ties to ByteDance (the company that owns TikTok). Of course, the right to oppose an ill-conceived ban on a popular platform goes to the very reason the U.S. has a First Amendment.


Americans’ Speech and Privacy Rights Deserved More

Rather than grandstanding about investigating opponents of the TikTok ban, Congress should spend its time considering the privacy and free speech arguments of those opponents. Judging by the (redacted) transcript, the committee failed to undertake that review here.

First, the First Amendment rightly subjects bans like this one for TikTok to extraordinarily exacting judicial scrutiny. That is true even with foreign propaganda, which Americans have a well-established First Amendment right to receive. And it’s ironic for the DOJ to argue that banning an application which people use for self-expression—a human right—is necessary to protect their ability to advance human rights.

Second, if Congress wants to stop the Chinese government from potentially acquiring data about social media users, it should pass comprehensive consumer privacy legislation that regulates how all social media companies can collect, process, store, and sell Americans’ data. Otherwise, foreign governments and adversaries will still be able to acquire Americans’ data by stealing it, or by using a straw purchaser to buy it.

It’s especially jarring to read that a foreign government’s potential collection of data supposedly justifies banning an application, given Congress’s recent renewal of an authority—Section 702 of the Foreign Intelligence Surveillance Act—under which the U.S. government actually collects massive amounts of Americans’ communications—and which the FBI immediately directed its agents to abuse (yet again).

EFF will continue fighting for TikTok users’ First Amendment rights to express themselves and to receive information on the platform. We will also continue urging Congress to drop these square peg, round hole approaches to Americans’ privacy and online expression and pass comprehensive privacy legislation that offers Americans genuine protection from the invasive ways any company uses data. While Congress did not fully consider the First Amendment and privacy interests of TikTok users, we hope the federal courts will.

Strong End-to-End Encryption Comes to Discord Calls

We’re happy to see that Discord will soon start offering a form of end-to-end encryption dubbed “DAVE” for its voice and video chats. This puts some of Discord’s audio and video offerings in line with Zoom, and separates it from tools like Slack and Microsoft Teams, which do not offer end-to-end encryption for video, voice, or any other communications on those apps. This is a strong step forward, and Discord can do even more to protect its users’ communications.

End-to-end encryption is used by many chat apps for both text and video offerings, including WhatsApp, iMessage, Signal, and Facebook Messenger. But Discord operates differently than most of those, since alongside private and group text, video, and audio chats, it also encompasses large-scale public channels on individual servers operated by Discord. Going forward, audio and video will be end-to-end encrypted, but text, including both group channels and private messages, will not.

When a call is end-to-end encrypted, you’ll see a green lock icon. Discord also offers an optional way to verify that the strong encryption a call is using has not been tampered with or eavesdropped on. During a call, one person can pull up the “Voice Privacy Code” and send it to everyone else on the line—preferably in a different chat app, like Signal—to confirm no one is compromising participants’ use of end-to-end encryption. This helps ensure no one is impersonating a participant or listening in on the conversation.
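
To make the idea concrete, here is a minimal sketch of the short-authentication-string pattern that codes like this follow. Everything below is illustrative: the function name, hash choice, and code length are our assumptions, and DAVE’s actual derivation is specified in Discord’s published whitepaper.

```python
import hashlib

def voice_privacy_code(call_key_material: bytes, digits: int = 30) -> str:
    """Derive a short, human-comparable code from a call's key material.

    Sketch only: the real DAVE protocol defines its own derivation.
    """
    digest = hashlib.sha256(call_key_material).digest()
    code = str(int.from_bytes(digest, "big"))[:digits]
    # Group into 5-digit chunks so participants can read it aloud or paste it.
    return " ".join(code[i:i + 5] for i in range(0, len(code), 5))

# Each participant computes the code locally from the same session state.
# If the codes match when compared out-of-band (e.g. over Signal), no
# man-in-the-middle has swapped in its own key material.
alice = voice_privacy_code(b"example shared session secret")
bob = voice_privacy_code(b"example shared session secret")
assert alice == bob
```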

By default, you have to do this every time you initiate a call if you wish to verify the communication has strong security. There is an option to enable persistent verification keys, which means your chat partners only have to verify you on each device you own (e.g. if you sometimes call from a phone and sometimes from a computer, they’ll want to verify for each).

Key management is a hard problem in both the design and implementation of cryptographic protocols. Making sure the same encryption keys are shared across multiple devices in a secure way, and reliably discovered in a secure way by conversation partners, is no trivial task. Other apps, such as Signal, require some manual user interaction to ensure that key material is shared across multiple devices securely. Discord has chosen to avoid this process for the sake of usability, so even if you do enable persistent verification keys, the keys on the separate devices you own will be different.
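
A hypothetical sketch of what that means in practice: because each device generates its own key material independently, each device presents a different fingerprint, and a conversation partner has to verify each one separately. This is illustrative only, not Discord’s implementation.

```python
import hashlib
import secrets

def new_device_key() -> tuple[bytes, str]:
    """Generate an independent per-device key and a short fingerprint of it.

    Sketch only: random bytes stand in for real public-key generation,
    since the point is just that each device's key is independent.
    """
    key = secrets.token_bytes(32)
    fingerprint = hashlib.sha256(key).hexdigest()[:16]
    return key, fingerprint

# Keys are generated separately on each device, so the fingerprints differ,
# and a partner must verify the phone and the laptop independently.
_, phone_fingerprint = new_device_key()
_, laptop_fingerprint = new_device_key()
assert phone_fingerprint != laptop_fingerprint
```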

While this is an understandable trade-off, we hope Discord takes the extra step of letting users with heightened security concerns share their persistent keys across devices. For the sake of usability, it could still generate separate keys for each device by default while making cross-device key sharing an opt-in extra step. This would avoid the associated risk of your conversation partners seeing that you’re using the same device across multiple calls. We believe making the use of persistent keys easier and cross-device will make things safer for users as well: they will only have to verify the key for their conversation partners once, instead of for every call they make.

Discord has designed and implemented DAVE in a solidly transparent way: publishing the protocol whitepaper and an open-source library, commissioning an audit from well-regarded outside researchers, and expanding its bug-bounty program to reward security researchers who report vulnerabilities in the DAVE protocol. This is the sort of transparency we feel is required when rolling out encryption like this, and we applaud the approach.

But we’re disappointed that, citing the need for content moderation, Discord has decided not to extend end-to-end encryption offerings to include private messages or group chats. In a statement to TechCrunch, they reiterated they have no further plans to roll out encryption in direct messages or group chats.

End-to-end encrypted video and audio chats are a good step forward—one that too many messaging apps lack. But because protecting our text conversations matters, and because partial encryption is always confusing for users, Discord should move to enable end-to-end encryption on private text chats as well. This is not an easy task, but it’s one worth doing.

Canada’s Leaders Must Reject Overbroad Age Verification Bill

September 19, 2024 at 13:14

Canadian lawmakers are considering a bill, S-210, that’s meant to benefit children, but would sacrifice the security, privacy, and free speech of all internet users.

First introduced in 2023, S-210 seeks to prevent young people from encountering sexually explicit material by requiring all commercial internet services that “make available” explicit content to adopt age verification services. Typically, these services will require people to show government-issued ID to get on the internet. According to bill authors, this is needed to prevent harms like the “development of pornography addiction” and “the reinforcement of gender stereotypes and the development of attitudes favorable to harassment and violence…particularly against women.”

The motivation is laudable, but requiring people of all ages to show ID to get online won’t help women or young people. If S-210 isn’t stopped before it reaches the third reading and final vote in the House of Commons, Canadians will be subjected to a repressive and unworkable age verification regime.

Flawed Definitions Would Encompass Nearly the Entire Internet 

The bill’s scope is vast. S-210 creates legal risk not just for those who sell or intentionally distribute sexually explicit materials, but also for those who just transmit it–knowingly or not.

Internet infrastructure intermediaries, which often do not know the type of content they are transmitting, would also be liable, as would all manner of services, from social media sites to search engines and messaging platforms. Each would be required to implement age verification and prevent access by any unverified user, unless it can claim the material serves a “legitimate purpose related to science, medicine, education or the arts.”

Basic internet infrastructure shouldn’t be regulating content at all, but S-210 doesn’t make that distinction. When these large services learn they are hosting or transmitting sexually explicit content, most will simply ban or remove it outright, using both automated tools and hasty human decision-making. History shows that when platforms seek to ban sexual content, over-censorship is inevitable.

Rules banning sexual content usually hurt marginalized communities and the groups that serve them the most. That includes organizations that provide support and services to victims of trafficking and child abuse, sex workers, and groups and individuals promoting sexual freedom.

Promoting Dangerous Age Verification Methods 

S-210 notes that “online age-verification technology is increasingly sophisticated and can now effectively ascertain the age of users without breaching their privacy rights.”

This premise is just wrong. There is currently no technology that can verify users’ ages while protecting their privacy. The bill does not specify what technology must be used, leaving that to subsequent regulation. But the age verification systems that do exist are deeply problematic. It is far too likely that any such regulation would embrace tools that retain sensitive user data, exposing it to sale and to harms like hacks, and that lack guardrails preventing companies from doing whatever they like with this data once collected.

We’ve said it before: age verification systems are surveillance systems. Users have no way to be certain that the data they’re handing over will not be retained and used in unexpected ways, or even shared with unknown third parties. The bill asks companies to maintain user privacy and destroy any personal data they collect, but doesn’t back up that request with meaningful penalties. That’s not good enough.

Companies responsible for storing or processing sensitive documents like drivers’ licenses can encounter data breaches, potentially exposing not only personal data about users, but also information about the sites that they visit.

Finally, age-verification systems that depend on government-issued identification exclude altogether Canadians who do not have that kind of ID.

Fundamentally, S-210 leads to the end of anonymous access to the web. Instead, Canadian internet access would become a series of checkpoints that many people simply would not pass, either by choice or because the rules are too onerous.

Dangers for Everyone, But This Can Be Stopped

Canada’s S-210 is part of a wave of proposals worldwide seeking to gate access to sexual content online. Many of the proposals have similar flaws. Canada’s S-210 is up there with the worst. Both Australia and France have paused the rollout of age verification systems, because both countries found that these systems could not sufficiently protect individuals’ data or address the issues of online harms alone. Canada should take note of these concerns.

It's not too late for Canadian lawmakers to drop S-210. It’s what has to be done to protect the future of a free Canadian internet. At the very least, the bill’s broad scope must be significantly narrowed to protect user rights.

Human Rights Claims Against Cisco Can Move Forward (Again)

By: Cindy Cohn
September 18, 2024 at 18:04

Google and Amazon – You Should Take Note of Your Own Aiding and Abetting Risk 

EFF has long pushed companies that provide powerful surveillance tools to governments to take affirmative steps to avoid aiding and abetting human rights abuses. We have also worked to ensure they face consequences when they do not.

Last week, the U.S. Court of Appeals for the Ninth Circuit helped this cause by affirming its powerful 2023 decision that aiding and abetting liability in U.S. courts can apply to technology companies that provide sophisticated surveillance systems used to facilitate human rights abuses.

The specific case is against Cisco and arises out of allegations that Cisco custom-built tools as part of the Great Firewall of China to help the Chinese government target members of disfavored groups, including the Falun Gong religious minority.  The case claims that those tools were used to help identify individuals who then faced horrific consequences, including wrongful arrest, detention, torture, and death.  

We did a deep dive analysis of the Ninth Circuit panel decision when it came out in 2023. Last week, the Ninth Circuit rejected an attempt to have that initial decision reconsidered by the full court, called en banc review. While the case has now survived Ninth Circuit review and should otherwise be able to move forward in the trial court, Cisco has indicated that it intends to file a petition for U.S. Supreme Court review. That puts the case on pause again. 

Still, the Ninth Circuit’s decision to uphold the 2023 panel opinion is excellent news for the critical, though slow moving, process of building accountability for companies that aid repressive governments. The 2023 opinion unequivocally rejected many of the arguments that companies use to justify their decision to provide tools and services that are later used to abuse people. For instance, a company only needs to know that its assistance is helping in human rights abuses; it does not need to have a purpose to facilitate abuse. Similarly, the fact that a technology has legitimate law enforcement uses does not immunize the company from liability for knowingly facilitating human rights abuses.

EFF has participated in this case at every level of the courts, and we intend to continue to do so. But a better way forward for everyone would be if Cisco owned up to its actions and took steps to make amends to those injured and their families with an appropriate settlement offer, like Yahoo! did in 2007. It’s not too late to change course, Cisco.

And as EFF noted recently, Cisco isn’t the only company that should take note of this development. Recent reports have revealed the use (and misuse) of Google and Amazon services by the Israeli government to facilitate surveillance and tracking of civilians in Gaza. These reports raise serious questions about whether Google and Amazon  are following their own published statements and standards about protecting against the use of their tools for human rights abuses. Unfortunately, it’s all too common for companies to ignore their own human rights policies, as we highlighted in a recent brief about notorious spyware company NSO Group.

The reports about Gaza also raise questions about whether there is potential liability against Google and Amazon for aiding and abetting human rights abuses against Palestinians. The abuses by Israel have now been confirmed by the International Court of Justice, among others, and the longer they continue, the harder it is going to be for the companies to claim that they had no knowledge of the abuses. As the Ninth Circuit confirmed, aiding and abetting liability is possible even though these technologies are also useful for legitimate law enforcement purposes and even if the companies did not intend them to be used to facilitate human rights abuses. 

The stakes are getting higher for companies. We first call on Cisco to change course, acknowledge the victims, and accept responsibility for the human rights abuses it aided and abetted.  

Second, given the current ongoing abuses in Gaza, we renew our call for Google and Amazon to first come clean about their involvement in human rights abuses in Gaza and, where necessary, make appropriate changes to avoid assisting in future abuses.

Finally, for other companies looking to sell surveillance, facial recognition, and other potentially abusive tools to repressive governments – we’ll be watching you, too.   

Senate Vote Could Give Helping Hand To Patent Trolls

By: Joe Mullin
September 18, 2024 at 12:33

Update 9/26/24: The hearing and scheduled committee vote on PERA and PREVAIL was canceled. Supporters can continue to register their opposition via our action, as these bills may still be scheduled for a vote later in 2024. 

Update 9/20/24: The Senate vote scheduled for Thursday, Sep. 19 has been rescheduled for Thursday, Sep. 26. 

A patent on crowdfunding. A patent on tracking packages. A patent on photo contests. A patent on watching an ad online. A patent on computer bingo. A patent on upselling.

These are just a few of the patents used to harass software developers and small companies in recent years. Fortunately, U.S. courts tossed them out, thanks to the landmark 2014 Supreme Court decision in Alice v. CLS Bank. The Alice ruling has effectively ended hundreds of lawsuits in which defendants were improperly sued for basic computer use.

Take Action

Tell Congress: No New Bills For Patent Trolls

Now, patent trolls and a few huge corporate patent-holders are upset about losing their bogus patents. They are lobbying Congress to change the rules–and reverse the Alice decision entirely. Shockingly, they’ve convinced the Senate Judiciary Committee to vote this Thursday on two of the most damaging patent bills we’ve ever seen.

The Patent Eligibility Restoration Act (PERA, S. 2140) would overturn Alice, enabling patent trolls to extort small business owners and even hobbyists, just for using common software systems to express themselves or run their businesses. PERA would also overturn a 2013 Supreme Court case that prevents most kinds of patenting of human genes.

Meanwhile, the PREVAIL Act (S. 2220) seeks to severely limit how the public can challenge bad patents at the patent office. Challenges like these are one of the most effective ways to throw out patents that never should have been granted in the first place. 

This week, we need to show Congress that everyday users and creators won’t stand for laws that actually expand avenues for patent abuse.

The U.S. Senate must not pass new legislation to allow the worst patent scams to expand and flourish. 

Take Action

Tell Congress: No New Bills For Patent Trolls

Unveiling Venezuela’s Repression: A Legacy of State Surveillance and Control

This post was written by Laura Vidal (PhD), an independent researcher in learning and digital rights.

This is part two of a series. Part one on surveillance and control around the July election is here.

Over the past decade, the government in Venezuela has meticulously constructed a framework of surveillance and repression, which has been repeatedly denounced by civil society and digital rights defenders in the country. This apparatus is built on a foundation of restricted access to information, censorship, harassment of journalists, and the closure of media outlets. The systematic use of surveillance technologies has created an intricate network of control.

Security forces have increasingly relied on digital tools to monitor citizens, frequently stopping people to check the content of their phones and detaining those whose devices contain anti-government material. The country’s digital identification systems, Carnet de la Patria and Sistema Patria—established in 2016 and linked to social welfare programs—have also been weaponized against the population by linking access to essential services with affiliation to the governing party. 

Censorship and internet filtering in Venezuela became omnipresent ahead of the recent election period. The government blocked access to media outlets, human rights organizations, and even VPNs—restricting access to critical information. Social media platforms like X (formerly Twitter) and WhatsApp were also  targeted—and are expected to be regulated—with the government accusing these platforms of aiding opposition forces in organizing a “fascist coup d’état” and spreading “hate” while promoting a “civil war.”

The blocking of these platforms not only limits free expression but also serves to isolate Venezuelans from the global community and their networks in the diaspora, a community of around 9 million people. The government's rhetoric, which labels dissent as "cyberfascism" or "terrorism," is part of a broader narrative that seeks to justify these repressive measures while maintaining a constant threat of censorship, further stifling dissent.

Moreover, there is a growing concern that the government’s strategy could escalate to broader shutdowns of social media and communication platforms if street protests become harder to control, highlighting the lengths to which the regime is willing to go to maintain its grip on power.

Fear is another powerful tool that enhances the effectiveness of government control. Actions like mass arrests, often streamed online, and the public display of detainees create a chilling effect that silences dissent and fractures the social fabric. Economic coercion, combined with pervasive surveillance, fosters distrust and isolation—breaking down the networks of communication and trust that help Venezuelans access information and organize.

This deliberate strategy aims not just to suppress opposition but to dismantle the very connections that enable citizens to share information and mobilize for protests. The resulting fear, compounded by the difficulty in perceiving the full extent of digital repression, deepens self-censorship and isolation. This makes it harder to defend human rights and gain international support against the government's authoritarian practices.

Civil Society’s Response

Despite the repressive environment, civil society in Venezuela continues to resist. Initiatives like Noticias Sin Filtro and El Bus TV have emerged as creative ways to bypass censorship and keep the public informed. These efforts, alongside educational campaigns on digital security and the innovative use of artificial intelligence to spread verified information, demonstrate the resilience of Venezuelans in the face of authoritarianism. However, the challenges remain extensive.

The Inter-American Commission on Human Rights (IACHR) and its Special Rapporteur for Freedom of Expression (SRFOE) have condemned the institutional violence occurring in Venezuela, characterizing it as state terrorism. To comprehend the full scope of this crisis, it is paramount to understand that this repression is not just a series of isolated actions but a comprehensive and systematic effort that has been building for over 15 years. It combines infrastructural control (keeping essential services barely functional), the blocking of independent media, pervasive surveillance, fear-mongering, isolation, and legislative strategies designed to close civic space. With the recent approval of a law aimed at severely restricting the work of non-governmental organizations, civic space in Venezuela faces its greatest challenge yet.

The fact that this repression occurs amid widespread human rights violations suggests that the government's next steps may involve an even harsher crackdown. The digital arm of government propaganda reaches far beyond Venezuela’s borders, attempting to silence voices abroad and isolate the country from the global community. 

The situation in Venezuela is dire, and the use of technology to facilitate political violence represents a significant threat to human rights and democratic norms. As the government continues to tighten its grip, the international community must speak out against these abuses and support efforts to protect digital rights and freedoms. The Venezuelan case is not just a national issue but a global one, illustrating the dangers of unchecked state power in the digital age.

However, this case also serves as a critical learning opportunity for the global community. It highlights the risks of digital authoritarianism and the ways in which governments can influence and reinforce each other's repressive strategies. At the same time, it underscores the importance of an organized and resilient civil society—in spite of so many challenges—as well as the power of a network of engaged actors both inside and outside the country. 

These collective efforts offer opportunities to resist oppression, share knowledge, and build solidarity across borders. The lessons learned from Venezuela should inform global strategies to safeguard human rights and counter the spread of authoritarian practices in the digital era.

An open letter, organized by a group of Venezuelan digital and human rights defenders, calling for an end to technology-enabled political violence in Venezuela, has been published by Access Now and remains open for signatures.

The New U.S. House Version of KOSA Doesn’t Fix Its Biggest Problems

An amended version of the Kids Online Safety Act (KOSA) being considered this week in the U.S. House is still a dangerous online censorship bill that contains many of the same fundamental problems as the version the Senate passed in July. The changes to the House bill do not alter the fact that KOSA will coerce the largest social media platforms into blocking or filtering a variety of entirely legal content, and will subject a large portion of users to privacy-invasive age verification. They do bring KOSA closer to becoming law, and put us one step closer to giving government officials dangerous and unconstitutional power over what types of content can be shared and read online.

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

Reframing the Duty of Care Does Not Change Its Dangerous Outcomes

For years now, digital rights groups, LGBTQ+ organizations, and many others have been critical of KOSA's “duty of care.” While the language has been modified slightly, this version of KOSA still creates a duty of care and negligence standard of liability that will allow the Federal Trade Commission to sue apps and websites that don’t take measures to “prevent and mitigate” various harms to minors that are vague enough to chill a significant amount of protected speech.  

The biggest shift to the duty of care is in the description of the harms that platforms must prevent and mitigate. Among other harms, the previous version of KOSA included anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors, “consistent with evidence-informed medical information.” The new version drops this section and replaces it with the "promotion of inherently dangerous acts that are likely to cause serious bodily harm, serious emotional disturbance, or death.” The bill defines “serious emotional disturbance” as “the presence of a diagnosable mental, behavioral, or emotional disorder in the past year, which resulted in functional impairment that substantially interferes with or limits the minor’s role or functioning in family, school, or community activities.”  

Despite the new language, this provision is still broad and vague enough that no platform will have any clear indication about what they must do regarding any given piece of content. Its updated list of harms could still encompass a huge swathe of entirely legal (and helpful) content about everything from abortion access and gender-affirming care to drug use, school shootings, and tackle football. It is still likely to exacerbate the risks of children being harmed online because it will place barriers on their ability to access lawful speech—and important resources—about topics like addiction, eating disorders, and bullying. And it will stifle minors who are trying to find their own supportive communities online.  

Kids will, of course, still be able to find harmful content, but the largest platforms—where the most kids are—will face increased liability for letting any discussion about these topics occur. It will be harder for suicide prevention messages to reach kids experiencing acute crises, harder for young people to find sexual health information and gender identity support, and generally, harder for adults who don’t want to risk the privacy- and security-invasion of age verification technology to access that content as well.  

As in the past version, enforcement of KOSA is left up to the FTC, and, to some extent, state attorneys general around the country. Whether you agree with them or not on what encompasses a “diagnosable mental, behavioral, or emotional disorder,”  the fact remains that KOSA's flaws are as much about the threat of liability as about the actual enforcement. As long as these definitions remain vague enough that platforms have no clear guidance on what is likely to cross the line, there will be censorship—even if the officials never actually take action. 

The previous House version of the bill stated that “A high impact online company shall exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate the following harms to minors.” The new version slightly modifies this to say that such a company "shall create and implement its design features to reasonably prevent and mitigate the following harms to minors.” These language changes are superficial; this section still imposes a standard that requires platforms to filter user-generated content and imposes liability if they fail to do so “reasonably.” 

House KOSA Edges Closer to Harmony with Senate Version 

Some of the latest amendments to the House version of KOSA bring it closer in line with the Senate version which passed a few months ago (not that this improves the bill).  

This version of KOSA lowers the bar, set by the previous House version, that determines  which companies would be impacted by KOSA’s duty of care. While the Senate version of KOSA does not have such a limitation (and would affect small and large companies alike), the previous House version created a series of tiers for differently-sized companies. This version has the same set of tiers, but lowers the highest bar from companies earning $2.5 billion in annual revenue, or having 150 million annual users, to companies earning $1 billion in annual revenue, or having 100 million annual users.  

This House version also includes the “filter bubble” portion of KOSA which was added to the Senate version a year ago. This requires any “public-facing website, online service, online application, or mobile application that predominantly provides a community forum for user-generated content” to provide users with an algorithm that uses a limited set of information, such as search terms and geolocation, but not search history (for example). This section of KOSA is meant to push users towards a chronological feed. As we’ve said before, there’s nothing wrong with online information being presented chronologically for those who want it. But just as we wouldn’t let politicians rearrange a newspaper in a particular order, we shouldn’t let them rearrange blogs or other websites. It’s a heavy-handed move to stifle the editorial independence of web publishers.   

Lastly, the House authors have added language that likely has no actual effect on how platforms or courts will interpret the law, but which points directly to the concerns we’ve raised. It states that “a government entity may not enforce this title or a regulation promulgated under this title based upon a specific viewpoint of any speech, expression, or information protected by the First Amendment to the Constitution that may be made available to a user as a result of the operation of a design feature.” Yet KOSA does just that: the FTC will have the power to force platforms to moderate or block certain types of content based entirely on the viewpoints that content expresses.

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

KOSA Remains an Unconstitutional Censorship Bill 

KOSA remains woefully underinclusive—for example, Google's search results will not be impacted regardless of what they show young people, but Instagram is on the hook for a broad amount of content—while making it harder for young people in distress to find emotional, mental, and sexual health support. This version does only one important thing—it moves KOSA closer to passing in both houses of Congress, and puts us one step closer to enacting an online censorship regime that will hurt free speech and privacy for everyone.

From the day before yesterday — Electronic Frontier Foundation

KOSA’s Online Censorship Threatens Abortion Access

By: Lisa Femia
September 17, 2024 at 14:32

For those living in one of the 22 states where abortion is banned or heavily restricted, the internet can be a lifeline. It has essential information on where and how to access care, links to abortion funds, and guidance on ways to navigate potential legal risks. Activists use the internet to organize and build community, and reproductive healthcare organizations rely on it to provide valuable information and connect with people in need.

But both Republicans and Democrats in Congress are now actively pushing for federal legislation that could cut youth off from these vital healthcare resources and stifle online abortion information for adults and kids alike.

This summer, the U.S. Senate passed the Kids Online Safety Act (KOSA), a bill that would grant the federal government and state attorneys general the power to restrict online speech they find objectionable in a misguided and ineffective attempt to protect kids online. A number of organizations have already sounded the alarm on KOSA’s danger to online LGBTQ+ content, but the hazards of the bill don’t stop there.

KOSA puts abortion seekers at risk. It could easily lead to censorship of vital and potentially life-saving information about sexual and reproductive healthcare. And by age-gating the internet, it could result in websites requiring users to submit identification, undermining the ability to remain anonymous while searching for abortion information online.

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

Abortion Information Censored

As EFF has repeatedly warned, KOSA will stifle online speech. It gives government officials the dangerous and unconstitutional power to decide what types of content can be shared and read online. Under one of its key censorship provisions, KOSA would create what the bill calls a “duty of care.” This provision would require websites, apps, and online platforms to comply with a vague and overbroad mandate to prevent and mitigate “harm to minors” in all their “design features.”

KOSA contains a long list of harms that websites have a duty to protect against, including emotional disturbance, acts that lead to bodily harm, and online harassment, among others. The list of harms is open for interpretation. And many of the harms are so subjective that government officials could claim any number of issues fit the bill.

This opens the door for political weaponization of KOSA—including by anti-abortion officials. KOSA is ambiguous enough to allow officials to easily argue that its mandate includes sexual and reproductive healthcare information. They could, for example, claim that abortion information causes emotional disturbance or death, or could lead to “sexual exploitation and abuse.” This is especially concerning given the anti-abortion movement’s long history of justifying abortion restrictions by claiming that abortions cause mental health issues, including depression and self-harm (despite credible research to the contrary).

As a result, websites could be forced to filter and block such content for minors, despite the fact that minors can get pregnant and are part of the demographic most likely to get their news and information from social media platforms. By blocking this information, KOSA could cut off young people’s access to potentially life-saving sexual and reproductive health resources. So much for protecting kids.

KOSA’s expansive and vague censorship requirements will also affect adults. To avoid liability and the cost and hassle of litigation, websites and platforms are likely to over-censor potentially covered content, even if that content is otherwise legal. This could lead to the removal of important reproductive health information for all internet users, adults included.

A Tool For Anti-Choice Officials

It’s important to remember that KOSA’s “duty of care” provision would be defined and enforced by the presidential administration in charge, including any future administration that is hostile to reproductive rights. The bill grants the Federal Trade Commission, majority-controlled by the President’s party, the power to develop guidelines and to investigate or sue any websites or platforms that don’t comply. It also grants the Executive Branch the power to form a Kids Online Safety Council to further identify “emerging or current risks of harms to minors associated with online platforms.”

Meanwhile, KOSA gives state attorneys general, including those in abortion-restrictive states, the power to sue under its other provisions, many of which intersect with the “duty of care.” As EFF has argued, this gives state officials a back door to target and censor content they don’t like, including abortion information.

It’s also directly foreseeable that anti-abortion officials would use KOSA in this way. One of the bill’s co-sponsors, Senator Marsha Blackburn (R-TN), has touted KOSA as a way to censor online content on social issues, claiming that children are being “indoctrinated” online. The Heritage Foundation, a politically powerful organization that espouses anti-choice views, also has its eyes on KOSA. It has been lobbying lawmakers to pass the bill and suggesting that a future administration could fill the Kids Online Safety Council with “representatives who share pro-life values.”

This all comes at a time when efforts to censor abortion information online are at a fever pitch. In abortion-restrictive states, officials have already been eagerly attempting to erase abortion from the internet. Lawmakers in both South Carolina and Texas have introduced bills to censor online abortion information, though neither effort has succeeded so far. The National Right to Life Committee has also created a model abortion law aimed at restricting abortion rights in a variety of ways, including digital access to information.

KOSA Hurts Anonymity Online

KOSA will also push large and important parts of the internet behind age gates. In order to determine which users are minors, online services will likely impose age verification systems, which require everyone—both adults and minors—to verify their age by providing identifying information, oftentimes including government-issued ID or other personal records.

This is deeply problematic for maintaining access to reproductive care. Age verification undermines our First Amendment right to remain anonymous online by requiring users to confirm their identity before accessing webpages and information. It would chill users who do not wish to share their identity from accessing or sharing online abortion resources, and put others’ identities at increased risk of exposure.

In a post-Roe United States, in which states are increasingly banning, restricting, and prosecuting abortions, the ability to anonymously seek and share abortion information online is more important than ever. For people living in abortion-restrictive states, searching and sharing abortion information online can put you at risk. There have been multiple instances of law enforcement agencies using digital evidence, including internet history, in abortion-related criminal cases. We’ve also seen an increase in online harassment and doxxing of healthcare professionals, even in more abortion-protective states.

Because of this, many organizations, including EFF, have tried to help people take steps to protect privacy and anonymity online. KOSA would undercut those efforts. While it’s true that our online ecosystem is already rich with private surveillance, age verification adds another layer of mass data collection. Online ID checks require adults to upload data-rich, government-issued identifying documents to either the website or a third-party verifier, creating a potentially lasting record of their visit to the website.

For abortion seekers taking steps to protect their anonymity and avoid this pervasive surveillance, this would make things all the more difficult. Using a public computer or creating anonymous profiles on social networks won’t keep you safe if you have to upload ID to access the information you need.

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

We Can Still Stop KOSA From Passing

KOSA has not yet passed the House, so there’s still time to stop it. But the Senate vote means that the House could bring it up for a vote at any time, and the House has introduced its own similarly flawed version of KOSA. If we want to protect access to abortion information online, we must organize now to stop KOSA from passing.

Unveiling Venezuela’s Repression: Surveillance and Censorship Following July’s Presidential Election

This post was written by Laura Vidal (PhD), an independent researcher in learning and digital rights.

This is part one of a series. Part two on the legacy of Venezuela’s state surveillance is here.

As thousands of Venezuelans took to the streets across the country to demand transparency in July’s election results, the ensuing repression has been described as the harshest to date, with technology playing a central role in facilitating this crackdown.

The presidential elections in Venezuela marked the beginning of a new chapter in the country’s ongoing political crisis. Since July 28th, the country’s security forces have carried out a severe backlash against demonstrations, leaving 20 people dead. The results announced by the government, which claimed re-election for Nicolás Maduro, have been strongly contested by political leaders within Venezuela as well as by the Organization of American States (OAS) and governments across the region.

In the days following the election, the opposition—led by candidates Edmundo González Urrutia and María Corina Machado—challenged the National Electoral Council’s (CNE) decision to award the presidency to Maduro. They called for greater transparency in the electoral process, particularly regarding the publication of the original tally sheets, which are essential for confirming or contesting the election results. At present, these original tally sheets remain unpublished.

In response to the lack of official data, the coalition supporting the opposition—known as Comando con Venezuela—presented the tally sheets obtained by opposition witnesses on the night of July 29th. These were made publicly available on an independent portal named “Presidential Results 2024,” accessible to any internet user with a Venezuelan identity card.

The government responded with repression, including numerous instances of technology-supported violence. The surveillance and control apparatus saw intensified use, including increased deployment of VenApp, a surveillance application originally launched in December 2022 to report failures in public services. Promoted by President Nicolás Maduro as a means for citizens to report on their neighbors, VenApp has been integrated into the broader system of state control, encouraging citizens to report activities the state deems suspicious and further entrenching a culture of surveillance.

Additional reports indicated the use of drones across various regions of the country. Increased detentions and searches at airports have particularly impacted human rights defenders, journalists, and other vulnerable groups. This has been compounded by the annulment of passports and other forms of intimidation, creating an environment where many feel trapped and fearful of speaking out.

The combined effect of these tactics is a pervasive sense that it is safer not to stand out. Many NGOs have begun reducing the visibility of their members on social media; some individuals have refused interviews or published documented human rights violations under generic names; and journalists have turned to AI-generated avatars to protect their identities. People are increasingly setting their social media profiles to private and changing their profile photos to hide their faces. Many are also sending information about what is happening in the country to their networks abroad, for fear of retaliation.

These actions often lead to arbitrary detentions, with security forces publicly parading those arrested as trophies, using social media materials and tips from informants to justify their actions. The clear intent behind these tactics is to intimidate, and they have been effective in silencing many. This digital repression is often accompanied by offline tactics, such as marking the residences of opposition figures, further entrenching the climate of fear.

However, this digital aspect of repression is far from a sudden development. These recent events are the culmination of years of systematic efforts to control, surveil, and isolate the Venezuelan population—a strategy that draws from both domestic decisions and the playbook of other authoritarian regimes. 

In response, civil society in Venezuela continues to resist. In August, EFF joined more than 150 organizations and individuals in an open letter highlighting the technology-enabled political violence in Venezuela. Read more about this wider history of Venezuela’s surveillance and civil society resistance in part two of this series, available here.

 

The Climate Has a Posse – And So Does Political Satire

September 16, 2024 at 11:36

Greenwashing is a well-worn strategy to try to convince the public that environmentally damaging activities aren’t so damaging after all. It can be very successful precisely because most of us don’t realize it’s happening.

Enter the Yes Men, skilled activists who specialize in elaborate pranks that call attention to corporate tricks and hypocrisy. This time, they’ve created a website—wired-magazine.com—that looks remarkably like Wired.com and includes, front and center, an op-ed from writer (and EFF Special Adviser) Cory Doctorow. The op-ed, titled “Climate change has a posse,” discussed the “power and peril” of a new “greenwashing” emoji designed by renowned artist Shepard Fairey:

First, we have to ask why in hell Unicode—formerly the Switzerland of tech standards—decided to plant its flag in the greasy battlefield of eco-politics now. After rejecting three previous bids for a climate change emoji, in 2017 and 2022, this one slipped rather suspiciously through the iron gates.

Either the wildfire smoke around Unicode’s headquarters in Silicon Valley finally choked a sense of ecological urgency into them, or more likely, the corporate interests that comprise the consortium finally found a way to appease public contempt that was agreeable to their bottom line.

Notified of the spoof, Doctorow immediately tweeted his joy at being included in a Yes Men hoax.

Wired.com was less pleased. An attorney for its corporate parent, Condé Nast (CDN), demanded the Yes Men take the site down and transfer the domain name to CDN, claiming trademark infringement and misappropriation of Doctorow’s identity, with a vague reference to copyright infringement thrown in for good measure.

As we explained in our response on the Yes Men’s behalf, Wired’s heavy-handed reaction was both misguided and disappointing. Their legal claims are baseless given the satirical, noncommercial nature of the site (not to mention Doctorow’s implicit celebration of it after the fact). And frankly, a publication of Wired’s caliber should be celebrating this form of political speech, not trying to shut it down.

Hopefully Wired and CDN will recognize this is not a battle they want or need to fight. If not, EFF stands ready to defend the Yes Men and their critical work.

NextNav’s Callous Land-Grab to Privatize 900 MHz

By: Rory Mir
September 13, 2024 at 10:52

The 900 MHz band, a frequency range serving as a commons for all, is now at risk due to NextNav’s brazen attempt to privatize this shared resource. 

Left by the FCC for use by amateur radio operators, unlicensed consumer devices, and industrial, scientific, and medical equipment, this spectrum has become a hotbed for new technologies and community-driven projects. Millions of consumer devices also rely on the range, including baby monitors, cordless phones, IoT devices, and garage door openers. But NextNav would rather claim these frequencies, fence them off, and lease them out to mobile service providers. This is just another land-grab by a corporate rent-seeker dressed up as innovation.

EFF and hundreds of others have called on the FCC to decisively reject this proposal and protect the open spectrum as a commons that serves all.

NextNav’s Proposed 'Band-Grab'

NextNav wants the FCC to reconfigure the 902-928 MHz band to grant it exclusive rights to the majority of the spectrum. The country’s airwaves are separated into different sections for different devices to communicate, like dedicated lanes on a highway. This proposal would give NextNav not only its own lane but also an expanded operating region, increased broadcasting power, and more leeway for radio interference emanating from its portion of the band. All of this points to more power for NextNav at everyone else’s expense.

This land-grab is purportedly to implement a Positioning, Navigation and Timing (PNT) network to serve as a US-specific backup to the Global Positioning System (GPS). This plan raises red flags right off the bat.

Dropping the “global” from GPS makes it far less useful for any alleged national security purposes, especially as it is likely susceptible to the same jamming and spoofing attacks as GPS.

NextNav itself admits there is little commercial demand for PNT. GPS works, is free, and is widely supported by manufacturers. If NextNav has a grand plan to implement a new and improved standard, it was left out of the company’s FCC proposal.

What NextNav did include, however, is its intent to resell its exclusive bandwidth access to mobile 5G networks. This isn’t about national security or innovation; it’s about a rent-seeker monopolizing access to a public resource. If NextNav truly believes in its GPS-backup vision, it should look to parts of the spectrum already allocated for 5G.

Stifling the Future of Open Communication

The open sections of the 900 MHz spectrum are vital for technologies that foster experimentation and grassroots innovation. Amateur radio operators, developers of new IoT devices, and small-scale operators rely on this band.

One such project is Meshtastic, a decentralized communication tool that allows users to send messages across a network without a central server. This new approach to networking offers resilient communication that can endure emergencies where current networks fail.
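
To give a sense of how low the barrier to entry is on this unlicensed band, here is a minimal sketch using Meshtastic’s Python library to broadcast a message from a radio connected over USB. It assumes a Meshtastic device is plugged in and the meshtastic package is installed; check the project’s documentation for the current API.

```python
# pip install meshtastic  (sketch: assumes a Meshtastic radio on a USB serial port)
import meshtastic.serial_interface

# Open the first Meshtastic device found on a serial port.
interface = meshtastic.serial_interface.SerialInterface()

# Broadcast a text message to the mesh; nearby nodes relay it onward,
# with no central server or carrier network involved.
interface.sendText("Hello from the 900 MHz commons!")

interface.close()
```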

This is the type of innovation that actually addresses the crises NextNav raises, and it’s happening in the part of the spectrum allocated for unlicensed devices while empowering communities instead of a powerful intermediary. Yet this proposal threatens to crush such grassroots projects, leaving them without a commons in which to grow and improve.

This isn’t just about a set of frequencies. We need an ecosystem that fosters grassroots collaboration, experimentation, and knowledge building. Not only do these commons empower communities, they also help avoid a technology monoculture unable to adapt to new threats and changing needs as technology progresses.

Invention belongs to the public, not just to those with the deepest pockets. The FCC should ensure it remains that way.

FCC Must Protect the Commons

NextNav’s proposal is a direct threat to innovation, public safety, and community empowerment. While FCC comments on the proposal have closed, replies remain open to the public until September 20th. 

The FCC must reject this corporate land-grab and uphold the integrity of the 900 MHz band as a commons. Our future communication infrastructure—and the innovation it supports—depends on it.

You can read our FCC comments here.
