Strong End-to-End Encryption Comes to Discord Calls

We’re happy to see that Discord will soon start offering a form of end-to-end encryption dubbed “DAVE” for its voice and video chats. This puts some of Discord’s audio and video offerings in line with Zoom, and separates it from tools like Slack and Microsoft Teams, which do not offer end-to-end encryption for video, voice, or any other communications on those apps. This is a strong step forward, and Discord can do even more to protect its users’ communications.

End-to-end encryption is used by many chat apps for both text and video offerings, including WhatsApp, iMessage, Signal, and Facebook Messenger. But Discord operates differently than most of those, since alongside private and group text, video, and audio chats, it also encompasses large scale public channels on individual servers operated by Discord. Going forward, audio and video will be end-to-end encrypted, but text, including both group channels and private messages, will not.

When a call is end-to-end encrypted, you’ll see a green lock icon. While it’s not required to use the service, Discord also offers an optional way to verify that a call’s encryption is not being tampered with or eavesdropped on. During a call, one person can pull up the “Voice Privacy Code” and send it to everyone else on the line—preferably in a different chat app, like Signal—to confirm that no one is compromising participants’ use of end-to-end encryption. This is a way to ensure that no one is impersonating a participant or listening in on the conversation.
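
Discord’s whitepaper specifies exactly how the Voice Privacy Code is derived; as a rough illustration of the general pattern such out-of-band codes follow, here is a minimal Python sketch (the function and input names are our assumptions, not Discord’s API): every participant hashes the same public key material and compares the resulting short code over a separate channel.

```python
# Illustrative sketch of deriving a short out-of-band verification code.
# The exact inputs DAVE hashes are defined in Discord's whitepaper; the
# names below are hypothetical.
import hashlib

def voice_privacy_code(participant_public_keys: list[bytes], call_id: bytes) -> str:
    # Sort the keys so every participant derives the same code regardless
    # of the order in which they learned them.
    material = call_id + b"".join(sorted(participant_public_keys))
    digest = hashlib.sha256(material).digest()
    # Truncate to a short numeric code that humans can read aloud.
    return f"{int.from_bytes(digest[:8], 'big') % 10**8:08d}"

# Each participant computes this locally and compares it over a separate
# channel (e.g., Signal). A mismatch suggests a man-in-the-middle.
print(voice_privacy_code([b"alice-key", b"bob-key"], b"call-42"))
```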

By default, you have to do this every time you initiate a call if you wish to verify that the communication has strong security. There is an option to enable persistent verification keys, which means your chat partners only have to verify you once for each device you own (e.g., if you sometimes call from a phone and sometimes from a computer, they’ll want to verify each one).

Key management is a hard problem in both the design and implementation of cryptographic protocols. Making sure the same encryption keys are shared across multiple devices in a secure way, as well as reliably discovered in a secure way by conversation partners, is no trivial task. Other apps such as Signal require some manual user interaction to ensure the sharing of key-material across multiple devices is done in a secure way. Discord has chosen to avoid this process for the sake of usability, so that even if you do choose to enable persistent verification keys, the keys on separate devices you own will be different.
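
Concretely, per-device keys mean that verification state has to be tracked per device rather than per person. A minimal sketch of what that implies for a verification store (a hypothetical data model, not Discord’s):

```python
# Hypothetical per-device verification store. Because each of a contact's
# devices has its own persistent key, each device must be verified
# separately out of band.
verified_fingerprints: dict[str, dict[str, str]] = {
    # user -> {device -> key fingerprint confirmed out of band}
    "alice": {
        "phone": "3f9a12cd77be01aa",   # verified during an earlier call
        "laptop": "b71c0e44d29f8c55",  # must be verified on its own
    },
}

def is_verified(user: str, device: str, fingerprint: str) -> bool:
    """True only if this exact device's key was previously confirmed."""
    return verified_fingerprints.get(user, {}).get(device) == fingerprint
```

If keys could instead be shared across a contact’s devices, the inner dictionary would collapse to a single fingerprint per contact, verified once.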

While this is an understandable trade-off, we hope Discord takes an extra step to allow users who have heightened security concerns the ability to share their persistent keys across devices. For the sake of usability, they could by default generate separate keys for each device while making sharing keys across them an extra step. This will avoid the associated risk of your conversation partners seeing you’re using the same device across multiple calls. We believe making the use of persistent keys easier and cross-device will make things safer for users as well: they will only have to verify the key for their conversation partners once, instead of for every call they make.

Discord has carried out the design and implementation of DAVE in a solidly transparent way: publishing the protocol whitepaper and an open-source library, commissioning an audit from well-regarded outside researchers, and expanding its bug-bounty program to reward security researchers who report vulnerabilities in the DAVE protocol. This is the sort of transparency we feel is required when rolling out encryption like this, and we applaud this approach.

But we’re disappointed that, citing the need for content moderation, Discord has decided not to extend end-to-end encryption offerings to include private messages or group chats. In a statement to TechCrunch, they reiterated they have no further plans to roll out encryption in direct messages or group chats.

End-to-end encrypted video and audio chat is a good step forward—one that too many messaging apps lack. But because protection of our text conversations is important, and because partial encryption is always confusing for users, Discord should move to enable end-to-end encryption on private text chats as well. This is not an easy task, but it’s one worth doing.

The French Detention: Why We're Watching the Telegram Situation Closely

EFF is closely monitoring the situation in France in which Telegram’s CEO Pavel Durov was charged with having committed criminal offenses, most of them seemingly related to the operation of Telegram. This situation has the potential to pose a serious danger to security, privacy, and freedom of expression for Telegram’s 950 million users.  

On August 24th, French authorities detained Durov when his private plane landed in France. Since then, the French prosecutor has revealed that Durov’s detention was related to an ongoing investigation, begun in July, of an “unnamed person.” The investigation involves complicity in crimes presumably taking place on the Telegram platform, failure to cooperate with law enforcement requests for the interception of communications on the platform, and a variety of charges having to do with failure to comply with  French cryptography import regulations. On August 28, Durov was charged with each of those offenses, among others not related to Telegram, and then released on the condition that he check in regularly with French authorities and not leave France.  

We know very little about the Telegram-related charges, making it difficult to draw conclusions about how serious a threat this investigation poses to privacy, security, or freedom of expression on Telegram, or on online services more broadly. But it has the potential to be quite serious. EFF is monitoring the situation closely.  

There appear to be three categories of Telegram-related charges:  

  • First is the charge based on “the refusal to communicate upon request from authorized authorities, the information or documents necessary for the implementation and operation of legally authorized interceptions.” This seems to indicate that the French authorities sought Telegram’s assistance to intercept communications on Telegram.  
  • The second set of charges relate to “complicité” with crimes that were committed in some respect on or through Telegram. These charges specify “organized distribution of images of minors with a pedopornographic nature, drug trafficking, organized fraud, and conspiracy to commit crimes or offenses,” and “money laundering of crimes or offenses in an organized group.”  
  • The third set of charges all relate to Telegram’s failure to file a declaration required of those who import a cryptographic system into France.  

Now we are left to speculate. 

It is possible that all of the charges derive from “the failure to communicate.” French authorities may be claiming that Durov is complicit with criminals because Telegram refused to facilitate the “legally authorized interceptions.” Similarly, the charges connected to the failure to file the encryption declaration likely also derive from the “legally authorized interceptions” being encrypted. France very likely knew for many years that Telegram had not filed the required declarations regarding its encryption, yet did not previously charge the company for that omission.

Refusal to cooperate with a valid legal order for assistance with an interception could be similarly prosecuted in most international legal systems, including the United States. EFF has frequently contested the validity of such orders, and of the gag orders associated with them, and has urged services to contest them in court and pursue all appeals. But once such orders have been finally validated by courts, they must be complied with. The situation is more difficult where a nation lacks a properly functioning judiciary or due process, as in China or Saudi Arabia.

In addition to the refusal to cooperate with the interception, it seems likely that the complicité charges also, or instead, relate to Telegram’s failure to remove posts advancing crimes upon request or knowledge. Specifically, the charges of complicity in “the administration of an online platform to facilitate an illegal transaction” and “organized distribution of images of minors with a pedopornographic nature, drug trafficking, [and] organized fraud” could well be based on a failure to take down posts. An initial statement by Ofmin, the French agency established to investigate threats to child safety online, referred to a “lack of moderation” as being at the heart of the investigation. Under French law (Article 323-3-2), it is a crime to knowingly allow the distribution of illegal content or the provision of illegal services, or to facilitate payments for either.

In particular, this potential “lack of moderation” liability bears watching. If Durov is prosecuted because Telegram simply did an inadequate job of removing offending content it was generally aware of, that could expose nearly every other online platform to similar liability. It would also be concerning, though more in line with existing law, if the charges relate to an affirmative refusal to address specific posts or accounts, rather than to a generalized awareness. And both of these situations are very different from one in which France has evidence that Durov was more directly involved with those using Telegram for criminal purposes. Moreover, France will likely have to prove that Durov himself committed each of these offenses, not Telegram or others at the company.

EFF has raised serious concerns about Telegram’s behavior both as a social media platform and as a messaging app. In spite of its reputation as a “secure messenger,” only a very small subset of messages on Telegram are end-to-end encrypted in a way that prevents the company from reading the contents of communications. (Only one-to-one messages with the “secret chats” option enabled are end-to-end encrypted.) And even so, cryptographers have questioned the effectiveness of Telegram’s homebrewed cryptography. If the French government’s charges have to do with Telegram’s refusal to moderate or intercept these messages, EFF will oppose this case in the strongest terms possible, just as we have opposed all government threats to end-to-end encryption all over the world.

It is not yet clear whether Telegram users themselves, or those offering similar services to Telegram, should be concerned. French authorities may ask for technical measures that endanger the security and privacy of those users. Durov and Telegram may or may not comply. Those running similar services may not have anything to fear, or these charges may be the canary in the coalmine warning us all that French authorities intend to expand their inspection of messaging and social media platforms. It is simply too soon, and there is too little information for us to know for sure.  

It is not the first time Telegram’s laissez-faire attitude towards content moderation has led to government reprisals. In 2022, the company was forced to pay a fine in Germany for failing to establish a lawful channel for reporting illegal content or to name an entity in Germany to receive official communications. Brazil fined the company in 2023 for failing to suspend accounts of supporters of former President Jair Bolsonaro. Nevertheless, this arrest marks an alarming escalation by a state’s authorities. We are monitoring the situation closely and will continue to do so.

Now The EU Council Should Finally Understand: No One Wants “Chat Control”

By Joe Mullin
July 1, 2024 at 11:11

The EU Council has now gone through a fourth presidency term without adopting its controversial message-scanning proposal. The just-concluded Belgian Presidency failed to broker a deal that would push forward this regulation, which has now been debated in the EU for more than two years.

For all those who have reached out to sign the “Don’t Scan Me” petition, thank you—your voice is being heard. News reports indicate the sponsors of this flawed proposal withdrew it because they couldn’t get a majority of member states to support it. 

Now, it’s time to stop attempting to compromise encryption in the name of public safety. EFF has opposed this legislation from the start. Today, we’ve published a statement, along with EU civil society groups, explaining why this flawed proposal should be withdrawn.  

The scanning proposal would create “detection orders” that allow for messages, files, and photos from hundreds of millions of users around the world to be compared to government databases of child abuse images. At some points during the debate, EU officials even suggested using AI to scan text conversations and predict who would engage in child abuse. That’s one of the reasons why some opponents have labeled the proposal “chat control.” 

There’s scant public support for government file-scanning systems that break encryption. Nor is there support in EU law. People who need secure communications the most—lawyers, journalists, human rights workers, political dissidents, and oppressed minorities—will be the most affected by such invasive systems. Another group harmed would be those whom the EU’s proposal claims to be helping—abused and at-risk children, who need to securely communicate with trusted adults in order to seek help. 

The right to have a private conversation, online or offline, is a bedrock human rights principle. When surveillance is used as an investigation technique, it must be targeted and coupled with strong judicial oversight. In the coming EU Council presidency, which will be led by Hungary, leaders should drop this flawed message-scanning proposal and focus on law enforcement strategies that respect people’s privacy and security.

European Court of Human Rights Confirms: Weakening Encryption Violates Fundamental Rights

In a milestone judgment—Podchasov v. Russia—the European Court of Human Rights (ECtHR) has ruled that weakening of encryption can lead to general and indiscriminate surveillance of the communications of all users and violates the human right to privacy.

In 2017, the landscape of digital communication in Russia faced a pivotal moment when the government required Telegram Messenger LLP and other “internet communication” providers to store all communication data—and content—for specified durations. These providers were also required to supply law enforcement authorities with users’ data, the content of their communications, as well as any information necessary to decrypt user messages. The FSB (the Russian Federal Security Service) subsequently ordered Telegram to assist in decrypting the communications of specific users suspected of engaging in terrorism-related activities.

Telegram opposed this order on the grounds that it would create a backdoor that would undermine encryption for all of its users. As a result, Russian courts fined Telegram and ordered the blocking of its app within the country. The controversy extended beyond Telegram, drawing in numerous users who contested the disclosure orders in Russian courts. A Russian citizen, Mr Podchasov, escalated the issue to the European Court of Human Rights (ECtHR), arguing that forced decryption of user communication would infringe on the right to private life under Article 8 of the European Convention on Human Rights (ECHR), which reads as follows:

Everyone has the right to respect for his private and family life, his home and his correspondence (Article 8 ECHR, right to respect for private and family life, home and correspondence) 

EFF has always stood against government intrusion into the private lives of users and advocated for strong privacy guarantees, including the right to confidential communication. Encryption not only safeguards users’ privacy but also protects their right to freedom of expression protected under international human rights law. 

In a great victory for privacy advocates, the ECtHR agreed. The Court found that the requirement of continuous, blanket storage of private user data interferes with the right to privacy under the Convention, emphasizing that the possibility for national authorities to access these data is a crucial factor for determining a human rights violation [at 53]. The Court identified the inherent risks of arbitrary government action in secret surveillance in the present case and found again—following its stance in Roman Zakharov v. Russia—that the relevant legislation failed to live up to the quality-of-law standards and lacked adequate and effective safeguards against misuse [75]. Turning to a potential justification for such interference, the ECtHR emphasized the need for a careful balancing test that considers the use of modern data storage and processing technologies and weighs the potential benefits against important private-life interests [62-64].

In addressing the State mandate for service providers to submit decryption keys to security services, the court's deliberations culminated in the following key findings [76-80]:

  1. Encryption is important for protecting the right to private life and other fundamental rights, such as freedom of expression: The ECtHR emphasized the importance of encryption technologies for safeguarding the privacy of online communications. Encryption safeguards the right to private life generally while also supporting the exercise of other fundamental rights, such as freedom of expression.
  2. Encryption as a shield against abuses: The Court emphasized the role of encryption in providing a robust defense against unlawful access, noting that it generally “appears to help citizens and businesses to defend themselves against abuses of information technologies, such as hacking, identity and personal data theft, fraud and the improper disclosure of confidential information.” The Court held that this must be given due consideration when assessing measures which could weaken encryption.
  3. Orders to decrypt communications weaken encryption for all users: The ECtHR established that decrypting Telegram’s “secret chats” would require weakening the encryption for all users. Taking note again of the dangers of restricting encryption described by many experts in the field, the Court held that backdoors could be exploited by criminal networks and would seriously compromise the security of all users’ electronic communications.
  4. Alternatives to decryption: The ECtHR took note of a range of alternative solutions to compelled decryption that would not weaken the protective mechanisms, such as forensics on seized devices and better-resourced policing.  

In light of these findings, the Court held that the mandate to decrypt end-to-end encrypted communications risks weakening the encryption mechanism for all users, which was disproportionate to the legitimate aims pursued.

In summary [80], the Court concluded that the retention of, and unrestricted state access to, internet communication data, coupled with decryption requirements, cannot be regarded as necessary in a democratic society, and are thus unlawful. It emphasized that direct access by authorities to user data on a generalized basis and without sufficient safeguards impairs the very essence of the right to private life under the Convention. The Court also highlighted briefs filed by the European Information Society Institute (EISI) and Privacy International, which provided insight into the workings of end-to-end encryption and explained why mandated backdoors represent an illegal and disproportionate measure.

Impact of the ECtHR ruling on current policy developments 

The ruling is a landmark judgment, which will likely draw new normative lines about human rights standards for private and confidential communication. We are currently supporting Telegram in its parallel complaint to the ECtHR, contending that blocking its app infringes upon fundamental rights. As part of a collaborative effort of international human rights and media freedom organisations, we have submitted a third-party intervention to the ECtHR, arguing that blocking an entire app is a serious and disproportionate restriction on freedom of expression. That case is still pending.

The Podchasov ruling also directly challenges ongoing efforts in Europe to weaken encryption to allow access and scanning of our private messages and pictures.

For example, the UK’s controversial Online Safety Act creates the risk that online platforms will use software to search all users’ photos, files, and messages, scanning for illegal content. We recently submitted comments to the relevant UK regulator, Ofcom, urging it to avoid any weakening of encryption when this law becomes operational.

In the EU, we are concerned that the European Commission’s message-scanning proposal (CSAR) would be a disaster for online privacy. It would allow EU authorities to compel online services to scan users’ private messages, compare users’ photos against law enforcement databases, and use error-prone AI algorithms to detect criminal behavior. Such detection measures would inevitably lead to dangerous and unreliable client-side scanning practices, undermining the essence of end-to-end encryption. As the ECtHR deems general user scanning disproportionate, specifically criticizing measures that weaken existing privacy standards, forcing platforms like WhatsApp or Signal to weaken security by inserting a vulnerability into all users’ devices to enable message scanning must be considered unlawful.

The EU regulation proposal is likely to be followed by other proposals to grant law enforcement access to encrypted data and communications. An EU high level expert group on ‘access to data for effective law enforcement’ is expected to make policy recommendations to the next EU Commission in mid-2024. 

We call on lawmakers to take the Court of Human Rights ruling seriously: blanket and indiscriminate scanning of user communication and the general weakening of encryption for users is unacceptable and unlawful. 

What Apple's Promise to Support RCS Means for Text Messaging

January 31, 2024 at 16:51

You may have heard recently that Apple is planning to implement Rich Communication Services (RCS) on iPhones, once again igniting the green versus blue bubble debate. RCS will thankfully bring a number of long-missing features to those green bubble conversations in Messages, but Apple's proposed implementation has a murkier future when it comes to security. 

The RCS standard will replace SMS, the protocol behind basic everyday text messages, and MMS, the protocol for sending pictures in text messages. RCS has a number of improvements over SMS, including longer messages, high-quality pictures, read receipts, typing indicators, GIFs, location sharing, the ability to send and receive messages over Wi-Fi, and improved group messaging. Basically, it’s a modern messaging standard with features people have grown to expect.

The RCS standard is being worked on by the same standards body (GSMA) that wrote the standard for SMS and many other core mobile functions. It has been in the works since 2007 and supported by Google since 2019. Apple had previously said it wouldn’t support RCS, but recently came around and declared that it will support sending and receiving RCS messages starting some time in 2024. This is a win for user experience and interoperability, since now iPhone and Android users will be able to send each other rich modern text messages using their phone’s default messaging apps. 

But is it a win for security? 

On its own, the core RCS protocol is currently no more secure than SMS. The protocol is not encrypted by default, meaning that anyone at your phone company, or any law enforcement agent (ordinarily with a warrant), will be able to see the contents and metadata of your RCS messages. The RCS protocol by itself does not specify or recommend any type of end-to-end encryption. The only encryption of messages is the incidental transport encryption that happens between your phone and the cell tower. This is the same way it works for SMS.

But what’s exciting about RCS is its native support for extensions. Google has taken advantage of this to implement its own plan for encryption on top of RCS, using a version of the Signal protocol. As of now, this only works when both users are using Google’s default messaging app (Google Messages), and when both users’ phone companies support RCS messaging (the big three in the U.S. all do, as do a majority around the world). If either side does not support encryption, the conversation continues over the default unencrypted channel. A phone company could also actively block encrypted RCS in a specific region, for a specific user, or for a specific pair of users by pretending it doesn’t support RCS. In that case the user will be given the option of resending the messages unencrypted, and can choose not to send the message over the unencrypted channel. Google’s implementation of encrypted RCS also doesn’t hide any metadata about your messages, so law enforcement could still get a record of who you conversed with, how many messages were sent, at what times, and how big the messages were. It’s a significant security improvement over SMS, but people with heightened risk profiles should still consider apps that leak less metadata, like Signal. Despite those caveats, this is a good step by Google towards a fully encrypted text messaging future.
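
To make that fallback behavior concrete, here is a minimal Python sketch (the names are our assumptions, not Google Messages’ actual implementation): capability discovery tells the sender whether the recipient supports encrypted RCS, and a downgrade to the unencrypted channel requires explicit user approval.

```python
# Hypothetical sketch of encrypted-RCS fallback logic, not Google's code.
from dataclasses import dataclass

@dataclass
class Recipient:
    phone: str
    supports_encrypted_rcs: bool  # learned via capability discovery; a carrier
                                  # could falsify this to force a downgrade

def send(recipient: Recipient, text: str, user_approved_plaintext: bool = False) -> str:
    if recipient.supports_encrypted_rcs:
        # A real client would run a Signal-protocol session here.
        return f"[encrypted RCS] -> {recipient.phone}"
    if user_approved_plaintext:
        return f"[plain RCS/SMS] -> {recipient.phone}: {text}"
    raise PermissionError("recipient lacks encrypted RCS; ask the user first")

# A downgrade only happens with the user's explicit consent.
alice = Recipient("+15550100", supports_encrypted_rcs=False)
try:
    send(alice, "hi")
except PermissionError:
    send(alice, "hi", user_approved_plaintext=True)
```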

Apple stated it will not use any type of proprietary end-to-end encryption (presumably referring to Google’s approach), but did say it would work to make end-to-end encryption part of the RCS standard. Avoiding a discordant ecosystem with a different encryption protocol for each company is a desirable goal. Ideally, Apple and Google will work together on standardizing end-to-end encryption in RCS so that the solution is guaranteed to work with both companies’ products from the outset. Hopefully encryption will be part of the RCS standard by the time Apple officially releases support for it; otherwise users will be left with the status quo of having to use third-party apps for interoperable encrypted messaging.

We hope that the GSMA members will agree on a standard soon, that any standard will use modern cryptographic techniques, and that the standard will do more to protect metadata and resist downgrade attacks than the current implementation of encrypted RCS. We urge Google and Apple to work with the GSMA to finalize and adopt such a standard quickly. Interoperable, encrypted text messaging by default can’t come soon enough.

Protecting Encryption And Privacy In The US: 2023 Year in Review

By Joe Mullin
December 24, 2023 at 12:30

EFF believes you have the right to have a private conversation–in the physical world, and in the digital world. The best technology to protect that right is end-to-end encryption. 

Governments around the world are working hard to monitor online conversations, far beyond the bounds of traditional targeted law enforcement. 2023 has been a year of unprecedented threats to encryption and privacy. 

In the US, three Senate bills were introduced that, in our view, would discourage, weaken, or create backdoors into encryption technology. With your help, we’ve stopped all three from moving forward–and we’ll continue to do so in the year to come. 

EARN IT, S. 1207

Simply put, EARN IT allows providers of secure communications services to be sued or prosecuted. The excuse for EARN IT is to combat online child abuse. EARN IT would allow state attorneys general to regulate the internet, as long as the stated purpose for their regulation is to protect kids from online exploitation.  

There’s no doubt that the purpose of this bill is to scan user messages, photos, and files. In a Q&A document published last year, the bill sponsors even suggested specific software that could be used to monitor users. If you offer your users encrypted services, the bill specifically allows the fact that you offered encryption to constitute evidence against you in court. 

Constantly scanning every internet user is not a reasonable technique for investigating crimes. What’s more, evidence continues to mount that the scanning software used to detect child abuse does not work and creates false accusations. If EARN IT passes, it will push companies to either stop using encryption services or even create a dangerous backdoor to encryption that would weaken privacy and security for everyone. 

We were disappointed that EARN IT passed through a committee vote, although heartened that more senators expressed concerns with the bill’s effects. EARN IT has not seen a vote on the Senate floor, and we’re continuing to express our strong opposition, together with other groups that are concerned about human rights and privacy. 

STOP CSAM, S. 1199

Possessing or distributing child abuse images is a serious crime. Anyone who has actual knowledge of such images on a service they control is required to notify the National Center for Missing and Exploited Children (a government entity), which then forwards reports to law enforcement agencies. 

That’s why we were surprised and disappointed to see some Senators introduce a bill that falsely suggests this existing law-enforcement framework would work better with the addition of mass surveillance.

The STOP CSAM bill, introduced in April, would create new crimes, allowing those who “knowingly promote or facilitate” the exploitation of children to be prosecuted, based on the very low legal standard of negligence. This is the same legal standard that applies to car accidents and other situations where the defendant did not intend to cause harm. 

At first glance, it may sound good to fight those who “promote” or “facilitate” these crimes, but the bill’s broad terms will likely reach passive conduct like, you guessed it, simply providing an encrypted app. 

STOP CSAM is one more attempt to criminalize and demonize anyone who uses encryption to communicate online. That’s why we’ve opposed it throughout the year. This bill passed out of the Senate Judiciary Committee, but has not received a vote on the Senate floor. 

Cooper Davis, S. 1080

This bill is a misguided attempt to deal with the nation’s fentanyl crisis by turning your smartphone into a DEA informant.  It threatens communications service providers with huge fines if they don’t report to the DEA suspected drug sales on their platforms. 

Faced with massive potential punishments, service providers will inevitably censor a wide variety of communications about drugs–including peoples’ descriptions of their own experiences, and even attempts to support others who are trying to get social or medical help with an addiction problem. 

If S. 1080 were to pass Congress, legislators seeking to persecute certain groups would be eager to expand the framework. In many states, politicians and prosecutors have been vocal about their desire to find and prosecute marijuana users and doctors, people who may use abortion pills, or people who want gender-related medication.

S. 1080 also has no provision to ensure the DEA deletes incorrect reports, does not require proper notification of users who are targeted, and does not require law enforcement to get a warrant for the massive troves of private user data it will be sent. The bill was passed in committee in a 16-5 vote in July, but has not received a vote on the Senate floor.

EFF will continue to oppose proposals that seek to vacuum up our private communications, or push platforms towards censorship of legitimate content. The thousands of messages we sent to Congress opposing these wrongheaded proposals have stopped them from becoming law. We held the line in 2023 with your help–thank you. 

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.

Without Interoperability, Apple Customers Will Never Be Secure

December 13, 2023 at 14:18

Every internet user should have the ability to privately communicate with the people that matter to them, in a secure fashion, using the tools and protocols of their choosing.

Apple’s iMessage offers end-to-end encrypted messaging for its customers, but only if those customers want to talk to someone who also has an Apple product. When an Apple customer tries to message an Android user, the data is sent over SMS, a protocol that debuted while Wayne’s World was still in its first theatrical run. SMS is wildly insecure, but when Apple customers ask the company how to protect themselves while exchanging messages with Android users, Apple’s answer is “buy them iPhones.”

That’s an obviously false binary. Computers are all roughly equivalent, so there’s no reason that an Android device couldn’t run an app that could securely send and receive iMessage data. If Apple won’t make that app, then someone else could. 

That’s exactly what Apple did, back when Microsoft refused to make a high-quality MacOS version of Microsoft Office: Apple reverse-engineered Office and released iWork, whose Pages, Numbers and Keynote could perfectly read and write Microsoft’s Word, Excel and PowerPoint files.

Back in September, a 16-year-old high school student reverse-engineered iMessage and released Pypush, a free software library that reimplements iMessage so that anyone can send and receive secure iMessage data, maintaining end-to-end encryption, without the need for an Apple ID.

Last week, Beeper, a multiprotocol messaging company, released Beeper Mini, an alternative iMessage app reportedly based on the Pypush code that runs on Android, giving Android users the “blue bubble” that allows Apple customers to communicate securely with them. Beeper Mini stands out among earlier attempts at this by allowing users’ devices to directly communicate with Apple’s servers, rather than breaking end-to-end encryption by having messages decrypted and re-encrypted by servers in a data-center.
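
The architectural difference matters. A conceptual sketch, using a toy XOR cipher as a stand-in for real cryptography (hypothetical code, not Beeper’s or Apple’s): a bridge that re-encrypts must hold keys and see plaintext in its data center, while a native client decrypts only on the user’s own device.

```python
# Toy illustration only: XOR is a stand-in for real encryption.
def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

DEVICE_KEY = b"held only by the endpoints"  # session key, conceptually
BRIDGE_KEY = b"extra key a bridge adds"

# Earlier bridge approach: a server decrypts and re-encrypts each message,
# so the operator can read everything passing through.
def bridge_forward(ciphertext: bytes) -> bytes:
    plaintext = xor(ciphertext, DEVICE_KEY)  # the relay sees the plaintext!
    return xor(plaintext, BRIDGE_KEY)

# Beeper Mini's reported approach: the Android device itself holds the
# session key and talks to Apple's servers directly; nothing decrypts en route.
def native_receive(ciphertext: bytes) -> bytes:
    return xor(ciphertext, DEVICE_KEY)       # decrypted on-device only

msg = xor(b"hello from an iPhone", DEVICE_KEY)
assert native_receive(msg) == b"hello from an iPhone"
```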

Beeper Mini is an example of “adversarial interoperability.” That’s when you make something new work with an existing product, without permission from the product’s creator.

(“Adversarial interoperability” is quite a mouthful, so we came up with “competitive compatibility” or “comcom” as an alternative term.)

Comcom is how we get third-party inkjet ink that undercuts HP’s $10,000/gallon cartridges, and it’s how we get independent repair from technicians who perform feats the manufacturer calls “impossible.” Comcom is where iMessage itself comes from: it started life as iChat, with support for existing protocols like XMPP.

Beeper Mini makes life more secure for Apple users in two ways: first, it protects the security of the messages they send to people who don’t use Apple devices; and second, it makes it easier for Apple users to switch to a rival platform if Apple has a change of management direction that deprioritizes their privacy.

Apple doesn’t agree. It blocked Beeper Mini users just days after the app’s release.  Apple told The Verge’s David Pierce that they had blocked Beeper Mini users because Beeper Mini “posed significant risks to user security and privacy, including the potential for metadata exposure and enabling unwanted messages, spam, and phishing attacks.”

If Beeper Mini indeed posed those risks, then Apple has a right to take action on behalf of its users. The only reason to care about any of this is if it makes users more secure, not because it serves the commercial interests of either Apple or Beeper. 

But Apple’s account of Beeper Mini’s threats does not square with the technical information Beeper has made available. Apple didn’t provide any specifics to bolster its claims. Large tech firms who are challenged by interoperators often smear their products as privacy or security risks, even when those claims are utterly baseless.

The gold standard for security claims is technical proof, not vague accusations. EFF hasn't audited Beeper Mini and we’d welcome technical details from Apple about these claimed security issues. While Beeper hasn’t published the source code for Beeper Mini, they have offered to submit it for auditing by a third party.

Beeper Mini is back. The company released an update on Monday that restored its functionality. If Beeper Mini does turn out to have security defects, Apple should protect its customers by making it easier for them to connect securely with Android users.

One thing that won’t improve the security of Apple users is for Apple to devote its engineering resources to an arms race with Beeper and other interoperators. In a climate of stepped-up antitrust enforcement, and as regulators around the world are starting to force interoperability on tech giants, pointing at interoperable products and shouting “insecure! Insecure!” no longer cuts it. 

Apple needs to acknowledge that it isn’t the only entity that can protect Apple customers.

Meta Announces End-to-End Encryption by Default in Messenger

Yesterday Meta announced that they have begun rolling out default end-to-end encryption for one-to-one messages and voice calls on Messenger and Facebook. While there remain some privacy concerns around backups and metadata, we applaud this decision. It will bring strong encryption to over one billion people, protecting them from dragnet surveillance of the contents of their Facebook messages. 

Governments are continuing to attack encryption with laws designed to weaken it. With authoritarianism on the rise around the world, encryption is more important with each passing day. Strong default encryption, sooner, might have prevented a woman in Nebraska from being prosecuted for an abortion based primarily on evidence from her Facebook messages. This update couldn’t have come at a more important time. This introduction of end-to-end encryption on Messenger means that the two most popular messaging platforms in the world, both owned by Meta, will now include strong encryption by default. 

For now this change will only apply to one-to-one chats and voice calls, and will be rolled out to all users over the next few months, with default encryption of group messages and Instagram messages to come later. Regardless, this rollout is a huge win for user privacy across the world. Users will also have many more options for messaging security and privacy, including how to back up their encrypted messages safely, turning off “read receipts,” and enabling “disappearing” messages. Choosing between these options is important for your privacy and security model, and we encourage users to think about what they expect from their secure messenger.

Backing up securely: the devil is in the (Labyrinthian) details

The technology behind Messenger’s end-to-end encryption will continue to be a slightly modified version of the Signal protocol (the same as WhatsApp). When it comes to building secure messengers, or in this case, porting a billion users onto secure messaging, the details are the most important part. In this case, the encrypted backup options provided by Meta are the biggest detail: in addressing backups, how do they balance security with usability and availability?

Backups are important for users who expect to log into their account from any device and retrieve their message history by default. From an encryption standpoint, how backups are handled can break certain guarantees of end-to-end encryption. WhatsApp, Meta’s other messaging service, only began offering end-to-end encrypted backups a few years ago. Meta is also rolling out an end-to-end encrypted backup system for Messenger, which they call Labyrinth.

Encrypted backups mean your backed-up messages will be encrypted on Facebook servers and won’t be readable without your private key. Enabling encrypted backups (necessarily) breaks forward secrecy, in exchange for usability. If an app is forward-secret, then you could delete all your messages and hand someone else your phone, and they would not be able to recover them. Weighing this tradeoff is another factor to consider when choosing how to use secure messengers that give you the option.
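
A minimal sketch of why this tradeoff exists (illustrative only, not Meta’s Labyrinth design): a forward-secret ratchet derives a fresh key per message and deletes it after use, while a backup must retain either keys or plaintext, so past messages stay recoverable.

```python
# Illustrative hash ratchet; real protocols (e.g., Signal's) are richer.
import hashlib

def ratchet_step(chain_key: bytes) -> tuple[bytes, bytes]:
    """Derive a one-time message key and the next chain key."""
    message_key = hashlib.sha256(chain_key + b"\x01").digest()
    next_chain = hashlib.sha256(chain_key + b"\x02").digest()
    return message_key, next_chain

chain = hashlib.sha256(b"shared session secret").digest()
backup = []  # what a backup system retains

for plaintext in [b"hello", b"meet at 6"]:
    message_key, chain = ratchet_step(chain)
    backup.append(plaintext)  # the backup keeps a recoverable copy
    del message_key           # forward secrecy: the key is gone after use

# Knowing only `chain` now, old message keys cannot be recomputed (the hash
# is one-way); but everything in `backup` remains readable.
```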

If you elect to use encrypted backups, you can set a 6-digit PIN to secure your private key, or back up your private key to cloud storage such as iCloud or Google Cloud. If you back up keys to a third party, those keys are available to that service provider and could be retrieved by law enforcement with a warrant, unless the cloud account is also encrypted. The 6-digit PIN provides a bit more security than the cloud backup option, but at the cost of usability for users who might not be able to remember a PIN.
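
As a rough sketch of how a PIN can protect a backup key (illustrative only; Messenger’s actual scheme is more involved and, since six digits are trivially guessable offline, depends on server-side hardware that rate-limits PIN attempts):

```python
# Hypothetical PIN-based key wrapping, not Meta's implementation.
import hashlib, os

def wrap_key(private_key: bytes, pin: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    # A deliberately slow, memory-hard KDF raises the cost of PIN guessing.
    kek = hashlib.scrypt(pin.encode(), salt=salt, n=2**14, r=8, p=1,
                         dklen=len(private_key))
    wrapped = bytes(a ^ b for a, b in zip(private_key, kek))  # stand-in for AES
    return salt, wrapped

salt, wrapped = wrap_key(os.urandom(32), "123456")
# Only someone who knows the PIN (and the salt) can re-derive the wrapping
# key and unwrap the private key.
```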

Choosing the right secure messenger for your use case

There are still significant concerns about metadata in Messenger. By design, Meta has access to a lot of unencrypted metadata, such as who sends messages to whom, when those messages were sent, and data about you, your account, and your social contacts. None of that will change with the introduction of default encryption. For that reason we recommend that anyone concerned with their privacy or security consider their options carefully when choosing a secure messenger.

This Month, The EU Parliament Can Take Action To Stop The Attack On Encryption

By Joe Mullin
November 7, 2023 at 15:10

Update 11/14/2023: The LIBE committee adopted the compromise amendments by a large majority. Once the committee's version of the law becomes the official position of the European Parliament, attention will shift to the Council of the EU. Along with our allies, EFF will continue to advocate that the EU reject proposals to require mass scanning and compromise of end-to-end encryption.

A key European parliamentary committee has taken an important step to defend user privacy, including end-to-end encryption. The Committee on Civil Liberties, Justice and Home Affairs (LIBE) has politically agreed on much-needed amendments to a proposed regulation that, in its original form, would allow for mass-scanning of people’s phones and computers. 

The original proposal from the European Commission, the EU’s executive body, would allow EU authorities to compel online services to analyze all user data and check it against law enforcement databases. The stated goal is to look for crimes against children, including child abuse images. 

But this proposal would have undermined a private and secure internet, which relies on strong encryption to protect the communications of everyone—including minors. The proposal even suggested using AI to rifle through people’s text messages and report them to police as possible child abusers.

Every human being should have the right to have a private conversation. That’s true in the offline world, and we must not give up on those rights in the digital world. We deserve true private communication, not bugs in our pockets. EFF has opposed this proposal since it was introduced.

More than 100 civil society groups joined us in speaking out against this proposal. So did thousands of individuals who signed the petition demanding that the EU “Stop Scanning Me.” 

The LIBE committee has wisely listened to those voices, and major political groups have now endorsed a compromise proposal that includes language protecting end-to-end encryption. Early reports indicate the protection will be thorough, including a prohibition on client-side scanning, a technique for bypassing encryption.

The compromise proposal also takes out earlier language that could have allowed for mandatory age verification. Such age verification mandates amount to requiring people to show ID cards before they get on the internet; they are not compatible with the rights of adults or minors to speak anonymously when necessary. 

The LIBE committee is scheduled to confirm the new agreement on November 13. The language is not perfect; some parts of the proposal, while not mandating age verification, may encourage its further use. The proposal could also lead to increased scanning of public online material, which could be undesirable depending on how it’s done.

Any time governments access people’s private data, it should be targeted, proportionate, and subject to judicial oversight. EU legislators should consider this agreement the bare minimum of what must be done to protect the rights of internet users in the EU and throughout the world.

EFF, ACLU and 59 Other Organizations Demand Congress Protect Digital Privacy and Free Speech

September 26, 2023 at 16:50

Earlier this week, EFF joined the ACLU and 59 partner organizations to send a letter to Senate Majority Leader Chuck Schumer urging the Senate to reject the STOP CSAM Act. This bill threatens encrypted communications and free speech online, and would actively harm LGBTQ+ people, people seeking reproductive care, and many others. EFF has consistently opposed this legislation. This bill has unacceptable consequences for free speech, privacy, and security that will affect how we connect, communicate, and organize.

The STOP CSAM Act, as amended, would lead to censorship of First Amendment protected speech, including speech about reproductive health, sexual orientation and gender identity, and personal experiences related to gender, sex, and sexuality. Even today, without this bill, platforms regularly remove content that has vague ties to sex or sexuality for fear of liability. This would only increase if STOP CSAM incentivized apps and websites to exercise a heavier hand at content moderation.

If enacted, the STOP CSAM Act will also make it more difficult to communicate using end-to-end encryption. End-to-end encrypted communications cannot be read by anyone but the sender or recipient—that means authoritarian governments, malicious third parties, and the platforms themselves can’t read user messages. Offering encrypted services could open apps and websites up to liability, because a court could find that end-to-end encryption services are likely to be used for CSAM, and that merely offering them is reckless.

Congress should not pass this law, which will undermine security and free speech online. Existing law already requires online service providers who have actual knowledge of CSAM on their platforms to report that content to the National Center for Missing and Exploited Children (NCMEC), a quasi-government entity that works closely  with law enforcement agencies. Congress and the FTC have many tools already at their disposal to tackle CSAM, some of which are not used. 

Today The UK Parliament Undermined The Privacy, Security, And Freedom Of All Internet Users 

By Joe Mullin
September 19, 2023 at 15:50

The U.K. Parliament has passed the Online Safety Bill (OSB), which says it will make the U.K. “the safest place” in the world to be online. In reality, the OSB will lead to a much more censored, locked-down internet for British users. The bill could empower the government to undermine not just the privacy and security of U.K. residents, but of internet users worldwide.

A Backdoor That Undermines Encryption

A clause of the bill allows Ofcom, the British telecom regulator, to serve a notice requiring tech companies to scan their users–all of them–for child abuse content. This would affect even messages and files that are end-to-end encrypted to protect user privacy. As enacted, the OSB allows the government to force companies to build technology that can scan regardless of encryption–in other words, to build a backdoor.

These types of client-side scanning systems amount to “Bugs in Our Pockets,” and a group of leading computer security experts has reached the same conclusion as EFF–they undermine privacy and security for everyone. That’s why EFF has strongly opposed the OSB for years.

It’s a basic human right to have a private conversation. This right is even more important for the most vulnerable people. If the U.K. uses its new powers to scan people’s data, lawmakers will damage the security people need to protect themselves from harassers, data thieves, authoritarian governments, and others. Paradoxically, U.K. lawmakers have created these new risks in the name of online safety. 

The U.K. government has made some recent statements indicating that it actually realizes that getting around end-to-end encryption isn’t compatible with protecting user privacy. But given the text of the law, neither the government’s private statements to tech companies, nor its weak public assurances, are enough to protect the human rights of British people or internet users around the world. 

Censorship and Age-Gating

Online platforms will be expected to remove content that the U.K. government views as inappropriate for children. If they don’t, they’ll face heavy penalties. The problem is, in the U.K. as in the U.S., people do not agree about what type of content is harmful for kids. Putting that decision in the hands of government regulators will lead to politicized censorship decisions. 

The OSB will also lead to harmful age-verification systems. This violates fundamental principles of anonymous and simple access that have existed since the beginning of the Internet. You shouldn’t have to show your ID to get online. Age-gating systems meant to keep out kids invariably lead to adults losing their rights to private and anonymous speech, which is sometimes necessary.

In the coming months, we’ll be watching what type of regulations the U.K. government publishes describing how it will use these new powers to regulate the internet. If the regulators claim their right to require the creation of dangerous backdoors in encrypted services, we expect encrypted messaging services to keep their promises and withdraw from the U.K. if that nation’s government compromises their ability to protect other users. 
