I don't get the rant about VPNs. Many ISPs (especially in not so democratic countries) collect a lot of data, and might sell it or store it for access for the authorities.
Tor is great for privacy, but usability is poor; it's really slow. VPNs are a great middle ground to just route all your traffic through. And yes, there are better and not-so-good providers. Some of them take privacy very seriously, others not so much (do your research which one to trust). Many users also don't care about privacy so much, they just want to circumvent censorship or geoblocking.
Anonymity, privacy, confidentiality, (edited to add) censorship resistance / circumvention are different properties oftentimes perceived with some overlap. Different products and different audiences will provide / desire/need/prefer different subsets.
P.S. also, for some people, privacy from the target site/resource/network will be much more desirable and important than certain aspects w.r.t. their ISP.
And finally, there's a scale (see: threat models) - sometimes not leaking info or identity to some website / entity is enough, and there's no need to avoid state level actors, global passive adversaries, sophisticated timing correlation attacks and other funky creatures. Simplicity and ~zero maintenance (e.g. using some VPN with a killswitch option) can be important factors in themselves.
edit P.P.S. also see this comment of mine: https://news.ycombinator.com/item?id=43096066 ; I legitimately and honestly believe that we should be (re)reading James Mickens rants (the usenix papers) and that this would give us more insight and guidance in relation to such security topics.
All I need to set up with my VPN provider is a long-ass random number and a Monero wallet. Show me a residential or business ISP that provides this level of privacy.
My argument is: some/many providers are known to abuse user data, are required by law to store it, and are actively and legally selling it.
Most VPN providers don't do that. Some might, but a lower probability is still better than knowing that the ISP is legally required to store your data and give them to the authorities.
(this really depends on the country, in some countries ISPs are required not to store connection data, in others they are)
If I ran a spy agency or specialised in blackmail the first thing I'd do would be to run a VPN service. We have no way of auditing their trustworthiness.
Anybody expecting to get enough privacy to commit crimes using a VPN is definitely naive. I think most people doing important privacy stuff, like being whistleblowers in authoritarian regimes, know that well enough. As part of a multi-pronged strategy to reduce the surface area exposed to creepy data brokers, though? Works fine for me.
Exactly the point. It's a security measure for regular people, who don't commit crimes and aren't a high value target for government agencies.
With a VPN your employer can't track what websites you're visiting over the WiFi in the break room (looking for a new job, visiting the union website for legal advice).
The hotel can't track what you're doing on their WiFi.
While traveling to not-really-democratic countries, the government can't spy on you (in many Muslim countries the typical US/European browsing behavior is probably illegal).
Right. A lot of folks sacrifice the good chasing the unattainably perfect in tech, especially with security. I think the pressure to improve is, on the whole, a positive thing, but it often makes for terrible practical advice.
> regular people, who don't commit crimes and aren't a high value target for government agencies.
> While traveling to not-really-democratic countries, the government can't spy on you (in many Muslim countries the typical US/European browsing behavior is probably illegal).
Aren’t these two threat models already completely different?
IMHO if an authoritarian government considers what you’re doing to be a crime, you should not be doing it over a VPN you saw advertised on a billboard
If VPN providers did that, it would soon be public knowledge.
For some ISPs the abuse of user data is a known fact. So VPN is just the safer bet. But don't get me wrong, I'm not claiming this is impossible, it's just less likely.
How long did Israel’s supply chain attack on Hezbollah’s pagers stay clandestine?
How long did the USA keep it hidden that they owned the Swiss company that manufactured encryption devices for other countries?
They are extremely adept at finding ways to use the secret knowledge without blowing their source. A good example of this is the scene in _The Imitation Game_ after they break the Enigma code for the first time, but Alan Turing explains that they can’t notify the allied ship convoy without leaking that fact.
This may be a useful notion and term to start using when thinking about these things: https://en.wikipedia.org/wiki/Threat_model (not being smirky; I should apply it more often myself, e.g.)
See Figure 1. He calls it the "Mossad / Not-Mossad threat model" :) But, joking aside, it's good to identify oneself on a scale, or one's expectations for some system scale-wise and property-wise (incl. which security properties, esp. privacy vs. anonymity!), before looking closer at it.
As long as you are not a terrorist, all those surveillance programs will never be used against you. It would put those programs at risk, by exposing them.
If you need to protect yourself against the most sophisticated surveillance programs, a lot of opsec will be required.
VPNs can protect you against some creep on a public WiFi spying on your DNS requests. They can protect against some low-tech state-run mass surveillance.
It might be true. But I know that the shady cop in my neighborhood could easily put in a request to my ISP to get some data about me. But he could never get the same data from the NSA. I'm not a terrorist, so I don't feel threatened by the NSA. The thing I need to protect myself against is corrupt law enforcement.
If you use a VPN provider that only has one set of servers provided by one hosting provider then sure. If you’re connecting to any number of servers then that’s probably less of a problem. I’m much more concerned with Starbucks collecting data on their free WiFi than one of a number of hosting providers or whoever I sometimes exit through. The services you log into selling that data to the same brokers as everyone else is a much bigger vulnerability IMO.
The problem with those reports is they still mostly come down to the auditor taking the provider's word for it.
Maybe the staging environment the auditor looked at doesn't log anything, but what about production? Maybe it doesn't log anything today, but who says it didn't yesterday, and that they won't turn logging (back) on tomorrow?
At least where I live, there is strict legislation governing what ISPs can or must collect, and what they do with that information. Offences under that telecommunications interception legislation come with prison sentences, and so there is a strong incentive for individual employees to comply.
For some people, moving that exposure to a VPN provider mitigates their risk. For others, it increases it.
What VPNs can offer though are censorship resistance and masking your IP from non-governmental (and perhaps governmental if the logs really aren't kept) third parties. Which for many people is what they really need more so than full tor anonymity.
Anything and everything depends on the threat model. Often, a popular and trusted VPN is the first and necessary step to hide in the crowd and drop your most logged geo ID, which is the endpoint IP.
A middle option, and one that I use, is to run DNS-over-HTTPS on my pi-hole machine to tunnel DNS out. Without DNS information the local ISP can tell a lot less about the sites I'm visiting, since many sites are behind the same CDN IP pools.
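To make concrete what the ISP stops seeing: with DNS-over-HTTPS, the query travels as an ordinary DNS wire-format message inside the body of an HTTPS request, so the ISP only observes a TLS connection to the resolver. Here's a minimal sketch of encoding such a query (this is a generic RFC 1035/8484 illustration, not the pi-hole's actual implementation):

```python
import struct

def build_dns_query(hostname: str, qtype: int = 1) -> bytes:
    """Encode a DNS wire-format query (RFC 1035) suitable as the body of a
    DoH POST with content-type application/dns-message (RFC 8484)."""
    # Header: ID=0 (RFC 8484 suggests 0 for cache friendliness), RD flag set,
    # one question, no answer/authority/additional records.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # Question section: each label is length-prefixed, terminated by a zero byte.
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in hostname.rstrip(".").split("."))
    question = qname + b"\x00" + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

query = build_dns_query("example.com")  # 29 bytes: 12 header + 13 qname + 4
```

On the wire, this blob is wrapped entirely inside TLS to the resolver, so the only thing visible to a passive observer is that you talked to that resolver at all.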
My one big nitpick with it is that I can't use it from both my Chromebook and phone outside of the 'screencasting' thing where I can access the phone from the Chromebook. But that's kind of clunky.
I detest using a phone to write things if there is a keyboard available.
After Signal got rid of SMS support, everyone I knew just got rid of Signal. I had on-boarded all my friends and family. But now, it's just me and a few techie friends who still use it.
"SMS support" is not something that most of the people in the world (barring North Americans I guess) cared about. So there were "normal people" using it before and there are "normal people" using it now. I primarily use it for closest friends and family. For everyone else there's WhatsApp, on which I can go dormant for days.
I use it, my friends use it, my family uses it, and it's often the only chat app I have in common with a stranger when I need to keep in touch. I didn't onboard my family, they use it independently. It's quite popular over here.
Signal now has usernames (not just phone numbers), and has a way to migrate most of your history onto new devices without having to use the terrible backup UX.
If it gets to the point of migrating all your media (not just the last N days) as well, it'd be to the point that I'd start asking everyone I know to use it without reservations.
Then Signal jumped the shark, with the server-side data forced PIN fiasco - the app had a full page, non-dismissable popup on start which you could only get rid of by agreeing to use and supplying a PIN for server-side data, which meant I could not use Signal. After a month of not being able to use it, the popup went away, but at that point I no longer trusted Signal.
I'm also troubled by the year-long period where the published source code wasn't updated.
I think in general people use WA, but all the people I talk to have Signal as well. I'll be moving on once I put the time in to find a replacement.
I know in Switzerland, only Signal is used by State employees - teachers writing to parents and so on.
Absolutely exists in Germany. E.g. the parent chat group for day care is Signal. Not as ubiquitous as WhatsApp (which was the chat of the previous day care) but not obscure either.
I’ve been using it a long time. At first I only had my wife on it, but over time I got my friend group on it and other random folks.
Interestingly I’ve seen more random people sign up for it (I get an alert if their phone number is already in my contacts) in the last month than in any month ever.
Basically all of my friends use it. I insist. I trust it; I don't trust WhatsApp (not because WA's crypto is weak, just because I don't trust who runs it now).
Huh. Opposite experience here (New England). Signal increasingly feels like the de facto messenger. I've met more random people with Signal than with iMessage, WhatsApp or Messenger.
My German bubble is also maybe half Whatsapp and half Signal. Many contacts have both.
It looks the same, feels the same and does the same. Notifications appear in exactly the same place, regardless of the messenger.
I don't understand why people are so passionate about not using the supposedly more private service next to the trivially less private one.
I also don't think adding messengers is such a big deal in non-technical circles, given the need is there. They just install it and it's done.
An example: One of my younger generation family chat groups moved from Facebook Messenger to Signal because Messenger started blocking links to some streaming services in 2018. - No blocked links since.
No idea what normal people do, but it's still by far the dominant messenger among everyone I know. I have a couple contacts on WhatsApp, and half a dozen or so people who are still using SMS, but everyone else - and all the group chats - are on Signal.
I still use it, but it was primarily me and my wife. Since the Messages app on iOS has encryption, we haven't felt the need to continue. I use it with other tech friends. I feel like Signal could have been bigger if they adopted usernames long ago. I don't want to share my number with anyone anymore.
I've had the exact same experience you did. Burned a bit of appearance of having "technical wisdom" on that one, won't be pushing anyone anywhere anytime soon.
Signal used to be a pretty good SMS app, even disregarding the cryptography aspect. That's how I treated it, and that's how it became successful among "normies". It was a good SMS app with encryption as a bonus. So much so that when Signal discontinued SMS access, I was tempted to take an older version of Signal, strip out all the crypto, make it play well with the Android SMS system, and republish it as a simple SMS app.
But it seems that Signal failed to grasp the importance of SMS; instead of fixing the ongoing issues, they let it rot before discontinuing it completely. Yes, SMS is bad for privacy, but they are not in a position to change that: they don't have the critical mass that WhatsApp has.
They thought they could take advantage of the boost they got with the Whatsapp/Facebook controversy and Elon Musk "use Signal" tweet to make people change their ways, probably not realizing that privacy is not that strong an argument for people outside of their bubble and that they have to offer "something else", and good SMS integration is "something else". And yes, having users that have "nothing to hide" is important for a privacy-focused app, because otherwise, the simple fact that someone is using that app can make him a target (see: EncroChat).
SMS was a significant danger for an app that aims for the highest security. It was confusing that Signal was super secure unless you used SMS. It was actively dangerous if people got confused and sent private information over SMS. That is the sort of thing that leads to people "falling out a window".
This is a problem for iMessage too, but people don't have the same security expectations for it. The WhatsApp model, with everything going over the app and being secure, is a better approach. I would argue that WhatsApp was more secure than Signal with SMS, the focus on security outweighed by the danger of sending messages in the clear.
But Signal already made some compromises for convenience over privacy, such as the use of phone numbers and contact discovery. It can be a pretty insidious problem because it becomes possible to check if the owner of a phone number is a Signal user. Imagine you had an abusive relationship, you broke up, you blocked the other's number, and you go on with your life. You have each other in your contact lists; on your side, that's for blocking purposes. A few months later, you install Signal, the other did too, and Signal then puts you in contact with the person you really don't want to get in contact with...
Sure, Signal does it securely, but it doesn't solve the problem. It is possible to prevent it now by choosing the right options, but the problem is not obvious at all and you are very likely to fall for it if you are in this situation.
The conflict between discovery and privacy is one that Signal acknowledges, but their default choice is not "maximum security", it is a compromise. SMS support is also a compromise, but I believe that it is actually a lesser compromise, because the UI makes it very obvious that your message won't be secure if you send a SMS.
WhatsApp doesn't need to make the SMS compromise because they already have a well-established network, Signal doesn't have the luxury.
Phone numbers are still required for registration, and the option to make it undiscoverable (so that someone who knows your phone number won't find you on Signal) is not the default, and I think you can't set it before you create your account, leaving a small window of opportunity for someone to discover your account.
If Signal was uncompromising on privacy, being undiscoverable by phone number would be the default. In fact, they wouldn't ask for a phone number at all. These are reasonable compromises, but compromises nonetheless.
SMS almost doesn't matter outside of the USA. No one in my circle has once asked if Signal had SMS support, neither have I ever received an SMS since like 2010.
I live in France, SMS is really big here. Not as much as it once was, but it is still one of the most popular ways to send text messages, probably not the most popular anymore, but close. In addition, it is still the standard way for businesses to notify you and for 2FA when they don't have a dedicated app.
The big argument for getting non-geek friends and family members to get Signal is "take this app, nothing will change for you, you can still SMS the way you did before, but you will have better privacy". Early versions of Signal could also import SMS, so it really was transparent. Well, mostly; there were still some issues they could have addressed, but didn't.
You can't use that argument anymore; the most likely reaction you will get is "I already have too many apps, I don't want another one, it is too complicated". Or you may install Signal for them, and they proceed to send you another SMS, because that's what they always do, and it works, and they still have their SMS app. Or they already have WhatsApp, and so do you, and they are perfectly happy with it.
Really, for me SMS was the little thing Signal had that could help its adoption by the public at large.
The most compelling bit here for me is RCS. Lots of devices will auto upgrade an SMS chat to RCS if they think that both ends support it.
Signal cannot support RCS (there's no API), so you'd end up with a situation where someone uses Signal to send an SMS, the recipient's device then replies over RCS, and Signal never gets the reply. Not great for usability.
> I don’t think Signal is perfect, by any means. […] But if you need a tool to communicate privately with your friends and family–even if your chats are boring, mundane, and totally legal–Signal is the best damn choice I can recommend.
> At this point, we could try to review all of curve25519-dalek for implementation flaws, but that would take a long time and make for an excruciatingly dull read
> At some point in the future, I should review curve25519-dalek in detail to ascertain its quality.
I enthusiastically recommend using Signal for normal communication. So long as using a private messenger is reserved for "illicit" use cases, the metadata of using it is itself a danger to the users.
I'll be blunt and say I don't trust governments to "do the right thing" so the only way past this is to normalize using it for all use cases and doing so protects vulnerable people.
Having good laws is not enough, not even if they are hard to change. The Netherlands had decent legal protections for Jewish people, right up to May 15th, 1940. After that, those laws didn't protect any of them.
There are two likely ways I could interpret your parent regarding crime.
One is that they worry that if only criminals use the tool, the tool will be targeted by legislators and simple possession/usage of the tool will eventually be criminalized.
The other is that there needs to be a crowd of innocent people to hide the guilty person and make the investigators / prosecutors work hard to find other corroborating evidence of the crime (not just using the channel of comms).
A third that has nothing to do with crime is that all people deserve to have strong privacy and we should normalize using a very secure method of comms.
I think you missed the point. WhatsApp and Telegram don't offer the safety or security Signal does.
They removed SMS to preserve that safety and security. It isn't a for-profit business. We are the ones motivated to help them get adopted. If donations don't correlate with increased users, Signal actually can't tolerate growth.
Do whatsapp or telegram have the same models? Can you tell if you are the customer or the product with them?
With that level of nihilism/skepticism why would paying for the privilege give confidence you are not a product? Only a similar pile of words keeps them from double-dipping and making you both customer and product.
About a decade ago… I was all up in the craze of checking out different chat platforms and going for the most secure one. Back then, Signal was BY FAR the least stable. I kept losing messages, had loads of problems using it on both my phone and laptop, and I could not trust it to actually deliver chats to the other person.
I know this was a decade ago, I assume things changed, but I couldn’t have tried to get casual folk on an unstable platform. So people moved around, shopped for other platforms, and WhatsApp is the one that stuck. It’s now the default platform in my country.
Signal fucked up their launch by being an unstable piece of software with subpar UX when it mattered. It’s by far the most secure, but I just can’t see it ever gaining critical mass unless other platforms somehow explode gloriously.
The "keyed SHA-256" in key transparency's leaf_hash is ok in its current state, but limits future evolution (or presents a risk if that evolution is not done carefully): SHA-256 is subject to length extension.
I could not follow where the leaf_hash is used carefully enough to figure out exactly how dangerous this is in the broader context and taking future evolution into account. But it's clearly safe as it is used now because all expected inputs have the same length.
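For context on why length extension matters here: with a bare keyed hash of the form SHA-256(key || msg), anyone who sees a digest can append data and compute a valid digest for the extended input without knowing the key. HMAC's nested construction is the standard fix. A minimal sketch implementing HMAC-SHA256 from the raw hash (I don't know the transparency log's exact internal construction; this just illustrates the extension-resistant pattern):

```python
import hashlib
import hmac

def hmac_sha256(key: bytes, msg: bytes) -> bytes:
    """HMAC per RFC 2104: H((K' ^ opad) || H((K' ^ ipad) || msg)).
    The outer hash over the inner digest is what defeats length extension:
    extending the inner input can't produce a valid outer digest."""
    block = 64  # SHA-256 block size in bytes
    if len(key) > block:
        key = hashlib.sha256(key).digest()
    key = key.ljust(block, b"\x00")
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5C for b in key)
    inner = hashlib.sha256(ipad + msg).digest()
    return hashlib.sha256(opad + inner).digest()

# Sanity check against the stdlib implementation.
assert hmac_sha256(b"leaf-key", b"leaf-data") == \
    hmac.new(b"leaf-key", b"leaf-data", hashlib.sha256).digest()
```

As the comment above notes, fixed-length inputs also neutralize extension attacks, which is why the current usage is fine; the HMAC shape only becomes important if the input format ever grows variable-length fields.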
Signal's threat model is fundamentally flawed. If you distrust Signal's infrastructure and need E2E to secure yourself, then having a closed client from Google Play or the App Store in direct control of the developers ensures nothing. XMPP is much safer and more reliable and uses the same encryption technology as Signal; even Telegram does better here, as it allows non-official clients that also support E2E. Signal is useless. That, and the fact Signal doesn't even run on some devices, whereas XMPP or Telegram usually will.
XMPP is great, until you have to explain to granny how to use it.
Signal isn't perfect, but I can just tell Granny ‘install Signal, it's like WhatsApp or all the other messaging apps’.
The best crypto protocol is useless if no one uses it.
I don't know if you're familiar with the Olvid app. Its cryptographic system has been audited and is used by the French state. It allows you to create accounts that are not linked to a phone number or email address, etc. Lot of great features for tech-savvy crypto enthusiasts.
The only problem is that you have to be physically present and scan a QR code on each client to add a contact. So it does prevent spoofing, but it's incredibly impractical. Setting up a group chat is a real challenge because all the participants have to have each other's keys. Nobody wants to revive the PGP parties
Hey cmon.. I do; but, like, mostly for fun, and for the ceremonial readouts of usernames / key fingerprints etc... a bit of demonic summoning vibe, it can be...
But in seriousness: yes again and again, people miss the good chasing the perfect.
Also (to grandparent comment iirc): Signal reproducible builds have existed for a while, fyi!
"Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn!" It's the correct cryptographic fingerprint. However, you don't have any government-issued ID on you, Mr. Cthulhu?
My grandmother lives 700 km from my home, I can't be there every time she changes her phone or breaks something.
I can instruct a non-technical relative to install Signal. Installing Conversations and configuring the right account on the right server is another story.
Maybe https://quicksy.im/ would work in that case. Easy to install like Signal, but it uses your phone number as a routable XMPP address. Unsophisticated users get easy onboarding and nobody else gets locked into some walled garden.
It's encrypted by default. I agree that having no option to send unencrypted messages would increase security somewhat, but the unsophisticated user can always be tricked into sending an unencrypted message. They will just use a different app if something doesn't work. Not really fixable on the application level.
The Signal service’s Terms of Service do not prohibit non-official clients, so anyone can build and run a Signal client from source.
The Signal devs claim they don’t want people distributing forks of the foss client that point at the official Signal servers, but they actually release it under a free software license, so you are free to do precisely that, and all they can do is complain.
The software is free software governed by the software license, and the API/service is governed by the ToS. Neither prevents alternative clients.
Didn't they intentionally take down other Signal clients from Google Play? Although, to play devil's advocate, they seem to have stopped being so aggressive. Still scary and concerning.
With XMPP I can run (and audit) everything myself. You can't do this with Signal. Some proprietary messenger that's hostile to 3rd-party clients as well as wanting my phone number will never compete with tried and tested open source technology like XMPP. There is a reason military and government orgs use their own infrastructure.
> With XMPP I can run (and audit) everything myself.
You're saying this in response to a blog post where I basically did an over-the-weekend audit of Signal's cryptography. I don't know if you understand the irony of that.
I dislike this as well, but since the introduction of usernames, it's at least no longer required to give strangers your phone number to converse with them.
> will never compete with tried and tested open source technology like XMPP.
The converse is true: XMPP+OMEMO do not qualify as a Signal competitor. I've outlined my reasoning in the linked post.
Until the entire ecosystem uses the latest version of the protocol, encryption is always on, and there is no plaintext fallback, XMPP is a poor solution for private communications.
> There is a reason military and government orgs use their own infrastructure.
Are you insisting that XMPP is "military grade encryption"?
You can't trust the server on Signal. Anything else, especially theoretical navel-gazing about plaintext fallbacks or make-believe audits of the client half of the system is inconsequential against the simple fact: You can't trust the server on Signal.
You don't have to. Reproducible builds, audited E2E and the other technical details combine to remove the need to trust the server for anything other than availability.
Anything else is uninformed navel-gazing about the risk without understanding the subject.
But the server on Signal may store who communicates with whom. The most important information of all; and they can even connect that to phone numbers, so they've basically got everybody's real names.
Neither of those has anything to do with that kind of security. The preparation process for sealed sender looks very complicated and iffy, but maybe someone has checked that the protocol is secure-- I don't see any reason for such a complicated procedure, and I see such complicated procedures as something which can only be intended to hide insecurity, but let's assume it is secure.
You still have to connect to them and deliver this to the Signal servers. They'll have your IP and will know approximately where you are physically.
There is no way to make this kind of thing secure without some kind of remailer system and without many peer-to-peer connections.
Almost all proper anonymous remailers have been shut down. What remains, Tor for example, are insecure parodies of an anonymous remailer, and this is justified to the users as being for the sake of speed, but makes the system useless for anonymity. Even Mixminion is insecure if you can observe the full network, since you can count the packets received by the clients and there aren't dummy packets.
> The preparation process for sealed sender looks very complicated and iffy, but maybe someone has checked that the protocol is secure-- I don't see any reason for such a complicated procedure, and I see such complicated procedures as something which can only be intended to hide insecurity, but let's assume it is.
I did, in fact, check it when writing TFA.
> You still have to connect to them and deliver this to the Signal servers. They'll have your IP and will know approximately where you are physically.
Okay, let's actually think this through.
You have millions of people using Signal. When you send a message, the message is encrypted such that only the recipient can decrypt the envelope to verify if you're even permitted to send to them. The only thing the server gets is an encrypted envelope and some 96-bit random string that tells the server where to send the envelope.
This 96-bit random string is derived from your profile key (which is also encrypted and, at minimum, rotates every time you block someone).
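Mechanically, that kind of unlinkable token could be produced along these lines. This is a hypothetical HKDF-SHA256 sketch, not Signal's actual derivation (which differs in detail); the point is only that a PRF output can't be inverted back to the profile key, and rotating the key unlinks old tokens from new ones:

```python
import hashlib
import hmac

def derive_delivery_token(profile_key: bytes) -> bytes:
    """Hypothetical 96-bit token via HKDF-SHA256 (RFC 5869).
    The server sees only the token; without the profile key it cannot
    link tokens to identities or to each other across key rotations."""
    salt = b"\x00" * 32                                           # fixed salt
    prk = hmac.new(salt, profile_key, hashlib.sha256).digest()    # HKDF-Extract
    info = b"delivery-token"
    okm = hmac.new(prk, info + b"\x01", hashlib.sha256).digest()  # HKDF-Expand, T(1)
    return okm[:12]  # truncate to 96 bits

old_token = derive_delivery_token(b"\x01" * 32)
new_token = derive_delivery_token(b"\x02" * 32)  # key rotated, e.g. after a block
assert len(old_token) == 12 and old_token != new_token
```

So even a fully logging server accumulates opaque 12-byte routing handles, not a social graph keyed by identity.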
Let's assume Signal's server has been maximally pwned by the NSA to slurp as much data and metadata as possible in the most malicious manner logically possible. In this thought experiment, they will now log things that the server currently does not because of a government-mandated implant.
In this scenario, the NSA will learn: someone from an IP address sent an encrypted envelope to an unknown recipient.
Later, they will learn that another user (and their IP address) accessed the same envelope when downloading it from the Signal server.
You might be sitting there, rubbing your hands, thinking, "Wow, we sure got them." Except that there are millions of Signal users, many of whom share IP addresses over time, and you can't even be reliably sure that a given Delivery Token maps to a targeted recipient.
Furthermore, you can't reliably distinguish between 1:1 messages and group messages, due to how groups are implemented (see: zkgroup).
> There is no way to make this kind of thing secure without some kind of remailer system and without many peer-to-peer connections.
When you say "make this kind of thing secure", are you talking about hiding IP addresses and phone numbers from Signal's servers? Or are you suggesting what Signal is already doing is not possible in the first place?
Cryptography isn't magic. It is comprehensible to mortals.
Yes. So they know the IP of the sender, and of the receiver, and they know the time, and they can contact the ISP and ask them who had these IP addresses at the given time and thus determine with certainty who communicated with whom.
We used to have a law in the EU requiring ISPs to keep track of this. It was determined to be illegal, so many ISPs probably don't store this information anymore, but I am sure that many do, and there can be other actors-- maybe someone from another country comes to your country, gets a job at an ISP to modify their software and make it do what he wants.
I don't know about this Salt Typhoon, but as I interpret it, there was an excellent secure system, and then people built some kind of front end on top of it so they could enter their orders into it, and then somebody compromised that front end and basically started getting all the data that the government could order to be gathered. I'm thinking that's going to include who has what IP.
I agree though, the complexity is maybe okay. It doesn't sound like a super terrible protocol or that involved. I suppose the description I saw of it felt more convoluted. At the same time, I felt that the complexity of the presentation was hiding something, which I think might well be true-- it might even be this simple fact that the central server still has every opportunity to determine who sends things to whom.
Anyway, people can get IP addresses of phones, public WiFi still tells them where you were, and it only takes one mistake of you connecting through your own phone for them to know for certain.
This taken together means there's no security with regard to contacts if the server or the network is compromised.
There are excellent solutions to this: dummy messages and remailing. Without those no system can ensure that contact information is kept secret.
But Signal offers optional features so that it knows neither the former nor the latter (usernames, etc.). Re: the former - as discussed, there's sealed sender (iirc it's been quite some time since they released it), but also, depending on how you mean it, the situation may be less scary for you even without sealed sender.
> so they've basically got everbody's real names.
Based on the above, as well as on the fact that having a phone number does not by itself == real-name PII / identity, claiming "so they've basically got everbody's real names" is not just overly dramatic but also patently false and disingenuous. :( I'm sure you did not mean to dissuade people, but c'mon. Generalising in such a way and then adding hyperbole does not help, imho.
Yeah, of course you have to ask the phone company, but if you need to use Signal at all, then you are already assuming an adversary that does more than just snoop on the public Wifi he offers.
Such an adversary can ask the phone company who owns a certain phone number, and they will tell him.
> but if you need to use Signal at all, then you are already assuming an adversary that does more than just snoop on the public Wifi he offers.
No, not necessarily. In fact I'd claim that we should all use Signal, so that usage of Signal would not imply any kind of user profile (it would not constitute any kind of meaningful signal from which one could infer what kind of user they are).
I do believe that there's a spectrum of users with a corresponding spectrum of appropriate threat models.
If my own threat model (one I felt I had to adopt) was particularly gnarly, I would (1) use Signal sandboxed / in a VM, tunnelling all traffic through Tor (e.g. Tor listens on a socket exposed to the VM so that there's no easy way to work around the tunnel), and/or (2) if even gnarlier - set up pre-shared one-time pads with counterparties where possible, and use them to authenticate further, perhaps encrypt further (maybe just encrypting a session key, to save most of the OTP) - essentially, definitely not deem Signal enough by itself.
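For what it's worth, the pad-conserving trick in (2) is simple to sketch (a toy illustration, not a vetted design - the point of an OTP is simply XOR-ing with pad bytes that are never reused):

```python
import hashlib
import hmac
import os

# Pre-shared one-time pad material, exchanged in person beforehand.
# In reality this would be a large file, with every byte used at most once.
pad = os.urandom(64)

def otp_xor(data: bytes, pad_slice: bytes) -> bytes:
    """XOR data against fresh pad bytes. Never reuse a pad slice."""
    assert len(pad_slice) >= len(data), "ran past the pad"
    return bytes(a ^ b for a, b in zip(data, pad_slice))

# Encrypt only a 32-byte session key with the pad (to conserve pad bytes),
# then use that key with an ordinary cipher for the bulk traffic.
session_key = os.urandom(32)
wrapped = otp_xor(session_key, pad[:32])
unwrapped = otp_xor(wrapped, pad[:32])
assert unwrapped == session_key

# Further pad bytes can serve as an out-of-band authentication key,
# e.g. to MAC a message independently of the messenger's own crypto.
tag = hmac.new(pad[32:64], b"hello", hashlib.sha256).digest()
```

The bookkeeping (tracking which pad bytes are spent, on both sides) is the part that makes real OTP usage painful, which is why it only appears in gnarly threat models.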
Signal correctly focuses on privacy, not on anonymity. The wider your set of claims as to what kinds of properties (from the set of: privacy, anonymity, censorship-circumvention, etc.) you provide, the higher the probability you're going to screw up with one of these, I'd say. Better to combine several tools where possible if your needs require it.
> Are you insisting that XMPP is "military grade encryption"?
I don't think that's a fair implication there, and I'd also hope you'd call out "military grade" as a red herring - meaningless marketing speak - and ignore it, instead of trying to trap someone for using it (unless it was a primary point they made, critical to their position, which is clearly not the case here).
Sure, if self hosting is the most important criterion for you, you can't really beat XMPP. Same goes for a richer ecosystem of 3rd-party clients.
But those two are completely unrelated to security. I mean a self hosted solution can be secure, but it's weird to say that Signal can't compete with XMPP on security because it's not self hostable.
> but it's weird to say that Signal can't compete with XMPP on security because it's not self hostable.
No it's not. If I could conditionally control whether your messages are delivered, and when - even if I can't see the contents - that's a huge security vulnerability!
You can't control when "someone's" messages are delivered, only when "all" messages are delivered. I.e., Signal's servers control general availability, not selective availability.
Moving to a self-hosted service most likely introduces more risk at this point, unless you can keep the service and all related infrastructure up to date and monitored 24/7/365.
> You can't control when "someone's" messages are delivered, only when "all" messages are delivered. I.e., Signal's servers control general availability, not selective availability.
this is the first I've heard this, got a source to prove this is actually true?
> Moving to a self-hosted service most likely introduces more risk at this point assuming you can keep the service and all related infrastructure up to date and monitored 24/7/365
That's not how threat models work; it's not 'more' risk, it's additional work. The risk is the same: Signal has to do it, or 'you' have to do it. The risk and level of effort are the same. The only thing that changes is the directly responsible party. Depending on configuration, it might reduce the risk if you can lower the number of parties you have to trust. I.e., I can verify my servers are up to date, but I have to just trust-me-bro that Signal's servers are always up to date.
> Without authenticating, hand the encrypted envelope to the service along with the recipient’s delivery token.
I'm probably going to need to read the white paper to actually understand this, because currently I'm confused as to how this doesn't result in knowing the destination. Surely I can reverse the delivery token if I'm in control of the servers?
Signal's server doesn't know who sent the message, since that information was Sealed.
Signal's server does know where to deliver the message to, based on a 96-bit random value generated client-side by the recipient (the delivery token). But it cannot know which user has which delivery token. To understand that, you need to look at the zero-knowledge credentials section of the code.
They could randomly drop messages sent to a specific targeted user's delivery token and block messages sent to that targeted user, but they have no way of knowing which user they're targeting. (Also, the client could just enroll a new one if this ever happened and Signal was somehow court-ordered to block delivery to a specific delivery token.)
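A toy model of that routing (names and structure are mine, not Signal's actual code): the server keeps mailboxes keyed only by the opaque token, so the only thing it can target is a token, never a known person:

```python
import secrets

def new_delivery_token() -> bytes:
    """Client-side, unguessable routing handle.

    96 bits, per the description in this thread; the server routes on
    the token alone and never learns which account registered it.
    """
    return secrets.token_bytes(12)

# Server-side view: mailbox per token, nothing linking tokens to users.
mailboxes: dict[bytes, list[bytes]] = {}

alice_token = new_delivery_token()
mailboxes[alice_token] = []

# A sender hands over (token, sealed envelope) WITHOUT authenticating,
# so the server sees neither who sent it nor which user will read it.
mailboxes[alice_token].append(b"<sealed sender envelope>")

# If a token were ever blocked, the client could simply enroll a new one.
replacement = new_delivery_token()
```

This is only the shape of the idea; the hard part, proving you're entitled to a mailbox without identifying yourself, lives in the zero-knowledge credential machinery mentioned above.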
>That's not how threat models work, it's not 'more' risk. It's additional work.
The "most likely" part does a lot of heavy lifting here, and you made my point. The additional work of self-hosting is not trivial in this space. Assuming you either personally dedicate a LOT of time, or personally pay staff to keep up, versus a dedicated team already working on this, the threat model is inverted now.
I self-host a number of services, I'm very pro self hosting. I'm _VERY_ aware there is always a trade off between "I want this to work", "I don't have time to fix this regularly because I have a Job and Family and hopefully other hobbies besides self-hosting a service" and "I need to constantly update and apply changes that sometimes do not gracefully fall into automation or introduce breaking changes".
Now consider that communication is one service and add on self-hosting other services, and the additional updates and knowledge to support it.
Outside of the small group of people who dedicate themselves to hosting and managing their services, which excludes most USERS (it's in the name), the threat model is indeed inverted.
Signal's reproducible builds are not maintained well. Many versions have been released without anyone noticing the mismatches, until a user reports it.
OMEMO was modeled on Signal's encryption. And I don't think being "respected" makes a person's statement less or more worthy, assuming we don't take into account that this same individual failed to follow responsible disclosure and intentionally harassed as well as spread FUD about the Matrix project - something which has been audited by Ukraine, France and other reputable organisations.
> and intentionally harassed as well as spread FUD about the Matrix project.
[citation needed]
I did not harass anyone. Nor did I "spread FUD."
Years ago, I wrote that I do not recommend Matrix after https://nebuchadnezzar-megolm.github.io was published. This was an opinion, but several people pressed me on it.
I followed it up with a first-pass review of libolm, and disclosed it to the Matrix security team. After 90 days, I blogged about it. I literally followed coordinated disclosure. I even shared the draft before it was public so they could be aware of what was written.
Afterwards, the Matrix devs admitted they knew about the side-channels in libolm for years and never bothered to fix them. That should alarm Matrix users more than my disclosure or criticism.
> Afterwards, the Matrix devs admitted they knew about the side-channels in libolm for years and never bothered to fix them.
For anyone reading: Matrix didn’t bother fixing libolm’s sidechannels because instead we switched to a new Rust implementation at https://github.com/matrix-org/vodozemac - which is what the vast majority (installwise) of Matrix clients use. The mea culpa is that libolm should have been explicitly deprecated much sooner.
The whole point of end-to-end encryption is that you shouldn't have to.
The entire point of Sealed Sender and their use of zero-knowledge proofs for group membership is so the server doesn't know who's talking to who, so they can't even selectively censor messages from one person.
Interesting. Who distributes my keys from me to my recipient? Is it someone in the middle? You can see where I'm going with this.
Signal's way to validate that a session isn't man-in-the-middle'd is the same as XMPP: You have to validate the session's fingerprint in real life, or over another secure channel, by scanning each other's QR code, a procedure we'll refer to as "the QR thing".
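As a toy illustration of what that fingerprint comparison accomplishes (not Signal's actual safety-number construction - the real thing uses iterated hashing and a numeric encoding): both sides independently hash the two identity keys in a canonical order and compare the result over a channel the server doesn't control:

```python
import hashlib

def session_fingerprint(my_identity_key: bytes, their_identity_key: bytes) -> str:
    """Toy fingerprint: hash both identity keys in a canonical order.

    Both parties compute this independently. If a man-in-the-middle
    substituted either key, the two displayed values will not match.
    """
    a, b = sorted([my_identity_key, their_identity_key])
    digest = hashlib.sha256(a + b).hexdigest()
    # Group into chunks for human comparison / QR encoding.
    return " ".join(digest[i:i + 8] for i in range(0, 40, 8))

alice_view = session_fingerprint(b"alice-pub", b"bob-pub")
bob_view = session_fingerprint(b"bob-pub", b"alice-pub")
assert alice_view == bob_view  # both see the same string to compare
```

The crypto is the easy part; the whole debate here is about how many humans ever perform the comparison.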
Over more than five years of Signal usage, I personally did this exactly twice.
Now, we can start to imagine the typical Signal user.
One reading is that I'm a minority and at least the vast majority of people do "the QR thing", so most/all sessions are secure from any man-in-the-meddling.
But then, you present the argument that XMPP is insecure because it can send plaintext. So this imaginary Signal user would be careful and privacy-inclined enough to use "the QR thing", but too careless to keep OMEMO on in his XMPP client (where it's on by default in the vast majority of clients)!?
I can't imagine this user. I fail.
The other way to reason about this is that, just like me, no one does "the QR thing". The vast majority of sessions are not protected against an MITM. Note that "the QR thing" is identical on XMPP, so all previous criticisms apply, except for a key difference...
On XMPP, servers are small. Trust is diluted, and in a lot of cases, you know personally your administrator, if you're not tech-oriented, because your administrator is the guy who told you about it.
Even when it doesn't matter, the server does matter a bit, eh?
> Signal's way to validate that a session isn't man-in-the-middle'd is the same as XMPP: You have to validate the session's fingerprint in real life, or over another secure channel, by scanning each other's QR code, a procedure we'll refer to as "the QR thing".
Tell me you didn't read the article, without telling me you didn't read the article.
They're adding Key Transparency to keep themselves honest. Their specific implementation today (which is probably not final) was one of the final parts I reviewed:
If you're going to talk about this with profound ignorance, it's probably wisest to not do so while responding to a blog post that significantly spent time on the piece that debunks your whole premise.
Yes, and furthermore, there's already built-in support for ledger monitors to ensure the honesty and integrity of their log.
The whole point of Key Transparency is to keep the server honest. Publishing may be centralized, but verification is decentralized. This is literally a problem space I'm working in right now! https://soatok.blog/category/technology/open-source/fedivers...
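To make "verification is decentralized" concrete (a toy Merkle inclusion check, not Signal's actual key-transparency code - their log format differs): any monitor holding the published root hash can independently verify that a given key really is in the log:

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def verify_inclusion(leaf: bytes, proof: list, root: bytes) -> bool:
    """Check a Merkle inclusion proof.

    Hash the leaf, then fold in each supplied sibling hash (with a flag
    saying which side it sits on), and compare against the published root.
    Domain-separation prefixes (0x00 leaf / 0x01 node) prevent a node
    from being confused with a leaf.
    """
    node = h(b"\x00" + leaf)
    for sibling, sibling_is_left in proof:
        if sibling_is_left:
            node = h(b"\x01" + sibling + node)
        else:
            node = h(b"\x01" + node + sibling)
    return node == root

# Tiny 2-leaf log: root = H(0x01 || H(0x00||a) || H(0x00||b))
a, b = b"alice:pubkey1", b"bob:pubkey1"
root = h(b"\x01" + h(b"\x00" + a) + h(b"\x00" + b))
assert verify_inclusion(a, [(h(b"\x00" + b), False)], root)
```

The publisher is central, but any party with the root can run this check, which is what keeps the server honest.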
> Repeat after me: The server matters. A lot. Even if you don't want it to.
The only thing the server can influence is availability:
1. Whether or not you can participate in the network to begin with (which is mostly to prevent spam, and is the only component you still need a phone number for today)
2. Deciding whether messages are delivered or not, to everyone.
Signal can't selectively censor users, they can only stop the operation of the entire service at once. Sealed Sender and zkgroup address this.
With key transparency, Signal couldn't even mislead users about which public keys belong to each user if they wanted to.
There are no other powers, besides basic availability, that the server needs to provide.
Just because you're used to technologies where the server has more power than the clients, and where some clients can continue to use OMEMO 0.3.0 in 2025 while the rest of the ecosystem is on 0.8.3, doesn't mean your experience is necessarily relevant here.
As noted elsewhere, Signal has offered reproducible builds since March 2016. If you care that much about client security, why not check that yourself and blow the whistle if the binaries mismatch?
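Checking a reproducible build largely boils down to comparing digests of your own build against the distributed binary (the file names below are placeholders, not real artifacts):

```python
import hashlib

def sha256_file(path: str) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical usage - build both artifacts, then compare:
#   mine = sha256_file("apk-from-my-reproducible-build.apk")
#   theirs = sha256_file("apk-extracted-from-device.apk")
#   if mine != theirs: that's exactly the mismatch worth reporting.
```

(In practice signing blocks and similar metadata have to be stripped before comparing, per the project's own verification instructions.)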
Thank you for your work and thanks for your immense patience answering mostly-already-addressed concerns of someone who has not bothered to even read the article. It's bad form; noble of you to answer (and hopefully useful for others / posterity).
_edit_ spent some time on your blog (turns out I've done that before - recognised style as well as furry universe / ontology; nice feeling to return). Reading reasons for disliking AES-GCM (always liked this simplicity + auth-baked-in AEAD approach as a dev/architect/user of applied crypto)...
If you see this _edit_ - do you use any specific tools to generate various sequence / flow diagrams? anything besides mermaid (+ draw.io, I somehow never got rid of using this one in times of urgent need...)? Thanks :)
Wasn't downvoting for the questions, but started downvoting these comments of yours because of your attitude + the expectation that OP will answer questions that have, in various ways, already been addressed in the article.
I believe it is bad form / in bad taste to arrogantly presume your questions and points about server trust (etc.) are unique, unaddressed, and pose new challenges, without first reading the article, which is the bare minimum required for this, IMHO.
Btw, depending on your threat model you may want to validate pubkeys / established session key / etc. using other side channels regardless of software, protocol and medium used.