
In what way is this not two separate things uselessly duplicating the same functionality? If you can get a CRL you can get a definition update, and they both effectively do the same thing.


They are very much not the same thing. A signed app can be distributed from anywhere with the assurance it's the same app - it can't be maliciously modified, and if it was malicious from the start, it can be disabled. The non-signed app can have zillions of malicious variants which something like Defender may or may not catch. It also gets a shot at circumventing (or even exploiting) AV.
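To make that concrete, here's a minimal Python sketch (using the third-party "cryptography" package; key distribution is hand-waved): the signature binds the exact bytes to the publisher's key, so a tampered copy fetched from any mirror fails verification.

    # Sketch only: real code signing also covers metadata, cert chains, etc.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )
    from cryptography.exceptions import InvalidSignature

    publisher_key = Ed25519PrivateKey.generate()
    app_bytes = b"original application bytes"
    signature = publisher_key.sign(app_bytes)

    public_key = publisher_key.public_key()
    public_key.verify(signature, app_bytes)             # ok: identical bytes
    try:
        public_key.verify(signature, app_bytes + b"!")  # tampered copy
    except InvalidSignature:
        print("tampered download rejected")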


Also, the "disable" in the signed case is much more powerful, since it disables all apps signed by the same key.
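Roughly, a verifier supporting both mechanisms looks like this (Python sketch; the names and the fingerprint/hash values are placeholders). Note the asymmetry: a hash entry blocks exactly one build, a key entry blocks everything that key ever signed.

    import hashlib
    from dataclasses import dataclass

    REVOKED_KEY_FINGERPRINTS = {"fingerprint-of-revoked-key"}
    BLOCKED_FILE_HASHES = {"sha256-of-one-known-bad-build"}

    @dataclass
    class SignedBinary:
        contents: bytes
        signer_fingerprint: str   # fingerprint of the key that signed it

    def is_allowed(binary: SignedBinary) -> bool:
        # AV-style block: catches exactly this byte sequence, nothing else.
        if hashlib.sha256(binary.contents).hexdigest() in BLOCKED_FILE_HASHES:
            return False
        # Revocation: catches every binary ever signed with that key.
        if binary.signer_fingerprint in REVOKED_KEY_FINGERPRINTS:
            return False
        return True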


> A signed app can be distributed from anywhere with the assurance it's the same app - it can't be maliciously modified

This is only true if there is some trust in what is signing them. If anyone can get a key, then anyone can sign a malicious version of the app with their own key, or with one they stole from someone else. The user doesn't know who is supposed to be signing the app -- and if they did, then you could be using TOFU or importing the expected source's key from a trusted channel, without having to pay fees to anyone.
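For example, a minimal TOFU sketch in Python -- the pin-file location and the exact flow are made up for illustration:

    import hashlib, json, os

    PIN_STORE = os.path.expanduser("~/.app-key-pins.json")  # hypothetical

    def check_tofu(app_name: str, signer_public_key: bytes) -> bool:
        """Trust On First Use: record the signer's key fingerprint the
        first time we see an app, then require the same key for every
        later version."""
        pins = {}
        if os.path.exists(PIN_STORE):
            with open(PIN_STORE) as f:
                pins = json.load(f)
        fp = hashlib.sha256(signer_public_key).hexdigest()
        if app_name not in pins:
            pins[app_name] = fp          # first use: trust and pin
            with open(PIN_STORE, "w") as f:
                json.dump(pins, f)
            return True
        return pins[app_name] == fp      # later: must match the pinned key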

> and if it was malicious from the start, it can be disabled.

In the same way that Defender can block it. Then the attacker makes a new version signed with a different key.

The problem with CA-based signing is that it's a garbage trade-off. If you make it easy to get a signing key, the attacker can easily get more and it does nothing. If you make it hard, you're kicking small developers in the teeth.

> The non-signed app can have zillions of malicious variants which something like Defender may or may not catch.

Which is still possible with code signing. The attacker gets their own key and uses it to infect many users; some of those users are developers with their own signing keys, and the attacker can then use each of those keys to infect even more people and capture even more keys.

Using keys as a rate limiter doesn't really work when one key can get you many more.

> It also gets a shot at circumventing (or even exploiting) AV.

As opposed to a shot at exploiting the signature verification method and the AV.

There is a better version of this that doesn't require expensive code signing certificates. You have the developer host their code signing key(s) on their website, served over HTTPS. Then the name displayed in the "do you trust them" box is the name of the website -- which is what the user is likely more familiar with anyway. If the program is signed by a key served on the website, and the user trusts the website, then you're done.

The application itself can still be obtained from another source, only the key has to be from the developer's website. Then future versions of the software signed with the same key can be trusted, but compromised keys can be revoked (and then replacements obtained from the website again).
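A rough Python sketch of what verification could look like under that scheme. The /.well-known/signing-keys.txt path is invented here; the point is just that the key list is served from the developer's own domain over HTTPS, so the existing Web PKI authenticates the domain rather than a paid code-signing CA:

    import urllib.request

    def published_fingerprints(domain: str) -> set:
        # Hypothetical well-known location for the developer's key list.
        url = "https://" + domain + "/.well-known/signing-keys.txt"
        with urllib.request.urlopen(url) as resp:
            return {line.strip()
                    for line in resp.read().decode().splitlines()
                    if line.strip()}

    def verify_against_domain(signer_fp: str, claimed_domain: str) -> bool:
        # The trust prompt would then display claimed_domain (the site the
        # user already knows) instead of a CA-attested legal name.
        return signer_fp in published_fingerprints(claimed_domain)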

This is better in every way than paying for EV certificates. It doesn't cost the developer anything, because they already have a domain (and if not, domains are very inexpensive and independently useful). And the attacker can't just register thousands of garbage domains, because the domain is displayed to the user: nobody is going to trust "jdyfihjasdfhjkas.ru" -- or, in principle, anything other than the known developer's actual website, which the user is more likely to be familiar with than the legal name of the developer or their company.


I think if you don't like code signing for ideological/process reasons, you can argue that, preferably in reply to someone who wants to argue about it. But trying to work backwards from there to technical arguments that show how signing is the same thing as AV is futile; it just makes you type up longer versions of obviously technically inaccurate things.


There are good ideological reasons to not like code signing. But people present technical arguments in favor of it, which then need to be addressed so that people don't erroneously find them convincing.

And the technical arguments in favor of code signing are weak. They started off claiming a major benefit -- the ability to globally disable malicious code. Except that AV can do that too. The argument for having code signing on top of that then becomes weaker -- AV can stop identified malicious code, but it can't stop other malicious code from the same malware author. Except that code signing can't do that either, since the malware author can sign other versions with different keys. So then the argument becomes: well, at least it rate limits how many different versions there are.

Except that's only meaningful to the extent that getting a new key is arduous and not a lot of people have them; otherwise the attacker can get arbitrarily many more, either by applying for them under false identities or by compromising a moderate number of machines to capture keys from the large number of people who have them. Moreover, domain validation would already capture the case where you want the incremental benefit achievable from a minimal imposition on the developer.

Meanwhile the process of obtaining a code signing key has to be sufficiently easy and non-exclusive that even individual developers can reasonably do it, so making it purposely more arduous than that is a directly conflicting requirement.

The explanation is long because the details are relevant, not because anything "obviously technically inaccurate" is there.


Revoking a certificate kills the malicious executable and any future executables signed with the same key -- the signatures simply stop validating.

Blocking a specific executable blocks only that one. Depending on the AV used, simply rebuilding may get you through (different hash); otherwise some trivial modification will do.
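Toy Python illustration (the file contents and key names are made up):

    import hashlib

    # Two builds of the same program differing only in an embedded timestamp.
    build_1 = b"program-code timestamp=1700000000"
    build_2 = b"program-code timestamp=1700000001"   # trivially rebuilt

    # A hash blocklist built from the first build misses the rebuild.
    blocklist = {hashlib.sha256(build_1).hexdigest()}
    print(hashlib.sha256(build_2).hexdigest() in blocklist)   # False

    # A revocation list keyed on the signer covers both, since the rebuild
    # is signed with the same key (signing itself omitted in this sketch).
    signer = {build_1: "dev-key-1", build_2: "dev-key-1"}
    revoked = {"dev-key-1"}
    print(signer[build_2] in revoked)                         # True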



