This is fallacious reasoning. A well-intentioned open source developer who earns no money from a labor of love has no incentive to spend additional money signing an app he's giving away for free anyway. On the flip side, a malicious actor who expects to earn money through a scam has every incentive to spend some money making the app look legit, especially if there is no risk involved.
This could be easily achieved by Microsoft running a free signing service. Lowering the cost of signing to zero would significantly increase the proportion of signed apps.
The question was 'is someone who spends money for code signing more trustworthy than someone who doesn't', and it was being treated as if the trust, or at least the increase in comfort, somehow comes merely from the act of spending money. It's an opt-in to a service that mitigates the impact of malicious code.
The parent statement was that having signed apps made them easy to disable. If all apps had to be signed, everything would have a reputation hook and also be easily disabled. It's the hang-up of using the for-profit 'verified' code signing ecosystem that makes signing ineffective.
Of course, MSFT/Apple etc will abuse it to kill apps they/govt don't like.
If the only way to play is to go through entrenched gatekeepers, who watches the watchers, hmmm? If anything this should be seen as a power grab by entrenched interests: a cryptographic lever they can pull to pre-emptively shut people out of what should be a decision left to the user's discretion. Walled gardening at its finest.
Code signing is a bit like gun control. It really doesn't solve the problem at all. It just pushes it up a level, and makes things more difficult for legitimate users.
It also lines up incentives such that, in the grand scheme of things, the preferred model of software distribution shifts toward for-profit code.
While code signing is a neat technical solution, it's still a technical solution parading about as a solution to a social problem. And the social problem it purports to solve (that of untrustworthy folks existing) is not in any way mitigated by the act of signing, as mentioned previously.
Well sure, a known-malicious app will be detected by Windows Defender, provided it has updates making it aware of the app. But a known-malicious signed app will also fail the code signature verification, in addition to the virus scan, if its certificate has been revoked.
In what way is this not two separate things uselessly duplicating the same functionality? If you can get a CRL you can get a definition update, and they both effectively do the same thing.
They are very much not the same thing. A signed app can be distributed from anywhere with the assurance it's the same app - it can't be maliciousified, and if it was malicious from the start, it can be disabled. The non-signed app can have zillions of malicious variants which something like Defender may or may not catch. It also gets a shot at circumventing (or even exploiting) AV.
> A signed app can be distributed from anywhere with the assurance it's the same app - it can't be maliciousified
This is only true if there is some trust in what is signing them. If anyone can get one, then anyone can sign the malicious version of the app with their own key, or one they stole from someone else. The user doesn't know who is supposed to be signing the app -- and if they did, then you could be using TOFU (trust on first use) or importing the expected source's key from a trusted channel, without having to pay fees to anyone.
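To make the TOFU alternative concrete, here is a minimal sketch in Python, assuming an Ed25519 publisher key and a made-up local pin file; the names and file layout are purely illustrative, not any existing tool's behavior:

```python
# Illustrative TOFU (trust-on-first-use) sketch: remember the publisher's key
# fingerprint the first time an app is installed, then reject any update
# whose signing key differs. File name and layout are hypothetical.
import hashlib
import json
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

PIN_STORE = Path("pinned_keys.json")  # hypothetical local pin database

def verify_with_tofu(app_name: str, pubkey_bytes: bytes,
                     signature: bytes, payload: bytes) -> bool:
    pins = json.loads(PIN_STORE.read_text()) if PIN_STORE.exists() else {}
    fingerprint = hashlib.sha256(pubkey_bytes).hexdigest()

    pinned = pins.get(app_name)
    if pinned is None:
        # First contact: pin this key. No third party or fee involved.
        pins[app_name] = fingerprint
        PIN_STORE.write_text(json.dumps(pins))
    elif pinned != fingerprint:
        # The signing key changed since first install -- treat as suspicious.
        return False

    try:
        Ed25519PublicKey.from_public_bytes(pubkey_bytes).verify(signature, payload)
        return True
    except InvalidSignature:
        return False
```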
> and if it was malicious from the start, it can be disabled.
In the same way that Defender can block it. Then the attacker makes a new version signed with a different key.
The problem with CA-based signing is that it's a garbage trade-off. If you make it easy to get a signing key, the attacker can easily get more and it does nothing. If you make it hard, you're kicking small developers in the teeth.
> The non-signed app can have zillions of malicious variants which something like Defender may or may not catch.
Which is still possible with code signing. The attacker gets their own key, uses it to infect many users, then some of those users are developers with their own signing keys and the attacker can use each of those keys to infect even more people and get even more keys.
Using keys as a rate limiter doesn't really work when one key can get you many more.
> It also gets a shot at circumventing (or even exploiting) AV.
As opposed to a shot at exploiting the signature verification method and the AV.
There is a better version of this that doesn't require expensive code signing certificates. You have the developer host their code signing key(s) on their website, served over HTTPS. Then the name displayed in the "do you trust them" box is the name of the website -- which is what the user is likely more familiar with anyway. If the program is signed by a key served on the website, and the user trusts the website, then you're done.
The application itself can still be obtained from another source, only the key has to be from the developer's website. Then future versions of the software signed with the same key can be trusted, but compromised keys can be revoked (and then replacements obtained from the website again).
This is better in every way than paying for EV certificates. It doesn't cost the developer anything, because they already have a domain (and if not, they're very inexpensive and independently useful). But the attacker can't just register thousands of garbage domains, because they're displayed to the user, and nobody is going to trust "jdyfihjasdfhjkas.ru" -- or, in principle, anything other than the known developer's actual website, which the user is more likely to be familiar with than the legal name of the developer or their company.
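A rough sketch of what that scheme could look like, assuming the developer publishes a raw Ed25519 public key at a well-known HTTPS path; the URL convention and function names are invented for illustration:

```python
# Sketch of the domain-based scheme: fetch the developer's public key over
# HTTPS, then verify the installer's detached signature against it.
# The .well-known path here is a hypothetical convention, not a standard.
import urllib.request

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def fetch_publisher_key(domain: str) -> bytes:
    # HTTPS is what binds the key to the domain name shown to the user.
    url = f"https://{domain}/.well-known/code-signing-key.pub"
    with urllib.request.urlopen(url) as resp:
        return resp.read()  # 32 raw Ed25519 public-key bytes

def verify_download(domain: str, installer: bytes, signature: bytes) -> bool:
    key = Ed25519PublicKey.from_public_bytes(fetch_publisher_key(domain))
    try:
        key.verify(signature, installer)  # the installer itself can come from any mirror
        return True
    except InvalidSignature:
        return False
```

The trust prompt would then display the domain itself rather than a legal entity name, and the fetched key could be cached or pinned locally, so the developer's site only needs to be reachable the first time (or again after a revocation).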
I think if you don't like code signing for ideological/process reasons, you can argue that, preferably in reply to someone who wants to argue about it. But trying to work backwards from there to technical arguments that show how signing is the same thing as AV is futile, it just makes you type up longer versions of obviously technically inaccurate things.
There are good ideological reasons to not like code signing. But people present technical arguments in favor of it, which then need to be addressed so that people don't erroneously find them convincing.
And the technical arguments in favor of code signing are weak. They started off claiming a major benefit -- globally disable malicious code. Except that AV can do that too. The argument in favor of having code signing on top of that then becomes weaker -- AV can stop identified malicious code but it can't stop other malicious code from the same malware author. Except that code signing can't do that either since the malware author can sign other versions with different keys. So then the argument becomes, well, at least it rate limits how many different versions there are. Except that is only meaningful to the extent that getting a new key is arduous and not a lot of people have them, otherwise the attacker can get arbitrarily many more by either just applying for more under false identities or by compromising a moderate number of machines to capture more keys from the large number of people who have them. Moreover, using domain validation would already capture the case where you want to get the incremental benefit achievable from a minimal imposition on the developer.
Meanwhile the process of obtaining a code signing key has to be sufficiently easy and non-exclusive that even individual developers can reasonably do it, so making it purposely more arduous than that is a directly conflicting requirement.
The explanation is long because the details are relevant, not because anything "obviously technically inaccurate" is there.
Revoking a certificate invalidates the signature on the malicious executable and on any future executables signed with the same certificate.
Blocking a specific executable blocks only that one. Depending on the AV used, simply rebuilding may get you through (different hash); some trivial modifications will do.
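To make the "different hash" point concrete: even a one-byte difference between two builds (say, an embedded build date) produces an unrelated SHA-256, so a definition keyed on the old hash says nothing about the rebuilt binary. A toy illustration:

```python
# Toy illustration of why hash-based blocking is brittle: a trivially
# rebuilt binary has a completely different digest, so a blocklist entry
# for the original no longer matches it.
import hashlib

original = b"...same malicious payload... build=2024-01-01"
rebuilt  = b"...same malicious payload... build=2024-01-02"

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(rebuilt).hexdigest())
```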