
> What threat model does this protect against exactly?

Two big threats: 1) insider attacks like the Saudi Twitter infiltration[0], and 2) overreach by legitimate government process, like a subpoena[1].

> release the source

Useless. How do you know it's the exact source running on-device?

> Allow me to run my own backup software on their servers

Useless. How do you know your own backup software isn't compromised via a secret deal with Apple?

> Allow me to transparently run my own encryption before I upload stuff to their servers.

Useless. How do you know the OS isn't grabbing the raw files? How do you know your own encryption isn't compromised? How do you know that Xcode isn't inserting backdoors in the encryption you compiled from source?
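For concreteness, the "run my own encryption before I upload" idea being dismissed here would look something like the toy sketch below. It uses only the Python standard library; all names are illustrative, and real code would use a vetted AEAD from a library like libsodium. Note it doesn't answer any of the objections above -- you still have to trust the OS and toolchain underneath -- it only changes what the server-side operator sees.

```python
# Toy "encrypt locally before uploading" sketch. NOT production crypto:
# the cipher is a counter-mode keystream built from HMAC-SHA256, plus an
# HMAC tag for integrity. A real tool would use a vetted AEAD instead.
import hashlib
import hmac
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Deterministic keystream: HMAC-SHA256 over (nonce || counter).
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(passphrase: str, plaintext: bytes) -> bytes:
    salt = secrets.token_bytes(16)
    nonce = secrets.token_bytes(16)
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000, 64)
    enc_key, mac_key = key[:32], key[32:]
    ct = bytes(p ^ k for p, k in
               zip(plaintext, _keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, salt + nonce + ct, hashlib.sha256).digest()
    return salt + nonce + tag + ct  # this opaque blob is what gets uploaded

def decrypt(passphrase: str, blob: bytes) -> bytes:
    salt, nonce, tag, ct = blob[:16], blob[16:32], blob[32:64], blob[64:]
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000, 64)
    enc_key, mac_key = key[:32], key[32:]
    expected = hmac.new(mac_key, salt + nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("tampered ciphertext or wrong passphrase")
    return bytes(c ^ k for c, k in
                 zip(ct, _keystream(enc_key, nonce, len(ct))))

blob = encrypt("correct horse", b"my backup")
assert decrypt("correct horse", blob) == b"my backup"
```

The server only ever stores `blob`; the dispute in this thread is whether you can trust the machine that produced it.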

> And a very long etc.

All useless. Tell me your perfect solution and I promise I can show it's useless (by your standards).

[0] https://en.wikipedia.org/wiki/Saudi_infiltration_of_Twitter

[1] https://ijunkie.com/your-icloud-data-phenomenal-law-enforcem...



> Two big threats: 1) insider attacks like the Saudi Twitter infiltration[0], and 2) overreach by legitimate government process, like a subpoena[1].

This does not prevent either of these threats; it does not even necessarily make them any more difficult. "Insiders" will still have access to the source code doing the encryption and communications, and it is just not possible to protect against government overreach that can literally force you to do anything and keep quiet about it, even in otherwise relatively sane countries. Search for "national security letter".

I actually don't expect any corporation to be above the government, fwiw, but this is off-topic.

> Useless. How do you know it's the exact source running on-device?

Because you built it yourself?

> Useless. How do you know your own backup software isn't compromised via a secret deal with Apple?

Because it's YOUR OWN backup software?

> Useless. How do you know the OS isn't grabbing the raw files? How do you know your own encryption isn't compromised? How do you know that Xcode isn't inserting backdoors in the encryption you compiled from source?

Because I have the source of the OS and I built it myself? Because I literally used the same compiler I use for other platforms, not Apple's? Because I can then actually monitor the communications between the device and the mothership? Etc., etc.

The point of this entire thing was to show that _there are_ non-technical policies they could adopt to actually increase the trust level (or at least have a discussion about it -- as you are), but there is very little technical stuff they can do to increase it, and that's because it would miss the entire point. It's not about "trusting trust perfection" or whatever you think you are arguing here. You are trying to protect stuff from Alice by trusting Alice, without even being capable of verifying it. It just can't academically work. You need to either be able to verify it or, at the very minimum, separate the two roles.


> This does not prevent either of these threats; it does not even necessarily make them any more difficult. "Insiders" will still have access to the source code doing the encryption, and it is just not possible to protect against government overreach that can literally force you to do anything and keep quiet about it, even in otherwise relatively sane countries. Search for "national security letter".

There you go again :)

You literally just said that something that used to take a subpoena from any law enforcement now takes a national security letter, and that an insider attack that used to mean retrieving a backup file now means inserting backdoors into source code that go undetected.

And somehow those aren't even more difficult?

> Because I have literally used the same compiler I use for other platforms

https://www.awelm.com/posts/evil-compiler/

It is literally provable that Apple will never be able to satisfy you. For any mitigation they introduce, you can (rightfully) create a hole in that mitigation.

What you're missing is that the same flaws and attacks appear in all of your "it would be better if" solutions. Once you're invoking NSA letters and malicious source code, all bets are off... including for open source.
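The evil-compiler link is Thompson's "trusting trust" attack. The standard partial countermeasure is diverse double-compiling (Wheeler's DDC): build the same compiler source with two unrelated toolchains until each self-hosts, then compare the artifacts bit-for-bit. A minimal sketch of the comparison step; the stage-2 paths and file contents are hypothetical stand-ins:

```python
# Sketch of the comparison step in diverse double-compiling (DDC):
# the same compiler source, built and self-rebuilt under two independent
# toolchains, should yield bit-identical stage-2 binaries. A trusting-trust
# backdoor would then have to be present in BOTH toolchains to survive.
import hashlib
import os
import tempfile

def sha256(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def artifacts_match(path_a: str, path_b: str) -> bool:
    return sha256(path_a) == sha256(path_b)

# Demo with stand-in files; real DDC compares actual compiler binaries.
with tempfile.TemporaryDirectory() as d:
    stage2_a = os.path.join(d, "stage2_a")  # built via toolchain A
    stage2_b = os.path.join(d, "stage2_b")  # built via unrelated toolchain B
    for path in (stage2_a, stage2_b):
        with open(path, "wb") as f:
            f.write(b"\x7fELF...identical bits...")
    print(artifacts_match(stage2_a, stage2_b))  # True: builds agree
```

A mismatch means either a nondeterministic build or a divergent toolchain -- which is exactly the "hole in every mitigation" dynamic described above, since DDC itself assumes reproducible builds.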

> It just can't academically work.

Yes, we agree on that. But it also doesn't work if you're protecting stuff from Alice by trusting Bob, who might be secretly an agent of Alice.


> You literally just said that something that used to take a subpoena from any law enforcement now takes a national security letter

I didn't say that. You said "overreaching government".

> It is literally provable that Apple will never be able to satisfy you

Nothing _technical_, that is, which has exactly been my point.

> Once you're invoking NSA letters and malicious source code, all bets are off... including for open source.

That's not true at all. There's an entire world of difference between "the software is hidden from my eyes, communicating constantly and opaquely with the mothership, changeable at any moment by that same mothership, and all of it running on hardware also made by the same mothership" and "I have these separate components that only communicate through these channels in these clearly specified ways". The first only allows useless technobabble fake solutions; the second actually allows a discussion about trust, and is the bare-minimum expectation of any cryptosystem.

> But it also doesn't work if you're protecting stuff from Alice by trusting Bob, who might be secretly an agent of Alice.

I don't see that as necessarily true either. But anyway, I can now choose between multiple providers for encryption, which _finally_ goes towards measurably increasing trust. Remember, despite the accusations, I have never claimed it had to be 100% trusting trust perfect, I am just claiming this one proposal is 100% useless. If you didn't trust Apple backups before and you would now, I'd question your judgement.
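One concrete way to "separate the roles" so that no single provider can be Alice's agent is to split the key between independent parties. A minimal 2-of-2 XOR split is sketched below (toy code; a real deployment would use Shamir's scheme for k-of-n thresholds, and the provider names are illustrative):

```python
# Toy 2-of-2 secret splitting: each share alone is uniformly random,
# so compromising a single provider (even via a gag order) reveals nothing.
import secrets

def split(secret: bytes) -> tuple[bytes, bytes]:
    share_a = secrets.token_bytes(len(secret))               # held by provider A
    share_b = bytes(s ^ a for s, a in zip(secret, share_a))  # held by provider B
    return share_a, share_b

def combine(share_a: bytes, share_b: bytes) -> bytes:
    # XOR-ing the shares back together recovers the secret exactly.
    return bytes(a ^ b for a, b in zip(share_a, share_b))

key = secrets.token_bytes(32)
a, b = split(key)
assert combine(a, b) == key
```

An attacker now has to compel (or infiltrate) both providers, which is the measurable increase in trust being argued for -- as opposed to one party holding everything.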



