> This technical failing probably, partially, explains why they are so against allowing sideloading.
This occurred to me the other day. I've always laughed at the idea that Apple blocks sideloading for security purposes, but if the first line of defense on iOS is, and always has been, security through obscurity plus manual App Store review (since iOS 2.0), it's very possible that sideloading could cause problems. iOS didn't even have an App Store in release 1.0; meanwhile, the Android security model has taken into account sideloaded apps since the very beginning [1]:
> Android is designed to be open. [...] Securing an open platform requires a strong security architecture and rigorous security programs. Android was designed with multilayered security that's flexible enough to support an open platform while still protecting all users of the platform.
[1] https://source.android.com/docs/security/overview
Edit: Language revised to clarify that I'm poking fun at the idea, not the one who believes it.
> the Android security model has taken into account sideloaded apps since the very beginning
Counterpoint: tech websites have literally warned users that they need to be wary of installing apps from inside Google's walled garden.
> With malicious apps infiltrating Play on a regular, often weekly, basis, there’s currently little indication the malicious Android app scourge will be abated. That means it’s up to individual end users to steer clear of apps like Joker. The best advice is to be extremely conservative in the apps that get installed in the first place. A good guiding principle is to choose apps that serve a true purpose and, when possible, choose developers who are known entities. Installed apps that haven’t been used in the past month should be removed unless there’s a good reason to keep them around.
> Counterpoint: tech websites have literally warned users that they need to be wary of installing apps from inside Google's walled garden.
This is not a counterpoint to what I was saying. I'm talking about sideloaded apps, not apps from Google Play. I agree that Google should work to improve their app vetting process, but that's a separate issue entirely, and one I'm not personally interested in.
If your security model is so weak that you can't keep malware out of the inside of your walled garden, the situation certainly isn't going to improve after you remove the Play store's app vetting process as a factor.
I avoided making a claim regarding the relative "security level" of Android vs. iOS because it's not easy to precisely define what that means. All I was saying was that Android's security model explicitly accommodates openness. If your standard for a "strong" security model excludes openness entirely, that's fair, I suppose, but I personally find it unacceptable. Supposing we keep openness as a factor for its own sake, I'm not sure how you can improve much on Android's model.
This discussion seems to be headed in an ideological direction rather than a technical one, and I'm not very interested in that.
If your point of view is that you value the ability to execute code from random places on the internet more than security, perhaps that is the point you should have been making from the start.
However, iOS makes the security trade-off in the other direction.
All of an app's executable code must go through the app vetting process, and additional executable code cannot be added without the app going through the vetting process all over again.
In contrast, Google has been unable to quash malware like Joker from inside the Play store because the malware gets downloaded and installed after the app makes it through the app vetting process and lands on a user's device.
> Known as Joker, this family of malicious apps has been attacking Android users since late 2016 and more recently has become one of the most common Android threats...
> One of the keys to Joker’s success is its roundabout attack. The apps are knockoffs of legitimate apps and, when downloaded from Play or a different market, contain no malicious code other than a “dropper.” After a delay of hours or even days, the dropper, which is heavily obfuscated and contains just a few lines of code, downloads a malicious component and drops it into the app.
iOS not having constant issues with malware like Joker inside its App Store has nothing to do with "security through obscurity" and everything to do with making a different set of trade-offs when setting up the security model.
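To make the mechanical difference concrete: the "drops it into the app" step in the quote above relies on dynamic code loading, which Android exposes as an ordinary API (DexClassLoader). Here's a minimal Kotlin sketch of that mechanism; the payload file and class name are hypothetical placeholders, not Joker's actual code. iOS offers no sanctioned equivalent for native code, since executable pages must be signed, so a reviewed app can't later pull in new native code this way.

```kotlin
import android.content.Context
import dalvik.system.DexClassLoader
import java.io.File

// Load and run classes from a .dex/.apk file that was downloaded *after* install,
// i.e. code that was never part of the APK a reviewer (or Play's scanner) looked at.
fun runDownloadedCode(context: Context, downloadedDex: File) {
    val loader = DexClassLoader(
        downloadedDex.absolutePath,        // code fetched at runtime, after review
        context.codeCacheDir.absolutePath, // legacy output dir (ignored on modern Android)
        null,                              // no extra native library path
        context.classLoader                // parent class loader
    )
    val payload = loader.loadClass("com.example.Payload") // hypothetical class name
    payload.getMethod("run").invoke(payload.getDeclaredConstructor().newInstance())
}
```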
all of this malicious code still requires the user to grant the permissions, or else has to exploit bugs in the operating system. same as ios, and infinitely better than mac, windows, and linux.
apple might pretend they are secure because they usually manage to catch such things during review. this doesn't actually mean they are secure.
end of the day, it's up to the user to choose what software they install, and what permissions they grant.
if your security model includes taking all choice away from users, forbidding them from running software that they wish to run, and essentially treating them like unsophisticated toddlers who need your guidance because you know best, then sure, you might view this as a problem. but at that point, you are the problem.
> apple might pretend they are secure because they usually manage to catch such things during review. this doesn't actually mean they are secure.
The easiest way to see that Apple's security model is more robust is that tech websites don't have to warn users that they should fear the apps from inside the app store.
There's little evidence that this isn't simply because Apple is better at policing their store. It probably also helps that an Apple developer license costs $99/year, whereas Google Play has a one-time $25 fee. Keep in mind that the Play Store is just one of Google's many endeavors, whereas the iPhone is Apple's premier product, and as such, one of their top priorities.
Regarding dynamic native code execution, please see saagarjha's comment and my reply.
> Apple's security model
It would be more accurate to say "the iOS security model", because as beeboobaa3 mentioned, macOS fully allows apps from outside the App Store, dynamic native code execution, and most other "insecure" things that are blocked on iOS.
I agree. But it's at least worth noting that Google has taken steps toward blocking this as well, and it will likely be fully blocked in a future release of Android. From what I understand, however, it's partially to protect apps from themselves rather than the OS from apps. Additionally, it breaks legitimate apps like Termux, which many Android users see as a major regression. I personally think it's just another example of Apple-ish security theater, but Google has been known to copy some unfortunate things from Apple in an attempt to mirror their success (see also: headphone jack, SafetyNet). Regardless, it goes to show that Android security is still evolving, and the referenced 2020 article likely doesn't reflect the current state of things.
Android and iOS have largely the same threat model when it comes to platform security. That is, app review mostly does not exist and the OS itself must protect the user.
I'm not claiming that Apple is perfect, but comparing it to Android in terms of malware, security updates, and privacy, I think it comes out looking pretty good.
Both look pretty similar to me, both in terms of policies and outcome.
While iOS has longer device support, it's also far less modular, and updates of system components typically take longer to reach users than they do on Android, so I'd say both have their issues there.
Got some sources to cite, or is this the typical apple fanboyism of "android bad"?
I've used Android for years and never ran into any malware. I've also developed for Android and iOS. Writing malware is largely impossible due to the functional permission system; at the very least, it's much, much harder than on other operating systems. Apple just pretends it's immune to malware because of the manual reviews and static analysis performed by the store. It's also why they're terrified of letting people ship their own interpreters, like JavaScript engines.
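For what it's worth, here's roughly what that permission system looks like from the app side. Since Android 6.0, dangerous permissions are granted at runtime through a system-owned dialog. This is only a sketch: it assumes the AndroidX Activity and Core libraries, that READ_SMS is declared in the manifest, and the class/function names are illustrative.

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import androidx.activity.ComponentActivity
import androidx.activity.result.contract.ActivityResultContracts
import androidx.core.content.ContextCompat

class ExampleActivity : ComponentActivity() {
    // The OS draws the prompt and reports the user's decision; the app only gets a boolean.
    private val askForSms =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            if (granted) readInbox() else degradeGracefully()
        }

    private fun readSmsIfAllowed() {
        val status = ContextCompat.checkSelfPermission(this, Manifest.permission.READ_SMS)
        if (status == PackageManager.PERMISSION_GRANTED) readInbox()
        else askForSms.launch(Manifest.permission.READ_SMS) // system dialog; the user decides
    }

    private fun readInbox() { /* only reachable after an explicit user grant */ }
    private fun degradeGracefully() { /* carry on without the permission */ }
}
```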
One might argue that Android is targeted more than the iPhone because of its larger userbase, which certainly may contribute to it. But macOS, which has a fraction of the userbase, is more targeted than iOS, and that makes the case that sideloading or lax app store reviews really are at least partly to blame.
Given that much of the malware seems to be apps that trick users into granting permissions by masquerading as a legitimate app or pirated software, it's not really too hard to believe that Apple's App Store, with its draconian review process and no sideloading, might be a more difficult target.
Obviously a strict walled garden keeps out bad actors. The question is: Is it worth it? I say no.
People deserve to be trusted with the responsibility of making a choice. We allow everyone to buy power tools that can cause severe injuries when mishandled, and no one blinks an eye. Just as we allow that, we should allow people to use their devices in the way that they desire. If this means some malware can exist, then I consider that acceptable.
In the meantime, system security can still be improved.
Apple has a reputation for being more secure, but it's largely unfounded. It just looks that way because the ecosystem is completely locked down and software isn't allowed to exist without Apple's stamp of approval.
Apple drove genuine security improvements in mobile hardware well before Android, including dedicated security chips and encrypted storage. The gap has been closed for a few years now, though, so the reputation is not so much "unfounded" as "out of date".
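To give a sense of what the closed gap looks like on the Android side: apps can request keys that are generated and kept inside the dedicated security chip, so the key material never appears in app memory. A sketch, assuming API 28+ and a device that actually ships StrongBox; the key alias is arbitrary.

```kotlin
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import java.security.KeyPair
import java.security.KeyPairGenerator

// Generate an EC signing key inside the secure hardware; key generation fails with
// StrongBoxUnavailableException on devices without a dedicated security chip.
fun makeHardwareBackedKey(): KeyPair {
    val spec = KeyGenParameterSpec.Builder(
        "example-signing-key", // arbitrary alias for this sketch
        KeyProperties.PURPOSE_SIGN or KeyProperties.PURPOSE_VERIFY
    )
        .setDigests(KeyProperties.DIGEST_SHA256)
        .setIsStrongBoxBacked(true) // require the dedicated security element
        .build()

    return KeyPairGenerator.getInstance(KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore")
        .apply { initialize(spec) }
        .generateKeyPair()
}
```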
You're not talking about security that protects end users against malware. You're talking about "security" that protects the device against "tampering", i.e. the owner using it in a way apple does not approve of.
Apple's "security improvements" have always been about protecting their walled garden first and foremost.
Consider that, for the average user, their phone:
- Stores their security credentials for critical sites (banks, HR/payroll, stores, govt services, etc.)
- Even if not, has unfettered access to their primary email account, which means it can autonomously initiate a password reset for nearly any site
- Is their primary 2FA mechanism, which means it can autonomously confirm a password reset for nearly any site
That's an immense amount of risk, both from apps running on the device, and from the device getting stolen. Both of the measures I mentioned are directly relevant to these kinds of threats. And, as I already said, Android has adopted these same security measures as well.
I have no idea what you are trying to say in the context of the thread. Hardware security is important for all of that and security measures have to evolve over time.
This just isn't true. We have multiple bricked Android devices from bootloader-infecting malware downloaded directly from the Play Store. Nothing like that has ever happened on iOS.