iOS is the operating system that powers Apple's iPhones; its close derivative, iPadOS, runs on iPads. Apple claims it to be particularly secure, at least partly as a result of the limitations it places on what apps are able to do.
But regardless of how secure the OS is, apps built for it still need to be written in a secure way. Getting it wrong will leave your customers vulnerable. Like any other OS, it has its own particular pitfalls and traps, and it sometimes takes a keen eye to spot them.
In general, native iOS apps written since around 2015 are in Swift, and those before in Objective-C. It's still relatively common to see older apps which are a hybrid of the two, given the two languages' interoperability.
Here is a run-down of some of the issues we look for when performing a code security analysis of iOS apps.
iCloud security issues
Apple devices really, really like storing data in the cloud. By default, almost any data that an app produces is at least backed up to the cloud, potentially making it available to attackers who don’t have physical access to your device.
As some high-profile data leakages (particularly of iCloud photos) have recently shown, this isn’t always a great idea for sensitive data. Apps can mark data as being available for iCloud backup or not, and should use these options carefully.
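If a file genuinely shouldn't leave the device, you can tell the system so explicitly. A minimal sketch (the helper name is our own):

```swift
import Foundation

// Hypothetical helper: mark an existing file so it is never backed
// up to iCloud or to the user's computer.
func excludeFromBackup(_ url: URL) throws {
    var fileURL = url
    var values = URLResourceValues()
    values.isExcludedFromBackup = true
    try fileURL.setResourceValues(values)
}
```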
Storing passwords and secure data
iOS provides a “keychain” system for storing small amounts of highly sensitive data, particularly passwords and security keys. Apps should use it without fail for those purposes: keychain data is protected by strong, hardware-backed encryption, with fine-grained control over when each item can be read.
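As a sketch, storing a credential with Keychain Services might look like this (the function and account names are illustrative):

```swift
import Foundation
import Security

// Store a token in the keychain, readable only while this device
// is unlocked and never migrated to another device via backup.
func saveToken(_ token: Data, account: String) -> OSStatus {
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrAccount as String: account,
        kSecValueData as String: token,
        kSecAttrAccessible as String: kSecAttrAccessibleWhenUnlockedThisDeviceOnly
    ]
    SecItemDelete(query as CFDictionary) // replace any existing item
    return SecItemAdd(query as CFDictionary, nil)
}
```

Choosing a `ThisDeviceOnly` accessibility variant keeps the item out of backups altogether.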
There is a reasonably significant security flaw in the Apple keychain system which apps should code to protect against. The issue is that keychain data for your app is retained even after your app is deleted. So attackers can potentially reinstall deleted apps to find any saved credentials still intact. A suitable workaround is to detect a clean install, and wipe the keychain.
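A sketch of that workaround, using a UserDefaults flag (which, unlike the keychain, is erased on uninstall); the key name is our own:

```swift
import Foundation
import Security

// On a fresh install there is no "hasLaunchedBefore" flag, so any
// keychain items must be left over from a previous install: wipe them.
func wipeKeychainOnFreshInstall() {
    let defaults = UserDefaults.standard
    guard !defaults.bool(forKey: "hasLaunchedBefore") else { return }
    let query: [String: Any] = [kSecClass as String: kSecClassGenericPassword]
    SecItemDelete(query as CFDictionary)
    defaults.set(true, forKey: "hasLaunchedBefore")
}
```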
Storing app data
Apple defines several classes of protection for app data. The general rule is that apps should protect all data using the highest possible class.
The four classes are:
- Class A: Complete protection. The data can only be decrypted when the device is unlocked.
- Class B: Protected unless open. As for class A, except that a file which was already open when the device was locked remains accessible.
- Class C: Protected until first user authentication. The data becomes accessible the first time the user unlocks their phone after boot, and stays accessible until the device shuts down.
- Class D: No protection.
Never use class D - there's almost never a good reason to leave data unencrypted. The catch-all argument against encryption is its performance overhead, but that barely registers on iOS: modern Apple hardware has dedicated silicon for encryption services.
In general, apps should use class A unless there is a specific reason why data needs to be accessed whilst the device is locked - for example, a file downloading in the background, where class B or C is the right choice.
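In code, choosing class A can be as simple as a write option - a sketch:

```swift
import Foundation

// Write with Complete protection (class A): the file is unreadable
// whenever the device is locked.
func writeProtected(_ data: Data, to url: URL) throws {
    try data.write(to: url, options: .completeFileProtection)
}
```

For the background-download case, `.completeFileProtectionUnlessOpen` (class B) is the corresponding option.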
Logging sensitive data
Apps shouldn't log sensitive information to the internal system log. Logged information can be read by anyone with physical access to the device, and it's entirely plausible that flaws exist which expose logs without even needing physical access.
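Where logging is unavoidable, the unified logging framework (iOS 14 and later) lets you redact values - a sketch, with an illustrative subsystem name:

```swift
import os

let logger = Logger(subsystem: "com.example.app", category: "auth")

func logSignIn(user: String) {
    // The interpolated value is redacted in the stored log.
    logger.info("Sign-in attempt for \(user, privacy: .private)")
}
```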
Objective-C security issues
In Objective-C - a language almost universally disliked by developers - there is real potential for accidentally introducing subtle-but-devastating security flaws. Being built on top of C, it has all of C's potential for unsafe pointer usage, including buffer overruns and use-after-free errors.
Our strong recommendation is to learn C security practices thoroughly before working in Objective-C, and to avoid C APIs where possible. Enable Automatic Reference Counting (ARC), which relieves you of managing object deallocation yourself. Deallocation mistakes are precisely what cause use-after-free errors, and those can allow an attacker to take over your app completely.
Even better, write all new code in Swift.
Unsafe pointers in Swift
Swift can access C APIs using unsafe pointers. This should rarely be necessary, but where it is, it's vital that pointers never escape the withUnsafeXXX closures in which they are valid. And of course, developers calling into C APIs should be thoroughly familiar with C security practices, as it's easy to introduce significant issues.
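A sketch of the safe pattern: the raw pointer exists only inside the closure and is never stored or returned.

```swift
import Foundation

// Compute a trivial checksum over a buffer via its raw bytes.
func checksum(of data: [UInt8]) -> UInt8 {
    data.withUnsafeBytes { raw -> UInt8 in
        var sum: UInt8 = 0
        for byte in raw { sum = sum &+ byte }
        return sum
    }
}
```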
Jailbroken devices
Jailbreaking is the process of removing Apple's commercially-driven limitations on what you can do with your iDevice. Because there is no official specification governing jailbroken devices, jailbreaking can affect security in both predictable and unpredictable ways. The best defence an app can take is jailbreak detection: refuse to run on jailbroken hardware.
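There is no API for this, so detection is necessarily heuristic - a sketch checking for files a sandboxed app should never be able to see (the path list is illustrative, not exhaustive):

```swift
import Foundation

func looksJailbroken() -> Bool {
    let suspectPaths = [
        "/Applications/Cydia.app",
        "/bin/bash",
        "/usr/sbin/sshd",
        "/etc/apt"
    ]
    return suspectPaths.contains { FileManager.default.fileExists(atPath: $0) }
}
```

Bear in mind that a determined attacker can bypass any in-app check; this raises the bar rather than providing a guarantee.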
Asking for permissions
iOS apps need to seek user permission before performing tasks which may compromise privacy or security. For example, an app won't be able to access the camera unless the user has been asked to allow it.
So it's important that permissions are sought only at the moment the app actually needs those features. We often see apps which “get it over and done with” by asking for every permission right at launch. Permissions granted up front and then left unused give a bad actor more to exploit.
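A sketch of requesting camera access at the point of use rather than at launch (the callback shape is our own):

```swift
import AVFoundation

func startCapture(onAuthorized: @escaping () -> Void) {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        onAuthorized()
    case .notDetermined:
        // First tap on the camera feature: ask now, in context.
        AVCaptureDevice.requestAccess(for: .video) { granted in
            if granted { onAuthorized() }
        }
    default:
        break // denied or restricted: explain in the UI, don't re-prompt
    }
}
```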
Masking sensitive information
iOS takes a screenshot of your application when it goes to the background. Depending on what your app was displaying at the time, this could contain sensitive information, which you should blur or mask off.
The same is true when the user manually takes a screenshot of your app.
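One common approach to the background-snapshot case is to cover the window before the snapshot is taken - a sketch, assuming a single-window app:

```swift
import UIKit

// Call show(over:) from sceneWillResignActive and hide() from
// sceneDidBecomeActive so the app-switcher snapshot is blank.
final class PrivacyShield {
    private var cover: UIView?

    func show(over window: UIWindow) {
        let view = UIView(frame: window.bounds)
        view.backgroundColor = .systemBackground
        window.addSubview(view)
        cover = view
    }

    func hide() {
        cover?.removeFromSuperview()
        cover = nil
    }
}
```

User-initiated screenshots can't be prevented outright, but masking sensitive fields on screen limits what they capture.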
The above is just a small selection of the security flaws we look for when analysing iOS app code. There are many more, some of which are noted in this article on the UK Government’s National Cyber Security Centre site.
If you need any help securing your iOS app, let us know!