- Implementing a multi-layered defense that is broad and deep is critical for mobile app security, but nearly impossible to achieve using traditional approaches.
- A broad defense covers the many different categories of attack a hacker can employ to compromise a mobile app.
- A deep defense employs multiple means to detect and protect against each category of threat.
- No third-party library, commercial SDK, or specialized compiler can provide a sufficient broad and deep defense across both iOS and Android, plus the multitude of different devices — the complexity grows exponentially.
- Automation must be built into the development process to implement broad and deep security defenses for apps across operating systems and devices.
It’s been held as common knowledge for some time that Android is less secure than iOS as a mobile platform. Everyone “knows” it, it seems, except consumers. A global survey of 10,000 mobile consumers from August 2021 found that the security expectations of iOS and Android users are essentially the same.
And those expectations are not unreasonable: neither mobile platform is inherently less secure than the other. Yet mobile apps rarely achieve security feature parity between Android and iOS. In fact, many mobile apps lack even the most basic security protections. Let’s examine why.
Mobile App Security Requires a Multi-Layered Defense
Most security professionals and third-party standards organizations would agree that mobile app security requires a multi-layered defense consisting of multiple security features in the following core areas:
- Code Obfuscation & Application Shielding to protect the mobile app binary and source code against reverse engineering.
- Data Encryption to protect the data stored in and used by the app.
- Secure Communication to protect data as it moves between the app and the app’s backend, including ensuring the authenticity and validity of the digital certificates used to establish trusted connections.
- OS Protection to protect the app from unauthorized modifications to the operating system, such as rooting and jailbreaking.
Developers should implement a balanced mix of these features in both the iOS and Android versions of their app to form a consistent security defense. And they should add these features early in the development cycle, a concept known as “shift-left” security. Sounds easy enough, right? In theory, yes; in practice, achieving a multi-layered mobile app security defense with ‘traditional’ approaches is quite difficult.
For years, mobile developers have attempted to implement in-app security using the traditional collection of tools available to them: third-party open-source libraries, commercial mobile app security SDKs, and specialized compilers. The first major challenge is that mobile app security is never achieved via a ‘silver bullet’. Because mobile apps operate in unprotected environments and store and handle lots of valuable information, there are many ways to attack them. Hackers have an endless supply of freely available, very powerful toolsets at their disposal, and all the time in the world to study and attack an app undetected.
Mobile security requirements
So to build a robust defense, mobile developers need to implement a multi-layered defense that is both ‘broad’ and ‘deep’. By ‘broad’, I mean multiple security features from different protection categories that complement each other, such as encryption plus obfuscation. By ‘deep’, I mean that each security feature should have multiple methods of detection or protection. For example, a jailbreak-detection SDK that only performs its checks when the app launches won’t be very effective, because attackers can easily bypass the protection.
Or consider anti-debugging, an important runtime defense that prevents attackers from using debuggers to perform dynamic analysis, where they run the app in a controlled environment to understand or modify its behavior. There are many different types of debuggers: some based on LLDB for native code like C++ or Objective-C, others that inspect at the Java or Kotlin layer, and many more. Every debugger works a little differently in terms of how it attaches to and analyzes the app. For the anti-debugging defense to be effective, your app needs to recognize which of the many debugging methods is in use and dynamically engage the correct defense, since hackers will keep trying different debugging tools or methods until they find one that succeeds.
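To make the ‘deep’ idea concrete, here is a minimal sketch of one debugger-detection check. It’s written in Python purely for illustration (a real implementation would live in the app’s Kotlin, Swift, or C/C++ code), and it shows a single Linux-style check, reading `TracerPid` from `/proc/self/status`, which also exists on Android:

```python
def tracer_pid() -> int:
    """Return the PID of any process currently tracing us (0 = none).

    Linux-based systems, including Android, expose this in
    /proc/self/status; a debugger attached via ptrace shows up here.
    """
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("TracerPid:"):
                return int(line.split()[1])
    return 0

def debugger_attached() -> bool:
    # A single check like this is easy to bypass; a real defense layers
    # several independent checks and repeats them throughout the app's
    # lifetime, not just once at launch.
    return tracer_pid() != 0
```

A production defense would combine several such checks (ptrace self-attach, timing anomalies, debug-port scans) and vary when and where they run, precisely because attackers probe each method in turn.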
The list of security requirements doesn’t stop there. Every app needs anti-tampering features such as checksum validation and protection against binary patching, repackaging, re-signing, and execution in emulators or simulators. It would not be a stretch to assume that researching and implementing each one of these discrete features or protection methods would require at least several weeks of development, per operating system. And that’s being very generous in assuming that the mobile developer already possesses expertise in the specific security domain, which is often not the case. This gets complicated quickly, and so far we are only talking about a single protection category: runtime or dynamic protections.
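As an illustration of the checksum-validation idea mentioned above, here’s a minimal sketch in Python. Real anti-tampering runs inside the app binary itself, with the expected digest computed at build time and embedded in obfuscated form; this is only the core comparison:

```python
import hashlib

def file_digest(path: str) -> str:
    """SHA-256 of a file, read in chunks so large binaries don't need
    to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_integrity(path: str, expected_digest: str) -> bool:
    # In a real app the expected digest is embedded (obfuscated) at
    # build time and checked from several code paths, so that a single
    # binary patch can't disable the defense.
    return file_digest(path) == expected_digest
```

This also shows why such checks must themselves be obfuscated: a lone, readable comparison is exactly what an attacker patches out first.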
Next, you also need OS-level protections like jailbreak/rooting prevention to protect the app if the mobile operating system has been compromised. Jailbreaking/rooting makes mobile apps vulnerable to attack because it allows full administrative control over the OS and file system, compromising the entire security model. And merely detecting jailbreak/rooting is no longer enough, because hackers are constantly evolving their tools. Advanced jailbreak and rooting tools such as checkra1n for iOS and Magisk for Android can also hide or conceal their activity and manage superuser permissions, which are often granted to malicious apps. The net result: if you implemented jailbreak or rooting detection using an SDK or third-party library, there’s a good chance the protection is already obsolete or easily bypassed, especially if the app’s source code is not sufficiently obfuscated.
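To illustrate why naive root detection is so easily defeated, here’s a deliberately simple sketch in Python. The file paths are common Android root artifacts, chosen for illustration only:

```python
import os

# Illustrative indicator list only; real products check far more signals.
SU_PATHS = [
    "/system/bin/su",
    "/system/xbin/su",
    "/system/app/Superuser.apk",
    "/data/local/bin/su",
]

def looks_rooted() -> bool:
    """Naive root check: look for well-known `su` artifacts on disk.

    By itself this is trivially bypassed (Magisk, for example, can hide
    these paths from specific apps), which is why detection logic must
    be layered, obfuscated, and continually updated.
    """
    return any(os.path.exists(p) for p in SU_PATHS)
```

A static list like this is exactly the kind of check that an SDK shipped two years ago still performs, and exactly the kind that current hiding tools defeat.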
If you use an SDK or third-party library to implement a security protection, it’s pretty much useless inside an un-obfuscated app. Why? Because hackers can simply decompile or disassemble the app to find the SDK’s code using widely available tools like Hopper or IDA Pro, or use an open-source dynamic binary instrumentation toolkit like Frida to inject their own malicious code, modify the app’s behavior, or simply disable the security SDK.
Code obfuscation prevents attackers from understanding mobile app source code. It’s always recommended to use multiple obfuscation methods, covering both native and non-native code and libraries, as well as obfuscating the application’s logical structure or control flow. This can be accomplished, for example, by using control flow obfuscation or by renaming functions, classes, methods, and variables. And don’t forget to obfuscate debug information as well.
It’s clear from real-world data that most mobile apps lack sufficient obfuscation, obfuscating only a small portion of the app’s code, as this research study of over 1 million Android apps illustrates. As the study suggests, traditional obfuscation methods that rely on specialized compilers are simply too complex and time-consuming for most mobile developers to implement comprehensively. Instead, many developers implement a single obfuscation feature or obfuscate only a small fraction of the codebase. In the referenced research, the researchers found that most apps implemented class-name obfuscation only, which by itself is easy to defeat. To use a book metaphor, class-name obfuscation alone is like obfuscating the “table of contents” of a book while leaving all of the book’s actual pages and content untouched. Such superficial obfuscation is trivially bypassed.
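To see why class-name obfuscation alone is shallow, consider string literals: even with every class renamed, a readable API hostname in the binary gives attackers an immediate foothold. Here’s a toy Python sketch of the string-encryption idea. Real obfuscators use far stronger schemes and vary keys per string; the key and hostname below are invented for illustration:

```python
from itertools import cycle

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR data with a repeating key (symmetric: the same call
    both encodes and decodes)."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

# At build time, an obfuscation tool replaces a literal like
# "api.example.com" with its encoded form, so the plaintext never
# appears verbatim in the shipped binary.
KEY = b"\x5a\x13\x9c"  # illustrative only; real tools vary keys per string
ENCODED = xor_bytes(b"api.example.com", KEY)

def api_host() -> str:
    # Decoded only at runtime, at the moment the string is needed.
    return xor_bytes(ENCODED, KEY).decode()
```

The point is the layering: string encryption, control-flow obfuscation, and renaming each hide a different thing, which is why renaming alone leaves the “pages of the book” readable.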
Data protection and encryption
Moving on to data protection, you also need encryption to protect app and user data, and data lives in many places in a mobile app: the sandbox, memory, and the app’s own code and strings. Implementing encryption on your own means navigating lots of tricky issues: key derivation, cipher suite and encryption algorithm combinations, and key size and strength. Many apps use multiple programming languages, each of which may require different SDKs or introduce incompatibilities or dependencies on code you may not control or have access to. Data-type differences can also increase complexity and the risk of performance degradation.
Then, there is the classic problem of where you store the encryption keys. If keys are stored inside the app, they could be discovered by attackers who reverse engineer it, and once found they could be used to decrypt the data. This is why dynamic key generation is such an important feature. With dynamic key generation, encryption keys are generated only at runtime and never stored in the app or on the mobile device. Further, the keys are only used once, preventing them from being discovered or intercepted by attackers.
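The key-derivation part of this can be sketched with Python’s standard library: the key is derived in memory at runtime from secret material plus a fresh salt, and is never written to disk. This is an illustrative sketch of the idea, not a complete scheme (a real design must also define where the secret material comes from and how the salt is exchanged):

```python
import hashlib
import os

def derive_key(secret: bytes, salt: bytes, *, iterations: int = 600_000) -> bytes:
    """Derive a 256-bit key with PBKDF2-HMAC-SHA256 (Python stdlib).

    The derived key exists only in memory at runtime; nothing is
    persisted to the app bundle or device storage.
    """
    return hashlib.pbkdf2_hmac("sha256", secret, salt, iterations, dklen=32)

# Illustrative runtime flow: a fresh random salt per session yields a
# fresh key per session -- the "dynamic key generation" idea above.
salt = os.urandom(16)
session_key = derive_key(b"runtime-secret-material", salt)
```

Because each session derives its own key and discards it afterwards, there is no long-lived key embedded in the binary for a reverse engineer to extract.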
And what about data in transit? TLS alone isn’t sufficient, as there are lots of ways to compromise an app’s connection. It’s important to inspect and validate TLS sessions and certificates to ensure that all certificates and CAs are valid and authentic and that sessions are protected by industry-standard encryption. This prevents hackers from gaining control over TLS sessions. Then there’s certificate pinning, which prevents connections to compromised servers and protects the server side against connections from compromised apps (for instance, if your app has been turned into a malicious bot).
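The core of certificate pinning is a simple comparison, sketched below in Python. Production pinning usually hashes the certificate’s SubjectPublicKeyInfo rather than the whole DER blob (so pins survive routine certificate reissuance) and ships backup pins; this sketch hashes the full certificate for simplicity:

```python
import hashlib

def cert_pin_matches(cert_der: bytes, pinned_sha256_hex: str) -> bool:
    """Compare a server certificate (DER-encoded bytes) against a
    pinned SHA-256 fingerprint embedded in the app at build time.

    Simplified sketch: hashes the whole certificate. Real pinning
    typically hashes the SubjectPublicKeyInfo and carries backup pins
    so a routine certificate rotation doesn't brick the app.
    """
    return hashlib.sha256(cert_der).hexdigest() == pinned_sha256_hex
```

On a live connection you could obtain the DER bytes from `ssl.SSLSocket.getpeercert(binary_form=True)` and refuse to send any data unless the pin matches.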
Fraud, Malware, Piracy Prevention
And finally, there are anti-fraud, anti-malware, and anti-piracy protections that you can layer on top of the aforementioned baseline protections to defend against highly advanced or specialized threats. These may include features that prevent app overlay attacks, auto-clickers, hooking frameworks and dynamic binary instrumentation tools, memory injection, keyloggers, key injection, and abuse of accessibility services, all of which are common weapons in mobile fraud and malware.
Just think about the sheer amount of time and resources required to implement even a subset of the above features. And so far, I’ve only talked about the feature and function coverage required for a strong security defense. Even if you had the resources and required skill sets in-house (you don’t, but humor me), what about the operational challenges of cobbling together a defense? Let’s explore some of the implementation challenges your dev team will likely encounter.
Implementation differences between platforms and frameworks
The next problem developers face is how to implement each of those security features for Android and iOS, given the endless framework differences and incompatibilities between SDKs/libraries and the native or non-native programming languages used to build mobile apps. While software development kits (SDKs) are available for some standard security features, no SDK covers all platforms or frameworks universally.
A major challenge developers face when attempting to implement mobile app security using SDKs or open-source libraries stems from the fact that these methods all rely on source code and require changes to the application code. As a result, each of these methods is explicitly bound to the specific programming language the application is written in, and is also exposed to the various language and package dependencies between those languages and frameworks. Let’s double-click on that for a moment.
iOS apps are typically built in Objective-C or Swift, while Android apps are typically written in Java or Kotlin, along with C and C++ for native libraries. For example, let’s say you wanted to encrypt the data stored in your Android and iOS apps. If you found some 3rd party Android encryption libraries or SDKs for Java or Kotlin, they won’t necessarily work for the portion of your app that uses C or C++ code (native libraries).
In iOS, it’s the same deal. You might visit Stack Overflow and find that the commonly used CryptoKit framework for Swift won’t work with Objective-C.
So how many developers do you think it would take to implement even a fraction of the features I just described? How long would it take? Do you have enough time to implement the required security features in your existing mobile app release process?
DevOps is agile & automated, traditional security is monolithic & manual
Mobile apps are developed and released in a fast-paced, flexible, and highly automated agile paradigm. To make build and release faster and easier, most Android and iOS DevOps teams have optimized pipelines built around CI/CD and other automated tools. Security teams, on the other hand, do not have access to or visibility into DevOps systems, and most security tools are not built for agile methodologies because they rely heavily on manual programming or implementations, where an individual security feature may take longer to implement than the release schedule allows.
In an attempt to bridge these shortfalls, some organizations use code scanning and pen testing before publishing apps to public app stores to gain insight into vulnerabilities and other mobile application concerns. When vulnerabilities are discovered, organizations face a difficult decision: release the app without the necessary protections, or delay the release to give developers time to address the security issues. When this happens, the recommended security protections all too often get overlooked.
Developers aren’t lazy. The systems and tools they use for security implementation simply cannot match the rapid cadence of modern Agile / DevOps development.
Five steps for strong mobile app security and platform parity
Automation is the key to achieving security parity and strong mobile app security, in general. Here’s a five-step playbook for building mobile app security into apps during the app’s release cycle:
Step 1: Understand clearly what security outcome is desired
The development, operations, and security teams must all agree on their expectations for mobile security. There needs to be a common understanding of the security goals, and published standards such as the OWASP Mobile Top 10, the TRM Guidelines for Mobile App Security, and the OWASP Mobile Application Security Verification Standard (MASVS) give organizations a starting point. Once the goals are set and the standards chosen, all team members need to know how they will affect their workflows.
Step 2: Mobile App Security implementations must be automated
Security is immensely complex, and coding it manually is slow and error-prone. Evaluate and take advantage of automated systems that leverage AI and machine learning (ML) to integrate security into a mobile app. Typically, these are no-code platforms that build security into mobile apps automatically, commonly known as security-build systems.
Step 3: Include security as part of the development cycle – Shift-Left-Security
Shift-left security means that mobile developers build the security features at the same time as they are building the app, rather than bolting them on at the end of the cycle.
Once an automated security implementation platform is chosen, it should be integrated into the team’s continuous integration (CI) and continuous delivery (CD) processes, which will speed up the development lifecycle, and all teams — development, operations, and security — should continue to collaborate closely throughout the sprint. Additionally, organizations can come closer to achieving platform parity by creating reusable mobile security templates for the specific security features required in each Android and iOS app.
Step 4: Ensure instant validation and verification
Without a means to instantly verify that the required security features are included in the release, conflicts can arise at release meetings that may delay the publication of the app or its update. Verification and validation should be documented automatically to prevent last-minute release confusion.
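One way to automate that verification: have the security-build step emit a machine-readable manifest of the protections it applied, and fail the release pipeline if anything required is missing. The manifest schema and feature names below are hypothetical, invented for illustration:

```python
import json

# Hypothetical list of required protections a team might agree on in
# Step 1; names and schema are invented for illustration.
REQUIRED = {
    "obfuscation", "anti_debugging", "root_detection",
    "data_encryption", "certificate_pinning",
}

def missing_protections(manifest_json: str) -> set:
    """Return the required protections absent from a build's manifest.

    An empty set means the release gate passes; anything else should
    fail the CI job and be logged as the automatic audit record.
    """
    manifest = json.loads(manifest_json)
    enabled = {f["name"] for f in manifest["features"] if f.get("enabled")}
    return REQUIRED - enabled
```

Run as a CI step, a check like this produces the documented, instant verification the release meeting needs, with no manual sign-off chase.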
Step 5: Keep security development to a fixed cost
Development teams need predictability and budget certainty. By taking an automated approach to security, app development teams can reduce unexpected changes in headcount and development expenses, because it eliminates the uncertainty inherent in coding security into mobile apps manually.
The problem of security parity is a big one, but it’s part of a larger problem: a general lack of security in mobile apps, period. By embracing automation for security implementation to the same or greater degree than it has been adopted for feature and function development, mobile app development organizations can ensure that every app they release for every platform will protect end-users and the publishers themselves from hackers, fraudsters, and cybercriminals.