• Kaushal Patel

OWASP Top 10 Mobile Risks and Threats

Updated: Aug 30, 2020

The Mobile Top 10 list items are labeled M1-M10 and are similar in character to their web application counterparts.

The Mobile Top 10 helps enumerate common vulnerabilities based on the particulars and nuances of mobile environments: OS, hardware platforms, security schema, execution engines, etc. Each vulnerability type is discussed and defined on the OWASP website, but it does not take much for a developer to recognize the basic form of a given Top 10 element.



M1 – Improper Platform Usage


Misuse or failure to use basic platform guidelines, security features, and common conventions. That could be key storage, liberal or lax permissions, poorly engineered use of device biometric controls, etc.


Mobile applications are developed on platforms that offer various security features for developers to embed in their apps. Sometimes developers simply choose not to use these features, while others misuse them by not fully following the related documentation. As a result, security breaches often stem from the failure to use, or the misuse of, Android intents, iOS Touch ID, the iOS Keychain, and other security functionality. To prevent this, developers are advised to practice secure coding and configuration on the server side of the mobile application.


Case Studies: M1 - CITRIX WORX APPS , THREE IOS APPS


Best Practice to Prevent: 

  1. The developer must not allow Keychain-encrypted data to travel via a server route, and must keep the keys on one device only, so that they cannot be exploited on other servers or devices. 

  2. The developer must store the app's secrets in the Keychain, protected by a dedicated access control list. 

  3. The developer must use permissions to restrict which apps are allowed to communicate with their application. 

  4. The developer can mitigate this first OWASP Mobile Top 10 risk by defining explicit intents, thus blocking all other components from accessing information present in the intent. 
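The Keychain-with-ACL idea in point 2 can be sketched in plain Python. This is a conceptual stand-in, not the real iOS Keychain API: the class and caller names are hypothetical, and the point is only that each stored secret carries an explicit list of callers allowed to read it.

```python
# Conceptual sketch (hypothetical names): a secret store that enforces a
# per-item access control list, mimicking the idea behind iOS Keychain ACLs.
# Only explicitly allowed callers can read a given secret.

class SecretStore:
    def __init__(self):
        self._items = {}  # name -> (secret, allowed_callers)

    def put(self, name, secret, allowed_callers):
        """Store a secret with an explicit list of callers that may read it."""
        self._items[name] = (secret, set(allowed_callers))

    def get(self, name, caller):
        """Return the secret only if the caller is on the item's ACL."""
        secret, acl = self._items[name]
        if caller not in acl:
            raise PermissionError(f"{caller} may not read {name}")
        return secret

store = SecretStore()
store.put("api_token", "s3cr3t", allowed_callers=["com.example.app"])
print(store.get("api_token", "com.example.app"))  # allowed caller succeeds
```

A real app would rely on the platform keystore (iOS Keychain, Android Keystore) rather than application code; the sketch only shows the access-control shape.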



M2 – Insecure Data Storage


This concerns protections (or weaknesses) for “data at rest.” The threat is that unprotected data at rest on a lost device, or reachable by rogue apps, can be viewed, sniffed, or cracked.


This second category includes vulnerabilities that can result in data leakage. From a security point of view, applications should not be designed to store sensitive information on the end-user side, such as on SD cards, in application files, or in local SQLite databases, especially when unencrypted. This category also encompasses bad practices such as writing sensitive information into the application logs, the application's memory, or its decompiled code, which of course should never be done.


Case Studies: M2 - TINDER


Best Practices to Prevent: 

  1. For iOS, OWASP security practices recommend using deliberately vulnerable apps like iGoat to threat-model the development framework and apps. This will help iOS app developers understand how APIs handle app processes and information assets. 

  2. Android app developers can use the Android Debug Bridge (adb) shell to check the file permissions of the targeted app, and the DBMS to check database encryption. They should also use the Memory Analysis Tool and Android Device Monitor to ensure that device memory does not hold unintended data.
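One practical guard against data leaking into logs and local preference files is to scrub sensitive fields before anything is written. A minimal sketch in Python, with hypothetical key names:

```python
# Illustrative sketch (hypothetical field names): mask sensitive values
# before a record is written to logs or local storage, so a lost device or
# log scrape does not expose credentials in plaintext.

SENSITIVE_KEYS = {"password", "token", "ssn", "card_number"}

def redact(record: dict) -> dict:
    """Return a copy of the record with sensitive values masked."""
    return {
        key: "***REDACTED***" if key.lower() in SENSITIVE_KEYS else value
        for key, value in record.items()
    }

entry = {"user": "alice", "password": "hunter2", "action": "login"}
print(redact(entry))
# {'user': 'alice', 'password': '***REDACTED***', 'action': 'login'}
```

Redaction complements, but does not replace, encrypting data at rest.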


M3 – Insecure Communication


This concerns “data in transit” protections (or weaknesses). Many mobile apps fit well into client-server models, and many threat analyses make sense here. Data could be an audio or video stream (call and FaceTime tapping) as well as a “traditional” data stream. There are multiple channels (“physical layers”) as well: an IP-type channel in addition to the RF-based voice and data channels.


A mobile application usually exchanges data with several servers. When these communications over the network are not encrypted or correctly authenticated (poor handshaking, incorrect SSL versions, weak negotiation, cleartext communication, etc.), they can be intercepted by third parties. In the case of applications handling personal data (banking, health, public service…), these vulnerabilities represent a failure to comply with data privacy laws. When found in apps related to connected objects (home automation, security cameras, smart cars…), they can lead to control takeover.


Case Studies: M3 - MISAFE SMART WATCHES , KID'S SMARTWATCHES


Best Practices to Prevent:

  1. Developers should look for leakage not only in traffic between the app and the server, but also between the device that holds the app and other devices or the local network. 

  2. Applying TLS/SSL to transport channels is also one of the mobile app security best practices to consider when transmitting sensitive information and other sensitive data.

  3. Use certificates signed by a trusted CA and verify the SSL certificate chain. 

  4. Do not send sensitive data over alternate channels like MMS, SMS, or push notifications. 

  5. Apply a separate encryption layer to sensitive data before handing it to the SSL channel.


M4 – Insecure Authentication


Authentication is the check that you are who you say you are. It can be attacked via credential stuffing and session hijacking. Mobile use cases and UI/UX seem to favor shorter passwords/PINs and biometric controls, with an underlying assumption that the device is always under the primary user/owner's control, but that is very often simply not the case.


This category covers the authentication of end users and bad session management. Unlike in web apps, users of mobile apps are not always online. Hence mobile apps must be able to identify the user and maintain that identity throughout the session, both online and offline.


When cybercriminals identify a nonexistent or weak authentication scheme in a mobile app, they create malware to bypass it. Strong user authentication that leverages multiple factors prevents them from accessing users’ data.


Case Studies: M4 - GRAB ANDROID APP


Best Practices to Prevent:

  1. The app security team must study the app's authentication scheme and test it with binary attacks in offline mode to determine whether it can be exploited. 

  2. The security protocols used to test authentication in OWASP web application testing should also be applied to mobile apps. 

  3. Use online authentication methods as much as possible, just as a web browser does.

  4. Do not enable app data loading until the server has authenticated the user session. 

  5. Where local data storage is unavoidable, ensure that it is encrypted with a key derived from the user's login credentials. 

  6. Persistent authentication requests must also be stored on the server. 

  7. The security team should be careful with authorization tokens persisted on the device, since if the device is stolen, the app becomes vulnerable. 

  8. Since unauthorized physical access to devices is common, the security team must enforce regular re-authentication of user credentials from the server end. 
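Point 5 (a key derived from login credentials) and safe credential comparison can be sketched with Python's standard library. The function name and iteration count here are illustrative choices, not a mandated configuration:

```python
import hashlib
import hmac
import os

# Sketch of two practices from the list above: derive an encryption key from
# the user's login credential with PBKDF2-HMAC-SHA256 (point 5), and compare
# secrets in constant time so timing attacks leak nothing.

def derive_key(password: str, salt: bytes) -> bytes:
    """Derive a 32-byte key from a password; salt must be random per user."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

salt = os.urandom(16)
stored = derive_key("correct horse battery staple", salt)
attempt = derive_key("correct horse battery staple", salt)

# hmac.compare_digest avoids the early-exit byte comparison of `==`.
assert hmac.compare_digest(stored, attempt)
```

The salt must be stored alongside the derived key; without it the key cannot be recomputed at the next login.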


M5 – Insufficient Cryptography


Given the widespread knowledge of the importance of encryption, and the well-documented weaknesses of still commonly used algorithms like SHA-1 and MD4/MD5, it is questionable how this threat remains so high on the list. 


Insecure use of cryptography is common among mobile apps that leverage encryption. It can come from a flawed process or from encryption algorithms that are weak by nature. In both cases, someone can exploit the vulnerability to decrypt sensitive data handled by the app. Developers should make sure to apply the latest cryptographic standards that will withstand the test of time.


Case Studies: M5 - APP SENDING UNENCRYPTED USER DATA , PHILIPS HEALTH ANDROID APP


Best Practices to Prevent:

  1. To address this commonly occurring OWASP Mobile Top 10 risk, developers must choose modern encryption algorithms for their apps. The choice of algorithm takes care of the vulnerability to a great extent. 

  2. Developers who are not security experts must refrain from creating their own encryption algorithms. 
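One way to enforce point 1 in code is to whitelist modern digests and refuse the broken ones outright. A minimal sketch (the helper name and the approved set are illustrative choices):

```python
import hashlib

# Sketch (hypothetical helper): allow only modern digest algorithms and
# refuse known-broken ones such as MD5 and SHA-1, instead of relying on
# every call site to pick correctly.

APPROVED = {"sha256", "sha384", "sha512"}

def digest(data: bytes, algorithm: str = "sha256") -> str:
    """Hash data with an approved algorithm; reject anything else."""
    if algorithm.lower() not in APPROVED:
        raise ValueError(f"{algorithm} is not an approved algorithm")
    return hashlib.new(algorithm, data).hexdigest()

print(digest(b"hello"))        # SHA-256 hex digest
# digest(b"hello", "md5")      # would raise ValueError
```

The same whitelist pattern applies to cipher and key-exchange choices in a real TLS or crypto configuration.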


M6 – Insecure Authorization


With respect to mobile specifically, this is commonly talked about when an app on your phone wants access to everything on your phone, such as a game wanting access to your contacts, or a Snapchat-type app wanting access to your GPS, contacts, and keychain. Some authorization requests may make sense, but for many apps, you may not want to give them complete access to everything on your phone. 


Some apps, after authenticating users, grant them certain authorizations by default. These authorizations are sometimes mistakenly too broad, providing users with rights they should not have. If a cybercriminal gains access to privileged rights in an application, it can result in unlawful access to sensitive information, the deletion of entire systems, or even the takeover of connected objects. The spectrum of authorizations granted to users should be assessed before apps are released.


Case Studies: M6 - VIPER SMART START , PANDORA


Best Practices to Prevent:

  1. The QA team must regularly test user privileges by running sensitive commands with low-privilege session tokens. 

  2. The developer must note that user authorization schemes often go wrong in offline mode.

  3. The best way to prevent this risk is to run authorization checks for the permissions and roles of an authenticated user on the server, instead of on the mobile device. 
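The server-side check in point 3 can be sketched as a simple role-to-permission lookup. The roles and actions here are hypothetical; the point is that the server, not the client, makes the decision:

```python
# Sketch of a server-side authorization check (hypothetical roles/actions):
# the server decides whether the authenticated user's role permits the
# requested action, regardless of what the mobile client claims.

PERMISSIONS = {
    "admin": {"read", "write", "delete"},
    "user": {"read"},
}

def authorize(role: str, action: str) -> bool:
    """Return True only if the role explicitly grants the action."""
    return action in PERMISSIONS.get(role, set())

assert authorize("admin", "delete")
assert not authorize("user", "delete")   # low-privilege session is refused
assert not authorize("unknown", "read")  # unrecognized roles get nothing
```

Default-deny (the empty set for unknown roles) is the important property: any role or action not explicitly granted is refused.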


M7 – Client Code Quality


This is what most of us think of as AppSec (but hopefully you are detecting that AppSec and DevSecOps are MUCH more than this). All the application security testing lives here.


This category includes vulnerabilities like buffer overflows, format string vulnerabilities, and various other code-level mistakes that allow code to be executed on mobile devices. In the case of a buffer overflow, for instance, it is possible to write into areas known to hold executable code and replace it with malicious code, or to selectively overwrite data pertaining to the program's state, thereby causing behavior the original programmer did not intend. Most code issues can be fixed with good practices. Having code patterns across your organization that are easy to read and come with clear documentation is a good start to reducing this risk.


Case Studies: M7 - WHATSAPP


Best Practices to Prevent:

  1. According to the OWASP secure coding practices, code-quality issues should be fixed in the mobile code itself rather than patched over on the server side. Developers must note that bad coding on the server side is very different from poor coding at the client level: weak server-side controls and weak client-side controls should each be given separate attention.

  2. The developer must use third-party static analysis tools to identify buffer overflows and memory leaks. 

  3. The team must create a third-party libraries list and check it for newer versions periodically. 

  4. Developers should treat all client input as untrusted and validate it, irrespective of whether it comes from users or from the app. 
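Point 4 in practice: validate length and character set before an untrusted value reaches any lower-level code. A minimal sketch (the field, regex, and limits are illustrative choices):

```python
import re

# Sketch of input validation for untrusted client data: constrain length
# and character set up front, which also blunts overflow- and
# format-string-style bugs in native layers that later consume the value.

USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")

def validate_username(raw: str) -> str:
    """Return the value only if it matches the allowed pattern."""
    if not isinstance(raw, str) or not USERNAME_RE.fullmatch(raw):
        raise ValueError("invalid username")
    return raw

assert validate_username("alice_01") == "alice_01"
# validate_username("a" * 10_000)   # would raise ValueError
# validate_username("%n%n%n")       # would raise ValueError
```

Allow-listing (describing what is valid) is generally safer than block-listing known-bad characters.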


M8 – Code Tampering


This is a cousin of supply chain weakness and covers things like reverse engineering your app so it can be manipulated into alternate use cases. It also includes malware (distributed through Google Play and the Apple App Store) and root-kitted devices. This specific class of issues will enjoy a long and persistent/pernicious run on the Mobile Top 10 (and in IoT, and in any use case that lacks a strong DevOps CI/CD refresh component).


Tampering consists of duplicating an application, adding one or several back doors to its code, re-signing it, and publishing it to third-party app stores. Tampered apps are often referred to as malicious clones, and usually target banking and other very popular apps. Through the back doors, hackers are able to intercept data and sometimes even impersonate official apps to communicate with companies’ servers. To prevent this risk, developers should use anti-tampering solutions and build apps with the capability to detect tampering.


Case Studies: M8 - POSTBANK FINANZASSISTENT APP


Best Practices to Prevent:

  1. The developers must make sure that the app is able to detect code changes at runtime. 

  2. The build.prop file must be checked for the presence of an unofficial ROM on Android, and to determine whether the device is rooted. 

  3. The developer must use checksums and evaluate digital signatures to see whether file tampering has taken place. 

  4. The coder can make sure that the app's keys, code, and data are removed once tampering is detected. 
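The checksum check in point 3 is straightforward to sketch: hash the file and compare the digest against a known-good value recorded at build time. The function name here is illustrative:

```python
import hashlib

# Sketch of checksum-based tamper detection: compute a file's SHA-256
# digest and compare it with the known-good digest recorded by the build
# pipeline; any mismatch means the file was modified after signing.

def file_sha256(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage: if file_sha256("app_payload.bin") != EXPECTED_DIGEST, treat the
# installation as tampered and refuse to run (or wipe keys, per point 4).
```

Note that a checksum shipped inside the same binary can itself be patched; digital signatures verified against a key outside the attacker's control are the stronger complement.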


M9 – Reverse Engineering


In the future, this may become part of everything else on the list. You have seen this in “M8 - Code Tampering.” You may have controls in place via DevOps (Assembla, Git, etc.) and physical security/data exfiltration programs (cleared workspaces, NDAs, policies, etc.). You will talk about it in supply chain meetings. At some level, in some form, it is a precursor or fundamental starting point for all exploit and vulnerability efforts. Maybe not a deep code review, but at some level, the “bad guys” will be looking at your work in a black/white/grey box type of way.


To reverse engineer an app, an attacker analyzes its binary code to determine its source code, libraries, algorithms, and any other assets. By providing deeper knowledge of the app, this technique enables hackers to identify its flaws and exploit it more easily. Reverse engineering can result in the theft of intellectual property, information about backend servers, cryptographic ciphers, etc. To minimize this risk, developers must write complex code and use obfuscation.


Case Studies: M9 - POKEMON GO


Best Practices to Prevent:

  1. The best way to safeguard an app against this risk, according to OWASP mobile security, is to use the same tools hackers would use for reverse engineering. 

  2. The developer must also obfuscate the source code so that it becomes difficult to read and reverse engineer. 
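A toy illustration of one obfuscation idea: XOR-encoding string constants so that endpoint names and secrets do not appear verbatim in the binary's string table. This is a teaching sketch only; real apps would use a proper obfuscator (e.g., ProGuard/R8 on Android), and XOR with a fixed key offers no real security:

```python
# Toy sketch of string obfuscation (hypothetical scheme): XOR-encode string
# constants so they do not show up verbatim when an attacker runs `strings`
# on the binary. Trivially reversible -- this raises effort, not security.

KEY = 0x5A

def obfuscate(text: str) -> bytes:
    return bytes(b ^ KEY for b in text.encode())

def deobfuscate(blob: bytes) -> str:
    return bytes(b ^ KEY for b in blob).decode()

hidden = obfuscate("https://api.example.com")
assert b"api.example.com" not in hidden       # plain string no longer visible
assert deobfuscate(hidden) == "https://api.example.com"
```

Industrial obfuscators go much further: renaming symbols, flattening control flow, and encrypting resources, but the goal is the same: raise the cost of reading the decompiled code.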


M10 – Extraneous Functionality


Think the principle of least privilege here. Lock down and deny access to everything except what is absolutely and minimally needed to get the job done. This one will probably be known to you as developers’ back doors (walk through walls cheats) or maybe security controls bypass (SE Linux OFF type things), chatty logs, or port 22/23 up, that accidentally get left in production builds.


During development cycles, developers often include hidden backdoors or security control bypasses in their apps to detect and correct flaws. These functionalities are not supposed to remain in the production environment, but are sometimes accidentally left in. When identified by hackers, these features can be exploited to access sensitive data or escalate privileges. Before releasing an application, developers need to review configurations and should disable debug logs.


Case Studies: M10 - WI-FI FILE TRANSFER


Best Practices to Prevent:

  1. Ensure that there is no test code in the final build.

  2. Ensure there are no hidden switches in the configuration settings. 

  3. Logs must not contain descriptions of back-end server processes.

  4. Ensure that full system logs are not exposed to apps by the OEMs.

  5. The API endpoints must be well documented.
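The "hidden switch" problem in point 2 can be sketched as a request handler whose debug path simply does not respond unless an explicit debug flag is set. The endpoint path and flag are hypothetical; in a real release build the debug branch would ideally be stripped out entirely by the build pipeline, not merely gated:

```python
# Sketch (hypothetical endpoint and flag): a debug-only path that answers
# nothing in production. Production code never passes debug=True, and a
# release build should remove the branch altogether rather than gate it.

def handle_request(path: str, debug: bool = False) -> str:
    if path == "/debug/dump":
        return "internal state dump" if debug else "404 Not Found"
    return "200 OK"

assert handle_request("/debug/dump") == "404 Not Found"   # production view
assert handle_request("/profile") == "200 OK"
```

Stripping at build time is stronger than runtime gating because a flag left in a shipped binary is itself a hidden switch an attacker can flip.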


This blog is for beginners in cyber security. Please subscribe for more updates.

© 2021 by CyberMetrix