How Attackers Could Hijack Your Android Camera to Spy on You https://checkmarx.com/blog/how-attackers-could-hijack-your-android-camera/ Tue, 19 Nov 2019 07:17:09 +0000 https://www.checkmarx.com/?p=29842 In today’s digitally connected society, smartphones have become an extension of us. Advanced camera and video capabilities in particular play a massive role in this, as users can quickly take out their phones and capture any moment in real time with the simple click of a button. However, this presents a double-edged sword: these mobile devices are constantly collecting, storing, and sharing various types of data – with and without our knowledge – making our devices goldmines for attackers.
In order to better understand how smartphone cameras may be opening users up to privacy risks, the Checkmarx Security Research Team dug into the applications that control these cameras to identify potential abuse scenarios. Having a Google Pixel 2 XL and Pixel 3 on hand, our team began researching the Google Camera app [1], ultimately finding multiple concerning vulnerabilities stemming from permission bypass issues. After further digging, we also found that these same vulnerabilities impact the camera apps of other smartphone vendors in the Android ecosystem – namely Samsung – presenting significant implications for hundreds of millions of smartphone users.
In this blog, we’ll explain the vulnerabilities discovered (CVE-2019-2234), provide details of how they were exploited, explain the consequences, and note how users can safeguard their devices. This blog is also accompanied by a proof-of-concept (PoC) video, as well as a technical report of the findings that were shared with Google, Samsung, and other Android-based smartphone OEMs.

Google & Samsung Camera Vulnerabilities

After a detailed analysis of the Google Camera app, our team found that by manipulating specific actions and intents [2], an attacker can control the app to take photos and/or record videos through a rogue application that has no permissions to do so. Additionally, we found that certain attack scenarios enable malicious actors to circumvent various storage permission policies, giving them access to stored videos and photos, as well as GPS metadata embedded in photos, to locate the user by taking a photo or video and parsing the proper EXIF data [3]. This same technique also applied to Samsung’s Camera app.
In doing so, our researchers determined a way to enable a rogue application to force the camera apps to take photos and record video, even if the phone is locked or the screen is turned off. Our researchers could do the same even while a user was in the middle of a voice call.
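To illustrate the GPS-metadata angle mentioned above: EXIF stores latitude and longitude as degrees/minutes/seconds rationals plus an N/S or E/W reference. A minimal sketch (in Python, with made-up coordinates) of the conversion an attacker's parsing script would need:

```python
# Hypothetical sketch of the EXIF GPS math; coordinates are made up.
# EXIF GPSLatitude/GPSLongitude are (degrees, minutes, seconds) triples
# paired with a GPSLatitudeRef/GPSLongitudeRef of "N"/"S"/"E"/"W".

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert an EXIF-style (D, M, S) triple to signed decimal degrees."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    # South and West references are negative.
    return -value if ref in ("S", "W") else value

# Example: 40° 26' 46.8" N, 79° 58' 55.9" W (illustrative values)
lat = dms_to_decimal(40, 26, 46.8, "N")
lon = dms_to_decimal(79, 58, 55.9, "W")
```

With the decimal pair in hand, pinning a freshly taken photo to a point on a map is trivial.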

The Implications

The ability for an application to retrieve input from the camera, microphone, and GPS location is considered highly invasive by Google itself. As a result, the Android Open Source Project (AOSP) defines a specific set of permissions that an application must request from the user. Given this, Checkmarx researchers designed an attack scenario that circumvents this permission policy by abusing the Google Camera app itself, forcing it to do the work on behalf of the attacker.
It is known that Android camera applications usually store their photos and videos on the SD card. Since photos and videos are sensitive user information, an application needs special permissions to access them: storage permissions. Unfortunately, storage permissions are very broad, granting access to the entire SD card. There are a large number of applications with legitimate use cases that request access to this storage yet have no special interest in photos or videos. In fact, it is one of the most commonly requested permissions observed.
This means that a rogue application can take photos and/or videos without specific camera permissions, and it only needs storage permissions to take things a step further and fetch photos and videos after being taken. Additionally, if the location is enabled in the camera app, the rogue application also has a way to access the current GPS position of the phone and user.
Of course, a video also contains sound. It was interesting to prove that a video could be initiated during a voice call. We could easily record the receiver’s voice during the call and we could record the caller’s voice as well.

A PoC of a Worst-Case Scenario

To properly demonstrate how dangerous this could be for Android users, our research team designed and implemented a proof-of-concept app that doesn’t require any special permission beyond the basic storage permission. Simulating an advanced attacker, the PoC had two working parts: the client-part that represents a malicious app running on an Android device, and a server-part that represents an attacker’s command-and-control (C&C) server.
The malicious app we designed for the demonstration was nothing more than a mockup weather app that was malicious by design. When the client starts the app, it creates a persistent connection back to the C&C server and waits for commands and instructions from the attacker, who operates the C&C server’s console from anywhere in the world. Even closing the app does not terminate the persistent connection.
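The command-and-control pattern described above boils down to mapping server-issued commands to local handlers. A minimal, illustrative sketch of that dispatch loop (not the actual PoC code; command names and handlers are hypothetical):

```python
# Illustrative sketch of a C&C client's command dispatch, not the real PoC.
# The command strings and handler behavior below are hypothetical.

def handle_photo():
    # In the real attack this would abuse the camera app via an intent.
    return "photo_taken"

def handle_video():
    return "video_recorded"

HANDLERS = {
    "TAKE_PHOTO": handle_photo,
    "RECORD_VIDEO": handle_video,
}

def dispatch(command):
    """Run the handler for a server-issued command, if one is registered."""
    handler = HANDLERS.get(command)
    return handler() if handler else "unknown_command"
```

A real client would read `command` strings off the persistent socket and loop over `dispatch` until the connection drops.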
The operator of the C&C console can see which devices are connected to it, and perform the following actions (among others):

  • Take a photo on the victim’s phone and upload (retrieve) it to the C&C server
  • Record a video on the victim’s phone and upload (retrieve) it to the C&C server
  • Parse all of the latest photos for GPS tags and locate the phone on a global map
  • Operate in stealth mode whereby the phone is silenced while taking photos and recording videos
  • Wait for a voice call and automatically record:
    • Video from the victim’s side
    • Audio from both sides of the conversation

Note: The wait for a voice call was implemented via the phone’s proximity sensor that can sense when the phone is held to the victim’s ear. A video of successfully exploiting the vulnerabilities was taken by our research team and can be viewed here. Our team tested both versions of Pixel (2 XL / 3) in our research labs and confirmed that the vulnerabilities are relevant to all Google phone models.
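The "wait for a voice call" trigger reduces to a simple condition: record only when a call is active and the proximity sensor reports the phone at the ear. A hypothetical sketch of that decision logic (the function name and inputs are ours, not the PoC's):

```python
# Hypothetical sketch of the call-recording trigger described above.
# On a real device the inputs would come from telephony state callbacks
# and the proximity sensor; here they are plain booleans.

def should_record(call_active, proximity_near):
    """Return True when the PoC would silently start recording."""
    return call_active and proximity_near
```

This is why the recording starts only once the victim actually holds the phone to their ear, minimizing the chance they notice the camera activity on screen.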

Android Vulnerability: Watch the Explainer Video

Summary of Disclosure and Events

When the vulnerabilities were first discovered, our research team ensured that they could reproduce the process of easily exploiting them. Once that was confirmed, the Checkmarx research team responsibly notified Google of their findings.
Working directly with our research team, Google confirmed our suspicion that the vulnerabilities were not specific to the Pixel product line. Google informed us that the impact was much greater and extended into the broader Android ecosystem, with additional vendors such as Samsung acknowledging that these flaws also affect their camera apps and beginning to take mitigating steps.

Google’s Response

“We appreciate Checkmarx bringing this to our attention and working with Google and Android partners to coordinate disclosure. The issue was addressed on impacted Google devices via a Play Store update to the Google Camera Application in July 2019. A patch has also been made available to all partners.”

Mitigation Recommendation

For proper mitigation and as a general best practice, ensure you update all applications on your device.

Timeline of Disclosure

  • Jul 4, 2019 – Submitted a vulnerability report to Android’s Security team at Google
  • Jul 4, 2019 – Google confirmed receiving the report
  • Jul 4, 2019 – A PoC “malicious app” was sent to Google
  • Jul 5, 2019 – A PoC video of an attack scenario was sent to Google
  • Jul 13, 2019 – Google set the severity of the finding as “Moderate”
  • Jul 18, 2019 – Sent further feedback to Google
  • Jul 23, 2019 – Google raised the severity of the finding to “High”
  • Aug 1, 2019 – Google confirmed our suspicion that the vulnerabilities may affect other Android smartphone vendors and issued CVE-2019-2234
  • Aug 18, 2019 – Multiple vendors were contacted regarding the vulnerabilities
  • Aug 29, 2019 – Samsung confirmed they are affected
  • Nov 2019 – Both Google and Samsung approved the publication

Note: This publication was coordinated with Google and Samsung after their confirmation of a fix being released. Please refer to Google for information regarding the fixed version of the Android OS and Google Camera app.

Final Words

The professionalism shown by both Google and Samsung does not go unnoticed. Both were a pleasure to work with due to their responsiveness, thoroughness, and timeliness.
This type of research activity is part of our ongoing efforts to drive the necessary changes in software security practices among vendors that manufacture consumer-based smartphones and IoT devices, while bringing more security awareness among the consumers who purchase and use them. Protecting the privacy of consumers must be a priority for all of us in today’s increasingly connected world.
References

[1] https://en.wikipedia.org/wiki/Google_Camera
[2] https://developer.android.com/guide/components/intents-filters
[3] https://en.wikipedia.org/wiki/Exif

Breaking Down the OWASP API Security Top 10 (Part 1) https://checkmarx.com/blog/breaking-down-owasp-api-security-top10-part1/ Wed, 06 Nov 2019 07:41:20 +0000 https://www.checkmarx.com/?p=29756 As a result of a broadening threat landscape and the ever-increasing usage of APIs, the OWASP API Security Top 10 Project was launched. From the start, the project was designed to help organizations, developers, and application security teams become more aware of the risks associated with APIs. This past September, the OWASP API Security Top 10 release candidate (RC) was finalized and published by OWASP.
In my previous blog, I provided a high-level view of the interaction between API endpoints, modern apps, and backend servers, in addition to how they’re different from their traditional browser-based counterparts. I also discussed why this project was so important to the contributors and industry overall. In this blog, I aim to clarify the first five (5) risks by highlighting some of the possible attack scenarios to help organizations and end-users understand the dangers associated with deficient API implementations. The following discussion follows the same order as found in the OWASP API Security Top 10.
API1:2019 – Broken Object Level Authorization: Attackers can exploit API endpoints that are vulnerable to broken object level authorization by manipulating the ID of an object that is sent within the client request. What this means is that the client can request information from an API endpoint that they are not supposed to have access to. This attack normally leads to unauthorized information disclosure, modification, or destruction of data.
Example Attack Scenario:
Say, for instance, there is an e-commerce platform that provides financial and hosted services to a group of different online stores (shops). The platform provides an API used to access revenue charts for each of its hosted stores, and each store should only have access to its own revenue charts. However, while inspecting the client request from a single store that wants to access its own revenue charts, an attacker can identify the API endpoint for those revenue charts and the URL in use, for example /shops/{shop name}/revenue_data.json. Using the names of other stores hosted on the platform, an attacker can create a simple script to modify the {shop name} object in subsequent requests and gain access to the revenue charts of every other store.
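The enumeration step in that scenario can be sketched in a few lines against a mocked, authorization-free backend. The shop names and data below are made up; only the endpoint shape (/shops/{shop name}/revenue_data.json) follows the example:

```python
# Sketch of the BOLA enumeration above, against a mocked backend that
# performs no object-level authorization. All shop names/figures are
# hypothetical.

MOCK_BACKEND = {  # stands in for the server-side data store
    "/shops/alice-store/revenue_data.json": {"revenue": 1200},
    "/shops/bob-store/revenue_data.json": {"revenue": 3400},
}

def fetch(path, authenticated_as):
    # Vulnerable handler: never checks that `authenticated_as`
    # actually owns the shop named in `path`.
    return MOCK_BACKEND.get(path)

# The attacker, logged in as alice-store, simply walks other shop names:
loot = {
    shop: fetch(f"/shops/{shop}/revenue_data.json", authenticated_as="alice-store")
    for shop in ("alice-store", "bob-store")
}
```

The fix is exactly the check the sketch omits: the handler must verify that the authenticated principal owns the object identified in the request before returning it.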
API2:2019 – Broken Authentication: Unlike the authorization issues discussed above, authentication is a complex and often confusing mechanism where APIs are concerned. Since authentication endpoints are exposed to anyone by design, the endpoints responsible for user authentication must be treated differently from regular API endpoints and implement extra layers of protection against credential stuffing attempts, as well as brute force password and token guessing attacks.
Example Attack Scenario:
Suppose that an attacker obtained a list of leaked username/password combinations as the result of a data breach at another organization. If the API endpoint handling authentication does not implement brute force or credential stuffing protections such as CAPTCHA, rate limiting, or account lockout, the attacker can repeatedly attempt to gain access using the list of username/password combinations to determine which combination(s) work.
API3:2019 – Excessive Data Exposure: By design, API endpoints often expose sensitive data since they frequently rely on the client app to perform data filtering. Attackers exploit this issue by sniffing the traffic to analyze the responses, looking for sensitive data that should not be exposed. This data is supposed to be filtered on the client app, before being presented to the user.
Example Attack Scenario:
Imagine that an IoT-based camera surveillance system allows administrators to add a newly hired security guard as a system user, and the administrator wants to ensure the new user only has access to certain cameras. These cameras are accessible via a mobile app that the security guard uses while at work. The guard’s mobile app makes an API request to an endpoint in order to receive data about the cameras, and the system relies on the mobile app to filter which cameras the guard has access to. Although the mobile app only shows the cameras the guard can access, the actual API response contains a full list of all the cameras. Using sniffed traffic, an attacker can manipulate the API request to show all cameras, bypassing the filtering on the mobile app.
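The flaw in that scenario is visible in a few lines once the server and client halves are put side by side. A sketch with hypothetical field names:

```python
# Sketch of API3 (Excessive Data Exposure): the API returns everything
# and trusts the client to filter. Camera records are hypothetical.

ALL_CAMERAS = [
    {"id": 1, "assigned_to": "guard-7"},
    {"id": 2, "assigned_to": "guard-9"},
]

def api_list_cameras(user):
    # Vulnerable: returns the full list regardless of `user`.
    return ALL_CAMERAS

def client_side_filter(cameras, user):
    # The only "authorization" happens here, on the untrusted client.
    return [c for c in cameras if c["assigned_to"] == user]

raw = api_list_cameras("guard-7")           # what a traffic sniffer sees
shown = client_side_filter(raw, "guard-7")  # what the app displays
```

Anyone sniffing the traffic sees `raw`, not `shown`; the correct design filters in `api_list_cameras` on the server, before the response leaves it.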
API4:2019 – Lack of Resources & Rate Limiting: It is common to find API endpoints that do not implement any sort of rate limiting on the number of API requests, or that do not limit the types of requests that can consume considerable network, CPU, memory, and storage resources. The amount of resources required to satisfy a request greatly depends on user input and endpoint business logic. Attackers exploit these issues, causing denial-of-service conditions and associated endpoint outages.
Example Attack Scenario:
Let’s say that an attacker wants to cause a denial-of-service outage to a certain API that contains a very large list of users. The users’ list can be queried, but the application limits the number of users that can be displayed to 100 users. A normal request to the application would look like this: /api/users?page=1&size=100. In this case, the request would return with the first page and the first 100 users. If the attacker changed the size parameter from 100 to 200000, it could cause a performance issue on the backend database, since the size parameter in use is so large. As a result, the API becomes unresponsive and is unable to handle further requests from this or any other client.
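The fix for the oversized `size` parameter in that example is a server-side clamp; trusting the client-supplied value is exactly what the attacker exploits. A sketch with a hypothetical maximum:

```python
# Sketch of a server-side clamp for the `size` query parameter in the
# example above. MAX_PAGE_SIZE is a hypothetical limit.

MAX_PAGE_SIZE = 100

def effective_page_size(requested):
    """Clamp a client-supplied page size to a safe range."""
    return max(1, min(int(requested), MAX_PAGE_SIZE))

safe = effective_page_size("200000")  # the attacker's value, neutralized
```

With the clamp in place, `size=200000` costs the backend no more than `size=100`, and the denial-of-service vector disappears.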
API5:2019 – Broken Function Level Authorization: Although different from API1 above, exploitation of this issue requires the attacker to send API requests to endpoints that they should not have access to, yet are exposed to anonymous users or regular, non-privileged users. These types of flaws are often easy to find and can allow attackers to access unauthorized functionality; administrative functions, for example, are prime targets for this type of attack.
Example Attack Scenario:
To illustrate this further, imagine that during the registration process to a certain application that only allows invited users to join, the mobile app triggers an API request to GET /api/invites/{invite_guid}. GET is a standard HTTP method used to request information from a particular resource. In this case, the response to the GET contains details about the invite, including the user’s role and email address.
Now, say that an attacker duplicated the request and manipulated the HTTP method by changing GET to POST. POST is an HTTP method used to send information to create or update a resource. The request would look like this: POST /api/invites/new/{“email”:”hugo@malicious.com”,”role”:”admin”}. In this case, the attacker easily exploits this issue and sends himself an email invite to create an admin account.
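The missing check behind that exploit is a role test inside the privileged handler itself. A sketch of what the fixed endpoint must do (handler names and status codes are ours; only the invite scenario follows the example):

```python
# Sketch of function-level authorization for the invite example above.
# Handler names and the role model are hypothetical.

def get_invite(user_role):
    # Reading an invite is available to any invited user in this sketch.
    return 200

def create_invite(user_role):
    # Fixed handler: creating invites is an admin-only function,
    # enforced on the server regardless of which HTTP method arrives.
    if user_role != "admin":
        return 403
    return 201
```

The vulnerable version simply omits the `if` branch, so flipping GET to POST is all the attacker needs; the authorization decision must live server-side, per function, not in whatever the client happens to call.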
In the context of the five risks above, one could easily imagine many similar attack scenarios. Those provided are just examples of the nearly unlimited possibilities when attacking vulnerable API implementations. Hopefully you can see that the risks above are primarily caused by errors or oversights. I believe these risks could easily be managed, or nearly eliminated, if organizations improved their secure coding practices, especially in the way they utilize APIs.

The Hacker vs. Struts 2 Game – It Appears it has No Ending https://checkmarx.com/blog/hacker-vs-struts2-game-has-no-ending/ Wed, 30 Oct 2019 13:40:15 +0000 https://www.checkmarx.com/?p=29650 If you’re active in the cybersecurity industry, you likely heard the buzz about the Struts 2 Java framework in 2017. In short, hackers were able to exploit a vulnerable application based on Struts 2 and stole hundreds of millions of PII records.
The vulnerability (CVE-2017-5638) made a lot of noise, but like almost any critical vulnerability, it was patched and everyone moved on. However, it ultimately proved to be the tip of the iceberg, since there are other, lesser-known vulnerabilities with equally serious consequences. The root cause of these vulnerabilities is still there, and even the latest patch does not guarantee the complete safety of a Struts 2-based application.

In this post, I will show you why Struts 2 is a dangerous toy to play with and where to poke to break it.
In the graphic below, the fourth column from the left shows the number of code execution vulnerabilities in Struts, accounting for nearly 33% of the issues discovered. No other Java framework has this many. What’s more, all of them share the same root cause—the framework’s architecture itself. Let’s take a closer look.

Source: CVE Details

Struts Architecture

There are two major architectural solutions stitching together model, views, and controllers:  Value Stack and OGNL. Value Stack is the stack of all the objects used by an application to respond to a user request (e.g. application configuration, security settings, data, etc.). Objects in the Value Stack are manipulated using Object Graph Navigation Language (OGNL), which is an expression language handy for getting and setting Java object properties.

OGNL

What is the OGNL library capable of? In the following example, OGNL calls a method of a given class, and can also do something much worse.

As you can see, an OGNL expression can contain an operating system command (e.g., cmd.exe, calc.exe, etc.). Therefore, if you allow the execution of malicious input as OGNL expressions, then you are in trouble.
Luckily, Struts 2 doesn’t expose OGNL directly. It has several layers above it, as shown below.

At the top of the pyramid resides an entrance point to the OGNL expression. The example below shows a view that retrieves the username from the session object using “#session.user.name” OGNL expression.  It is an intentional and legitimate way of using OGNL in Struts 2. However, malicious data may be evaluated as an OGNL expression unintentionally because of a vulnerability in the application or framework itself.
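The navigation behind an expression like “#session.user.name” is easy to picture in any language: each dot hop is a property lookup on the previous result. A Python analogy (not OGNL itself; the object graph is made up):

```python
# Python analogy of OGNL object-graph navigation. Each dotted hop in
# "user.name" becomes an attribute lookup, just as OGNL resolves
# "#session.user.name" against the Value Stack.

class Obj:
    """Tiny bag-of-attributes stand-in for Value Stack objects."""
    def __init__(self, **kw):
        self.__dict__.update(kw)

def navigate(root, path):
    """Resolve a dotted path against an object graph."""
    node = root
    for hop in path.split("."):
        node = getattr(node, hop)
    return node

session = Obj(user=Obj(name="alice"))
```

The power and the danger are the same thing: any object reachable from the root, including configuration and security objects, is one dotted path away.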

The next layer of the pyramid is the framework’s implementation of the OGNL library. The most interesting part for us is the security mechanism, because it defines which OGNL expressions are allowed to be evaluated.
Finally, after passing the security checks, the expression is evaluated by the Java OGNL library.

Recipe of a Successful Attack

Therefore, you need two key components to evaluate arbitrary OGNL expressions in a Struts 2-based application:

  1. Find an injection point leading to the evaluation
  2. Bypass the security mechanism

Injection Points

How can a malicious OGNL expression be evaluated? There are two options:
The direct evaluation method happens when the user input ends up in an argument of a method that evaluates the input as an OGNL expression. The following framework code took part in the notorious CVE-2017-5638 by evaluating an error message tainted with a malicious OGNL expression.

The double evaluation method consists of two evaluation calls, making it harder to find. As demonstrated in the following example, the first call populates a variable with a malicious expression but does not evaluate it yet, while the second call retrieves the tainted variable value and evaluates it. Double evaluation is usually unintentional. For example, it may combine an evaluation introduced by a developer with an evaluation inside the framework code.
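The two-pass pattern is language-agnostic, so a Python analogy conveys the shape of the bug (the `str.format` mechanics stand in for OGNL evaluation; the secret and template are made up):

```python
# Python analogy of double evaluation. Pass 1 stores attacker input in
# a template verbatim; pass 2 evaluates the tainted template, leaking
# data the attacker never saw directly.

SECRET = "s3cr3t-token"

def render_twice(user_input):
    # Pass 1: user input lands inside a template, unevaluated.
    template = "Hello, " + user_input
    # Pass 2: the tainted template itself is evaluated.
    return template.format(secret=SECRET)

leaked = render_twice("{secret}")
```

A benign input like "Bob" renders harmlessly, which is why this class of bug survives testing: nothing looks wrong until the input itself is an expression.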

Security Mechanism Bypasses

Now that you know what injection points look like, let us imagine that the payload is evaluated, allowing us to play with the security mechanism.
You will see its evolution from an attacker perspective starting from Struts 2 version 2.3.14.1 (2013) up to the current state in versions 2.5.20 or 2.3.37. The security mechanism in versions before 2.3.14.1 is trivial; afterward, it was continuously upgraded.
Now comes the best part where we are going to review the payloads that bypass the security mechanism and upgrades made against them. The general approach of all the following payloads is to first remove the existing restrictions, and then run an OS command.

The first payload

The security mechanism in its initial state restricted access to static method calls. For some reason, the mechanism itself was accessible to OGNL expressions. The following payload first permits static method calls, and then calls Runtime.getRuntime().exec() with the OS command.

The fix in 2.3.14.2 made allowStaticMethodAccess immutable. What if instead of a static method, we try to generate an object dynamically? The next payload does exactly this to bypass the security mechanism.

The second payload


The fix in 2.3.20 prohibits usage of constructors and introduces blacklists of classes and packages restricted for OGNL.

The third payload

The security mechanism disallows static methods, but allows static objects. The researchers found a static object in the OGNL library containing the security mechanism in its default state (i.e., zero security settings). The following payload sets the current security mechanism to the default state and runs a command.

The fix in 2.3.30 and 2.5.2 finally deprived OGNL expressions of access to the security mechanism by blacklisting ognl.MemberAccess and ognl.DefaultMemberAccess classes.

The fourth payload

OGNL can call methods of any object in the given context. It turned out that the blacklists live in the context of an evaluated expression, can be reached through a chain of several objects, and can be flushed using the clear() method. When the blacklists are empty, we can use the trick from the previous payload to run a command.

The fix in 2.3.32 and 2.5.10.1 blocked the path to OgnlUtils by making com.opensymphony.xwork2.ActionContext unavailable to OGNL. In addition, the blacklists were made immutable, making the clear() method ineffective.

The fifth payload

ValueStack is a complex set of intertwined objects where you can reach the same object in several ways, starting from the Value Stack root. The next payload accesses OgnlUtils through a different set of objects. The blacklists were made immutable incorrectly by the previous fix, and they can still be updated using a setter. The payload consists of two consecutive requests: the first flushes the blacklists, and the second runs a command. It’s possible to break the payload into two requests because OgnlUtils is a singleton and keeps its value until the application restarts.

No More CVEs, but the Vulnerabilities May Still Be There

The fix in 2.3.35 and 2.5.17 addressed these flaws and put Struts 2 OGNL injections on pause. Blacklists are destined to be bypassed, however, so we can expect the next episode of the cat-and-mouse game between the framework developers and attackers.
Currently, the framework does not have known injection points and code execution exploits. However, it does not make the Struts 2-based application safe because developers may introduce an injection point themselves. For example, this can occur by unintentionally chaining a benign evaluation inside framework code, with one more evaluation in their own code. The following code snippet is an example of how the application based on the latest version of Struts 2 is still exploitable when there is an injection point.

First, the “createUser” action creates an object of class User and saves it to a session variable. The default value of the “isAdmin” property of the “User” class is false.
Later, the “inject” action loads the “User” object from the session, evaluates user input as OGNL expression, and saves the “User” object back to the session. The “getText()” method is an injection point inserted for demonstration purposes.
By calling these actions sequentially, an attacker can manipulate non-blacklisted objects – for example, setting “User.isAdmin” to true, thus bypassing an authorization check.
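The escalation above, flipping a property the blacklists never covered, can be mimicked outside of OGNL. A Python analogy (the User/isAdmin names mirror the example; the path-setter mechanics are ours, not Struts code):

```python
# Python analogy of the isAdmin escalation: an injection point that
# evaluates user input against the session's object graph can flip a
# non-blacklisted property. Expression syntax here is invented.

class User:
    def __init__(self):
        self.isAdmin = False  # default, as in the Struts example

def vulnerable_eval(session, expression):
    """Evaluate 'path=value' against the session graph (the injection point)."""
    path, _, value = expression.partition("=")
    *hops, last = path.split(".")
    node = session
    for hop in hops:
        node = getattr(node, hop)
    setattr(node, last, value == "true")

session = type("Session", (), {"user": User()})()
vulnerable_eval(session, "user.isAdmin=true")  # attacker-controlled input
```

No blacklist of dangerous classes helps here, because the attacker only touches the application's own benign objects; the vulnerability is the evaluation of untrusted input at all.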
Given these points, Struts 2 developers should keep yet another vulnerability type in mind. The recommendation is to use SAST or IAST tools like Checkmarx extensively, to find possible code execution via OGNL injection vulnerabilities. Without proper application security testing processes in place, the likelihood of overlooking an exploitable injection vulnerability caused by a developer is quite high.

NFC False Tag Vulnerability – CVE-2019-9295 https://checkmarx.com/blog/nfc-false-tag-vulnerability/ Thu, 24 Oct 2019 08:00:23 +0000 https://www.checkmarx.com/?p=29086 Introduction

Security Aspects of Android

Android is a privilege-separated operating system, in which each application runs with a distinct system identity (Linux user ID and group ID). Parts of the system are also separated into distinct identities. Linux isolates applications from each other and from the system. Additional finer-grained security features are provided through a “permission” mechanism that enforces restrictions on the specific operations that a particular process can perform, and per-URI permissions for granting ad-hoc access to specific pieces of data.
A central design point of the Android security architecture is that no application, by default, has permission to perform any operations that would adversely impact other applications, the operating system, or the user. This includes reading or writing the user’s private data (such as contacts or e-mails), reading or writing another application’s files, performing network access, keeping the device awake, etc.
Because Android sandboxes applications from each other, applications must explicitly share resources and data. They do this by declaring the permissions they need for additional capabilities not provided by the basic sandbox. Applications statically declare the permissions they require, and the Android system prompts the user for consent at the time the application is installed.
The application sandbox does not depend on the technology used to build an application. In particular, the Dalvik VM is not a security boundary, and any app can run native code (see the Android NDK). All types of applications (Java, native, and hybrid) are sandboxed in the same way and have the same degree of security from each other.

Executive Summary

This report details a vulnerability submitted to Google about the Tags application (app), shipped with the Android OS that is responsible for reading NFC (Near Field Communication) tags, parsing them, and forwarding the results to the relevant application according to its contents.
This vulnerability, assigned CVE-2019-9295, allows a rogue application to trick the Tags app into thinking a new NFC tag was just read. The Tags app then shows the user a list of actions they can perform with the tag, so the user always has to interact with the app, which is not ideal from an attacker’s perspective. However, there are scenarios where user interaction is not a problem and is in fact expected, as explained in the attack scenarios below.
While not a critical vulnerability, this is still something that Android OS users – particularly those not yet running Android 10 – should be aware of, since future behaviors for different tag types cannot be predicted and might introduce more serious bugs.
According to Google, “As a moderate-severity issue, this vulnerability was addressed in Android 10 only; the fix was not backported to prior versions of Android.”
This type of research activity is part of the Checkmarx Security Research Team’s ongoing efforts to drive the necessary changes in software security practices among organizations who develop consumer-based products while bringing more security awareness amid the consumers who use them. Protecting privacy of consumers must be a priority for all of us in today’s increasingly-connected world.

Use cases and attack scenarios

In this section, a practical use case is given for the different vulnerabilities found. Sometimes it’s not just about the CVSS score; in some cases, the way a vulnerability can be used or chained augments its real value and usage in the real world.

False Tag

This vulnerability allows a malicious app to simulate the receipt of an NFC tag, as if the victim’s phone had just read one. It can simulate any type of tag (more precisely, an NDEF record), for example, a URI with a telephone number – tel:123456789.
The drawback from an attacker’s perspective is that the user has to interact and click on the shown tag in order to take the corresponding action. In the case of the previous URI, the user would have to click on the screen so that the call is placed.

Showing a random “New tag scanned” pop-up window is very strange and will probably get most users’ attention. On the right, we see a spoofed tag with a URL. The Android Tags app will ask the user if they want to open it with Chrome (or any other registered browser on the device).

Spoofed Tag

To make things less “phishy”, an application was developed that can spoof tags that have been read. The app registers to listen for specific actions, like android.nfc.action.NDEF_DISCOVERED via an intent filter. When a user actually tries to read an NFC tag, the malicious app reads the tag, changes its contents and then calls the default Android Tag viewer. In this scenario, the user is actually expecting to take action after the card has been read, so they have no reason to suspect foul-play.
Default Behavior:                                                        With malicious app installed:

In the above example, an NFC tag with a phone number was used. The real phone number on the NFC tag is “555-111-12222”. When the malicious app is not installed, the real number appears. But if the malicious app is installed, it grabs the NFC tag contents, changes them, and spoofs a false tag by calling the Android tag viewer. The user only sees the new number “666-666-666” and has no reason to suspect that it is not the number on the NFC tag.
It’s important to note that the malicious application does not need any particular permission, since it is just registering for NFC intents and not using the actual NFC hardware.

Caveats

This spoofing method is not without caveats. There is a reason we showed a phone number tag and not, say, a URL tag. On a Samsung S8, for example, if we registered an application to receive URL-containing tags and the user scanned an NFC card containing a URL, the following screen would appear:

That’s because Chrome is also registered to handle NFC URL type tags. So the user would have to choose the malicious (rogue) app to complete the action in order for it to access and change its content. It is not an impossible scenario, but again it would require user interaction.
In some circumstances, the rogue app can intercept and change the content of the tag before it is handled by the default OS app. Example: a user scans a tag that is a business card and contains a phone number. The rogue app can change the phone number to another one on the fly and the user has no way to know it was altered.
In other cases, the rogue app cannot directly intercept the tag, and the user will be presented with a choice of apps to handle it. In the previous example, the OS presents a menu so the user can choose the handler for that tag, in this case Chrome or the rogue app. If the user chooses the rogue app, it can then modify the URL to direct the user to a malicious site. It's a less likely scenario, but the rogue app could be a Chrome look-alike designed to trick the user into choosing it as the handler.
This happens because of the way the tag dispatch system works in Android. The tag dispatch system uses the Type Name Format record and type fields to try to map a MIME type or URI to the NDEF message. When the tag dispatch system is done creating an intent that encapsulates the NFC tag and its identifying information, it sends the intent to an application that filters for the intent. If more than one application can handle the intent, the Activity Chooser is presented so the user can select the Activity.
There are three intent actions:

  • ACTION_NDEF_DISCOVERED
  • ACTION_TECH_DISCOVERED
  • ACTION_TAG_DISCOVERED

If an app filters for ACTION_NDEF_DISCOVERED, the system tries to start it before anything else. If not, it searches for apps that filter for ACTION_TECH_DISCOVERED, and finally for ACTION_TAG_DISCOVERED.
Our rogue app always filters for ACTION_NDEF_DISCOVERED. Since the default phone app does not filter for this high-priority action, our rogue app can effectively grab the tag, change it, and use the Tag viewer to pass it down to the default phone app, invisibly to the user.
Chrome, by comparison, also filters for ACTION_NDEF_DISCOVERED. This means that when a URL tag is scanned, the system sees that two different apps can handle the intent and presents the pop-up. So, depending on the app, the behavior may differ. That said, many possible NFC tag types are either not handled by any specific app, or the handler app does not filter for ACTION_NDEF_DISCOVERED. Whenever that is the case, the tag spoofing is completely transparent, as in the phone number example.
Different smartphones might also have different handlers for different NFC tag types, making an exhaustive mapping of which NFC tag type goes to which app on which model a challenging task, to say the least.
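The dispatch fallback described above can be sketched as a tiny simulation. This is plain Java, not the real Android framework code; the class and method names are illustrative only:

```java
import java.util.Arrays;
import java.util.List;

// Simplified model of Android's NFC tag dispatch priority: the system tries
// NDEF_DISCOVERED first, then TECH_DISCOVERED, then TAG_DISCOVERED, delivering
// the tag via the first action that any installed app filters for.
public class NfcDispatchSim {
    static final List<String> PRIORITY = Arrays.asList(
            "ACTION_NDEF_DISCOVERED",   // tried first
            "ACTION_TECH_DISCOVERED",   // fallback
            "ACTION_TAG_DISCOVERED");   // last resort

    // Returns the action the tag is dispatched with, given the set of actions
    // that installed apps have registered intent filters for.
    public static String dispatch(List<String> registeredActions) {
        for (String action : PRIORITY) {
            if (registeredActions.contains(action)) {
                return action;
            }
        }
        return "NO_HANDLER";
    }

    public static void main(String[] args) {
        // A rogue app filtering for NDEF_DISCOVERED wins over a default app
        // that only filters for the lower-priority TECH_DISCOVERED.
        System.out.println(dispatch(Arrays.asList(
                "ACTION_TECH_DISCOVERED",    // default handler
                "ACTION_NDEF_DISCOVERED"))); // rogue app
    }
}
```

This is why the rogue app's NDEF filter is enough to win the dispatch race whenever the legitimate handler only listens for a lower-priority action.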

The NFC Tools Android app is a good place to start searching for tag types:

These are just some examples; more NFC tag types are supported.

Research results

com.android.apps.tag

The default app that handles new NFC tags in the Android operating system is the Tag viewer, in the package com.android.apps.tag.

Information


The Android Tag viewer handles the reading of many (though not all) NFC tag types. The class responsible for handling tags is: …/packages/apps/Tag/src/com/android/apps/tag/TagViewer.java
Analysis of the source shows that this activity can be 'fooled' into showing any tag an attacker wants: the activity is exported, it is not protected by any permission, and the actions it handles are not protected by the OS.

TagViewer.java



In order to exploit this situation, a malicious app just has to build an Intent with action NfcAdapter.ACTION_TAG_DISCOVERED or NfcAdapter.ACTION_TECH_DISCOVERED and provide the proper extra Parcelable NfcAdapter.EXTRA_NDEF_MESSAGES.
This can be done with, for example:
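The following is a sketch of such an Intent in Android Java. It only runs inside an Android activity, and the URL is a placeholder, not one from the original research:

```java
// Sketch only (Android framework code, not runnable standalone):
// build a fake NDEF message and hand it to the exported TagViewer activity.
NdefRecord uriRecord = NdefRecord.createUri("http://attacker.example.com");
NdefMessage msg = new NdefMessage(new NdefRecord[] { uriRecord });

Intent intent = new Intent(NfcAdapter.ACTION_TAG_DISCOVERED);
intent.setClassName("com.android.apps.tag", "com.android.apps.tag.TagViewer");
intent.putExtra(NfcAdapter.EXTRA_NDEF_MESSAGES, new NdefMessage[] { msg });
// TagViewer renders the spoofed tag as if it had just been scanned
startActivity(intent);
```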

This starts the TagViewer with an HTTP tag. Several URI schemes are possible, such as tel: or sms:, but user confirmation is needed to launch the associated application, which limits this as a practical attack.
An NdefRecord can actually be many different objects, not just URIs. In fact, the message parser checks for the following types:

NdefMessageParser.java

Conclusion

In summary, there are two main attack scenarios:

  1. A pop-up window randomly appears, alerting the user that an NFC tag was scanned (generated by a rogue app). The user has to interact with it and choose an application to handle the tag.
  2. The user scans a real tag, and the rogue app intercepts and changes its content before it is handled by the default OS app. Example: a user scans a business-card tag that contains a phone number. The rogue app changes the phone number to another one on the fly, and the user has no way to know it was altered (except by scanning the tag on an 'uninfected' phone).

In either scenario, users could be tricked into clicking a link leading to a malicious landing page, calling or texting the wrong number, or taking any other action possible with NFC tags.
Although the flaw in itself allows for the spoofing of any tag, it requires user interaction to actually start an application. This requirement lowers the severity of the flaw, in our opinion. That said, as we've shown, there are attack scenarios where the user interaction is actually expected.
Most would agree that no external app should be able to fool the operating system default NFC tag resolver into thinking a new tag was just scanned. The results can be completely unpredictable.
Recommendation: Updating to Android 10, which was initially released on September 3rd, 2019, will mitigate this issue. However, please note that the Android 10 update is still being rolled out by many phone manufacturers and carriers. Find out when you can expect the Android 10 update for your phone here.

Kotlin Guide: Why We Need Mobile Application Secure Coding Practices
https://checkmarx.com/blog/kotlin-guide-mobile-app-secure-coding-practices/ Thu, 03 Oct 2019 08:12:02 +0000 https://www.checkmarx.com/?p=28298

October is the annual National Cybersecurity Awareness Month (NCSAM), which is promoted by the U.S. Department of Homeland Security and the National Initiative for Cybersecurity Careers and Studies (NICCS). According to the NICCS, "Held every October, NCSAM is a collaborative effort between government and industry to raise awareness about the importance of cybersecurity and to ensure that all Americans have the resources they need to be safer and more secure online. This year's overarching message – Own IT. Secure IT. Protect IT. – will focus on key areas including citizen privacy, consumer devices, and e-commerce security."

NCSAM Findings

In light of NCSAM, there is little doubt that the origins of today's data breaches (which certainly affect citizen privacy) are repetitive in nearly every case: vulnerable people, processes, or software are almost always the facilitators.

Unfortunately, vulnerable people will continue to fall prey to phishing attacks, and vulnerable processes will often remain in place. However, vulnerable software is something that can readily be fixed when developers understand and fully implement secure coding practices.

Organizations that aren’t completely vetting their software applications before releasing them are putting themselves and their users at unnecessary risk, and these organizations may face the consequences when targeted by attackers.

Since this year’s overarching message focuses on citizen privacy, consumer devices, and e-commerce security, there is one area of concern that is often overlooked and should be discussed. The security of today’s mobile applications (apps), running on consumer devices, and interacting with e-commerce and other sites, needs to be prioritized now more than ever before. Without applying secure coding practices to mobile app development, organizations are likely releasing vulnerable apps that are ripe for exploitation.

Clearly, there is a growing need for secure coding practices among developers, resulting in more-secure mobile apps.

Another Good Project with a Noble Cause

Understanding the need, the Checkmarx Security Research Team released the Kotlin Guide – Mobile Application Secure Coding Practices today to help spread awareness around the most common coding errors when building mobile apps using the Kotlin Language.

For those who may be unfamiliar, Kotlin is a programming language for modern multiplatform applications, 100% interoperable with Java™ and Android™. It is now fully supported by Google as an alternative to the standard Android Java compiler. Since May 7, 2019, Kotlin has been Google's preferred language for Android app development.

Therefore, it is important for developers to familiarize themselves with this new language and understand secure coding practices for mobile apps when using Kotlin.
The Checkmarx Research Team recently considered how a cyber-attacker might approach attacking Kotlin-based mobile apps. The authors of the Kotlin Guide mapped the OWASP Mobile Top 10 security weaknesses to Kotlin on a weakness-by-weakness basis while providing examples, recommendations, and fixes to help developers avoid common mistakes and pitfalls.

After reading the Kotlin Guide and referring to it often, developers and AppSec teams will learn how to ensure they are developing and releasing more-secure mobile apps when using Kotlin. This is one of the first publications ever to be accompanied by a deliberately vulnerable Kotlin app called Goatlin, which is publicly accessible to those who would like to learn more.

Links to Goatlin are provided in the Kotlin Guide.
This type of research activity is part of the Checkmarx Research Team's ongoing efforts to drive the necessary changes in software security practices among organizations that develop and heavily rely on mobile apps, while bringing more security awareness to the consumers who use them. Protecting the privacy of consumers must be a priority for all of us in today's increasingly connected world. As software security and programming language experts, the Checkmarx Research Team felt compelled to create the Kotlin Guide and share it with developers and AppSec teams worldwide in the hope of improving security for everyone.

Why This Guide is Important

Even the U.S. Government recognizes that mobile application security is a serious concern. In 2017, a Study on Mobile Device Security was performed through the joint effort of the Department of Homeland Security (DHS) in consultation with the National Institute of Standards and Technology (NIST) via the National Cybersecurity Center of Excellence. The study stated that vulnerabilities in applications are usually the result of a failure to follow secure coding practices, and that such vulnerabilities typically result in some sort of compromise of a user's data.
In an effort to move developers away from Java when building Android apps, Google offers guided, tutorial-style, hands-on coding lessons for Kotlin developers. Google Codelabs updated some of its training modules this past September, including Android Kotlin Fundamentals, Kotlin Bootcamp for Programmers, and Refactoring from Java to Kotlin. Using these training modules, in addition to understanding the vulnerabilities highlighted in the Kotlin Guide, developers should gain a better understanding of the tools required to begin developing more-secure Kotlin apps for Android-based mobile devices.
Download the Kotlin Guide – Mobile Application Secure Coding Practices here.
