APACHE LOG4J RCE – Variants and Updates https://checkmarx.com/blog/apache-log4j-rce-variants-and-updates/ Wed, 29 Dec 2021 16:52:00 +0000 https://checkmarx.com/?p=73009 This is the MOST RECENT update to our previous research blog:

APACHE LOG4J REMOTE CODE EXECUTION – CVE-2021-44228

On December 9th, 2021, one of the most critical zero-day vulnerabilities in recent years was disclosed, affecting many of the world’s largest enterprises. This critical 0-day flaw was discovered in the extremely popular Java logging library Log4j and allows RCE (Remote Code Execution) simply by logging a crafted payload.

The vulnerability was nicknamed “Log4Shell”. It has a CVSS (Common Vulnerability Scoring System) score of 10 – the highest risk possible – and was published in a GitHub advisory with a critical severity level.

EXPLOIT SCOPE

Log4Shell was being exploited for a few days before its public disclosure, and Log4Shell scanning attempts were observed up to two weeks beforehand. Attackers were able to install cryptominers, create botnets, and steal sensitive data and system credentials. As of today, it is estimated to have affected over a million machines.

RELEVANT CVES

Since its disclosure, and up to the time of writing, five CVEs (Common Vulnerabilities and Exposures) concerning Log4j2 and Log4j1 have been published:

LOG4J2: CVE-2021-44228

Log4j2 versions 2.0-beta9 through 2.15.0 (excluding 2.12.x after 2.12.1) are vulnerable to remote code execution via its LDAP (Lightweight Directory Access Protocol) JNDI parser. An attacker who can control log messages or log message parameters can execute arbitrary code loaded from LDAP servers when message lookup substitution is enabled. The vulnerability, designated CVE-2021-44228, was first addressed in versions 2.12.2 and 2.15.0. The fix includes disabling JNDI by default and restricting LDAP access via JNDI in Log4j2’s named object lookup and JNDI manager.

This vulnerability received the highest CVSS score possible – 10 – and it affects the following packages, which are available through the Maven package manager:

  • org.apache.logging.log4j:log4j-api
  • org.apache.logging.log4j:log4j-core

Note: the vulnerability itself resides in log4j-core. However, the Logger class used to trigger the exploit in published POCs (proofs of concept), for example by calling Logger.error(), is defined in log4j-api. To detect such usage with Exploitable Path, and to secure our customers as much as possible, we added the Logger methods as vulnerable methods (since they eventually trigger the vulnerability, according to our research). This approach is reflected in GitHub’s advisory page for this vulnerability.

LOG4J2: CVE-2021-45046

On December 11th, 2021, it was discovered that the fix for CVE-2021-44228 was incomplete in certain non-default configurations. This could allow attackers with control over certain input data to craft malicious payloads using a JNDI Lookup pattern, resulting in an information leak, RCE in some environments (including some macOS environments), and LCE (Local Code Execution) in all environments. This complementary vulnerability was designated CVE-2021-45046 and was fixed in versions 2.12.2 and 2.16.0 by disabling JNDI by default and removing support for message lookups.

This vulnerability was initially regarded as allowing only DoS (Denial of Service) attacks and was assigned a CVSS score of 3.7. After it was discovered to pose a much more severe threat (RCE), the score was raised to 9.0.

This vulnerability is an extension of CVE-2021-44228, thus the affected packages are the same.

Mitigation options for CVE-2021-44228 and CVE-2021-45046:

  • Users requiring Java 8 (or later) should upgrade to release 2.16.0 or above.
  • Users requiring Java 7 should upgrade to release 2.12.2 or above.
  • Remove the JndiLookup class from the classpath:

zip -q -d log4j-core-*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class
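To verify the result across many JAR files on a host, a minimal Python sketch along the following lines can help (the scan root comes from the command line; this is an illustration, not an official tool):

import sys
import zipfile
from pathlib import Path

JNDI_CLASS = "org/apache/logging/log4j/core/lookup/JndiLookup.class"

# Walk the directory tree given as the first argument and report every JAR
# that still bundles the JndiLookup class.
for jar in Path(sys.argv[1]).rglob("*.jar"):
    try:
        with zipfile.ZipFile(jar) as zf:
            if JNDI_CLASS in zf.namelist():
                print(f"JndiLookup still present: {jar}")
    except zipfile.BadZipFile:
        print(f"Skipping unreadable archive: {jar}")

Note that this only inspects top-level JARs; shaded or nested JARs (for example, inside a fat JAR) would need to be extracted and checked recursively.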

LOG4J2: CVE-2021-45105

It was discovered on December 15th, 2021, that Log4j2 versions 2.0-alpha1 through 2.16.0 (excluding 2.12.x from 2.12.3) are vulnerable to DoS attacks, since they do not protect against uncontrolled infinite recursion of self-referential lookups, which in turn results in a StackOverflowError that terminates the process. This vulnerability was published on the NVD on December 18th, 2021, under CVE-2021-45105 and was fixed in version 2.17.0 by fixing the string substitution recursion and limiting JNDI lookups to the java protocol only. According to the GitHub advisory and previous fixes for Log4Shell’s variants, the fix for Java 7 users should be released in the upcoming version 2.12.3. The CVSS score assigned to it by Apache is 5.9.

Mitigation options for CVE-2021-45105

  • Users requiring Java 8 (or later) should upgrade to release 2.17.0 or above.
  • Users requiring Java 7 should upgrade to release 2.12.3 or above.
  • Users requiring Java 6 should upgrade to release 2.3.1 or above.
  • In PatternLayout in the logging configuration, replace Context Lookups like ${ctx:loginId} or $${ctx:loginId} with Thread Context Map patterns (%X, %mdc, or %MDC).
  • Otherwise, in the configuration, remove references to Context Lookups like ${ctx:loginId} or $${ctx:loginId} where they originate from sources external to the application, such as HTTP headers or user input.

LOG4J2: CVE-2021-44832

On December 28th, 2021, the Checkmarx Security Research Team publicly disclosed a new vulnerability they had recently discovered. This vulnerability allows ACE (Arbitrary Code Execution) in versions 2.0-beta7 through 2.17.0 (excluding the security fix releases 2.3.2 and 2.12.4).

An attacker who gains control over the logging configuration (for example via a MITM attack, since Log4j has a feature to load a remote configuration file) can construct a malicious configuration using the JDBC Appender with a data source referencing a JNDI URI, which can then execute remote code.

This vulnerability was fixed in version 2.17.1 by limiting JNDI data source names to the java protocol. It was assigned a CVSS score of 6.6 – a slightly lower severity than previous Log4Shell variants because it is more complex to exploit. CVE-2021-44832 solely affects the log4j-core package.

Mitigation options for CVE-2021-44832

  • Users requiring Java 8 (or later) should upgrade to release 2.17.1 or above.
  • Users requiring Java 7 should upgrade to release 2.12.4 or above.
  • Users requiring Java 6 should upgrade to release 2.3.2 or above.
  • In prior releases, confirm that if the JDBC Appender is being used, it is not configured to use any protocol other than Java.

Important note for log4j2 vulnerabilities: only the log4j-core JAR file is impacted by these vulnerabilities. Applications using only the log4j-api JAR file without the log4j-core JAR file are not impacted. Apache Log4j is the only Logging Services subproject affected. Other projects like Log4net and Log4cxx are not impacted by these vulnerabilities.

LOG4J1: CVE-2021-4104

Disclosed on December 13th, 2021, and published on the NVD on December 14th, 2021, under CVE-2021-4104, it was discovered that Log4j1 is also affected by a Log4Shell-style vulnerability – previously believed to only affect Log4j2.

The root cause of this vulnerability is the org.apache.log4j.net.JMSAppender class, which is vulnerable to deserialization of untrusted data when the attacker has write access to the Log4j configuration. The attacker can provide malicious payloads to the configuration parameters, causing JMSAppender to perform JNDI requests that result in remote code execution. This affects non-default configurations of Log4j 1.2, since the JMSAppender configuration is disabled by default.

The CVSS score assigned to this vulnerability is 6.6, which is lower than that of CVE-2021-44228, since an attacker must have write access to the Log4j configuration to exploit it.

The vulnerability affects the log4j:log4j package, which is available through the Maven package manager.

Mitigation options for CVE-2021-4104:

  • Users should upgrade to Log4j2 – 2.12.4, 2.3.2, 2.17.1, or above – as it addresses numerous other issues from the previous versions.
  • Ensure that no mechanism is exposed to untrusted callers that might allow access to the JMSAppender class or allow changes to the configuration of its instances.
  • Comment out or delete the JMSAppender in the Log4j configuration if it is used.
  • Remove the JMSAppender class from the classpath:

 zip -q -d log4j-*.jar org/apache/log4j/net/JMSAppender.class

Important note: Apache Log4j 1.2 reached end of life in August 2015.

Detecting Log4Shell with Checkmarx SCA

Checkmarx SCA provides fast and easy detection of the above mentioned Log4Shell vulnerabilities in open source dependencies. The following screenshots display our SCA scan results of code with vulnerable 3rd party dependencies.

Figure 1 – SCA’s overview page, with a list of detected packages and risks

Today, it’s clear. Software Composition Analysis (SCA) solutions are a requirement for organizations that consume open source software. Checkmarx SCA enables your organization to address open-source security issues earlier in the SDLC to identify and manage risk more effectively.

To learn more about Checkmarx SCA, you can request a live demo here, or download our Ultimate Guide to SCA here.

SBOM: How to Create One Using Checkmarx SCA https://checkmarx.com/blog/sbom-how-to-create-one-using-checkmarx-sca/ Mon, 15 Nov 2021 15:44:27 +0000 https://checkmarx.com/?p=71210 In the first post in this SBOM series, we discussed what an SBOM is and why you should care. As previously mentioned, generating an SBOM report may sound relatively simple, but in most cases, it’s not. As you likely know, modern software projects make use of a long list of third-party open source packages, each of which often calls on many other packages as dependencies. This can create an extensive tree of direct dependencies, dependencies of dependencies, and so on. Simply put, trying to create and manage an SBOM using a spreadsheet is nearly impossible, and if you try to manage your open source usage this way, it will likely get out of hand very quickly. 

Another Caveat

The next caveat to consider is that SBOM reports should follow a standard format that includes detailed information about each involved component. At a minimum, it needs to give the component’s name, supplier name, version, hashes and other unique identifiers, dependency relationship, author of SBOM data, and a timestamp. The report also needs to cover every software modification and update to reflect the current status of the project. An SBOM report is best accomplished using an automated process that is integrated into your CI/CD pipeline.

SBOM Methodology That Actually Enhances Security

The first and most fundamental task in generating an SBOM is analyzing the software dependencies, which is a natural undertaking for software composition analysis (SCA) solutions such as Checkmarx SCA. However, the ultimate purpose of an SBOM is not just to provide a list of ingredients, but to identify potential risk. A standard SBOM provides a list of ingredients but no simple way to detect and measure the risks associated with third-party dependencies. So, what else do you need to enhance software security? Simple: vulnerability and license risk information.

To meet the need for a more comprehensive SBOM, Checkmarx SCA leverages our existing infrastructure for identifying vulnerabilities, in addition to license and supply chain risks, to supplement the standard SBOM info. This creates an SBOM that provides valuable insight into the risks associated with your third-party components instead of just a list of ingredients. This methodology exceeds the requirements for what a simple SBOM contains.

The SBOM reports generated from Checkmarx SCA use the existing CycloneDX SBOM format, and SPDX and SWID formats will be added soon. The reports also provide additional “property” fields showing important risk data that organizations need to know about. The reports can be exported in XML or JSON format, making them easy for organizations to consume, track, and update.

How to Generate an SBOM from Checkmarx SCA

Using the Checkmarx SCA User Interface

  • Navigate to the Scan Results screen for the most recent scan of the desired project.
  • Click the “SBOM” button to open the SBOM configuration dialog.
  • Select the SBOM standard. Currently, only CycloneDX is available.
  • Select the output format: XML or JSON.
  • Click “Generate SBOM.”

The SBOM report will be downloaded and can be viewed in any standard XML/JSON viewer.
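If you prefer to consume the report programmatically instead of opening it in a viewer, a minimal Python sketch along these lines should work (it assumes the JSON output format and the standard CycloneDX field names; the file name is arbitrary):

import json

# Load a CycloneDX SBOM exported from Checkmarx SCA in JSON format.
with open("sbom.json") as f:
    sbom = json.load(f)

for component in sbom.get("components", []):
    name = component.get("name")
    version = component.get("version")
    purl = component.get("purl")
    licenses = [
        entry.get("license", {}).get("id") or entry.get("license", {}).get("name")
        for entry in component.get("licenses", [])
    ]
    # Extended risk data appears under the "properties" section of each component.
    properties = {p["name"]: p["value"] for p in component.get("properties", [])}
    print(f"{name} {version} ({purl}) licenses={licenses} properties={properties}")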

How to Add CI/CD Integration

Checkmarx SCA provides plugins and CLI tools for various CI/CD pipelines. One method for running Checkmarx SCA scans via CLI commands is the CxSCA Resolver, which is an on-premises utility for resolving and extracting dependencies. The following section describes how to export SBOM reports using the CxSCA Resolver.

How to Generate SBOM Using SCA Resolver

An SBOM report can be exported via the CxSCA Resolver CLI using the --report-extension and --report-type arguments.

Example:

./ScaResolver -s /home/jack/src/MyApp -n MyApp -a Checkmarx -u jack -p 'demo123!' --report-extension Json --report-type CycloneDx

(Use Xml in place of Json to export the report in XML format.)

SBOM Content

Below is a view of the SBOM content, which is part of the SBOM Checkmarx SCA generates.

The standard SBOM fields are ID (purl), Component Name, Version, License, and Hashes. All of these are included in every Checkmarx SCA SBOM as required fields.

In addition, we add a Properties section with extended information about the risks associated with each library.

SBOM Component Dependencies

Below is a view of the component dependencies, which is part of the SBOM Checkmarx SCA generates.

Each component contains its dependent components, and each dependency section contains a set of required fields and a Properties section.

Conclusion

Checkmarx is dedicated to helping organizations secure the software they develop, one line of code at a time. In response to the proliferation of open source usage, recent supply chain attacks, and the executive order mentioned in the previous post, you can use Checkmarx SCA to easily create and maintain an SBOM of your own. Plus, you’ll get real-time risk data about the open source found in your codebase to help you manage your own risk better.

In the next blog in this SBOM/Software Supply Chain series, we’ll discuss the top three software supply chain risks you need to know about.  

To see an SBOM being created live, don’t hesitate to request a demo.

Download our Ultimate Guide to SCA Here.

Exploitable Path – Advanced Topics https://checkmarx.com/blog/exploitable-path-advanced-topics/ Wed, 24 Mar 2021 07:58:14 +0000 https://www.checkmarx.com/?p=46616 This is the third and final blog on Exploitable Path – a unique feature that allows our customers to prioritize vulnerabilities in open-source libraries. In the first blog, we introduced the concept of Exploitable Path and its importance. The conclusion was that a vulnerability in a library is considered exploitable when:

  • The vulnerable method in the library needs to be called directly or indirectly from a user’s code.
  • An attacker needs a carefully crafted input to reach this method and trigger the vulnerability.

In the second blog, we discussed some of the challenges in developing such a feature, and our unique approach. Mainly:

  • Using a query language over the CxSAST engine for the abstraction of queries over source code. This allows a more language-agnostic approach, so that Exploitable Path works for every programming language supported by CxSAST.
  • We walked through the various CxSAST queries that are required to build a full call graph of a user’s source code and its libraries’ source code. By crossing it with vulnerability data, we can know if a vulnerability is exploitable or not.

In this last blog in the series, we will cover more advanced topics we faced during the development of Exploitable Path.

Challenge no. 1 – Supporting Multiple Library Versions

The public data on a CVE usually contains the affected versions, but how can we use this information to support Exploitable Path across versions? That is, if the source code of a library changes between versions, how can we have the required data for Exploitable Path for each of those versions?
Let’s assume we have a user’s source code that uses a single open-source library. This library contains a vulnerability, and using Mitre, we can figure out the affected versions.
To be able to assess if the vulnerability is exploitable, we need the following for each version of the library:

  • A call graph of the library’s code. This can be done automatically using CxSAST.
  • Is the current version vulnerable?
    • If it is, the inner method in which the exploitation occurs is required.

Now the question is: how can we find this inner method for each vulnerable version? Going over each version manually is not practical, especially since a library can have hundreds of versions.
The first part of the solution is to find the inner method that’s vulnerable. Usually, a vulnerability goes together with a specific method (or methods) responsible for certain logic. Pull requests and commits for the relevant CVE help our analysts uncover the relevant method.
Next, we generate a fingerprint of the fix – if a version contains the fix, we can mark it as not vulnerable to this CVE. This is where our powerful static code analysis tool comes into play again, making it easy to re-assess hundreds of library versions for the vulnerability.
Re-assessing the affected versions of a vulnerability is crucial. As it turns out, this data on public websites like Mitre is often not precise. Versions that are marked as vulnerable can be safe and vice versa. It can be the result of human error, or even a slight difference in the version tags between the public registry and the git repository on which the library is developed. By searching for the fingerprint of the fix, we can ensure the quality and accuracy of our vulnerabilities data.
Using the in-depth analysis process, the vulnerable method is marked for every affected version, eventually resulting in a very accurate Exploitable Path scan.

Challenge no. 2 – Data Flow

Just because your code calls a vulnerable method doesn’t mean you are automatically at risk. To assess the risk properly (and avoid false positives), it’s crucial to have both a call graph and a DFG (Data Flow Graph) of the code to assess its exploitability.
Let’s start with an example, and assume that a method called parse(content) has a DoS (Denial of Service) vulnerability given the right input. If parse() is only called with a constant value, meaning parse(CONSTANT_VALUE), there is no attack surface for an attacker to exploit it and cause a DoS. On the other hand, if a user of the application controls the input parameter of parse(), it’s a different story. For example, this input can be a comment or other data provided by the user. In such a case, the attacker can easily exploit the vulnerability and craft the required input.
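As a small illustration, here is a Python sketch of the two situations (the parse() method and the surrounding names are hypothetical):

from hypothetical_parser import parse  # hypothetical library; parse() is DoS-vulnerable

DEFAULT_TEMPLATE = "{user}: {message}"

def load_default_template():
    # parse() only ever receives a constant here, so there is no attack surface:
    # nothing an attacker sends can reach the vulnerable code path through this flow.
    return parse(DEFAULT_TEMPLATE)

def handle_comment(request):
    # The comment body is fully user-controlled, so a carefully crafted comment
    # can reach parse() and trigger the DoS.
    return parse(request.form["comment"])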
The reality is more complex, as there are various ways data can be transferred in code:

  • Input parameters
  • Global or class members
  • The return value of another method invocation

Also, not all data options are necessary for exploitation. For example, a method parseRequest(HttpRequest request, Config config) can be exploitable using only the HttpRequest.Content member of the request parameter.
Now we understand the importance, but how do you incorporate DFG in the process of assessing a vulnerability? To be more specific, how can we know that a vulnerability is exploitable from a data flow point of view?
First, we use CxSAST to build a DFG. We start at the vulnerable method and trace back the origins of each data point. Eventually we’ll reach one of the following cases:

  • A constant value. This is not exploitable, of course.
  • An input parameter of a method that is not called by other methods. This is a potential data flow compromise, as in the context of the static code scan, we don’t know how the method is invoked.
  • An internal method of the language is called, such as the built-in open() in Python.
  • A method of a different library is called, and its source code is not available.

The last two cases are the most interesting ones, and have two complementary approaches:

  • As a rule of thumb, mark those methods as potential data flow compromises, since their inner implementation is unknown.
  • Mark specific methods as definite data flow compromises – for example, reading content from a database, a pipe, or a file. The same goes for parsing HTTP packets, pulling a message from a message queue, etc.

These two approaches are the basis for DFG support in assessing a vulnerability for exploitability.

Summary

In this blog we covered two additional advanced topics in Exploitable Path. We started with the problem of supporting various library versions, and how this is solved using the in-depth analysis process. Then, we discussed the integration of DFG in the vulnerability evaluation process, and how to backtrack the flow of data in the code.
With CxSCA, Checkmarx enables your organization to address open source vulnerabilities earlier in the SDLC and cut down on manual processes by reducing false positives and background noise, so you can deliver secure software faster and at scale. For a free demonstration of CxSCA, please contact us here.

Exploitable Path – How to Solve a Static Analysis Nightmare https://checkmarx.com/blog/exploitable-path-how-to-solve-a-static-analysis-nightmare/ Wed, 03 Feb 2021 15:10:36 +0000 https://www.checkmarx.com/?p=45628 In my previous blog, I walked you through the reasoning and importance of the Exploitable Path feature in the Checkmarx CxSCA solution. We discussed the challenges of prioritizing vulnerabilities in open source dependencies and defined what it means for a vulnerability to be exploitable:

  • The vulnerable method in the library needs to be called directly or indirectly from a user’s code.
  • An attacker needs a carefully crafted input to reach the method to trigger the vulnerability.

Now that we know the scope of the problem, let’s dive into how uncovering an exploitable path is done.

Prerequisites

1.     A SAST Engine

Every programming language has its set of quirks and features. Some use brackets; some don’t. Some are loosely typed; others are strict. To be able to develop Exploitable Path, we needed a certain level of abstraction – a “common language,” for example. This is particularly hard when high-level concepts like “imports” behave differently across languages.
To solve this issue, Checkmarx uses its powerful CxSAST engine. CxSAST breaks down the code of every major language into an Abstract Syntax Tree (AST), which provides much of the needed abstraction. Imports, call graphs, method definitions, and invocations all become a tree.

2.     An AST Query Language

Having an AST, the next step is having a query language capable of even further abstraction. Checkmarx uses CxQuery, which can run queries to answer various questions, for example:

  • What are all the import statements in a codebase?
  • Which methods have no definition but only usage?
  • What’s the namespace of every file?

With a tool like CxQuery, you can get results in a unified format regardless of the programming language – C#, Java, Python, etc.

Assumptions

1.     Vulnerable Methods Are Known

Usually, the public data on a CVE provides a CVSS score, affected products, and versions, etc. However, the inner method in which the vulnerability is triggered is usually unknown. To help with this dilemma, the CxSCA Research Team has application security analysts on board who are responsible for analyzing CVEs and finding the method in which the vulnerability occurs. So, for the rest of the post we can assume that for every CVE, we know the method that triggers it.

2.     A SAST Scan Is Limited to One Project

You can think of a project as a folder containing all source code without the third-party packages’ code. This makes life easier since there’s a clear distinction between a user’s code and the dependencies’ code.
For example, if the user’s code requires a single third-party package, two scans can be made:

  • A scan on the user code.
  • A scan on the third-party package.

Static Analysis Steps

Now that we’ve covered the prerequisites and assumptions, let’s understand the challenge itself by looking at the following example, written in Python.
Here’s a simple piece of code that imports an open source library and calls a method in it. This method in turn calls a vulnerable method.
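A minimal Python sketch of such user code, using the names referenced in the steps below, could look like this:

# user_app.py – the user's code; OSLib stands in for the open source library
from OSLib import lib_foo

def foo():
    # defined locally, so it resolves within the user's own code
    print("doing some local work")

def main():
    foo()
    lib_foo("user supplied data")  # defined in the third-party package

main()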

The code of OSLib will be:
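Again as a simplified, hypothetical stand-in for a real open source package:

# OSLib.py – the open source library
def inner_vuln_method(data):
    # stand-in for the logic in which the vulnerability is actually triggered
    return data[::-1]

def lib_foo(data):
    # the exported API called from the user's code
    return inner_vuln_method(data)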

Here are the steps:

1.     Find Unresolved Methods in User’s Code

The user code is parsed with CxSAST and a query is run to detect all methods that are called and are missing a definition – hence unresolved and belong to a third-party package. In our example, there are two calls:

  • foo() – is defined in the user code and hence resolved.
  • lib_foo() – is defined in OSLib, and is therefore an unresolved method that must have been imported.

In our case, there’s a single import to OSLib, so it’s obvious where the method was imported from.
Usually, there will be multiple imports, in which case a signature of the method is collected and searched across imported libraries. Assuming the code is functional and works, there will always be a single match.

2.     Find Exported Methods in Package Code

The code of package OSLib is also parsed with CxSAST, and a query is run to find all exported methods. In languages like C# and Java, an exported method is a public method in a public class that can be used by the user’s code. In Python, all methods are public so the exported methods in our example will be lib_foo() and inner_vuln_method().
This data is essential since it’s used to match unresolved methods in the step above.

3.     Call Graph

A query for a call graph is run on both user’s code and package code.
For the user’s code, the graph is:

For the package code, the result is similar:

4.     Find Exploitable Path

Using all the data collected so far, a full call graph is built:
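Written out as simple edge lists (following the sketches above), the combined result is roughly:

# Call graph of the user's code
user_graph = {"main": ["foo", "OSLib.lib_foo"]}

# Call graph of the package code
package_graph = {"OSLib.lib_foo": ["OSLib.inner_vuln_method"]}

# The full call graph stitches the two together where the unresolved call
# matches the exported method.
full_call_graph = {**user_graph, **package_graph}
# main -> OSLib.lib_foo -> OSLib.inner_vuln_method (the vulnerable method)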

All methods in the graph are checked for exploitability. In our example, inner_vuln_method() is the exploitable method, and so an Exploitable Path is found.

Further Topics

The example above provided a simple demonstration of how Exploitable Path is analyzed, but in reality, this problem is much harder. Some other research questions we faced, which are not discussed in this blog post, are:

  • Detecting Exploitable Path in a dependency of a dependency
  • Matching challenges between user’s code and package code
  • Integration of DFG (Data Flow Graph)

Summary

By using CxSAST with queries written in CxQuery, we created an abstraction layer to statically detect vulnerabilities that are exploitable. A single algorithm can detect Exploitable Path across multiple programming languages, and unlike other solutions on the market, CxSCA can easily extend support for more languages. Currently, Java and Python are already supported, with many more languages to follow.
With CxSCA, Checkmarx enables your organization to address open source vulnerabilities earlier in the SDLC and cut down on manual processes by reducing false positives and background noise, so you can deliver secure software faster and at scale. For a free demonstration of CxSCA, please contact us here.

In the next post in this series, we’ll look at some of the challenges we faced as we developed the Exploitable Path feature.

Software Composition Analysis: Why Exploitable Path Is Imperative https://checkmarx.com/blog/software-composition-analysis-why-exploitable-path-is-imperative/ Wed, 20 Jan 2021 07:20:38 +0000 https://www.checkmarx.com/?p=45122 If you look at the way code is written today vs. a few years back, one of the major changes is the transition to open source. What was once considered an unsafe methodology has grown and matured, and now almost every software project uses open source libraries. Today, software engineers prefer to use existing open source code instead of writing everything themselves.

Open source code’s benefits are significant

  • Code development can be faster:
    It’s now more about welding existing pieces together, rather than building them yourself. Open source libraries solve fundamental engineering problems, allowing engineers to focus their time on more complex tasks.
  • Tools like package managers make it easy to manage and add third-party dependencies:
    Every programming language or IDE comes with an integrated package manager support.
  • Over time, the way APIs are exported and used becomes clearer and simpler:
     Open source maintainers offer clear APIs, simple documentation, and code samples.

Every new technology has its risks, though, and attackers can exploit weak points in software that uses open source. An attacker can gain information about open source libraries used by an application, and in other cases, can simply maintain an arsenal of exploits for popular open source packages and attempt to use these until one succeeds. In the case of open source packages, attackers have full access to:

  • Its code, which they can scan for zero-day vulnerabilities.
  • Issues and security tickets that are managed on GitHub, GitLab, etc., which can help find vulnerable areas for exploitation.
  • Current and past vulnerabilities, which can be very helpful when the library in use is not up to date. These vulnerabilities have detailed descriptions and advisories, and even the patches themselves are open source. An attacker can utilize those vulnerabilities and attempt to attack the application, and if the application uses an old, vulnerable version of the library, the attack is likely to succeed.

To manage such risks, a software composition analysis (SCA) tool such as Checkmarx CxSCA detects the third-party libraries and versions you use and informs you of existing vulnerabilities. It’s important to recognize that not every library in a project necessarily applies, since some may not actually be in use.

Prioritizing Vulnerabilities

Tracking existing vulnerabilities is important, but it’s not enough. The average project has dependencies that in turn have their own dependencies. Overall, there can be hundreds or thousands of libraries with hundreds of vulnerabilities in your project.

Resolving those vulnerabilities can take a lot of time, while developers also need to put effort into developing new features. Managing the security vulnerabilities of third-party packages is often not a one-time task but an ongoing process, so it’s important for an SCA tool to prioritize the risks. This way, developers know which risks are the most crucial to solve.

But how do you prioritize a vulnerability?

The popular method is to prioritize vulnerabilities by the CVSS—a score given to a vulnerability based on its impact, how easy it is to exploit, etc. Every vulnerability that is made public has this score. However, this methodology is too simplistic, since it ignores the most crucial aspect: whether the vulnerability is actually exploitable in your application.

Exploitability of a Vulnerability

Let’s assume that a vulnerability is triggered by a foo() method in a library you’re using. If your code doesn’t call foo() in any flow, either directly or indirectly, the vulnerability is in fact not exploitable. If so, the priority of fixing it is low and efforts should be redirected to exploitable vulnerabilities instead.
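In a Python project, for example, that situation might look like this (the library and its methods are hypothetical):

import risky_lib  # hypothetical dependency whose foo() method carries a known CVE

def build_report(records):
    # Only bar() is ever called from this project. Since no flow, direct or
    # indirect, ever reaches risky_lib.foo(), the vulnerability in foo() is not
    # exploitable here and can be deprioritized.
    return risky_lib.bar(records)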

Looking at it from an attacker’s perspective, for a vulnerability to be exploitable:

  • The method foo() needs to be called. This can require a carefully crafted input, the processing of which will trigger a call for foo().
  • The attacker needs to control the data flow for foo(). Usually, calling a method with “regular input” won’t trigger any unwanted behavior. Unwanted behavior is triggered when a carefully crafted input from the attacker reaches foo(), meaning the vulnerable method needs to be callable and its input controlled.

Developers today can use an entire library for a single API method out of dozens of APIs. Also, libraries they use have their own third-party libraries, with only a partial use of available APIs. This means that given a vulnerability in one of your dependencies, the probability of exploiting it can be below 5%. This has serious implications:

  • Current vulnerability prioritization is defocusing. Instead of fixing exploitable vulnerabilities first, effort is put into risks that may be irrelevant.
  • Such findings may effectively be false positives. You would assume that a critical risk is a top priority, but if the relevant code flow can’t be reached, there’s nothing critical here.
  • The true number of vulnerabilities that need to be addressed is actually much lower than assumed, and that’s good news for developers. Fewer vulnerabilities means far less effort to remediate them.

By using our SAST engine (CxSAST) to statically analyze the project’s source code and the source code of all the packages it uses, and by examining the call graphs and data flows, the exploitability and risk can be evaluated.

With CxSCA, Checkmarx enables your organization to address open source vulnerabilities earlier in the SDLC, and cut down on manual processes by reducing false positives and background noise, so you can deliver secure software faster and at scale. For a free demonstration of CxSCA, please contact us here.

In the next blog post, we’ll dig deeper into the research behind Exploitable Paths, sharing challenges and insights we collected along the way.


Apache Unomi CVE-2020-13942: RCE Vulnerabilities Discovered https://checkmarx.com/blog/apache-unomi-cve-2020-13942-rce-vulnerabilities-discovered/ Tue, 17 Nov 2020 09:00:14 +0000 https://www.checkmarx.com/?p=42583 “Apache Unomi is a Java Open Source customer data platform, a Java server designed to manage customers, leads and visitors’ data and help personalize customers experiences,” according to its website. Unomi can be used to integrate personalization and profile management within very different systems such as CMSs, CRMs, Issue Trackers, native mobile applications, etc. Unomi was announced to be a Top-Level Apache product in 2019 and is made with high scalability and ease of integration in mind.
Given that Unomi contains an abundance of data and features tight integrations with other systems – making it a highly desirable target for attackers – the Checkmarx Security Research Team analyzed the platform to uncover potential security issues. The findings are detailed below.

Executive Summary: CVE-2020-13942

What We Found

Apache Unomi allowed remote attackers to send malicious requests with MVEL and OGNL expressions that could contain arbitrary classes, resulting in Remote Code Execution (RCE) with the privileges of the Unomi application. MVEL and OGNL expressions are evaluated by different classes inside different internal packages of Unomi, making them two separate vulnerabilities. The severity of these vulnerabilities is heightened since they can be exploited through a public endpoint – one that, by design, must remain public for the application to function correctly – with no authentication and no prior knowledge on the attacker’s part.
Both vulnerabilities, designated as CVE-2020-13942, have a CVSS score of 10.0 (Critical), as they lead to complete compromise of the Unomi service’s confidentiality, integrity, and availability, in addition to allowing access to the underlying OS.

Details

Previous RCE Found in Unomi

Unomi offers a restricted API that allows retrieving and manipulating data, in addition to a public endpoint where applications can upload and retrieve user data. Unomi allows complex conditions in the requests to its endpoints.
Unomi conditions rely on expression languages (EL), such as OGNL or MVEL, to allow users to craft complex and granular queries. The EL-based conditions are evaluated before accessing data in the storage.
In versions prior to 1.5.1, these expression languages were not restricted at all—leaving Unomi vulnerable to RCE via Expression Language Injection. An attacker was able to execute arbitrary code and OS commands on the Unomi server by sending a single request. This vulnerability was classified as CVE-2020-11975 and was fixed. However, upon further investigation, the Checkmarx Security Research Team discovered that the fix was not sufficient and could be trivially bypassed.

Patch Not Sufficient – New Vulnerabilities Discovered

The patch for CVE-2020-11975 introduced SecureFilteringClassLoader, which checks the classes used in the expressions against an allowlist and a blocklist. SecureFilteringClassLoader relies on the assumption that every class in both MVEL and OGNL expressions is loaded using the loadClass() method of the ClassLoader class; it overrides loadClass() and introduces the allowlist and blocklist checks. This assumption turned out to be incorrect. There are multiple ways of loading a class other than calling loadClass(), which bypasses the security control and leaves Unomi open to RCE.
First, MVEL expressions in some cases use already instantiated classes, like Runtime or System, without calling loadClass(). As a result, the latest version of Unomi (1.5.1) still allows the evaluation of MVEL expressions, inside a condition, that contain arbitrary classes.
The following HTTP request has a condition with a parameter containing an MVEL expression (script::Runtime r = Runtime.getRuntime(); r.exec("touch /tmp/POC");). Unomi parses the value and executes the code after script:: as an MVEL expression. The expression in the example below creates a Runtime object and runs a "touch" OS command, which creates an empty file in the /tmp directory.

Vulnerability #1


Second, there is a way to load classes inside OGNL expressions without triggering the loadClass() call. The following HTTP request gets Runtime and executes an OS command using the Java reflection API.

Vulnerability #2


The payload may look scary, but it is simply Runtime r = Runtime.getRuntime(); r.exec("touch /tmp/POC"); written using the reflection API and wrapped in OGNL syntax.
Both presented approaches successfully bypass the security control introduced in version 1.5.1, making it vulnerable to RCE in two different locations.

Possible Attack Scenarios

Unomi can be integrated with various data storage and data analytics systems that usually reside in the internal network. The vulnerability is triggered through a public endpoint and allows an attacker to run OS commands on the vulnerable server. The vulnerable public endpoint makes Unomi an ideal entry point to corporate networks, and its tight integration with other services also makes it a stepping stone for further lateral movement within an internal network.

Summary of Disclosure and Events

After discovering and validating the vulnerabilities, we notified Apache of our findings and worked with them throughout the remediation process until they informed us everything was appropriately patched.
To learn more about these types of vulnerabilities, OWASP and CWE provide descriptions, examples, consequences, and related controls.

Additionally, read the code, analyze the fix, and learn how to mitigate similar issues via our interactive CxCodebashing lesson here.

Timeline of Disclosure

June 24, 2020 – Vulnerability disclosed to Apache Unomi developers
August 20, 2020 – Code with the fix merged to the master branch
November 13, 2020 – Version 1.5.2, containing the fixed code, is released
November 17, 2020 – Public disclosure

Recommendations

The evaluation of user-defined expression language statements is dangerous and hard to constrain. Struts 2 is an excellent example of how hard it is to restrict dynamic OGNL expressions and avoid RCE. Attempts to impose usage restrictions from within, or on, the EL – rather than restricting tainted EL usage in general – are an iterative approach rather than a definitive one. A more reliable way to prevent RCE is to remove support for arbitrary EL expressions entirely, creating a set of static expressions that rely on dynamic parameters instead.
Static Application Security Testing solutions, like CxSAST, can detect OGNL injections in source code and prevent this sort of vulnerability from making its way into production. Meanwhile, software composition analysis (SCA) solutions, such as CxSCA, will have the necessary data about the vulnerable package and will update CxSCA users as soon as the vulnerability is publicly disclosed. To learn how to mitigate similar issues, visit our CxCodebashing lesson here.

Final Words/Summary

This type of research is part of the Checkmarx Security Research Team’s ongoing efforts to drive the necessary changes in software security practices among all organizations. Checkmarx is committed to analyzing open source software to help development teams build and deploy more-secure applications. Our database of open source libraries and vulnerabilities is cultivated by the Checkmarx Security Research Team, empowering CxSCA customers with risk details, remediation guidance, and exclusive vulnerabilities that go beyond the NVD.
To learn more about this type of RCE vulnerability, read our blog about Struts 2. For more information, or to speak to a Checkmarx expert about how to detect, prioritize, and remediate open source risks in your code, contact us.
