Application Security Awareness: The world runs on code. We secure it.

Solidity Top 10 Common Issues
https://checkmarx.com/blog/solidity-top-10-common-issues/
Wed, 13 May 2020 11:19:45 +0000

In 2018, we performed our initial research on the state of security of Smart Contracts, focusing on those written in Solidity, "a contract-oriented, high-level language for implementing smart contracts". At that time, we compiled a Top 10 list of the most common Smart Contract security issues based on publicly available Smart Contract source code. The time has come to update that research and evaluate how Smart Contract security has evolved since then.
Although Top 10 lists are nice to have, they tend to leave out interesting details that don't exactly fit the list format. Before digging into the updated Smart Contracts Top 10 list, here are some highlights from our original research:

  • In 2018, Denial-of-Service by External Contract and Reentrancy were the top 2 issues. Both have since been remedied. You can learn more about Reentrancy in our recent research blog: Checkmarx Research: Solidity and Smart Contracts from a Security Standpoint. When Solidity v0.6.x was released, introducing a lot of breaking changes, 50% of scanned Smart Contracts were not even ready for Solidity compiler v0.5.0. This becomes even more relevant since 30% of the Smart Contracts use deprecated constructs (e.g., sha3, throw, constant, etc.), and 83% had issues with the compiler version specification (pragma).
  • Although visibility issues didn't make it into the 2018 Top 10 list, nor into this updated version, they have increased by 48%. You can read more about this issue in our previous blog and the official documentation.

The table below compares the changes between the 2018 and 2020 Top 10 Common Issues lists. The issues are sorted by severity and prevalence.

Solidity Top 10 Common Issues

S1 – Unchecked External Call

This was the third most-common issue on our previous Top 10 list. Since the top 2 issues are now resolved, Unchecked External Call has moved up to become the most common issue in the 2020 updated list.
Solidity low-level call methods (e.g., address.call()) do not throw an exception. Instead, they return false if the call encounters an exception. On the other hand, contract calls (e.g., ExternalContract.doSomething()) automatically propagate a throw if doSomething() throws.
Transferring Ether using addr.send() is a good example where unsuccessful transfers should be handled explicitly by checking the return value, but the same applies to other external calls.
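A minimal sketch of the pattern (contract and function names are illustrative, not from the original post):

```solidity
pragma solidity ^0.4.24;

contract Payout {
    // BAD: the boolean returned by send() is ignored, so a failed
    // transfer goes unnoticed and execution continues regardless.
    function unsafeWithdraw(address recipient, uint256 amount) public {
        recipient.send(amount);
    }

    // GOOD: check the return value and revert on failure.
    function safeWithdraw(address recipient, uint256 amount) public {
        require(recipient.send(amount), "Ether transfer failed");
    }
}
```

In real code both functions would also need access control (omitted here to keep the sketch short).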

S2 – Costly Loops

Costly Loops moved from fourth on the Top 10 list to second. Despite the fact that the top 2 issues from our previous list are resolved, the number of affected Smart Contracts increased by almost 30%.
Computational power on Ethereum environments is paid (using Ether). Thus, reducing the computational steps required to complete an operation is not only a matter of optimization, but also cost efficiency.
Loops are a great example of costly operations: the more elements an array has, the more iterations are required to complete the loop. As you may expect, infinite loops exhaust all available gas.
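As a sketch of the problem (names are illustrative), consider a function whose gas cost grows with the array it iterates:

```solidity
pragma solidity ^0.4.24;

contract Tally {
    uint256[] public elements;

    // Each iteration consumes gas; once `elements` grows large
    // enough, the loop's total cost exceeds the block gas limit and
    // the function can no longer be executed at all.
    function sum() public view returns (uint256 total) {
        for (uint256 i = 0; i < elements.length; i++) {
            total += elements[i];
        }
    }
}
```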

If an attacker is able to influence the elements array length, they can cause a denial of service by preventing execution from ever exiting the loop. Although it was far from the Top 10 common issues, array length manipulation was found in 8% of the scanned Smart Contracts.

S3 – Overpowered Owner

This is a new entry in the Top 10 list, affecting approximately 16% of the scanned Smart Contracts.
Some contracts are tightly coupled to their owner, making some functions callable only by the owner's address, as in the example below.
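The code figure from the original post is not reproduced here; a reconstruction consistent with the description that follows might look like this:

```solidity
pragma solidity ^0.4.24;

contract Owned {
    address public owner;

    constructor() public {
        owner = msg.sender;
    }

    modifier onlyOwner() {
        require(msg.sender == owner);
        _;
    }

    // Restricted through the onlyOwner modifier.
    function doSomething() public onlyOwner {
        // ...
    }

    // Restricted through an explicit check.
    function doSomethingElse() public {
        require(msg.sender == owner);
        // ...
    }
}
```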

Both the doSomething() and doSomethingElse() functions can only be called by the contract owner: the former uses the onlyOwner modifier, while the latter enforces the restriction explicitly. This poses a serious risk: if the owner's private key gets compromised, an attacker can gain control over the contract.

S4 – Arithmetic Precision

Solidity data types are cumbersome due to the 256-bit word size of the Ethereum Virtual Machine (EVM). The language does not offer a floating-point representation, and data types shorter than 32 bytes are packed together into the same 32-byte slot. With this in mind, you should expect precision issues.
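A sketch of the rounding problem (function names are illustrative): since integer division truncates, the order of operations matters.

```solidity
pragma solidity ^0.4.24;

contract Precision {
    // BAD: division first truncates: 5 / 2 * 10 == 2 * 10 == 20.
    function divideFirst(uint256 amount) public pure returns (uint256) {
        return amount / 2 * 10;
    }

    // BETTER: multiply first: 5 * 10 / 2 == 50 / 2 == 25.
    function multiplyFirst(uint256 amount) public pure returns (uint256) {
        return amount * 10 / 2;
    }
}
```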

When division is performed before multiplication, as in the example above, you should expect huge rounding errors.

S5 – Relying on tx.origin

Contracts should not rely on tx.origin for authentication, since a malicious contract may play in the middle, draining all the funds: msg.sender should be used instead.

You'll find a detailed explanation of Tx Origin Attacks in Solidity's documentation. Long story short, tx.origin is always the first account in the call chain, while msg.sender is the immediate caller. If the last contract in the chain relies on tx.origin for authentication, the contract in the middle will be able to drain the funds, since no validation is performed on the immediate caller (msg.sender).
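A sketch of the vulnerable pattern and its fix (contract and function names are illustrative):

```solidity
pragma solidity ^0.4.24;

contract Wallet {
    address public owner;

    constructor() public {
        owner = msg.sender;
    }

    // BAD: if the owner is tricked into calling a malicious
    // contract, that contract can call withdrawAll() and still pass
    // this check, because tx.origin is still the owner's account.
    function withdrawAll(address to) public {
        require(tx.origin == owner);
        to.transfer(address(this).balance);
    }

    // GOOD: msg.sender is the immediate caller, so a contract in
    // the middle cannot pass as the owner.
    function withdrawAllSafe(address to) public {
        require(msg.sender == owner);
        to.transfer(address(this).balance);
    }
}
```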

S6 – Overflow / Underflow

Solidity's 256-bit Virtual Machine (EVM) brought back overflow and underflow issues, as demonstrated here. Developers should be extra careful when using uint data types in for-loop conditions, since underflow may result in infinite loops.
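The classic instance of this bug is a decrementing loop over an unsigned counter (a sketch; names are illustrative):

```solidity
pragma solidity ^0.4.24;

contract Sweep {
    uint256[] public elements;

    // BAD: i is unsigned, so the condition i >= 0 is always true.
    // When i reaches 0, i-- wraps around to 2^256 - 1 and the loop
    // never terminates (it runs until all gas is exhausted).
    // Note: elements.length - 1 itself underflows on an empty array.
    function iterateBackwards() public view returns (uint256 total) {
        for (uint256 i = elements.length - 1; i >= 0; i--) {
            total += elements[i];
        }
    }
}
```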

In the example above, when i reaches 0 the next i-- wraps it around to 2^256 - 1, making the condition i >= 0 always true. Developers should prefer <, >, != and == for comparison.

S7 – Unsafe Type Inference

This issue moved up two positions and now affects more than 17% of the scanned Smart Contracts.
Solidity supports Type Inference, but there are some quirks with it. For example, the literal 0 type-infers to byte, not int as we might expect.
In the example below, the type of i is inferred as uint8: the smallest integer type sufficient to store the right-hand-side value. If elements has more than 255 elements, we should expect an overflow.
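A sketch of the issue on pre-0.5.0 compilers, where var was still allowed (names are illustrative):

```solidity
pragma solidity ^0.4.24; // `var` was removed in Solidity 0.5.0

contract Inferred {
    uint256[] public elements;

    function iterate() public view {
        // i is inferred as uint8, the smallest type that can hold 0.
        // With enough elements, i wraps from 255 back to 0 and the
        // loop never terminates. Declaring `uint256 i` avoids this.
        for (var i = 0; i < elements.length; i++) {
            // ... process elements[i]
        }
    }
}
```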

Explicitly declaring data types is recommended to avoid unexpected behaviors and/or errors.

S8 – Improper Transfer

This issue dropped from sixth to eighth in the Top 10 list, now affecting less than 1% of the scanned Smart Contracts.
There is more than one way to transfer Ether between contracts. Although calling the addr.transfer(x) function is the recommended way, we still found contracts using the send() function instead.
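The difference, sketched (contract and function names are illustrative):

```solidity
pragma solidity ^0.4.24;

contract Payment {
    function payOut(address recipient, uint256 amount) public {
        // Discouraged: send() merely returns false on failure, and
        // that result is often ignored:
        // recipient.send(amount);

        // Recommended: transfer() throws on failure, so an
        // unsuccessful transfer cannot go unnoticed.
        recipient.transfer(amount);
    }
}
```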

Note that addr.transfer(x) automatically throws an exception if the transfer is unsuccessful, mitigating the Unchecked External Call issue discussed previously (S1).

S9 – In-Loop Transfers

When Ether is transferred in a loop, if one of the contracts cannot receive it, then the whole transaction will be reverted.
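A sketch of the pattern (names are illustrative): a single recipient whose fallback function reverts blocks payment to everyone.

```solidity
pragma solidity ^0.4.24;

contract Raffle {
    address[] public winners;

    // BAD: if any one winner is a contract whose fallback reverts,
    // transfer() throws and the entire transaction is rolled back,
    // so nobody gets paid. A pull-payment pattern, where each winner
    // withdraws individually, avoids this.
    function payWinners(uint256 prize) public {
        for (uint256 i = 0; i < winners.length; i++) {
            winners[i].transfer(prize);
        }
    }
}
```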

An attacker may take advantage of this behavior to cause a denial-of-service, preventing other contracts from receiving Ether.

S10 – Timestamp dependence

This was fifth in the previous version of the Top 10 list.
It's important to remember that Smart Contracts run on multiple nodes at different times. The Ethereum Virtual Machine (EVM) does not provide clock time, and the now variable, commonly used to obtain a timestamp, is in fact an environment variable (an alias of block.timestamp) that miners can manipulate.

Since miners can manipulate this environment variable, its value should only be used in inequalities: >, <, >=, and <=.
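A sketch contrasting an acceptable and an unacceptable use of now (names and values are illustrative):

```solidity
pragma solidity ^0.4.24;

contract Sale {
    uint256 public deadline;

    // ACCEPTABLE: an inequality against a deadline; a miner nudging
    // the timestamp by a few seconds changes little.
    function isOpen() public view returns (bool) {
        return now < deadline;
    }

    // BAD: using now as a randomness source; a miner can adjust the
    // timestamp to influence the outcome.
    function badRandom() public view returns (uint256) {
        return uint256(keccak256(abi.encodePacked(now))) % 10;
    }
}
```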
When looking for randomness, consider the RANDAO contract, which is based on a Decentralized Autonomous Organization (DAO) that anyone can participate in, with the random number generated by all participants together.

Conclusion

When comparing the 2018 and 2020 Top 10 Common Issues lists, we can observe some progress concerning development best practices, especially those impacting security. Seeing the 2018 top 2 issues, Denial-of-Service by External Contract and Reentrancy, move out of the Top 10 is a positive sign, but there are still important steps to take to avoid common mistakes.
Remember that Smart Contracts are immutable by design, meaning that once created, there’s no way to patch the source code. This poses a great challenge concerning security and developers should take advantage of the available application security testing tools to ensure source code is well-tested and audited before deployment.
Solidity is a very recent programming language that is still maturing. Solidity v0.6.0 introduced a few breaking changes and more are expected in the upcoming versions.
Discovering issues and risks like the ones mentioned herein is why the Checkmarx Security Research team performs investigations. This type of research activity is part of their ongoing efforts to drive the necessary changes in software security practices among organizations worldwide.

Deliver Secure Software from Home: Checkmarx Offers Free 45-Day Codebashing Trial
https://checkmarx.com/blog/deliver-secure-software-from-home-checkmarx-offers-free-codebashing-trial/
Mon, 13 Apr 2020 07:05:21 +0000

For the past few weeks and the foreseeable future, COVID-19 has forced organizations around the world to adopt work-from-home models. This can be a difficult transition, impacting productivity, workflows, and overall cybersecurity. And, with software development teams now "developing from home," and in some cases being asked to meet even more aggressive delivery timelines, organizations may need to look at new ways to ensure the security of the software they develop.
Checkmarx is here to help.
Starting today, we are offering 45 days of free access to our secure coding solution – Codebashing. Codebashing empowers security teams to provide developers with the skills and tools they require to write secure code, wherever they are.

How it Works

With the full Codebashing offering, AppSec managers are able to cultivate a culture of software security that empowers developers to think and act securely in their day-to-day work. Organizations can engage their remote development teams to participate in:

  • Gamified Training – Developers take online, gamified training courses while learning and coding from home.
  • Team Challenges – Add a spirit of competition by running challenges that test developers’ AppSec knowledge. Developers will go head-to-head with one another to move up the leaderboard.
  • Assessments – With everyone working from home, it’s the perfect time to understand where each developer stands when it comes to their AppSec skills.

To get started, sign up between now and May 31, 2020, and view our FAQ below for more information. Developers can also visit free.codebashing.com to get a quick start on some of the lessons Codebashing offers, free of charge and with no commitment – play now!
Supporting our customers, partners, and the broader AppSec community remains a top priority, and we hope these proactive steps will help organizations continue to deliver secure software throughout these difficult times.
*Terms and conditions apply.

Additional Resources

  • Learn more about Checkmarx Codebashing here.
  • Follow our social channels (LinkedIn, Twitter, Facebook) for more COVID-19, Codebashing, and AppSec related updates.
  • Learn more about how Checkmarx is ensuring business continuity for our customers throughout COVID-19.

FAQ

❯ What is Codebashing?

Codebashing empowers security teams to create and sustain a software security culture that puts AppSec awareness front and center for developers. Through the use of communication tools, gamified training, competitive challenges, and ongoing assessments, Codebashing helps organizations eliminate the introduction of vulnerabilities in source code.

❯ Who is Codebashing for?

Any organization that develops software – from technology to healthcare to financial services businesses – and any developer who would benefit from secure coding training. Codebashing is also used within universities as a teaching platform to train up-and-coming developers on secure coding skills.

❯ Who is eligible for this offer?

Organizations with development teams of 10+ developers can take advantage of our offer.

❯ How long is this offer valid for?

Our free Codebashing offer is valid for 45 days after sign-up, with the last day for sign-up being May 31, 2020.

❯ How do I get started?

Visit here and fill out the corresponding form. A Checkmarx representative will reach out within 24 hours to qualify you and start the setup process.

❯ What are the terms and conditions?

  • One offer per company
  • Must have at least 10 developers participate
  • No more than 100 developer seats per company
  • Last sign up date is May 31, 2020
  • Not open to existing Codebashing customers
Checkmarx Research: Smart Vacuum Security Flaws May Leave Users Exposed
https://checkmarx.com/blog/checkmarx-research-smart-vacuum-security-flaws-leave-users-exposed/
Wed, 26 Feb 2020 09:00:34 +0000

There is little doubt that today's consumers have a tendency to choose convenience over security. When a shiny new gadget designed to make our lives easier finds its way to the consumer market, buyers often jump at the opportunity to purchase it and put it into action. Unfortunately, every new internet-connected gadget opens users up to a host of possible security issues and privacy concerns.

As part of the ongoing research performed by the Checkmarx Security Research Team, they recently investigated several IoT devices, including the Ironpie M6 smart vacuum cleaner by Trifo. Since the device has a video camera, the team was interested in testing the security and privacy of the vacuum.

According to Trifo, the Ironpie is “An AI-powered robot vacuum that vacuums up dirt, dust, crumbs – even sand – like no one’s business” and it claims that its “mission is to clean and protect your home, so you can do more important things. I keep your home safe from dirt, dust, crumbs, sand and more; and also use my advanced vision system to keep intruders out. I am always alert and never sleep on the job.”
The Trifo can be connected to the internet via WiFi, and be controlled remotely for vacuuming, as well as for remote video stream viewing, since it incorporates a video camera. The security concerns of connecting video cameras to the internet should be obvious, and that was one of the motivators behind this research.

As a result of the research team's investigation, several high- and medium-severity security vulnerabilities were discovered. A summary of the vulnerabilities can be seen in the table below. These vulnerabilities may put Ironpie users at risk and should be fixed as soon as possible. A video of our team exploiting the discovered vulnerabilities can be found here.

Vulnerability CVSS 3.0 Vector CVSS Score
Trifo Home Android App Insecure Update AV:A/AC:L/PR:N/UI:R/S:C/C:H/I:H/A:N 8.5
MQTT Remote Access AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:L/A:N 8.2
MQTT Insecure Encryption AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:H/A:H 8.1
RTMP Remote Video Access AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N 7.5
Ironpie Local Video Access AV:A/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N 6.5
Vacuum Denial-of-Service AV:A/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H 6.5

In this research, several vulnerabilities and bad coding practices were identified. Some of them were weak security implementations with no practical use case, while others show profound misguidance about what a serious security stance means for a self-proclaimed security product, as was the case with Trifo.
There are 3 areas of potential fault that are important to understand. Issues can be found in each component that makes the Ironpie ecosystem work:

  • the vacuum itself,
  • the Android mobile app,
  • and its supporting backend servers.

Summary of the Issues Discovered

Trifo Home Android App Insecure Update

The Trifo Android app, called Trifo Home, is mostly secure in terms of common Android programming mistakes, except for one critical procedure: the update procedure. The update is performed in a non-standard way (i.e., not via the Google Play Store).
Since the Trifo app uses an HTTP request when the application starts to query the update server for a new APK (Android package, .apk), an attacker can monitor and easily change the request in transit, forcing the application to update itself to a malicious version controlled by the attacker.

MQTT Remote Access

MQTT is a machine-to-machine (M2M), IoT connectivity protocol. It was designed as an extremely lightweight publish/subscribe messaging transport. In the case of Trifo, the supporting MQTT servers are a bridge between the Trifo vacuum, the backend servers, and the Trifo Home app. The servers are used to provide and receive events from the vacuums deployed, which are then passed along to the graphical user interface (GUI) of the appropriate Trifo Home app.
Lacking a proper authentication mechanism, the MQTT servers allow an attacker to connect while impersonating any client ID; these IDs are easily predictable.

MQTT Insecure Encryption

While the Android app uses MQTT over SSL, the Ironpie vacuum connects to the MQTT servers via an unencrypted connection, exchanges some packets, and only after that is the MQTT payload encrypted. This basically allows an attacker to calculate any client ID. With this knowledge, it is possible for:

  • A remote attacker to monitor traffic coming into the Ironpie, since they can subscribe to and receive traffic for any MAC address, which is easily guessable. This includes the dev_key, which can be used to decrypt all traffic.
  • A local attacker to impersonate the MQTT server, thus taking full control of the vacuum.

RTMP Video Feed Access

It is possible for a remote attacker to access information via MQTT, such as the SSID of the network the vacuum is connected to, obtain the internal vacuum IP address, its MAC address, and other info. With this information, an attacker can derive a key that allows them to gain access to the video feed of all connected, working, Ironpie vacuums, regardless of where they are located.

Summary of Disclosure and Events

When the vulnerabilities were first discovered, our research team ensured that they could reproduce the process of exploiting them. Once that was confirmed, the Checkmarx research team responsibly notified Trifo of their findings. After multiple attempts by the Checkmarx Security Research Team to open up a line of communication with Trifo pertaining to the discovered vulnerabilities, Trifo has not responded to any of our efforts. The research team initially contacted Trifo on 16-Dec-2019 and openly shared the full report of their findings with them.
As far as the Checkmarx Research Team knows, the vulnerabilities still exist in the Trifo Ironpie ecosystem. As a result, the team is not releasing any additional technical information about the vulnerabilities at this time – to ensure Checkmarx is not putting Trifo Ironpie users at unnecessary risk. If and when Trifo patches the vulnerabilities, Checkmarx will publish a more robust technical report outlining how we were able to exploit these issues, as we believe there is great learning value within to help pave the way for safer device development.

Final Words

This type of research activity is part of our ongoing efforts to drive the necessary changes in software security practices among vendors that manufacture consumer-based IoT devices, while bringing more security awareness amid the consumers who purchase and use them. Protecting the privacy of consumers and organizations must be a priority for all of us in today’s increasingly connected world.

Checkmarx Research: Apache Dubbo 2.7.3 – Unauthenticated RCE via Deserialization of Untrusted Data (CVE-2019-17564)
https://checkmarx.com/blog/apache-dubbo-unauthenticated-remote-code-execution-vulnerability/
Wed, 19 Feb 2020 10:00:56 +0000

Executive Summary

Having developed a high level of interest in serialization attacks in recent years, I decided some months back to put some effort into researching Apache Dubbo. Dubbo, I learned, deserializes many things in many ways, and its usage worldwide has grown significantly after its adoption by the Apache Foundation.

Figure 1 – Dubbo Architecture
According to a mid-2019 press-release, Dubbo is “in use at dozens of companies, including Alibaba Group, China Life, China Telecom, Dangdang, Didi Chuxing, Haier, and Industrial and Commercial Bank of China, among others”. In the same press-release, Apache announced Dubbo being promoted into an Apache Top-Level Project.


Figure 2 – Dubbo Users, According to Apache Dubbo Website
I discovered that Apache Dubbo providers and consumers using Dubbo versions <= 2.7.3, when configured to accept the HTTP protocol, allow a remote attacker to send a malicious object to the exposed service, resulting in Remote Code Execution. This occurs with no authentication, and minimal knowledge is required on the attacker's part to exploit this vulnerability. Specifically, only the exploit described herein and a URL are required to successfully exploit it on any Dubbo instance with HTTP enabled. A proof-of-concept video also accompanies this report.
An attacker can exploit this vulnerability to compromise a Dubbo provider service, which is expecting remote connections from its consumers. An attacker can then replace the Dubbo provider with a malicious Dubbo provider, which could then respond to its consumers with a similar malicious object – again resulting in Remote Code Execution. This allows an attacker to compromise an entire Dubbo cluster.
The root cause of this issue is the use of a remote deserialization service in the Spring Framework, whose documentation explicitly recommends not to use it with untrusted data, in tandem with an outdated library containing a lesser-known gadget chain that enables code execution. The combination of unsafe deserialization of untrusted data and a gadget chain is what bridges the gap between remote access and remote unauthenticated code execution.
Credits are in order to Chris Frohoff and Moritz Bechler for their research and tools (ysoserial and marshalsec), as some of their code was used in the gadget chain, and their research laid the foundation for this exploit.

Severity

Checkmarx considers this vulnerability to have a CVSS score of 9.8 (Critical), since it is an unauthenticated remote code execution vulnerability that provides privileges at the Dubbo service's permission level, allowing complete compromise of that service's confidentiality, integrity, and availability.
While not all Dubbo instances are configured to use the HTTP protocol, instances with known vulnerable versions that are configured to use this protocol would be trivially vulnerable, given minimal and readily available information: the URL to the vulnerable service. This service URL would be publicly available within the network, via services such as a registry (e.g., Zookeeper), and is not considered secret or confidential.
CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H/E:F/RL:U/RC:C/CR:H/IR:H/AR:H

Specifications

What’s Going On?

Unsafe deserialization occurs within a Dubbo application that has HTTP remoting enabled. An attacker may submit a POST request containing a Java object to completely compromise a Provider instance of Apache Dubbo, if that instance enables HTTP.
The Dubbo HTTP instance attempts to deserialize data within the Java ObjectStream, which contains a malicious set of classes, colloquially referred to as a gadget chain, whose invocation results in the execution of malicious code. In this instance, the malicious code in question allows arbitrary OS commands, and the invocation of the gadget chain occurs when an internal toString call is made in the Dubbo instance on this gadget chain, during exception creation.

Recreating the Issue

An attacker can submit a POST request with a malicious object to a bean URL for an Apache Dubbo HTTP Service, which would result in remote code execution. The bean, in this case, is the interface implementation class bound by Spring to a given Dubbo protocol endpoint. The bean is wired to a URL, and the request body for the bean contains an HTTP Remote Invocation used to determine which bean method is invoked, and with what parameters.
Once an attacker has the bean’s URL, all they have to do to exploit this vulnerability is to submit a malicious gadget chain via a standard POST request.
A new gadget chain which allows remote OS command execution was found in the scope of vanilla Apache Dubbo with Dubbo-Remoting-HTTP, if the HTTP service and protocol are enabled.

Recreating a Victim Dubbo HTTP Instance for PoC

Follow this guide:

  1. Follow the Official Apache Dubbo Quick-Start guide until a functioning provider and registry are successfully created
  2. Enable Dubbo HTTP service – Edit dubbo-demo-provider.xml – change dubbo:protocol name to “http”
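Step 2 amounts to switching the provider's protocol element from dubbo to http; in the demo XML configuration this is roughly (the port value is illustrative):

```xml
<!-- dubbo-demo-provider.xml: change the protocol name to "http" -->
<dubbo:protocol name="http" port="8080" />
```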

Targeting a Vulnerable Instance

To trigger this vulnerability, an attacker must identify the URL of the Dubbo HTTP bean. URL addresses are generally not confidential or privileged: they can be obtained from Dubbo service registries (e.g., Zookeeper), from multicasts, or, in the absence of a well-deployed HTTPS pipeline, via Man-in-the-Middle attacks.

Triggering Vulnerability PoC

Review Appendix 1 for functioning POC code. Note that variables such as the IP address of the Dubbo instance require modification inside this code.
An attacker requires the same dependencies as the Dubbo HTTP Service, stated above. In this PoC, com.nqzero:permit-reflect was used for the reflection features required during serialization, and org.apache.httpcomponents.httpclient was used to send the malicious gadget to the HTTP service. To trigger the vulnerability, a new gadget chain was engineered using means available within the class space of Apache Dubbo and the JDK.
This gadget chain uses the following components:

  • springframework.remoting.httpinvoker.HttpInvokerServiceExporter – this is the deserialization entry point, deserializing the request body. Deserialization of HashMaps, and Java Collections in general, invokes their value insertion methods. In this case, this will invoke HashMap.putVal(h,k,v).
  • A HashMap of two org.springframework.aop.target.HotSwappableTargetSource objects, one containing a JSONObject as a target, and another containing a com.sun.org.apache.xpath.internal.objects.XString object as a target
    • HotSwappableTargetSource objects always return the same hashcode (class.hashCode()), which forces HashMap.putVal(h,k,v) into running deeper equality checks on HashMap keys, triggering equals() on its contents – the two HotSwappableTargetSource member objects
    • HotSwappableTargetSource equality checks validate if the target objects inside HotSwappableTargetSource are equal; in this case – an XString and a JSONObject
    • The XString.equals(object) triggers a call equivalent to this.toString().equals(object.toString()) call, which would trigger JSONObject.toString()
  • JSONObject – org.apache.dubbo.common.json.JSONObject, which is a deprecated class within Dubbo, is used to handle JSON data. If a JSONObject.toString() is invoked, the super method JSON.toJSONString() will be invoked
    • A toJSONString() call will attempt to serialize the object into JSON using JSONSerializer, which invokes a serializer. This serializer is generated using the ASMSerializerFactory. This factory attempts to serialize all getter methods in objects stored inside JSONObject.
  • TemplatesImpl partial gadget – this known gadget is utilized by many gadget chains in ysoserial and marshalsec. This partial gadget generates a malicious com.sun.org.apache.xalan.internal.xsltc.trax.TemplatesImpl object. If this object’s newTransformer() method is invoked, the chain will execute java.lang.Runtime.getRuntime().exec(command)
    • Since JSONObject.toJSONString attempts to serialize all getter methods, the method TemplatesImpl.getOutputProperties() is also invoked
    • Internally, TemplatesImpl.getOutputProperties() method invokes newTransformer() to get the properties from a generated transformer


Figure 3 – Exploit Bytecode
Once newTransformer is invoked, a flow is complete between deserialization at HttpInvokerServiceExporter.doReadRemoteInvocation(ObjectInputStream ois) and java.lang.Runtime.getRuntime().exec(command), thus enabling remote code execution.
The final gadget chain’s structure is:

Once the vulnerability is triggered, and malicious code is executed (and, in the PoC, an instance of calc.exe pops on the server), an exception will be thrown. However, the application will continue to function as intended otherwise, resulting in stable exploitation for the given gadget chain.

Figure 4 – PoC Outcome

Why Is This Happening?

HTTP remoting in Apache Dubbo comes into play when a Dubbo application is created using the HTTP protocol over the Spring Framework. A known-vulnerable class in Spring, invoked naively by Dubbo, deserializes user input using the extremely vulnerable (and nigh-indefensible) ObjectInputStream. An attacker may provide a payload which, when deserialized, triggers a cascade of objects and method invocations which, given a vulnerable gadget chain in deserialization scope, may result in Remote Code Execution, as will be demonstrated in this PoC.
The vulnerable Spring Remoting class is HttpInvokerServiceExporter. From the Spring documentation:
WARNING: Be aware of vulnerabilities due to unsafe Java deserialization: Manipulated input streams could lead to unwanted code execution on the server during the deserialization step. As a consequence, do not expose HTTP invoker endpoints to untrusted clients but rather just between your own services. In general, we strongly recommend any other message format (e.g. JSON) instead.”
This is exactly what happens with Dubbo HTTP Remoting. By using the Dubbo HTTP remoting module, an HTTP endpoint is exposed that receives an HTTP request of the following structure:

  • A POST request
    • Whose URL refers to the packagename.classname for the bean exposed by the provider, which is wired by Dubbo to the actual package and class
    • Whose body is a stream of an object, as serialized by ObjectOutputStream

The HttpProtocol handler parses the incoming org.apache.dubbo.rpc.protocol.http.HttpRemoteInvocation object using the HttpInvokerServiceExporter, which, internally, utilizes ObjectInputStream to deserialize it. HttpRemoteInvocation contains an invocation call to a certain method, and the arguments to pass to this method. However, with ObjectInputStream, any arbitrary serialized Java object can be passed, which would then be deserialized in an insecure manner, resulting in unsafe deserialization.
ObjectInputStream, on its own without any external classes, is vulnerable to memory exhaustion and heap overflow attacks, when it is used to deserialize malformed nested objects.
If an ObjectInputStream deserializable gadget chain is available within code scope that allows code or command execution, an attacker can exploit this to craft an object that results in Remote Code Execution. Such a gadget chain was found and exploited.

Tainted Code Flow

Within the Dubbo HTTP service, the following occurs:

  1. JavaX HttpServlet is invoked with user input
  2. This input is passed to the Dubbo remoting dispatcher, DispatcherServlet, which uses an HttpHandler, an internal class in HttpProtocol, to handle the request and return a response
  3. InternalHandler.handle() creates the insecure HttpInvokerServiceExporter in line 210 and invokes it on the request in line 216
  4. From there, internal calls in HttpInvokerServiceExporter finally pass the request stream into an ObjectInputStream in line 115, which is then internally read by the handler’s superclass RemoteInvocationSerializingExporter in line 144.
  5. The gadget chain is then triggered by the ObjectInputStream readObject operation

Required Prior Knowledge for Exploitation

The only piece of knowledge required to exploit an open HTTP port to a Dubbo HTTP Remoting service is the name of a Remote Invocation interface’s package and class. This information is used to craft the URL to which a serialized malicious object must be submitted, which is standard Spring bean behavior. For example, if the remoted interface’s package is named “org.apache.dubbo.demo” and the interface being remoted is named “DemoService”, an attacker needs to POST an object serialized by ObjectOutputStream to the URL “https://domain:port/org.apache.dubbo.demo.DemoService”. This information can be obtained with various methods:

  • Querying a Zookeeper for available beans, if Dubbo uses a Zookeeper as a registry
  • Observing HTTP traffic via Man-in-the-Middle attacks
  • Spoofing is also likely to be possible if Dubbo uses multicast to discover services (this was not tested)
  • Other means, such as logging services

No additional information is required to perform the attack.
It should be noted that URL paths are generally not considered confidential information, and hiding a vulnerable web service behind an allegedly unknowable URL path would constitute security through obscurity.

Summary of Disclosure and Timeline

When the vulnerability was first discovered, the Checkmarx research team ensured that they could reproduce the process of easily exploiting it. Once that was confirmed, the research team responsibly notified Apache of their findings.

Disclosure Timeline

  • 13/8/2019 – Checkmarx provides full disclosure to security@apache.org, issue forwarded to security@dubbo.apache.org
  • 6/9/2019 – Apache Dubbo team acknowledges that the issue is clear
  • 4/10/2019 – Dubbo team responds regarding technical specifics of intended fix. Checkmarx responds by further explaining the issue – this is the first and last time technical issues are brought up by anyone at Apache in the context of this disclosure
  • 24/11/2019 – Reminder sent after 90 days had elapsed, noting that publication was imminent, given no action on Apache’s part
  • 3/12/2019 – Apache requested more time to re-evaluate the issue prior to Checkmarx publishing its report. The request was granted, with Apache confirming two days later that a CVE would be issued and a proper fix released
  • 11/2/2020 – CVE-2019-17564 disclosed via dev@dubbo.apache.org mailing list, six months (180 days) after original disclosure
  • 12/2/2020 – First PoC emerges in the wild, though it does not contain the new gadget chain disclosed in this article

Package Versions

org.apache.dubbo.dubbo – 2.7.3
org.apache.dubbo.dubbo-remoting-http – 2.7.3
org.springframework.spring-web – 5.1.9.RELEASE

Vendor Mitigation

The Apache Dubbo team resolved this issue by updating the FastJSON dependency (which provides JSONObject) to its latest version, effectively breaking the current gadget chain. They also replaced the deserialization mechanism used by the HTTP protocol, altering the communication protocol and ensuring this specific exploit will no longer work.

Conclusions

The Dubbo HTTP Remoting service is vulnerable to unauthenticated Remote Code Execution, with virtually no prior knowledge required, other than a URL, for successful exploitation.
The root cause of this issue is the use of an unsafe Spring class, HttpInvokerServiceExporter, as the binding for an HTTP service. This class utilizes a standard Java ObjectInputStream with no security mechanism in the form of a class whitelist, which in turn means deserialization allows invocation of arbitrary classes whose deserialization process may trigger malicious code. Use of this class should be discontinued and replaced with a robust solution that whitelists expected classes in Dubbo HTTP beans.
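For reference, the JDK itself offers such a whitelist mechanism since Java 9: the serialization filter, java.io.ObjectInputFilter. The sketch below shows the general shape of that mitigation; it is illustrative (the pattern and class names are ours), not Dubbo's actual fix.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InvalidClassException;
import java.io.ObjectInputFilter;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;

// Deserialization restricted to an explicit whitelist of expected classes.
public class FilteredDeserialization {

    // A stand-in for an unexpected (attacker-chosen) class.
    public static class NotAllowed implements Serializable {
        private static final long serialVersionUID = 1L;
    }

    public static byte[] serialize(Serializable o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    public static Object safeReadObject(byte[] data)
            throws IOException, ClassNotFoundException {
        // Allow only java.util and java.lang classes; '!*' rejects everything
        // else before its readObject() code can ever run.
        ObjectInputFilter whitelist = ObjectInputFilter.Config.createFilter(
                "java.util.*;java.lang.*;!*");
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(data))) {
            ois.setObjectInputFilter(whitelist);
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        ArrayList<Integer> expected = new ArrayList<>();
        expected.add(1);
        expected.add(2);
        System.out.println(safeReadObject(serialize(expected))); // [1, 2]

        try {
            safeReadObject(serialize(new NotAllowed()));
        } catch (InvalidClassException rejected) {
            System.out.println("rejected unexpected class");
        }
    }
}
```

A real service would whitelist its own invocation and argument classes rather than java.util.*; the point is that rejection happens before any gadget code executes.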
This type of research activity is part of the Checkmarx Security Research Team’s ongoing efforts to drive the necessary changes in software security practices among all organizations and to improve security for everyone.

Appendix 1

Appendix 1A: DubboGadget Class

A class for attacking a Dubbo HTTP instance

Appendix 1B: Utils Class

A utility class used in the creation of certain parts of the malicious gadget chain, exposing certain functionality by streamlining reflection. It is derived largely from auxiliary classes and convenience methods in ysoserial by Chris Frohoff – https://github.com/frohoff/ysoserial. Additionally, makeXStringToStringTrigger is derived from prior research by Moritz Bechler, demonstrated in https://github.com/mbechler/marshalsec



Appendix 1C: pom.xml File for DubboGadget

Maven dependencies for DubboGadget

]]>
Checkmarx Research: A Race Condition in Kubernetes https://checkmarx.com/blog/checkmarx-research-race-condition-in-kubernetes/ Wed, 05 Feb 2020 07:20:46 +0000 https://www.checkmarx.com/?p=30414 Last year, the Checkmarx Security Research Team decided to investigate Kubernetes due to the growing usage of it worldwide. For those who are not too familiar with this technology, you can find more information at the official site here. Kubernetes is an open-source framework written in the Go language, originally designed and developed by Google to automate deployment, scaling, and management of containerized applications.
To understand what we discovered, it’s important to know some of the Kubernetes basics. Kubernetes’ purpose is to orchestrate a cluster of servers, named nodes, and each node is able to host Pods. Pods are processes, running on a node, that encapsulate an application. Keep in mind that an application can consist of a single container or multiple containers. This allows Kubernetes to automatically increase resources as the applications require, by creating/deleting more Pods of the same application.
There will be a Master node and Worker nodes on a cluster. The Master node runs the kube-apiserver process that allows the Master to monitor and control the Workers. On the Workers side, the communication with the Master is done by the kubelet process, and the kube-proxy process reflects the networking services of the Pods, allowing users to interact with the applications. The following diagram illustrates the main components of Kubernetes and how they interact.

(source: https://en.wikipedia.org/wiki/Kubernetes)
To look for vulnerabilities in Kubernetes, we needed a lab environment with multiple servers. For this, it was decided to use virtual machines rather than physical ones, because they are much faster to configure each time the lab needs to be re-created. To automate the lab creation and re-creation process, we used Terraform, Packer, and Ansible. The vulnerability we discovered in Kubernetes was uncovered by this automation process.
While creating the lab, we mistakenly reused the Packer image without changing the hostname of the servers, and when we promoted the servers to Kubernetes cluster members, we realized that the cluster was unstable. The CPU load on the Master node was very high and eventually the cluster crashed!
At first, we couldn’t understand what was causing this behavior. Although we had configured servers in the cluster with the same hostname by mistake, this is a plausible situation in a DevOps process. There was also an attack vector: a user with enough privileges on a Worker could lead the whole cluster to crash.
When listing the cluster nodes in the Master with the command kubectl get nodes, we only got one member and it was the original Master, although the other nodes were added to the cluster without errors.
After a reboot to the Master node, the cluster remained stable. When testing with two Workers with the same hostname and a different Master hostname, there was also instability in the cluster, and only the first Worker to be added to the cluster was shown in the output of the kubectl get nodes command.
Digging deeper, we were able to understand what was causing this instability. There is a race condition in the update of an etcd key. Etcd is used by the Kubernetes Master to store all cluster configuration and state data. The hostname of the cluster nodes is used to name a key in etcd, in the following format: /registry/minions/HOSTNAME – where HOSTNAME is the actual hostname of the node.
When two nodes share the same hostname, every time they communicate their state to the Master node, etcd updates the referred key. When checking the value of this key periodically, we proved the race condition, since the values of both nodes were shown randomly over time as shown in Figure 1.

Figure 1: Differences between two consecutive key updates
Besides the increase in the number of updates, on each update several other keys (events) are also created and must be dispatched by Kubernetes components. This is what caused the cluster instability due to high CPU load on the Master node.
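As a toy analogy (this is plain Java, not Kubernetes or etcd code, and the hostnames are ours), the naming scheme above collapses two distinct nodes into a single entry, with each status report overwriting the other's:

```java
import java.util.HashMap;
import java.util.Map;

// A registry keyed by hostname alone: two distinct nodes that share a
// hostname collapse into one key, and the last writer wins every update.
public class HostnameKeyCollision {

    static final Map<String, String> registry = new HashMap<>();

    static void reportStatus(String hostname, String nodeState) {
        registry.put("/registry/minions/" + hostname, nodeState);
    }

    public static void main(String[] args) {
        reportStatus("worker", "state-of-node-A");
        reportStatus("worker", "state-of-node-B"); // overwrites node A's entry
        System.out.println(registry.size());      // 1 - only one key survives
        System.out.println(registry.get("/registry/minions/worker"));
    }
}
```

With both real nodes reporting continuously, the key's value flaps between the two states, which is exactly the alternating output we observed in Figure 1.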
A video demonstrating the vulnerability can be found here. In addition to adding a Worker node with a hostname that already exists, it is also possible to exploit the vulnerability using the option --hostname-override when adding a node to the cluster.
We validated this behavior against a public Kubernetes service provider, Azure Kubernetes Service (AKS), and we noticed that it adds a prefix to the hostname of the nodes. This behavior is enough to mitigate the described vulnerability.
Following our research, an issue was created in the official Kubernetes GitHub page, recommending two solutions to fix the vulnerability:

  • prevent nodes with a duplicate hostname or --hostname-override value from joining the cluster
  • add a prefix/suffix to the etcd key name

Later, Pull Request 81056 was created to address the vulnerability, following the first recommendation described above. The issue was fixed by rejecting a node joining the cluster if a node with the same name already exists (#1711).
Discovering vulnerabilities like the one mentioned in this blog is why the Checkmarx Security Research team performs investigations. This type of research activity is part of our ongoing efforts to improve security for organizations worldwide.

]]>
Checkmarx Research: Solidity and Smart Contracts from a Security Standpoint https://checkmarx.com/blog/checkmarx-research-solidity-and-smart-contracts-from-a-security-standpoint/ Wed, 15 Jan 2020 06:00:00 +0000 https://www.checkmarx.com/?p=30149 Quoting the official documentation, Solidity “is a contract-oriented, high-level language for implementing smart contracts.” It was proposed back in 2014 by Gavin Wood and developed by several people, most of them core contributors to the Ethereum platform, to enable writing smart contracts on blockchain platforms such as Ethereum.
Solidity was designed around the ECMAScript syntax to make something web developers would be familiar with, but it is statically typed like C++, with support for inheritance, libraries, and user-defined data types.

At the time Solidity was proposed, it had significant differences from other languages also targeting the EVM (e.g., Serpent, LLL, Viper, and Mutan), such as mappings, structs, inheritance, and even a natural-language specification, NatSpec.
Like other programming languages targeting a Virtual Machine (VM), Solidity is compiled into bytecode using a compiler: solc.

Smart Contracts can be seen as a computer protocol intended to complete some task according to the contract rules. In the cryptocurrencies context, smart contracts enforce transactions’ traceability and irreversibility, avoiding the need for a third-party regulator such as a bank. This concept was suggested by Nick Szabo back in 1994.

This article is an introduction to Solidity from a security standpoint, created by the Checkmarx Security Research Team.
As more and more people and organizations look to blockchain as a promising technology and are willing to build on top of it, it is mandatory to apply software development best practices such as code review, testing, and auditing while creating smart contracts. These practices become even more critical as smart contract execution happens in public, with source code generally available.

It is hard to ensure that software can’t be used in a way that was not anticipated, so it is essential to be aware of the most common issues as well as the exploitability of the environment where the smart contract runs on. An exploit may not target the smart contract itself, but the compiler or the virtual machine (e.g., EVM) instead.
We cover that in the next sections, providing a Proof-of-Concept that demonstrates the discussed topics.

Preamble

In the context of Ethereum (abbreviated Eth), Smart Contracts are scripts that can handle money. These contracts are enforced and certified by Miners (multiple computers) who are responsible for adding a transaction (execution of a Smart Contract or payment of cryptocurrency) to a public ledger (a block). The chain of such blocks is called a blockchain.
Miners spend “Gas” to do their work (e.g., publish a smart contract, run a smart contract function, or transfer money between accounts). This “Gas” is paid using Eth.

Common Issues

Privacy

In Solidity, private may be far from what you may expect, mainly if you’re used to Object-Oriented Programming using languages like Java.
A private variable doesn’t mean that no one can read its content; it just means that it can be accessed only from within the contract. Remember that the blockchain is stored on many computers, making it possible for others to see what’s stored in such “private” variables.
Note that private functions are not inherited by other contracts. To make functions inheritable without exposing them externally, Solidity offers the internal keyword.

pure/view functions

Preventing functions from reading the state at the level of the EVM is not possible, but it is possible to prevent them from writing to the state (i.e., view can be enforced at the EVM level, while pure cannot).
As of version 0.4.17, the compiler enforces that pure functions do not read the state.
Source

Reentrancy

Reentrancy is a well-known computing concept, and also the cause of a $70M hack back in June 2016 called the DAO (Decentralized Autonomous Organization) Attack. David Siegel authored “Understanding The DAO Hack for Journalists,” a complete timeline of events and comprehensive explanation of what happened.
“In computing, a computer program or subroutine is called reentrant if it can be interrupted in the middle of its execution and then safely be called again (“re-entered”) before its previous invocations complete execution.” (Wikipedia).
By using a common computing pattern, it was possible to exploit a Smart Contract, and it still is. The call() function is the heart of this attack, and it is worth noting that it:

  • is used to invoke a function in the same contract (or in another contract) to transfer data or Ether;
  • does not throw, it just returns true/false;
  • triggers the execution of code and spends all the available Gas for this purpose; there’s no Gas limit unless we specify one;

The following warning message was taken from Solidity’s documentation:
“Any interaction with another contract imposes a potential danger, especially if the source code of the contract is not known in advance. The current contract hands over control to the called contract and that may potentially do just about anything. Even if the called contract inherits from a known parent contract, the inheriting contract is only required to have a correct interface. The implementation of the contract, however, can be completely arbitrary and thus, pose a danger. In addition, be prepared in case it calls into other contracts of your system or even back into the calling contract before the first call returns. This means that the called contract can change state variables of the calling contract via its functions. Write your functions in a way that, for example, calls to external functions happen after any changes to state variables in your contract so your contract is not vulnerable to a reentrancy exploit.”
The highlighted part in bold text above is exactly how a Smart Contract can be exploited due to Reentrancy. In the Proof-of-Concept section below and in the accompanying video, there’s a ready-to-run example. To avoid this attack:

  • “be prepared” – any function running external code is a threat;
  • the functions <address>.transfer(uint256 amount) and <address>.send(uint256 amount) returns (bool)
    are safe against Reentrancy, as they currently forward only a stipend of 2300 Gas;
  • if you cannot avoid using call(), update the internal state before making an external call.
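The vulnerable ordering is language-independent, so it can be illustrated without Solidity. The following Java analogy (not Solidity code, and not the PoC contracts discussed below; names are ours) shows a withdraw routine that performs the external payout before zeroing the caller's credit, and an attacker callback that re-enters it during that payout:

```java
import java.util.function.LongConsumer;

// Java analogy of the reentrancy pattern: external call first, state update
// last. The attacker's payout handler re-enters withdraw() before its credit
// is zeroed, draining more than it deposited.
public class ReentrancyAnalogy {

    static long bankBalance = 100;
    static long attackerCredit = 50;   // attacker's recorded deposit

    static void withdrawVulnerable(LongConsumer externalPayout) {
        if (attackerCredit > 0 && bankBalance >= attackerCredit) {
            externalPayout.accept(attackerCredit); // external call first...
            attackerCredit = 0;                    // ...state update last. Bug.
        }
    }

    public static void main(String[] args) {
        LongConsumer thief = new LongConsumer() {
            @Override public void accept(long amount) {
                bankBalance -= amount;
                withdrawVulnerable(this); // re-enter before credit is zeroed
            }
        };
        withdrawVulnerable(thief);
        System.out.println("bank balance: " + bankBalance); // 0, not 50
    }
}
```

Swapping the two lines in withdrawVulnerable (update attackerCredit before the payout) limits the attacker to its own 50, which is precisely the "update internal state before making an external call" advice above.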

Overflow

Solidity data types are cumbersome because of the 256-bit Virtual Machine (EVM). The language does not offer a floating-point representation, and data types shorter than 32 bytes are packed together into the same 32-byte slot. The literal 0 type-infers to byte, not an int as we might expect.
Being limited to 256 bits, overflow and underflow are something we should expect. It can happen with a uint8, whose max value is 255 (2^8-1, or 11111111 in binary)

OverflowUint8.sol source code or with a uint256 whose max value is ≈1.157920892×10^77 (2^256-1)

OverflowUint256.sol source code
Although uint256 is suggested to be (more) secure as it is unlikely to overflow, it has the same problem as any other data type. The batchOverflow bug (CVE-2018–10299) is a great example of a uint256 overflow.
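The wrap-around described above can be simulated in Java (which has no native unsigned 8-bit or 256-bit types); this is an analogy of EVM behavior, not Solidity code. Arithmetic is simply modulo 2^n.

```java
import java.math.BigInteger;

// Simulating Solidity's unsigned wrap-around: uint8 arithmetic is modulo
// 2^8, uint256 arithmetic is modulo 2^256.
public class OverflowSketch {

    // uint8: values live in [0, 255]; 255 + 1 wraps to 0.
    public static int addUint8(int a, int b) {
        return (a + b) & 0xFF; // keep only the low 8 bits, as the EVM would
    }

    // uint256: modulo 2^256, simulated with BigInteger.
    static final BigInteger MOD_256 = BigInteger.ONE.shiftLeft(256);

    public static BigInteger addUint256(BigInteger a, BigInteger b) {
        return a.add(b).mod(MOD_256);
    }

    public static void main(String[] args) {
        System.out.println(addUint8(255, 1)); // 0 - overflow
        BigInteger max = MOD_256.subtract(BigInteger.ONE); // 2^256 - 1
        System.out.println(addUint256(max, BigInteger.ONE)); // 0 - same problem
    }
}
```

The uint256 case wraps just as silently as the uint8 one, which is why width alone is not a defense.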

Proof-of-Concept

Consider the following Bank Smart Contract, which keeps track of the balances of addresses that deposit ether into it.

A careful look at the withdraw() function reveals the reentrancy pattern highlighted in the Common Issues > Reentrancy section above: an external call before the internal state update.
Now we need a maliciously crafted Smart Contract to exploit the Bank: Thief

Let’s rehearse the robbery using a Solidity development environment. To run it, we’ll just need a Docker-enabled environment.
Clone the solidity-ddenv project and move inside the solidity-ddenv folder
$ git clone https://github.com/Checkmarx/solidity-ddenv && cd solidity-ddenv
Let’s start the development environment
$ ./ddenv
Creating network “solidityddenv_default” with the default driver
Creating ganache … done
Creating truffle … done
If ddenv started correctly, you’re expected to be inside the workspace folder (you can check it by running pwd).
Let’s move into the reentrancy directory where the Bank and Thief Smart Contracts are located
$ cd reentrancy
Now, it’s time to compile the source code
$ ddenv truffle compile
Starting ganache … done
Compiling ./contracts/Bank.sol…
Compiling ./contracts/Migrations.sol…
Compiling ./contracts/Thief.sol…
Writing artifacts to ./build/contracts
and deploy the Smart Contracts to our development network:

We are now ready to perpetrate the attack. Let’s spawn a console to our development network so that we can issue a few commands
$ ddenv truffle console --network development
Starting ganache … done
truffle(development)>
truffle(development)> is the prompt. If you want to run the attack yourself, just copy the commands next to the prompt from the scripts below and paste them into the console prompt you launched before.
Now, let’s


Discovering vulnerabilities like the one mentioned above is why the Checkmarx Security Research team performs investigations. This type of research activity is part of their ongoing efforts to drive the necessary changes in software security practices among organizations worldwide.

]]>
Breaking Down the OWASP API Security Top 10 (Part 2) https://checkmarx.com/blog/breaking-down-owasp-api-security-top-10-part-2/ Mon, 06 Jan 2020 07:07:51 +0000 https://www.checkmarx.com/?p=30191 Due to the widespread usage of APIs, and the fact that attackers realize APIs are a new attack frontier, the OWASP API Security Top 10 Project was launched. From the beginning, the project was designed to help organizations, developers, and application security teams become increasingly aware of the risks associated with APIs. This past December, the 1st version of the OWASP API Security Top 10 2019 was finalized and published on OWASP.
In my first article on this topic, I provided a high-level view of the interaction between API endpoints, modern apps, and backend servers, in addition to how they’re different from their traditional browser-based counterparts. I also discussed why this project was so important to the contributors and industry overall.
In my second article, I focused on the first five (5) risks and emphasized some of the possible attack scenarios in the context of the risks. In this article, I will attempt to clarify the last five (5) risks to help organizations understand the dangers associated with deficient API implementations. The following discussion follows the same order as found in the OWASP API Security Top 10.

API6:2019 – Mass Assignment:

Modern frameworks encourage developers to use functions that automatically bind input from the client into code variables and internal objects. This means users should be able to update their user name, contact details, etc. (within their profiles, for example), but they should not be able to change user-level permissions, adjust account balances, or perform other administrative-like functions. An API endpoint is considered vulnerable if it automatically converts the client input into internal object properties, without considering the sensitivity and the exposure level of these properties. This could allow an attacker to update things that they should not have access to.

Example Attack Scenario:

To illustrate this further, imagine that a ride-sharing application gives the user the option to edit basic information about themselves in their user profile, for example their user name, age, etc. In this case, the API request would look like this: PUT /api/v1/users/me with the following legitimate information: {"user_name":"john","age":24}
However, the attacker determines that the request GET /api/v1/users/me returns an additional credit_balance property (field), as shown below:
{"user_name":"john","age":24,"credit_balance":10}
The attacker, desiring to increase their credit balance on their own, replays the first request with the following payload:
{"user_name":"john","age":24,"credit_balance":99999}
Since the endpoint is vulnerable to mass assignment, the attacker can easily adjust their own credit_balance at will, for example changing it from 10 credits to 99999 as shown above.
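The difference between the vulnerable and the safe endpoint comes down to how the binder treats unexpected keys. The sketch below (class, field, and method names are hypothetical, not any particular framework's API) contrasts blind reflection-based binding with an allow-list of editable properties:

```java
import java.lang.reflect.Field;
import java.util.Map;
import java.util.Set;

// Mass assignment in miniature: a naive binder writes every client key to a
// field; a safe binder honors only explicitly editable properties.
public class MassAssignmentSketch {

    public static class UserProfile {
        public String user_name = "john";
        public int age = 24;
        public int credit_balance = 10; // must never be client-writable
    }

    // Naive binder: whatever keys the client sends become field updates.
    public static void bindAll(UserProfile u, Map<String, Object> input)
            throws ReflectiveOperationException {
        for (Map.Entry<String, Object> e : input.entrySet()) {
            Field f = UserProfile.class.getField(e.getKey());
            f.set(u, e.getValue());
        }
    }

    // Safe binder: only explicitly editable properties are honored.
    static final Set<String> EDITABLE = Set.of("user_name", "age");

    public static void bindAllowListed(UserProfile u, Map<String, Object> input)
            throws ReflectiveOperationException {
        for (Map.Entry<String, Object> e : input.entrySet()) {
            if (!EDITABLE.contains(e.getKey())) continue; // drop credit_balance
            UserProfile.class.getField(e.getKey()).set(u, e.getValue());
        }
    }

    public static void main(String[] args) throws ReflectiveOperationException {
        Map<String, Object> attack =
                Map.of("user_name", "john", "age", 24, "credit_balance", 99999);

        UserProfile victim = new UserProfile();
        bindAll(victim, attack);
        System.out.println("naive binder: " + victim.credit_balance);   // 99999

        UserProfile safe = new UserProfile();
        bindAllowListed(safe, attack);
        System.out.println("allow-listed: " + safe.credit_balance);     // 10
    }
}
```

In practice the allow-list usually takes the form of a dedicated DTO that simply does not contain the sensitive properties.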

API7:2019 – Security Misconfiguration:

Attackers will often attempt to find unpatched flaws, common endpoints, or unprotected files and directories to gain unauthorized access or knowledge of the system they want to attack. Security misconfigurations can not only expose sensitive user data, but also system details that may lead to full server compromise.

Example Attack Scenario:

Say for instance that an attacker uses a popular search engine like Shodan to search for computers and devices directly accessible from the Internet and finds a server running a popular database management system listening on the default TCP port. The database management system was using the default configuration, which has authentication disabled by default, so the attacker gained access to millions of records containing PII, personal preferences, and authentication data.

API8:2019 – Injection:

Injection vulnerabilities cause computing systems to potentially process malicious data that attackers introduce. To put it in its simplest terms, attackers inject code into a vulnerable piece of software and change the way the software is intended to be executed. As a result, injection attacks can be disastrous, since they normally lead to data theft, data loss, data corruption, denial of service, etc.

Example Attack Scenario:

Suppose an attacker starts inspecting the network traffic of their web browser and identifies the API request responsible for starting the password-recovery process, designed to help a user recover their password:
POST /api/accounts/recovery {"username": "john@somehost.com"}
Then the attacker replays the request with a different payload:
POST /api/accounts/recovery {"username": "john@somehost.com';WAITFOR DELAY '0:0:5'--"}
By appending ';WAITFOR DELAY '0:0:5'--, the attacker observes that the response from the API took ~5 seconds longer, which helps confirm the API is vulnerable to SQL injection. Exploiting the injection vulnerability, the attacker was able to gain unauthorized access to the system.
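A scenario like this presumably has the server concatenating the submitted value into SQL. The sketch below (table and column names are illustrative) contrasts that vulnerable construction with the standard fix, a parameterized query, shown here at the string level; with JDBC the bound value would be supplied via PreparedStatement.setString.

```java
// Vulnerable vs. parameterized query construction.
public class InjectionSketch {

    // Vulnerable: attacker-controlled text becomes part of the SQL grammar.
    public static String concatenated(String username) {
        return "SELECT * FROM accounts WHERE username = '" + username + "'";
    }

    // Safe shape: the SQL text is fixed; the value travels separately as a
    // bind parameter and can never change the statement's structure.
    public static final String PARAMETERIZED =
            "SELECT * FROM accounts WHERE username = ?";

    public static void main(String[] args) {
        String payload = "john@somehost.com';WAITFOR DELAY '0:0:5'--";
        System.out.println(concatenated(payload));
        // The payload closed the string literal and appended a command.
        System.out.println(PARAMETERIZED);
    }
}
```

With the parameterized form, the WAITFOR payload is just an odd-looking username, not executable SQL.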

API9:2019 – Improper Assets Management:

Old API versions are often unpatched and can become an easy way to compromise systems without having to fight state-of-the-art security systems, which might be in place to protect the most recent API versions. Attackers may gain access to sensitive data, or even take over the server, through old, unpatched API versions connected to the same database.

Example Attack Scenario:

Say for instance that an organization redesigning its applications forgot about an old API version (api.someservice.com/v1) and left it unprotected, with access to the user database. While targeting one of the latest released applications, an attacker found the API address (api.someservice.com/v2). Replacing v2 with v1 in the URL gave the attacker access to the old, unprotected API, exposing the personally identifiable information (PII) of millions of users.

API10:2019 – Insufficient Logging and Monitoring:

Without logging and monitoring, or with insufficient logging and monitoring, it is almost impossible to track suspicious activities targeting APIs and respond to them in a timely fashion. Without visibility over ongoing malicious activities, attackers have plenty of time to potentially compromise systems and steal data.

Example Attack Scenario

Imagine that a video-sharing platform was hit by a “large-scale” credential stuffing attack. Despite failed logins being logged, no alerts were triggered during the timespan of the attack, and it proceeded unnoticed. In reaction to user complaints about a possible breach, API logs were analyzed and the attack was detected, well after the fact. The company had to make a public announcement asking users to reset their passwords, and to report the incident to regulatory authorities.
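The missing control in this scenario is correlation, not logging: the failures were recorded but never counted. A simple sliding-window counter per source is enough to raise an alert while a credential-stuffing run is still in progress. The sketch below is illustrative; the threshold and window size are placeholders, not recommendations.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sliding-window failed-login counter: alert when too many failures land
// inside the window.
public class FailedLoginAlerter {

    private final Deque<Long> failures = new ArrayDeque<>();
    private final int threshold;
    private final long windowMillis;

    public FailedLoginAlerter(int threshold, long windowMillis) {
        this.threshold = threshold;
        this.windowMillis = windowMillis;
    }

    /** Record one failed login; returns true if an alert should fire. */
    public boolean recordFailure(long nowMillis) {
        failures.addLast(nowMillis);
        while (!failures.isEmpty()
                && nowMillis - failures.peekFirst() > windowMillis) {
            failures.removeFirst(); // forget failures outside the window
        }
        return failures.size() >= threshold;
    }

    public static void main(String[] args) {
        // 100 failures per minute from one source should fire an alert.
        FailedLoginAlerter alerter = new FailedLoginAlerter(100, 60_000);
        boolean alerted = false;
        for (int i = 0; i < 150; i++) {
            alerted |= alerter.recordFailure(i * 100L); // 10 failures/second
        }
        System.out.println("alert fired: " + alerted); // true
    }
}
```

Real deployments would key such counters per IP, per account, and per token, and feed the alerts into an on-call channel rather than a log file.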
Pertaining to the five (5) risks above, one could easily imagine similar attack scenarios. Those provided were just examples of the nearly unlimited possibilities when attacking vulnerable API implementations. Hopefully, you can see the risks above are primarily caused by errors or oversights that could be easily remedied, especially when it comes to the way organizations utilize APIs today.

]]>
Injection Vulnerabilities – 20 Years and Counting https://checkmarx.com/blog/injection-vulnerabilities-20-years-and-counting/ Wed, 04 Dec 2019 11:46:14 +0000 https://www.checkmarx.com/?p=29990 Injection vulnerabilities are one of the oldest exploitable software defects, which unfortunately are still prevalent today. A simple search on cve.mitre.org for the term “injection” returns over 10,852 injection-related vulnerabilities in commercial and open source software since the year 2000, and the number of injection vulnerabilities continues to grow daily. The earliest tracked injection vulnerability was CVE-2000-1233, discovered that year, which allowed remote attackers to execute arbitrary code on a vulnerable system.

What exactly are injection vulnerabilities?

Before going into a short explanation, first let’s discuss how abundant injection vulnerabilities are. For example, Injection has made the OWASP Top 10 for the most critical Web Application Risks in 2010, 2013, and 2017. Injection also made the list on the OWASP Top 10 for API Security in 2019 as well. Clearly, injection risks and associated attacks have been in existence for nearly 20 years and have often been the catalyst in many reported data breaches.

Injection vulnerabilities cause computing systems to potentially process malicious data that attackers introduce. To put it in its simplest terms, attackers inject code into a vulnerable piece of software and change the way the software is intended to be executed. As a result, injection attacks can be disastrous, since they normally involve data theft, data loss, data corruption, denial of service, etc.

According to OWASP Top 10 2017, “Injection flaws, such as SQL, NoSQL, OS, and LDAP injection, occur when untrusted data is sent to an interpreter as part of a command or query. The attacker’s hostile data can trick the interpreter into executing unintended commands or accessing data without proper authorization.” For more information on SQL injection and LDAP injection for example, Checkmarx provides detailed descriptions in our vulnerability knowledge base.

In light of nearly 20 years of widespread injection vulnerabilities, I asked our guest expert Inon Shkedy to provide his insight on a few questions as shown below. Inon worked closely with Erez Yalon, Head of Security Research at Checkmarx, on the OWASP API Security Project.

Together, they spearheaded the development of the OWASP API Security Top 10 2019, which defines the top ten most critical API security risks.

Guest Expert Interview


Question: Why is injection still making the list of Top Ten risks, and shouldn’t this problem have been remedied by now?
Inon:
There are two main trends in the field of injections:
On one hand, some types of injections become less and less prevalent, because of proper security education and a set of modern technologies which address them.
On the other hand, new frameworks and technologies open a door for new types of injections. For example, NoSQL injection is a relatively new attack vector for systems that use NoSQL.
These two trends make injections less severe and prevalent, but at the same time they still have a place on the list.

Question: What is the cause of developers writing injection vulnerabilities into the code they produce?
Inon:
There are a few main reasons:
Lack of awareness: it’s common to see junior software engineers writing code that is vulnerable. The “injection” concept may not be very intuitive for them, and they deliver vulnerable code because it’s the easiest/fastest way for them to implement a specific component.
Rush: we all know how stressful and demanding modern software development environments can be. Concepts like Agile and CI/CD are great for fast delivery, but when developers are focused only on delivering the code, they might forget to check for security issues.
Complexity: APIs and modern apps are complex. A modern app, like Uber for example, might look very simple from the UX (user experience) perspective, but on the backend there are many databases and microservices that communicate with each other behind the scenes. In many cases, it’s hard to track which inputs come from the client itself and require more security attention (such as filtering and scanning), and which inputs are internal to the system.

Question: What will it take to finally get developers to stop writing injection vulnerabilities into their code, and / or organizations releasing code with injection vulnerabilities?
Inon:
Raising awareness: I believe that secure coding is a result of education. It’s super important to raise awareness by providing security guidelines for new software engineers in the company and to have ongoing training, talks, and discussions about security.
Pre-production automation: use automated tools like SAST and IAST to find injection vulnerabilities in your code before it gets exposed to the whole world.

Use ORMs: It’s harder to write vulnerable code when you use ORMs. They provide security mechanisms by design.

Conclusion

Clearly, injection flaws must be addressed during the software development process by first detecting them, then mitigating them by fixing the vulnerabilities. If applications with injection flaws make their way to the internet, it’s only a matter of time before they’re found by attackers, and are eventually exploited.

]]>
How Attackers Could Hijack Your Android Camera to Spy on You
https://checkmarx.com/blog/how-attackers-could-hijack-your-android-camera/ Tue, 19 Nov 2019 07:17:09 +0000

In today’s digitally-connected society, smartphones have become an extension of us. Advanced camera and video capabilities in particular play a massive role in this, as users can quickly take out their phones and capture any moment in real time with the simple click of a button. However, this presents a double-edged sword: these mobile devices are constantly collecting, storing, and sharing various types of data – with and without our knowledge – making our devices goldmines for attackers.
In order to better understand how smartphone cameras may be opening users up to privacy risks, the Checkmarx Security Research Team cracked into the applications themselves that control these cameras to identify potential abuse scenarios. Having a Google Pixel 2 XL and Pixel 3 on-hand, our team began researching the Google Camera app [1], ultimately finding multiple concerning vulnerabilities stemming from permission bypass issues. After further digging, we also found that these same vulnerabilities impact the camera apps of other smartphone vendors in the Android ecosystem – namely Samsung – presenting significant implications to hundreds-of-millions of smartphone users.
In this blog, we’ll explain the vulnerabilities discovered (CVE-2019-2234), provide details of how they were exploited, explain the consequences, and note how users can safeguard their devices. This blog is also accompanied by a proof-of-concept (PoC) video, as well as a technical report of the findings that were shared with Google, Samsung, and other Android-based smartphone OEMs.

Google & Samsung Camera Vulnerabilities

After a detailed analysis of the Google Camera app, our team found that by manipulating specific actions and intents [2], an attacker can control the app to take photos and/or record videos through a rogue application that has no permissions to do so. Additionally, we found that certain attack scenarios enable malicious actors to circumvent various storage permission policies, giving them access to stored videos and photos, as well as GPS metadata embedded in photos, to locate the user by taking a photo or video and parsing the proper EXIF data [3]. This same technique also applied to Samsung’s Camera app.
In doing so, our researchers determined a way to enable a rogue application to force the camera apps to take photos and record video, even if the phone is locked or the screen is turned off. Our researchers could do the same even while a user was in the middle of a voice call.

The Implications

The ability for an application to retrieve input from the camera, microphone, and GPS location is considered highly invasive by Google themselves. As a result, AOSP created a specific set of permissions that an application must request from the user. Since this was the case, Checkmarx researchers designed an attack scenario that circumvents this permission policy by abusing the Google Camera app itself, forcing it to do the work on behalf of the attacker.
It is known that Android camera applications usually store their photos and videos on the SD card. Since photos and videos are sensitive user information, in order for an application to access them, it needs special permissions: storage permissions. Unfortunately, storage permissions are very broad, giving access to the entire SD card. There are a large number of applications with legitimate use-cases that request access to this storage, yet have no special interest in photos or videos. In fact, it’s one of the most commonly requested permissions observed.
This means that a rogue application can take photos and/or videos without specific camera permissions, and it only needs storage permissions to take things a step further and fetch photos and videos after being taken. Additionally, if the location is enabled in the camera app, the rogue application also has a way to access the current GPS position of the phone and user.
Of course, a video also contains sound. It was interesting to prove that a video could be initiated during a voice call. We could easily record the receiver’s voice during the call and we could record the caller’s voice as well.

A PoC of a Worst-Case Scenario

To properly demonstrate how dangerous this could be for Android users, our research team designed and implemented a proof-of-concept app that doesn’t require any special permission beyond the basic storage permission. Simulating an advanced attacker, the PoC had two working parts: the client-part that represents a malicious app running on an Android device, and a server-part that represents an attacker’s command-and-control (C&C) server.
The malicious app we designed for the demonstration was nothing more than a mockup weather app that could have been malicious by design. When the client starts the app, it essentially creates a persistent connection back to the C&C server and waits for commands and instructions from the attacker, who is operating the C&C server’s console from anywhere in the world. Even closing the app does not terminate the persistent connection.
The operator of the C&C console can see which devices are connected to it, and perform the following actions (among others):

  • Take a photo on the victim’s phone and upload (retrieve) it to the C&C server
  • Record a video on the victim’s phone and upload (retrieve) it to the C&C server
  • Parse all of the latest photos for GPS tags and locate the phone on a global map
  • Operate in stealth mode whereby the phone is silenced while taking photos and recording videos
  • Wait for a voice call and automatically record:
    • Video from the victim’s side
    • Audio from both sides of the conversation

Note: The wait for a voice call was implemented via the phone’s proximity sensor that can sense when the phone is held to the victim’s ear. A video of successfully exploiting the vulnerabilities was taken by our research team and can be viewed here. Our team tested both versions of Pixel (2 XL / 3) in our research labs and confirmed that the vulnerabilities are relevant to all Google phone models.
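For context on the GPS step above: EXIF stores latitude and longitude as degrees/minutes/seconds rationals plus an N/S/E/W reference tag, so once a rogue app can read the photos, converting them to a map position is a few lines of arithmetic. This is a sketch; the function name is ours, and real code would first extract the tags with an EXIF library:

```python
def gps_to_decimal(dms, ref):
    """Convert an EXIF GPSLatitude/GPSLongitude triple to decimal degrees.

    `dms` is (degrees, minutes, seconds); `ref` is the matching
    GPSLatitudeRef/GPSLongitudeRef letter ("N", "S", "E", or "W").
    """
    degrees, minutes, seconds = dms
    decimal = degrees + minutes / 60.0 + seconds / 3600.0
    # Southern and western hemispheres are represented as negative values.
    return -decimal if ref in ("S", "W") else decimal

# 40 deg 26' 46.302" N -> roughly 40.446195 decimal degrees
print(round(gps_to_decimal((40, 26, 46.302), "N"), 6))
```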

Android Vulnerability: Watch the Explainer Video

Summary of Disclosure and Events

When the vulnerabilities were first discovered, our research team ensured that they could reproduce the process of easily exploiting them. Once that was confirmed, the Checkmarx research team responsibly notified Google of their findings.
Working directly with our research team, Google confirmed our suspicion that the vulnerabilities were not specific to the Pixel product line. Google informed us that the impact was much greater and extended into the broader Android ecosystem, with additional vendors such as Samsung acknowledging that these flaws also impact their Camera apps and beginning to take mitigating steps.

Google’s Response

“We appreciate Checkmarx bringing this to our attention and working with Google and Android partners to coordinate disclosure. The issue was addressed on impacted Google devices via a Play Store update to the Google Camera Application in July 2019. A patch has also been made available to all partners.”

Mitigation Recommendation

For proper mitigation and as a general best practice, ensure you update all applications on your device.

Timeline of Disclosure

  • Jul 4, 2019 – Submitted a vulnerability report to Android’s Security team at Google
  • Jul 4, 2019 – Google confirmed receiving the report
  • Jul 4, 2019 – A PoC “malicious app” was sent to Google
  • Jul 5, 2019 – A PoC video of an attack scenario was sent to Google
  • Jul 13, 2019 – Google set the severity of the finding as “Moderate”
  • Jul 18, 2019 – Sent further feedback to Google
  • Jul 23, 2019 – Google raised the severity of the finding to “High”
  • Aug 1, 2019 – Google confirms our suspicion that the vulnerabilities may affect other Android smartphone vendors and issues CVE-2019-2234
  • Aug 18, 2019 – Multiple vendors were contacted regarding the vulnerabilities
  • Aug 29, 2019 – Samsung confirmed they are affected
  • Nov 2019 – Both Google and Samsung approved the publication

Note: This publication was coordinated with Google and Samsung after their confirmation of a fix being released. Please refer to Google for information regarding the fixed version of the Android OS and Google Camera app.

Final Words

The professionalism shown by both Google and Samsung does not go unnoticed. Both were a pleasure to work with due to their responsiveness, thoroughness, and timeliness.
This type of research activity is part of our ongoing efforts to drive the necessary changes in software security practices among vendors that manufacture consumer-based smartphones and IoT devices, while bringing more security awareness to the consumers who purchase and use them. Protecting the privacy of consumers must be a priority for all of us in today’s increasingly connected world.
References

[1] https://en.wikipedia.org/wiki/Google_Camera
[2] https://developer.android.com/guide/components/intents-filters
[3] https://en.wikipedia.org/wiki/Exif

]]>
Breaking Down the OWASP API Security Top 10 (Part 1)
https://checkmarx.com/blog/breaking-down-owasp-api-security-top10-part1/ Wed, 06 Nov 2019 07:41:20 +0000

As a result of a broadening threat landscape and the ever-increasing usage of APIs, the OWASP API Security Top 10 Project was launched. From the start, the project was designed to help organizations, developers, and application security teams become more aware of the risks associated with APIs. This past September, the OWASP API Security Top 10 release candidate (RC) was finalized and published on OWASP.
In my previous blog, I provided a high-level view of the interaction between API endpoints, modern apps, and backend servers, in addition to how they’re different from their traditional browser-based counterparts. I also discussed why this project was so important to the contributors and industry overall. In this blog, I aim to clarify the first five (5) risks by highlighting some of the possible attack scenarios to help organizations and end-users understand the dangers associated with deficient API implementations. The following discussion follows the same order as found in the OWASP API Security Top 10.
API1:2019 – Broken Object Level Authorization: Attackers can exploit API endpoints that are vulnerable to broken object level authorization by manipulating the ID of an object that is sent within the client request. What this means is that the client can request information from an API endpoint that they are not supposed to have access to. This attack normally leads to unauthorized information disclosure, modification, or destruction of data.
Example Attack Scenario:
Say for instance there is an e-commerce platform that provides financial and hosted services to a group of different online stores (shops). The platform provides an API used to gain access to revenue charts for each of its hosted stores, and each store should only have access to its own revenue charts. However, by inspecting the client request a single store sends to access its own revenue charts, an attacker can identify the API endpoint serving those charts and the URL in use, for example /shops/{shop name}/revenue_data.json. Using the names of other stores hosted on the e-commerce platform, an attacker can write a simple script that modifies the {shop name} object ID in subsequent requests and gain access to the revenue charts of every other store.
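Both halves of this scenario fit in a few lines. The sketch below assumes the URL template from the example and hypothetical store names; the function names are ours:

```python
BASE = "https://ecommerce.example/shops/{shop}/revenue_data.json"

def enumerate_targets(shop_names):
    # Attacker's script: the {shop name} object ID is substituted from a
    # scraped list of store names -- nothing ties it to the attacker's
    # own authenticated store.
    return [BASE.format(shop=name) for name in shop_names]

def authorize_revenue_request(session_shop, requested_shop):
    # Server-side fix (object-level authorization): the requested object
    # must belong to the authenticated principal, not merely exist.
    return session_shop == requested_shop

print(enumerate_targets(["alice-store", "bob-store"]))
```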
API2:2019 – Broken Authentication: Unlike the Authorization issue discussed above, Authentication is a complex and confusing mechanism in the context of APIs. Since authentication endpoints are exposed to anyone by design, the endpoints responsible for user authentication must be treated differently from regular API endpoints and implement extra layers of protection against credential stuffing attempts, as well as brute-force password and token guessing attacks.
Example Attack Scenario:
Suppose that an attacker obtained a list of leaked username/password combinations as the result of a data breach at another organization. If the API endpoint handling authentication does not implement brute force or credential stuffing protections like CAPTCHA, rate limiting, account lockout, etc., an attacker can repeatedly attempt to gain access using the list of username/password combinations to determine which combination(s) work.
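One of the protections mentioned above, account lockout, can be sketched as a per-account failure counter. This is an in-memory illustration with thresholds we chose arbitrarily; production code would use a shared store and combine this with CAPTCHA and IP-based rate limiting:

```python
import time

MAX_FAILURES = 5        # attempts allowed inside the window
LOCKOUT_SECONDS = 300   # lockout window

_failures = {}  # username -> (failure_count, window_start_time)

def allow_attempt(username, now=None):
    """Return False once an account exceeds MAX_FAILURES in the window."""
    now = time.time() if now is None else now
    count, start = _failures.get(username, (0, now))
    if now - start > LOCKOUT_SECONDS:
        _failures[username] = (0, now)  # window expired; reset the counter
        return True
    return count < MAX_FAILURES

def record_failure(username, now=None):
    now = time.time() if now is None else now
    count, start = _failures.get(username, (0, now))
    _failures[username] = (count + 1, start)
```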
API3:2019 – Excessive Data Exposure: By design, API endpoints often expose sensitive data since they frequently rely on the client app to perform data filtering. Attackers exploit this issue by sniffing the traffic to analyze the responses, looking for sensitive data that should not be exposed. This data is supposed to be filtered on the client app, before being presented to the user.
Example Attack Scenario:
Imagine that an IoT-based camera surveillance system allows administrators to add a newly-hired security guard as a system user, and the administrator wants to ensure that the new user only has access to certain cameras. These cameras are accessible via a mobile app that the security guard uses while at work. The newly hired security guard’s mobile app makes an API request to an endpoint in order to receive data about the cameras, and relies on the mobile app to filter which cameras the guard has access to. Although the mobile app only shows the cameras the guard can access, the actual API response contains a full list of all the cameras. Using the sniffed traffic, an attacker can manipulate the API request to show all cameras, bypassing the filtering done in the mobile app.
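The fix is to filter on the server, so the response itself never contains objects the caller may not see. A minimal sketch with made-up camera records:

```python
def cameras_for_user(all_cameras, allowed_ids):
    # Filter server-side: the API response should only ever contain the
    # cameras this user is entitled to, rather than trusting the mobile
    # app to hide the rest of the list from view.
    return [cam for cam in all_cameras if cam["id"] in allowed_ids]

cameras = [
    {"id": 1, "feed": "lobby"},
    {"id": 2, "feed": "vault"},
    {"id": 3, "feed": "parking"},
]
print(cameras_for_user(cameras, {1, 3}))  # the "vault" camera never leaves the server
```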
API4:2019 – Lack of Resources & Rate Limiting: It is common to find API endpoints that do not implement any sort of rate limiting on the number of API requests, or they do not limit the type of requests that can consume considerable network, CPU, memory, and storage resources. The amount of resources required to satisfy a request greatly depends on the user input and endpoint business logic. Attackers exploit these issues causing denial-of-service attacks and associated endpoint outages.
Example Attack Scenario:
Let’s say that an attacker wants to cause a denial-of-service outage to a certain API that contains a very large list of users. The users’ list can be queried, but the application limits the number of users that can be displayed to 100 users. A normal request to the application would look like this: /api/users?page=1&size=100. In this case, the request would return with the first page and the first 100 users. If the attacker changed the size parameter from 100 to 200000, it could cause a performance issue on the backend database, since the size parameter in use is so large. As a result, the API becomes unresponsive and is unable to handle further requests from this or any other client.
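The corresponding defense is to clamp the size parameter on the server, regardless of what the client asks for. A sketch, where the limit of 100 comes from the example above:

```python
MAX_PAGE_SIZE = 100  # the application's display limit from the example

def clamp_page_size(requested):
    # Enforce the pagination limit server-side: ?size=200000 must not
    # force the backend to materialize an enormous result set.
    try:
        size = int(requested)
    except (TypeError, ValueError):
        return MAX_PAGE_SIZE
    return max(1, min(size, MAX_PAGE_SIZE))

print(clamp_page_size("200000"))  # 100
```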
API5:2019 – Broken Function Level Authorization: Although different from API1 above, exploitation of this issue requires the attacker to send API requests to endpoints that they should not have access to, yet are exposed to anonymous users or regular, non-privileged users. These types of flaws are often easy to find and can allow attackers to access unauthorized functionality. For example, administrative functions are prime targets for this type of attack.
Example Attack Scenario:
To illustrate this further, imagine that during the registration process to a certain application that only allows invited users to join, the mobile app triggers an API request to GET /api/invites/{invite_guid}. GET is a standard HTTP method used to request information from a particular resource. In this case, the response to the GET contains details about the invite, including the user’s role and email address.
Now, say that an attacker duplicated the request and manipulated the HTTP method by changing GET to POST. POST is an HTTP method used to send information to create or update a resource. The URL would look like this: POST /api/invites/new/{“email”:”hugo@malicious.com”,”role”:”admin”}. In this case, the attacker easily exploits this issue and sends himself an email invite to create an admin account.
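The underlying fix is to authorize each function (method plus action), not just the URL path. A minimal sketch with roles and a permission table we invented for illustration:

```python
# Which roles may invoke which function on the invites resource.
ALLOWED = {
    "GET": {"user", "admin"},   # anyone invited may read their invite
    "POST": {"admin"},          # only admins may create new invites
}

def authorize_invite(method, role):
    # Function-level authorization: reaching the URL is not enough --
    # the caller's role must be allowed for this specific operation.
    return role in ALLOWED.get(method, set())

print(authorize_invite("POST", "user"))  # False
```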
In the context of the five risks above, one could easily imagine many similar attack scenarios. Those provided were just examples of the nearly unlimited possibilities when attacking vulnerable API implementations. Hopefully you can see that the risks above are primarily caused by errors or oversights. I believe these risks could easily be managed or nearly eliminated when organizations improve their secure coding practices, especially in the way they utilize APIs.

]]>