OpenSSL CVE-2022-3786: Food for Thought on the Importance of Security Scanning
https://checkmarx.com/blog/openssl-cve-2022-3786-food-for-thought-on-the-importance-of-security-scanning/
Thu, 08 Dec 2022 14:12:00 +0000

After a CVE on open source software has been discovered and a fix has been released, a fruitful practice for security researchers is to go deep into the nature of the CVE and the fix. 

In addition to curiosity, this good practice helps professionals and researchers extend their knowledge and improve their understanding of security vulnerabilities. 

Being an engineer at Checkmarx, the main tool that comes to mind to deep dive into the nature of vulnerabilities is the Checkmarx Static Application Security Testing (SAST) engine. 

This blog post will go into detail about the nature of two separate vulnerabilities, CVE-2022-3602 and CVE-2022-3786, which hit the news at the beginning of November, impacting a very well-known and widely adopted open source software package, OpenSSL. We will explore their fix and how Checkmarx solutions can detect such vulnerable code and support developers with remediation. 

CVE-2022-3602 and CVE-2022-3786 

The official page from OpenSSL dealing with November’s CVEs (CVE-2022-3786 and CVE-2022-3602) can be found here.

The two vulnerabilities affect OpenSSL up to version 3.0.6, and they involve a file named punycode.c, one of the files that manage the parsing of a specific encoding of domain names known as punycode. 

With both vulnerabilities, “A buffer overrun can be triggered in X.509 certificate verification, specifically in name constraint checking”, and they have been classified as CWE-120 by NIST, as a “Classical Buffer Overflow.” 

Buffer overflows

Generally, in the C language, it is important to manage memory buffers securely, to prevent any value written to a buffer from being reused elsewhere in the code and interpreted with unexpected semantics. It’s worth noting that in C, buffers include strings too! 

Traditionally, strings are represented as a region of memory containing data terminated with a NULL character. Different string handling methods may rely on this NULL character to determine the length of the string. If a buffer that does not contain a NULL terminator is passed to one of these functions, the function will read past the end of the buffer, leading to unexpected behaviors such as buffer overflows. 
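To make this failure mode concrete, here is a minimal C sketch. The helper name and its behavior are illustrative, not taken from OpenSSL; it simply guarantees the terminator that string functions rely on.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical helper (not from OpenSSL): copies at most dst_size - 1 bytes
 * and always writes a NULL terminator, so strlen() and other string
 * functions stay inside the buffer. */
static void copy_terminated(char *dst, size_t dst_size,
                            const char *src, size_t src_len)
{
    size_t n = src_len < dst_size - 1 ? src_len : dst_size - 1;
    memcpy(dst, src, n);
    dst[n] = '\0'; /* without this byte, strlen(dst) could read past the end */
}
```

With a 4-byte unterminated input and an 8-byte destination, strlen on the result returns 4; without the explicit terminator it would depend on whatever bytes happen to follow the buffer.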

Let’s go through some technical details on the nature of the two OpenSSL vulnerabilities. 

CVE-2022-3602 

This is a buffer overflow in the function ossl_punycode_decode, where a check on the size of a buffer rejects larger-than (>) values but leaves equal-to (=) values out of any check, as seen below. 

The red highlight is the old version of the ossl_punycode_decode source code (OpenSSL 3.0.6), while the green one is the fixed version (OpenSSL 3.0.7). 

The check at line 184 left the “=” case unchecked, letting data the same size as the max_out variable reach the subsequent lines of the program and enter the rest of the function. 

The fix addresses this case by adding the “=” case to those that must be rejected with return 0. 

As this shows, the buffer overflow is due to the absence of a proper check for a buffer size that EQUALS the maximum, while there are cases defined for sizes that are larger (return 0) and smaller (just continue). 

Technically, this case resembles off-by-one vulnerabilities, where an application does not properly handle buffers that are exactly the maximum expected size, due to assumptions about the data structures received as input. 
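The pattern can be reduced to a few lines of C. The variable names below (written, max_out) follow the blog's description rather than the actual OpenSSL source, so treat this as a sketch of the bug class, not the real code:

```c
#include <stddef.h>

/* Vulnerable shape: '>' rejects oversized input, but the boundary case
 * written == max_out slips through the guard. */
static int size_check_vulnerable(size_t written, size_t max_out)
{
    if (written > max_out)
        return 0; /* rejected */
    return 1;     /* accepted, including the unsafe '==' case */
}

/* Fixed shape (in the spirit of OpenSSL 3.0.7): '>=' also rejects the
 * boundary case. */
static int size_check_fixed(size_t written, size_t max_out)
{
    if (written >= max_out)
        return 0; /* rejected, including the '==' case */
    return 1;
}
```

The single character difference between `>` and `>=` is exactly the kind of boundary condition that manual review tends to miss.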

Various assumptions can be made about data structures received as input, and they all involve trusting the source of that input in terms of its content, its format, and its size; to build secure code, such assumptions should be avoided. 

Instead, sanitization and validation techniques should be implemented in the code. Sanitization is the activity of removing extraneous data from a given input, transforming it into harmless data compatible with business rules and security policies (e.g., removing special characters from a string before using it as a parameter in SQL). Validation, on the other hand, is the activity of checking the content of a given input and rejecting any input that does not comply with the application’s constraints (e.g., rejecting any string that is greater than or equal to the size of a memory buffer). 
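As a hedged illustration of the two techniques, the C functions below are invented for this example (the names and the alphanumeric-only rule are not from any library):

```c
#include <ctype.h>
#include <stddef.h>
#include <string.h>

/* Sanitization: copy only alphanumeric characters into out, dropping
 * anything else (a deliberately strict, illustrative business rule). */
static void sanitize_alnum(char *out, size_t out_size, const char *in)
{
    size_t n = 0;
    for (; *in != '\0' && n < out_size - 1; in++)
        if (isalnum((unsigned char)*in))
            out[n++] = *in;
    out[n] = '\0';
}

/* Validation: reject any string that would not fit in a buffer of
 * buf_size bytes, terminator included; 1 = accept, 0 = reject. */
static int validate_length(const char *in, size_t buf_size)
{
    return strlen(in) < buf_size;
}
```

Note the difference in posture: sanitization transforms the input into something acceptable, while validation refuses the input outright.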

Checkmarx’s Codebashing training platform offers several lessons to teach developers how to write secure code in C, addressing proper methods to manage structures such as buffers, stacks, and heaps, which can help developers avoid introducing vulnerabilities in their code, such as CVE-2022-3602. 

For example, regarding off-by-one, Checkmarx’s gamified platform shows the student a part of vulnerable code and its side effects at runtime, both in memory and interactively: 

In this scenario, the vulnerable code is robust when the input is larger or smaller than the expected size; however, it is not safe when exactly 8 characters are entered. 

CVE-2022-3786 

This is a buffer overflow related to the function ossl_a2ulabel (also in punycode.c), which manages several cases of writing characters (chars) to a buffer before that buffer is used in further contexts, such as being read as a string. 

OpenSSL 3.0.7 introduced the PUSHC macro to manage buffer operations properly, as stacks: 

The macro is then used in place of the old 3.0.6 memcpy calls, everywhere it is needed. 

Similar to the previous example above, the red highlight is the old version of the source code (OpenSSL 3.0.6), while the green one is the fixed version (OpenSSL 3.0.7). 

The change at line 301 handles the absence of the NULL character, which may lead to unexpected behavior if the just-populated stack is used in other contexts (e.g., as a string!). 
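A simplified C sketch of the idea behind a bounds-checked character push (illustrative and in the spirit of the 3.0.7 fix; this is not the actual PUSHC macro from OpenSSL):

```c
#include <stddef.h>

/* Pushes one character onto buf, tracking how much has been written and
 * refusing to write past size: on overflow it returns 0 instead of
 * corrupting adjacent memory. */
static int pushc(char *buf, size_t size, size_t *written, char c)
{
    if (*written >= size)
        return 0; /* buffer full: fail loudly rather than overflow */
    buf[(*written)++] = c;
    return 1;
}
```

Centralizing the bounds check in one place means every write site inherits the check, instead of each memcpy call repeating (or forgetting) it.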

Checkmarx SAST has a specific query for C/C++ Language which is called “Improper_Null_Termination.” It is in the category of Buffer Overflows, and it has High severity. 

Its goal is to identify buffers that have not been properly terminated by NULL characters. 

By scanning OpenSSL 3.0.6 with Checkmarx SAST, the punycode file appears in two results, at lines 276 and 297, for the Improper_Null_Termination query: 

The description of the finding at line 276 states: 

“The buffer outptr in openssl-openssl-3.0.6\crypto\punycode.c at line 276 does not have a null terminator. Any subsequent operations on this buffer that treat it as a null-terminated string will result in unexpected or dangerous behavior.” 

By scanning OpenSSL 3.0.7 with the same preset, the two vulnerabilities appear as fixed: 

Conclusion

Any application may present vulnerabilities due to the large size of applications, inheritances from older versions, mistakes, or errors. As evident with the recently identified OpenSSL vulnerabilities, even well-maintained and mature applications may present vulnerabilities that could be all but impossible to identify through manual code review. 

At the same time, the security awareness of developers is a key factor in producing and maintaining secure code. 

Checkmarx SAST and Checkmarx Codebashing can help in raising the bar of security in your company and in your developers and security champions. Both solutions are fully integrated into our Checkmarx One™ Application Security Platform. 


Learn More

To learn more or see for yourself, request a demo today. 

How to Use Infrastructure as Code Securely and Avoid Cloud Misconfigurations
https://checkmarx.com/blog/how-to-use-infrastructure-as-code-securely-and-avoid-cloud-misconfigurations/
Mon, 05 Dec 2022 23:14:20 +0000

Moving applications to the cloud delivers clear competitive advantages, but organizations must have the right strategies, access rights, and policies in place to do this successfully. Cloud adoption was already expanding before it was super-charged by the pandemic, and there are no signs of this trend abating. The consumption of cloud continues to expand across all industry verticals and disrupt the way in which IT teams provision, manage, and orchestrate resources.

But cloud adoption requires organizations to shift from provisioning and managing static infrastructure to deploying dynamic infrastructure across their environment. The implementation of dynamic infrastructure means IT operations and security teams must now provision and manage an infinite volume and distribution of services, embrace ephemerality, and deploy onto multiple target environments. 

A challenging environment

This leads to many challenges, including appropriately managing access permissions, being able to identify and prioritize risks, and then proactively mitigating cloud misconfigurations and vulnerabilities. At the same time organizations must facilitate greater collaboration between security, DevOps, and engineering teams, because in a cloud environment, lines of responsibility are not so clearly drawn.

In today’s heightened cyber-attack landscape, organizations must also work out how to reduce their cloud attack surface, while simplifying compliance requirements, and find new ways to innovate and scale their business in a secure manner.

This is easier said than done

One of the great benefits of cloud is how easy it is to spin up resources. Lines of business don’t have to request IT to allocate resources, they just click a button to run any Infrastructure as Code (IaC) template and they have an application running in minutes. However, every cloud account has thousands of entitlements that need to be managed and maintained. Unfortunately, many have excessive permissions that put cloud assets, the data stored, or the whole cloud account at risk. Analyst organization, Gartner, predicted: “By 2023, 75% of security failures will result from inadequate management of identities, access and privileges, up from 50% in 2020.”

An increase in IAM solutions

This has prompted an increase in IAM (identity and access management) solutions purporting to solve the problem of managing identities in cloud environments. However, modern tools like CIEM and CSPM are based on heuristic rules, which means they often detect issues and advise when it is too late, and they don’t offer a tailored solution based on the genuine risk to the application.

As a result, CISOs, AppSec, and DevOps teams are overwhelmed with notifications; they need help in identifying which alerts to prioritize. For example, they might be alerted to a misconfigured AWS Lambda function which doesn’t pose a serious threat to their application. They need accurate context to determine which risks to ignore and which to action. The reality is that they can’t fix every misconfiguration, therefore they must focus on the most important business critical risks. 

Alongside the problem of alert fatigue, there is often tension with Dev/Ops teams who just want to move fast and use all their admin and access privileges. Additionally, organizations are not always aware of all their data and sensitive resources in the cloud and many security permissions are not always necessary and can cause account and data leakage.

One size fits all approach doesn’t work

One option is to manually analyze the infrastructure layer and the applications running on it. This might work for smaller organizations, but for larger organizations with a dynamic environment, where developers create new cloud accounts for every dev team, a manual approach is nigh on impossible to scale. Additionally, when it comes to audits, it is hard for the organization to keep track and prove compliance. 

In a bid to get around these issues, organizations are creating repositories of standard policies to use. But these are generic; they don’t name the specific resource that every component needs to access. Some organizations use these same policies for all their cloud functions. Think about it: this is like using the same key to open every individual apartment door in an apartment block. How secure would that be?

How Checkmarx One can help

Reducing software risk and boosting developer and AppSec team productivity is central to Checkmarx’s mission. Our Checkmarx One™ Application Security Platform identifies code vulnerabilities and integrates seamlessly into the tools developers already use. Our aim is to help organizations improve software security without compromising their ability to innovate—making life easier for developers and application security teams at the same time.

Our partner Solvo shares our vision of a world running on secure code and we are pleased to announce a new Solvo integration into the Checkmarx One platform that will help our customers overcome many of the IaC security challenges outlined above. 

Hitting the IaC security sweet spot

Solvo is incredibly easy to onboard, and its outputs are actionable, meaning this application-aware cloud security platform helps R&D, DevOps, and security teams discover, monitor, and remediate misconfigurations.  

Solvo is an adaptive cloud infrastructure security platform that enables organizations to innovate at cloud speed and scale. Leveraging real-time monitoring and analysis across cloud infrastructure, applications, data and users, Solvo automatically creates customized, constantly updated least privileged access policies based on the level of risk associated with entities and data in the cloud. 

The prioritized findings deliver the remediation organizations need, uniquely created for every component, which is highly complementary to Checkmarx AppSec capability. Checkmarx One finds the IaC misconfiguration, and Solvo informs organizations not only how to remediate, but also how to do this in the best possible way, by automating IAM on a least-privileged basis.

Helping developers deliver secure code

Today we see a lot of responsibility shifting to developers, who are becoming the single stakeholder for all things cloud. However, they simply don’t have the time or the knowledge to understand the complexities of all these environments. As a result, developers often adopt a trial-and-error approach, which can cause issues in production. One simple change in a code file can have the ripple effect of blocking user access to resources and causing production downtime. Or, worse still, they are bombarded with so many misconfigurations that they simply ignore them, which widens the attack surface for hackers. And while security should be everyone’s responsibility, developers are unfortunately measured on delivering the next feature, not on how secure the application is. 

This is why our partnership with Solvo is so important, because Solvo provides customers with an Infrastructure-as-Code template meaning developers can use Solvo’s integration recommendations seamlessly via the Checkmarx One platform. 

Learn more

To find out more, view the recording of our recent webinar with Solvo, Teaming Up to Tackle Cloud Security Misconfigurations.

 

KPIs in QA and AppSec – You Call it Bug, We Call it Vulnerability
https://checkmarx.com/blog/kpis-in-qa-and-appsec-you-call-it-bug-we-call-it-vulnerability/
Wed, 30 Nov 2022 13:00:00 +0000

Measuring and why it is important

Peter Drucker is usually credited with saying, “If you can’t measure it, you can’t manage it.” Yet others say that he never put it quite like that, and also that it’s a fallacy[1]. We also believe, like Drucker, that the first role of a manager is a personal one: “It is the relationship with people, the development of mutual confidence, the identification of people, the creation of a community. This is something only you can do.”[2]

However, this does not mean that measuring is not important; it just means that the inversion of the argument is not true, i.e., measuring is very important for management, but it is not the only important factor in management. Without measuring, how would you know whether you are progressing towards your goals and success criteria? And if you see that you are not progressing, or not progressing as planned, aren’t metrics at least one good option for understanding the issue and looking into ways to improve the situation? So measuring is important, and these questions also give us some indications about how metrics should be designed.

Link to goals and objectives

But let us come back to what this means for an AppSec program: when we design AppSec programs, we view knowing the program’s specific goals and objectives as the foundation. By clarifying the goals and breaking them down into objectives (or milestones), we can also define better success criteria and then plan how to work towards them. Working towards the success criteria can involve soft factors like job satisfaction and organizational culture (in our case, a secure development culture and the job satisfaction of the stakeholders of the software development organization), but there are also factors that boil down to questions about hard numbers, such as “How many critical vulnerabilities do we have?”, “Is the number of vulnerabilities going up or down?”, and “Is the risk from application-level vulnerabilities[3] at an acceptable level to the business?”

Learning from other disciplines

Sometimes it is valuable to lean on the shoulders of other disciplines that have the same or similar requirements. For a software application in production, it matters little whether an outage is caused by a bug or by a directly exploitable vulnerability. Therefore, it stands to reason that the ways to measure progress in managing bugs are similar to those for managing vulnerabilities. There are of course differences; for example, risk of information disclosure or to data integrity is usually caused by vulnerabilities, and the damage to the business from information disclosure or altered data can be even bigger than a system outage. But that mainly means the problem is even bigger, so there is even more motivation to address an exploitable vulnerability than a bug. Drawing on long-standing background and experience in software quality assurance (QA) testing, there are three important ways to measure this at a business level:

1.) Escaped defects

This metric refers to the number of defects that have “escaped” the QA process and are found in production, i.e., in released software. This is one of the most important metrics in QA, since it is tied to the performance of the QA practice.[4]

2.) Defect distribution

Usually, this metric refers to when defects are found in the development cycle, and the aim is of course to find them as early as possible, which means to shift left (note to the AppSec experts among the readers: sound familiar?). Defect distribution measures how many defects were found earlier compared to later in the testing cycle, for example, unit testing vs. integration testing vs. escaped defects.

3.) Defect density

This metric refers to the number of defects in relation to the size of the release or software. The problem, however, is how to measure the size of a release or software. There are different ways; none of them is perfect, but the best available today is still to count the number of lines of code (LOC). Other options are to count the amount of time that went into developing the software or to count story points[5]. But it is not always easy to get reliable data for these other measures of density. LOC data is normally reliable to obtain, although it can depend on whether some code is auto-generated, and how much custom code is needed may depend on the programming language.
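The arithmetic itself is simple; as an illustrative C helper (the per-KLOC convention is a common one, not a Checkmarx-specific definition), defect density per thousand lines of code is:

```c
/* Defects per thousand lines of code (KLOC); returns 0 for an empty
 * code base to avoid dividing by zero. */
static double defect_density_per_kloc(unsigned defects, unsigned loc)
{
    return loc ? (defects * 1000.0) / loc : 0.0;
}
```

For example, 30 defects in a 60,000-line release give a density of 0.5 defects per KLOC, which can then be compared across releases of different sizes.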

How to design KPIs for the business/strategic perspective of AppSec

As stated above the challenge of bugs/defects in software in production is quite similar or almost the same as vulnerabilities in software in production. Therefore, we can learn from the best practices from QA testing also for AppSec testing. Some of the key requirements for designing and using KPIs are:

  1. Measure only what is part of the goals and objectives
  2. Be able to “drill-in”
  3. Create a baseline and track values over time
  4. Make the metrics available to all stakeholders and as live as possible
  5. Indicators should not fluctuate unnecessarily
  6. Combine measures to give executive leadership one score to track

(1) The business goals and objectives for AppSec typically revolve around ensuring that the risk from application-level attacks is at an acceptable level to the business, i.e., the metric should endeavour to measure and quantify risk. What constitutes risk (e.g., compliance risk, reputational risk, legal risk, or risk from a loss of revenue directly tied to a production outage due to software being unavailable) requires a more detailed analysis. When you design your metrics or key performance indicators (KPIs), you should make sure that you only measure what is included in your goals and objectives and nothing else. The aim should be to capture as much as possible of the goals and objectives in as few KPIs as possible, ideally in just one metric at the executive level.

(2) You should be able to “drill in” to the metric. This means you need to be able to break the number down into its individual components. For example, when using the defect density of the whole software development organization as a metric, you should be able to drill down into the defect density of each application that the organization is developing, so that areas with lower performance can be looked into, addressed, and improved. This is the best, and sometimes the only, way to work towards overall improvements. This also requires (3) baselining and tracking values over time; otherwise you cannot compare the performance of one area (e.g., one application) with the performance of the same area in a previous time period, and therefore don’t know whether the area has progressed or regressed. For the organization to work together on improvements, it is crucial that everyone is able to monitor the metrics (4) and therefore act if the values decline. This can also help drive some level of healthy competition between teams, because no team normally wants to be at the bottom of the list in terms of performance, or worse than another team they relate to.

Furthermore, indicators should not fluctuate unnecessarily (5). This can perhaps best be explained by an example of a KPI with a serious flaw in this sense: mean time to remediate (MTTR) is a common KPI that measures how much time (usually in days) it takes to remediate an issue after it was found. However, this metric only captures issues that are actually remediated. This means that when a software organization or team starts remediating issues that were identified a long time ago, the value will actually go up temporarily. The metric therefore penalizes good behaviour unless it is at least viewed in conjunction with the number of vulnerabilities that are still open. The amount of time that a vulnerability is open can of course be relevant, but our recommendation is to look first at vulnerability age (independently of whether the vulnerability has already been fixed). Only in practices with high maturity can it be interesting to also look at MTTR, but it should not be the first and most important KPI.
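The fluctuation problem with MTTR can be shown numerically. The sketch below (an illustrative C helper, not any vendor's formula) averages remediation times over the findings closed in a period, which is exactly why closing long-open debt inflates the number:

```c
#include <stddef.h>

/* Mean time to remediate, in days, over the findings closed in a period.
 * Only remediated findings enter the average; findings still open are
 * invisible to this metric. */
static double mttr_days(const double *remediation_days, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += remediation_days[i];
    return n ? sum / n : 0.0;
}
```

Closing two fresh findings in 5 and 7 days gives an MTTR of 6; also closing a 365-day-old finding in the same period pushes the average above 125, even though remediating old debt is exactly the behaviour you want to encourage.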

Lastly (6), it is a proven good practice to combine measurements into one weighted risk score. For example, relevant measurements from different testing tools, such as SAST and SCA, and findings of different criticalities should be combined into one weighted score. That way, executive leadership can review this score on a regular basis, e.g., monthly, and see if it is developing in the right direction. The metric only needs to be agreed once; afterwards it can be tracked and reported regularly, which should mean the business can rest assured that the risk from application-level attacks is being appropriately managed. If it does not develop in the right direction, you can drill into it (see above), understand the root cause, and address it.
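A weighted score of this kind might be computed as follows; the weights here are illustrative assumptions for the sketch, not a Checkmarx formula, and each organization would agree on its own:

```c
/* Combines finding counts by severity into one executive-level number;
 * higher-severity findings carry higher weights. */
static double weighted_risk_score(unsigned critical, unsigned high,
                                  unsigned medium, unsigned low)
{
    return critical * 10.0 + high * 5.0 + medium * 2.0 + low * 1.0;
}
```

Because the formula is fixed once agreed, the score is comparable month over month, and a rise can be drilled into by severity, tool, or application.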

KPIs for different levels – leadership/strategic and management/tactical

In QA as well as in AppSec, there are KPIs/metrics that are relevant at different levels of the organization. For example, in QA testing, metrics such as test coverage and code coverage are important on a tactical level and should be improved over time, but they matter more to QA management and less to executive leadership. The number of escaped defects will already reflect whether test coverage is sufficient. Therefore, executive leadership does not have to track test coverage, whereas for a QA manager this metric is important in working towards reducing escaped defects. Similarly, in AppSec testing there are findings detected in code that is already in production and findings in development branches that are not yet in production (feature branches). The latter do not cause risk to the business, but the density of such issues is relevant from a tactical perspective, to make sure fewer new vulnerabilities are introduced in newly developed source code.

Conclusion for AppSec KPIs

In conclusion, our recommendation is to measure only the defect density of vulnerabilities from different testing types, such as SAST and SCA, for code that is already in production (i.e., release branches). The defect density should be weighted by the criticality of the findings, i.e., higher-criticality findings should have a higher weight in the calculation. This metric is the equivalent of a combination of “escaped defects” and “defect density” in QA and can serve as the single value to track at the executive level.

Other metrics, such as vulnerability aging and defect distribution, are also important at the AppSec management level (though not necessarily at the executive level). Defect distribution can compare how many defects were identified in code that is not yet in production with vulnerabilities detected in source code that is already in production, and this ratio should be reduced over time. Vulnerability aging is a metric that should be reduced over time, and it will contribute to reducing vulnerability density over time: the shorter the time vulnerabilities are open, the fewer of them there will be. It is therefore a tactical-level KPI that helps improve the long-term strategic KPI.

Conclusion

Finding the right KPIs for AppSec testing can be difficult, but it is crucially important as one lever for managing application-level risk over time, and this risk is among the biggest for many organizations whose main business process relies on software or is software. In our opinion, it is valuable to lean on related disciplines and learn from the past to inform decisions about KPIs. We hope this article gave some insights from our practice into which metrics and KPIs to use. Please contact the authors if you have comments or further questions. The considerations presented in this article are one of a number of best practices in AppSec management and practice that we gather and use in the Checkmarx AppSec Program Methodology and Assessment (APMA) Framework. For more information, please see our APMA page.

This article originated from the joint work of Yoav Ziv and Carsten Huth at Checkmarx: Yoav with a long-standing background and experience in QA management, Carsten with a similar background and experience in application security. Working together on the customer-facing application of the topics discussed in this article, we gained insights that we find valuable to share with practitioners in application security and information security. 


[1] Anne-Laure Le Cunff: “The fallacy of ‘what gets measured gets managed’”, see https://nesslabs.com/what-gets-measured-gets-managed

[2] Peter Drucker, according to Paul Zak, see https://www.drucker.institute/thedx/measurement-myopia/

[3] Despite the esoteric introduction we talk about AppSec in this article.

[4] https://www.testim.io/blog/qa-metrics-an-introduction/

[5] Story points are used in Agile and Scrum methodologies for expressing an estimate of the effort required to implement a product backlog item or any other piece of work.

Presets, Queries, & Onboarding: The Checkmarx One Difference
https://checkmarx.com/blog/presets-queries-onboarding-the-checkmarx-one-difference/
Tue, 22 Nov 2022 20:41:19 +0000

Introduction To Checkmarx One

As more and more companies adopt modern application development methodologies and aim to “shift-left,” they are also adopting modern application security testing (AST) tools and best practices like integrating and automating AST tools into their development pipelines. But are these companies ensuring that they’re checking for the appropriate risks and working with high-fidelity results?

Checkmarx, a leader in Gartner’s AppSec Magic Quadrant for five consecutive years, understands the needs of modern development. In an effort to streamline scanning and help development teams secure code without slowing time to market, we released Checkmarx One™, the most comprehensive AST platform on the market. Checkmarx One brings our industry-leading SAST engine (and many others, such as SCA, KICS, etc.) to your AppSec and development teams via the cloud.

However, flexibility and speed-to-scan delivery are only part of the modern AppSec equation. Equally, if not more important, is providing solutions to the question above—this is where key Checkmarx One differentiators, presets and queries, make all the difference.

 

Presets And Queries In Checkmarx One

Before we dig into how exactly Checkmarx One’s presets and queries can help us address the challenge of checking for appropriate risks and working with high-fidelity results, it is important to understand the basics of both, including how they are used in the SAST engine scan process:

Preset = collection of vulnerability queries that define the scope of the SAST scan

Query = vulnerability rule written in CxQL

Any SAST engine scan initiated through Checkmarx One must have a preset defined at the organization, project, or scan level; see below for an example of a SAST preset being set on project creation via a presetName rule:

Note: The full list of predefined presets that are available in Checkmarx One can be found in our documentation here.

Selecting a preset from the drop-down menu, such as OWASP Top 10 – 2021, will limit that project’s scans to only check for vulnerability queries specific to the top 10 web application security risks according to the OWASP (Open Web Application Security Project) compliance guidelines for 2021.

After selecting a preset, each SAST scan generally follows this high-level process:

  1. Parse source code
  2. Build AST and DOM
  3. Build data-flow graphs (DFG) from code’s source and sinks
  4. Execute the scan preset’s queries against the DFGs
  5. Return vulnerabilities

As we saw in the definition provided for presets, they are integral to a successful, actionable SAST scan. Incorrectly setting a scan’s scope can cause scans to run long and inefficiently and, even more detrimental, produce results with a lot of noise, creating unnecessary work and confusion for your triaging teams.

Note: When evaluating AppSec platforms, it is important to verify that the SAST engine includes some sort of preset functionality, as many solutions do not provide one, which makes it impossible to limit result “noise.”

Speaking of triaging: while presets can ensure the correct scan scope, if your SAST results are not of high quality and contain too many false positives (FPs) or false negatives (FNs), then your SAST solution runs the risk of becoming ‘shelfware’. This is another area in which Checkmarx One excels compared to competing solutions: Checkmarx One’s SAST vulnerability queries use a proprietary syntax, CxQL (a C# derivative), that allows AppSec teams to easily customize vulnerability queries as needed to remove false positives and false negatives.

A common use case that neatly highlights the benefits of customizing queries is a cross-site scripting (XSS) finding that is a false positive because the application uses an in-house sanitizer method not included in Checkmarx One’s default out-of-the-box query. We can simply add this method to the appropriate CxQL query and rescan the project to remove the FP.
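The mechanics of that customization can be modeled in a few lines. This is a toy Python analogy, not CxQL: a taint query reports a flow unless a recognized sanitizer sits on the path, so adding the in-house sanitizer's name to the query's sanitizer set removes the false positive. The sanitizer and flow names are hypothetical.

```python
# Toy model of query customization: a finding is reported unless a
# recognized sanitizer appears on the data-flow path.
DEFAULT_SANITIZERS = {"html_escape", "encode_for_html"}

def xss_finding(path, sanitizers=DEFAULT_SANITIZERS):
    """Report a finding if no step in the data-flow path is a sanitizer."""
    return not any(step in sanitizers for step in path)

# Hypothetical flow through an in-house sanitizer the default query ignores.
flow = ["request.param", "my_inhouse_sanitize", "response.write"]

print(xss_finding(flow))  # reported: the default query is unaware of the sanitizer
print(xss_finding(flow, DEFAULT_SANITIZERS | {"my_inhouse_sanitize"}))  # FP removed
```

In CxQL the equivalent change is registering the method in the relevant sanitization query, after which a rescan no longer flags flows that pass through it.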

See this screenshot showing the ‘Find_full_XSS_Sanitize’ query via Checkmarx One’s CxAudit console:

Now that we understand the basics and benefits of presets and queries and Checkmarx One, let’s take an in-depth look at how we make the best use of both.

 

Preset Selection: Recommendations And Best Practices

There are several preset selection strategies that have proved to be successful amongst our customers of all sizes, from SMBs to the largest Fortune 500 enterprises:

  1. Only scan for what can be ‘reasonably remediated’
  2. Design custom presets based on application type and threat modeling
  3. Start small and expand—maturity model approach

Only Scan For What Can Be Reasonably Remediated

One of the common mistakes we see, both from teams early in their SAST scanning journey and from those with mature programs, is a well-intentioned but misguided approach of scanning for everything. A desire to get their money’s worth, or to prevent every possible risk, results in initial preset selections (or, with competitors, no option to choose a preset at all) that return an unworkable volume of findings and weigh on every team involved. Unfortunately, this tends to produce major efforts to review and triage the flood of findings, only for development teams to end up prioritizing and remediating a handful of vulnerabilities.

A better approach is to consider what is most critical for each project/team to remediate and select a preset with a scope that allows your teams to reasonably address and fix these issues before the next scan. This can help prevent frustration at unresolved issues and create momentum as teams close out issues.

Select Presets Based On App Type And Threat Modeling

It is also extremely important to use your knowledge and understanding of a project to choose presets which make sense based on the application’s architecture and application type.

Application type can influence the kind of weaknesses an application may be susceptible to.  For example, if there is no front-end web code in the application, XSS vulnerabilities, by definition, will not be present—so it does not make sense to use a preset that will try to find XSS weaknesses. Or, if an application doesn’t communicate with a database, SQL injection vulnerabilities will not be present and don’t need to be sought either.

This is where the use of predefined presets such as Android, Apple Secure Coding Guide, JSSEC, OWASP Mobile Top 10 – 2016, OWASP Top 10 API, WordPress, etc. are beneficial.

Start Small And Expand: Maturity Model Approach

Starting small is a good strategy for any customer, no matter the size, resource capacity, or AppSec maturity. But, it’s particularly appropriate for development teams that are new to application security testing. Starting small when selecting a preset will ensure teams aren’t overwhelmed or scared away by thousands of results.  Once a team has sufficiently triaged results found with a small, targeted preset, the scope of the preset can be widened to look for additional kinds of results.

This is most often implemented severity-first.

An example maturity model, utilizing the predefined presets, may look like the following, with each preset used until all scan findings for that preset are remediated, after which, the project advances to use the next preset:

  1. OWASP Top 10 – 2021
  2. High and Medium
  3. High, Medium, and Low
  4. ASA Premium

Project Onboarding: Putting It All Together

As noted previously, choosing the right preset is only half the battle in producing suitable, high-fidelity SAST scan results. Each preset includes a selection of vulnerability queries, and it is these queries that ultimately identify the risks within a scan. The accuracy and robustness of each query are the driving factors in whether FPs or FNs appear in your SAST scan results, and Checkmarx One’s SAST engine is the only AppSec platform with a truly flexible query language open to its users.

We recommend that our customers, either themselves or utilizing our services, perform a process that we call project onboarding first for their ‘main business applications’ followed by lower priority applications. Project onboarding is an optimization process that includes the following:

  1. Use selection strategies to select appropriate starting preset
  2. Perform initial scan
  3. Triage results to identify TP, FP, and FN
  4. Modify vulnerability queries to remove any quality issues found in step #3
  5. Adjust/select new preset if scope adjustment required
  6. Rescan and repeat process as necessary

This kind of complete and dynamic approach is required as the industry shifts to modern application development and its push for integrated SAST and other engine scans becomes more and more prevalent. Checkmarx One and its SAST engine are one of a kind, and our unique use of presets and queries sets us apart.

  

Request A Demo

Reach out to us today to request a demo, or sign up for a Free Trial to see for yourself!


Reducing Friction in AppSec Program Adoption: How Checkmarx One Can Help  https://checkmarx.com/blog/reducing-friction-in-appsec-program-adoption-how-checkmarx-one-can-help/ Fri, 14 Oct 2022 12:21:05 +0000 https://checkmarx.com/?p=79799

Organizations looking to adopt or improve an AppSec program often encounter challenges when trying to harmonize the outputs of various security scans and tools. And collecting, collating, and filtering security scan data is only half the battle—to build an effective security strategy, we also need to empower development and DevOps teams to take action to mitigate the vulnerabilities and risks that are found.

Often, information security and development teams have differing objectives; InfoSec teams aim to implement security controls to minimize risk as much as possible, whereas dev teams try to address security requirements with the lowest impact and effort without jeopardizing application stability or new features. And unless vulnerabilities are identified early in the development process, implementing necessary security controls can result in significant delays, frustrating both teams in the process. 

With Checkmarx One™, organizations can reduce friction among AppSec and development teams by simplifying, optimizing, and expediting the identification and remediation of security vulnerabilities, all within a single tool earlier in the software development process. 

Checkmarx One includes multiple scan engines, including Static Application Security Testing (SAST), Software Composition Analysis (SCA), and Infrastructure-as-Code (IaC) security, all on a single platform with easy-to-read, correlated scan reports.

SCA Knowledge Center 

The Checkmarx One SCA engine indexes and analyzes the software components bound to the scanned application through imported packages and/or dependencies. Third-party software can introduce unexpected vulnerabilities into an application, and our SCA scan engine helps developers and information security teams understand and remediate those vulnerabilities.

When an application is scanned, Checkmarx One SCA identifies packages involved in the dependency tree of the application, together with Supply Chain Security (SCS) risks, licensing information, known vulnerabilities, and packages linked to software containers. 

The main result view for a given package, found during a Checkmarx One SCA scan, lets the user collect all the relevant data to triage the finding and turn that information into a remediation action:

Checkmarx SCA suggests the newest version of the component, based on all the information collected during the scan and the package’s release history of security fixes and updates.

Upgrading to the latest version of a component requires a proper evaluation of the impact not only on application security, but also on the functional capabilities of the application; this decision process can therefore cause friction between development and InfoSec teams.

As in this example, upgrading a component three major versions ahead may disrupt features, result in incompatibilities, or require additional development effort. 

Usually, an application security program is adapted to the risk of the application it needs to mitigate, and there is room for negotiation to balance the needs of InfoSec with those of developers: mitigate risk as much as possible without breaking any existing application functionality.

Checkmarx’s AppSec Knowledge Center assists in this goal by identifying and illustrating the release history of a given package, helping inform that negotiation:

By browsing the change list, the user can identify the most suitable upgrade, allowing organizations to lower risk while minimizing impact. 

In the example above, there is a version of the mysql-connector-java package (5.1.49) which is suitable for lowering the risk from High to Medium while not changing the major version of the package: 

As a side note, the version slider can also help identify less effective upgrades; for example, a higher major version is not always the most suitable choice (version 8.0.12 still has high vulnerabilities): 

With the support of Checkmarx One’s SCA technology, InfoSec teams and developers can implement a prioritized remediation program addressing even the most complicated dependency structures, with the ability to drill down into each specific dependency and version upgrade recommendation.

Managing multiple vulnerabilities 

Checkmarx One offers multiple engines, including Software Composition Analysis, API Security, IaC Security, and the well-known SAST engine.

Checkmarx One SAST performs source code analysis of an application, supporting a large number of programming languages and frameworks.

Generally, security vulnerabilities are detected using specific patterns and the evidence of each result is shown through an attack vector or a data flow. 

For example, injection vulnerabilities (such as XSS or SQL injection) are demonstrated when a data flow is detected from an untrusted origin (e.g., user-controllable input), known as a source, to a sink—a location in the code where the untrusted data “flows” into a point of impact (e.g., a database operation or an HTML response)—without passing through a sanitizer (e.g., parameterized queries or proper encoding).
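A small, self-contained Python example makes the source/sink/sanitizer triad concrete: the vulnerable query interpolates untrusted input straight into the SQL text, while the safe version uses a parameterized query, which acts as the sanitizer on the data flow. The table and input values are invented for illustration.

```python
# Source, sink, and sanitizer illustrated with SQL injection (sqlite3).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice' OR '1'='1"     # untrusted data: the *source*

# Vulnerable: tainted data reaches the sink (execute) with no sanitizer,
# so the injected OR clause matches every row.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '%s'" % user_input
).fetchall()

# Safe: the "?" placeholder parameterizes the query, sanitizing the flow;
# the whole payload is treated as a literal name and matches nothing.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(unsafe), len(safe))
```

A SAST attack vector for this finding would trace `user_input` from its source to the `execute` sink and note the absence of a sanitizer on the path.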

Due to the nature of applications and their modularity, often several attack vectors have data flows in common: the same user-controllable source could influence several sinks, or a single sink is the destination of multiple user-controllable sources. 

Checkmarx One is often able to identify the Best Fix Location (BFL) and can highlight the specific line number within the specific file where the vulnerability can be addressed, saving both InfoSec teams and developers time and effort. 

To help prioritize findings for both InfoSec and development teams, multiple filtering techniques can be leveraged for Checkmarx One SAST results, allowing teams to apply primary and secondary groupings. 

Filters can be applied by: 

  • Grouping all the languages involved 
  • Grouping results by severity 
  • Grouping results by Sink File (destination of attack vectors) 
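The sink-file grouping above can be sketched in a few lines: collapsing findings onto their sink file shows which files sit at the end of the most attack vectors. The findings list here is hypothetical, shaped loosely like a triage export.

```python
# Sketch of grouping findings by sink file to find where attack vectors
# collide. The (severity, sink file) pairs are invented for illustration.
from collections import Counter

findings = [
    ("High", "UserController.java"),
    ("High", "UserController.java"),
    ("Medium", "SearchService.java"),
    ("High", "UserController.java"),
]

by_sink = Counter(sink for _, sink in findings)
print(by_sink.most_common(1))  # the file where the most flows terminate
```

Swapping the sink file for the source file in the same tally gives the complementary view described below: which files are most exposed to user-controllable input.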

Leveraging filtering, InfoSec and development teams can identify pertinent security scan results and properly prioritize mitigation actions. 

For example, in the figure below, we can see the files most impacted by security vulnerabilities, since multiple attack vectors collide in the same sink file; analyzing these files helps developers resolve a larger number of results in a single analysis:

In contrast, we can identify which files are most exposed to user-controllable input by grouping per source file: 

When there are a large number of results, the top priority for InfoSec teams is to lower the effort needed to analyze them. For development teams, the priority is instead to address the most precise (and ideally fewest) locations in the code where the proper mitigations can be applied.

This is the key to a successful AppSec Program. 

Evaluating the security posture 

In general, applications are made up of a number of modules, which may be tied in a single repository or widespread in multiple repositories. Furthermore, application modules may be owned by different development teams. 

An organization can be very diversified, and one objective of InfoSec teams is to evaluate the security posture of their assets from perspectives that may differ greatly from the technical organization of modules and repositories as seen by developers.

Checkmarx One offers the ability to group different application modules (or different applications) under the same entity, aggregate their results, and gain a view of the overall security posture of a given group of assets.

Every AppSec program associates a risk level with each asset involved; it is therefore extremely important to reflect the same logic in the unified security tool.

Within Checkmarx One, the user can create an application group that aggregates scans from different projects, optionally defining grouping criteria through tags and assigning a “Criticality Level” to the group itself, as illustrated in the figure below:

By filtering projects by tag, they can be assigned to an application group automatically.
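The tag-to-group rollup can be sketched as follows: projects carrying a given tag are collected into one application group, and the riskiest member sets the group's posture. Project names, tags, and the risk rollup rule here are hypothetical simplifications, not the platform's exact logic.

```python
# Sketch of tag-based application grouping with a worst-case risk rollup.
from collections import defaultdict

projects = [
    {"name": "storefront-ui", "tags": {"frontend"}, "risk": "High"},
    {"name": "checkout-ui",   "tags": {"frontend"}, "risk": "Medium"},
    {"name": "orders-api",    "tags": {"backend"},  "risk": "Low"},
]

groups = defaultdict(list)
for p in projects:
    for tag in p["tags"]:
        groups[tag].append(p)        # every tag becomes a candidate group

order = ["Low", "Medium", "High"]
# Group posture reflects the riskiest member project.
posture = max((p["risk"] for p in groups["frontend"]), key=order.index)
print(posture)
```

From a list of projects to an application-wide posture in a handful of lines, which is the same few-clicks experience the UI provides.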

From a list of projects to application-wise security posture in just a few clicks! 

Scanned projects correspond to specific repositories, modules, languages, microservices, or monolithic applications: 

Grouping them by a given criterion shows the larger picture of the logical entity “Front End Apps:”

The InfoSec team can then see the real security posture of a group of assets:

Being able to aggregate results across different sets of findings will help InfoSec teams evaluate the overall security posture of a multipart application, drive decisions toward more focused mitigating actions, and help developers direct their efforts more accurately.

Conclusion

Checkmarx One is a powerful platform, designed to execute a large number of scan operations on applications in a very short time frame. The results can be managed and integrated with multiple issue-tracking environments.  And because we aggregate results from multiple scan engines within a single platform, we improve developer productivity, foster better collaboration among InfoSec and development teams, and help organizations improve their overall security posture through targeted guidance and prioritization. 

Harmonizing the visibility of all the actors involved will help transform results from a security platform into a prioritized, feasible, accurate, and effective mitigation program. 

To see for yourself, sign up for a free trial or reach out to our sales team today.
