Stephen Gates, Author at Checkmarx
https://checkmarx.com/author/stephen/

Checkmarx Supply Chain Threat Intelligence: The Next Level of Defense for Open Source Security
https://checkmarx.com/blog/checkmarx-supply-chain-threat-intelligence-the-next-level-of-defense-for-open-source-security/
Tue, 31 Jan 2023 14:03:00 +0000

In the world of attacker vs. defender, security teams often feel they're behind the eight ball, operating in a state of perpetual reaction. Although they tirelessly try to get ahead of attackers and their campaigns, defenders' efforts often fall short. This is not due to a failure on their part; rather, attackers' tactics, techniques, and procedures (TTPs) are in constant flux, driven by relentless inventiveness. What we witnessed in 2022 motivated Checkmarx to become the industry's first software security vendor to deliver supply chain threat intelligence to those who rely on the open source software ecosystem.

2022 was another year of unrelenting attacks against organizations that thrive on the very software they develop, but it was unlike anything our researchers had seen before. Last year, we observed an increasingly advanced level of ingenuity as attackers took complete advantage of a system built upon trust: the open source software supply chain. This time, having caught them red-handed on multiple occasions, we know what it will take to stay one step ahead of their attacks.

In March 2022, Checkmarx released its Supply Chain Security solution as our research teams witnessed the evolution of attackers' TTPs firsthand. The solution is being widely adopted by organizations that depend on the software supply chain, since open source packages play an important role in their code bases. Understanding that organizations will continue using open source packages in their applications for the foreseeable future, Checkmarx has announced another arrow in the quiver of enterprise-class, open source supply chain defenses: Checkmarx Supply Chain Threat Intelligence.

How our threat intelligence is different

Traditionally, real-time threat intelligence has mostly been about identifying nefarious source IP addresses that were engaging in attacks. Many of these IP addresses were compromised devices that became part of a botnet, being centrally controlled from somewhere in the world, and used to strike organizations with denial of service, credential stuffing, password guessing, site scraping, spamming, and probing attacks. Consumers of this type of threat intelligence would block traffic coming from these nefarious addresses somewhere in the cloud or at their perimeters.

However, Checkmarx Supply Chain Threat Intelligence is quite different from what has traditionally been available. This threat intel focuses solely on the software supply chain the world depends on. Moreover, the solution Checkmarx is delivering is not based upon vulnerable packages of the kind commonly tracked by cve.mitre.org. Instead, this intel is all about tracking purpose-built, malicious packages that often contain ransomware, cryptomining code, remote code execution payloads, and other common types of malware. Malicious packages are designed to infect organizations worldwide and are very different from packages that contain unintentional coding errors that end up leading to vulnerabilities.

What our threat intelligence delivers

Based exclusively on proprietary research by Checkmarx Labs, our Supply Chain Threat Intelligence is for organizations that want:


    • Identification of malicious packages by attack type such as dependency confusion, typosquatting, chainjacking, and more

    • Analysis of contributor reputation through identification of anomalous activity within open source packages

    • Intelligence on the malicious behavior of packages, including static and dynamic analysis to understand how the code runs

    • Historical archives in the form of a data lake that allows ongoing analysis of packages long after they have been deleted from package managers

How to consume our threat intelligence

Checkmarx Supply Chain Threat Intelligence is delivered via an application programming interface (API). Users obtain a unique token from Checkmarx, send in a package name and version, and receive threat intelligence on that package. The intel is simple to integrate into dashboards and to automate within your software development environments.
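As a sketch of that request cycle, the Python below assembles an authenticated package lookup. The endpoint URL, path layout, and bearer-token scheme are placeholders rather than the actual Checkmarx API; consult the official documentation for the real details.

```python
import json
import urllib.request

# Hypothetical endpoint for illustration only; the real Checkmarx API URL,
# paths, and authentication scheme are documented by Checkmarx.
API_URL = "https://intel.example.com/v1/packages"

def build_request(package_name, version, token):
    """Assemble an authenticated lookup for a single package@version."""
    url = f"{API_URL}/{package_name}/{version}"
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})

def fetch_intel(package_name, version, token):
    """Send the lookup and return the threat-intel verdict as a dict."""
    req = build_request(package_name, version, token)
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

req = build_request("lodash", "4.17.21", "MY-API-TOKEN")
print(req.full_url)  # https://intel.example.com/v1/packages/lodash/4.17.21
```

A CI job could call `fetch_intel` for every dependency in a lockfile and fail the build when a package comes back flagged as malicious.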

Why you need our threat intelligence

The best part of Checkmarx Supply Chain Threat Intelligence is that it is designed for you, the developer and AppSec professional. Subscribing to the service, and using it regularly, will help you:


    • Avoid malicious packages before they become part of your code base – and, critically, before code containing them is ever deployed

    • Understand the evolution of attackers’ TTPs against the supply chain

    • Collect intelligence on large numbers of packages at once using bulk queries

    • Increase security awareness with real-time updates and alerts on new threats

    • Make better open source package selections using our valuable insights and context

Next steps

To learn more about Checkmarx Supply Chain Threat Intelligence, you can check out the interactive demo below and download our Solution Brief to share with others.

If you’d like a live demo of the solution, don’t hesitate to contact us here.

Or you can download the following white papers to learn more about supply chain attacks and the Checkmarx approach to supply chain security:

Navigating the Microservices Maze: Understanding the Pros and Cons of Microservices Architectures
https://checkmarx.com/blog/navigating-the-microservices-maze-understanding-the-pros-and-cons-of-microservices-architectures/
Wed, 11 Jan 2023 11:50:00 +0000

The Internet is filled with articles that sing the praises of microservices, and more than three-fourths of developers say they are using microservices in at least some of their applications.

The fact that microservices are popular, however, doesn’t mean they are always the right architectural solution for a given application. Although microservices have a lot of benefits to offer, they can also create challenges in areas like observability, security, and beyond.

So, before jumping on the microservices bandwagon, it’s critical to understand the pros and cons of microservices architectures. This article does that by briefly defining microservices, then discussing the three biggest advantages and disadvantages of using a microservices-based design.

Microservices, Defined

Microservices are a type of software architecture that breaks application code and functionality down into discrete, loosely coupled units that communicate with each other over the network. This makes microservices different from so-called monolithic applications, in which all application components run as a single, tightly integrated program.

There are many ways to go about designing and implementing microservices. You don’t need to use a certain programming language or a special platform or infrastructure to create microservices apps. Nor is there a minimum number of services you have to create for your software to count as a microservices application. As long as you break the application into discrete, loosely coupled parts, it’s a microservices app, no matter where it’s deployed or exactly how you define each microservice.

3 Main Advantages of Microservices

If I had to explain why microservices have become so popular in recent years, I’d point to three key advantages that they offer: scalability, ease of deployment, and fault tolerance.

Scalability

Microservices help applications to scale in two main ways.

First, microservices make it feasible to scale individual parts of your application in response to fluctuations in demand. For example, if there is a spike in login requests to your app, you can create more instances of the microservice that handles authentication. Because you only need to scale up this particular part of the application, scaling is faster and more efficient than it would be if you had to create more instances of a full application.

Second, because each microservice in an application is typically small, spinning up additional instances (or spinning down unnecessary ones) is faster and easier than spinning up a complete application. There’s less code to deploy, and less compute time needed.

Simpler Deployment

Beyond facilitating scalability, the small size of microservices means that they are easier to deploy in general. Deploying a new version of a monolith requires moving a fairly large amount of code into production, then spinning up the application. Plus, you’d also have to build, stage, and test the entire monolith before you could deploy a new version, which also significantly slows down the process.

With microservices, deployment is faster and less complicated because you only have to deploy one microservice at a time. Typically, you can update one microservice in your app while continuing to use older versions of other microservices.

Fault Isolation

Because microservices break applications into discrete parts, the failure of one microservice doesn’t typically cause the entire application to crash. In this way, microservices create fault tolerance.

That said, it's important to recognize that fault tolerance alone doesn't guarantee application reliability. If a critical microservice fails, the application may effectively stop functioning even if the other microservices keep running. For instance, if your authentication service goes down, users won't be able to log in, so the app will still experience a critical disruption.

Still, microservices provide some ability for applications to continue working even when part of the application fails. Users who are already logged in, for example, would most likely be able to continue using the app even if the authentication service failed. That beats having a monolith where everything fails completely due to a failure in any one part of the app.

The Top 3 Disadvantages of Microservices

On the other hand, microservices can pose challenges. The biggest are management complexity, observability challenges, and heightened security risks.

Management Complexity

Managing any type of application is challenging, but managing microservices apps is especially challenging.

The reason why is simple enough: microservices mean that there are more moving pieces within your application. By extension, there is more to configure and more to deploy. (Indeed, although deploying each microservice is simpler than deploying a monolith, the fact that you have to deploy each microservice separately means you end up having many more deployment processes to plan and manage.) There is also more room for things to go wrong due to misconfigurations, malformed requests between microservices, infrastructure failures, and so on.

Observability

Observing microservices applications – which means collecting and analyzing data from them in order to understand their state – is harder than it is with monoliths, for two main reasons.

The first, and simplest, factor is that there is more data to collect, and more sources to collect it from, because you are dealing with multiple microservices. With a monolith, you’d typically have only a handful of data sources (like an application log and a server log) to deal with.

Second, to observe microservices effectively, it’s critical to be able to correlate and compare observability data from your various microservices. Thus, you face the challenge not just of collecting more data from more sources, but also of figuring out how to put all of that data together in a way that provides meaningful context. Modern observability tools can simplify this process to an extent by automatically interrelating data from discrete microservices, but you’ll still need to configure observability rules manually in some cases to gain as much insight as possible into your application.
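To illustrate the correlation problem, here is a toy sketch in Python: if every service tags its log lines with a shared request ID, entries from different services can be grouped into a single end-to-end trace. The log format, service names, and field names are invented for the example.

```python
from collections import defaultdict

# Invented log entries from three hypothetical services; in practice these
# would come from each service's log stream or a tracing backend.
logs = [
    {"service": "gateway", "request_id": "r1", "msg": "received /login"},
    {"service": "auth",    "request_id": "r1", "msg": "token issued"},
    {"service": "gateway", "request_id": "r2", "msg": "received /profile"},
    {"service": "users",   "request_id": "r2", "msg": "profile loaded"},
]

def correlate(entries):
    """Group log entries from all services by their shared request ID."""
    traces = defaultdict(list)
    for entry in entries:
        traces[entry["request_id"]].append((entry["service"], entry["msg"]))
    return dict(traces)

traces = correlate(logs)
print(traces["r1"])  # [('gateway', 'received /login'), ('auth', 'token issued')]
```

Real observability tools do this at scale with distributed trace and span IDs, but the principle is the same: without a common correlation key, per-service data cannot be stitched into a meaningful whole.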

Security Challenges

There is nothing inherently insecure about microservices. In fact, in some respects, you could argue that microservices are more secure in the sense that the breach of one microservice doesn’t necessarily mean your entire app is breached.

On the other hand, the fact that microservices are more complex, and that they introduce more moving parts to an application, means that securing them is harder. You have more configurations to worry about, which means there is a greater risk of misconfigurations that could create security vulnerabilities. At the same time, it can be harder to detect anomalies within the complex patterns of communication between microservices. That means attacks are not always as visible.

On top of this, microservices typically require the use of additional layers of infrastructure, such as an orchestrator and a service mesh. By extension, microservices create a larger attack surface, which also heightens security risks.

All of these security challenges can be addressed, but they require more effort than you’d typically need to secure a monolith.

Conclusion

To decide whether microservices are right for you, weigh what’s most important in an app. Are benefits like scalability, ease of deployment, and fault isolation worth more than the challenges of managing, observing, and securing microservices?

There's no universally right or wrong answer to that question. But there is a right or wrong answer for your particular circumstances. Deciding when – and when not – to use microservices hinges on answering that question for yourself.

Chris Tozzi has worked as a Linux systems administrator and freelance writer with more than ten years of experience covering the tech industry, especially open source, DevOps, cloud native and security. He also teaches courses on the history and culture of technology at a major university in upstate New York.



Anyone who develops applications built on microservices architectures understands that APIs are a foundational element that must be secured. To learn more about how Checkmarx can help you secure your APIs, click here.


When It Comes to Cloud, Location Matters
https://checkmarx.com/blog/when-it-comes-to-cloud-location-matters/
Tue, 02 Aug 2022 14:26:54 +0000

Checkmarx One™ Cloud-Based Application Security Platform Is Expanding its Footprint 

The average layperson likely envisions "the cloud" as some amorphous, ethereal thing that has no boundaries and none of the physical constraints of a typical computing network. Those of us who have helped build "the cloud" know it much differently. To us, it's just another network made up of routers, switches, cables, fibers, load balancers, servers, security technology, and lots and lots of software. The cloud is nothing more than a network that happens not to exist in your buildings. Instead, it's a network in someone else's buildings, often spanning the world with near-global reach. Points of presence, edge locations, availability zones, regions, branches, global backbones, peering agreements: these are terms often heard from engineers who speak the cloud vernacular.

One of the issues that has often stymied the acceptance of cloud-based offerings like SaaS, PaaS, and IaaS is location. The reason is simple: jurisdiction. Regulatory compliance requirements like GDPR in the EU, data protection and privacy laws in countries all over the world, and general FUD (fear, uncertainty, and doubt) about the cloud being too risky have slowed its growth. FUD is not easy to overcome; however, data protection and privacy requirements can be met by expanding the cloud footprint. Let's use DDoS defenses as an example of what we're talking about.

If an organization in the EU wanted to buy DDoS defenses from a cloud-based provider, the provider would need cloud-based DDoS scrubbing centers in the EU. When an organization came under a DDoS attack, the provider would redirect all inbound traffic destined for the site under attack to a scrubbing center. The scrubbing center would remove the DDoS component of the traffic and return the "good traffic" to the site under attack.

Most people would assume this is due to distance requirements, potential latency, and the like, but it is not. Since cloud backbones are made up of large numbers of 100-gigabit pipes, latency is not the issue; jurisdiction and data sovereignty are. In other words, once traffic destined for an organization in the EU is inside the EU, it must remain in the EU to meet regulatory compliance requirements. If the DDoS scrubbing center were outside the EU, that would be a compliance issue. The same goes for cloud-based application security testing (AST) platforms like Checkmarx One.

Checkmarx One is an AST platform that runs in our own cloud. And like any cloud-based DDoS defense provider, the Checkmarx "cloud" must have a global presence. That is why we already have SaaS instances in North America, the EU, Australia, India, and Singapore: five SaaS instances to better serve our customers worldwide. Again, this has nothing to do with latency or distance; it has to do with data sovereignty. With organizations, software developers, and application security teams located all over the world, we must operate in the same regions in which they operate. Checkmarx One is available to all organizations in all regions, regardless of their size or number of software developers.

For Checkmarx, it has always been about a global AppSec presence, because software knows no boundaries but data sovereignty does. There is also no difference in the cost of our platform no matter where in the world you operate. Organizations send their code to the location of choice, the platform scans it for vulnerabilities and associated risks, and the results are returned to the organization in the most secure fashion possible. We coined the term "Global AppSec" for this, because we understand that code knows no boundaries, and neither does security.

In other words, software security where you need it, when you need it, and however you like it. That is what Checkmarx One is all about.

To learn more about Checkmarx One Application Security Platform, visit us here. Or sign up for a Free Trial of the Platform. We guarantee you won’t regret it.

What Are the Challenges with Securing APIs?
https://checkmarx.com/blog/what-are-the-challenges-with-securing-apis/
Wed, 27 Jul 2022 17:50:45 +0000

When you expose API services to the public internet, you are responsible not only for their reliable operation, but also for their security. Sufficiently securing and protecting a public API is not just a business necessity – it's a regulatory requirement. If your APIs are leaking secrets and sensitive PII, you run the risk of facing lawsuits for failing to protect that information by properly securing them.

This may sound cliché, but the harsh truth is that many public APIs contain numerous vulnerabilities. Therefore, properly securing APIs so that the worst-case scenario doesn’t become a reality is a major challenge for development teams.

This article aims to give the reader a better understanding of what will work and what won’t work in the context of API security. We will walk you through the most essential considerations, taking into account the fact that there is no silver bullet for API security due to the ways in which APIs are built and deployed.

Let’s get started.

Reasons Why APIs Are Hard to Test and Secure

APIs come in all shapes and sizes. The most critical parts of an API are often the most susceptible to attacks exploiting vulnerabilities like those listed in the OWASP API Security Top 10. For example, attackers might attempt to abuse paths intended for logged-in users, exfiltrate sensitive data by fuzz testing the endpoints, or take the site down using DDoS attacks. As the API surface grows, of course, so does the risk of exposure. Imagine having to protect APIs with hundreds of endpoints, each with its own conventions.

This could happen quite easily when there are multiple teams working on the same project and contributing to the same API. The API might not have a sole owner; instead, it might be exchanged between multiple stakeholders (such as project owners, developers, testers, operations, and network and security teams), with each group submitting their own piece to deliver new features to their customers.

Relying too heavily on any one part of the delivery pipeline to provide sufficient security controls is equally problematic. For example, you shouldn't rely on the development team to deliver the most secure software all the time. Developers do not habitually think like attackers. They apply basic pragmatic reasoning and accept logical trade-offs when delivering sprint goals, but they miss important security considerations fairly often, simply because they are generally unaware of use and abuse scenarios. Likewise, if you expect your WAFs to perpetually block all unwanted traffic and your static code analyzers to always pinpoint every security flaw in the code, you will be in for a big surprise when attackers exploit unknown vulnerabilities or zero-day bugs.

Therefore, since APIs are hard to test and secure, you must be creative in combining conventional tools, agile methodologies, and various approaches that work well together. Let's explore some of them.

Tools and Approaches That Work Well Together – and Some That Don’t

Once you have established that you need to use flexible approaches to keep your APIs secure, you want to make sure that you are following the right guidance. There are lots of great guides and best practices for API security, but it’s important to differentiate the ones that make the most sense for your APIs from the ones that don’t.

Some of the approaches that work with API security are mainly focused on establishing in-depth defenses:

  • Authenticate and authorize by default: When you are developing APIs, you should explicitly mark unauthenticated endpoints (and not the other way around). For example, you should protect all endpoints with strong authentication (2-factor) and assign a default role that cannot read or modify any data. That way, there is less risk of exposing new endpoints that do not have any sort of protections.
  • Use API security scanners: Tools and services in this category include runtime protection, DAST (vulnerability scanning tools), static code analyzers, and security bots. These tools provide a nice layer of defense against baseline security issues.
  • Use penetration testing: You can hire professionals who think like attackers and are able to perform sophisticated tampering. These professionals have access to unconventional tools and leverage techniques that help expose issues that security scanners won’t pick up by default.
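The first approach above, authenticate and authorize by default, can be sketched framework-agnostically in plain Python: every endpoint requires an authenticated caller unless it is on an explicit public allow-list, and callers without an assigned role fall back to a role that can do nothing. The endpoint names, roles, and permissions below are invented for illustration.

```python
# "Deny by default": new endpoints are protected automatically, and only an
# explicit allow-list opts endpoints out of authentication.
PUBLIC_ENDPOINTS = {"/health", "/login"}
ROLE_PERMISSIONS = {
    "none": set(),              # default role: cannot read or modify anything
    "reader": {"read"},
    "admin": {"read", "write"},
}

def authorize(path, user, action):
    """Return True only for public paths or permitted, authenticated users."""
    if path in PUBLIC_ENDPOINTS:
        return True
    if user is None:            # unauthenticated caller on a protected path
        return False
    role = user.get("role", "none")   # a missing role falls back to "none"
    return action in ROLE_PERMISSIONS.get(role, set())

print(authorize("/users", None, "read"))  # False
```

The key property is that forgetting to configure a new endpoint leaves it locked down rather than exposed, which is the opposite of the failure mode described above.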

On the other hand, some approaches might not work in the long run because they possess inherent risks. For example:

  • Security through obscurity: This means that we try to secure part of the API by making it harder to use or to discover its endpoints or behavior. This is like an Easter egg hunt where you expose certain parts of the API only to clients that know where to look for them. Or you might make it harder for attackers to guess some of the API schemes by using special query parameters or headers. Although this approach might work for certain cases, it is not considered 100% secure. Cunning attackers might figure out a way to uncover those hidden parts of the API or infer how the API response works, which would give them the ability to retrieve sensitive information.
  • Using JSON Web Tokens (JWTs) for storing sensitive data: JWTs offer a good balance between security and convenience when working with APIs – as long as you abide by the rules. If you store sensitive data in the JWT payload, for example, you are already compromising security since JWTs can be decoded easily. Always use industry-standard JWT libraries, strong JWT secrets, and short expiration tokens. In addition, always use HTTPS.
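The JWT rules above can be made concrete with a stripped-down, standard-library sketch of an HMAC-signed token. This is an illustration of the principles, not a real JWT implementation (production code should use a maintained JWT library): the payload is merely base64-encoded and readable by anyone, and expiry must be enforced on verification. The secret and claim names are invented for the example.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"replace-with-a-long-random-secret"  # illustrative only

def sign(payload, ttl_seconds=300):
    """Sign a payload with a short expiration (the 'exp' claim)."""
    payload = {**payload, "exp": int(time.time()) + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest().encode()
    return (body + b"." + sig).decode()

def verify(token):
    """Return the claims if the signature is valid and unexpired, else None."""
    body, _, sig = token.encode().partition(b".")
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        return None                       # tampered token or wrong key
    payload = json.loads(base64.urlsafe_b64decode(body))
    if payload["exp"] < time.time():
        return None                       # expired token
    return payload

token = sign({"sub": "user-42"})          # note: no PII or secrets in the payload
claims = json.loads(base64.urlsafe_b64decode(token.split(".")[0]))
print(claims["sub"])  # anyone can read this without the secret: user-42
```

The last two lines show why storing sensitive data in the payload compromises security: decoding requires no key at all. Only the signature check does.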

We mentioned before that APIs can be secured by automating the scanning of application code. Let’s explore that in more detail.

How to Effectively Scan APIs

Start with the source of truth: the source code. By scanning the source code, you get a complete view of the APIs inside a project. This also allows developers to easily and quickly fix any security issue identified by static analysis. Since APIs need to be human-readable, scanning API documentation such as Swagger/OpenAPI definitions can also be key to identifying risk, covering concerns like access control, configurations, and best practices. This is an example of shift-left security, wherein DevOps teams ensure that security is built into application development rather than added on later.

Next, once those APIs are deployed to an environment, it is easy to perform real end-to-end (E2E) tests against your live API based on predefined rules. If the scanner finds any defects or suspicious red flags, it reports them to a dashboard for triage. These scanners include many checks (such as the OWASP API Security Top 10 and open CVEs) and can automatically create policies without intervention.

Even if you have a perfectly valid specification file, though, it doesn’t mean that the scanners will find all of the issues. It’s equally important that you are able to feed the scanner raw HTTP recorded sessions using either Fiddler or Burp so that it can verify unknown parts of the API.

Ways to Address the Continuous Development of APIs and Ever-Changing Contracts

At the end of the day, how can you address the continuous development of APIs and ever-changing contracts when you have to contend with multiple attack vectors in real time?

There are several options to consider:

  • Scan early and often: Software is developed at the speed of light nowadays. The only way to keep up with ever-changing contracts is to scan the API documents and source code whenever the development team changes them. This can alert developers immediately to critical vulnerabilities and risks, and it allows them to learn better practices and secure their APIs well before they go live.
  • Create clean environments for security testing: Sometimes you can’t slow down the release of new features in the name of security. It’s important that security teams create special environments where they can introduce novel security testing tools and scanners for advanced security testing. The primary idea is to conduct intelligent analytics, pinpoint hidden attack vectors, and expose vulnerabilities without affecting the production environment. Once a security issue has been found and mitigated in this special environment, the security team can patch the production systems using the current change request system.
  • Adopt DevSecOps workflows: DevSecOps is about integrating your IT security team into the full lifecycle of your app. Put simply, this means that security teams follow short and frequent development cycles, integrate security tools and vulnerability scanners with minimal intervention, ensure that all operational technologies run with the optimal security configurations, and promote a security-first mindset across isolated teams. This can be accomplished by including automated security checks throughout the CI/CD process and creating service templates that are secure by default for the development teams to adopt.

Next Steps with API Security

We want to conclude this article by emphasizing that there is no way your APIs can be 100% secure at all times. Instead, organizations should remain vigilant, constantly scanning their APIs for risks and vulnerabilities. As a security professional, it's crucial that you stay informed about the state of the API security ecosystem. Reading security advisories like the ones from Checkmarx is a great way to acquire this knowledge. These advisories discuss real public exposures while providing deep root cause analysis and explaining mitigation tactics. Feel free to sign up for more tutorials about API security and DevSecOps.

About the Author

Theo Despoudis is a Senior Software Engineer, a consultant and an experienced mentor. He has a keen interest in Open Source software Architectures, Cloud Computing, best practices and functional programming. He occasionally blogs on several publishing platforms and enjoys creating projects from inspiration. Follow him on Twitter @nerdokto.



To learn more about the many risks (including APIs) in modern application development, download this e-book today.

Or, if you would like to learn more about the Checkmarx approach to API security within the Checkmarx One™ Application Security Platform, this white paper explains it all.

What Is Your API Attack Surface?
https://checkmarx.com/blog/what-is-your-api-attack-surface/
Wed, 20 Jul 2022 11:36:20 +0000

The proliferation of APIs today is astonishing. According to a recent report, the number of active APIs will approach 1.7 billion by 2030. You might expect the majority of those APIs to be resistant to attacks or vulnerabilities; however, that is not necessarily the case. In fact, a major study from RapidAPI on the state of enterprise APIs revealed a distressing lack of consistent policy enforcement and visibility across APIs. Many of those APIs may expose parts that are undocumented, which is a major concern for security teams.

This article explores the fundamental reasons why an API can become insecure and discusses ways to reduce the attack surface.

How Do Documentation Tools Like Swagger Fall Short?

Documenting APIs comes with inherent difficulties. The main problem areas include ambiguity of exposed functionality, incomplete or undocumented content, and incorrect responses. Development teams rely on tools like Swagger to help them document their APIs and use automation to mitigate some of those issues.

Swagger is not a panacea, however, and using Swagger/OpenAPI for API development does not protect you against all vulnerabilities. Recent research conducted by Soufian El Yadmani of Cybersprint found many flaws in this technology, including many vulnerabilities listed in the OWASP API Top 10.

For example, issues can happen when a web framework is integrated with Swagger. Developers using the FastAPI framework can receive automatically generated Swagger UI fields from the code without declaring the API specification. This works fine on paper, but it might not work with your particular business requirements. The CRUD model might return a different result based on certain circumstances, for instance, or use a different way of calculating the end result that deviates from a typical case. The framework might be unable to introspect the correct result and rely instead on manual intervention. In this case, developers will have to completely own the dependencies and generated documentation to ensure that they match the expected outcome.

In addition, Swagger generates complex code that has little opportunity for customization, which makes it inconvenient to use. For example, if hypermedia links are missing from the response, developers might try to intervene by writing custom queries that deviate from the spec, thereby increasing the risk of producing undocumented responses.

Overall, the hardest part is establishing a proper workflow for working with Swagger: writing specs in YAML, implementing them, and writing unit tests that conform to those specs. The API maintainer’s job is to make sure the implementation doesn’t deviate from the supported features and that the spec remains the authoritative source of truth.

Why It’s Hard to Keep Track of APIs

There are many reasons why APIs are hard to keep track of, and therefore hard to support at scale, but they all relate to technical debt. Working on many products as part of a broader ecosystem is a typical scenario. Parts of the system can become unknown when business priorities shift and certain API services are neglected and forgotten – until they fail to work. This is an example of code ownership debt.

Another type of technical debt is people debt. This happens when developers work on critical API systems for a long time and then leave the company. The domain expertise those people acquired while building the systems is lost to future maintainers unless there is a good handover process. Having many API services and no one who understands how they work significantly compounds the initial problem.

The issue becomes even bigger when you introduce architectural debt by implementing APIs using microservices. Let’s explain.

How Microservices Architectures Lead to API Communication Flows

APIs and microservices are directly related. As you develop applications using microservices, you create highly decoupled services that enclose their own domain and communicate with each other using APIs. For example, a user microservice needs to communicate with the auth microservice to authenticate the user before responding to a request to view that user’s profile. This communication dictates the use of an API contract between each microservice. In addition, API gateways can be used to aggregate multiple APIs under a single namespace, which helps maintain observability and centralized monitoring at a fundamental level.
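As a sketch of that contract (service names, tokens, and return shapes are hypothetical), a user service might gate every profile read behind a call to the auth service:

```python
def authenticate(token: str) -> bool:
    # Stand-in for an HTTP call to the auth microservice
    # (e.g. GET {AUTH_SERVICE_URL}/verify with the token attached).
    return token == "valid-token"

def get_profile(user_id: int, token: str) -> dict:
    # The user service honors its contract with the auth service:
    # no profile data is returned without a successful auth check.
    if not authenticate(token):
        return {"status": 401, "error": "unauthorized"}
    return {"status": 200, "user_id": user_id, "name": "example"}
```

Every such inter-service call is an API boundary, and each boundary adds to the attack surface.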

Most real-world domains need many microservices. In practice, that means having a separate API layer behind each microservice – and each one has its own attack surface. It’s not unusual to have hundreds of microservices that each expose OpenAPI interfaces. This leads to what is known as API sprawl: the sheer number of services makes everything harder to maintain. It’s very common for businesses to use APIs without tracking when or where they are used, which makes it harder to keep track of their security risks.

Not Knowing All Your APIs Leads to Catastrophe

It is a well-known fact that you cannot secure something you don’t know exists. But achieving continuous runtime visibility into all APIs is not trivial. It requires understanding the exposed parts of the API in depth, documenting the obscure sections, running static code analyzers, and subjecting the application to security testing. These steps are mandatory; failing to record and secure these parts is dangerous.

APIs are already extremely susceptible to many kinds of attacks, and there is no limit to the techniques attackers will use to steal sensitive data. Here are some examples of attacks that you may encounter:

  • DDoS: Attackers can target unknown parts of your APIs to do maximum damage. If they’re successful, they can overload the system with terabytes of traffic before you know what hit you. It’s therefore important to safeguard every exposed part of your APIs against DDoS attacks.
  • Innocuous access: Hidden or undocumented parts of your API can be easier to exploit. By using unknown endpoints, for example, attackers can dig in quickly and gain unauthorized access without anyone detecting suspicious activity. Unless traffic from those endpoints is monitored consistently, it is harder for security teams to recognize the danger.
  • Injection: API endpoints that are vulnerable to injection attacks (SQL, XSS, and so on) can help attackers expose sensitive data, leak credentials, and gain insights into how to attack the infrastructure. If the API under attack is unknown or undocumented, you will lose valuable time trying to figure out how to safeguard it.
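Of the three, injection is the easiest to demonstrate – and to prevent. A minimal sketch (table and column names are made up) using parameterized queries, which bind user input as data rather than splicing it into the SQL string:

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # The "?" placeholder binds username as data; even a classic
    # payload like "' OR '1'='1" is treated as a literal string.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?",
                       (username,))
    return cur.fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")
```

The same binding discipline applies to every endpoint, documented or not – which is precisely why undocumented endpoints that skip it are so dangerous.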

Take extra care when you develop and expose APIs – especially the ones that are used for public access. These can be used as target practice for all sorts of attacks. Conducting a solid security assessment of your APIs will position you one step ahead of any future attacks.

Conclusion

Just because you use Swagger, follow documentation best practices, and lead by example does not mean your APIs are secure by default. It takes a village to achieve a superior level of security – mainly because of the inherent difficulty of covering the entire attack surface of your APIs. This is why you need to establish an end-to-end API security strategy, complete with security testing, extensive discovery of available APIs, and modern tooling and monitoring services. With some effort, you will improve the security posture of your APIs without hurting their overall performance.

About the Author

Theo Despoudis is a Senior Software Engineer, a consultant and an experienced mentor. He has a keen interest in Open Source software Architectures, Cloud Computing, best practices and functional programming. He occasionally blogs on several publishing platforms and enjoys creating projects from inspiration. Follow him on Twitter @nerdokto.

To learn more about the many risks (including APIs) in modern application development, download this e-book today.

Or if you would like to learn more about the Checkmarx approach to API security within the Checkmarx One™ Application Security Platform, this white paper explains it all.

]]>
Static Analysis of Infrastructure as Code with Codefresh and Checkmarx https://checkmarx.com/blog/static-analysis-of-infrastructure-as-code-with-codefresh-and-checkmarx/ Tue, 10 May 2022 12:55:00 +0000 https://checkmarx.com/?p=75751 Infrastructure as Code (IaC) is the description of infrastructure (clusters, virtual machines, networking, storage, etc.) with a declarative model and its subsequent management using the same principles as source code. Companies that have adopted Infrastructure as Code have a unified way of working with both applications and infrastructure using the same tools (i.e., version control) and workflows.

IaC tools are now available for many different infrastructure providers ranging from traditional virtual machines to Kubernetes clusters and containerized applications.

One advantage of IaC is that all techniques that are usually aimed at source code (e.g., unit tests, linting, security scanning) can now be applied to infrastructure as well.

Static Analysis with Checkmarx KICS

Static analysis of infrastructure is a particularly important aspect of a secure workflow, as errors in infrastructure can result in anything from unplanned downtime to security issues in production environments.

KICS (Keeping Infrastructure as Code Secure) is a free, open source solution for static code analysis of IaC powered by Checkmarx.

KICS automatically parses common IaC files of any type to detect insecure configurations that could expose your applications, data, or services, which are deployed on the cloud, to attack. That means you can let anyone on your team write IaC files, and then vet the files to ensure they are secure before deployment.

Instead of setting security guidelines in your IT governance policies and hoping engineers and developers follow them when creating IaC files, you can automatically enforce IaC security with KICS.
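As a toy illustration of what such an automated check does (real KICS queries are written in REGO and run against parsed IaC files; the resource shape below is invented), consider flagging security groups that allow ingress from anywhere:

```python
def insecure_ingress(resources):
    """Flag resources with an ingress rule open to 0.0.0.0/0."""
    findings = []
    for res in resources:
        for rule in res.get("ingress", []):
            # A CIDR of 0.0.0.0/0 exposes the port to the whole Internet.
            if rule.get("cidr") == "0.0.0.0/0":
                findings.append(res["name"])
    return findings
```

A scanner like KICS runs hundreds of such checks across every supported IaC format, so the policy is enforced mechanically rather than by convention.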

KICS is an open source tool that supports all mainstream IaC platforms like:

  • Terraform
  • CloudFormation
  • Azure Resource Manager
  • Google Deployment Manager
  • Ansible
  • Kubernetes/Helm manifests
  • and more

KICS also integrates with a variety of software development tools and makes it possible to add IaC security scanning to your existing workflows without friction.

As an open source, platform-agnostic IaC scanning tool, KICS can grow seamlessly along with your development and deployment operations.

Developers can extend KICS with new checks and policies using a simple, industry-standard query language (REGO). In addition, developers can quickly onboard new items to automated scanning workflows while also extending IaC scanning capabilities into new parts of their application stack or new types of IaC resources by taking advantage of KICS’ modular design.

KICS also enables organizations to keep up with security best practices and comply with CIS benchmarks such as the CIS Amazon Web Services Foundations Benchmark version 1.4.0, at Level 1 or Level 2.

For more details on KICS, refer to kics.io

Automating Static Analysis with Codefresh

KICS can be run in several ways out of the box: from the command line, via a Docker container, or even from a web interface. For a production environment, however, an automated workflow in which a deployment pipeline runs KICS as part of the full infrastructure lifecycle is strongly recommended. A logical way to run KICS is as part of a Continuous Integration (CI) process, resulting in repeatable scans against any change in infrastructure files.

The Codefresh Software Delivery Platform includes a Continuous Integration component that can easily run KICS in an automated way using any of the supported triggers. As a simple example, if a developer changes a Kubernetes manifest for an environment, KICS should be executed as part of the commit action to verify that the change is valid.

For Codefresh CI, a KICS pipeline step is already available in the steps marketplace. You can therefore insert KICS anywhere in a Codefresh pipeline and use it to verify any supported IaC file.

Once the security scan is complete, you can further copy/upload the report to any supported storage provider of your choice, or use an intermediate format (e.g., json) to further process the results.
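For example, a pipeline step might parse the JSON output and fail the build when high-severity findings appear. The field names below are illustrative; check the report schema of your KICS version:

```python
import json

def passes_gate(report_text: str, max_high: int = 0) -> bool:
    # Count HIGH-severity queries in a KICS-style JSON report and
    # compare against the allowed threshold for this pipeline.
    report = json.loads(report_text)
    high = sum(1 for q in report.get("queries", [])
               if q.get("severity") == "HIGH")
    return high <= max_high

sample = json.dumps({"queries": [
    {"query_name": "Open Security Group", "severity": "HIGH"},
    {"query_name": "Missing Tag", "severity": "LOW"},
]})
```

Gating the pipeline on the report, rather than merely archiving it, is what turns scanning into enforcement.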

You can also apply the infrastructure files themselves directly via Codefresh pipelines. Codefresh has built-in support for Kubernetes/Helm manifests and can easily use any other IaC tool such as Terraform for creating a complete pipeline that both checks and applies Infrastructure as Code files.

Running KICS using the Codefresh GitOps platform

You can use KICS from the classic Codefresh CI platform as we saw in the previous section. There is also an upcoming integration for using KICS in the new Codefresh Software Delivery Platform, which is specifically created for GitOps deployments and is powered by the Argo Project. Adopting GitOps is a natural extension of Infrastructure as Code, and the new integration will soon appear in the Codefresh Argo Hub.

With KICS and Codefresh, developers and AppSec teams can improve the security of their IaC, reducing risk and increasing confidence in their infrastructure code. In fact, KICS has been downloaded over 650,000 times to date.

]]>
Checkmarx Named a Leader in the 2022 Gartner® Magic Quadrant™ for Application Security Testing for the 5th Consecutive Year https://checkmarx.com/blog/checkmarx-named-a-leader-in-the-2022-gartner-magic-quadrant-for-application-security-testing-for-the-5th-consecutive-year/ Thu, 21 Apr 2022 15:40:13 +0000 https://checkmarx.com/?p=75344 Today marks the much-anticipated release of the 2022 Gartner Magic Quadrant for Application Security Testing1 (AST), and we’re thrilled to announce that Checkmarx has been named a Leader for the 5th consecutive year based on our Ability to Execute and Completeness of Vision.

We believe Checkmarx continues to maintain its strong position in the AST market, and we’re very proud of that. More and more organizations are embedding AST throughout their modern application development and cloud-native initiatives, driving rapid AST market growth that trends alongside the proliferation of software amidst worldwide digital transformation.

We believe that the observations made by Gartner support the need for enterprises to not only implement a strong foundation of testing proprietary code and open source libraries via SAST and SCA scans, but to also account for emerging cloud-native technologies including APIs, containers, microservices, and Infrastructure as Code (IaC). Beyond that, evolving risks in the open source supply chain continue to increase as more open source becomes part of today’s codebases.  

To address the growing need for AST solutions that fit well into modern application development, we released the Checkmarx AST Platform™ late last year. And just last month, we added Checkmarx Supply Chain Security to our portfolio of solutions. Built for the cloud development generation, our AST Platform delivers essential application security testing services from a unified, cloud-based platform. In one scan, it analyzes source code, open source dependencies, supply chain risks, and IaC templates; correlates and verifies the results; and augments them with expert remediation advice. Best of all, these services integrate right into your existing software development tools and processes.

Application security testing solutions that address the broadening risk landscape are no longer a ‘nice to have’ but a ‘must have,’ and today, it’s imperative to leverage innovative solutions that address all code types and the associated risks within modern applications. As a result, Checkmarx is laser-focused on helping our customers navigate software complexity and expand their test coverage to address the way applications are being developed and deployed, so they can improve the security and quality of their software without slowing down development. With 16+ years of innovation in AST, we remain committed and intensely passionate about delivering powerful solutions to organizations that thrive on the software they develop.

As we celebrate being named a Leader in the 2022 Gartner Magic Quadrant for Application Security Testing, we’d like to thank our incredible customers, partners, and employees who have been, and will continue to be, the cornerstone of our success.

Read the Full Report

Download a complimentary copy of the 2022 Gartner Magic Quadrant for Application Security Testing here.

To learn more about the Checkmarx AST Platform and our suite of Checkmarx AST solutions, visit: www.checkmarx.com

1Gartner, Magic Quadrant for Application Security Testing, Dale Gardner, Mark Horvath, and Dionisio Zumerle, 18 April 2022.

Gartner Disclaimer:

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Gartner and Magic Quadrant are registered trademarks of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

]]>
The Open Source Supply Chain Under Assault – New Defenses Are Required https://checkmarx.com/blog/the-open-source-supply-chain-under-assault-new-defenses-are-required/ Tue, 22 Mar 2022 13:03:00 +0000 https://checkmarx.com/?p=74498 Those who’ve been working in information security over the last two decades have likely taken note of attacker Tactics, Techniques, and Procedures (TTPs) and how they’ve evolved over time. Let’s take a closer look at what’s changed.

The Evolution of TTP

In the early days of cyberattacks, attackers spent their time creating self-propagating viruses and worms to exploit vulnerable operating systems and desktop applications. For example, the “I Love You” virus, which dates back to the year 2000, infected over ten million computers worldwide. Names like Code Red, SQL Slammer, Sobig, MyDoom, Netsky, Stuxnet, Zeus, and so on made headlines all over the globe. As a result, antivirus companies proliferated, holes were plugged in operating systems, devices and perimeters were hardened, bug bounties were initiated, and many of these TTPs were defeated.

During much of this same period, a new genre of TTPs emerged in concert with these highly successful malware examples, and phishing became the new name – of an old game. Since perimeter and workstation defenses were somewhat difficult to overcome from the outside-looking-in, attackers knew that if they could fool someone into clicking on a link in an email, back doors could be opened, and perimeter defenses may well be defeated.

Therefore, a whole new generation of malware surfaced in the form of ransomware and botnets. For example, names like Locky, Tiny Banker Trojan, Mirai, WannaCry, Petya, and many more were the next malware variants to gain notoriety. Email phishing defenses, spam detection systems, employee email phishing training, etc. proliferated and helped defeat some of these attacks.

As a result, attackers likely began to conclude, “If we can infect a software supply chain, our malware proliferation and victim count could grow exponentially.” And in December of 2020 they did just that. The SolarWinds supply chain attack took place, leading to both government and enterprise data breaches that made headlines worldwide. However, the SolarWinds’ attack was leveraged against a commercial software supply chain and was not necessarily focused on what is called the open source supply chain.

Why Supply Chain – Why Now?

Today’s attackers realize that by infecting the supply chain of open source libraries, packages, components, and modules hosted in open source repositories, they can open a whole new Pandora’s box. And as we all know, once you open that box, it’s nearly impossible to close. In fact, Checkmarx leadership saw this coming. Back in December of 2019, Maty Siman, Founder and CTO of Checkmarx, contributed to this predictions blog.

Maty wrote, “With organizations increasingly leveraging open source software in their applications, next year, we’ll see an uptick in cybercriminals infiltrating open source projects. Expect to see attackers ‘contributing’ to open source communities more frequently by injecting malicious payloads directly into open source packages, with the goal of developers and organizations leveraging this tainted code in their applications.

As we see this scenario unfold, there will be a growing need for processes like developer and open source contributor background checks [contributor reputation]. Currently, open source environments are based entirely on trust – organizations typically don’t vet developers’ past projects or reputations. However, as attackers take advantage of open source projects, this trust will begin to erode, forcing organizations to take proactive mitigation steps by thoroughly vetting the open source code within their applications, as well as those providing it.”

So, as we see here, Maty Siman was spot on. Not only did Checkmarx see attacks on the open source supply chain coming, in fact, they did something about it by acquiring Dustico in August of 2021. Now, TTPs like dependency confusion, typosquatting, repository jacking (aka ChainJacking), and star jacking are the new name of the game. In fact, Checkmarx just released a new white paper today, Introduction to Supply Chain Attacks, explaining how these attacks actually work.

Landscape Changer: Checkmarx Supply Chain Security

As a result of Maty’s predictions (which did come true, by the way), and their proactive stance on defeating supply chain attacks, Checkmarx just announced a new arrow in the quiver of enterprise-class, open source supply chain defenses. Checkmarx SCA with Supply Chain Security (SCS) is now available, and the solution sets an entirely new bar for all SCA solutions.

Checkmarx is first to market with the supply chain defenses organizations need now, which include:

  • Health and Wellness, and Software Bill of Materials (SBOM)
  • Malicious Package Detection
  • Contributor Reputation
  • Behavior Analysis
  • Continuous Results Processing

In addition to our white paper on supply chain attacks, Checkmarx released another white paper today, Don’t Take Code from Strangers – An Introduction to Checkmarx Supply Chain Security. This paper goes into detail about topics like SLSA, traditional code analysis, and pushing boundaries in secure software supply chain innovation.

Checkmarx SCA with Supply Chain Security (SCS) offers a more comprehensive approach to preventing supply chain attacks and securing open source usage by enabling developers to perform vulnerability, behavioral, and reputational analysis from a single, integrated platform. By natively integrating advanced behavioral analysis into SCA, Checkmarx provides developers with a streamlined, frictionless user experience to enhance their organization’s supply chain security.

To learn more about Checkmarx SCA with Supply Chain Security, you can request a demo here.

]]>
KICS – From IaC Security to Cloud Security Posture and Drift Control https://checkmarx.com/blog/kics-from-iac-security-to-cloud-security-posture-and-drift-control/ Tue, 25 Jan 2022 13:27:27 +0000 https://checkmarx.com/?p=73759 Gartner predicts that by 2025, 70% of all enterprise workloads will be deployed in cloud infrastructure and platform services, and that through 2025, more than 99% of cloud breaches will have a root cause of preventable misconfigurations or mistakes by end users. Hence, as more organizations utilize cloud infrastructure and deploy their software on the cloud as part of their business model, it’s crucial for them to understand their security posture and constantly mitigate security risks by scanning their cloud infrastructure in all environments.

IaC (Infrastructure as Code) scanning as part of your software development life cycle is step one, and now we are taking scanning to the next phase. Besides scanning your IaC files, we can now connect and scan the deployed production environments to help identify any security misconfigurations in those environments. Whether those misconfigurations come from your IaC files, manual resource provisions and changes, or resources not being up to date with current versions or security features, KICS by Checkmarx can now help solve many of these issues.

To help developers, DevOps, and security teams manage cloud resource configurations, keep them aligned across all environments, and follow security best practices and organizational policies while mitigating risk, we introduce the KICS 1.5 release.

This release enables organizations to extract cloud resource configurations from runtime environments on AWS by leveraging Terraformer capabilities. Organizations can then construct IaC files that reflect the runtime configuration and scan them automatically with KICS to determine their actual security posture, as seen in the scan report, which highlights a list of vulnerabilities and misconfigurations.

By using this new capability, developers, DevOps, and security teams can now scan live production environments and get an overview of their cloud security posture. In addition, comparing these results with the results of IaC pipeline scanning can help identify any cloud configuration drift.
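The comparison itself is conceptually a diff between declared and observed configuration. A naive sketch (key names invented) of what a drift check reduces to:

```python
def config_drift(declared: dict, runtime: dict) -> dict:
    # Report every key whose runtime value differs from the value
    # declared in the IaC files extracted alongside it.
    return {
        key: {"declared": declared.get(key), "runtime": runtime.get(key)}
        for key in set(declared) | set(runtime)
        if declared.get(key) != runtime.get(key)
    }
```

Real drift detection also has to account for provider defaults, computed values, and resources created outside IaC entirely, which is why a dedicated tool is needed on top of this idea.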

According to Ori Bendet, Vice President of Product Management, Checkmarx, “With this new capability we are taking cloud infrastructure security a step further. Companies can now scan their IaC pipelines together with their live environments and get a better understanding of their cloud security posture.”

While automatically comparing security findings and misconfigurations across the cloud infrastructure’s different environments, and mitigating risks as soon as they occur, is a major step forward, we are planning to empower developers, DevOps, and security teams even further. Soon we’ll support infrastructure scanning for other cloud providers (such as Azure, GCP, etc.) and deliver an enhanced drift detection tool called “Driffty,” which will complement KICS and provide more actionable insights on top of it. So, stay tuned to what’s coming next!

]]>
4 Essential Security Skills for Modern Application Development https://checkmarx.com/blog/4-essential-security-skills-for-modern-application-development/ Wed, 12 Jan 2022 15:10:03 +0000 https://checkmarx.com/?p=73497 Modern application development strategies provide a range of benefits, such as faster release cycles and applications that are easy to port from one environment to another.

But modern applications also present some steep challenges, not least in the realm of security. In order to thrive in the modern development landscape, developers and AppSec pros alike must be sure that their security skills can address the variety of threats that teams face today – a time when cyberattacks are growing steadily in frequency and scope.

Here’s a list of four essential security skills that anyone involved in modern application development should possess.

IaC Security

Infrastructure-as-Code (or IaC) tools have become an essential part of many modern application development workflows. Using IaC, teams can automatically provision large-scale software environments, which maximizes scalability and minimizes the risk of configuration mistakes due to human error.

But the downside of IaC is that, if the configurations applied using IaC templates contain security issues, those issues will be proliferated across your entire environment. And in most cases, the IaC tools themselves do nothing to alert you to potential security issues.

That’s why developers and AppSec teams must learn to follow security best practices when configuring IaC templates, such as:

  • Secure your secrets: Don’t hard-code passwords or other secrets into IaC templates. Store them in an external secrets manager instead.
  • Modularity: Keep IaC templates short and modular. Trying to cram too much configuration into a single template increases the risk of making a mistake.
  • Use default settings with care: Don’t blindly trust default IaC settings. Make sure you tailor templates to your environment.
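The first point reduces to a simple rule: the template references a secret, it never contains one. A sketch (the variable name is hypothetical) of resolving a password at deploy time instead of hard-coding it:

```python
import os

def db_password() -> str:
    # Resolve the secret from the environment (in production this would
    # come from a secrets manager such as Vault or AWS Secrets Manager)
    # and refuse to proceed if it is missing.
    password = os.environ.get("DB_PASSWORD")
    if not password:
        raise RuntimeError("DB_PASSWORD is not set; refusing to deploy")
    return password
```

Failing loudly on a missing secret is deliberate: a deploy that silently falls back to an empty or default password is itself a misconfiguration.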

Just as important, teams should use IaC scanning tools to validate their IaC configurations prior to applying them. IaC scanners automatically check IaC templates for misconfigurations that could create security issues.

Open Source Security Risks

There are many excellent reasons for modern development teams to leverage third-party open source code. Open source saves time because it allows you to reuse code that someone already wrote rather than having to write it yourself from scratch. It is also often easy to customize. And it’s usually free of cost, which is never a bad thing.

Yet open source also poses some significant security risks when you incorporate it into your applications. You can’t guarantee that third-party open source developers adhere to the same security standards that you do when they write their code.

For that reason, it’s critical to know where in your application you incorporate open source code, and also to scan that code to identify known vulnerabilities within it.
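At its simplest, that scan is a lookup of each pinned dependency version against a vulnerability database. A toy sketch with invented advisory data (a real SCA tool consults live feeds such as the NVD or the GitHub Advisory Database):

```python
def vulnerable_deps(deps: dict, advisories: dict) -> list:
    # Flag any dependency pinned to a version listed in the
    # known-vulnerability advisory map.
    return [name for name, version in deps.items()
            if version in advisories.get(name, set())]
```

The hard part in practice is not this lookup but the inventory feeding it: knowing every place open source code enters your application, including transitive dependencies.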

Container Security

Modern applications are often deployed using containers. Because containers – and the orchestration platforms used to manage them (like Kubernetes) – add another layer of infrastructure and complexity to your software stack, they create a variety of potential security risks that would not be present if you were deploying applications directly on a host OS, without having a container runtime and orchestrator in the equation.

Modern developers and AppSec teams need to know what these risks are and take steps to address them. A full discussion of container security is beyond the scope of this article, but basic concepts include:

  • Image security: Securing container images by ensuring that any external components on which they depend are secure, as well as using a container image scanner to check for vulnerabilities.
  • Configuration security: Scanning the configuration files that are used to deploy and orchestrate containers. Like IaC templates, these files can contain configuration mistakes that enable security breaches.
  • Runtime security monitoring: Using monitoring tools that can collect and analyze data from across complex containerized environments (which means collecting data from various containers, from all of the operating systems running within the cluster of servers that hosts your containers, and from the orchestration platform) to detect signs of breaches at runtime.
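A tiny example of the image-hygiene side of this (a simplified check, not a replacement for a real image scanner): flag Dockerfile base images that use a floating tag, which makes builds non-reproducible and can silently pull a newer, unvetted image.

```python
def unpinned_images(dockerfile: str) -> list:
    # Collect FROM images with no tag or the floating ":latest" tag.
    findings = []
    for line in dockerfile.splitlines():
        parts = line.strip().split()
        if parts and parts[0].upper() == "FROM" and len(parts) > 1:
            image = parts[1]
            if ":" not in image or image.endswith(":latest"):
                findings.append(image)
    return findings
```

Pinning to an explicit version (or better, a digest) is the image-level counterpart of locking dependency versions in application code.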

Microservices Security Risks

Microservices have become massively popular because they make it easy to build agile, scalable applications that lack a single point of failure, among other benefits.

But microservices also significantly increase the complexity of application architectures, which in turn increases the risks of security issues. Because microservices rely on the network to communicate with each other, there is a higher risk of sensitive data exposure. Insecure authentication and authorization controls between microservices could enable a breach to spread from one microservice to an entire app. Security problems with the tools used to help manage microservices (like service meshes) create another set of security risks to manage.

As with container security (which, by the way, is a closely related topic, because microservices are often deployed using containers), there is more to say about microservices security than we can fit into this article. But some core microservices security best practices for developers and AppSec teams to follow include:

  • Microservices isolation: When designing your microservices architecture, strive to isolate each microservice as much as possible. Allow interactions between services only where strictly necessary.
  • Authentication and authorization: Always require authentication and authorization for microservices to communicate with each other.
  • Network isolation: Isolate the internal networks that microservices use from the public Internet, and never expose a microservice directly to the Internet unless it needs to be.
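For the authentication point, one lightweight pattern is signing inter-service requests with a shared secret (the key and payload below are placeholders; production systems more often use mTLS or short-lived service tokens):

```python
import hashlib
import hmac

SHARED_KEY = b"example-shared-key"  # placeholder; load from a secret store

def sign(payload: bytes) -> str:
    # The calling microservice attaches this signature to its request.
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    # The receiving microservice recomputes and compares in constant
    # time, so a forged request from a compromised neighbor is rejected.
    return hmac.compare_digest(sign(payload), signature)
```

Combined with network isolation, this ensures that even a service reachable on the internal network cannot be driven by an unauthenticated caller.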

Conclusion: Modern Application Development Requires Modern Application Security

Most of the security challenges discussed above simply didn’t exist ten years ago. Back then, no one was provisioning hundreds of servers using IaC templates or deploying applications with highly complex architectures based on microservices and containers.

But those practices are the norm for modern application development. Developers and AppSec teams must respond by ensuring that they are able to leverage the tools and skills necessary to meet the modern security risks that go hand-in-hand with modern application development.

Chris Tozzi has worked as a journalist and Linux systems administrator. He has particular interests in open source, agile infrastructure, and networking. He is Senior Editor of content and a DevOps Analyst at Fixate IO. His latest book, For Fun and Profit: A History of the Free and Open Source Software Revolution, was published in 2017.

Click here to learn more about security in the context of Modern Application Development

]]>