A Developer’s List of Key Container Security Risks
https://checkmarx.com/blog/a-developers-list-of-key-container-security-risks/ | Tue, 21 Sep 2021

There are a variety of excellent reasons to use containers. They’re more agile and consume fewer resources than virtual machines. They provide more flexibility and security than running applications directly on the OS. And they are easy to orchestrate at massive scale using platforms like Kubernetes.

At the same time, however, containers present some significant challenges, not least in the realm of security. Although the benefits of containers outweigh the security risks in most cases, it’s important to assess the security problems that containers can introduce to your software stack and take steps to remediate them.

Toward that end, this article lists the top seven security risks that containers may pose, along with tips on addressing them.

Risk 1: Running Containers from Insecure Sources

Part of the reason containers have become so popular is that admins can pull a container from a public registry and deploy it with just a few commands.

That’s great from the perspective of achieving agility and speed. But from a security point of view, it can pose problems if the container images that you pull contain malware.

This risk is not just theoretical. Hackers have actively uploaded malicious container images to Docker Hub (the most widely used public container registry) and given them names intended to trick developers into believing they are images from a trusted source. Indeed, according to one source, no fewer than half of all images on Docker Hub contain at least one vulnerability, which is an astounding figure.

The lesson here is that it’s absolutely vital to check and double-check the origins of container images that you pull, especially when dealing with public registries.
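
One lightweight guardrail is to check every image reference against an allowlist of registries you trust before pulling or deploying it. The Python sketch below illustrates the idea; the registry names and the reference-parsing shortcut are simplified assumptions, not a full implementation of Docker's reference grammar.

```python
# Sketch: reject container images that don't come from an approved registry.
# The allowlist entries here are hypothetical examples.
APPROVED_REGISTRIES = {"registry.example.com", "docker.io/library"}

def image_source(image_ref: str) -> str:
    """Return the registry/namespace portion of an image reference."""
    # e.g. "docker.io/library/nginx:1.25" -> "docker.io/library"
    name = image_ref.split("@")[0].split(":")[0]  # drop tag/digest (simplified)
    # Bare names like "nginx" resolve to Docker Hub's official library namespace.
    return name.rsplit("/", 1)[0] if "/" in name else "docker.io/library"

def is_approved(image_ref: str) -> bool:
    return image_source(image_ref) in APPROVED_REGISTRIES

print(is_approved("docker.io/library/nginx:1.25"))      # official image
print(is_approved("docker.io/evil-user/nginx:latest"))  # look-alike image
```

A check like this can run in CI before any `docker pull`, so a typosquatted image name fails the build instead of reaching production.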

Risk 2: Exposing Sensitive Data through Container Images

The risks associated with container registries can run in the opposite direction, too: you could upload data to a private registry that you assume is secure, only to discover that your registry – and the sensitive data you stored in it – are actually accessible to the world at large.

That’s precisely what happened to Vine in 2016. The company uploaded a container image that included source code for its entire platform into a registry that was not properly secured. The registry’s URL hadn’t been publicly shared, but anyone who could guess the URL had unfettered, no-password-required access to the images in it.

Mistakes like this are easier to make than you might imagine. When you’re juggling dozens or even hundreds of container images, it’s easy to fall into the trap of accidentally placing a sensitive image in an unsecured registry, or even forgetting that an image contains sensitive data in the first place.
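
A partial safeguard is to scan content for obvious secrets before it gets baked into an image and pushed to any registry. A minimal sketch, with illustrative (far from exhaustive) patterns:

```python
import re

# Sketch: flag obvious secrets in files before they are added to an image.
# The patterns below are illustrative; real secret scanners use far more.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                       # AWS access key ID shape
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),   # PEM private key header
    re.compile(r"(?i)(password|secret|token)\s*=\s*\S+"),  # key=value style secrets
]

def find_secrets(text: str) -> list[str]:
    """Return the patterns that matched anywhere in the given text."""
    return [p.pattern for p in SECRET_PATTERNS if p.search(text)]

config = "db_user = app\npassword = hunter2\n"
print(find_secrets(config))  # the password line trips the third pattern
```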

Risk 3: Placing Too Much Faith in Image Scanning

Image scanners, which can automatically determine whether containers contain known vulnerabilities, are a vital tool for helping to secure containers.

But scanners are only one type of tool, and they’re hardly a complete guarantee against all types of risks. They work by matching the contents of container images against lists of known vulnerabilities, which means they won’t discover security flaws that have not yet been publicly disclosed. Container scanners may also overlook vulnerabilities if container images are structured in unusual ways or their contents are not labeled in the way the scanner expects.

The takeaway: by all means, use container scanners. But never assume that an image is secure just because your scanner deems it so. Take additional steps to secure the container, such as monitoring the runtime environment for signs of security issues.

Risk 4: Broader Attack Surface

Running containers requires more tools and software layers than running a conventional application. In this respect, containers create a broader attack surface.

When you deploy containers, you have to worry about the security not just of the application and the operating system that hosts it, but also of the container runtime, the orchestrator, and possibly a variety of plugins that the orchestrator uses to manage things like networking and storage. If you run “sidecar” containers to help with tasks like logging, those become a security risk, too.

All of the above can be managed, but it requires a deeper investment in security – and a broader set of security tooling – than you’d use with a traditional, non-containerized application stack.

Risk 5: Bloated Base Images

Container base images are images that developers use as the foundation for creating custom images. Typically, a base image is some kind of operating system, along with any common libraries or other resources required to run the types of applications you are deploying.

It can be tempting to pack more than the bare minimum into base images. You never know what you may need to run your applications in the future, so you may decide to include libraries that aren’t strictly necessary for your applications today, for instance.

But the more you include in your base images, the greater the risk of a vulnerability that allows your containers or applications to be compromised. A best practice is to build base images that are as minimal as possible, even if that means updating them periodically or maintaining different base images for different applications.

Risk 6: Lack of Rigid Isolation

Containers are supposed to isolate applications at the process level, but they don't always do so perfectly. At the end of the day, all containers on a host share that host's kernel, and a bug in the runtime or a misconfiguration in the environment could allow a process running inside one container to access resources that live in other containers, or even gain root access to the host.

This is why it’s extra important in the case of containers to vet your configurations for security as well as monitor runtime environments for malicious activity. There is simply a greater risk of privilege escalation and similar issues with containers than there is with virtual machines.

Risk 7: Less Visibility

The harder it is to observe and monitor an environment, the harder it is to secure it. And when it comes to containers, observability and monitoring are especially difficult.

It’s not that the data you need to track containers doesn’t exist. It’s that the data is spread across multiple locations (inside containers, on Kubernetes worker nodes, on Kubernetes master nodes) and isn’t always persistent: logs inside a container disappear forever when the container instance shuts down, unless you ship them somewhere else first.

Here again, these challenges are manageable. But they require a more sophisticated strategy for keeping track of what is happening inside your environment than you would typically have with a simpler type of application stack.

Conclusion: Containers Are Great, but They Are Harder to Secure

Again, none of the security risks described above are a reason not to use containers at all. But they are reminders that with the great agility that containers provide comes extra responsibility. Before you go pulling images from a random Docker Hub registry and calling it a day, be sure you know where your images came from, what’s in them, and which security risks may arise when they run.

Chris Tozzi has worked as a journalist and Linux systems administrator. He has particular interests in open source, agile infrastructure, and networking. He is Senior Editor of content and a DevOps Analyst at Fixate IO. His latest book, For Fun and Profit: A History of the Free and Open Source Software Revolution, was published in 2017.

Download our Ultimate Guide to SCA here.

A Developer’s Guide to Managing Open Source Risks
https://checkmarx.com/blog/a-developers-guide-to-managing-open-source-risks/ | Thu, 16 Sep 2021

We’re living in an open source world. If you’re a developer today, it’s very likely that – no matter where you work or what type of applications you build – you rely at least in part on open source. Indeed, the Linux Foundation reported in 2018 that 72 percent of organizations use open source software in one way or another, and that more than half actively incorporate open source code into their commercial products.

It’s easy to understand why open source is so pervasive. By importing open source libraries, extensions, and other resources into applications, developers save themselves from having to reinvent the wheel. They can reuse code that others have already written, which frees up more time to write innovative features that don’t yet exist.

Yet open source can have its downsides. In order to leverage open source responsibly and avoid the security and compliance challenges that often accompany open source code, developers need full visibility into the open source software they use and the risks associated with it.


To provide guidance on that point, this article walks through the most common risks of incorporating open source code into a larger codebase. It also identifies best practices for working with open source code.

Risk 1: Inconsistent Security Standards

When it comes to security, open source code varies tremendously. Some open source projects, like the Linux kernel, maintain very high security standards (although even they sometimes let vulnerabilities slip by). Others, like random tools on GitHub that were written by CS students for a class assignment, don’t always set the bar so high.

The point is not that open source software is always insecure; often it’s very secure. But sometimes it’s not secure at all.

This inconsistency presents a major challenge for developers. It means they have to vet the security of third-party open source code on a case-by-case basis. Although open source advocates like to claim that “many eyeballs make all bugs shallow,” and that vulnerabilities are therefore likely to be discovered and fixed quickly by the open source community, the fact is that the number of eyeballs on a given open source codebase can vary significantly. The less attention an open source project gets, and the less experienced its developers, the more likely it is to have low security standards.

Risk 2: Unknown Source Code Origins

Sometimes, it’s hard to vet the security of open source code because it’s unclear where the code originated in the first place.

For example, you might find a source code tarball on a website that offers little information about who wrote the code. Maybe there is a README inside the archive that attributes the code to someone, but you have no way of verifying its authenticity.

Or, perhaps you clone code from a git repository, assuming that the maintainer of the repo is the code’s author. But the repo could contain code that actually originated somewhere else, and was simply copied into the repo you use. Here again, any documentation files that mention authorship are difficult to verify.

Being unsure where code originated matters because it’s easier to trust code when you know it was written by experienced, well-intentioned developers. Obviously, you should scan the code for vulnerabilities either way, but you can make better decisions about whether or not to use third-party code if you have confidence in where it came from.

Risk 3: Licensing Non-Compliance

Developers have a tendency to think they’re experts in open source licenses. But most are not. They often misunderstand licensing terms and hold false beliefs, such as “the GPL says you can’t sell your code for money” or “under the MIT license, you can do whatever you want as long as you attribute the original developers.”

Misconceptions like these create risk in the form of non-compliance with open source licensing terms. If developers don’t understand the specific requirements of the licenses that govern each of the various open source components they use, they may violate licensing agreements. And while it’s easy to assume that no one will actually sue you for breaking an open source license, such lawsuits are more frequent than you may think.

Complicating matters is the fact that open source projects occasionally change their licensing terms, as Elastic famously did in 2021. This means it’s not enough to determine the licensing requirements of open source code the first time you use it; you need to reevaluate every time you move to a new version.

Best Practices for Managing Open Source Risks

The risks identified above are not a reason to avoid open source. When managed responsibly, open source provides developers a range of benefits that outweigh the risks.

To keep the risks in check, consider the following best practices for working with open source code:

  • Download code from the project’s website or from GitHub repos that are linked from the project’s site. This is better than pulling code from random GitHub repositories, where there is a risk that a seemingly legitimate repo actually contains vulnerability-ridden code.
  • Always scan your code, no matter who wrote it or how certain you are of its origin.
  • Continuously validate the licensing terms of all open source components that you use.
  • Collaborate with your legal team to verify that your practices surrounding open source code actually comply with licensing terms. Don’t assume your developers “just know” what the licenses require.
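
The license checks in particular lend themselves to automation. A minimal sketch of a license gate, assuming the dependency-to-license mapping comes from your SCA tool (the allowlists and package names here are hypothetical):

```python
# Sketch: a minimal license gate for open source dependencies.
# The license sets and dependency names are hypothetical examples;
# in practice the mapping would come from an SCA tool's report.
ALLOWED_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause"}
NEEDS_LEGAL_REVIEW = {"GPL-3.0-only", "AGPL-3.0-only", "SSPL-1.0"}

def check_dependency(license_id: str) -> str:
    """Classify a dependency's license against the organization's policy."""
    if license_id in ALLOWED_LICENSES:
        return "allowed"
    if license_id in NEEDS_LEGAL_REVIEW:
        return "legal review"
    return "unknown - investigate"

deps = {"left-pad-like-lib": "MIT", "search-engine-lib": "SSPL-1.0"}
for name, lic in deps.items():
    print(name, "->", check_dependency(lic))
```

Running a gate like this on every dependency update catches license changes (the Elastic scenario above) automatically instead of relying on developers to notice.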

Again, open source is an excellent tool. The life of the modern developer would probably be considerably more tedious without the ubiquity of freely reusable open source code. But to avoid shooting yourself in the foot when working with open source, it’s crucial to manage the inherent risks.



A Developer’s List of Infrastructure as Code (IaC) Risks
https://checkmarx.com/blog/a-developers-list-of-infrastructure-as-code-iac-risks/ | Tue, 14 Sep 2021

Infrastructure-as-Code (IaC) tools let developers and DevOps teams describe common infrastructure components (servers, VPCs, IP addresses, VMs) in a configuration language, then use that configuration as a blueprint to provision actual infrastructure services on demand. The benefits are better control over the change process and more efficiency and consistency when deploying changes.

However, the devil is in the details, and implementing these tools in real life often comes with risks. This article explains those risks from a developer’s point of view.

Biggest Risks of IaC

Steep learning curve

To start with, using IaC tools without taking the dedicated time to learn some of their quirks and limitations can become not just an inconvenience, but a serious problem.

Many of the most prevalent IaC tools, such as Terraform and CloudFormation, are incredibly simple to get started with, but they become very complicated as your requirements change over time. For example, using custom resources in CloudFormation requires some insight into how CloudFormation works under the hood. Failing to delve into those inner details puts your organization at increased risk of configuration drift, deploying the wrong infrastructure components, or leaving a system in an incomplete state.

Another example that requires deep insight into how Terraform works is Terraform imports. The official documentation warns that:

“If you import the same object multiple times, Terraform may exhibit unwanted behavior.”

However, you will need to delve into the details in order to understand the nature of this unwanted behavior. What does it really mean? Does it create configuration drift? Does it break the local tfstate file? How can you prevent importing the same object multiple times? All of those questions need time and dedication to answer. As a developer, you should reserve that time as part of the process of learning the tool while using it for real projects.

Human mistakes

Some IaC tools have a distinct set of phases that you need to perform in a specific order. For example, in Terraform we have the following steps:

  1. terraform plan: Creates an execution plan describing the changes Terraform will make to the infrastructure.
  2. terraform apply: Applies that plan.

Failing to review the plan before running apply can result in destructive changes; the plan output is your last chance to inspect changes before they take effect. The caveat is that, by default, nothing forces you to follow this order: applying directly, skipping the plan step, can lead to unfortunate mistakes.

The repercussions of missing a destructive change can be excruciatingly painful. It is recommended that you invest in establishing automated protections to apply changes to infrastructure. For example, instead of manually reviewing terraform plan outputs (which could be lengthy), you should invest in PR (pull request) automation tools like Atlantis to help in discovering any unintentional destructive changes early.
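
As a simple illustration, a CI step can parse the machine-readable plan (`terraform show -json tfplan`) and fail the build when any resource would be deleted. The sketch below runs against an abbreviated, hypothetical plan document:

```python
import json

# Sketch: scan the JSON output of `terraform show -json tfplan` for destructive
# actions before anyone runs `terraform apply`. The plan snippet is abbreviated.
def destructive_changes(plan: dict) -> list[str]:
    """Return addresses of resources the plan would delete (or replace)."""
    flagged = []
    for rc in plan.get("resource_changes", []):
        if "delete" in rc.get("change", {}).get("actions", []):
            flagged.append(rc["address"])
    return flagged

plan = json.loads("""
{"resource_changes": [
  {"address": "aws_instance.web", "change": {"actions": ["update"]}},
  {"address": "aws_db_instance.main", "change": {"actions": ["delete", "create"]}}
]}
""")
print(destructive_changes(plan))  # the database would be destroyed and recreated
```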

Configuration drifts

Terraform and other tools can maintain infrastructure components and their attributes only as long as those components are managed solely by them. Terraform cannot detect drift in resources that were changed by other tools such as Chef or Puppet. This can be a major problem for organizations running a variety of IaC tools that are unaware of each other or whose responsibilities overlap.

Scanning for drift helps you discover those deviations between what the IaC tool believes and what actually exists in the real infrastructure. Because each environment is different, there are virtually no off-the-shelf services that do this accurately for you. The best approach to managing configuration drift is to ensure there is no overlap between the various IaC tools, that all infrastructure changes flow through a single IaC tool, and that some form of reporting and health checking is integrated. External checks can also help verify this.

Difficulty protecting sensitive data and avoiding exposed ports

Although Terraform and CloudFormation can provision infrastructure components, there are some questions that need answering. For example, how would you know if the process leaked sensitive data along the way? How would you know whether the S3 bucket it manages actually carries the security profile you specified? Are you using a provider that leaks sensitive data to stdout? Did you unexpectedly open a port to the internet?

These are only a subset of questions you need to answer when delegating your Infrastructure management to IaC tools. It’s easier to make a mistake and expose resources to the public web than the opposite, so there are a lot of things at stake here.

[In fact, in an analysis Checkmarx performed in August 2021, one of the most common IaC misconfigurations found was HTTP port 80 and ELB ports left open to the public.]
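
Checks like the last one are easy to automate once the configuration has been parsed. The sketch below flags firewall rules open to the entire internet; the rule format is a simplified stand-in for a parsed IaC resource, not any real provider's schema:

```python
# Sketch: the kind of check a tool like KICS automates - flag firewall rules
# that expose ports to the whole internet. The rule dicts are a simplified
# stand-in for parsed IaC resources, not a real provider schema.
def world_open_ports(rules: list[dict]) -> list[int]:
    """Return ports reachable from any address on the internet."""
    return [r["port"] for r in rules
            if r.get("direction") == "ingress" and r.get("cidr") == "0.0.0.0/0"]

rules = [
    {"direction": "ingress", "port": 443, "cidr": "10.0.0.0/8"},  # internal only
    {"direction": "ingress", "port": 80,  "cidr": "0.0.0.0/0"},   # open to the world
]
print(world_open_ports(rules))
```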

How Should Developers Detect and Mitigate the Most Common IaC Risks?

As mentioned, it’s not all doom and gloom: most of the common risks can be mitigated by adhering to some common security best practices. Here is what you can start practicing:

  • Spend time with the tools: Fully understand how each tool works: its quirks, its open issues, and its best practices. Participate in meetups and subscribe to events so you can learn from other industry experts. This will give you an advantage when figuring out how to perform tasks, simple and advanced, without compromising your security posture.
  • Establish common engineering processes and best practices: Practice peer code reviews, CI/CD checks, linting, and verification. This can reduce the number of common accidents and mistakes that happen when you rush things.
  • Use purpose-built IaC security tools: Look around for purpose-built tools like Checkmarx KICS (Keeping Infrastructure as Code Secure) that help you establish a secure and efficient infrastructure-as-code pipeline. The benefit of this tool is that you can configure it to match your organization’s policies due to its great extensibility and cloud provider coverage.

[If you are looking for a quick look at and better understanding of your current state, you can also upload your IaC into Checkmarx’s new KICSaaS and get immediate results without having to download anything. Give it a try at: https://kics.checkmarx.net]

How and Where Can Developers and Security Teams Learn More About the Risks?

First and foremost, you can learn more about the risks of IaC by asking local IaC experts or official communities about some of their experiences with IaC. You may find that many of the issues they encounter prepare you for what to expect and what to look out for.

Furthermore, you should subscribe to reputable email lists and blogs from vendors that specialize in those areas, and offer tailored solutions like KICS for tackling most of the problems explored in this article.

Lastly, you should perform your own investigation to assess the pros and cons of each tool based on your own requirements and success criteria. The one-size-fits-all approach isn’t suitable for all lines of business. By performing all the necessary steps, you can hopefully mitigate most of the critical risks of using IaC tools.

Theo Despoudis is a Senior Software Engineer, a consultant and an experienced mentor. He has a keen interest in Open Source Architectures, Cloud Computing, best practices and functional programming. He occasionally blogs on several publishing platforms and enjoys creating projects from inspiration. Follow him on Twitter @nerdokto. He can be contacted via http://www.techway.io/.


A Developer’s List of Microservices Risks
https://checkmarx.com/blog/a-developers-list-of-microservices-risks/ | Wed, 01 Sep 2021

If you’re a developer today, it’s hard not to love microservices. By adding agility and resiliency to applications, microservices architectures make it easier to build high-performing apps.

But a microservices strategy only pays off if you effectively manage the risks that go hand-in-hand with microservices. In certain key ways, microservices are fundamentally more challenging from a security perspective than less complex monolithic architectures. If you fail to manage the security risks of microservices, you may find yourself with an application that doesn’t perform well at all because it has been compromised – a problem that no amount of agility will solve.

That’s why it’s important to identify and address the security risks that accompany microservices. Here’s a list of the top five most common security issues developers should think about when writing microservices-based apps, along with tips on addressing them.

Risk 1: Complexity

If you’ve ever written or managed a microservices app, you know that microservices architectures bring complexity to a whole new level.

They make it more complex to write applications because developers have to ensure that each microservice can find and communicate with other microservices efficiently and reliably. And they make management harder because admins have to contend with service discovery, distributed log data, instances that constantly spin up and down, and so on.

Both of these challenges translate to security risks in the sense that, when it’s hard to keep track of everything happening in an environment, it’s more difficult to detect vulnerabilities. In order to conquer the complexity, developers and security teams need stronger tools for managing source code and monitoring runtime environments than they would require when dealing with monolithic apps.

Risk 2: Limited Environment Control

Depending on how you deploy microservices, you may have limited control over the runtime environment.

For example, if you use serverless functions to host microservices, you will have little to no access to the host operating system. You get only the monitoring, access control, and other tooling that the serverless function platform provides to you.

From a security perspective, this makes matters significantly more challenging because you can’t rely on OS-level tools to harden your microservices, isolate them from one another, or collect data that might reveal security issues. You have to handle all of the risks within the microservice itself. That can certainly be done, but here again, it requires more coordination and effort than developers of monolithic apps are accustomed to.

Risk 3: “Denial-of-Wallet” Attacks

Part of the reason everyone loves microservices is that they can scale so easily because new instances can be launched in just seconds.

That’s great when you actually want your microservices to scale. But what if someone with malicious intent gets hold of your environment and massively scales up your microservices in such a way that they consume enormous amounts of cloud resources?

You end up as the victim of a so-called Denial-of-Wallet attack, which is an attack designed to waste victims’ money, even if it doesn’t actually disrupt service.

So far, Denial-of-Wallet attacks remain purely theoretical; no such attack has yet been reported in the wild. Still, this is a real risk, especially for businesses with poorly secured cloud computing accounts or fewer measures in place to detect malicious spending activity.
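
One inexpensive mitigation is a guard in front of your autoscaling logic that caps replica counts and estimated spend before honoring a scale-up request. The limits and prices below are made-up numbers for illustration:

```python
# Sketch: a scaling guard that refuses scale-up requests exceeding a replica
# cap or an estimated hourly budget. All limits and prices are made-up numbers.
MAX_REPLICAS = 50
MAX_HOURLY_COST = 20.0       # dollars
COST_PER_REPLICA_HOUR = 0.10 # dollars

def approve_scale(requested_replicas: int) -> bool:
    """Approve a scale-up only if it stays within replica and budget caps."""
    if requested_replicas > MAX_REPLICAS:
        return False
    return requested_replicas * COST_PER_REPLICA_HOUR <= MAX_HOURLY_COST

print(approve_scale(10))    # normal traffic spike
print(approve_scale(5000))  # suspicious request: refuse and alert
```

A rejected request is also a useful alerting signal, since legitimate traffic rarely demands a hundredfold scale-up at once.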

Risk 4: Securing Data

In a monolithic application, data is usually stored in a simple and straightforward way. It probably lives on the local file system of the server that hosts the data, or possibly in network-connected storage that is mapped to the server’s local storage. This data is easy to encrypt and lock down with access controls.

Microservices typically use an entirely different storage architecture. Because microservices are usually distributed across a cluster of servers, you can’t rely on local storage and OS-level access controls. Instead, you most often use some kind of scale-out storage system that abstracts data from underlying storage media.

These storage systems can usually be locked down with access controls. But the access controls are often more complex than dealing with permissions at the file system level, which means it’s easier for developers to make mistakes that invite security breaches.

On top of this, the complexity of ensuring that each microservice has the necessary level of access to the storage can lead some developers to do the easy but irresponsible thing of failing to configure granular storage policies and allowing all microservices to access all data.

Either way, you end up with storage that is not as secure as that of a conventional, monolithic app.

The answer here is to ensure that you take full advantage of granular access control within storage systems, while also scanning access configurations for potential misconfigurations.
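
A simple automated check can at least catch the worst case: policies that grant everyone access to everything. The policy structure below is a simplified illustration, not a real cloud provider's schema:

```python
# Sketch: detect the "everything can read everything" anti-pattern in storage
# access policies. The policy dicts are a simplified illustration, not any
# real cloud provider's policy schema.
def overly_broad(policies: list[dict]) -> list[str]:
    """Return names of policies granting wildcard principals or resources."""
    return [p["name"] for p in policies
            if p.get("principal") == "*" or p.get("resources") == ["*"]]

policies = [
    {"name": "orders-svc-read", "principal": "orders-svc", "resources": ["orders-bucket"]},
    {"name": "allow-all", "principal": "*", "resources": ["*"]},
]
print(overly_broad(policies))
```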

Risk 5: Securing the Network

Securing the network is critical for any type of application that connects to the network – which means virtually every application today.

When you’re dealing with microservices, however, network security assumes a whole new level of complexity. That’s because microservices don’t just communicate with end-users or third-party resources over the Internet, as a monolith would. They also usually rely on a complex web of overlay networks to share information among themselves.

More networks mean more opportunities for attackers to find and exploit vulnerabilities. They can intercept sensitive data that microservices exchange with each other, for example, or use internal networks to escalate breaches from one microservice to others.

Conclusion: Are Microservices Worth the Risk?

All of these risks can be managed. To say that developers should avoid microservices because they’re too complex and challenging would be like saying we should return to the age of horse-drawn buggies because cars are too dirty and dangerous.

But that doesn’t mean that it’s not important to manage the risks of microservices. Just as no responsible driver would move a car without taking the reasonable precaution of buckling up first, no developer should deploy microservices without taking steps to manage their inherent risks.


