• Browse topics
Login
Login

SNYK LEARN LOGIN

OTHER REGIONS

For Snyk Enterprise customers with regional contracts. More info

Supply chain vulnerabilities

When a trusted third party becomes untrustworthy

AI/ML

Supply chain vulnerability: the basics

What are supply chain vulnerabilities in LLMs?

Let’s start by talking about supply chains. A supply chain is a network of entities, resources, activities, and technologies involved in creating and delivering a product or service from the initial supplier to the final customer. Supply chain is often talked about in manufacturing but it also applies to application development and cyber security.

In the context of cyber security, the supply chain encompasses all the digital and physical elements involved in the development, deployment, and maintenance of software, hardware, and services. Cyber security supply chain risks arise from vulnerabilities at any point in this chain. And there are a lot of points! The IDE, plugins, containers, build tools, storage, etc. And we added one more recently: AI tools.

The supply chain in the context of Artificial Intelligence (AI) encompasses all the steps and components involved in developing, deploying, and maintaining AI systems. This includes data collection, algorithm development, model training, deployment, and ongoing maintenance. Each step in this chain can introduce potential vulnerabilities and risks.

About this lesson

In this lesson, you will learn about supply chain vulnerabilities and how they can be introduced along the supply chain. We’ll look at the risks involved and how best to mitigate them. We’ll also discuss how this vulnerability can be seen in a fictional application.

FUN FACT

LLM Top 10

Supply chain vulnerability is closely related to LLM03 (training data poisoning) and LLM07 (insecure plugins). Training data and plugins are part of the supply chain; if they are compromised, the whole chain is compromised.

Supply chain vulnerability in action

Supply chain vulnerabilities in LLMs

  • STEP 1
  • STEP 2
  • STEP 3
  • STEP 4
  • STEP 5
  • STEP 6

Setting the stage

We want to start a blog that posts cooking recipes daily. It'll be a lot of work but this can really drive some traffic and maybe make some money!

supply-chain-1.svg

In the above example, we are relying on a third-party plugin and their training data. The problem is, if you look at our lesson on training data poisoning, you'll see that the Old Fashion Cooking forum has been tampered with and the training data has been poisoned!

Supply chain vulnerability under the hood

As previously mentioned, supply chain vulnerabilities in LLMs are closely related to training data poisoning (LLM03) and insecure plugins (LLM07). The example above combines those two and creates a vulnerability in the chain.

Let’s look at a fictional example different from the one above. In this example, we have a Python application that uses a pre-trained LLM from a third-party source to generate text responses in a chatbot application. The pre-trained model is downloaded from an online repository.

You might notice the weakness in the supply chain. We are relying on a third-party model. Let’s also imagine that the fictitious-pretrained-model has been compromised by an attacker who injected a backdoor into the model. The backdoor triggers when the model receives a specific input, causing it to generate malicious responses or execute harmful actions.

The next question you might have is, “How does a model get compromised?” This is a great question with a complex answer. Injecting a backdoor into an LLM is a sophisticated attack that typically requires access to the model's training process or its weights after training.

We looked at data poisoning in a different lesson, and an attack could introduce malicious examples into the training dataset. For instance, if the model is trained on text data, the attacker could insert sentences that contain specific trigger words or phrases. The model produces a predefined, malicious output when these triggers appear in the input.

An easier way might be for the attacker to create or maintain a malicious pre-trained model. It might even be legitimate, but the maintainer introduces malicious inputs once it becomes popular. Then, attackers can distribute this pre-trained model with hidden backdoors, which unsuspecting users download and use in their applications. These pre-trained models might be uploaded to a popular repository (like Hugging Face or GitHub) with a backdoor that activates under specific conditions.

What is the impact of supply chain vulnerabilities?

The obvious impact of supply chain vulnerabilities is malicious outputs. This can be seen through our example above and in other examples shown throughout our LLM lessons. However, these aren’t the only vulnerabilities.

Attackers can exploit backdoors in compromised LLMs to access sensitive information processed by the models. This includes user data, business data, and any other sensitive content. An improperly set up LLM may have too much access, or the training data may have obtained data by scraping confidential information, which could potentially make its way into the LLM.

If, in the case above, the LLM has access to sensitive data, such as login credentials, this data could be extracted via the LLM interface. This could lead to unauthorized access. And any time there is a breach or downtime, this could lead to financial loss and reputation damage.

FUN FACT

XZ Utils backdoor

Earlier, we mentioned that it might be easier for an attacker to introduce a backdoor into legitimate software or models. This happened with XZ Utils, where a user spent years gaining trust and then introduced malicious code.

Scan your code & stay secure with Snyk - for FREE!

Did you know you can use Snyk for free to verify that your code
doesn't include this or other vulnerabilities?

Scan your code

Supply chain vulnerability mitigation

Mitigating supply chain vulnerabilities in LLMs is not straightforward. Because, in our example, we rely on trust. We trust the third-party model to be accurate and secure. For us to mitigate (not eliminate) these vulnerabilities, we need to vet our data sources carefully. As with insecure plugins, we need to make sure that the plugins we are using are secure, tested, and proven. Using open-source models has the benefit of a community standing behind a model and vetting it.

If the scenario is similar to that of XZ Utils, it can be very hard to detect. However, if there is a working, vetted model, you can compare the model you downloaded to the vetted model by comparing the hash.

We can also perform anomaly detection on models and data to help detect tampering and poisoning, as discussed in LLM 03: Training Data Poisoning.

Quiz

Test your knowledge!

Quiz

Using a third-party tool for your LLM potentially opens your application up to which of the following:

Keep learning

Learn more about training data poisoning and other LLM vulnerabilities.