• Browse topics
Login

OWASP Top 10 LLM and GenAI 2024

~2hrs 30mins estimated

This is the 2024 OWASP Top 10 LLM and GenAI Learning Path. To view the latest, please click here.

Save your learning progress.

  • Track your learning progress
  • Keep up to date with the latest vulnerabilities
  • Scan your application code to stay secure
Sign up for free

LLM01: Prompt Injection

Prompt injection is a vulnerability where attackers manipulate the input prompts of a language model to alter its behavior, potentially causing it to generate misleading, harmful, or unauthorized outputs.

LLM02: Insecure Output Handling

Insecure output handling refers to the failure to properly manage and sanitize the outputs generated by a system, which can lead to the disclosure of sensitive information, injection attacks, or other security vulnerabilities.

LLM03: Training Data Poisoning

Training data poisoning is a malicious attack where adversaries intentionally introduce incorrect or biased data into a machine learning model’s training set to manipulate the model’s outcomes or degrade its performance.

LLM04: Model Denial of Service

Model Denial of Service (DoS) refers to attacks that disrupt or exhaust a machine learning model’s resources, rendering it unable to process legitimate requests or operate effectively.

LLM05: Supply Chain Vulnerabilities

Supply chain vulnerabilities refers to the risks associated with third-party dependencies and components used in developing or deploying large language models, which can introduce security weaknesses and potential backdoors into the system.

LLM06: Sensitive Information Disclosure

Sensitive information disclosure involves the risk of a large language model unintentionally revealing confidential or private data through its responses or outputs.

LLM07: Insecure Plugin Design

Insecure plugin design refers to vulnerabilities in plugins or extensions used with large language models, which can introduce security flaws, leading to potential unauthorized access or data breaches.

LLM08: Excessive Agency

Excessive agency describes the risk of a large language model being given too much autonomy or control, potentially leading it to execute harmful actions or make decisions beyond its intended scope.

LLM09: Overreliance

Overreliance refers to the risk of users depending too heavily on large language models for critical tasks, which can lead to errors or harmful outcomes if the model’s outputs are inaccurate or biased.

LLM10: Model Theft

LLM10: Model Theft involves the unauthorized copying or replication of a large language model, leading to intellectual property loss and potential misuse of the stolen model.