Category AI/ML
LESSON
Training data poisoning
Learn how your LLM can become insecure and unreliable through training data poisoning. We'll look at examples and mitigation techniques.
LESSON
Sensitive information disclosure in LLMs
Learn how your LLM might give away too much data, including sensitive information. We'll look at examples and mitigation techniques.
LESSON
Insecure output handling in LLMs
Learn how your LLM can create vulnerabilities by failing to sanitize its output. We'll look at examples and mitigation techniques.
LESSON
Overreliance on LLMs
Learn how you can introduce vulnerabilities into your code through overreliance on LLMs. We'll look at examples and mitigation techniques.
LESSON
Insecure plugins for LLMs
Learn how an attacker can exploit insecure plugins in LLM-based applications, with examples comparing these attacks to similar ones such as resource exhaustion.
LESSON
Denial of service
In this lesson, we'll look at how Denial of Service (DoS) attacks work, why they occur, and how to prevent them. We'll focus specifically on LLMs and OWASP's LLM04.