• Browse topics
Login

Prompt Engineering

It’s not what you ask, it’s how you ask it.

~15mins estimated

AI/ML

Prompt Engineering: The Basics

What is prompt engineering?

Prompt engineering is the practice of crafting, refining, and optimizing inputs to an LLM in order to get the most accurate and safe results. Think of it as programming in natural language. Because LLMs are probabilistic, subtle changes in your wording, like providing a specific role or a set of constraints, drastically change the "path" a model takes through its training data.

About This Lesson

In this lesson, you will move beyond simple questions. You will learn the Role-Task-Constraint framework and how to use techniques like "Few-Shot Prompting" to ensure the AI follows your organization’s security standards. We will move from being a casual user to an intentional prompt engineer.

FUN FACT

Lost in the Middle

The Lost-in-the-Middle Problem refers to a phenomenon in LLMs where the beginning of the prompt (primacy effect) and the end (recency effect) are weighted higher. Information placed in the middle of a prompt is often overlooked.

Prompt Engineering in Action

Marcus is building a feature that allows users to upload profile pictures. He wants the AI to write a function that handles the file upload and saves it to a specific directory. He starts with a prompt:

"Write a python function using FastAPI to handle an image upload and save it to the /uploads folder."

The AI generates a functional script using the FastAPI file handling, but it doesn't include any logic to check the file type, limit the file size, or rename the file to prevent directory traversal attacks.

Marcus realizes that while the code "works," he has essentially given any user the ability to upload a malicious .exe or a 10GB file to his server.

Prompt Engineering Under the Hood

The failure of Marcus's prompt stems from a fundamental misunderstanding of how LLMs interpret intent versus instruction. When Marcus gave a broad, high-level instruction, the AI prioritized making the code look familiar and readable over making it secure. Because the Internet is saturated with basic tutorials that skip security for the sake of brevity, the AI’s "most likely" next tokens represented a simplified, insecure implementation. This is known as the "path of least resistance" in generative modeling. The AI gives you the most generic version of the solution because it lacks the context that this code is destined for a production environment.

Furthermore, without a defined Persona, the model lacks a cognitive filter. By default, an LLM acts as a generalist assistant. It doesn't inherently prioritize security unless specifically told that security is a core requirement of the task. In Marcus's case, the model saw the task of "saving a file" as the primary goal. It succeeded in that task perfectly, but because it doesn't understand the concept of a malicious user in the way a human does, it saw security features like file-type validation or UUID renaming as extraneous details that weren't requested.

Finally, the prompt suffered from a lack of delimited constraints. In LLM processing, instructions can bleed into one another. Without clear boundaries (like "Do X, but never do Y"), the model may prioritize the "Do X" part while ignoring the implicit safety standards a professional developer would expect. This ambiguity creates a vacuum where vulnerabilities like directory traversal and denial-of-service (via large file uploads) can thrive.

The Impacts of Prompt Engineering Flaws

Poor prompting doesn't just lead to bugs; it leads to systemic security debt:

  • Prompt injection: If you build an app that takes user input and feeds it directly into an LLM prompt, a malicious user could "hijack" the prompt to steal data or bypass your application's logic.
  • Inconsistent outputs: Without a structured prompt, the AI’s stochastic nature means it might give you secure code one day and insecure code the next, making your security posture unpredictable.
  • Hallucinated security: The AI might confidently claim code is secure or use a security library that doesn't actually exist, leading to a false sense of safety that prevents human developers from performing a proper review.

Scan your code & stay secure with Snyk - for FREE!

Did you know you can use Snyk for free to verify that your code
doesn't include this or other vulnerabilities?

Scan your code

Prompt Engineering Best Practices

The Secure Way (Structured Prompting)

To achieve secure-by-default output, you must adopt a disciplined approach to prompt construction. One way to do this is to use the RICCE Framework.

The RICCE components are:

  • Role: Define a specific persona (e.g., "You are a Senior Security Engineer"). This primes the model to use high-quality, secure patterns from its training data.
  • Instruction: Be ruthlessly specific about the task. Instead of "Save this file," use "Write a function to validate and store a user-uploaded image."
  • Context: Explain the "why" and the environment. Tell the AI if the code is for a public-facing API or an internal tool so it understands the threat model.
  • Constraints: Set clear boundaries. Explicitly list what the AI must not do, such as "Do not use deprecated libraries" or "Do not trust user-provided filenames."
  • Examples: Provide a small "micro-example" of the desired code structure. This is often called Few-Shot Prompting and is the single most effective way to ensure the AI follows your team's specific security standards.

The Improved Prompt

"Role: You are a Senior Security Engineer specializing in Python.

Instruction: Create a FastAPI route to handle profile picture uploads.

Context: This is a public-facing endpoint for a production web application. We need to prevent remote code execution and denial-of-service attacks.

Constraints: > 1. Limit file size to 2MB. 2. Only allow .jpg and .png extensions. 3. Never use the original filename; generate a random UUID for the storage path to prevent directory traversal.

Example: Use the pattern file_ext = os.path.splitext(avatar.filename)[1] and unique_filename = f"{uuid.uuid4()}{file_ext}"."

The Resulting Secure Code:

FUN FACT

Chain of Thought prompting

Chain of Thought prompting, asking the AI to "think step-by-step" or "explain your security reasoning before writing code", can increase the accuracy of complex security logic by helping the model maintain context over longer sequences.

Keep on learning about secure AI development. The next lesson is on developing an app with AI...

Quiz

Quiz

Which of the following describes why "role prompting" (e.g., "Act as a Senior Python Developer") is an effective technique in prompt engineering?

Congratulations

Nice work! You are now more aware of how to craft a LLM prompt with intention and with security in mind. Not to mention, we looked at what can happen when a prompt is too general, and the vulnerabilities that can subsequently be introduced. The first step in security is knowledge! Keep on going with the next lesson, which produces an entirely AI coded app, and exemplifies potential problems.