
AI Security Posture Management (AI-SPM)

Enable users to detect AI assets via AI-BOM scans and enforce governance through Natural Language Policies as well as traditional, menu-driven policy rules.

~15mins estimated

Governing the AI Lifecycle

In the "Navigating the EVO Interface" course, we explored how to navigate the interface. Now, we shift our focus to the core mission of AI-SPM: establishing a continuous loop of visibility, risk assessment, and automated governance.

Traditional AppSec tools often miss AI risk because models aren't "code" in the traditional sense. EVO bridges this gap by providing AI assets with the same level of visibility and governance as traditional software components.

The AI-BOM: Beyond the Standard Manifest

The foundation of AI-SPM is the AI Bill of Materials (AI-BOM). While a standard SCA scan might find a library like openai or langchain, EVO's Discovery Agent goes deeper. It extracts:

  • Model Provenance: Is this a local foundation model or a third-party API?
  • Data Lineage: What datasets or vector databases are being used for RAG (Retrieval-Augmented Generation)?
  • Orchestration Layers: How are MCP (Model Context Protocol) servers and custom agents interacting with your proprietary data?
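To make the idea concrete, here is a minimal sketch of what one AI-BOM entry might capture. The field names and structure are illustrative assumptions, not Snyk's actual AI-BOM schema; the `registered: False` flag models the "Shadow AI" case explored later in the demo.

```python
# Hypothetical AI-BOM entry -- field names are illustrative, not Snyk's schema.
ai_bom_entry = {
    "component": "gpt-4o",
    "type": "third-party-api",  # vs. "local-foundation-model"
    "provenance": {"provider": "OpenAI", "registered": False},  # False => "Shadow AI"
    "data_lineage": {
        # Vector databases or datasets feeding RAG pipelines
        "rag_sources": ["pinecone://customer-docs"],
    },
    "orchestration": {
        "frameworks": ["langchain"],
        "mcp_servers": ["internal-tools"],  # MCP servers touching proprietary data
    },
}

def find_shadow_ai(entries):
    """Return components used in code but not registered in the inventory."""
    return [e["component"] for e in entries if not e["provenance"]["registered"]]

print(find_shadow_ai([ai_bom_entry]))  # -> ['gpt-4o']
```

The key point is that each entry goes beyond a package name: provenance, data lineage, and orchestration are first-class fields, so governance rules can match on them directly.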

Risk Intelligence vs. Static Scanning

In this lesson, you will learn to interpret the Snyk Risk Index. Unlike a standard CVE score (which might be 0 for a brand-new model), EVO evaluates the functional risk. You will be looking for five specific risk categories:

  • Bias & Discrimination: Does the model's output violate compliance or ethical standards?
  • Code Generation Risk: Is the model suggesting insecure code patterns?
  • Sensitive Data Exposure: Is the model prone to leaking training data or PII?
  • Attack Reconnaissance: Does the model provide information that could help an attacker map your network?
  • Safety Guardrail Bypass: How easily can the model be "jailbroken"?
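Snyk's actual Risk Index calculation is proprietary, but the five categories above suggest how such an index could be aggregated. The sketch below is a hypothetical example that lets the worst category dominate, a common conservative choice for functional risk: a model that is easy to jailbreak is high-risk even if it scores well elsewhere.

```python
# Illustrative only: the real Snyk Risk Index computation is not public.
RISK_CATEGORIES = [
    "bias_discrimination",
    "code_generation_risk",
    "sensitive_data_exposure",
    "attack_reconnaissance",
    "safety_guardrail_bypass",
]

def risk_index(scores: dict) -> float:
    """Aggregate per-category scores (0-10) into a single index.

    The worst category dominates: one severe weakness makes the model risky.
    """
    missing = [c for c in RISK_CATEGORIES if c not in scores]
    if missing:
        raise ValueError(f"missing categories: {missing}")
    return max(scores[c] for c in RISK_CATEGORIES)

scores = {
    "bias_discrimination": 3.1,
    "code_generation_risk": 7.4,
    "sensitive_data_exposure": 5.0,
    "attack_reconnaissance": 2.2,
    "safety_guardrail_bypass": 8.9,  # easily jailbroken -> drives the index
}
print(risk_index(scores))  # -> 8.9
```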

The Power of Enrichment

EVO goes beyond simple discovery by enriching every discovered LLM with risk intelligence signals. Instead of seeing a generic model name, you see a live risk profile. These signals are derived from Snyk’s proprietary testing of foundation models, identifying hidden threats like 'Attack Reconnaissance' or 'Bias' that wouldn't show up in a standard code scan.


Turning Business Intent into Enforcement

The final step of AI-SPM is Governance. You will explore two methods for creating "Machine-Readable Guardrails":

  • The Manual Builder: Using the Policy UI to create custom, granular, multi-condition rules (up to 12 conditions) based on specific model attributes.
  • The Policy Agent (Natural Language): Converting a verbal requirement (for example, "We don't use unlicensed models in production") directly into an active security policy.
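As a sketch of what a "Machine-Readable Guardrail" could look like once the Policy Agent has translated the verbal requirement, the example below encodes "We don't use unlicensed models in production" as a multi-condition rule. The rule format and evaluator are hypothetical, not Snyk's actual policy engine.

```python
# Hypothetical policy representation -- not Snyk's actual policy engine.
# Encodes: "We don't use unlicensed models in production."
policy = {
    "name": "no-unlicensed-models-in-prod",
    "conditions": [  # ALL conditions must match for the policy to flag an asset
        {"field": "environment", "op": "eq", "value": "production"},
        {"field": "license", "op": "eq", "value": None},  # None => unlicensed
    ],
    "action": "flag",
}

OPS = {"eq": lambda a, b: a == b, "ne": lambda a, b: a != b}

def violates(asset: dict, policy: dict) -> bool:
    """Return True if the asset matches every condition in the policy."""
    return all(
        OPS[c["op"]](asset.get(c["field"]), c["value"])
        for c in policy["conditions"]
    )

asset = {"model": "mystery-llm", "environment": "production", "license": None}
print(violates(asset, policy))  # -> True
```

Whether built manually in the Policy UI or generated from natural language, the end result is the same kind of structure: attribute-level conditions that a scanner can evaluate automatically against every discovered asset.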

What You'll Do in the Demo:

In this demo, you will execute a full AI-SPM workflow:

  • Audit the AI-BOM: Use the Discovery Agent to identify a "Shadow AI" model (a model used in code but not registered in your inventory).
  • Assess the Risk Profile: Analyze the Snyk Risk Index to determine if a discovered model meets your organization's safety standards.
  • Deploy a Policy: Use the natural language interface to create a rule that automatically flags high-risk models in your CI/CD pipeline.
  • Review an Issue: Trace a policy violation from the dashboard directly back to the specific line of source code that triggered it.

Ready to secure your AI stack? Click "Start Demo" below.

Congratulations

Outstanding! You've mastered AI-SPM from auditing the AI-BOM to deploying natural language policies. Your AI stack is now much safer!