Best Practices for Securing AI Agents: Lessons from Casco

In the modern age of artificial intelligence, every innovation comes with an equivalent rise in potential vulnerabilities. While AI systems are transforming industries—from finance and healthcare to logistics and legal services—they are also opening doors to new categories of cyberattacks that traditional security tools were never built to handle.

Companies are deploying AI agents at an accelerating rate, but most are doing so without clear visibility into how these systems behave under adversarial conditions. As a result, the industry is seeing alarming trends: 73% of enterprises have experienced at least one AI-related security incident in the last 12 months, with an average loss of $4.8 million per breach. Even worse, these incidents take 40% longer to identify and resolve compared to traditional software breaches.

The root of the issue lies in the unique nature of AI systems. They are non-deterministic, heavily context-dependent, and capable of generating unpredictable results. Attackers are increasingly exploiting these characteristics using novel techniques such as prompt injection, data poisoning, and model inversion. These threats are not theoretical—they are already costing companies millions.

Casco was created to combat this growing problem. It offers a proactive security solution specifically built for AI systems—one that simulates sophisticated attacks, identifies behavioral vulnerabilities, and guides organizations through effective remediation.

How Does Casco’s Agentic Red Teaming Work?

Casco’s core offering is its agentic red teaming platform, an intelligent framework that mimics the tactics of a skilled human attacker. Unlike traditional penetration testing, which focuses on infrastructure vulnerabilities, Casco targets the behavioral logic and response patterns of AI systems—the very elements that make AI both powerful and unpredictable.

Here’s how Casco's red teaming process works:

  • AI-Powered Simulations: Casco’s agents generate multi-step attack chains using real-world adversarial techniques. These include prompt chaining, sandbox escape attempts, model fingerprinting, and malicious data injections. The system’s behavior is observed and logged throughout each simulated breach attempt.
  • Human Expertise in the Loop: Unlike fully automated scanners that produce low-quality, noisy results, Casco integrates human experts into the evaluation loop. Every security finding is vetted by specialists who have built and secured AI systems for organizations like AWS, Microsoft, and the U.S. government. This ensures the findings are not only accurate but relevant and reproducible.
  • Targeted Testing for Diverse Architectures: Casco supports a variety of AI architectures, from single LLM-based agents to complex multi-agent ecosystems and third-party plugin integrations. Whether you’re deploying a chatbot, a generative assistant, or a semi-autonomous workflow tool, Casco adapts its testing methodology to fit your system’s specific footprint.

The result is a high-fidelity evaluation that reveals genuine vulnerabilities and unsafe behaviors before attackers find them first.

What Kind of Insights Does Casco Deliver?

Casco’s value doesn’t stop at identifying threats—it goes further by empowering companies to remediate with clarity and speed. Its reports are structured for cross-functional consumption, so engineers, security leaders, and compliance officers can all act with confidence.

Each report includes:

  • Detailed Reproduction Steps: Every vulnerability comes with a step-by-step guide to recreate the attack, allowing teams to verify the issue independently and understand how it can be triggered.
  • Severity and Risk Scoring: Findings are categorized by impact, likelihood, and business criticality. This helps teams prioritize efforts and allocate resources effectively.
  • Tailored Remediation Recommendations: Casco provides concrete solutions that align with the company’s technology stack, including prompt restructuring techniques, access control adjustments, and architectural design improvements.
  • Regulatory Compliance Mapping: Reports align with major regulatory frameworks like SOC 2, ISO 27001, NIST AI RMF, and the EU AI Act, simplifying the process of documenting and demonstrating responsible AI usage.

In a compliance-heavy future, Casco’s security reports can become a crucial asset—not just for internal improvement, but for audits, due diligence, and stakeholder trust.

Who Is Behind Casco?

Casco’s founding team brings together a rare combination of AI development and cybersecurity expertise. Rene Brandel, CEO, and Ian Saultz, CTO, are both former members of the Generative AI and Security teams at AWS.

Rene led product strategy for some of AWS’s most widely adopted AI services, while Ian focused on designing and securing mission-critical infrastructure used by Fortune 500 companies. Both witnessed firsthand the disjoint between how fast AI was being adopted and how unprepared organizations were to secure it.

Their experience led them to a key realization: to build trustworthy AI systems, security must become part of the development lifecycle, not an afterthought. With that in mind, they launched Casco in 2025 to help companies embed red teaming into their AI development workflows and maintain a high standard of safety and resilience.

Why Does AI Security Demand a Specialized Approach?

Securing AI agents is fundamentally different from securing conventional software.

AI systems:

  • Interpret and generate content dynamically
  • Lack strict rulesets or expected outputs
  • Adapt based on input, context, and memory
  • Can be influenced through indirect channels (e.g., data poisoning)

As a result, threats emerge in ways traditional tools can’t anticipate. Take, for example:

  • A support chatbot that’s manipulated into sharing sensitive data through subtle phrasing
  • A recommendation engine exploited to favor adversarial content
  • An autonomous agent that combines multiple plug-ins and performs unintended actions in complex environments

These are not code bugs—they are failures of alignment, understanding, and contextual awareness.

Casco addresses these issues head-on by simulating attacker behavior from the user perspective, analyzing model behavior in edge cases, and evaluating both system-level and agent-level vulnerabilities.

Who Is Already Using Casco?

Despite being a young startup, Casco has quickly become a trusted partner for some of the world’s largest enterprises. As of 2025, 60% of Fortune 500 companies rely on Casco to evaluate their AI systems.

These organizations span industries:

  • Financial institutions use Casco to secure generative tools that assist with customer service and fraud detection.
  • E-commerce platforms deploy it to validate recommendation engines and conversational shopping assistants.
  • Healthcare innovators run red team exercises on AI-powered diagnostic tools to ensure patient safety and regulatory compliance.

Casco’s ability to deliver fast, accurate, and deeply contextual security assessments has made it an essential part of the AI deployment pipeline in these high-stakes sectors.

What Sets Casco Apart from Traditional Security Tools?

Casco isn’t a repackaged scanner—it’s an entirely new approach to evaluating AI behavior and trustworthiness.

Its core differentiators include:

  • Built for AI Systems: Designed from the ground up to understand LLMs, autonomous agents, vector databases, and prompt injection risks.
  • Agentic Red Teaming: Uses simulated adversaries to uncover vulnerabilities in realistic, multi-step attack scenarios.
  • Human-Backed Intelligence: Every finding is reviewed by a team of experts who understand both ML models and cloud infrastructure.
  • Audit-Ready Reports: Maps findings to recognized compliance standards and regulatory frameworks.

Casco doesn’t just protect software—it helps companies ship AI applications with confidence in a world where trust and security are business-critical.