Agentic Software Security Training

Now available at INNOQ: Two days of hands-on insights into Agentic Software Security.

Prompt injection, tool misuse, and uncontrolled tool interactions introduce new attack vectors in the operation and use of agentic systems and LLM-powered applications – leading to outcomes like data exfiltration or unauthorized system access. These are real risks that software architects and developers need to understand and address.

In our 2-day training “Agentic Software Security” (ASSEC), we’ll teach you and your team the fundamentals you need.


The Outcome

The ASSEC training equips you and your team to understand emerging attack vectors, including direct and indirect prompt injection, insecure tool interactions, supply chain risks (such as tool poisoning or rug pulls), and cross-context effects; to systematically assess risks when developing and integrating agentic applications; and to mitigate them effectively with appropriate security measures.

Who is this for?

This training is designed for software architects and developers who want to plan, build, and run AI-powered applications with a clear security focus across the entire lifecycle.

What you’ll achieve

After this training

  • you understand emerging attack vectors in generative AI and how to systematically evaluate them using threat modeling.
  • you can effectively implement protective measures such as guardrails, sandboxing, and robust authentication and authorization.
  • you have a clear concept for securely operating MCP-based agentic systems within your organization — from onboarding and day-to-day use through to offboarding.
  • you are able to embed security practices across architecture, development, and operations — independent of specific frameworks or programming languages.

Topics covered

Emerging Attack Vectors in Generative AI

  • Identifying emerging attack vectors targeting agentic systems, such as prompt injection attacks
  • Systematically identifying trust boundaries in common agentic architectures using threat modeling
  • Securing agentic systems through guardrails, sandboxing, robust identity management, and fine-grained authorization
  • Securing AI systems across the entire lifecycle – from onboarding (e.g., via an MCP registry) to offboarding

Agentic Systems and Protocols

  • Understanding how agents differ from assistants, with a strong focus on security implications
  • Evaluating the security mechanisms of emerging agentic standards such as MCP
  • Understanding the key technical and organizational considerations when implementing MCP
  • Applying IAM and authorization patterns effectively to build secure agentic systems

Prototyping with AI is straightforward. But in many projects, the security requirements for production use are unclear. This training equips engineering teams with the knowledge and practical guidance to run AI safely and reliably in production.

Dominik Guhr, Principal Consultant, INNOQ

If you have any questions about Agentic Software Security and our offering, feel free to reach out anytime to Dominik Guhr, Principal Consultant at INNOQ.

Agentic Software Security training for your development team?

We’d be happy to offer you an in-house session!

Request a date


