Workshop

AI SecureOps: Attacking & Defending GenAI Applications and Services

March 11th & 12th, 2025

2 days training, by Abhinav Singh
This training will be given in ENGLISH

Normal price: CHF 2000.-
Student price: CHF 1500.- (limited availability)

Description

Acquire hands-on experience in GenAI and LLM security through CTF-styled training, tailored to real-world attacks and defense scenarios. Dive into protecting both public and private GenAI & LLM solutions, crafting specialized defense scenarios for distinct security challenges related with AI services. This dense training will navigate you through areas like the red and blue team strategies, create robust LLM defenses, incident response in LLM attacks, implement a Responsible AI(RAI) program and enforce ethical AI standards across enterprise services. This training covers both “Securing GenAI” as well as “Using GenAI for security” for a well rounded understanding of the complexities involved in AI-driven security landscapes.

About the trainer

Abhinav Singh is an esteemed cybersecurity leader & researcher with over a decade of experience across technology leaders, financial institutions, and as an independent trainer and consultant. Author of “Metasploit Penetration Testing Cookbook” and “Instant Wireshark Starter,” his contributions span patents, open-source tools, and numerous publications. Recognized on security portals and digital platforms, Abhinav is a sought-after speaker & trainer at international conferences like Black Hat, RSA, DEFCON, BruCon and many more, where he shares his deep industry insights and innovative approaches in cybersecurity. He also leads multiple AI security groups at CSA, responsible for coming up with cutting-edge whitepapers and industry reports around safety and security of GenAI.

Course outline

Introduction

  • Introduction to LLM and GenAI.
  • LLM & GenAI terminologies and architecture.
  • Technology use-cases.
  • Agents, multi-agents and multi-modal models.

Elements of AI Security

  • Understanding AI vulnerabilities with case studies on AI security breaches.
  • Application of Security.
  • Deploying and running LLM model locally.
  • Principles of AI ethics and Safety. – OWASP LLM top 10.
  • MITRE mapping of attacks on GenAI Supply chain.
  • Prompt Generation for solving specific security cases.
  • Build defense against local and global models.

Adversarial LLM Attacks and Defenses

  • Direct and Indirect Prompt Injection attacks.
  • Advance prompt injections through obfuscation and cross-model injections.
  • Breaking instruction boundaries and trust criteria.
  • Advance LLM red teaming: Automating multi-agent conversation to prompt inject models at scale.
  • Attacking LLM Agents for task manipulation and risky behavior.
  • Adversarial examples, training data extraction, model extraction, and data poisoning.
  • Attack mapping through LLM top 10 and MITRE Atlas frameworks.
  • Defense automation through prompt output validation using GenAI as well as static lists.
  • Benchmarking LLMs from generating insecure code or aid In carrying out cyber attacks.

Building Enterprise grade LLM defenses

  • Deploying LLM Security scanner, adding custom rules, prompt block lists and guardrails.
  • Writing custom detection logic, trustworthiness check and filters.
  • Protecting RAG enabled GenAI agents from emitting sensitive data & confidential internal data.
  • Attack simulation and defense use-cases against financial fraud & agent manipulation.

Building LLM & GenAI SecOps process

  • Summarizing the learnings into building SecOps process.
  • Monitoring trustworthiness and safety of enterprise LLM applications.
  • Implementing NIST AI Risk management framework(RMF) for security monitoring.

Course requirements

Student Requirements

  • Familiarity with AI and machine learning concepts is beneficial but not required.
  • Ability to run python codes and notebooks.
  • Familiarity with common GenAI applications like OpenAI.

Organized by

Technology partners

Partner events

Scroll to Top