An informative talk about how LLMs open new security holes:
Speaker: Shai Alon | Director of AI Innovation at Orca Security
Code from the demos on GitHub: https://github.com/shaialon/ai-securi…
Timecodes
0:00 Intro: Shai Alon takes the stage to discuss the hidden dangers of AI.
2:04 The AI Revolution: A look at the rapid adoption of AI across industries and the exciting possibilities it offers.
5:38 The Security Nightmare: Exposing the vulnerabilities of AI that most people ignore.
8:13 Demo 1: Headset Support Center Refund Bot - Witnessing the dangers of LLM01: Prompt Injection in action.
14:00 LLM03: Training Data Poisoning - How malicious data can poison AI models and lead to unexpected outcomes.
17:08 Demo 2: AI Data Scientist - Exploring the power and peril of agentic AI apps.
19:07 LLM02: Insecure Output Handling - AI’s ability to write and execute code creates a massive attack surface.
23:27 Authorization Bypass: AI agents can be tricked into granting access to sensitive data.
25:14 SQL Injection and Remote Code Execution: Witnessing the devastating impact of exploiting AI code generation.
29:49 Mitigation Strategies: Practical steps to protect your AI applications from these emerging threats.
32:51 OWASP LLM Top 10: A deep dive into the most critical vulnerabilities facing AI applications.
34:53 AI Tooling, education and engineering
38:22 The Role of AI Firewalls: Evaluating the strengths and limitations of perimeter defenses.
39:49 Jailbreaking AI Models: Bypass AI model guardrails & examining the shared responsibility model between AI providers and developers.
41:54 The Unprepared Cybersecurity Industry: A call to action for developers and security professionals.
42:29 Q&A: Shai Alon answers questions from the audience, addressing practical concerns about the combination of human expertise, AI regulation