Live, Hands-on Deep-Dive into LLM Hacking:
Prompt Injection, Model Context Protocol and Skills

Most of us use LLM chatbots every day, and more of us are now being asked to secure systems that rely on them. You’ve heard about prompt injection and system prompt leakage, but have you actually tried it? Unless you run your own model, you’ve probably never seen a real system prompt. In this free, hands-on 90-minute session, we’re going to change that.

Randy Smith from Ultimate IT Security is joined by Joe Brinkley and John McShane from Cobalt, experienced LLM security pentesters who will lead most of the session. We’ll load a small LLM locally, apply system prompts that prohibit disclosure, then try to break it through prompt injection and other attacks and, if we succeed, we’ll harden the prompts and test again.

Here’s what’s in store:

  • Binary Injection & Fake OS Attacks: How we trick the LLM into thinking it’s a terminal. We’re not just talking text; we’re talking binary-to-shell escapes.
  • MCP JSON-RPC Streams: The hidden layer. Using the binary stream to slip prompts past the eyes of a standard monitor.
  • Skills vs. MCP: The skill is the workflow (the brain), while MCP is the tool (the hands). We’ll show how they hand off secrets.
  • System Prompt Weakness & Leakage: Using the Cobalt Strike system prompt check logic. We’ll demonstrate the leak of a medical document from a db backend.
  • Prompt Escape/Injections: Walking through the classic "ignore all previous instructions" but updated for 2026 agentic workflows.