The AI testing landscape is drowning in promises: "autonomous testing," "self-healing tests," "AI-generated coverage." After a year of building production agentic systems and watching teams struggle to move past demos, I've learned what actually works—and what's still vendor theater.
This hands-on tutorial cuts through the noise. You'll work with the open-source Agentic QE Fleet and Claude Code, progressing through three phases:
Build — Extend the Agentic QE Fleet to fit your context. You won't just run existing agents—you'll create or customize specialized agents that address your team's specific quality challenges. Learn the architectural patterns that make agents maintainable, not magical black boxes.
Test — Put agents to work on real artifacts. Use Agentic QE agents and skills to verify human-written code, validate agent-generated outputs, and catch the subtle failures that slip past traditional automation. Experience how agents and humans collaborate to find what neither would catch alone.
Orchestrate — Coordinate your quality ecosystem using PACT principles (Proactive, Autonomous, Collaborative, Targeted). Integrate the fleet into CI/CD pipelines, IDE workflows, or standalone exploration sessions. The same agents work across contexts; the orchestration determines the value, as the sketch after this list suggests.
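To make the Build and Orchestrate phases concrete, here is a minimal TypeScript sketch of the extension pattern the tutorial teaches: a custom agent implementing a shared interface, plus a targeted orchestrator that routes artifacts only to agents that declare interest. The QualityAgent, Artifact, and orchestrate names are hypothetical illustrations, not the real Agentic QE Fleet API; in the session you work with the fleet's actual abstractions.

```typescript
// Hypothetical shapes -- illustrative only, NOT the real Agentic QE Fleet API.
interface Finding {
  severity: "info" | "warning" | "error";
  message: string;
}

interface Artifact {
  path: string;
  content: string;
  origin: "human" | "ai-generated";
}

interface QualityAgent {
  name: string;
  // Targeted (the T in PACT): an agent declares what it can usefully inspect.
  canHandle(artifact: Artifact): boolean;
  inspect(artifact: Artifact): Finding[];
}

// A toy custom agent: flags TODO markers left in AI-generated code,
// the kind of subtle leftover that slips past traditional automation.
class LeftoverTodoAgent implements QualityAgent {
  name = "leftover-todo";
  canHandle(artifact: Artifact): boolean {
    return artifact.origin === "ai-generated";
  }
  inspect(artifact: Artifact): Finding[] {
    return artifact.content.split("\n").flatMap((line, i) =>
      /\bTODO\b/.test(line)
        ? [{ severity: "warning" as const, message: `${artifact.path}:${i + 1} unresolved TODO` }]
        : []
    );
  }
}

// A minimal orchestrator: routes each artifact only to agents that target it.
function orchestrate(agents: QualityAgent[], artifacts: Artifact[]): Finding[] {
  return artifacts.flatMap((a) =>
    agents.filter((agent) => agent.canHandle(a)).flatMap((agent) => agent.inspect(a))
  );
}

// The same agents can run in CI, an IDE hook, or an exploration session;
// only the orchestration around them changes.
const findings = orchestrate(
  [new LeftoverTodoAgent()],
  [{ path: "src/api.ts", content: "export const x = 1; // TODO: validate", origin: "ai-generated" }]
);
console.log(findings);
```

The point of the pattern is that agents stay small, inspectable, and composable rather than becoming magical black boxes; the tutorial scales it from a toy detector like this one to agents that validate AI-generated outputs in real pipelines.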
You'll leave with:
A customized agent extending the Agentic QE Fleet
Hands-on experience testing both human and AI-generated artifacts
Integration patterns for embedding Agentic QE into your workflow
Who should attend: Engineers, architects, and tech leads ready to move beyond AI demos into production-ready quality workflows.

Dragan Spiridonov brings 29 years of IT experience—from computer repair and sysadmin in 1996 to establishing and leading QA/QE functions for the past 12 years. After 8 years at Alchemy, building QA/QE from the ground up, he founded Quantum Quality Engineering, a Serbian consultancy that bridges classical quality engineering with agentic AI approaches. A practitioner of context-driven testing, Holis...