Talk details

In schedule:
Green Stage
May 30, 10:50 - 12:40 CET
AI engineering with open access LLMs that lie, curse, and steal
Topics:
Software Delivery Craft Matters
artificial intelligence
ai
architecture
design patterns
llms
software architecture
testing
Level: General

If you are one of the 46% of AI engineers preferring open source LLMs going into 2024, you might have discovered that these open models can be a bit like unruly children. There are moments of joy, when they behave appropriately, and moments of horror, when they lie (hallucinate), steal (enable privacy and security breaches), or generally behave in ways that harm others (e.g., spewing out toxic statements). In this talk, I will share stories from those working in the trenches to rein in private deployments of open access models. I’ll give an overview of the most impactful “vectors of attack/harm” associated with local, private models, so that you can categorize and understand when and how things like hallucinations, prompt injections, privacy breaches, and toxic outputs occur. Then I’ll offer practical tips (with live demos) to give you the skills to control your LLM apps via model-graded outputs, LLM critics, control vectors, hybrid NLP and GenAI systems, and curated domain-specific examples.
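To make the “model-graded outputs” idea concrete ahead of the talk, here is a minimal sketch of an LLM critic that grades a draft answer against its source context and blocks unsupported (hallucinated) responses. It uses the openai Python client against a generic OpenAI-compatible server; the base URL, model name, scoring rubric, and threshold are illustrative assumptions, not the speaker’s implementation or any specific product API.

# Model-graded outputs, minimally: a second LLM call grades the first
# model's draft before it reaches the user. The endpoint, model name,
# and threshold are placeholders for whatever local deployment you run.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")
MODEL = "my-local-model"  # hypothetical open-access model name

CRITIC_PROMPT = (
    "You are a strict fact-checker. Given a CONTEXT and an ANSWER, reply "
    "with only an integer from 0 (unsupported) to 10 (fully supported).\n\n"
    "CONTEXT:\n{context}\n\nANSWER:\n{answer}"
)

def generate(question: str, context: str) -> str:
    # Draft an answer grounded in the provided context.
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "Answer using only the given context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

def grade(answer: str, context: str) -> int:
    # Ask the critic (here, the same model at temperature 0) to score
    # how well the answer is supported by the context.
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user",
                   "content": CRITIC_PROMPT.format(context=context, answer=answer)}],
        temperature=0,
    )
    try:
        return int(resp.choices[0].message.content.strip().split()[0])
    except (ValueError, IndexError):
        return 0  # unparseable grade: fail closed

def answer_with_guardrail(question: str, context: str, threshold: int = 7) -> str:
    draft = generate(question, context)
    if grade(draft, context) < threshold:
        return "I can't answer that reliably from the available information."
    return draft

The same structure extends to the other checks mentioned above: swap the critic’s rubric for a toxicity or prompt-injection screen, or replace the LLM grader with a cheaper classical NLP classifier for a hybrid setup.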

Speaker
Daniel Whitenack
Founder and CEO at Prediction Guard

Daniel Whitenack (aka Data Dan) is a Ph.D.-trained data scientist and founder of Prediction Guard. He has more than ten years of experience developing and deploying machine learning models at scale, and he has built data teams at two startups and an international NGO with 4000+ staff. Daniel co-hosts the Practical AI podcast and has spoken at conferences around the world (ODSC, Applied Machine Learni...