Content
Maximum participants: 35
Retrieval Augmented Generation (RAG) is an AI architecture that allows LLMs to retrieve information from an external source. Rather than relying on the LLM's inherent knowledge from its training data, or on open access to the Internet, a RAG architecture relies on domain knowledge: the content stored in your business repositories. "RAG Hell" refers to that moment when the tech doesn't work and someone is left trying to unravel what went wrong. That's where you come in.
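To make that flow concrete: a RAG pipeline first retrieves the pieces of your content most relevant to a question, then hands them to the LLM as context for its answer. The sketch below is a minimal, simplified illustration of that retrieve-then-generate loop; the bag-of-words "embedding" and the prompt string are stand-ins for a real embedding model and LLM call, and the sample chunks are invented for the example.

```python
# Minimal sketch of the retrieve-then-generate flow
# (toy stand-ins for a real embedding model and LLM API).
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy "embedding": term counts; a real system uses a vector embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Domain knowledge: chunks taken from your own business content (example values).
chunks = [
    "Refunds are processed within 14 days of receiving the returned item.",
    "Support is available Monday through Friday, 9:00 to 17:00 CET.",
]
index = [(c, embed(c)) for c in chunks]

def retrieve(question: str, top_k: int = 1) -> list[str]:
    # Rank stored chunks by similarity to the question and keep the best ones.
    q = embed(question)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [c for c, _ in ranked[:top_k]]

def build_prompt(question: str) -> str:
    # In a real pipeline this prompt is sent to an LLM; the instruction to
    # answer only from the supplied context is a simple guardrail.
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How long do refunds take?"))
```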
In this interactive workshop, you'll leverage your content expertise and learn how to design RAG-ready content that drives reliable, intelligent experiences. You'll cover basic RAG architecture and related concepts, such as embedding model and LLM selection for optimal retrieval, chunking techniques, and how to provide guardrails through prompts.
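Chunking is one of the levers you'll experiment with: how a document is split determines what the retriever can find. One common (but by no means only) technique is fixed-size chunks with a small overlap, so that sentences spanning a boundary remain retrievable. The sketch below illustrates the idea; the sizes are arbitrary example values, not recommendations.

```python
# Illustrative fixed-size chunking with overlap (sizes are example values).
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    chunks, start = [], 0
    step = chunk_size - overlap  # advance so consecutive chunks share `overlap` characters
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks

sample = ("Place any paragraph from your own documentation here to see how "
          "the boundaries fall and which sentences end up split across chunks.")
for i, chunk in enumerate(chunk_text(sample, chunk_size=60, overlap=15)):
    print(i, repr(chunk))
```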
What you'll do:
Use a simulated RAG environment to practice and test these concepts. You won't build an actual RAG system, but you'll experiment with GenAI and your own documents.
What you'll need:
- Computer/Internet access
- One or more documents (nothing private, proprietary, or personal)
- Email access to sign up for a tool (web-based; no installation required)
What you'll learn
- Learn basic RAG architecture and use cases
- Understand chunking, embedding model, and LLM selection for reliable outputs
- Learn strategies for embedding context and guardrails to improve content retrieval accuracy (see the prompt sketch after this list)
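As a taste of those guardrails: the instructions wrapped around the retrieved content can constrain the LLM to your domain knowledge and tell it how to behave when the answer isn't there. The wording below is only an illustrative example, not a recommended template.

```python
# Illustrative guardrail wording wrapped around retrieved chunks (example only).
GUARDRAIL_PROMPT = """You are a documentation assistant.
Answer ONLY from the context below.
If the context does not contain the answer, say "I don't know" instead of guessing.

Context:
{context}

Question: {question}
"""

def guarded_prompt(context: str, question: str) -> str:
    # Combine retrieved content and the user's question into the guarded prompt.
    return GUARDRAIL_PROMPT.format(context=context, question=question)

print(guarded_prompt("Refunds are processed within 14 days.", "How long do refunds take?"))
```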
Prior knowledge
The session is designed for anyone interested in understanding the principles of RAG and in leveraging their content expertise to drive reliable, accurate, AI-driven content experiences. A strong foundation in content structure and a desire to master Retrieval Augmented Generation are helpful.
Prerequisites:
- Basic understanding of content structure and generative AI
- Familiarity with RAG and its applications is not required, but a willingness to learn is essential!