Join us at the 2025 rvatech/ Data + AI Summit to hear our Chief Technology Officer, Josh Bartels, present on building multi-model LLM chatbots.
In today’s rapidly evolving AI landscape, the ability to harness multiple cloud providers and language models is becoming increasingly crucial. This session traces the creation of a multi-model LLM chatbot that uses both Azure OpenAI and Amazon Bedrock.
Key highlights of this talk include:
- Architecture Overview: Explore the architecture behind a multi-cloud, multi-model chatbot. We’ll discuss how to design a system that seamlessly switches between different LLMs based on the needs of each request.
- Integration Showcase: Watch as we demonstrate live integration with Azure OpenAI’s GPT models and Amazon Bedrock’s array of foundation models. Learn how to leverage the strengths of each platform to create a more robust and versatile chatbot. We’ll cover authentication, API calls, and how to handle the unique features of each service.
- Live Chatbot Demo & Code Review: Experience the Multi-Model LLM Chatbot in action through a live demonstration. We’ll interact with the chatbot, showcasing its ability to seamlessly switch between Azure OpenAI and Amazon Bedrock models. Following the demo, we’ll dive into a code review of crucial components, examining the logic behind model selection, request handling, and response processing. This segment will provide you with practical insights into implementing a multi-model system.
- Best Practices & Pitfalls: Gain insights into best practices for multi-model implementations, common pitfalls to avoid, and strategies for maintaining consistency across different LLMs. We’ll discuss prompt engineering techniques, error handling, and how to ensure a coherent user experience despite using multiple models.
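To make the routing and fallback ideas above concrete, here is a minimal sketch in Python. The names (`ModelRouter`, `call_azure`, `call_bedrock`) are hypothetical illustrations, not code from the talk; the provider functions are stubs, with comments showing roughly where the real Azure OpenAI and Amazon Bedrock SDK calls would go.

```python
from typing import Callable, Dict, List


def call_azure(prompt: str) -> str:
    """Stub for an Azure OpenAI call.

    With the openai SDK this would be roughly:
        client = AzureOpenAI(azure_endpoint=..., api_key=..., api_version=...)
        resp = client.chat.completions.create(
            model="<deployment-name>",
            messages=[{"role": "user", "content": prompt}])
        return resp.choices[0].message.content
    """
    raise NotImplementedError


def call_bedrock(prompt: str) -> str:
    """Stub for an Amazon Bedrock call.

    With boto3 this would be roughly:
        client = boto3.client("bedrock-runtime")
        resp = client.converse(
            modelId="<model-id>",
            messages=[{"role": "user", "content": [{"text": prompt}]}])
        return resp["output"]["message"]["content"][0]["text"]
    """
    raise NotImplementedError


class ModelRouter:
    """Routes a prompt to providers in preference order, falling back on failure."""

    def __init__(self, providers: Dict[str, Callable[[str], str]],
                 order: List[str]):
        self.providers = providers
        self.order = order  # preference order, e.g. ["azure", "bedrock"]

    def ask(self, prompt: str) -> str:
        errors = {}
        for name in self.order:
            try:
                return self.providers[name](prompt)
            except Exception as exc:  # record the error, try the next provider
                errors[name] = exc
        raise RuntimeError(f"all providers failed: {errors}")
```

The key design choice is keeping provider calls behind a uniform `Callable[[str], str]` interface, so the selection and error-handling logic stays independent of each SDK’s request and response shapes.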