Build an LLM-powered Q&A App using LangChain, OpenAI and Python
Introduction
Project Demo (5:24)
Project FAQ
Project Resources
Exercise: Meet Your Classmates and Instructor
Set Your Learning Streak Goal
Deep Dive into LangChain and Pinecone
Introduction to LangChain (7:15)
Setting Up The Environment: LangChain, Pinecone, and Python-dotenv (11:01)
LLM Models (Wrappers): GPT-3 (6:13)
ChatModels: GPT-3.5-Turbo and GPT-4 (4:41)
Prompt Templates (5:10)
Simple Chains (5:49)
Sequential Chains (8:07)
Introduction to LangChain Agents (4:00)
LangChain Agents in Action (5:28)
Short Recap of Embeddings (1:52)
Introduction to Vector Databases (6:57)
Splitting and Embedding Text Using LangChain (9:19)
Inserting the Embeddings into a Pinecone Index (7:53)
Asking Questions (Similarity Search) (7:53)
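For orientation before the project section, here is a minimal sketch of the prompt-template-plus-chat-model chain pattern the lessons above introduce. It assumes a classic (pre-1.0) LangChain install and an OPENAI_API_KEY stored in a .env file; exact import paths and class names vary between LangChain versions, so treat this as illustrative rather than the course's exact code.

```python
# Minimal sketch: prompt template + chat model wired into a simple chain.
# Assumes classic LangChain (0.0.x) and python-dotenv; OPENAI_API_KEY lives in .env.
from dotenv import load_dotenv
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

load_dotenv()  # loads OPENAI_API_KEY from the .env file

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.7)
prompt = PromptTemplate(
    input_variables=["topic"],
    template="Explain {topic} in one short paragraph.",
)
chain = LLMChain(llm=llm, prompt=prompt)

print(chain.run(topic="vector databases"))
```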
Project: Q&A Application on Your Custom (or Private) Documents
Project Introduction (6:08)
Loading Your Custom (Private) PDF Documents (7:27)
Loading Different Document Formats (5:12)
Public and Private Service Loaders (4:37)
Chunking Strategies and Splitting the Documents (6:38)
Embedding and Uploading to a Vector Database (Pinecone) (11:17)
Asking and Getting Answers (10:33)
Adding Memory (Chat History) (9:05)
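The project lessons above build a document Q&A pipeline: load a PDF, split it into chunks, embed the chunks into a Pinecone index, then answer questions with a retrieval chain. The sketch below follows that outline under assumed versions (classic LangChain, pinecone-client 2.x, pypdf); the file name, index name, and chunking parameters are placeholders, not the course's exact values.

```python
# Minimal sketch of a document Q&A pipeline over a private PDF.
# Assumes classic LangChain, pinecone-client 2.x, pypdf, and a .env file with
# OPENAI_API_KEY, PINECONE_API_KEY, and PINECONE_ENV. "my_document.pdf" and
# "my-index" are placeholder names.
import os
import pinecone
from dotenv import load_dotenv
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

load_dotenv()
pinecone.init(
    api_key=os.environ["PINECONE_API_KEY"],
    environment=os.environ["PINECONE_ENV"],
)

# 1. Load the private PDF and split it into overlapping chunks.
pages = PyPDFLoader("my_document.pdf").load()
splitter = RecursiveCharacterTextSplitter(chunk_size=256, chunk_overlap=20)
chunks = splitter.split_documents(pages)

# 2. Embed the chunks and upload them to an existing Pinecone index.
vector_store = Pinecone.from_documents(chunks, OpenAIEmbeddings(), index_name="my-index")

# 3. Ask questions: similarity search over the index, answered by a chat model.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0),
    chain_type="stuff",
    retriever=vector_store.as_retriever(search_kwargs={"k": 3}),
)
print(qa.run("What is this document about?"))
```

For the final lesson's chat history, classic LangChain typically swaps RetrievalQA for ConversationalRetrievalChain and passes the running list of question/answer pairs on each call.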
Where To Go From Here?
What's Next?
Review This Project!