Developing LLM Apps with LangChain
Introduction
Introduction (5:24)
Byte FAQ
Course Resources
Exercise: Meet Your Classmates and Instructor
Set Your Learning Streak Goal
Deep Dive into LangChain
Introduction to LangChain (7:15)
Setting Up the Environment: LangChain, Python-dotenv (7:05)
ChatModels: GPT-3.5-Turbo and GPT-4 (6:29)
Caching LLM Responses (4:56)
LLM Streaming (2:57)
Prompt Templates (5:35)
ChatPromptTemplate (5:54)
Simple Chains (6:55)
Sequential Chains (7:14)
Introduction to LangChain Agents (4:00)
LangChain Agents in Action: Python REPL (7:40)
LangChain Tools: DuckDuckGo and Wikipedia (11:07)
Creating a ReAct Agent (13:29)
Testing the ReAct Agent (4:49)
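
The lessons above build up to composing prompt templates, chat models, and chains. As a rough illustration of where this section lands, here is a minimal sketch of a prompt template piped into a chat model. It assumes a recent LangChain release with the langchain-openai and python-dotenv packages and an OPENAI_API_KEY in a .env file; exact imports, package names, and model names may differ from what the videos use.

# Minimal sketch: prompt template -> chat model chain.
# Assumes langchain-openai and python-dotenv are installed and
# OPENAI_API_KEY is defined in a .env file (names are illustrative).
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

load_dotenv()  # read OPENAI_API_KEY from .env

# Prompt template with a single input variable.
prompt = ChatPromptTemplate.from_template(
    "Explain {topic} in one short paragraph."
)

# Chat model; the model name is an assumption, not taken from the course.
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

# Compose a simple chain and run it.
chain = prompt | llm
response = chain.invoke({"topic": "LangChain agents"})
print(response.content)
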
LangChain and Vector Stores (Pinecone)
Short Recap of Embeddings (1:52)
Introduction to Vector Databases (6:57)
Authenticating to Pinecone (4:26)
Working with Pinecone Indexes (9:31)
Working with Vectors (8:42)
Namespaces (6:43)
Splitting and Embedding Text Using LangChain (9:19)
Inserting the Embeddings into a Pinecone Index (8:49)
Asking Questions (Similarity Search) (7:53)
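
To give a flavour of this section's workflow (splitting, embedding, and querying), below is a hedged sketch that loads a text file into a Pinecone index and runs a similarity search. The file name, index name, chunk sizes, and package names (langchain-text-splitters, langchain-pinecone) are assumptions for illustration rather than details taken from the course; it also assumes OPENAI_API_KEY and PINECONE_API_KEY are set in the environment and that the Pinecone index already exists.

# Minimal sketch: split text, embed it, and query a Pinecone index.
from dotenv import load_dotenv
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain_pinecone import PineconeVectorStore

load_dotenv()  # expects OPENAI_API_KEY and PINECONE_API_KEY

# Hypothetical source document; replace with your own file.
text = open("document.txt").read()

# Split the raw text into overlapping chunks suitable for embedding.
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.create_documents([text])

# Embed the chunks and upsert them into an existing Pinecone index.
embeddings = OpenAIEmbeddings()
vector_store = PineconeVectorStore.from_documents(
    chunks, embedding=embeddings, index_name="langchain-demo"
)

# Similarity search: return the chunks closest in meaning to the question.
for doc in vector_store.similarity_search("What is the document about?", k=3):
    print(doc.page_content[:120])
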
Project: RAG - Q&A Application on Your Private Documents (Pinecone & Chroma)
Project Introduction (6:08)
Loading Your Custom (Private) PDF Documents (7:27)
Loading Different Document Formats (5:12)
Public and Private Service Loaders (4:37)
Chunking Strategies and Splitting the Documents (6:38)
Embedding and Uploading to a Vector Database (Pinecone) (13:33)
Asking and Getting Answers (10:33)
Using Chroma as a Vector DB (11:10)
Adding Memory to the RAG System (Chat History) (9:25)
Using a Custom Prompt (8:09)
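
As an illustration of the project's end result (retrieval-augmented question answering over your own documents with a custom prompt), here is a minimal sketch built on a persisted Chroma collection. It is not the course's exact implementation: the persist directory, model choice, and prompt wording are assumptions, and chat-history handling is omitted for brevity.

# Minimal RAG sketch: retrieve relevant chunks from Chroma, stuff them into
# a custom prompt, and answer with a chat model (all names illustrative).
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_chroma import Chroma
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser

load_dotenv()  # expects OPENAI_API_KEY

# Re-open a Chroma collection persisted in an earlier ingestion step.
vector_store = Chroma(
    persist_directory="./chroma_db",
    embedding_function=OpenAIEmbeddings(),
)
retriever = vector_store.as_retriever(search_kwargs={"k": 3})

# Custom prompt: answer only from the retrieved context.
prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the context below.\n"
    "Context:\n{context}\n\nQuestion: {question}"
)

def format_docs(docs):
    # Join the retrieved chunks into a single context string.
    return "\n\n".join(doc.page_content for doc in docs)

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

print(rag_chain.invoke("What are the documents about?"))
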
Where To Go From Here?
What's Next?
Review This Project!