Example Curriculum
Introduction
Setting up our AWS Account
Setting Up the AWS SageMaker Environment
Gathering, Chunking, Tokenizing and Uploading our Dataset
- SageMaker Sessions, Regions, and IAM Roles (7:50)
- Examining Our Dataset from HuggingFace (13:29)
- Tokenization and Word Embeddings (9:08)
- HuggingFace Authentication with SageMaker (4:21)
- Applying the Templating Function to our Dataset (8:43)
- Attention Masks and Padding (15:55)
- Star Unpacking with Python (4:03)
- Chain Iterator, List Constructor, and Attention Mask Example with Python (10:22)
- Understanding Batching (8:11)
- Slicing and Chunking our Dataset (7:31)
- Creating our Custom Chunking Function (16:06)
- Tokenizing our Dataset (9:30)
- Running our Chunking Function (4:30)
- Understanding the Entire Chunking Process (8:32)
- Uploading the Training Data to AWS S3 (5:53)
- Course Check-In
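
To give a feel for what this section builds, here is a minimal sketch of tokenizing a dataset, chunking it into fixed-length blocks with `chain()`, and saving the result to S3. It assumes the Hugging Face `datasets` and `transformers` libraries plus the `sagemaker` SDK and `s3fs`; the model ID, dataset, column name, and block size are placeholders rather than the course's exact choices.

```python
# Minimal sketch (placeholders throughout): tokenize, chunk, and upload training data.
from itertools import chain

import sagemaker
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")                          # placeholder model
dataset = load_dataset("databricks/databricks-dolly-15k", split="train")   # placeholder dataset

block_size = 2048  # placeholder chunk length


def tokenize(batch):
    # Turn raw text into token ids; padding and attention masks are covered in the lessons above.
    return tokenizer(batch["instruction"])


def chunk(batch):
    # Concatenate every example's token ids with chain(), then slice into equal-length blocks.
    concatenated = list(chain(*batch["input_ids"]))
    total_length = (len(concatenated) // block_size) * block_size
    return {
        "input_ids": [
            concatenated[i : i + block_size] for i in range(0, total_length, block_size)
        ]
    }


tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)
chunked = tokenized.map(chunk, batched=True, remove_columns=tokenized.column_names)

# Write the processed dataset to S3 so the SageMaker training job can read it (needs s3fs).
session = sagemaker.Session()
training_input_path = f"s3://{session.default_bucket()}/processed/train"
chunked.save_to_disk(training_input_path)
```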
Understanding LoRA and Setting up the HuggingFace Estimator
- Setting Up Hyperparameters for the Training Job (6:47)
- Creating our HuggingFace Estimator in Sagemaker (6:45)
- Introduction to Low-Rank Adaptation (LoRA) (8:11)
- LoRA Numerical Example (10:55)
- LoRA Summarization and Cost Saving Calculation (9:08)
- (Optional) Matrix Multiplication Refresher (4:45)
- Understanding LoRA Programmatically Part 1 (12:32)
- Understanding LoRA Programmatically Part 2 (5:48)
- Unlimited Updates
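
As a rough illustration of the parameter and cost savings the LoRA lessons work through, here is a small back-of-the-envelope calculation. The matrix sizes and rank below are made-up numbers for illustration, not the example used in the course.

```python
# LoRA freezes the original weight matrix W (d x k) and trains a low-rank update B @ A,
# with B of shape (d x r) and A of shape (r x k), so far fewer parameters are updated
# when the rank r is much smaller than d and k.
d, k = 4096, 4096   # hypothetical weight matrix dimensions
r = 8               # hypothetical LoRA rank

full_params = d * k          # parameters updated by full fine-tuning of this matrix
lora_params = r * (d + k)    # parameters updated by the LoRA adapters

print(f"full fine-tuning: {full_params:,} parameters")           # 16,777,216
print(f"LoRA (r={r}):     {lora_params:,} parameters")           # 65,536
print(f"reduction:        {full_params // lora_params}x fewer")  # 256x fewer
```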
Improving Training Speed with bfloat16
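
A minimal sketch of what switching on bfloat16 mixed precision looks like with the Hugging Face Trainer; the output directory and batch size are placeholders, and bf16 only helps on GPUs that support it (such as the Ampere-class instances used later in the course).

```python
import torch
from transformers import TrainingArguments

# bf16=True makes the Trainer run in bfloat16 mixed precision; other values are placeholders.
training_args = TrainingArguments(
    output_dir="/opt/ml/model",
    per_device_train_batch_size=4,
    bf16=torch.cuda.is_available() and torch.cuda.is_bf16_supported(),
)
```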
Setting up the QLoRA Training Script with Mixed Precision & Double Quantization
- Setting up Imports and Libraries for the Train Script (7:19)
- Argument Parsing Function Part 1 (7:56)
- Argument Parsing Function Part 2 (10:54)
- Understanding Trainable Parameters Caveats (14:30)
- Introduction to Quantization (7:35)
- Identifying Trainable Layers for LoRA (7:19)
- Setting up Parameter-Efficient Fine-Tuning (4:36)
- Implementing the LoRA Configuration and Mixed Precision Training (10:34)
- Understanding Double Quantization (4:21)
- Creating the Training Function Part 1 (14:14)
- Creating the Training Function Part 2 (7:16)
- Exercise: Imposter Syndrome (2:55)
- Finishing our SageMaker Script (5:09)
- Gaining Access to Powerful GPUs with AWS Quotas (5:10)
- Final Fixes Before Training (3:54)
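
The shape of the QLoRA setup this section's training script builds looks roughly like the sketch below, using the `transformers`, `bitsandbytes`, and `peft` libraries. The base model, rank, and target modules are placeholders, not the course's exact configuration.

```python
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model_id = "facebook/opt-125m"  # placeholder base model

# 4-bit NF4 quantization with double quantization and bfloat16 compute (QLoRA-style).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # requires a GPU plus the accelerate and bitsandbytes packages
)
model = prepare_model_for_kbit_training(model)

# Attach small trainable LoRA adapters to the attention projections (names vary by model).
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # shows how few parameters are actually trainable
```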
Running our Fine-Tuning Script for our LLM
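
Launching the job from the notebook looks roughly like this, building on the estimator introduced earlier; the instance type, container versions, and hyperparameters are placeholders, and the GPU instance type needs an approved service quota.

```python
import sagemaker
from sagemaker.huggingface import HuggingFace

role = sagemaker.get_execution_role()                      # SageMaker execution IAM role
training_input_path = "s3://YOUR-BUCKET/processed/train"   # placeholder S3 prefix

huggingface_estimator = HuggingFace(
    entry_point="train.py",          # the QLoRA training script from the previous section
    source_dir="scripts",
    instance_type="ml.g5.2xlarge",   # placeholder GPU instance; requires a raised quota
    instance_count=1,
    role=role,
    transformers_version="4.28",
    pytorch_version="2.0",
    py_version="py310",
    hyperparameters={"epochs": 1, "lr": 2e-4},
)

# Start the training job; the "training" channel is exposed to the script as SM_CHANNEL_TRAINING.
huggingface_estimator.fit({"training": training_input_path})
```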
Deploying our Fine-Tuned LLM
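
Deployment to a real-time endpoint is roughly as follows; the model artifact path, container versions, and instance type are placeholders.

```python
import sagemaker
from sagemaker.huggingface import HuggingFaceModel

role = sagemaker.get_execution_role()

huggingface_model = HuggingFaceModel(
    model_data="s3://YOUR-BUCKET/output/model.tar.gz",  # placeholder; the training job's artifact
    role=role,
    transformers_version="4.28",
    pytorch_version="2.0",
    py_version="py310",
)

predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",  # placeholder inference instance
)

print(predictor.predict({"inputs": "Write a short poem about fine-tuning."}))
```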
Cleaning up Resources
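
Once you are done experimenting, the endpoint is the main ongoing cost; a minimal teardown looks like this (the endpoint name is a placeholder, and the S3 data and any notebook instances should also be removed from the console).

```python
from sagemaker.predictor import Predictor

# Delete the real-time endpoint so it stops incurring charges.
predictor = Predictor(endpoint_name="YOUR-ENDPOINT-NAME")  # placeholder endpoint name
predictor.delete_endpoint()
```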
Where To Go From Here?