AI Engineering Bootcamp: Build, Train & Deploy Models with AWS SageMaker
Section 0: Introduction
AI Engineering Bootcamp: Learn AWS SageMaker with Patrik Szepesi (1:35)
Course Introduction (8:42)
Exercise: Meet Your Classmates and Instructor
Course Resources
ZTM Plugin + Understanding Your Video Player
Set Your Learning Streak Goal
Section 1: Introduction to AWS, Environment Setup, and Best Practices
Setting Up Our AWS Account (4:31)
Set Up IAM Roles + Best Practices (7:39)
AWS Security Best Practices (7:01)
Set Up AWS SageMaker Domain (2:22)
UI Domain Change (0:42)
Setting Up SageMaker Environment (5:08)
SageMaker Studio and Pricing (8:44)
Let's Have Some Fun (+ More Resources)
Section 2: Possible Resource Limit Errors Before Training and Deployment
Quota Increase (7:35)
Section 3: A Gentle Introduction to HuggingFace in SageMaker
Setup: SageMaker Server + PyTorch (6:08)
HuggingFace Models, Sentiment Analysis, and AutoScaling (18:34)
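As a quick illustration of what Section 3 covers, here is a minimal sketch of hosting a Hugging Face sentiment-analysis model on a SageMaker real-time endpoint; the model id, framework versions, and instance type are assumptions, not necessarily the exact values used in the lessons.

```python
# Minimal sketch: hosting a Hugging Face sentiment-analysis model on a
# SageMaker real-time endpoint. Model id, framework versions, and instance
# type are illustrative assumptions.
import sagemaker
from sagemaker.huggingface import HuggingFaceModel

role = sagemaker.get_execution_role()  # IAM role of the SageMaker session

hub_config = {
    "HF_MODEL_ID": "distilbert-base-uncased-finetuned-sst-2-english",  # assumed model
    "HF_TASK": "text-classification",
}

huggingface_model = HuggingFaceModel(
    env=hub_config,
    role=role,
    transformers_version="4.26",  # assumed versions; pick a supported combination
    pytorch_version="1.13",
    py_version="py39",
)

predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",  # assumed instance type
)

print(predictor.predict({"inputs": "I love this course!"}))
```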
Section 4: Gathering a Dataset for Our Multiclass Text Classification Project
Get Dataset for Multiclass Text Classification (6:03)
Link to the Dataset
Creating Our AWS S3 Bucket (3:52)
Uploading Our Training Data to S3 (1:26)
Unlimited Updates
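For the S3 lessons in Section 4, a minimal boto3 sketch of creating a bucket and uploading the training file; the bucket name, region, and file paths are placeholders.

```python
# Minimal sketch: creating an S3 bucket and uploading a training CSV with boto3.
# Bucket name, region, and file names are placeholders.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")  # assumed region

bucket = "my-text-classification-bucket"  # placeholder bucket name
s3.create_bucket(Bucket=bucket)  # regions other than us-east-1 need a LocationConstraint

# Upload the local training file so SageMaker can read it during training
s3.upload_file("train.csv", bucket, "data/train.csv")
print(f"Training data at s3://{bucket}/data/train.csv")
```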
Section 5: Exploratory Data Analysis
Exploratory Data Analysis - Part 1 (13:21)
Exploratory Data Analysis - Part 2 (6:07)
Data Visualization and Best Practices (11:08)
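To accompany the EDA lessons in Section 5, a minimal pandas sketch of the usual first checks on a text-classification dataset; the column names and file path are assumptions.

```python
# Minimal sketch: quick exploratory checks on a text-classification dataset
# with pandas and matplotlib. Column names ("text", "label") are assumptions.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("train.csv")  # placeholder path

print(df.shape)
print(df.isna().sum())                 # missing values per column
print(df["label"].value_counts())      # class balance

# Distribution of text lengths, often used to choose a tokenizer max_length
df["text"].str.split().str.len().hist(bins=50)
plt.xlabel("words per example")
plt.ylabel("count")
plt.show()
```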
Section 6: Setting Up Our Training Notebook
Setting Up Our Training Job Notebook + Reasons to Use SageMaker (18:24)
Python Script for HuggingFace Estimator (13:36)
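A minimal sketch of the HuggingFace estimator that Section 6 builds up to, assuming an illustrative instance type, framework versions, hyperparameter names, and S3 path rather than the course's exact choices.

```python
# Minimal sketch: launching a SageMaker training job with the HuggingFace
# estimator and a custom script.py. Instance type, versions, hyperparameter
# names, and the S3 path are illustrative assumptions.
import sagemaker
from sagemaker.huggingface import HuggingFace

role = sagemaker.get_execution_role()

estimator = HuggingFace(
    entry_point="script.py",          # the training script built in this section
    source_dir=".",
    instance_type="ml.g4dn.xlarge",   # assumed GPU instance
    instance_count=1,
    role=role,
    transformers_version="4.26",      # assumed versions
    pytorch_version="1.13",
    py_version="py39",
    hyperparameters={"epochs": 3, "batch_size": 16, "learning_rate": 2e-5},
)

# The channel name ("training") becomes SM_CHANNEL_TRAINING inside the container
estimator.fit({"training": "s3://my-text-classification-bucket/data/"})
```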
Section 7: Introduction to Tokenizations and Encodings
Creating Our Optional Experiment Notebook - Part 1 (3:21)
Creating Our Optional Experiment Notebook - Part 2 (4:01)
Encoding Categorical Labels to Numeric Values (13:24)
Understanding the Tokenization Vocabulary (15:05)
Encoding Tokens (10:56)
Practical Example of Tokenization and Encoding (12:48)
Course Check-In
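A minimal sketch of the two ideas covered in Section 7, encoding categorical labels and tokenizing text, assuming a DistilBERT tokenizer, a max_length of 32, and placeholder category names.

```python
# Minimal sketch: encoding string labels to integers and tokenizing text with
# a DistilBERT tokenizer. Model name, max_length, and example text are assumptions.
from sklearn.preprocessing import LabelEncoder
from transformers import AutoTokenizer

labels = ["sports", "politics", "tech", "sports"]      # placeholder categories
label_encoder = LabelEncoder()
numeric_labels = label_encoder.fit_transform(labels)    # [1, 0, 2, 1] (alphabetical codes)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
encoding = tokenizer(
    "SageMaker makes training easier",
    padding="max_length",
    truncation=True,
    max_length=32,            # assumed sequence length
    return_tensors="pt",
)
print(encoding["input_ids"])       # token ids, including [CLS] and [SEP]
print(encoding["attention_mask"])  # 1 for real tokens, 0 for padding
```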
Section 8: Setting Up Data Loading with PyTorch
Creating Our Dataset Loader Class (16:56)
Setting Up the PyTorch DataLoader (15:09)
Implement a New Life System
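A minimal sketch of the custom Dataset class and DataLoader discussed in Section 8, with assumed column handling, max_length, and batch size.

```python
# Minimal sketch: a custom PyTorch Dataset that tokenizes on the fly, plus a
# DataLoader. Field names, max_length, and batch size are assumptions.
import torch
from torch.utils.data import Dataset, DataLoader

class TextClassificationDataset(Dataset):
    def __init__(self, texts, labels, tokenizer, max_length=128):
        self.texts, self.labels = texts, labels
        self.tokenizer, self.max_length = tokenizer, max_length

    def __len__(self):
        return len(self.texts)

    def __getitem__(self, idx):
        enc = self.tokenizer(
            self.texts[idx],
            padding="max_length",
            truncation=True,
            max_length=self.max_length,
            return_tensors="pt",
        )
        return {
            "input_ids": enc["input_ids"].squeeze(0),
            "attention_mask": enc["attention_mask"].squeeze(0),
            "label": torch.tensor(self.labels[idx], dtype=torch.long),
        }

# train_texts, train_labels, and tokenizer come from the earlier sections:
# train_loader = DataLoader(
#     TextClassificationDataset(train_texts, train_labels, tokenizer),
#     batch_size=16, shuffle=True)
```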
Section 9: Choose Your Path
Which Path Will You Take? (1:31)
Section 10: Mathematics Behind Large Language Models and Transformers
DistilBERT vs. BERT Differences (4:46)
Embeddings In A Continuous Vector Space (7:40)
Introduction To Positional Encodings (5:13)
Positional Encodings - Part 1 (4:14)
Positional Encodings - Part 2 (Even and Odd Indices) (10:10)
Why Use Sine and Cosine Functions (5:08)
Understanding the Nature of Sine and Cosine Functions (9:52)
Visualizing Positional Encodings in Sine and Cosine Graphs (9:24)
Solving the Equations to Get the Values for Positional Encodings (18:07)
Introduction to Attention Mechanism (3:02)
Query, Key and Value Matrix (18:10)
Getting Started with Our Step-by-Step Attention Calculation (6:53)
Calculating Key Vectors (20:05)
Query Matrix Introduction (10:20)
Calculating Raw Attention Scores (21:24)
Understanding the Mathematics Behind Dot Products and Vector Alignment (13:32)
Visualizing Raw Attention Scores in 2D (5:42)
Converting Raw Attention Scores to Probability Distributions with Softmax (9:16)
Normalization (3:19)
Understanding the Value Matrix and Value Vector (9:07)
Calculating the Final Context Aware Rich Representation for the Word "River" (10:45)
Understanding the Output (1:58)
Understanding Multi-Head Attention (11:55)
Multi-Head Attention Example and Subsequent Layers (9:51)
Masked Language Modeling (2:29)
Exercise: Imposter Syndrome (2:56)
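For reference, the two standard Transformer formulas that Section 10 derives step by step, written out in LaTeX: the sinusoidal positional encodings and scaled dot-product attention.

```latex
% Sinusoidal positional encodings (position pos, embedding dimension d_model):
PE_{(pos,\,2i)}   = \sin\!\left(\frac{pos}{10000^{2i/d_{\text{model}}}}\right), \qquad
PE_{(pos,\,2i+1)} = \cos\!\left(\frac{pos}{10000^{2i/d_{\text{model}}}}\right)

% Scaled dot-product attention with query, key, and value matrices:
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right) V
```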
Section 11: Customizing our Model Architecture in PyTorch
Getting Back to SageMaker!
Creating Our Custom Model Architecture with PyTorch (17:14)
Adding the Dropout, Linear Layer, and ReLU to Our Model (15:31)
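A minimal PyTorch sketch of the architecture described in Section 11: a DistilBERT encoder followed by dropout, a linear layer, and ReLU feeding a classification head; the hidden width and number of classes are assumptions.

```python
# Minimal sketch: DistilBERT encoder + dropout + linear + ReLU + classifier.
# Hidden width (256) and num_classes are assumptions.
import torch.nn as nn
from transformers import DistilBertModel

class CustomDistilBertClassifier(nn.Module):
    def __init__(self, num_classes=5, dropout=0.3):   # assumed class count
        super().__init__()
        self.bert = DistilBertModel.from_pretrained("distilbert-base-uncased")
        self.dropout = nn.Dropout(dropout)
        self.linear = nn.Linear(self.bert.config.dim, 256)  # 768 -> 256, assumed width
        self.relu = nn.ReLU()
        self.classifier = nn.Linear(256, num_classes)

    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls_embedding = outputs.last_hidden_state[:, 0]   # vector at the [CLS] position
        x = self.relu(self.linear(self.dropout(cls_embedding)))
        return self.classifier(x)                         # raw logits per class
```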
Section 12: Creating the Accuracy, Training, and Validation Function
Creating Our Accuracy Function (13:04)
Creating Our Train Function (19:08)
Finishing Our Train Function (8:17)
Setting Up the Validation Function (13:40)
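A minimal sketch of an accuracy helper and one training epoch along the lines of Section 12; the variable names and batching details are assumptions about the course's code.

```python
# Minimal sketch: accuracy function and a single training epoch.
# model, loader, loss_fn, optimizer, and device are assumed to exist.
import torch

def accuracy(logits, labels):
    preds = torch.argmax(logits, dim=1)
    return (preds == labels).float().mean().item()

def train_one_epoch(model, loader, loss_fn, optimizer, device):
    model.train()
    total_loss, total_acc = 0.0, 0.0
    for batch in loader:
        input_ids = batch["input_ids"].to(device)
        attention_mask = batch["attention_mask"].to(device)
        labels = batch["label"].to(device)

        optimizer.zero_grad()
        logits = model(input_ids, attention_mask)
        loss = loss_fn(logits, labels)
        loss.backward()
        optimizer.step()

        total_loss += loss.item()
        total_acc += accuracy(logits, labels)
    return total_loss / len(loader), total_acc / len(loader)
```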
Section 13: Optimizer Functions, Model Parameters, Cross Entropy Loss Function
Passing Parameters In SageMaker (4:05)
Setting Up Model Parameters For Training (4:27)
Understanding The Mathematics Behind Cross Entropy Loss (5:39)
Finishing Our Script.py File (6:56)
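A minimal sketch of Section 13's pieces: hyperparameters arriving in script.py as command-line arguments from SageMaker, an AdamW optimizer, and the cross-entropy loss L = -sum_c y_c * log(p_c); the argument names and defaults are assumptions.

```python
# Minimal sketch: parsing SageMaker hyperparameters and setting up the
# optimizer and cross-entropy loss. Argument names and defaults are assumptions.
import argparse
import torch.nn as nn
from torch.optim import AdamW

parser = argparse.ArgumentParser()
parser.add_argument("--epochs", type=int, default=3)
parser.add_argument("--batch_size", type=int, default=16)
parser.add_argument("--learning_rate", type=float, default=2e-5)
args, _ = parser.parse_known_args()

# model = CustomDistilBertClassifier(...) from the previous section
# optimizer = AdamW(model.parameters(), lr=args.learning_rate)
loss_fn = nn.CrossEntropyLoss()  # applies softmax + negative log-likelihood internally
```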
Section 14: Starting Our Training Job and Monitoring it in AWS CloudWatch
Starting Our Training Job (8:15)
Debugging Our Training Job With AWS CloudWatch (14:17)
Analyzing Our Training Job Results (5:46)
Link to the Finished Trained Model
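In the spirit of the CloudWatch debugging lesson above, a minimal boto3 sketch of pulling a training job's logs; the job name is a placeholder.

```python
# Minimal sketch: reading a SageMaker training job's CloudWatch logs with boto3.
# The training job name is a placeholder.
import boto3

logs = boto3.client("logs")
log_group = "/aws/sagemaker/TrainingJobs"          # SageMaker's training log group
job_name = "huggingface-pytorch-training-2024-01-01-00-00-00-000"  # placeholder

streams = logs.describe_log_streams(
    logGroupName=log_group, logStreamNamePrefix=job_name
)["logStreams"]

for stream in streams:
    events = logs.get_log_events(
        logGroupName=log_group, logStreamName=stream["logStreamName"]
    )["events"]
    for event in events:
        print(event["message"])
```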
Section 15: Deploying our Multiclass Text Classification Endpoint in SageMaker
Creating Our Inference Script For Our PyTorch Model (8:34)
Finishing Our PyTorch Inference Script (9:12)
Setting Up Our Deployment (7:30)
Deploying Our Model To A SageMaker Endpoint (8:54)
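A minimal sketch of the handler functions (model_fn, input_fn, predict_fn, output_fn) that SageMaker's PyTorch serving stack looks for in an inference script; how the model artifact is loaded and the JSON shapes are assumptions.

```python
# Minimal sketch: inference.py handlers for a SageMaker PyTorch endpoint.
# Artifact layout, tokenizer settings, and JSON shapes are assumptions.
import json
import torch
from transformers import AutoTokenizer

def model_fn(model_dir):
    # Load the trained model (assumed saved as a whole pickled module) and tokenizer
    model = torch.load(f"{model_dir}/model.pth", map_location="cpu")
    model.eval()
    tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
    return {"model": model, "tokenizer": tokenizer}

def input_fn(request_body, content_type="application/json"):
    return json.loads(request_body)["text"]

def predict_fn(text, model_artifacts):
    tokenizer, model = model_artifacts["tokenizer"], model_artifacts["model"]
    enc = tokenizer(text, padding="max_length", truncation=True,
                    max_length=128, return_tensors="pt")
    with torch.no_grad():
        logits = model(enc["input_ids"], enc["attention_mask"])
    return int(torch.argmax(logits, dim=1).item())

def output_fn(prediction, accept="application/json"):
    return json.dumps({"predicted_class": prediction})
```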
Section 16: Load Testing Our Machine Learning Model
Introduction to Endpoint Load Testing (4:19)
Information About the Next Lesson
Creating Our Test Data for Load Testing (10:02)
Upload Testing Data to S3 (1:03)
Creating Our Model for Load Testing (3:58)
Starting Our Load Test Job (7:14)
Analyze Load Test Results (10:16)
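A generic sketch of load testing by sending concurrent requests to the endpoint with boto3; this illustrates the idea rather than the specific load-testing setup used in the lessons, and the endpoint name, payload, and request counts are placeholders.

```python
# Minimal sketch: concurrent invocations of a SageMaker endpoint with latency
# timing. Endpoint name, payload, concurrency, and request count are placeholders.
import json
import time
from concurrent.futures import ThreadPoolExecutor
import boto3

runtime = boto3.client("sagemaker-runtime")
ENDPOINT = "multiclass-text-classifier"          # placeholder endpoint name
payload = json.dumps({"text": "Sample news headline for load testing"})

def invoke_once(_):
    start = time.time()
    runtime.invoke_endpoint(
        EndpointName=ENDPOINT,
        ContentType="application/json",
        Body=payload,
    )
    return time.time() - start

with ThreadPoolExecutor(max_workers=20) as pool:        # assumed concurrency
    latencies = list(pool.map(invoke_once, range(200))) # assumed request count

print(f"p50={sorted(latencies)[len(latencies)//2]:.3f}s  max={max(latencies):.3f}s")
```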
Section 17: Production Grade Deployment of Our Machine Learning Model
Deploying Our Endpoint (3:50)
Creating Lambda Function to Call Our Endpoint (10:26)
Setting Up Our AWS API Gateway (5:27)
Testing Our Model with Postman, API Gateway and Lambda (5:39)
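A minimal sketch of a Lambda handler that forwards an API Gateway request to the SageMaker endpoint, as in Section 17; the endpoint name and request/response shapes are assumptions.

```python
# Minimal sketch: Lambda handler bridging API Gateway and a SageMaker endpoint.
# Endpoint name and payload shapes are assumptions.
import json
import os
import boto3

runtime = boto3.client("sagemaker-runtime")
ENDPOINT_NAME = os.environ.get("ENDPOINT_NAME", "multiclass-text-classifier")  # placeholder

def lambda_handler(event, context):
    body = json.loads(event["body"])          # API Gateway proxy integration payload
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=json.dumps({"text": body["text"]}),
    )
    prediction = json.loads(response["Body"].read().decode("utf-8"))
    return {"statusCode": 200, "body": json.dumps(prediction)}
```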
Section 18: Cleaning Up Resources
Cleaning Up Resources (2:51)
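A minimal cleanup sketch: deleting the endpoint, endpoint configuration, and model with boto3 so nothing keeps billing; the resource names are placeholders and may differ from each other in practice.

```python
# Minimal sketch: tearing down the deployed resources. Names are placeholders;
# the endpoint config and model may have different names than the endpoint.
import boto3

sm = boto3.client("sagemaker")
endpoint_name = "multiclass-text-classifier"   # placeholder

sm.delete_endpoint(EndpointName=endpoint_name)
sm.delete_endpoint_config(EndpointConfigName=endpoint_name)
sm.delete_model(ModelName=endpoint_name)
```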
Where To Go From Here?
Thank You! (1:17)
Review This Course!
Become An Alumni
Learning Guideline
ZTM Events Every Month
LinkedIn Endorsements