PyTorch for Deep Learning Bootcamp: Zero to Mastery
Introduction
PyTorch for Deep Learning Bootcamp: Zero to Mastery (3:33)
Course Welcome and What Is Deep Learning (5:53)
Exercise: Meet Your Classmates and Instructor
Course Companion Book + Code + More
Machine Learning + Python Monthly
ZTM Plugin + Understanding Your Video Player
Set Your Learning Streak Goal
Section 00: PyTorch Fundamentals
Why Use Machine Learning or Deep Learning (3:33)
The Number 1 Rule of Machine Learning and What Is Deep Learning Good For (5:39)
Machine Learning vs. Deep Learning (6:06)
Anatomy of Neural Networks (9:21)
Different Types of Learning Paradigms (4:30)
What Can Deep Learning Be Used For (6:21)
What Is and Why PyTorch (10:12)
What Are Tensors (4:15)
What We Are Going To Cover With PyTorch (6:05)
How To and How Not To Approach This Course (5:09)
Important Resources For This Course (5:21)
Getting Setup to Write PyTorch Code (7:39)
Introduction to PyTorch Tensors (13:24)
Creating Random Tensors in PyTorch (9:58)
Creating Tensors With Zeros and Ones in PyTorch (3:08)
Creating a Tensor Range and Tensors Like Other Tensors (5:17)
Dealing With Tensor Data Types (9:24)
Getting Tensor Attributes (8:22)
Manipulating Tensors (Tensor Operations) (5:59)
Matrix Multiplication (Part 1) (9:34)
Matrix Multiplication (Part 2): The Two Main Rules of Matrix Multiplication (7:51)
Matrix Multiplication (Part 3): Dealing With Tensor Shape Errors (12:56)
Finding the Min, Max, Mean and Sum of Tensors (Tensor Aggregation) (6:09)
Finding The Positional Min and Max of Tensors (3:16)
Reshaping, Viewing and Stacking Tensors (13:40)
Squeezing, Unsqueezing and Permuting Tensors (11:55)
Selecting Data From Tensors (Indexing) (9:31)
PyTorch Tensors and NumPy (9:08)
PyTorch Reproducibility (Taking the Random Out of Random) (10:46)
Different Ways of Accessing a GPU in PyTorch (11:50)
Setting Up Device-Agnostic Code and Putting Tensors On and Off the GPU (7:43)
PyTorch Fundamentals: Exercises and Extra-Curriculum (4:49)
Let's Have Some Fun (+ Free Resources)
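The fundamentals section above is all hands-on PyTorch. As a taste of what it covers, here is a minimal sketch (not course code) of creating tensors, reading their attributes, writing device-agnostic code and running a matrix multiplication:

```python
import torch

# Create tensors in a few of the ways covered above
random_tensor = torch.rand(3, 4)        # random values in [0, 1)
zeros = torch.zeros(3, 4)               # all zeros
range_tensor = torch.arange(0, 10, 2)   # 0, 2, 4, 6, 8

# The three attributes the lectures keep coming back to
print(random_tensor.shape, random_tensor.dtype, random_tensor.device)

# Device-agnostic code: use a GPU if one is available, otherwise the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"
tensor_on_device = random_tensor.to(device)

# Matrix multiplication: inner dimensions must match (3x4 @ 4x3 -> 3x3)
print((random_tensor @ random_tensor.T).shape)  # torch.Size([3, 3])
```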
Section 01: PyTorch Workflow
Introduction and Where You Can Get Help (2:45)
Getting Setup and What We Are Covering (7:14)
Creating a Simple Dataset Using the Linear Regression Formula (9:40)
Splitting Our Data Into Training and Test Sets (8:19)
Building a Function to Visualize Our Data (7:45)
Creating Our First PyTorch Model for Linear Regression (14:09)
Breaking Down What's Happening in Our PyTorch Linear Regression Model (6:10)
Discussing Some of the Most Important PyTorch Model Building Classes (6:26)
Checking Out the Internals of Our PyTorch Model (9:50)
Making Predictions With Our Random Model Using Inference Mode (11:12)
Training a Model Intuition (The Things We Need) (8:14)
Setting Up an Optimizer and a Loss Function (12:51)
PyTorch Training Loop Steps and Intuition (13:53)
Writing Code for a PyTorch Training Loop (8:46)
Reviewing the Steps in a Training Loop Step by Step (14:57)
Running Our Training Loop Epoch by Epoch and Seeing What Happens (9:25)
Writing Testing Loop Code and Discussing What's Happening Step by Step (11:37)
Reviewing What Happens in a Testing Loop Step by Step (14:42)
Writing Code to Save a PyTorch Model (13:45)
Writing Code to Load a PyTorch Model (8:44)
Setting Up to Practice Everything We Have Done Using Device-Agnostic Code (6:02)
Putting Everything Together (Part 1): Data (6:07)
Putting Everything Together (Part 2): Building a Model (10:07)
Putting Everything Together (Part 3): Training a Model (12:39)
Putting Everything Together (Part 4): Making Predictions With a Trained Model (5:17)
Putting Everything Together (Part 5): Saving and Loading a Trained Model (9:10)
PyTorch Workflow: Exercises and Extra-Curriculum (3:57)
Unlimited Updates
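The workflow section above follows the data -> model -> loss and optimizer -> training loop -> save pattern end to end. A minimal sketch of that pattern, using made-up linear data rather than the course notebook, looks like this:

```python
import torch
from torch import nn

# Toy dataset from the linear regression formula y = weight * x + bias
weight, bias = 0.7, 0.3
X = torch.arange(0, 1, 0.02).unsqueeze(dim=1)
y = weight * X + bias

# 80/20 train/test split
split = int(0.8 * len(X))
X_train, y_train, X_test, y_test = X[:split], y[:split], X[split:], y[split:]

# Model with a single linear layer, plus a loss function and optimizer
model = nn.Linear(in_features=1, out_features=1)
loss_fn = nn.L1Loss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Training loop: forward pass, loss, zero grad, backward, step
for epoch in range(100):
    model.train()
    y_pred = model(X_train)
    loss = loss_fn(y_pred, y_train)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Evaluate with inference mode, then save the learned parameters
model.eval()
with torch.inference_mode():
    test_pred = model(X_test)
torch.save(model.state_dict(), "model_0.pth")
```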
Section 02: PyTorch Neural Network Classification
Introduction to Machine Learning Classification With PyTorch (9:41)
Classification Problem Example: Input and Output Shapes (9:06)
Typical Architecture of a Classification Neural Network (Overview) (6:30)
Making a Toy Classification Dataset (12:18)
Turning Our Data into Tensors and Making a Training and Test Split (11:55)
Laying Out Steps for Modelling and Setting Up Device-Agnostic Code (4:19)
Coding a Small Neural Network to Handle Our Classification Data (10:57)
Making Our Neural Network Visual (6:57)
Recreating and Exploring the Insides of Our Model Using nn.Sequential (13:17)
Setting Up a Loss Function, Optimizer and Evaluation Function for Our Classification Network (14:50)
Going from Model Logits to Prediction Probabilities to Prediction Labels (16:06)
Coding a Training and Testing Optimization Loop for Our Classification Model (15:26)
Writing Code to Download a Helper Function to Visualize Our Model's Predictions (14:13)
Discussing Options to Improve a Model (8:02)
Creating a New Model with More Layers and Hidden Units (9:06)
Writing Training and Testing Code to See if Our New and Upgraded Model Performs Better (12:45)
Creating a Straight Line Dataset to See if Our Model is Learning Anything (8:07)
Building and Training a Model to Fit on Straight Line Data (10:01)
Evaluating Our Model's Predictions on Straight Line Data (5:23)
Introducing the Missing Piece for Our Classification Model: Non-Linearity (10:00)
Building Our First Neural Network with Non-Linearity (10:25)
Writing Training and Testing Code for Our First Non-Linear Model (15:12)
Making Predictions with and Evaluating Our First Non-Linear Model (5:47)
Replicating Non-Linear Activation Functions with Pure PyTorch (9:34)
Putting It All Together (Part 1): Building a Multiclass Dataset (11:24)
Creating a Multi-Class Classification Model with PyTorch (12:27)
Setting Up a Loss Function and Optimizer for Our Multi-Class Model (6:39)
Going from Logits to Prediction Probabilities to Prediction Labels with a Multi-Class Model (11:01)
Training a Multi-Class Classification Model and Troubleshooting Code on the Fly (16:17)
Making Predictions with and Evaluating Our Multi-Class Classification Model (7:59)
Discussing a Few More Classification Metrics (9:17)
PyTorch Classification: Exercises and Extra-Curriculum (2:58)
Course Check-In
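A recurring step in the classification section above is converting raw model outputs (logits) into prediction probabilities and then into labels. A minimal sketch, with made-up logit values, covering both the binary and multi-class cases:

```python
import torch

# Binary classification: one logit per sample
binary_logits = torch.tensor([[2.3], [-1.1], [0.4]])
binary_probs = torch.sigmoid(binary_logits)       # squash into [0, 1]
binary_labels = torch.round(binary_probs)         # threshold at 0.5

# Multi-class classification: one logit per class per sample
multi_logits = torch.tensor([[1.2, 0.1, -0.5],
                             [-0.3, 2.0, 0.7]])
multi_probs = torch.softmax(multi_logits, dim=1)  # each row sums to 1
multi_labels = torch.argmax(multi_probs, dim=1)   # index of the highest probability

print(binary_labels.squeeze(), multi_labels)
```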
Section 03: PyTorch Computer Vision
What Is a Computer Vision Problem and What We Are Going to Cover (11:47)
Computer Vision Input and Output Shapes (10:08)
What Is a Convolutional Neural Network (CNN) (5:02)
Discussing and Importing the Base Computer Vision Libraries in PyTorch (9:19)
Getting a Computer Vision Dataset and Checking Out Its Input and Output Shapes (14:30)
Visualizing Random Samples of Data (9:51)
DataLoader Overview: Understanding Mini-Batches (7:17)
Turning Our Datasets Into DataLoaders (12:23)
Model 0: Creating a Baseline Model with Two Linear Layers (14:38)
Creating a Loss Function and Optimizer for Model 0 (10:29)
Creating a Function to Time Our Modelling Code (5:34)
Writing Training and Testing Loops for Our Batched Data (21:25)
Writing an Evaluation Function to Get Our Model's Results (12:58)
Setting Up Device-Agnostic Code for Running Experiments on the GPU (3:46)
Model 1: Creating a Model with Non-Linear Functions (9:03)
Model 1: Creating a Loss Function and Optimizer (3:04)
Turning Our Training Loop into a Function (8:28)
Turning Our Testing Loop into a Function (6:35)
Training and Testing Model 1 with Our Training and Testing Functions (11:52)
Getting a Results Dictionary for Model 1 (4:08)
Model 2: Convolutional Neural Networks High-Level Overview (8:24)
Model 2: Coding Our First Convolutional Neural Network with PyTorch (19:48)
Model 2: Breaking Down Conv2D Step by Step (14:59)
Model 2: Breaking Down MaxPool2D Step by Step (15:48)
Model 2: Using a Trick to Find the Input and Output Shapes of Each of Our Layers (13:45)
Model 2: Setting Up a Loss Function and Optimizer (2:38)
Model 2: Training Our First CNN and Evaluating Its Results (7:54)
Comparing the Results of Our Modelling Experiments (7:23)
Making Predictions on Random Test Samples with the Best Trained Model (11:39)
Plotting Our Best Model Predictions on Random Test Samples and Evaluating Them (8:10)
Making Predictions Across the Whole Test Dataset and Importing Libraries to Plot a Confusion Matrix (15:20)
Evaluating Our Best Model's Predictions with a Confusion Matrix (6:54)
Saving and Loading Our Best Performing Model (11:27)
Recapping What We Have Covered Plus Exercises and Extra-Curriculum (6:01)
Implement a New Life System
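The computer vision section above works up from linear layers to a convolutional model trained on batched data. The sketch below (assuming FashionMNIST-style 1x28x28 grayscale images and 10 classes, with random tensors standing in for the real dataset) shows the DataLoader batching and the Conv2d/MaxPool2d building blocks in miniature:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Fake dataset standing in for the torchvision dataset used in the section
images = torch.rand(64, 1, 28, 28)
labels = torch.randint(0, 10, (64,))
dataloader = DataLoader(TensorDataset(images, labels), batch_size=32, shuffle=True)

# A small CNN: conv -> relu -> pool, then flatten into a classifier
model = nn.Sequential(
    nn.Conv2d(in_channels=1, out_channels=10, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2),   # 28x28 feature maps -> 14x14
    nn.Flatten(),
    nn.Linear(10 * 14 * 14, 10),
)

batch_images, batch_labels = next(iter(dataloader))
logits = model(batch_images)
print(logits.shape)  # torch.Size([32, 10])
```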
Section 04: PyTorch Custom Datasets
What Is a Custom Dataset and What We Are Going to Cover (9:53)
Importing PyTorch and Setting Up Device-Agnostic Code (5:54)
Downloading a Custom Dataset of Pizza, Steak and Sushi Images (14:04)
Becoming One With the Data (Part 1): Exploring the Data Format (8:41)
Becoming One With the Data (Part 2): Visualizing a Random Image (11:40)
Becoming One With the Data (Part 3): Visualizing a Random Image with Matplotlib (4:47)
Transforming Data (Part 1): Turning Images Into Tensors (8:53)
Transforming Data (Part 2): Visualizing Transformed Images (11:30)
Loading All of Our Images and Turning Them Into Tensors With ImageFolder (9:17)
Visualizing a Loaded Image From the Train Dataset (7:18)
Turning Our Image Datasets into PyTorch DataLoaders (9:03)
Creating a Custom Dataset Class in PyTorch: High-Level Overview (7:59)
Creating a Helper Function to Get Class Names From a Directory (9:06)
Writing a PyTorch Custom Dataset Class from Scratch to Load Our Images (17:46)
Comparing Our Custom Dataset Class to the Original ImageFolder Class (7:13)
Writing a Helper Function to Visualize Random Images from Our Custom Dataset (14:18)
Turning Our Custom Datasets Into DataLoaders (6:58)
Exploring State of the Art Data Augmentation With Torchvision Transforms (14:23)
Building a Baseline Model (Part 1): Loading and Transforming Data (8:15)
Building a Baseline Model (Part 2): Replicating Tiny VGG from Scratch (11:24)
Building a Baseline Model (Part 3): Doing a Forward Pass to Test Our Model Shapes (8:09)
Using the Torchinfo Package to Get a Summary of Our Model (6:38)
Creating Training and Testing Loop Functions (13:03)
Creating a Train Function to Train and Evaluate Our Models (10:14)
Training and Evaluating Model 0 With Our Training Functions (9:53)
Plotting the Loss Curves of Model 0 (9:02)
Discussing the Balance Between Overfitting and Underfitting and How to Deal With Each (14:13)
Creating Augmented Training Datasets and DataLoaders for Model 1 (11:03)
Constructing and Training Model 1 (7:10)
Plotting the Loss Curves of Model 1 (3:22)
Plotting the Loss Curves of All of Our Models Against Each Other (10:55)
Predicting on Custom Data (Part 1): Downloading an Image (5:32)
Predicting on Custom Data (Part 2): Loading In a Custom Image With PyTorch (7:00)
Predicting on Custom Data (Part 3): Getting Our Custom Image Into the Right Format (14:06)
Predicting on Custom Data (Part 4): Turning Our Model's Raw Outputs Into Prediction Labels (4:24)
Predicting on Custom Data (Part 5): Putting It All Together (12:47)
Summary of What We Have Covered Plus Exercises and Extra-Curriculum (6:04)
Exercise: Imposter Syndrome (2:55)
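The custom datasets section above culminates in writing a Dataset subclass that loads images from class-named folders. A minimal sketch of that idea follows; the class name, the `.jpg` glob and the ImageFolder-style directory layout (root/class_name/image.jpg) are assumptions for illustration:

```python
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset

class ImageFolderCustom(Dataset):
    def __init__(self, targ_dir, transform=None):
        self.paths = list(Path(targ_dir).glob("*/*.jpg"))           # all image files
        self.transform = transform
        self.classes = sorted({p.parent.name for p in self.paths})  # class names from folder names
        self.class_to_idx = {name: i for i, name in enumerate(self.classes)}

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, index):
        image = Image.open(self.paths[index])
        label = self.class_to_idx[self.paths[index].parent.name]
        if self.transform:
            image = self.transform(image)
        return image, label
```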
Section 05: PyTorch Going Modular
What Is Going Modular and What We Are Going to Cover (11:34)
Going Modular Notebook (Part 1): Running It End to End (7:39)
Downloading a Dataset (4:49)
Writing the Outline for Our First Python Script to Setup the Data (13:50)
Creating a Python Script to Create Our PyTorch DataLoaders (10:35)
Turning Our Model Building Code into a Python Script (9:18)
Turning Our Model Training Code into a Python Script (6:16)
Turning Our Utility Function to Save a Model into a Python Script (6:06)
Creating a Training Script to Train Our Model in One Line of Code (15:46)
Going Modular: Summary, Exercises and Extra-Curriculum (5:59)
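"Going modular" means moving notebook code into Python scripts that can be run from the command line in one line of code. A minimal, self-contained sketch of that pattern (with a stand-in model and random data rather than the course's actual scripts) might look like this:

```python
# train.py - run with: python train.py --epochs 5 --lr 0.001
import argparse

import torch
from torch import nn

def build_model(num_classes: int) -> nn.Module:
    # Stand-in model; the course builds its own architecture in a separate script
    return nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, num_classes))

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--epochs", type=int, default=5)
    parser.add_argument("--lr", type=float, default=0.001)
    args = parser.parse_args()

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = build_model(num_classes=3).to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=args.lr)
    loss_fn = nn.CrossEntropyLoss()

    # Fake batch standing in for a DataLoader created by a data-setup script
    X = torch.rand(8, 3, 64, 64).to(device)
    y = torch.randint(0, 3, (8,)).to(device)
    for epoch in range(args.epochs):
        loss = loss_fn(model(X), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        print(f"epoch {epoch} | loss {loss.item():.4f}")
```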
Section 06: PyTorch Transfer Learning
Introduction: What is Transfer Learning and Why Use It (10:05)
Where Can You Find Pretrained Models and What We Are Going to Cover (5:12)
Installing the Latest Versions of Torch and Torchvision (8:05)
Downloading Our Previously Written Code from Going Modular (6:41)
Downloading Pizza, Steak, Sushi Image Data from Github (8:00)
Turning Our Data into DataLoaders with Manually Created Transforms (14:40)
Turning Our Data into DataLoaders with Automatically Created Transforms (13:06)
Which Pretrained Model Should You Use (12:15)
Setting Up a Pretrained Model with Torchvision (10:57)
Different Kinds of Transfer Learning (7:11)
Getting a Summary of the Different Layers of Our Model (6:49)
Freezing the Base Layers of Our Model and Updating the Classifier Head (13:26)
Training Our First Transfer Learning Feature Extractor Model (7:54)
Plotting the Loss Curves of Our Transfer Learning Model (6:26)
Outlining the Steps to Make Predictions on the Test Images (7:57)
Creating a Function to Predict On and Plot Images (10:00)
Making and Plotting Predictions on Test Images (7:23)
Making a Prediction on a Custom Image (6:21)
Main Takeaways, Exercises and Extra Curriculum (3:21)
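The transfer learning section above follows the feature-extractor recipe: load a pretrained model, freeze its base layers and replace the classifier head. A minimal sketch, assuming torchvision 0.13+ (for the weights API), EfficientNet-B0 and a 3-class problem:

```python
import torch
import torchvision
from torch import nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# 1. Get a pretrained model along with its matching transforms
weights = torchvision.models.EfficientNet_B0_Weights.DEFAULT
transforms = weights.transforms()
model = torchvision.models.efficientnet_b0(weights=weights).to(device)

# 2. Freeze the base layers so only the new head gets trained
for param in model.features.parameters():
    param.requires_grad = False

# 3. Replace the classifier head to match our number of classes
model.classifier = nn.Sequential(
    nn.Dropout(p=0.2, inplace=True),
    nn.Linear(in_features=1280, out_features=3),  # e.g. pizza, steak, sushi
).to(device)
```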
Section 07: PyTorch Experiment Tracking
What Is Experiment Tracking and Why Track Experiments (7:06)
Getting Setup by Importing Torch Libraries and Going Modular Code (8:13)
Creating a Function to Download Data (10:23)
Turning Our Data into DataLoaders Using Manual Transforms (8:30)
Turning Our Data into DataLoaders Using Automatic Transforms (7:47)
Preparing a Pretrained Model for Our Own Problem (10:28)
Setting Up a Way to Track a Single Model Experiment with TensorBoard (13:35)
Training a Single Model and Saving the Results to TensorBoard (4:38)
Exploring Our Single Model's Results with TensorBoard (10:17)
Creating a Function to Create SummaryWriter Instances (10:45)
Adapting Our Train Function to Be Able to Track Multiple Experiments (4:57)
What Experiments Should You Try (5:59)
Discussing the Experiments We Are Going to Try (6:01)
Downloading Datasets for Our Modelling Experiments (6:31)
Turning Our Datasets into DataLoaders Ready for Experimentation (8:28)
Creating Functions to Prepare Our Feature Extractor Models (15:54)
Coding Out the Steps to Run a Series of Modelling Experiments (14:27)
Running Eight Different Modelling Experiments in 5 Minutes (3:50)
Viewing Our Modelling Experiments in TensorBoard (13:38)
Loading In the Best Model and Making Predictions on Random Images from the Test Set (10:32)
Making a Prediction on Our Own Custom Image with the Best Model (3:44)
Main Takeaways, Exercises and Extra Curriculum (3:56)
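Experiment tracking in this section is done with TensorBoard via `torch.utils.tensorboard.SummaryWriter`. A minimal sketch of logging one experiment's train and test loss (the log-directory naming scheme and the loss values are made up for illustration):

```python
from torch.utils.tensorboard import SummaryWriter

# One writer per experiment keeps runs separated in the TensorBoard UI
writer = SummaryWriter(log_dir="runs/effnetb0_10_percent_5_epochs")

for epoch in range(5):
    train_loss, test_loss = 0.9 / (epoch + 1), 1.0 / (epoch + 1)  # placeholder values
    writer.add_scalars(main_tag="Loss",
                       tag_scalar_dict={"train_loss": train_loss,
                                        "test_loss": test_loss},
                       global_step=epoch)
writer.close()
# View the results with: tensorboard --logdir runs
```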
Section 08: PyTorch Paper Replicating
What Is a Machine Learning Research Paper? (7:34)
Why Replicate a Machine Learning Research Paper? (3:13)
Where Can You Find Machine Learning Research Papers and Code? (8:18)
What We Are Going to Cover (8:21)
Getting Setup for Coding in Google Colab (8:21)
Downloading Data for Food Vision Mini (4:02)
Turning Our Food Vision Mini Images into PyTorch DataLoaders (9:47)
Visualizing a Single Image (3:45)
Replicating a Vision Transformer - High Level Overview (9:53)
Breaking Down Figure 1 of the ViT Paper (11:12)
Breaking Down the Four Equations: Overview and a Trick for Reading Papers (10:55)
Breaking Down Equation 1 (8:14)
Breaking Down Equations 2 and 3 (10:03)
Breaking Down Equation 4 (7:27)
Breaking Down Table 1 (11:05)
Calculating the Input and Output Shape of the Embedding Layer by Hand (15:41)
Turning a Single Image into Patches (Part 1: Patching the Top Row) (15:03)
Turning a Single Image into Patches (Part 2: Patching the Entire Image) (12:33)
Creating Patch Embeddings with a Convolutional Layer (13:33)
Exploring the Outputs of Our Convolutional Patch Embedding Layer (12:54)
Flattening Our Convolutional Feature Maps into a Sequence of Patch Embeddings (9:59)
Visualizing a Single Sequence Vector of Patch Embeddings (5:03)
Creating the Patch Embedding Layer with PyTorch (17:01)
Creating the Class Token Embedding (13:24)
Creating the Class Token Embedding - Less Birds (13:24)
Creating the Position Embedding (11:25)
Equation 1: Putting it All Together (13:25)
Equation 2: Multihead Attention Overview (14:30)
Equation 2: Layernorm Overview (9:03)
Turning Equation 2 into Code (14:33)
Checking the Inputs and Outputs of Equation 2 (5:40)
Equation 3: Replication Overview (9:11)
Turning Equation 3 into Code (11:25)
Transformer Encoder Overview (8:50)
Combining Equations 2 and 3 to Create the Transformer Encoder (9:16)
Creating a Transformer Encoder Layer with an In-Built PyTorch Layer (15:54)
Bringing Our Own Vision Transformer to Life - Part 1: Gathering the Pieces of the Puzzle (18:19)
Bringing Our Own Vision Transformer to Life - Part 2: Putting Together the Forward Method (10:41)
Getting a Visual Summary of Our Custom Vision Transformer (7:13)
Creating a Loss Function and Optimizer from the ViT Paper (11:26)
Training our Custom ViT on Food Vision Mini (4:29)
Discussing what Our Training Setup Is Missing (9:08)
Plotting a Loss Curve for Our ViT Model (6:13)
Getting a Pretrained Vision Transformer from Torchvision and Setting it Up (14:37)
Preparing Data to Be Used with a Pretrained ViT (5:53)
Training a Pretrained ViT Feature Extractor Model for Food Vision Mini (7:15)
Saving Our Pretrained ViT Model to File and Inspecting Its Size (5:13)
Discussing the Trade-Offs of Using a Larger Model for Deployment (3:46)
Making Predictions on a Custom Image with Our Pretrained ViT (3:30)
PyTorch Paper Replicating: Main Takeaways, Exercises and Extra-Curriculum (6:50)
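The core trick in the ViT replication above is the patch embedding: a Conv2d whose kernel size and stride both equal the patch size splits an image into patches and projects each patch to an embedding vector in a single layer. A minimal sketch using the ViT-Base values from Table 1 of the paper (16x16 patches, 768-dimensional embeddings):

```python
import torch
from torch import nn

patch_size, embedding_dim = 16, 768            # ViT-Base values from Table 1
image = torch.rand(1, 3, 224, 224)             # (batch, colour channels, height, width)

patcher = nn.Conv2d(in_channels=3, out_channels=embedding_dim,
                    kernel_size=patch_size, stride=patch_size)
flatten = nn.Flatten(start_dim=2, end_dim=3)   # flatten the 14x14 grid of patches

patches = patcher(image)                                # -> [1, 768, 14, 14]
patch_embeddings = flatten(patches).permute(0, 2, 1)    # -> [1, 196, 768]
print(patch_embeddings.shape)
```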
Section 09: PyTorch Model Deployment
What is Machine Learning Model Deployment and Why Deploy a Machine Learning Model (9:35)
Three Questions to Ask for Machine Learning Model Deployment (7:13)
Where Is My Model Going to Go? (13:34)
How Is My Model Going to Function? (7:59)
Some Tools and Places to Deploy Machine Learning Models (5:49)
What We Are Going to Cover (4:01)
Getting Setup to Code (6:15)
Downloading a Dataset for Food Vision Mini (3:23)
Outlining Our Food Vision Mini Deployment Goals and Modelling Experiments (7:59)
Creating an EffNetB2 Feature Extractor Model (9:45)
Creating a Function to Make an EffNetB2 Feature Extractor Model and Transforms (6:29)
Creating DataLoaders for EffNetB2 (3:31)
Training Our EffNetB2 Feature Extractor and Inspecting the Loss Curves (9:15)
Saving Our EffNetB2 Model to File (3:24)
Getting the Size of Our EffNetB2 Model in Megabytes (5:51)
Collecting Important Statistics and Performance Metrics for Our EffNetB2 Model (6:34)
Creating a Vision Transformer Feature Extractor Model (7:51)
Creating DataLoaders for Our ViT Feature Extractor Model (2:30)
Training Our ViT Feature Extractor Model and Inspecting Its Loss Curves (6:19)
Saving Our ViT Feature Extractor and Inspecting Its Size (5:08)
Collecting Stats About Our ViT Feature Extractor (5:51)
Outlining the Steps for Making and Timing Predictions for Our Models (11:15)
Creating a Function to Make and Time Predictions with Our Models (16:20)
Making and Timing Predictions with EffNetB2 (10:43)
Making and Timing Predictions with ViT (7:34)
Comparing EffNetB2 and ViT Model Statistics (11:31)
Visualizing the Performance vs Speed Trade-off (15:54)
Gradio Overview and Installation (8:39)
Gradio Function Outline (8:49)
Creating a Predict Function to Map Our Food Vision Mini Inputs to Outputs (9:51)
Creating a List of Examples to Pass to Our Gradio Demo (5:26)
Bringing Food Vision Mini to Life in a Live Web Application (12:12)
Getting Ready to Deploy Our App: Hugging Face Spaces Overview (6:26)
Outlining the File Structure of Our Deployed App (8:11)
Creating a Food Vision Mini Demo Directory to House Our App Files (4:11)
Creating an Examples Directory with Example Food Vision Mini Images (9:13)
Writing Code to Move Our Saved EffNetB2 Model File (7:42)
Turning Our EffNetB2 Model Creation Function Into a Python Script (4:01)
Turning Our Food Vision Mini Demo App Into a Python Script (13:27)
Creating a Requirements File for Our Food Vision Mini App (4:11)
Downloading Our Food Vision Mini App Files from Google Colab (11:30)
Uploading Our Food Vision Mini App to Hugging Face Spaces Programmatically (13:36)
Running Food Vision Mini on Hugging Face Spaces and Trying it Out (7:44)
Food Vision Big Project Outline (4:17)
Preparing an EffNetB2 Feature Extractor Model for Food Vision Big (9:38)
Downloading the Food 101 Dataset (7:45)
Creating a Function to Split Our Food 101 Dataset into Smaller Portions (13:36)
Turning Our Food 101 Datasets into DataLoaders (7:23)
Training Food Vision Big: Our Biggest Model Yet! (20:15)
Outlining the File Structure for Our Food Vision Big (5:48)
Downloading an Example Image and Moving Our Food Vision Big Model File (3:33)
Saving Food 101 Class Names to a Text File and Reading them Back In (6:56)
Turning Our EffNetB2 Feature Extractor Creation Function into a Python Script (2:20)
Creating an App Script for Our Food Vision Big Model Gradio Demo (10:41)
Zipping and Downloading Our Food Vision Big App Files (3:45)
Deploying Food Vision Big to Hugging Face Spaces (13:34)
PyTorch Model Deployment: Main Takeaways, Extra-Curriculum and Exercises (6:13)
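The deployment section above wraps the trained model in a Gradio demo and pushes it to Hugging Face Spaces. A minimal sketch of the Gradio part is below; the predict function returns placeholder probabilities rather than calling a real model:

```python
import gradio as gr
from PIL import Image

class_names = ["pizza", "steak", "sushi"]

def predict(img: Image.Image) -> dict:
    # Placeholder: a real app would transform the image and run it through the model
    return {name: 1 / len(class_names) for name in class_names}

demo = gr.Interface(fn=predict,
                    inputs=gr.Image(type="pil"),
                    outputs=gr.Label(num_top_classes=3),
                    title="Food Vision Mini")
demo.launch()
```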
Introduction to PyTorch 2.0 and torch.compile
Introduction to PyTorch 2.0 (6:01)
What We Are Going to Cover and PyTorch 2 Reference Materials (1:21)
Getting Started with PyTorch 2.0 in Google Colab (4:19)
PyTorch 2.0 - 30 Second Intro (3:20)
Getting Setup for PyTorch 2.0 (2:22)
Getting Info from Our GPUs and Seeing if They're Capable of Using PyTorch 2.0 (6:49)
Setting the Default Device in PyTorch 2.0 (9:40)
Discussing the Experiments We Are Going to Run for PyTorch 2.0 (6:42)
Creating a Function to Setup Our Model and Transforms (10:17)
Discussing How to Get Better Relative Speedups for Training Models (8:23)
Setting the Batch Size and Data Size Programmatically (7:15)
Getting More Speedups with TensorFloat-32 (9:53)
Downloading the CIFAR10 Dataset (7:00)
Creating Training and Test DataLoaders (7:38)
Preparing Training and Testing Loops with Timing Steps (4:58)
Experiment 1 - Single Run without Torch Compile (8:22)
Experiment 2 - Single Run with Torch Compile (10:38)
Comparing the Results of Experiments 1 and 2 (11:19)
Saving the Results of Experiments 1 and 2 (4:39)
Preparing Functions for Experiments 3 and 4 (12:41)
Experiment 3 - Training a Non-Compiled Model for Multiple Runs (12:44)
Experiment 4 - Training a Compiled Model for Multiple Runs (9:57)
Comparing the Results of Experiments 3 and 4 (5:23)
Potential Extensions and Resources to Learn More (5:50)
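The headline feature of the PyTorch 2.0 section is `torch.compile`, which wraps an existing model without changing its behaviour and can speed up training and inference, especially on newer GPUs. A minimal sketch, assuming PyTorch 2.0+ and torchvision are installed:

```python
import torch
import torchvision

model = torchvision.models.resnet50()
compiled_model = torch.compile(model)    # requires PyTorch 2.0+

x = torch.rand(1, 3, 224, 224)
out = compiled_model(x)                  # first call triggers compilation, later calls can be faster
print(out.shape)                         # torch.Size([1, 1000])
```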
Where To Go From Here?
Thank You! (1:17)
Review This Course!
Become An Alumni
Learning Guideline
ZTM Events Every Month
LinkedIn Endorsements