Machine Learning with Hugging Face Bootcamp: Zero to Mastery
Introduction
Machine Learning with Hugging Face Bootcamp: Zero to Mastery (1:48)
Exercise: Meet Your Classmates and Instructor
ZTM Plugin + Understanding Your Video Player
Course Resources
Set Your Learning Streak Goal
Course Overview
Overview (5:02)
Project 0 - Introduction to Text Classification
Introduction to Text Classification (5:43)
What We're Going To Build! (7:21)
Project 0 - Let's Get Started!
Getting Set Up: Adding Hugging Face Tokens to Google Colab (5:52)
Getting Set Up: Importing Necessary Libraries to Google Colab (9:35)
Downloading a Text Classification Dataset from Hugging Face Datasets (16:00)
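The setup lessons above authenticate Google Colab with a Hugging Face token and pull a text classification dataset from the Hugging Face Hub. A minimal sketch of that workflow, using the public "imdb" dataset as a stand-in (the course's own dataset may differ):

```python
# Minimal sketch: authenticate, then download a text classification dataset.
# "imdb" is a stand-in; the dataset used in the course may differ.
from huggingface_hub import login
from datasets import load_dataset

login()  # prompts for a Hugging Face access token (or read it from Colab secrets)

dataset = load_dataset("imdb")  # DatasetDict with "train" and "test" splits
print(dataset)
print(dataset["train"][0])      # a single sample: {"text": ..., "label": ...}
```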
Project 0 - Preparing Text Data & Evaluation Metric
Preparing Text Data for Use with a Model - Part 1: Turning Our Labels into Numbers (12:48)
Preparing Text Data for Use with a Model - Part 2: Creating Train and Test Sets (6:18)
Preparing Text Data for Use with a Model - Part 3: Getting a Tokenizer (12:53)
Preparing Text Data for Use with a Model - Part 4: Exploring Our Tokenizer (10:26)
Preparing Text Data for Use with a Model - Part 5: Creating a Function to Tokenize Our Data (17:57)
Setting Up an Evaluation Metric (to measure how well our model performs) (8:53)
Let's Have Some Fun (+ More Resources)
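The preparation lessons above map labels to numbers, create train and test sets, tokenize the text, and set up an evaluation metric. A condensed, hedged sketch of those steps, again using "imdb" and "distilbert-base-uncased" as stand-ins:

```python
# Condensed sketch of the data preparation and metric steps; the checkpoint and
# dataset names are stand-ins for whatever the course uses.
import numpy as np
import evaluate
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("imdb", split="train[:1000]")
# imdb labels are already integers; map string labels to ids if yours aren't.
dataset = dataset.train_test_split(test_size=0.2, seed=42)  # train/test sets

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize_batch(batch):
    # Truncate long texts so they fit the model's maximum sequence length.
    return tokenizer(batch["text"], truncation=True)

tokenized_dataset = dataset.map(tokenize_batch, batched=True)

# Accuracy as the evaluation metric, computed from the model's logits.
accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```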
Project 0 - Model Training
Introduction to Transfer Learning (a powerful technique to get good results quickly) (7:10)
Model Training - Part 1: Setting Up a Pretrained Model from the Hugging Face Hub (12:19)
Model Training - Part 2: Counting the Parameters in Our Model (12:27)
Model Training - Part 3: Creating a Folder to Save Our Model (3:53)
Model Training - Part 4: Setting Up Our Training Arguments with TrainingArguments (14:59)
Model Training - Part 5: Setting Up an Instance of Trainer with Hugging Face Transformers (5:05)
Model Training - Part 6: Training Our Model and Fixing Errors Along the Way (13:34)
Model Training - Part 7: Inspecting Our Model's Loss Curves (14:39)
Model Training - Part 8: Uploading Our Model to the Hugging Face Hub (8:01)
Unlimited Updates
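The model training lessons above load a pretrained model from the Hub, count its parameters, configure TrainingArguments, and train with Trainer. A minimal sketch, assuming the tokenized_dataset, tokenizer, and compute_metrics objects from the preparation sketch above (argument values are illustrative, not the course's exact settings):

```python
# Minimal fine-tuning sketch with Trainer; values are illustrative only.
from transformers import (AutoModelForSequenceClassification, DataCollatorWithPadding,
                          Trainer, TrainingArguments)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

# Count trainable parameters (see the counting-parameters lesson).
print(sum(p.numel() for p in model.parameters() if p.requires_grad))

training_args = TrainingArguments(
    output_dir="models/text_classifier",  # folder to save checkpoints into
    num_train_epochs=3,
    per_device_train_batch_size=32,
    learning_rate=2e-5,
    eval_strategy="epoch",                # older transformers versions: evaluation_strategy
    save_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset["train"],  # from the preparation sketch above
    eval_dataset=tokenized_dataset["test"],
    data_collator=DataCollatorWithPadding(tokenizer=tokenizer),
    compute_metrics=compute_metrics,
)

trainer.train()
# trainer.push_to_hub()  # optional: upload the trained model to the Hugging Face Hub
```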
Project 0 - Making Predictions
Making Predictions on the Test Data with Our Trained Model (5:58)
Turning Our Predictions into Prediction Probabilities with PyTorch (12:48)
Sorting Our Model's Predictions by Their Probability (5:10)
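These lessons turn the trained model's raw outputs (logits) into probabilities and sort them by confidence. A short PyTorch sketch, assuming the trainer and tokenized test split from the sketches above:

```python
# Sketch: logits -> probabilities -> predictions sorted by confidence.
import torch

test_outputs = trainer.predict(tokenized_dataset["test"])  # from the sketches above
logits = torch.tensor(test_outputs.predictions)

pred_probs = torch.softmax(logits, dim=-1)   # probabilities sum to 1 across classes
pred_labels = pred_probs.argmax(dim=-1)      # index of the most likely class

# Sort samples by the probability of their predicted class (most confident first).
top_probs = pred_probs.max(dim=-1).values
most_confident = torch.argsort(top_probs, descending=True)
print(pred_labels[most_confident[:5]], top_probs[most_confident[:5]])
```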
Project 0 - Performing Inference
Performing Inference - Part 1: Discussing Our Options (9:40)
Performing Inference - Part 2: Using a Transformers Pipeline (one sample at a time) (10:01)
Performing Inference - Part 3: Using a Transformers Pipeline on Multiple Samples at a Time (Batching) (6:38)
Performing Inference - Part 4: Running Speed Tests to Compare One at a Time vs. Batched Predictions (10:33)
Performing Inference - Part 5: Performing Inference with PyTorch (12:06)
OPTIONAL - Putting It All Together: From Data Loading, to Model Training, to Making Predictions on Custom Data (34:28)
Implement a New Life System
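The inference lessons above compare running a Transformers pipeline one sample at a time versus batched, and doing the same with plain PyTorch. A minimal pipeline sketch, using a public sentiment checkpoint as a stand-in for the course's own trained model:

```python
# Sketch: inference with a Transformers pipeline, single and batched.
# The checkpoint is a public stand-in; swap in your own trained model's repo ID.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    device=0,  # GPU 0 if available; use device=-1 (or omit) for CPU
)

# One sample at a time.
print(classifier("This project turned out great!"))

# Multiple samples at a time (batching) - usually faster on a GPU.
samples = ["Loved it.", "Not my favourite.", "Would train again."] * 8
print(classifier(samples, batch_size=8))
```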
Project 0 - Launching Our Model!
Turning Our Model into a Demo - Part 1: Gradio Overview (3:47)
Turning Our Model into a Demo - Part 2: Building a Function to Map Inputs to Outputs (7:07)
Turning Our Model into a Demo - Part 3: Getting Our Gradio Demo Running Locally (6:46)
Making Our Demo Publicly Accessible - Part 1: Introduction to Hugging Face Spaces and Creating a Demos Directory (8:01)
Making Our Demo Publicly Accessible - Part 2: Creating an App File (12:14)
Making Our Demo Publicly Accessible - Part 3: Creating a README File (7:07)
Making Our Demo Publicly Accessible - Part 4: Making a Requirements File (3:33)
Making Our Demo Publicly Accessible - Part 5: Uploading Our Demo to Hugging Face Spaces and Making it Publicly Available (18:43)
Summary, Exercises and Extensions (5:55)
Course Check-In
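The launch lessons above wrap the model in a Gradio demo and publish it to Hugging Face Spaces via an app file, README, and requirements file. A minimal app-file sketch, again using a public checkpoint as a stand-in; a Space would pair this with a README.md (holding the Space metadata) and a requirements.txt listing gradio, transformers and torch:

```python
# app.py - minimal Gradio demo sketch for a text classifier.
# The checkpoint is a public stand-in; replace it with your own Hub repo ID.
import gradio as gr
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="distilbert-base-uncased-finetuned-sst-2-english")

def classify_text(text: str) -> dict:
    # Map the input text to {label: probability} for gr.Label to display.
    outputs = classifier(text, top_k=None)  # scores for every class
    return {out["label"]: float(out["score"]) for out in outputs}

demo = gr.Interface(
    fn=classify_text,
    inputs=gr.Textbox(lines=3, label="Text to classify"),
    outputs=gr.Label(num_top_classes=2, label="Prediction probabilities"),
    title="Text classification demo",
)

if __name__ == "__main__":
    demo.launch()  # add share=True for a temporary public link from Colab
```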
Project 1 - Building a Custom Object Detection Model with Hugging Face Transformers
Project 1 Overview and Resources
Introduction (10:03)
Setting Up Google Colab with Hugging Face Tokens (5:51)
Installing Necessary Dependencies (3:43)
Getting an Object Detection Dataset (7:37)
Inspecting the Features of Our Dataset (6:23)
Creating a Colour Palette to Visualize Our Classes (9:35)
Creating a Helper Function to Halve Our Image Sizes (4:24)
Creating a Helper Function to Halve Our Box Sizes (6:01)
Testing Our Helper Functions (4:33)
Outlining the Steps to Draw Boxes on an Image (6:26)
Plotting Bounding Boxes on a Single Image Step by Step (19:04)
Different Bounding Box Formats (8:17)
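Project 1's opening lessons load an object detection dataset, build a colour palette for the classes, and draw bounding boxes on images. A hedged sketch of the box-drawing step with Pillow, assuming absolute XYXY boxes and made-up class names:

```python
# Sketch: draw XYXY bounding boxes and class labels on a PIL image.
# The image, boxes, labels and colours below are made-up example values.
from PIL import Image, ImageDraw

image = Image.new("RGB", (640, 480), color="white")   # stand-in for a dataset image
boxes = [(40, 60, 220, 300), (300, 100, 500, 420)]    # (x_min, y_min, x_max, y_max)
labels = ["trash", "bin"]
palette = {"trash": "red", "bin": "blue"}              # one colour per class

draw = ImageDraw.Draw(image)
for box, label in zip(boxes, labels):
    draw.rectangle(box, outline=palette[label], width=3)           # box outline
    draw.text((box[0], box[1] - 12), label, fill=palette[label])   # class name above box

image.show()
```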
Project 1 - Getting an Object Detection Model and Image Preprocessor
Getting an Object Detection Model (6:15)
Transfer Learning Overview (6:08)
Downloading our Model from the Hugging Face Hub and Trying it Out (9:26)
Inspecting the Layers of Our Model (6:53)
Counting the Number of Parameters in Our Model (10:54)
Creating a Function to Build Our Custom Model (13:15)
Passing a Single Image Sample Through Our Model - Part 1 (15:47)
OPTIONAL: Data Preprocessor Model Workflow (8:46)
Loading Our Model's Image Preprocessor and Customizing It for Our Use Case (20:10)
Exercise: Imposter Syndrome (2:55)
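These lessons download a pretrained detection model plus its image preprocessor from the Hub, inspect and count its parameters, and customise it for the project's own classes. A minimal sketch using the public facebook/detr-resnet-50 checkpoint and hypothetical class names as stand-ins:

```python
# Sketch: load a pretrained object detection model and image processor,
# swapping the classification head for custom classes. Names are stand-ins.
from transformers import AutoImageProcessor, AutoModelForObjectDetection

checkpoint = "facebook/detr-resnet-50"
id2label = {0: "trash", 1: "bin", 2: "hand"}  # hypothetical custom classes
label2id = {label: idx for idx, label in id2label.items()}

image_processor = AutoImageProcessor.from_pretrained(checkpoint)

model = AutoModelForObjectDetection.from_pretrained(
    checkpoint,
    id2label=id2label,
    label2id=label2id,
    ignore_mismatched_sizes=True,  # re-initialise the head for our number of classes
)

# Count total and trainable parameters (see the counting-parameters lesson).
total = sum(p.numel() for p in model.parameters())
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Total: {total:,} | Trainable: {trainable:,}")
```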
Project 1 - Getting Hands-on with Different Bounding Box Formats
Bounding Box Formats 101
Discussing the Format Our Model Expects Our Annotations In (COCO) (6:17)
Creating Dataclasses to Hold the COCO Format (9:54)
Creating a Function to Turn Our Annotations into COCO Format (12:05)
Preprocessing a Single Image Sample and COCO Formatted Annotations (7:26)
Post Processing a Single Output (12:02)
Plotting a Single Post Processed Sample onto an Image (12:44)
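This section contrasts bounding box formats (XYXY corners, COCO's XYWH, normalised CXCYWH) and builds COCO-style annotations for the preprocessor. A small conversion sketch with torchvision, plus a COCO-style annotation dict; the exact field names the course's dataclasses use may differ:

```python
# Sketch: convert between common bounding box formats and build a COCO-style annotation.
import torch
from torchvision.ops import box_convert

# One box in XYXY (corner) format: x_min, y_min, x_max, y_max.
box_xyxy = torch.tensor([[40.0, 60.0, 220.0, 300.0]])

box_xywh = box_convert(box_xyxy, in_fmt="xyxy", out_fmt="xywh")      # COCO: x, y, width, height
box_cxcywh = box_convert(box_xyxy, in_fmt="xyxy", out_fmt="cxcywh")  # centre x/y, width, height
print(box_xywh, box_cxcywh)

# COCO-style annotations in the shape many Hugging Face image processors expect.
coco_annotation = {
    "image_id": 0,
    "annotations": [
        {"image_id": 0, "category_id": 1, "bbox": box_xywh[0].tolist(),
         "area": float(box_xywh[0, 2] * box_xywh[0, 3]), "iscrowd": 0}
    ],
}
print(coco_annotation)
```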
Project 1 - OPTIONAL: Reproducing Our Model's Post Processed Outputs by Hand
OPTIONAL: Reproducing Our Model's Post Processed Outputs by Hand - Part 1: Overview (10:44)
OPTIONAL: Reproducing Our Model's Post Processed Outputs by Hand - Part 2: Replicating Scores by Hand (28:32)
OPTIONAL: Reproducing Our Model's Post Processed Outputs by Hand - Part 3: Replicating Labels by Hand (12:32)
OPTIONAL: Reproducing Our Model's Post Processed Outputs by Hand - Part 4: Replicating Boxes by Hand Overview (10:23)
OPTIONAL: Reproducing Our Model's Post Processed Outputs by Hand - Part 5: Replicating Boxes by Hand Implementation (17:41)
OPTIONAL: Reproducing Our Model's Post Processed Outputs by Hand - Part 6: Plotting Our Manual Post Processed Outputs on an Image (6:44)
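These optional lessons reproduce what post-processing does under the hood: softmax the class logits, keep the best non-background class per query, and rescale the normalised CXCYWH boxes to absolute XYXY pixels. A hedged sketch of that logic for a DETR-style output, using random placeholder tensors:

```python
# Sketch: manually replicate DETR-style post-processing of raw model outputs.
# Shapes follow DETR conventions; the tensors here are random placeholders.
import torch
from torchvision.ops import box_convert

num_queries, num_classes = 100, 3
logits = torch.randn(num_queries, num_classes + 1)  # last column = "no object"
pred_boxes = torch.rand(num_queries, 4)             # normalised (cx, cy, w, h) in [0, 1]
image_height, image_width = 480, 640

# 1. Scores and labels: softmax over classes, drop the background column, take the max.
probs = logits.softmax(dim=-1)[:, :-1]
scores, labels = probs.max(dim=-1)

# 2. Boxes: cxcywh -> xyxy, then scale from [0, 1] to pixel coordinates.
boxes_xyxy = box_convert(pred_boxes, in_fmt="cxcywh", out_fmt="xyxy")
boxes_xyxy = boxes_xyxy * torch.tensor(
    [image_width, image_height, image_width, image_height])

# 3. Keep only confident detections.
keep = scores > 0.5
print(scores[keep], labels[keep], boxes_xyxy[keep])
```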
Project 1 - Preparing our Data for Training our Object Detection Model
Preparing Our Data at Scale - Part 1: Concept Overview (9:21)
Preparing Our Data at Scale - Part 2: Creating Train, Validation and Test Splits (12:13)
Preparing Our Data at Scale - Part 3: Preprocessing Multiple Samples at a Time Overview (8:16)
Preparing Our Data at Scale - Part 4: Making a Function to Preprocess Multiple Samples at a Time (21:37)
Preparing Our Data at Scale - Part 5: Applying Our Preprocessing Function to Our Datasets (9:37)
Preparing Our Data at Scale - Part 6: Creating a Data Collation Function (12:20)
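These lessons scale the single-sample preprocessing to whole train/validation/test splits and finish with a data collation function, because detection samples can't simply be stacked (each image carries a different number of boxes). A hedged collate-function sketch in the shape Trainer expects for DETR-like models, assuming the image processor has already resized every image to a common size:

```python
# Sketch: data collation for object detection batches.
# Assumes each preprocessed sample holds "pixel_values" (tensor) and "labels" (dict);
# exact keys can vary between model families and processor settings.
import torch

def collate_fn(batch):
    # Images share a size after preprocessing, so their tensors stack cleanly.
    pixel_values = torch.stack([sample["pixel_values"] for sample in batch])
    # Labels stay as a list of dicts: each image has a different number of boxes.
    labels = [sample["labels"] for sample in batch]
    return {"pixel_values": pixel_values, "labels": labels}
```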
Project 1 - Training a Custom Object Detection Model for Trashify
Training a Custom Model - Part 1: Overview (7:42)
Training a Custom Model - Part 2: Creating a Model and Folder to Save Our Model to (4:11)
Training a Custom Model - Part 3: Creating TrainingArguments for Our Model Overview (12:53)
Training a Custom Model - Part 4: Creating our First TrainingArguments (11:11)
Training a Custom Model - Part 5: Finishing Off the TrainingArguments (12:39)
Training a Custom Model - Part 6: OPTIONAL - Creating a Custom Optimizer for Different Learning Rates (16:05)
Training a Custom Model - Part 7: Creating an Evaluation Function for Our Model Overview (13:09)
Training a Custom Model - Part 8: Creating an Evaluation Function for Our Model Targets Processing (22:49)
Training a Custom Model - Part 9: Creating an Evaluation Function for Our Model Predictions Processing (13:52)
Training a Custom Model - Part 10: Training Our Model with Trainer (12:53)
Training a Custom Model - Part 11: Plotting Our Model's Loss Curves (8:35)
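Most of this section mirrors the Project 0 training flow (TrainingArguments plus Trainer), and the optional Part 6 adds a custom optimizer so the pretrained backbone learns more slowly than the new detection head. A minimal parameter-group sketch with PyTorch's AdamW, assuming a DETR-style model whose backbone parameter names contain "backbone":

```python
# Sketch: one optimizer, two learning rates - smaller for the pretrained backbone.
# Checkpoint and learning rates are illustrative stand-ins.
import torch
from transformers import AutoModelForObjectDetection

model = AutoModelForObjectDetection.from_pretrained("facebook/detr-resnet-50")

backbone_params, head_params = [], []
for name, param in model.named_parameters():
    if not param.requires_grad:
        continue
    (backbone_params if "backbone" in name else head_params).append(param)

optimizer = torch.optim.AdamW(
    [
        {"params": backbone_params, "lr": 1e-5},  # gentle updates for pretrained layers
        {"params": head_params, "lr": 1e-4},      # larger updates for the new head
    ],
    weight_decay=1e-4,
)

# Trainer can use it via: Trainer(..., optimizers=(optimizer, None))
```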
Project 1 - Evaluating our Trained Object Detection Model
Evaluating Our Model on the Test Dataset (11:13)
Making Predictions on Test Data and Visualizing Them (24:20)
Plotting Our Model's Predictions vs. the Ground Truth Images (12:00)
Trying Our Model on Images from the Wild (9:49)
Uploading Our Trained Model to the Hugging Face Hub (10:46)
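The evaluation lessons run the trained model on the test set and on images from the wild, then upload it to the Hub. A hedged single-image inference sketch using the image processor's post_process_object_detection method and a public stand-in checkpoint:

```python
# Sketch: detect objects in one image and post-process the raw outputs.
# "facebook/detr-resnet-50" stands in for the course's trained Trashify model.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

checkpoint = "facebook/detr-resnet-50"
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForObjectDetection.from_pretrained(checkpoint).eval()

image = Image.new("RGB", (640, 480), color="white")  # stand-in for an image from the wild

inputs = image_processor(images=image, return_tensors="pt")
with torch.inference_mode():
    outputs = model(**inputs)

# Convert raw outputs into scores/labels/boxes in the original image's pixel coordinates.
results = image_processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=[(image.height, image.width)])[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(f"{model.config.id2label[label.item()]}: {score:.3f} at {box.tolist()}")

# Upload the trained model and processor to the Hub (repo name is hypothetical):
# model.push_to_hub("your-username/trashify-detector")
# image_processor.push_to_hub("your-username/trashify-detector")
```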
Project 1 - Bringing Trashify to Life: Turning our Custom Model into a Shareable Demo
Turning Our Model into a Demo - Part 1: Gradio and Hugging Face Spaces Overview (10:10)
Turning Our Model into a Demo - Part 2: Creating an App File Overview (7:10)
Turning Our Model into a Demo - Part 3: Building the Main Function of Our App File (27:32)
Turning Our Model into a Demo - Part 4: Finishing Off Our App File and Testing Our Demo (9:56)
Turning Our Model into a Demo - Part 5: Creating a Readme and Requirements File (3:31)
Turning Our Model into a Demo - Part 6: Getting Example Images for Our Demo (8:19)
Turning Our Model into a Demo - Part 7: Uploading Our Demo to the Hugging Face Hub (17:18)
Turning Our Model into a Demo - Part 8: Embedding Our Demo into Our Notebook (3:44)
Summary, Extensions and Extra-Curriculum (6:15)
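The demo lessons assemble an app file, README, requirements file and example images, then publish the folder as a Hugging Face Space. A small sketch of the upload step with huggingface_hub; the folder path and Space name are hypothetical placeholders:

```python
# Sketch: create a Gradio Space and upload a local demo folder to it.
# The folder path and repo ID are hypothetical placeholders.
from huggingface_hub import HfApi

api = HfApi()  # uses the token from huggingface_hub.login() or the HF_TOKEN variable

api.create_repo(
    repo_id="your-username/trashify-demo",
    repo_type="space",
    space_sdk="gradio",  # run the app with the Gradio SDK
    exist_ok=True,
)

api.upload_folder(
    folder_path="demos/trashify_demo",  # local folder with app.py, README.md, requirements.txt
    repo_id="your-username/trashify-demo",
    repo_type="space",
)
```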
Where To Go From Here?
Thank You! (1:17)
Review This Course!
Become An Alumni
Learning Guideline
ZTM Events Every Month
LinkedIn Endorsements