Learn Hugging Face by Building a Custom AI Model
Introduction
Introduction (Hugging Face Ecosystem and Text Classification) (6:52)
More Text Classification Examples (4:40)
What We're Going To Build! (7:21)
Exercise: Meet Your Classmates and Instructor
Course Resources
Let's Get Started!
Getting Set Up: Adding Hugging Face Tokens to Google Colab (5:52)
Getting Set Up: Importing Necessary Libraries to Google Colab (9:35)
Downloading a Text Classification Dataset from Hugging Face Datasets (16:00)
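The lesson above downloads a text-classification dataset from Hugging Face Datasets. As a minimal stdlib-only sketch of the shape of such data (the example rows and labels are illustrative, not the course's actual dataset):

```python
# A text-classification dataset is essentially a list of records,
# each pairing a piece of text with a string label.
# These rows are illustrative, not the course's actual data.
dataset = [
    {"text": "I need to reset my password", "label": "account"},
    {"text": "When will my order arrive?", "label": "shipping"},
    {"text": "The app crashes on startup", "label": "bug"},
]

# The set of unique labels defines the classification problem.
labels = sorted({row["label"] for row in dataset})
print(labels)  # ['account', 'bug', 'shipping']
```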
Preparing Text Data & Evaluation Metric
Preparing Text Data for Use with a Model - Part 1: Turning Our Labels into Numbers (12:48)
Preparing Text Data for Use with a Model - Part 2: Creating Train and Test Sets (6:18)
Preparing Text Data for Use with a Model - Part 3: Getting a Tokenizer (12:53)
Preparing Text Data for Use with a Model - Part 4: Exploring Our Tokenizer (10:26)
Preparing Text Data for Use with a Model - Part 5: Creating a Function to Tokenize Our Data (17:57)
Setting Up an Evaluation Metric (to measure how well our model performs) (8:53)
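The data-preparation lessons above turn string labels into numbers, split the data into train and test sets, and set up an evaluation metric. A minimal stdlib-only sketch of those three ideas (the example data, split ratio, and helper names are illustrative, not the course's code):

```python
import random

# Hypothetical labelled examples (not the course's dataset).
data = [
    {"text": "great product", "label": "positive"},
    {"text": "terrible service", "label": "negative"},
    {"text": "works as expected", "label": "positive"},
    {"text": "broke after a day", "label": "negative"},
]

# Part 1: turn string labels into integer ids (models predict numbers).
label2id = {label: i for i, label in enumerate(sorted({d["label"] for d in data}))}
id2label = {i: label for label, i in label2id.items()}
for d in data:
    d["label_id"] = label2id[d["label"]]

# Part 2: shuffle, then split into train and test sets (75/25 here).
random.seed(42)
shuffled = random.sample(data, len(data))
split = int(0.75 * len(shuffled))
train_set, test_set = shuffled[:split], shuffled[split:]

# Evaluation metric: accuracy = fraction of predictions matching the labels.
def accuracy(preds, refs):
    return sum(p == r for p, r in zip(preds, refs)) / len(refs)

print(accuracy([0, 1, 1, 0], [0, 1, 0, 0]))  # 0.75
```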
Model Training
Introduction to Transfer Learning (a powerful technique to get good results quickly) (7:10)
Model Training - Part 1: Setting Up a Pretrained Model from the Hugging Face Hub (12:19)
Model Training - Part 2: Counting the Parameters in Our Model (12:27)
Model Training - Part 3: Creating a Folder to Save Our Model (3:53)
Model Training - Part 4: Setting Up Our Training Arguments with TrainingArguments (14:59)
Model Training - Part 5: Setting Up an Instance of Trainer with Hugging Face Transformers (5:05)
Model Training - Part 6: Training Our Model and Fixing Errors Along the Way (13:34)
Model Training - Part 7: Inspecting Our Model's Loss Curves (14:39)
Model Training - Part 8: Uploading Our Model to the Hugging Face Hub (8:01)
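The training lessons above rely on transfer learning: reusing a pretrained model and adapting it to new labels. One small, self-contained piece of that workflow is counting a model's parameters (Part 2). A sketch of the arithmetic using weight-matrix shapes (the shapes below are illustrative, not those of the course's actual model):

```python
# Count parameters from (rows, cols) weight shapes plus bias terms.
# The shapes below are illustrative, not those of the course's model.
def count_params(layers):
    total = 0
    for rows, cols in layers:
        total += rows * cols + rows  # weight matrix + one bias per output
    return total

# A tiny hypothetical classifier head: 768-dim features -> 2 labels.
head = [(2, 768)]
print(count_params(head))  # 2*768 + 2 = 1538
```

In transfer learning, the pretrained body's parameters vastly outnumber the new head's, which is why fine-tuning only a small part of the model can still give good results quickly.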
Making Predictions
Making Predictions on the Test Data with Our Trained Model (5:58)
Turning Our Predictions into Prediction Probabilities with PyTorch (12:48)
Sorting Our Model's Predictions by Their Probability (5:10)
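The prediction lessons above turn raw model outputs (logits) into probabilities and sort them from most to least likely. The course does this with PyTorch's softmax; the same math in stdlib Python, with hypothetical logits and label names:

```python
import math

# Hypothetical raw model outputs (logits) for three classes.
logits = [2.0, 0.5, -1.0]
labels = ["positive", "neutral", "negative"]

# Softmax turns logits into probabilities that sum to 1.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Sort (label, probability) pairs from most to least likely.
ranked = sorted(zip(labels, probs), key=lambda pair: pair[1], reverse=True)
print(ranked[0][0])  # 'positive' — the highest-probability class
```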
Performing Inference
Performing Inference - Part 1: Discussing Our Options (9:40)
Performing Inference - Part 2: Using a Transformers Pipeline (one sample at a time) (10:01)
Performing Inference - Part 3: Using a Transformers Pipeline on Multiple Samples at a Time (Batching) (6:38)
Performing Inference - Part 4: Running Speed Tests to Compare One at a Time vs. Batched Predictions (10:33)
Performing Inference - Part 5: Performing Inference with PyTorch (12:06)
OPTIONAL - Putting It All Together: From Data Loading, to Model Training, to Making Predictions on Custom Data (34:28)
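The inference lessons above compare one-sample-at-a-time prediction with batched prediction. The speed-up comes from grouping samples so the model processes several at once; a minimal sketch of the batching step itself (the helper name and sample texts are illustrative):

```python
# Group samples into fixed-size batches so a model can process
# several at once (batched inference is usually faster than
# predicting one sample at a time).
def batch(samples, batch_size):
    return [samples[i:i + batch_size] for i in range(0, len(samples), batch_size)]

texts = ["a", "b", "c", "d", "e"]
print(batch(texts, 2))  # [['a', 'b'], ['c', 'd'], ['e']]
```

Note the final batch may be smaller than `batch_size`; inference code needs to handle that ragged last batch.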
Launching Our Model!
Turning Our Model into a Demo - Part 1: Gradio Overview (3:47)
Turning Our Model into a Demo - Part 2: Building a Function to Map Inputs to Outputs (7:07)
Turning Our Model into a Demo - Part 3: Getting Our Gradio Demo Running Locally (6:46)
Making Our Demo Publicly Accessible - Part 1: Introduction to Hugging Face Spaces and Creating a Demos Directory (8:01)
Making Our Demo Publicly Accessible - Part 2: Creating an App File (12:14)
Making Our Demo Publicly Accessible - Part 3: Creating a README File (7:07)
Making Our Demo Publicly Accessible - Part 4: Making a Requirements File (3:33)
Making Our Demo Publicly Accessible - Part 5: Uploading Our Demo to Hugging Face Spaces and Making it Publicly Available (18:43)
Summary Exercises and Extensions (5:55)
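The demo lessons above center on building a plain function that maps a text input to label probabilities, which Gradio then wraps in a UI. A stub of that mapping (the keyword rule is a placeholder standing in for the trained model, and the label names are illustrative):

```python
# Gradio wraps a plain function that maps an input string to outputs.
# This stand-in "model" uses a keyword rule instead of real predictions.
def predict(text):
    if "good" in text.lower():
        return {"positive": 0.9, "negative": 0.1}
    return {"positive": 0.1, "negative": 0.9}

print(predict("This is good!"))  # {'positive': 0.9, 'negative': 0.1}

# With Gradio installed, wiring it up looks roughly like:
#   import gradio as gr
#   gr.Interface(fn=predict, inputs="text", outputs="label").launch()
```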
Where To Go From Here?
Review This Project!
This lecture is available exclusively for ZTM Academy members.
If you're already a member, you'll need to log in.