TensorFlow for Deep Learning Bootcamp: Zero to Mastery
Introduction
TensorFlow for Deep Learning: Zero to Mastery (1:38)
Course Outline (4:16)
Exercise: Meet Your Classmates and Instructor
All Course Resources + Notebooks
Python + Machine Learning Monthly
ZTM Plugin + Understanding Your Video Player
Set Your Learning Streak Goal
Deep Learning and TensorFlow Fundamentals
What is deep learning? (4:38)
Why use deep learning? (9:38)
What are neural networks? (10:26)
What is deep learning already being used for? (8:36)
What is and why use TensorFlow? (7:56)
What is a Tensor? (3:37)
What we're going to cover throughout the course (4:29)
How to approach this course (5:34)
Need A Refresher?
Creating your first tensors with TensorFlow and tf.constant() (18:45)
Creating tensors with TensorFlow and tf.Variable() (7:07)
Creating random tensors with TensorFlow (9:40)
Shuffling the order of tensors (9:40)
Creating tensors from NumPy arrays (11:55)
Getting information from your tensors (tensor attributes) (11:57)
Indexing and expanding tensors (12:33)
Manipulating tensors with basic operations (5:34)
Matrix multiplication with tensors part 1 (11:53)
Matrix multiplication with tensors part 2 (13:29)
Matrix multiplication with tensors part 3 (10:03)
Changing the datatype of tensors (6:55)
Tensor aggregation (finding the min, max, mean & more) (9:49)
Tensor troubleshooting example (updating tensor datatypes) (6:13)
Finding the positional minimum and maximum of a tensor (argmin and argmax) (9:31)
Squeezing a tensor (removing all 1-dimension axes) (2:59)
One-hot encoding tensors (5:46)
Trying out more tensor math operations (4:47)
Exploring TensorFlow and NumPy's compatibility (5:43)
Making sure our tensor operations run really fast on GPUs (10:19)
TensorFlow Fundamentals challenge, exercises & extra-curriculum
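A minimal sketch of the tensor fundamentals covered in the lessons above, assuming TensorFlow 2.x is installed (exact values and variable names in the lectures may differ):

    import numpy as np
    import tensorflow as tf

    # Constant (immutable) and Variable (mutable) tensors
    matrix = tf.constant([[1., 2.], [3., 4.]])
    changeable = tf.Variable([10, 7])
    changeable[0].assign(3)                              # Variables can be updated in place

    # Random tensors and reproducible shuffling
    random_tensor = tf.random.Generator.from_seed(42).normal(shape=(3, 2))
    shuffled = tf.random.shuffle(matrix, seed=42)

    # Tensor attributes, basic operations and datatype changes
    print(matrix.shape, matrix.ndim, matrix.dtype)
    print(tf.matmul(matrix, tf.transpose(matrix)))       # matrix multiplication
    print(tf.cast(matrix, dtype=tf.float16))

    # Aggregation, positional min/max, squeezing and one-hot encoding
    print(tf.reduce_min(matrix), tf.reduce_max(matrix), tf.reduce_mean(matrix))
    print(tf.argmax(tf.constant([0.1, 0.9, 0.3])))       # index of the largest value
    print(tf.squeeze(tf.constant([[[1, 2, 3]]])))        # remove all 1-dimension axes
    print(tf.one_hot([0, 1, 2], depth=3))

    # NumPy compatibility works in both directions
    from_numpy = tf.constant(np.arange(1, 7, dtype=np.float32), shape=(2, 3))
    back_to_numpy = from_numpy.numpy()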
Let's Have Some Fun (+ Free Resources)
Neural network regression with TensorFlow
Introduction to Neural Network Regression with TensorFlow (7:33)
Inputs and outputs of a neural network regression model (8:59)
Anatomy and architecture of a neural network regression model (7:55)
Creating sample regression data (so we can model it) (12:46)
Note: Code update for upcoming lecture(s) for TensorFlow 2.7.0+ fix
Endorsements On LinkedIn
The major steps in modelling with TensorFlow (20:15)
Steps in improving a model with TensorFlow part 1 (6:02)
Steps in improving a model with TensorFlow part 2 (9:25)
Steps in improving a model with TensorFlow part 3 (12:33)
Evaluating a TensorFlow model part 1 ("visualise, visualise, visualise") (7:24)
Evaluating a TensorFlow model part 2 (the three datasets) (11:01)
Evaluating a TensorFlow model part 3 (getting a model summary) (17:18)
Evaluating a TensorFlow model part 4 (visualising a model's layers) (7:14)
Evaluating a TensorFlow model part 5 (visualising a model's predictions) (9:16)
Evaluating a TensorFlow model part 6 (common regression evaluation metrics) (8:05)
Evaluating a TensorFlow regression model part 7 (mean absolute error) (5:52)
Evaluating a TensorFlow regression model part 8 (mean squared error) (3:18)
Setting up TensorFlow modelling experiments part 1 (start with a simple model) (13:50)
Setting up TensorFlow modelling experiments part 2 (increasing complexity) (11:29)
Comparing and tracking your TensorFlow modelling experiments (10:20)
How to save a TensorFlow model (8:19)
How to load and use a saved TensorFlow model (10:15)
(Optional) How to save and download files from Google Colab (6:18)
Putting together what we've learned part 1 (preparing a dataset) (13:31)
Putting together what we've learned part 2 (building a regression model) (13:20)
Putting together what we've learned part 3 (improving our regression model) (15:47)
Preprocessing data with feature scaling part 1 (what is feature scaling?) (9:34)
Preprocessing data with feature scaling part 2 (normalising our data) (10:57)
Preprocessing data with feature scaling part 3 (fitting a model on scaled data) (7:40)
TensorFlow Regression challenge, exercises & extra-curriculum
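A minimal sketch of the regression workflow from the lessons above (build, compile, fit, evaluate, save), using tiny synthetic data in place of the lecture datasets; the saving format may differ slightly between TensorFlow/Keras versions:

    import tensorflow as tf

    # Synthetic data where y = X + 10, standing in for the lecture's example data
    X = tf.range(-100., 100., 4.)
    y = X + 10.
    X_train, y_train = X[:40], y[:40]
    X_test, y_test = X[40:], y[40:]

    # Build, compile and fit a small regression model
    tf.random.set_seed(42)
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(100, activation="relu"),
        tf.keras.layers.Dense(10, activation="relu"),
        tf.keras.layers.Dense(1)
    ])
    model.compile(loss="mae",
                  optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
                  metrics=["mae", "mse"])
    model.fit(tf.expand_dims(X_train, axis=-1), y_train, epochs=100, verbose=0)

    # Evaluate with common regression metrics, then save and reload the model
    print(model.evaluate(tf.expand_dims(X_test, axis=-1), y_test, verbose=0))
    model.save("regression_model.h5")                    # HDF5 here; SavedModel/.keras also work
    loaded_model = tf.keras.models.load_model("regression_model.h5")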
Unlimited Updates
Neural network classification in TensorFlow
Introduction to neural network classification in TensorFlow (8:25)
Example classification problems (and their inputs and outputs) (6:38)
Input and output tensors of classification problems (6:21)
Typical architecture of neural network classification models with TensorFlow (9:36)
Creating and viewing classification data to model (11:34)
Checking the input and output shapes of our classification data (4:38)
Building a not very good classification model with TensorFlow (12:10)
Trying to improve our not very good classification model (9:13)
Creating a function to view our model's not so good predictions (15:08)
Note: Updates for TensorFlow 2.7.0
Make our poor classification model work for a regression dataset (12:18)
Non-linearity part 1: Straight lines and non-straight lines (9:38)
Non-linearity part 2: Building our first neural network with non-linearity (5:47)
Non-linearity part 3: Upgrading our non-linear model with more layers (10:18)
Non-linearity part 4: Modelling our non-linear data once and for all (8:37)
Non-linearity part 5: Replicating non-linear activation functions from scratch (14:26)
Getting great results in less time by tweaking the learning rate (14:47)
Using the TensorFlow History object to plot a model's loss curves (6:11)
Using callbacks to find a model's ideal learning rate (17:32)
Training and evaluating a model with an ideal learning rate (9:20)
Introducing more classification evaluation methods (6:04)
Finding the accuracy of our classification model (4:17)
Creating our first confusion matrix (to see where our model is getting confused) (8:27)
Making our confusion matrix prettier (14:00)
Putting things together with multi-class classification part 1: Getting the data (10:37)
Multi-class classification part 2: Becoming one with the data (7:07)
Multi-class classification part 3: Building a multi-class classification model (15:38)
Multi-class classification part 4: Improving performance with normalisation (12:43)
Multi-class classification part 5: Comparing normalised and non-normalised data (4:13)
Multi-class classification part 6: Finding the ideal learning rate (10:38)
Multi-class classification part 7: Evaluating our model (13:16)
Multi-class classification part 8: Creating a confusion matrix (4:26)
Multi-class classification part 9: Visualising random model predictions (10:42)
What "patterns" is our model learning? (15:33)
TensorFlow classification challenge, exercises & extra-curriculum
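A minimal sketch of the classification ideas above (non-linear activations, a sigmoid output, accuracy and a confusion matrix), using scikit-learn's make_circles as a stand-in for the lecture data:

    import tensorflow as tf
    from sklearn.datasets import make_circles

    # Non-linearly separable toy data
    X, y = make_circles(n_samples=1000, noise=0.03, random_state=42)
    X_train, y_train = X[:800], y[:800]
    X_test, y_test = X[800:], y[800:]

    # ReLU layers give the model non-linearity; sigmoid squashes the output to [0, 1]
    tf.random.set_seed(42)
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(4, activation="relu"),
        tf.keras.layers.Dense(4, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid")
    ])
    model.compile(loss="binary_crossentropy",
                  optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
                  metrics=["accuracy"])
    history = model.fit(X_train, y_train, epochs=25, verbose=0)   # history.history holds the loss curves

    # Evaluate, then see where the model gets confused
    loss, accuracy = model.evaluate(X_test, y_test, verbose=0)
    preds = tf.squeeze(tf.round(model.predict(X_test, verbose=0)))
    print(accuracy)
    print(tf.math.confusion_matrix(y_test, preds))

For the multi-class lessons later in the module, the output layer becomes Dense(n_classes, activation="softmax") with a categorical (or sparse categorical) crossentropy loss.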
Course Check-In
Computer Vision and Convolutional Neural Networks in TensorFlow
Introduction to Computer Vision with TensorFlow (9:36)
Introduction to Convolutional Neural Networks (CNNs) with TensorFlow (7:59)
Downloading an image dataset for our first Food Vision model (8:27)
Becoming One With Data (5:05)
Becoming One With Data Part 2 (12:26)
Becoming One With Data Part 3 (4:22)
Building an end-to-end CNN model (18:17)
Using a GPU to run our CNN model 5x faster (9:17)
Trying a non-CNN model on our image data (8:51)
Improving our non-CNN model by adding more layers (9:52)
Breaking our CNN model down part 1: Becoming one with the data (9:03)
Breaking our CNN model down part 2: Preparing to load our data (11:46)
Breaking our CNN model down part 3: Loading our data with ImageDataGenerator (9:54)
Breaking our CNN model down part 4: Building a baseline CNN model (8:02)
Breaking our CNN model down part 5: Looking inside a Conv2D layer (15:20)
Breaking our CNN model down part 6: Compiling and fitting our baseline CNN (7:14)
Breaking our CNN model down part 7: Evaluating our CNN's training curves (11:45)
Breaking our CNN model down part 8: Reducing overfitting with Max Pooling (13:40)
Breaking our CNN model down part 9: Reducing overfitting with data augmentation (6:52)
Breaking our CNN model down part 10: Visualizing our augmented data (15:04)
Breaking our CNN model down part 11: Training a CNN model on augmented data (8:49)
Breaking our CNN model down part 12: Discovering the power of shuffling data (10:01)
Breaking our CNN model down part 13: Exploring options to improve our model (5:21)
Downloading a custom image to make predictions on (4:54)
Writing a helper function to load and preprocess custom images (10:00)
Making a prediction on a custom image with our trained CNN (10:08)
Multi-class CNNs part 1: Becoming one with the data (14:59)
Multi-class CNNs part 2: Preparing our data (turning it into tensors) (6:38)
Multi-class CNNs part 3: Building a multi-class CNN model (7:24)
Multi-class CNNs part 4: Fitting a multi-class CNN model to the data (6:02)
Multi-class CNNs part 5: Evaluating our multi-class CNN model (4:51)
Multi-class CNNs part 6: Trying to fix overfitting by removing layers (12:19)
Multi-class CNNs part 7: Trying to fix overfitting with data augmentation (11:45)
Multi-class CNNs part 8: Things you could do to improve your CNN model (4:23)
Multi-class CNNs part 9: Making predictions with our model on custom images (9:22)
Saving and loading our trained CNN model (6:21)
TensorFlow computer vision and CNNs challenge, exercises & extra-curriculum
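A minimal sketch of the CNN workflow above; the pizza_steak directory paths are placeholders for whatever image folders you download in the lectures, and the augmentation arguments are illustrative:

    import tensorflow as tf
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    train_dir = "pizza_steak/train"      # placeholder paths: point these at your own image folders
    test_dir = "pizza_steak/test"

    # Rescale pixel values to [0, 1] and augment the training images
    train_datagen = ImageDataGenerator(rescale=1/255.,
                                       rotation_range=20,
                                       horizontal_flip=True,
                                       zoom_range=0.2)
    test_datagen = ImageDataGenerator(rescale=1/255.)
    train_data = train_datagen.flow_from_directory(train_dir, target_size=(224, 224),
                                                   batch_size=32, class_mode="binary")
    test_data = test_datagen.flow_from_directory(test_dir, target_size=(224, 224),
                                                 batch_size=32, class_mode="binary")

    # Baseline CNN: Conv2D layers learn visual patterns, MaxPool2D condenses them
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(10, 3, activation="relu", input_shape=(224, 224, 3)),
        tf.keras.layers.MaxPool2D(),
        tf.keras.layers.Conv2D(10, 3, activation="relu"),
        tf.keras.layers.MaxPool2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1, activation="sigmoid")
    ])
    model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
    model.fit(train_data, epochs=5, validation_data=test_data)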
Implement a New Life System
Transfer Learning in TensorFlow Part 1: Feature extraction
What is and why use transfer learning? (10:12)
Downloading and preparing data for our first transfer learning model (14:39)
Introducing Callbacks in TensorFlow and making a callback to track our models (10:01)
Exploring the TensorFlow Hub website for pretrained models (9:51)
Building and compiling a TensorFlow Hub feature extraction model (14:00)
Blowing our previous models out of the water with transfer learning (9:13)
Plotting the loss curves of our ResNet feature extraction model (7:35)
Building and training a pre-trained EfficientNet model on our data (9:42)
Different Types of Transfer Learning (11:40)
Comparing Our Model's Results (15:16)
TensorFlow Transfer Learning Part 1 challenge, exercises & extra-curriculum
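A minimal sketch of feature extraction with TensorFlow Hub as covered above. The resnet_url below is one example handle; current model URLs are listed on tfhub.dev (now served via Kaggle Models), and train_data/test_data would be image datasets like those built in the CNN section:

    import tensorflow as tf
    import tensorflow_hub as hub

    # Example feature-vector handle (check tfhub.dev for current URLs)
    resnet_url = "https://tfhub.dev/google/imagenet/resnet_v2_50/feature_vector/4"

    # Wrap the pretrained, frozen model as a Keras layer and add our own output layer
    feature_extractor_layer = hub.KerasLayer(resnet_url,
                                             trainable=False,       # keep the learned patterns frozen
                                             input_shape=(224, 224, 3))
    model = tf.keras.Sequential([
        feature_extractor_layer,
        tf.keras.layers.Dense(10, activation="softmax")             # e.g. 10 food classes
    ])
    model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])

    # TensorBoard callback to track the experiment (log_dir is an arbitrary example path)
    tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="tensorflow_hub/resnet50_v2")
    # model.fit(train_data, epochs=5, validation_data=test_data,
    #           callbacks=[tensorboard_callback])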
Transfer Learning in TensorFlow Part 2: Fine tuning
Introduction to Transfer Learning in TensorFlow Part 2: Fine-tuning (6:16)
Importing a script full of helper functions (and saving lots of space) (7:35)
Exercise: Imposter Syndrome (2:55)
Downloading and turning our images into a TensorFlow BatchDataset (15:38)
Discussing the four (actually five) modelling experiments we're running (2:15)
Comparing the TensorFlow Keras Sequential API versus the Functional API (2:34)
Note: Fixes for EfficientNetB0 model creation + weight loading
Creating our first model with the TensorFlow Keras Functional API (11:38)
Compiling and fitting our first Functional API model (10:53)
Getting a feature vector from our trained model (13:39)
Drilling into the concept of a feature vector (a learned representation) (3:43)
Downloading and preparing the data for Model 1 (1 percent of training data) (9:51)
Building a data augmentation layer to use inside our model (12:06)
Note: Small fix for next video, for images not augmenting
Visualizing what happens when images pass through our data augmentation layer (10:55)
Building Model 1 (with a data augmentation layer and 1% of training data) (15:55)
Building Model 2 (with a data augmentation layer and 10% of training data) (16:37)
Creating a ModelCheckpoint to save our model's weights during training (7:25)
Fitting and evaluating Model 2 (and saving its weights using ModelCheckpoint) (7:14)
Loading and comparing saved weights to our existing trained Model 2 (7:17)
Preparing Model 3 (our first fine-tuned model) (20:26)
Fitting and evaluating Model 3 (our first fine-tuned model) (7:45)
Comparing our model's results before and after fine-tuning (10:26)
Downloading and preparing data for our biggest experiment yet (Model 4) (6:24)
Preparing our final modelling experiment (Model 4) (12:00)
Fine-tuning Model 4 on 100% of the training data and evaluating its results (10:19)
Comparing our modelling experiment results in TensorBoard (10:46)
How to view and delete previous TensorBoard experiments (2:04)
Transfer Learning in TensorFlow Part 2 challenge, exercises and extra-curriculum
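A minimal sketch of the Functional API feature-extraction and fine-tuning pattern from the lessons above, assuming TensorFlow 2.6+ (on earlier versions the augmentation layers live under tf.keras.layers.experimental.preprocessing):

    import tensorflow as tf

    # Data augmentation built into the model itself
    data_augmentation = tf.keras.Sequential([
        tf.keras.layers.RandomFlip("horizontal"),
        tf.keras.layers.RandomRotation(0.2),
        tf.keras.layers.RandomZoom(0.2),
    ], name="data_augmentation")

    # Frozen EfficientNetB0 base with a new classification head (Functional API)
    base_model = tf.keras.applications.EfficientNetB0(include_top=False)
    base_model.trainable = False

    inputs = tf.keras.layers.Input(shape=(224, 224, 3), name="input_layer")
    x = data_augmentation(inputs)
    x = base_model(x, training=False)        # keep batch norm layers in inference mode
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(10, activation="softmax", name="output_layer")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
    # ...feature-extraction training (model.fit on your image data) happens here...

    # Fine-tuning: unfreeze only the top layers and recompile with a ~10x lower learning rate
    base_model.trainable = True
    for layer in base_model.layers[:-10]:
        layer.trainable = False
    model.compile(loss="categorical_crossentropy",
                  optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  metrics=["accuracy"])

A ModelCheckpoint callback (for example tf.keras.callbacks.ModelCheckpoint(checkpoint_path, save_weights_only=True)) is typically used to save the feature-extraction weights so fine-tuning can be re-run from the same starting point.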
Transfer Learning with TensorFlow Part 3: Scaling Up
Introduction to Transfer Learning Part 3: Scaling Up (6:19)
Getting helper functions ready and downloading data to model (13:34)
Outlining the model we're going to build and building a ModelCheckpoint callback (5:38)
Creating a data augmentation layer to use with our model (4:39)
Creating a headless EfficientNetB0 model with data augmentation built in (8:58)
Fitting and evaluating our biggest transfer learning model yet (7:56)
Unfreezing some layers in our base model to prepare for fine-tuning (11:28)
Fine-tuning our feature extraction model and evaluating its performance (8:23)
Saving and loading our trained model (6:25)
Downloading a pretrained model to make and evaluate predictions with (6:34)
Making predictions with our trained model on 25,250 test samples (12:46)
Unravelling our test dataset for comparing ground truth labels to predictions (6:05)
Confirming our model's predictions are in the same order as the test labels (5:17)
Creating a confusion matrix for our model's 101 different classes (12:07)
Evaluating every individual class in our dataset (14:16)
Plotting our model's F1-scores for each separate class (7:36)
Creating a function to load and prepare images for making predictions (12:08)
Making predictions on our test images and evaluating them (16:06)
Discussing the benefits of finding your model's most wrong predictions (6:09)
Writing code to uncover our model's most wrong predictions (11:16)
Plotting and visualizing the samples our model got most wrong (10:36)
Making predictions on and plotting our own custom images (9:49)
Transfer Learning in TensorFlow Part 3 challenge, exercises and extra-curriculum
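A minimal sketch of the per-class evaluation and "most wrong" analysis described above, using randomly generated dummy arrays in place of real model predictions on the 25,250 test samples:

    import numpy as np
    from sklearn.metrics import classification_report

    # Dummy stand-ins: in practice y_true comes from the test set and pred_probs from model.predict()
    rng = np.random.default_rng(42)
    y_true = rng.integers(0, 101, size=1000)              # ground-truth class indices (101 classes)
    pred_probs = rng.random((1000, 101))                   # per-class prediction probabilities
    y_pred = pred_probs.argmax(axis=1)                     # most likely class per sample

    # Per-class precision/recall/F1; the dict form makes it easy to plot F1-scores per class
    report = classification_report(y_true, y_pred, output_dict=True, zero_division=0)
    f1_scores = {label: stats["f1-score"]
                 for label, stats in report.items()
                 if label.isdigit()}                       # keep only the per-class entries

    # "Most wrong" predictions: incorrect, but predicted with high confidence
    max_probs = pred_probs.max(axis=1)
    wrong = y_pred != y_true
    most_wrong_idx = np.argsort(max_probs * wrong)[::-1][:10]
    print(sorted(f1_scores.values())[:5], most_wrong_idx)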
Milestone Project 1: Food Vision Big™
Introduction to Milestone Project 1: Food Vision Big™ (5:44)
Making sure we have access to the right GPU for mixed precision training (10:17)
Getting helper functions ready (3:06)
Introduction to TensorFlow Datasets (TFDS) (12:03)
Exploring and becoming one with the data (Food101 from TensorFlow Datasets) (15:56)
Creating a preprocessing function to prepare our data for modelling (15:50)
Batching and preparing our datasets (to make them run fast) (13:47)
Exploring what happens when we batch and prefetch our data (6:49)
Creating modelling callbacks for our feature extraction model (7:14)
Note: Mixed Precision producing errors for TensorFlow 2.5+
Turning on mixed precision training with TensorFlow (10:05)
Creating a feature extraction model capable of using mixed precision training (12:42)
Checking to see if our model is using mixed precision training layer by layer (7:56)
Training and evaluating a feature extraction model (Food Vision Big™) (10:19)
Introducing your Milestone Project 1 challenge: build a model to beat DeepFood (7:47)
Milestone Project 1: Food Vision Big™, exercises and extra-curriculum
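A minimal sketch of the mixed precision and tf.data ideas above; mixed_float16 only gives a speed-up on GPUs with compute capability 7.0 or higher, and the tensorflow_datasets lines are commented out because Food101 is a large download:

    import tensorflow as tf

    # Compute in float16 where safe while keeping float32 variables
    tf.keras.mixed_precision.set_global_policy("mixed_float16")

    # Example tf.data pipeline: map -> shuffle -> batch -> prefetch
    def preprocess_img(image, label, img_size=224):
        image = tf.image.resize(image, [img_size, img_size])
        return tf.cast(image, tf.float32), label

    # import tensorflow_datasets as tfds
    # (train_data, test_data), ds_info = tfds.load("food101", split=["train", "validation"],
    #                                              as_supervised=True, with_info=True)
    # train_data = (train_data.map(preprocess_img, num_parallel_calls=tf.data.AUTOTUNE)
    #                         .shuffle(1000).batch(32).prefetch(tf.data.AUTOTUNE))

    # Feature extraction model; the final softmax stays float32 for numerical stability
    base_model = tf.keras.applications.EfficientNetB0(include_top=False)
    base_model.trainable = False
    inputs = tf.keras.layers.Input(shape=(224, 224, 3))
    x = base_model(inputs, training=False)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dense(101)(x)                          # 101 Food101 classes
    outputs = tf.keras.layers.Activation("softmax", dtype=tf.float32)(x)
    model = tf.keras.Model(inputs, outputs)

    # Confirm which layers compute under which dtype policy
    for layer in model.layers:
        print(layer.name, layer.dtype_policy)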
NLP Fundamentals in TensorFlow
Welcome to natural language processing with TensorFlow!
Introduction to Natural Language Processing (NLP) and Sequence Problems (12:51)
Example NLP inputs and outputs (7:22)
The typical architecture of a Recurrent Neural Network (RNN) (9:03)
Preparing a notebook for our first NLP with TensorFlow project (8:52)
Becoming one with the data and visualizing a text dataset (16:41)
Splitting data into training and validation sets (6:26)
Converting text data to numbers using tokenisation and embeddings (overview) (9:22)
Setting up a TensorFlow TextVectorization layer to convert text to numbers (17:10)
Mapping the TextVectorization layer to text data and turning it into numbers (11:02)
Creating an Embedding layer to turn tokenised text into embedding vectors (12:27)
Discussing the various modelling experiments we're going to run (8:57)
Model 0: Building a baseline model to try and improve upon (9:25)
Creating a function to track and evaluate our model's results (12:14)
Model 1: Building, fitting and evaluating our first deep model on text data (20:51)
Visualizing our model's learned word embeddings with TensorFlow's projector tool (20:43)
High-level overview of Recurrent Neural Networks (RNNs) + where to learn more (9:34)
Model 2: Building, fitting and evaluating our first TensorFlow RNN model (LSTM) (18:16)
Model 3: Building, fitting and evaluating a GRU-cell powered RNN (16:56)
Model 4: Building, fitting and evaluating a bidirectional RNN model (19:34)
Discussing the intuition behind Conv1D neural networks for text and sequences (19:31)
Model 5: Building, fitting and evaluating a 1D CNN for text (9:57)
Using TensorFlow Hub for pretrained word embeddings (transfer learning for NLP) (13:45)
Model 6: Building, training and evaluating a transfer learning model for NLP (10:45)
Preparing subsets of data for model 7 (same as model 6 but 10% of data) (10:52)
Model 7: Building, training and evaluating a transfer learning model on 10% data (10:04)
Fixing our data leakage issue with model 7 and retraining it (13:42)
Comparing all our modelling experiments evaluation metrics (13:14)
Uploading our model's training logs to TensorBoard and comparing them (11:14)
Saving and loading in a trained NLP model with TensorFlow (10:25)
Downloading a pretrained model and preparing data to investigate predictions (13:24)
Visualizing our model's most wrong predictions (8:28)
Making and visualizing predictions on the test dataset (8:27)
Understanding the concept of the speed/score tradeoff (15:01)
NLP Fundamentals in TensorFlow challenge, exercises and extra-curriculum
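A minimal sketch of the text pipeline above (TextVectorization to turn text into token ids, an Embedding layer to turn ids into vectors, and a simple pooled baseline on top), using a handful of made-up sentences in place of the lecture dataset; on TensorFlow versions before 2.6 TextVectorization lives under tf.keras.layers.experimental.preprocessing:

    import tensorflow as tf

    # Toy data standing in for the lectures' text dataset
    train_sentences = tf.constant([["flood warning issued for the river valley"],
                                   ["loved the sunshine at the beach today"],
                                   ["earthquake shakes the city centre"],
                                   ["what a great game last night"]])
    train_labels = tf.constant([1., 0., 1., 0.])       # 1 = disaster, 0 = not disaster

    # Raw text -> integer token ids
    max_tokens = 10000
    text_vectorizer = tf.keras.layers.TextVectorization(max_tokens=max_tokens,
                                                        output_sequence_length=15)
    text_vectorizer.adapt(tf.squeeze(train_sentences))

    # Token ids -> learned embedding vectors, pooled and classified
    embedding = tf.keras.layers.Embedding(input_dim=max_tokens, output_dim=128)
    inputs = tf.keras.layers.Input(shape=(1,), dtype=tf.string)
    x = embedding(text_vectorizer(inputs))
    x = tf.keras.layers.GlobalAveragePooling1D()(x)    # swap for LSTM/GRU/Bidirectional/Conv1D layers
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(inputs, outputs)

    model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
    model.fit(train_sentences, train_labels, epochs=5, verbose=0)
    print(model.predict(tf.constant([["storm damage reported downtown"]]), verbose=0))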
Milestone Project 2: SkimLit
Introduction to Milestone Project 2: SkimLit (14:20)
What we're going to cover in Milestone Project 2 (NLP for medical abstracts) (7:22)
SkimLit inputs and outputs (11:02)
Setting up our notebook for Milestone Project 2 (getting the data) (14:58)
Visualizing examples from the dataset (becoming one with the data) (13:18)
Writing a preprocessing function to structure our data for modelling (19:50)
Performing visual data analysis on our preprocessed text (7:55)
Turning our target labels into numbers (ML models require numbers) (13:15)
Model 0: Creating, fitting and evaluating a baseline model for SkimLit (9:25)
Preparing our data for deep sequence models (9:55)
Creating a text vectoriser to map our tokens (text) to numbers (14:07)
Creating a custom token embedding layer with TensorFlow (9:14)
Creating a fast-loading dataset with the TensorFlow tf.data API (9:49)
Model 1: Building, fitting and evaluating a Conv1D with token embeddings (17:21)
Preparing a pretrained embedding layer from TensorFlow Hub for Model 2 (10:53)
Model 2: Building, fitting and evaluating a Conv1D model with token embeddings (11:30)
Creating a character-level tokeniser with TensorFlow's TextVectorization layer (23:24)
Creating a character-level embedding layer with tf.keras.layers.Embedding (7:44)
Model 3: Building, fitting and evaluating a Conv1D model on character embeddings (13:45)
Discussing how we're going to build Model 4 (character + token embeddings) (6:04)
Model 4: Building a multi-input model (hybrid token + character embeddings) (15:36)
Model 4: Plotting and visually exploring different data inputs (7:32)
Crafting multi-input fast loading tf.data datasets for Model 4 (8:41)
Model 4: Building, fitting and evaluating a hybrid embedding model (13:18)
Model 5: Adding positional embeddings via feature engineering (overview) (7:18)
Encoding the line number feature to be used with Model 5 (12:25)
Encoding the total lines feature to be used with Model 5 (7:56)
Model 5: Building the foundations of a tribrid embedding model (9:19)
Model 5: Completing the build of a tribrid embedding model for sequences (14:08)
Visually inspecting the architecture of our tribrid embedding model (10:25)
Creating multi-level data input pipelines for Model 5 with the tf.data API (9:00)
Bringing SkimLit to life!!! (fitting and evaluating Model 5) (10:35)
Comparing the performance of all of our modelling experiments (9:36)
Saving, loading & testing our best performing model (7:48)
Congratulations and your challenge before heading to the next module (12:33)
Milestone Project 2 (SkimLit) challenge, exercises and extra-curriculum
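A minimal sketch of the multi-input "hybrid" architecture built above, combining a token-level branch and a character-level branch with a Concatenate layer. The vocabulary sizes, sequence lengths and two toy sentences are illustrative only; on TensorFlow versions before 2.6 TextVectorization lives under the experimental.preprocessing namespace:

    import tensorflow as tf
    from tensorflow.keras import layers

    # Toy abstract sentences standing in for the PubMed RCT data used in the lectures
    sentences = ["the study included 120 patients",
                 "results showed a significant reduction in pain"]
    char_sentences = [" ".join(list(s)) for s in sentences]     # sentences split into characters

    # Token-level and character-level vectorizers + embeddings
    token_vectorizer = layers.TextVectorization(max_tokens=68000, output_sequence_length=55)
    token_vectorizer.adapt(sentences)
    token_embedding = layers.Embedding(input_dim=68000, output_dim=128)

    char_vectorizer = layers.TextVectorization(max_tokens=70, output_sequence_length=290)
    char_vectorizer.adapt(char_sentences)
    char_embedding = layers.Embedding(input_dim=70, output_dim=25)

    # Token branch
    token_inputs = layers.Input(shape=(1,), dtype=tf.string, name="token_inputs")
    token_x = token_embedding(token_vectorizer(token_inputs))
    token_x = layers.Conv1D(64, 5, activation="relu", padding="same")(token_x)
    token_x = layers.GlobalAveragePooling1D()(token_x)

    # Character branch
    char_inputs = layers.Input(shape=(1,), dtype=tf.string, name="char_inputs")
    char_x = char_embedding(char_vectorizer(char_inputs))
    char_x = layers.Bidirectional(layers.LSTM(25))(char_x)

    # Combine both branches and classify into the five abstract-sentence labels
    combined = layers.Concatenate()([token_x, char_x])
    outputs = layers.Dense(5, activation="softmax")(combined)
    model = tf.keras.Model(inputs=[token_inputs, char_inputs], outputs=outputs)
    model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
    model.summary()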
Time Series fundamentals in TensorFlow + Milestone Project 3: BitPredict
Welcome to time series fundamentals with TensorFlow + Milestone Project 3!
Introduction to Milestone Project 3 (BitPredict) & where you can get help (3:53)
What is a time series problem and example forecasting problems at Uber (7:45)
Example forecasting problems in daily life (4:52)
What can be forecast? (7:57)
What we're going to cover (broadly) (2:35)
Time series forecasting inputs and outputs (8:55)
Downloading and inspecting our Bitcoin historical dataset (14:58)
Different kinds of time series patterns & different amounts of feature variables (7:39)
Visualizing our Bitcoin historical data with pandas (4:52)
Reading in our Bitcoin data with Python's CSV module (10:58)
Creating train and test splits for time series (the wrong way) (8:37)
Creating train and test splits for time series (the right way) (7:12)
Creating a plotting function to visualize our time series data (7:57)
Discussing the various modelling experiments we're going to be running (9:11)
Model 0: Making and visualizing a naive forecast model (12:16)
Discussing some of the most common time series evaluation metrics (11:11)
Implementing MASE with TensorFlow (9:38)
Creating a function to evaluate our model's forecasts with various metrics (10:11)
Discussing other non-TensorFlow kinds of time series forecasting models (5:06)
Formatting data Part 2: Creating a function to label our windowed time series (13:01)
Discussing the use of windows and horizons in time series data (7:50)
Writing a preprocessing function to turn time series data into windows & labels (23:35)
Turning our windowed time series data into training and test sets (10:01)
Creating a modelling checkpoint callback to save our best performing model (7:25)
Model 1: Building, compiling and fitting a deep learning model on Bitcoin data (16:58)
Creating a function to make predictions with our trained models (14:02)
Model 2: Building, fitting and evaluating a deep model with a larger window size (17:43)
Model 3: Building, fitting and evaluating a model with a larger horizon size (13:15)
Adjusting the evaluation function to work for predictions with larger horizons (8:34)
Model 3: Visualizing the results (8:44)
Comparing our modelling experiments so far and discussing autocorrelation (9:44)
Preparing data for building a Conv1D model (13:21)
Model 4: Building, fitting and evaluating a Conv1D model on our Bitcoin data (14:51)
Model 5: Building, fitting and evaluating a LSTM (RNN) model on our Bitcoin data (16:05)
Investigating how to turn our univariate time series into multivariate (13:52)
Creating and plotting a multivariate time series with BTC price and block reward (12:12)
Preparing our multivariate time series for a model (13:37)
Model 6: Building, fitting and evaluating a multivariate time series model (9:25)
Model 7: Discussing what we're going to be doing with the N-BEATS algorithm (9:39)
Model 7: Replicating the N-BEATS basic block with TensorFlow layer subclassing (18:38)
Model 7: Testing our N-BEATS block implementation with dummy data inputs (15:02)
Model 7: Creating a performant data pipeline for the N-BEATS model with tf.data (14:09)
Model 7: Setting up hyperparameters for the N-BEATS algorithm (8:50)
Model 7: Getting ready for residual connections (12:55)
Model 7: Outlining the steps we're going to take to build the N-BEATS model (10:05)
Model 7: Putting together the pieces of the puzzle of the N-BEATS model (22:22)
Model 7: Plotting the N-BEATS algorithm we've created and admiring its beauty (6:46)
Model 8: Ensemble model overview (4:43)
Model 8: Building, compiling and fitting an ensemble of models (20:04)
Model 8: Making and evaluating predictions with our ensemble model (16:09)
Discussing the importance of prediction intervals in forecasting (12:56)
Getting the upper and lower bounds of our prediction intervals (7:57)
Plotting the prediction intervals of our ensemble model predictions (13:02)
(Optional) Discussing the types of uncertainty in machine learning (13:41)
Model 9: Preparing data to create a model capable of predicting into the future (8:24)
Model 9: Building, compiling and fitting a future predictions model (5:01)
Model 9: Discussing what's required for our model to make future predictions (8:30)
Model 9: Creating a function to make forecasts into the future (12:08)
Model 9: Plotting our model's future forecasts (13:09)
Model 10: Introducing the turkey problem and making data for it (14:15)
Model 10: Building a model to predict on turkey data (why forecasting is BS) (13:38)
Comparing the results of all of our models and discussing where to go next (12:59)
TensorFlow Time Series Fundamentals Challenge and Extra Resources
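A minimal sketch of the windowing idea at the heart of the module above: turn a 1-D series into (window, horizon) pairs, split them without shuffling across the split point, and compare against a naive forecast. The toy price series stands in for the Bitcoin data used in the lectures:

    import numpy as np

    def make_windows(series, window_size=7, horizon=1):
        # index grid: each row is one window plus its horizon, stepped one timestep at a time
        indexes = (np.arange(window_size + horizon)[None, :] +
                   np.arange(len(series) - (window_size + horizon - 1))[:, None])
        windowed = series[indexes]
        return windowed[:, :-horizon], windowed[:, -horizon:]   # windows, labels

    prices = np.arange(100, 200, dtype=np.float32)              # toy stand-in for BTC closing prices
    full_windows, full_labels = make_windows(prices, window_size=7, horizon=1)

    # Train/test split the right way for time series: keep the order, no shuffling across the split
    split = int(0.8 * len(full_windows))
    train_windows, test_windows = full_windows[:split], full_windows[split:]
    train_labels, test_labels = full_labels[:split], full_labels[split:]

    # Naive forecast baseline: predict the next value to be the previous value
    naive_forecast = prices[:-1]
    mae = np.mean(np.abs(prices[1:] - naive_forecast))
    print(full_windows.shape, full_labels.shape, mae)

MASE (mean absolute scaled error) can then be computed as a model's MAE divided by the naive forecast's MAE over the same data.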
Where To Go From Here?
Thank You! (1:17)
Review This Course!
Become An Alumni
Learning Guideline
ZTM Events Every Month
LinkedIn Endorsements
Appendix: Machine Learning Primer
Quick Note: Upcoming Videos
What is Machine Learning? (6:52)
AI/Machine Learning/Data Science (4:51)
Exercise: Machine Learning Playground (6:16)
How Did We Get Here? (6:03)
Exercise: YouTube Recommendation Engine (4:24)
Types of Machine Learning (4:41)
Are You Getting It Yet?
What Is Machine Learning? Round 2 (4:44)
Section Review (1:48)
Appendix: Machine Learning and Data Science Framework
Quick Note: Upcoming Videos
Section Overview (3:08)
Introducing Our Framework (2:38)
6 Step Machine Learning Framework (4:59)
Types of Machine Learning Problems (10:32)
Types of Data (4:50)
Types of Evaluation (3:31)
Features In Data (5:22)
Modelling - Splitting Data (5:58)
Modelling - Picking the Model (4:35)
Modelling - Tuning (3:17)
Modelling - Comparison (9:32)
Overfitting and Underfitting Definitions
Experimentation (3:35)
Tools We Will Use (3:59)
Optional: Elements of AI (document)
Appendix: Pandas for Data Analysis
Quick Note: Upcoming Videos
Section Overview (2:27)
Downloading Workbooks and Assignments
Pandas Introduction (4:29)
Series, Data Frames and CSVs (13:21)
Data from URLs
Describing Data with Pandas (9:48)
Selecting and Viewing Data with Pandas (11:08)
Selecting and Viewing Data with Pandas Part 2 (13:06)
Manipulating Data (13:56)
Manipulating Data 2 (9:56)
Manipulating Data 3 (10:12)
Assignment: Pandas Practice
How To Download The Course Assignments (7:43)
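A minimal sketch of the pandas operations covered in this appendix, using a small hand-made DataFrame in place of the car-sales CSV from the lectures:

    import pandas as pd

    car_sales = pd.DataFrame({
        "Make": ["Toyota", "Honda", "BMW", "Toyota"],
        "Colour": ["White", "Red", "Black", "Blue"],
        "Odometer (KM)": [150043, 87899, 11179, 60000],
        "Price": [4000, 5000, 22000, 9700],
    })

    # Describing data
    print(car_sales.dtypes)
    print(car_sales.describe())                       # summary statistics for numeric columns

    # Selecting and viewing
    print(car_sales["Make"])                          # a single column is a Series
    print(car_sales.loc[2])                           # row by index label
    print(car_sales[car_sales["Make"] == "Toyota"])   # boolean filtering

    # Manipulating
    car_sales["Price per KM"] = car_sales["Price"] / car_sales["Odometer (KM)"]
    print(car_sales.groupby("Make")["Price"].mean())

    # Reading from a file or a URL works the same way:
    # car_sales = pd.read_csv("car-sales.csv")        # or a https://... URL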
Appendix: NumPy
Quick Note: Upcoming Videos
Section Overview (2:40)
NumPy Introduction (5:17)
Quick Note: Correction In Next Video
NumPy DataTypes and Attributes (14:05)
Creating NumPy Arrays (9:22)
NumPy Random Seed (7:17)
Viewing Arrays and Matrices (9:35)
Manipulating Arrays (11:31)
Manipulating Arrays 2 (9:44)
Standard Deviation and Variance (7:10)
Reshape and Transpose (7:26)
Dot Product vs Element Wise (11:45)
Exercise: Nut Butter Store Sales (13:04)
Comparison Operators (3:33)
Sorting Arrays (6:19)
Turn Images Into NumPy Arrays (7:37)
Assignment: NumPy Practice
Optional: Extra NumPy resources
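A minimal sketch of the NumPy topics from this appendix (attributes, random seeds, aggregation, reshape/transpose, dot product versus element-wise multiplication, comparisons and sorting):

    import numpy as np

    # DataTypes, attributes and array creation
    a2 = np.array([[1, 2.0, 3.3], [4, 5, 6.5]])
    print(a2.shape, a2.ndim, a2.dtype, a2.size)
    zeros = np.zeros((2, 3))
    range_array = np.arange(0, 10, 2)

    # Reproducible "random" numbers
    np.random.seed(42)
    random_array = np.random.randint(0, 10, size=(3, 5))

    # Aggregation, standard deviation and variance
    print(random_array.mean(), random_array.std(), random_array.var())

    # Reshape, transpose, dot product vs element-wise multiplication
    print(a2.reshape(2, 3, 1).shape)
    print(a2.T.shape)                  # transpose flips the axes: (2, 3) -> (3, 2)
    print(a2 * a2)                     # element-wise: shapes must match exactly
    print(a2.dot(a2.T))                # dot product: inner dimensions must match

    # Comparison operators and sorting
    print(random_array > 5)
    print(np.sort(random_array))       # sorts values within each row
    print(np.argsort(random_array))    # indices that would sort each row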