00:06 Day one of deep learning covers basics and interview preparation.
02:42 Understanding the importance of deep learning
07:21 Machine learning is a subset of AI
09:36 Machine learning is a subset of AI
13:54 Deep learning's popularity and its relation to the growth of social media and web applications.
16:11 Big data and its growing demand in storing and utilizing large data sets
20:13 Deep learning models can generate revenues through seamless experience and better decision-making.
22:09 Nvidia's RTX series GPUs and the advancements in deep learning.
26:11 Understanding input layers in binary classification
28:01 Neurons process signals through layers to predict outputs.
31:33 Signal processing and neuron function
33:16 Neurons need to be connected with proper weights for functioning.
37:16 Importance of weights in neural network training
39:02 Understanding the need for bias in deep learning
42:52 Introduction to basic topics in deep learning
44:52 Understanding forward propagation in deep learning.
48:58 Understanding the process of training in deep learning
51:06 Explanation of gradient descent as an optimizer
55:17 Neural network structure and layers
57:13 Deep learning topics covered and future plans
1:01:35 Overview of Neural Network Architecture
1:03:37 Sigmoid activation function and its role in converting y value to 0 or 1
1:07:32 Backward propagation and weight updation formula
1:09:29 Gradient descent involves finding the global minimum in the graph of weights and the loss function
1:13:14 Negative slope increases weights, positive slope decreases weights
1:15:10 Learning rate plays a crucial role in gradient descent
1:18:52 Understanding the derivative of loss with respect to w
1:20:49 Updating w4 using chain rule
1:24:36 Applying the chain rule for updating w1 in the equation
1:26:30 Derivative of loss with respect to the weights in the weight updation formula
1:30:26 Derivative calculation using chain rule of differentiation
1:32:28 Understanding the vanishing gradient problem
1:36:40 Understanding chain rule in deep learning
1:38:42 Sigmoid activation function properties
1:42:19 Vanishing gradient problem and activation functions
1:44:14 Discussing the sigmoid activation function
1:48:08 Issues with Sigmoid Activation Function
1:49:56 Tanh activation function limits values between -1 and 1
1:53:15 ReLU activation function has the advantage of faster computation
1:55:07 Rectified Linear Unit (ReLU) is a popular activation function with advantages over sigmoid and tanh.
1:58:44 Understanding different activation functions
2:00:39 Use ReLU activation in the hidden layers and sigmoid in the output layer for binary classification. For multi-class classification, use softmax activation in the output layer (a minimal sketch of this recipe follows this timestamp list).
2:04:40 Different types of loss functions in deep learning
2:06:32 Multi-class classification problem with different loss functions
2:10:23 Mean Squared Error for regression
2:12:15 Advantages and disadvantages of the quadratic equation curve
2:16:20 Mean Absolute Error is robust to outliers
2:18:18 Introduction to Sub Gradient and Huber Loss
2:22:37 Log loss function in logistic regression
2:24:34 Categorical Cross Entropy for Multi-Class Classification
2:28:29 Softmax activation function explained
2:30:17 Understanding Softmax Activation Function
2:34:10 Day Three - Deep Learning Optimizers
2:36:27 Explaining the gradient descent and weight updation formula in back propagation.
2:40:29 An epoch represents one forward and backward propagation cycle.
2:42:23 Gradient descent has a major disadvantage in terms of resource usage
2:46:13 Stochastic gradient descent has slow convergence
2:48:16 Using mini-batches reduces resource intensity and improves convergence
2:52:09 Movement towards global minima and noise in gradient descent.
2:54:01 Momentum helps smoothen the optimization process and reduce noise
2:58:05 Understanding exponential weighted average
3:00:00 Exponential weighted average and the importance of beta value.
3:03:52 Exponential moving average for smoothening the curve
3:05:55 Gradient descent methods and its variations
3:10:02 Adaptive gradient descent replaces learning rate with a new value.
3:11:57 Epsilon is a small number added to avoid a divide-by-zero condition.
3:15:43 Preventing huge changes in learning rate
3:17:34 Control alpha of t to avoid huge values
3:21:31 Introducing smoothening in SGD with momentum
3:23:31 Adam optimizer combines momentum and RMSProp for an adaptive learning rate.
3:27:45 Day four of deep learning agenda
3:30:03 Early stopping is an important concept for training deep learning models
3:34:05 Installing TensorFlow and checking the version
3:36:03 Predict customer churn in a binary classification problem
3:39:54 Utilizing one-hot encoding or get_dummies for handling categorical features
3:41:48 Explaining the purpose of get_dummies and drop_first parameter
3:45:28 Handling categorical features and performing train-test split
3:47:21 Importance of feature scaling in machine learning algorithms.
3:51:34 StandardScaler usage and initialization
3:53:27 Transformed data for training and testing
3:57:33 Importing activation functions and dropout in TensorFlow
3:59:33 Explanation of hidden layers, sequential and dense layers
4:03:05 Dropout layer deactivates neurons during training to prevent overfitting
4:04:49 Initializing the classifier and adding input layers
4:08:33 Binary classification uses sigmoid activation function.
4:10:25 Setting learning rate and optimizer in TensorFlow
4:14:02 Validation split and stopping point in deep learning training
4:15:50 Early stopping helps to automatically stop training when the accuracy doesn't improve.
4:19:37 Implementing early stopping in model training process
4:21:38 Visualizing model history for accuracy and loss
4:25:41 Storing and accessing weights in a neural network
4:27:38 Implementing dropout in a deep learning model
4:32:01 XGBoost uses many decision trees, which makes it a black box model.
4:33:44 Introduction to CNN and its applications in image processing.
4:37:58 Deep learning helps in processing and understanding various objects in an image or video.
4:40:03 Deep learning layers perform different tasks.
4:43:51 RGB image consists of three channels: red, green, and blue.
4:45:43 Image channels combine to create different colors in RGB format.
4:49:30 Min max scaling converts pixels to a range of 0 to 1
4:51:22 Understanding convolution operation in deep learning
4:54:59 Understanding calculations and values
4:56:44 Convolution operation with filters and their impact on image dimensions
5:00:49 Reverting back feature scaling affects the values
5:02:55 Filter operation to extract information
5:06:34 Padding prevents loss of information due to decreasing image size.
5:08:37 Padding in deep learning
5:12:31 Backpropagation updates filters based on input images and uses activation functions for derivatives.
5:14:28 Impact of stride on convolution output
5:18:40 Introduction to max pooling in deep learning
5:20:38 Max pooling extracts clear information from objects in a location-invariant manner
5:24:22 Max pooling, min pooling, and average pooling can be used in a pooling layer.
5:26:18 Summarized steps for image classification
5:30:19 Setting up Google Colab for TensorFlow with GPU
5:32:16 Normalization of training and test images
5:35:54 Filters in convolutional neural networks
5:37:38 Training a neural network using dense layers and optimizations
5:41:31 Training can be improved with more epochs and additional layers
Crafted by Merlin AI.
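To tie the Day-4 timestamps above together, here is a minimal sketch of that recipe: ReLU hidden layers with dropout, a sigmoid output for binary churn classification, the Adam optimizer, and early stopping. The layer sizes, the 11-feature input, the learning rate, and the patience value are my own assumptions, not values taken from the video.

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.callbacks import EarlyStopping

model = Sequential([
    Dense(16, activation="relu", input_shape=(11,)),  # hidden layer; 11 input features assumed
    Dropout(0.2),                                     # randomly deactivates 20% of neurons during training
    Dense(8, activation="relu"),
    Dense(1, activation="sigmoid"),                   # sigmoid output for binary classification
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
              loss="binary_crossentropy", metrics=["accuracy"])

# Early stopping halts training when the monitored metric stops improving
early_stop = EarlyStopping(monitor="val_loss", patience=20, restore_best_weights=True)
# model.fit(X_train, y_train, validation_split=0.33, epochs=1000, callbacks=[early_stop])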
is this playlist enough to learn the complete deep learning?
@user-Rania-n7m no
00:06 Day one of deep learning - covering basics and interview preparation
06:46 Chatbots, machine learning, and deep learning explained.
18:36 Importance of data and hardware advancement in deep learning
23:57 Neural network with input and output layers
33:58 Neural networks use weights and bias to determine neuron activation levels.
39:15 Forward propagation involves multiplying the inputs by weights, adding bias, and applying an activation function.
50:11 Simple linear regression and gradient descent
55:50 Today's agenda covers forward propagation, chain rule of derivatives, vanishing gradient problem, and loss functions.
1:07:01 Weight updation formula is the key to reducing loss in backward propagation.
1:12:13 Weight updation formula in gradient descent
1:22:31 Z is an activation function applied to o1 times w4 plus bias.
1:27:46 Neural network structure and weight updation formula
1:38:50 Sigmoid activation function has a derivative ranging from 0 to 0.25.
1:43:53 The first activation function discussed is sigmoid.
1:53:42 Relu activation function solves the dead neuron problem.
1:58:43 Different activation functions for different types of problems
2:09:19 Mean squared error (MSE) is a quadratic loss function used for regression tasks.
2:14:34 Mean squared error penalizes outliers, resulting in a major shift in the regression line.
2:25:48 The first step is to assign values to columns indicating good, bad, and neutral.
2:30:57 Today's session covered the topic of optimizers in deep learning
2:41:59 Using gradient descent for training deep learning models with large datasets requires extensive resources.
2:47:12 Mini batch stochastic gradient descent improves convergence and reduces resource complexity
2:58:01 Exponential weighted average is a technique to smooth data and give more importance to previous values.
3:03:19 Exponential Moving Average for Smoothing Curve in Gradient Descent
3:14:04 The value will decrease as the time step increases
3:19:14 Exponential weighted average helps control learning rate increase
3:30:53 Implementing artificial neural network using TensorFlow GPU and reading dataset
3:36:09 Predict whether a customer will exit a bank or not.
3:46:21 Perform train test split and feature scaling
3:51:57 Using StandardScaler to transform data
4:02:26 Dropout layer is used in neural networks to reduce overfitting.
4:07:21 Neural network training process with dense layers and Adam optimizer.
4:17:25 Early stopping helps to automatically stop the training process when the monitored metric stops improving
4:22:52 You can generate a confusion matrix and calculate the accuracy for the test data.
4:33:46 We will be discussing convolutional neural networks (CNNs) in this session.
4:39:31 Visual cortex has multiple layers responsible for different tasks
4:49:52 Convolution operation explained
4:54:53 The calculation results in a pattern of zeros and negative numbers.
5:05:30 Padding prevents loss of information during convolution operation.
5:10:48 Padding and activation function are important in convolutional operations
5:21:46 Extract the highest numbers from each output
5:26:48 Image classification using Convolutional Neural Networks
5:37:07 The model is created with a flattening layer and a fully connected neural network
is this playlist enough to learn complete deep learning?
@user-Rania-n7m did you get the answer? What do you think?
@usereternalyt Nope, I didn't get the answer ☹️, but I learned deep learning from the CampusX channel's "100 days of deep learning"
@user-Rania-n7m it is better than learning from a university by paying lakhs, believe me.
@user-Rania-n7m it is a good start for NLP and CV
I have to say this!!
We have our own Andrew Ng now; your way of explaining says it all, you are feeling these concepts, not just reading and teaching us. You should definitely write a book 🔥
Thank you so much Krish sir for your efforts ❤️
In-depth DL!! Perfect Sunday plan!!
Haha, yes, I'm watching it in the middle of a Sunday with a lot of pending work on my head, but the way he is teaching is so good
Best data science channel ever. Love from Bangladesh.
Thanks for editing and compiling the videos together, Krish!
It would also be great if there were timestamps!
Great work as always
What a session it was!! You are literally a saviour for all data science aspirants.
Thank you Krish for this amazing session ❤️
Do you have the notes of this lecture?
If yes, please send me the link or file
But the last part's output is pending, no? The image classification part?
@abhaykumarsingh6932 were you able to get the notes? If yes, can you please share them with me as well?
@navya.sswamy6048 @abhay kumar singh I have made the notes and will share them with you guys; please give me your emails
Please send the notes of the above lecture
Mr Krish. This is a great tutorial. I'm a retired engineer, and needed to understand DL/AI fundamentals for a personal project (mobile robot). I found your style of teaching warm and enjoyable. May you be blessed with your career and life goals into the future. Lots of love, from South Africa.
Amazing wow love from India❤❤
You are such an amazing teacher, thanks for coming up with such a nice course. I never feel like stopping watching your tutorials. You are like a magician of data science concepts.
the only thing I can say about your explanation is ... BEAUTIFUL !!!!
Krish sir is the best teacher for data science
Okay, I'm doing my master's in Data Science and I pay 26 lakhs for just sitting in class and trying to understand lectures; to be honest, the lectures are quite boring and hard to understand because of the professor's Chinese accent.
I have always dreamed of becoming a data scientist, and my friend suggested Krish Naik.
I will be completing this in 3 days.
Wish me luck guys!!!
Are you studying in India or Abroad?
I wanna be the first one to complete this tutorial. There you go guys, I am now up for 6-7 hours straight. Will make an update by 7 pm today evening on whether I completed this tutorial or not. Plus will add timestamps!!!!
LET'S GOOOOOOOO
Edit: Just completed the course and it is 6:23 pm, I made it!!! The course is pretty decent for beginners in Deep Learning. I'd say I got 75% of the course because I have a background in Machine Learning. It would have been better if activation functions were explained more deeply. The rest of it was good. I will revise it again (not at 2x speed this time) and will add timestamps, as I am very tired nowwwwww
Adios amigos. Thanks Krish
Great job, man! Done with passion. Thank you. Be blessed.
Crystal clear explanation and hands-on lectures on ANN and CNN. Thanks a lot Krish, well done.
Excellent..Thanks Krish..Loving the way you explain concepts, crystal clear..
Link for the dataset:
drive.google.com/file/d/1ZRXIpzLhbtBccjauzROH5AuY3aZl7faW/view?pli=1
Thank you for the dataset link
It's showing as blocked
The most efficient deep learning tutorial I've ever explored on YouTube. Thank you so much Krish Naik. You are undoubtedly the best educator.
You are the man! You come here for teaching, for expressing and explaining things easily and understandably!!
Best and most detailed explanation of optimizers, understood like never before. Thank you for the wonderful explanation, Krish
This gives very much a layman's understanding. My whole concept got clear now. Thank you!
Hi Krish, I noticed there is something wrong in the formulation at 2:30:20, in the softmax function: the denominator terms should be added, not multiplied (adding the exponents, as written, multiplies the terms).
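For reference, the standard softmax has a sum of exponentials in the denominator, not an exponential of a sum:

\mathrm{softmax}(z_i) = \frac{e^{z_i}}{\sum_{j=1}^{K} e^{z_j}}

so for logits 10, 20, 30 the denominator is e^{10} + e^{20} + e^{30}, not e^{10+20+30}, exactly as this comment points out.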
Man, you're beautiful. Not only do you clear out the concepts, but you actually enjoy doing it, and that motivates us. Thank you a lot
I highly recommend this playlist to master DL and grow intuition and comprehension. Thanks again, Kris!
Amazing lecture! Solid foundation! Neat handwriting! Great plots! Step-by-step progression. I just loved watching these tutorials to make my base stronger. Highly recommended!
Enjoying it a lot, all of day 1 to day 3...
Nice one...
This video is phenomenal. It covers most of the concepts, and each concept is obvious. I thoroughly enjoyed it. I will recommend this video to anyone. Thank you so much for the effort you put into this video! You are a phenomenal teacher!
Krish, you're so passionate about teaching, hats off for that; your energy level is so high for the complete session
Correction at time 4:55: the picture shows a vertical edge, but the kernel used is for horizontal edges, so every value of the convolution is zero, as there is no horizontal edge. Thanks
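A small sketch that reproduces this observation; the toy image and the Sobel-like kernel below are my own illustrative choices, not the exact ones from the video:

import numpy as np

# Toy 4x4 image containing only a vertical edge: left half dark, right half bright
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 0, 1, 1]], dtype=float)

# Horizontal-edge kernel: responds to intensity changes from top to bottom
kernel = np.array([[ 1,  1,  1],
                   [ 0,  0,  0],
                   [-1, -1, -1]], dtype=float)

# Valid "convolution" (cross-correlation, as CNNs actually compute) via sliding window
out = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        out[i, j] = np.sum(img[i:i+3, j:j+3] * kernel)

print(out)  # all zeros: the image has no horizontal edge for this kernel to detect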
One of the best videos for in-depth deep learning 👍👍
Hi Krish, I followed the amazing session; many thanks for such a valuable session.
For beginners like me it's a great video. I was anxious because I couldn't understand a thing in class, and when I watched this video it actually aroused an interest in me for this subject.
Oh my God, I saw Andrew Ng's videos and your video, and I must say your video is just as challenging as Andrew sir's. I just loved the explanation of why y = mx + c. I knew the equation, but why it is like that was not known to me, and you taught me the basics today. I will never forget this for my lifetime, thanks to the example you gave of why it is mx + c. Thanksssssssssssss - Gayatri Naik
Sir, you have made my life easier; after all the hardships, now I feel relieved. Thank you sir, keep up the good work.
Andrew Ng of India, very premium content
There is a mistake at 2:26:03, where Krish encodes the example dataset with one-hot encoding: as it is ordinal categorical data, it should be encoded with an ordinal encoder; one-hot encoding is used for nominal categorical data. Also, as it is the output, it would be better to use a label encoder. But since he was just trying to explain and it is just an example, it's fine. The rest of the video is amazing.
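A short sketch of the distinction this comment draws; the column name and values below are hypothetical, not from the video:

import pandas as pd
from sklearn.preprocessing import OrdinalEncoder

df = pd.DataFrame({"quality": ["good", "bad", "neutral", "good"]})

# Ordinal encoding preserves the order bad (0) < neutral (1) < good (2)
ordinal = OrdinalEncoder(categories=[["bad", "neutral", "good"]])
df["quality_ordinal"] = ordinal.fit_transform(df[["quality"]]).ravel()

# One-hot encoding treats the categories as unordered (nominal)
one_hot = pd.get_dummies(df["quality"], prefix="quality")
print(df.join(one_hot))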
I highly recommend this, one of the best lectures for understanding deep learning and data science
One of the best videos on DNNs
Amazing explanation sir. I literally applaud your teaching. Planning to watch all of your important full-length videos.
Awesome lecture, thank you Krish [1:37, please check the equation.]
Sir, you are the best. It's so detailed that I understand the how and why of everything. Thank you sir
Sir, at 1:39:00, please check the graph's x-axis and y-axis
I am a fan of both Krishes, one from my childhood and one now, but the love is constant ❤🔥
Superb video, thanks a lot sir. I covered half of it and it made me more curious. I will post my next comment when I have completed the whole video.
Damn sir! Hats off to you for explaining every concept in so much detail. Before going through this course I thought I knew DL well, but after going through it I came to know the intuition behind every concept; now I can say I know it much better than before.
Thanks for editing and compiling the videos together, Krish! ... you are doing noble work.
Sir, your lectures are a god-given gift for us. You explain each and every concept very clearly and awesomely
Such an amazing lecture series... thanks for generating interest in deep learning and data science. We appreciate your efforts 😇
Awesome! I will just say your teaching is beyond any level
Basic deep learning concepts got cleared up with this. Thank you Sir ☺️
This is such a nice video; I completed it today and learned a lot. Thank you Krish sir for sharing such an interesting video. You are rocking.
Please create a video on how to deploy a large deep learning project on a server. I tried deploying on Heroku, but it exceeded the slug size limit, because of which I was not able to upload.
I request you to please create a video on how to deploy on EC2 using S3.
Sir, you are a god for data science students, seriously 😊
Thank you Krish sir for your excellent teaching in such a short time 😊😊😊😊
very clear explanation and nice handwriting
3:53:39: many training values of the features after the standard scaler are greater than 1; e.g. in the first feature we have 1.74309049, yet values must be in the range (-1, 1)
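For what it's worth, StandardScaler standardizes to zero mean and unit variance, computing (x - mean) / std; it does not bound values to (-1, 1). That bounded behaviour belongs to MinMaxScaler (to (0, 1) by default), so values like 1.74 are expected here. A quick check with a toy feature of my own:

import numpy as np
from sklearn.preprocessing import StandardScaler

x = np.array([[1.0], [2.0], [3.0], [10.0]])   # toy feature, my own numbers
z = StandardScaler().fit_transform(x)          # (x - mean) / std
print(z.ravel())                               # last value is ~1.70, outside (-1, 1)
print(z.mean(), z.std())                       # ~0 and ~1, which is the actual guarantee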
😃Best video on deep learning
If anyone is reading this comment, please correct me: bias is a constant, and the derivative of a constant is always zero. @ 1:23:40
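To the doubt above: the bias is a trainable parameter, not a constant of the loss. For z = wx + b,

\frac{\partial z}{\partial b} = 1, \qquad \frac{\partial L}{\partial b} = \frac{\partial L}{\partial z}\cdot\frac{\partial z}{\partial b} = \frac{\partial L}{\partial z}

which is generally nonzero; that is why the bias gets its own update b := b - \alpha\,\partial L/\partial b, just like the weights. What is zero is the derivative of b with respect to the weights, since b does not depend on them.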
Hello Krish,
This was an amazing DL series; you explained it so well.
Thank you so much ❤
This video was very helpful. I appreciate your hard work. Keep doing what you're doing
Best video on deep learning!
Great work sir 👍 best deep learning video on YouTube ❤️❤️
Vanishing gradient problem explanation is God level 😊❤
excellent explanation👍 Many thanks.
This was an amazing session.
But at 2:30:42, in the softmax activation function, the expression you have given in the denominator is a bit wrong, I think.
It should be (e^10)+(e^20)+..., but you have written it as e^(10+20+...).
Please resolve the doubt.
Thank you sir
2:16:53 I think we don't need to write the 1/2 with the loss function
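The comment above is right that the 1/2 is optional: it only scales the loss by a constant, so the minimizer is unchanged. It is kept by convention because it cancels the 2 the power rule brings down,

\frac{d}{d\hat{y}}\,\frac{1}{2}(y-\hat{y})^2 = -(y-\hat{y})

and without it the gradient just carries an extra factor of 2 that the learning rate absorbs.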
Hi sir... a request to put timestamps in this video, so that it's easy to navigate through the video
what a great session. thank you so much sir
Hi. You have an amazing plan that is going to help me discover myself. I would like you to give me some advice on my data science dissertation as well.
Sir, you are simply an amazing teacher... such a great help for a person like me who is not from an IT or maths background
You have done a very good job here. Most tutorials currently on the internet just show how to use the packages without any mathematical concepts or background on these algorithms. You have covered most of the fundamental principles here.
Neurons -> students dataset; camera identifying example as input/output; weights example
Hi Krish - Thanks for these awesome DL videos. Can you please point us to the next part of this video series, where you explain RNN and NLP with a similar approach? Thanks
Thanks for the wonderful session. At around 5 hours it took some daring to start. One request: can we have such a session for 5G and beyond?
Thank you. Thank you. You are really our inspiration.
It was a great explanation sir, absolutely loved your explanation 🔥
Thank you very much, Mr Krish, for putting in so much effort to teach us.
Your videos are very helpful to me; keep it up sir
Thank you Krish, I started studying this topic. Let's go! Will see in how many hours I am able to complete it.
Krish, just a request: it would be great if you compiled the videos together for Statistics and Machine Learning also.
Finally completed the video on day 3 :D
Excellent, no words to express
Beautiful handwriting
Indeed very nice explanation
Hi krish,
you are an amazing mentor
Well explained.
Greatttttt explanation. Can't thank you enough
Thanks for the lecture. It would be better if you shared PDFs of the books you showed in the lecture.
Again, thank you very much for such amazing content. 😇😇
Incredibly helpful... Thank you Krish Naik
Thank you boss, your teaching style and the way you make us understand concepts are rocking.
Just love the way he makes us understand all equations
A big thanks Krish
Excellent, very clear explanation, Amazing
Best Video Ever♥ Thank you so much for your efforts.
This is insanely amazing 😍
Thanks for this amazing video on deep learning❤❤❤
You're the best Krish
Amazing Krish 😍 long-awaited video
Wow. Very helpful. Thank you @KrishNaik Sir.
Thanks to all, for wonderful sessions
You are doing a great work sir...
49:30 CLOSING SUMMARY
Thanks for the video. It's very helpful.
Thanks sir, I appreciate your hard work in this 6-hour video.