Complete Machine Learning In 6 Hours | Krish Naik

  • Published on Oct 21, 2024

Comments • 402

  • @krishnaik06 · 1 year ago · +49

    All the materials are given here: github.com/krishnaik06/The-Grand-Complete-Data-Science-Materials/tree/main

  • @SahilRajput-xe6uj · 7 months ago · +138

    Not recommended for beginners, but if you already have some knowledge and want to revise concepts, this is the best video. Very clear and concise explanation.

    • @ahmedhaigi2900 · 7 months ago · +11

      Any suggestions for beginners?

    • @SahilRajput-xe6uj · 7 months ago

      @@ahmedhaigi2900 Get a machine learning syllabus from somewhere; if you don't have one, mail me and I'll send you one. Then study everything topic by topic.

    • @MridulChaudhary17 · 6 months ago

      @@ahmedhaigi2900 Andrew Ng's course on Coursera; you can audit that course.

    • @tricksforsolving2804 · 6 months ago · +2

      Thanks

    • @Dubsmashvideoss4099 · 5 months ago · +6

      Then recommend something for beginners.

  • @ytritik-st7gr · 10 months ago · +117

    When you draw a line on your screen and it automatically becomes straight, that is a nice example of a best-fit line in action (linear regression).

    • @pratyush_779 · 3 months ago · +6

      You are definitely an anime watcher, aren't you!? That observation was something!!!

    • @swatisingh-yw1fw · 2 months ago

      How come it is an example of linear regression?😊

    • @anubhavmishra4960 · 2 months ago

      @@swatisingh-yw1fw bro is on weed

    • @Noob_Girl_Anjali · 1 month ago · +1

      I think it just draws a line between the start point and the end point.

    • @explore.with.darshan · 1 month ago

      Well done, kid.

  • @moinalisyed4515 · 11 months ago · +75

    ALERT!!!!!
    For anyone new here to explore ML and wondering whether this video is good or just another video that will waste your time: believe me, it's the best ML video on YouTube from an Indian creator. It's totally worth watching and taking notes. From now on I am a big fan of Krish Naik.

    • @sandeepyadav8397 · 8 months ago · +1

      Is it enough? Please reply

    • @JohnCena-uf8sz · 6 months ago · +2

      Hoping this is not a paid comment, I'm gonna watch this video.

    • @MoosaMemon. · 6 months ago

      @@sandeepyadav8397 Yes, trust me it is more than enough.

    • @MoosaMemon. · 6 months ago · +7

      @@JohnCena-uf8sz I assure you it is not. I've been religiously following his ML and AI content and I'm just so grateful that I found him. You can learn all of ML and AI from his videos, with simple explanations. No need for any other channel.

    • @JesúsCastillo-n1h · 5 months ago · +5

      Bruh, I'm from Chile and watching this. This is the best teacher and the clearest explanation I could find! All of his courses!!

  • @hedithameur2383 · 8 months ago · +23

    The world should have more people like you, sir. Your way of teaching is outstanding. Thank you for your time educating the world.

  • @ssubodh6854 · 2 months ago · +23

    🎯 Key points for quick navigation:
    00:48 *🤖 Introduction to AI vs ML vs DL vs Data Science*
    - Explanation of AI as creating applications without human intervention.
    - Supervised ML focuses on regression and classification problems.
    18:21 *📈 Linear Regression Basics*
    - Definition of linear regression and its purpose in modeling relationships between variables.
    22:30 *📉 Understanding Linear Regression Basics*
    - Understanding intercept (theta 0) and slope (theta 1),
    25:17 *📊 Cost Function in Linear Regression*
    - Definition and significance of the cost function,
    34:35 *📉 Impact of Theta 1 on Cost Function*
    - Demonstrating the effect of different theta 1 values on the cost function,
    41:50 *🔄 Gradient Descent and Convergence Algorithm*
    - Introduction to gradient descent as an optimization technique,
    45:25 *📈 Gradient Descent Basics*
    - Understanding gradient descent in machine learning,
    47:31 *🏔️ Dealing with Local Minima*
    - Addressing challenges posed by local minima in gradient descent,
    49:37 *🔄 Iterative Convergence*
    - Iterative convergence process in gradient descent algorithms,
    55:20 *📊 Performance Metrics in Linear Regression*
    - Explaining the importance of R-squared and adjusted R-squared in evaluating model performance,
    01:07:05 *🔍 Overview of Regression Techniques*
    - Introduction to ridge and lasso regression as regularization techniques,
    01:09:16 *📊 Understanding Overfitting and Underfitting*
    - Understanding overfitting and underfitting in machine learning,
    01:16:13 *🧮 Introducing Ridge and Lasso Regression*
    - Introducing ridge and lasso regression for regularization purposes,
    01:30:56 *📊 Linear Regression Assumptions and Standardization*
    - Linear regression assumes linearity between variables and the target.
    01:33:18 *📈 Introduction to Logistic Regression*
    - Logistic regression is used for binary classification tasks.
    01:40:16 *🎯 Understanding Logistic Regression's Decision Boundary*
    - Logistic regression's decision boundary is determined by the sigmoid function.
    01:51:28 *📉 Logistic Regression Cost Function and Gradient Descent*
    - Logistic regression cost function derivation and explanation,
    01:59:06 *📊 Performance Metrics: Confusion Matrix and Accuracy Calculation*
    - Detailed explanation of the confusion matrix in binary classification,
    02:03:08 *⚖️ Handling Imbalanced Data in Classification*
    - Definition and identification of imbalanced datasets in classification problems,
    02:08:57 *📈 Precision, Recall, and F-Score: Choosing Metrics for Different Problems*
    - Explanation of precision and recall metrics in classification evaluation,
    02:14:13 *📊 Introduction to sklearn Linear Regression*
    - Introduction to sklearn's linear regression model.
    02:16:15 *📈 Dataset Loading and Preparation*
    - Loading the Boston house pricing dataset from sklearn.
    02:22:08 *📉 Data Splitting for Regression*
    - Separating the dataset into independent (X) and dependent (y) features.
    02:24:04 *📊 Cross Validation and Mean Squared Error Calculation*
    - Explanation of cross-validation importance in machine learning model evaluation.
    02:28:31 *🔄 Introduction to Ridge Regression and Hyperparameter Tuning*
    - Introduction to Ridge Regression as a method to mitigate overfitting in linear regression.
    02:34:00 *📊 Ridge Regression Hyperparameter Tuning*
    - Understanding Ridge Regression and its role in reducing overfitting,
    02:37:30 *📉 Impact of Hyperparameters on Model Performance*
    - Exploring the effect of different alpha values on Ridge Regression's performance,
    02:45:30 *🔄 Logistic Regression for Classification*
    - Introduction to Logistic Regression for binary classification tasks,
    02:55:14 *🎲 Probability Fundamentals*
    - Probability basics: Understanding independent and dependent events.
    02:56:34 *📊 Conditional Probability*
    - Explaining conditional probability using the example of drawing marbles.
    02:58:12 *🧮 Bayes' Theorem*
    - Introduction to Bayes' Theorem and its significance in probability.
    03:05:14 *📊 Applying Probability in Classification*
    - Applying probability concepts (e.g., conditional probability) in classification problems.
    03:17:30 *📊 Understanding Distance Metrics in Machine Learning*
    - Understanding Euclidean and Manhattan distances,
    03:20:18 *🌳 Exploring Decision Trees for Classification and Regression*
    - Decision tree structure and node representation,
    03:24:15 *🔍 Information Gain and Splitting Criteria in Decision Trees*
    - Explaining entropy and Gini impurity as measures of impurity,
    03:39:40 *📊 Understanding Entropy and Information Gain*
    - Explained the concept of entropy in decision trees and how it relates to determining pure splits.
    03:41:19 *📈 Using Information Gain for Feature Selection*
    - Detailed the process of calculating information gain for different features in decision tree nodes.
    03:49:17 *📉 Understanding Gini Impurity vs. Entropy*
    - Explained the concept of Gini impurity as an alternative to entropy for decision tree construction.
    03:54:01 *🧮 Handling Numerical Features in Decision Trees*
    - Explored how decision trees handle continuous (numerical) features using sorted feature values.
    03:59:34 *⚙️ Hyperparameters in Decision Trees*
    - Defined hyperparameters and their role in controlling decision tree complexity.
    04:06:03 *🌳 Decision Tree Visualization and Pruning Techniques*
    - Understanding the structure of a decision tree through visualization.
    04:09:16 *🛠️ Ensemble Techniques: Bagging and Boosting*
    04:21:31 *🌲 Random Forest Classifier and Regressor*
    - Solving overfitting in decision trees through ensemble learning.
    04:24:20 *🌳 Random Forest: Overview and Working*
    - Random Forest combines multiple decision trees to create a generalized model with low bias and low variance.
    - Combines predictions from multiple decision trees (ensemble method).
    - Uses bootstrapping and feature sampling to train each tree on different subsets of data.
    - Prevents overfitting present in individual decision trees.
    04:29:27 *🚀 Boosting Techniques: Introduction to Adaboost*
    - Adaboost is a boosting technique that sequentially combines weak learners to form a strong learner.
    - Begins by assigning equal weights to all training examples.
    - Focuses on correcting misclassified examples in subsequent models.
    - Uses weighted voting to combine outputs of weak learners into a final prediction.
    04:42:27 *📊 Adaboost: Training Process and Weight Update*
    - Adaboost updates weights of training examples based on the performance of each weak learner.
    - Calculates the total error of each weak learner to determine performance.
    - Adjusts weights of training examples to emphasize incorrectly classified instances.
    - Normalizes weights to ensure they sum up to 1 for the next iteration of training.
    04:45:24 *🌲 Decision between Black Box and White Box Models*
    - Decision trees are considered white box models because their splits are visible and interpretable.
    04:47:15 *🎯 Introduction to K-means Clustering*
    - K-means clustering is an unsupervised learning method used to group similar data points together.
    04:50:00 *📊 Understanding Centroids in K-means*
    - Centroids in K-means represent the center of each cluster and are initially placed randomly.
    04:56:31 *📉 Determining Optimal K in K-means Clustering*
    - The elbow method is used to determine the optimal number of clusters (k) by plotting within-cluster sum of squares (WCSS) against different k values.
    05:05:22 *🌐 Hierarchical Clustering Overview*
    - Understanding hierarchical clustering involves identifying clusters based on the longest vertical lines without horizontal intersections.
    05:07:30 *🕰️ Time Complexity in Clustering Algorithms*
    - Hierarchical clustering generally takes longer with large datasets due to dendrogram construction, compared to faster performance by k-means.
    05:09:04 *📊 Validating Clustering Models*
    - For clustering validation, methods like silhouette scores are crucial, quantifying cluster quality.
    05:17:21 *🌌 DBSCAN Clustering Essentials*
    - DBSCAN identifies core points, border points, and noise points based on defined parameters like epsilon and min points.
    05:26:37 *📊 Exploring K-Means Clustering and Silhouette Score*
    - Explains the process of using K-Means clustering and evaluating it with silhouette scores.
    05:35:30 *🧠 Understanding Bias and Variance*
    - Defines bias as a phenomenon influencing algorithm results towards or against a specific idea or training data.
    05:48:51 *🌳 Decision Tree Construction*
    - Understanding binary decision tree creation in XGBoost,
    05:51:39 *📊 Similarity Weight Calculation*
    05:57:22 *📈 Information Gain Computation*
    06:05:01 *🚀 XGBoost Classifier Inference Process*
    06:09:39 *🌳 Decision Tree - Splitting Based on Experience*
    06:11:31 *📊 Calculation of Similarity Weight and Information Gain*
    06:18:59 *🌳 Regression Tree - Inference and Output*
    06:26:24 *🚀 SVM - Marginal Planes and Hyperplanes*
    06:30:51 *📈 SVM Margin Maximization*
    06:31:34 *🛠️ SVM Optimization Objectives*
    06:32:29 *🔍 SVM Decision Boundary Clarity*
    Made with HARPA AI

    • @FamJaam · 1 month ago

      And credits to Andrew Ng 20:23

  • @RabiaAbdulQahar · 2 years ago · +115

    I'm amazed by your understanding of every algorithm 👏👏. One day I'll be able to do the same.

  • @tirtharoy4542 · 2 years ago · +112

    One of the best ML videos available on the internet. This video is crisp yet covers most of the topics of ML. Also, I like the way Krish explains the theory part first and then explains the same using practical examples.

    • @zaafirc369 · 2 years ago · +21

      The video is an aggregation of the live machine learning community sessions that Krish did.
      But he has edited out all the time-wasting discussions and kept only the most important bits where the topics are explained.
      A lot of time and effort went into compiling and editing these videos. Kudos to him for that.

    • @anexocelisia9377 · 1 year ago · +1

      Brother, can you tell me, does this ML video cover the whole syllabus of ML?

    • @yes.0 · 1 year ago

      @@anexocelisia9377 Of course not, there's so much more to ML

    • @Kavi-learn · 1 year ago

      @@yes.0 is the content in this video enough to crack data science interviews?

  • @navaneethstark5966 · 1 year ago · +161

    6hrs ago, I don't know machine learning 💀💥. Classic✨

    • @Krish_krishna3. · 8 months ago · +2

      Really??

    • @roninbromine1670 · 8 months ago · +1

      *I didn't knew

    • @vipprmudgal712 · 7 months ago · +17

      If it takes you only 6 hours, that means you did not understand it fully. I just watched it without practicing, so you could say I still don't know ML.

    • @manaschopra8998 · 6 months ago

      ​@@roninbromine1670didn't know*

    • @vlogwithtanishaa__ · 5 months ago

      ​@@roninbromine1670I didn't Know!!

  • @kamalch8928 · 2 months ago · +2

    An excellent and valuable 6-hour session on ML algos. Very handy for making the ML learning process smoother for people who are new to it. Thank you, sir!!

  • @solomonrajkumar5537 · 2 years ago · +8

    The way you teach makes it a cakewalk... even a beginner starting from scratch can shine in DS if they watch all your videos... Thank you!!!

  • @ishuman · 10 months ago · +10

    04:08:29
    The Gini impurity is a measure of how often a randomly chosen element from the set would be incorrectly labeled if it was randomly labeled according to the distribution of labels in the subset. The Gini impurity can be computed by summing the probability of each item being chosen times the probability of a mistake in categorizing that item. It reaches its minimum (zero) when all cases in the node fall into a single target category.
    In the case of the Iris dataset, the root node contains all the instances, and if they are evenly distributed among the three classes (setosa, versicolor, virginica), the Gini impurity will be 0.667. This is because the probability of choosing an instance from any class is 1/3, and the probability of misclassifying it is 2/3 (since there are two other classes). The calculation is as follows:
    Gini Impurity = 1 - (1/3)^2 - (1/3)^2 - (1/3)^2 = 0.667
    This indicates that there is a 66.7% chance of misclassifying a randomly chosen element from the dataset if it was labeled according to the distribution of labels in the entire dataset.
    The code you provided is plotting the decision tree. The Gini impurity for each node is calculated during the creation of the decision tree, not during the plotting. The Gini impurity is shown on the plot for each node.
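
    For anyone who wants to verify the 0.667 figure, here is a minimal sketch (plain NumPy; gini_impurity is an ad-hoc helper, not something from the video):

```python
import numpy as np

def gini_impurity(labels):
    # Gini impurity = 1 - sum of squared class probabilities.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

# Root node of Iris: 50 samples of each of the 3 classes.
labels = np.array([0] * 50 + [1] * 50 + [2] * 50)
print(gini_impurity(labels))  # 0.666... = 1 - 3 * (1/3)^2
```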

    • @mdserajali538 · 9 months ago

      How can I get the notes for this video?

  • @vipinsainilol · 2 years ago · +7

    Excellent session...everything about ML is summarised in a single video, which provides the complete picture of the elephant!

  • @darshedits1732 · 1 month ago · +4

    Hello sir, I'm delighted to inform you that I've secured my dream job as an AI/ML developer, despite being a fresher. Your video on machine learning was instrumental in my success, and I'm extremely grateful for your contribution to my learning journey.

    • @rameesmonk8986 · 1 month ago · +1

      Hi bro, tell me how I should start; I have a gap.

  • @omikacooray5333 · 2 months ago · +1

    This course is very easy to understand if you have a high school maths background. Thank you, Krish Naik sir, for explaining the concepts clearly.

  • @deepcontractor6968 · 2 years ago · +19

    Perfect binge watch for interview preparation. Thanks for uploading this Krish.

    • @amrdel2730 · 2 years ago · +1

      At least you have interviews and workplaces in this field; you are lucky to get to apply your knowledge and earn a living with it. Where I live there are none.

    • @rishav144 · 1 year ago · +2

      @@amrdel2730 Apply to other countries, bro... simple... if the opportunities are not there where you live, you have to go abroad.

  • @PriyaMishraEngineer · 2 years ago · +6

    Thank you, Krish and team, for a million-dollar course free of cost. Thank you.

  • @shekharawate5898 · 2 years ago · +7

    For me this is the best video on Krish's channel... the knowledge and its presentation are top class... mastery over the major and minor things at its best. May Lord Shiva bless you with happiness, brother. Kudos...

  • @alabibusuyi4492 · 1 year ago · +3

    Your presentation and teaching are excellent!

  • @huntingboy4278 · 1 year ago · +4

    1:13:13. In underfitting, it should be high bias & low variance @krish naik

  • @ShlokaGupta-p6r · 8 days ago

    Love the way Krish teaches!

  • @theadnhsn · 1 year ago · +6

    Really great content right here; everything from the rudiments to the practical application is covered for all the traditional ML algorithms! Just amazing, period.

  • @thusharapadmanabhan9356 · 7 months ago · +1

    You are great!! That's all I need to say after this class.

  • @mindofmagnet3373 · 11 months ago · +2

    Please be patient with this course. Definitely an awesome course.

  • @triptabhattacharjee7004 · 1 year ago · +10

    Thoroughly enjoyed the videos. I was able to get over the fear of learning ML as it made my learning process smooth. Thank you ❤️

  • @Rustincohle88 · 1 month ago · +1

    52:00 The partial derivative for theta 0 (bias) is (1/m) * sum_{i=1}^{m} (y_hat - y); there is no square left after taking the partial derivative @krish naik
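
    Spelled out, assuming the 1/(2m) cost convention used in Andrew Ng-style notes:

```latex
J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} (\hat{y}_i - y_i)^2,
\qquad \hat{y}_i = \theta_0 + \theta_1 x_i
```

    The chain rule brings down a factor of 2 that cancels the 1/2, so the square disappears:

```latex
\frac{\partial J}{\partial \theta_0} = \frac{1}{m} \sum_{i=1}^{m} (\hat{y}_i - y_i),
\qquad
\frac{\partial J}{\partial \theta_1} = \frac{1}{m} \sum_{i=1}^{m} (\hat{y}_i - y_i)\, x_i
```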

  • @syedfayeqjeelani54 · 10 months ago · +1

    Krish, thank you for these wonderful lectures! Much love.

  • @TreBlass · 1 year ago · +5

    I have a question around 1:25:40. You mentioned that we use Lasso to drop less important features. The lower the slope, the lower the modulus of that slope (or theta).
    If I consider the mathematical definition,
    in L2 regularization: cost is J(theta) + lambda (sum of squares of thetas)
    and in L1 regularization: cost is J(theta) + lambda (sum of moduli of thetas)
    So, if the absolute value of the slope is less than one, its square would be even smaller, and hence we would be able to discard that feature more prominently.
    E.g., (0.5)^2 = 0.25 < |0.5|
    Correct me if my understanding is wrong. Thanks

    • @TheWaylays · 1 year ago · +1

      So, that is partially true, but the logic is flawed a bit. Yes - x^2 makes numbers less than 1 smaller, and numbers greater than 1 larger. And that's the whole point. If we want to decide whether a certain theta parameter is suitable to omit (meaning we don't want to select that feature), we want to look at the sole value of that parameter (or the absolute value, in this case), not the square, the reason being that squaring makes small errors smaller and large errors larger. Discarding a certain feature based on the square of the parameter would be more prone to mistakes. In other words, it gets increasingly more difficult to tell well-suited and badly-suited parameters apart based on the squares of their values, rather than the moduli of their values, as the values grow large or small (basically when they start to deviate from 1 more and more). That's how L1 differs from L2 and why it can help with feature selection. We can square the value of the slope, but that doesn't change the slope's value itself, just how we look at it. Otherwise, we could just raise the slope to some astronomical power and discard all slopes that were smaller than 1 (because all of them would end up close to 0 after raising to some huge power). But that does not reflect reality. If we want to look at slope values in L1, to imply some feature selection, we don't want to make those values artificially smaller or larger, because there is no benefit to that - we would basically be losing information. You usually want to apply that transformation to errors, because when it comes to predictions, an error of 4 (2^2) is obviously worse than an error of 2, and an error of 0.1 is not that bad, so making it 0.01 (0.1^2) isn't a big deal. So you focus on minimizing the error of 4 rather than the error of 0.01 (the actual errors are 2 and 0.1). So Ridge basically treats slopes in the same way as it treats errors, and Lasso does not.
      And by the way, that is a big reason behind choosing loss functions to be (error)^2. We punish large errors and diminish small errors. Because at the end, when we look at our cost function and the value it produces (when we sum up our losses), the small errors/losses don't add up to that much, but the large errors/losses do - so we want to focus on them a bit more. So (error)^2 is especially good for linear regression, because it serves 3 purposes. One - squaring makes negative values positive, so the errors don't cancel out, but add up. Two - as stated previously, squaring helps disregard small errors and focus on large errors, because that's where the gain in performance is. Three - it's convex, because y_hat (the estimator of y) is linear, and linear functions are both convex and concave, so L(theta) = [ y_hat(theta) - y ]^2 is also convex (y_hat(theta) simply doesn't impact convexity, which is not true in general, as in DL or logistic regression). That grants us the ability to use the regular gradient descent algorithm without any issues. This cannot be said for things like logistic regression, or many square loss functions in deep learning, because the estimate itself is not linear, so the square may not be convex, and we might introduce multiple local minima (the loss function L(theta, y_i) is basically a composition of the estimation function for the ith observation minus that observation and some other function, like x^2). Therefore, for logistic regression, you adjust the loss and cost functions (in reality they come directly from MLE), and for neural networks, you can use things like the Adam optimizer and so on, so x^2 in this case is still nice and still leaves us with the benefits from points 1 and 2.
      Hope that clears it up, but if not, I'm sure there's someone better than me at relaying this information somewhere on the internet. Cheers.
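
      A quick way to see this empirically is a minimal sklearn sketch (toy data from make_regression; the alpha values are arbitrary):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

# Toy data where only 3 of the 10 features actually matter.
X, y = make_regression(n_samples=200, n_features=10, n_informative=3,
                       noise=10.0, random_state=42)

ridge = Ridge(alpha=10.0).fit(X, y)
lasso = Lasso(alpha=10.0).fit(X, y)

# Ridge shrinks all coefficients toward zero but rarely hits exactly zero;
# Lasso sets the uninformative ones to exactly 0, i.e. feature selection.
print("ridge:", np.round(ridge.coef_, 2))
print("lasso:", np.round(lasso.coef_, 2))
```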

  • @rishibakshi2004 · 1 year ago · +8

    Thank you for this amazing lecture, sir. It's currently 2:30 am and I just finished the whole lecture... I must say I gained a lot. Thank you ❤❤❤❤

  • @Musk-Singhal · 6 months ago · +1

    2:49:15 -> We set class_weight='balanced' when we want the class weights to be automatically assigned according to the class distribution.
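
    A minimal sketch of that option in scikit-learn (max_iter chosen arbitrarily):

```python
from sklearn.linear_model import LogisticRegression

# class_weight='balanced' weights each class by n_samples / (n_classes * count(class)),
# so the minority class contributes proportionally more to the loss.
clf = LogisticRegression(class_weight='balanced', max_iter=1000)
```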

  • @entertainment8067 · 2 years ago · +28

    Sir, please make a separate playlist on reinforcement learning, deep reinforcement learning, and imitation learning. Thanks.

  • @sagarbadiger5554 · 11 months ago · +2

    Hi Krish, I really appreciate your work. Your delivery is great, easy to understand and remember.
    Thanks for the great content.

  • @ShobhaSharma-kq9hy · 10 months ago · +1

    Thanks, Krish. Superb delivery.

  • @rajeshdronavalli3636 · 2 years ago · +13

    Your explanation is really good and content-wise excellent, sir. Thanks for sharing your videos, roadmaps, and end-to-end explanations from an interview point of view.

  • @pratikjanani743 · 7 months ago · +1

    Great video, thanks Krish!

  • @devangijuneja1790 · 8 months ago · +1

    Thank you sir for explaining the concepts in such a manner that they seem easy to understand...

  • @Mani_Ratnam · 1 year ago · +6

    The explanation of logistic regression was the best I have ever found. Thank you for the session, Krish.

  • @shailendrasen602 · 2 years ago · +7

    That's exactly what I was waiting for. Thank you so much, sir, for sharing so much knowledge. 😍🙏🏼🙏🏼

  • @rosnawatiabdulkudus6435 · 1 year ago · +2

    You are the best teacher 🥰. Regards from Malaysia.

  • @zeroxia3642 · 1 year ago · +3

    The perfect ML video on all of YouTube... Your explanation is just amazing 🤩... Thank you so much (I'm only at the beginning 😅... many more hours to go)

  • @jiyabyju · 1 year ago · +2

    While you might encounter Gini impurity values higher than 0.5 on the Iris dataset, this is due to the multiclass nature of the problem and the multiclass Gini calculation. It doesn't imply that the maximum impurity for multiclass problems is 0.5; that limit applies only to the binary case.
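
    The boundary case is easy to work out: for K equally likely classes,

```latex
\text{Gini}_{\max} = 1 - \sum_{k=1}^{K} \left( \frac{1}{K} \right)^{2} = 1 - \frac{1}{K}
```

    which gives 0.5 for K = 2 and 2/3 ≈ 0.667 for the three Iris classes.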

  • @padhaidotcom9276 · 1 year ago · +1

    Very nice voice, no confusion while listening.

  • @taufiq12334 · 7 months ago · +2

    5:38:30 You're interchanging the definitions of high bias and low bias.

  • @garvitsapra1328 · 10 months ago · +2

    For underfitting models we have high bias and low variance, as bias means wrong predictions and variance means how flexible the model is in adapting to different datasets.

  • @HirvaMehta01 · 2 years ago · +4

    Thank you soo much Krish for summarising everything here.

    • @anexocelisia9377 · 1 year ago

      Brother, can you tell me, does this ML video cover the whole syllabus of ML?

  • @101_avikghosh6 · 2 years ago · +1

    Much needed video, sir..... there are videos everywhere, but yours....❤️🔥🔥

  • @khatiwadaAnish · 4 months ago

    Thanks for bringing multiple live streams together into a single video 👍👍

  • @ivanrubnenkov919 · 1 month ago

    Highly recommend this video. Don't look any further for classical ML methods.
    A couple of notes:
    He should have explained logistic regression starting from odds and log odds, imo.
    Also, this is the best ridge/lasso regression explanation I've seen, and I've seen a lot. The only thing to add: since we have squares, the ridge constraint is a hypersphere in the n-dimensional weight space, so the weights basically never touch 0; they stop just before.

  • @AnuragGupta-19 · 4 months ago · +1

    Sunday spent well !! ♥️

  • @neerajasrinivasan3429 · 1 year ago · +15

    Hi Krish. This video is very helpful and lots of fun to watch, and it's amazing that within such a short span of time you've completed sort of a bridge course on ML. Kudos to you 👏🏻! However, I have a doubt that I would like to raise here. You mentioned in your video that Lasso regularisation helps with feature selection. If the theta or slope values are negligible, say close to zero, then squaring them wouldn't increase the values but decrease them further, right? Why can't we do feature selection using ridge regularisation then? For slopes greater than 1 this would make sense; however, in those cases we would not be able to neglect those features, right?

    • @AdhirajGupta-m8b · 10 months ago

      We use the modulus in Lasso to do feature selection; with multiple features and slopes, the Lasso penalty itself ends up neglecting the features that are not of use.

    • @Abhishek-we4xg · 9 months ago

      We do feature selection using Lasso regularisation because it drives some of the coefficients, the ones not important to our analysis, exactly to zero. That's probably the reason we use Lasso for feature selection.

  • @zaafirc369 · 2 years ago · +3

    Great job, Krish!
    Thanks for adding the timestamps 💯

  • @mukunthans3600 · 1 year ago · +6

    Great explanations, Krish. I just started my data science prep and have been following you for a few days. This will be my second marathon after just finishing your statistics tutorial. It is a fun learning experience watching your lectures. Thanks again for your efforts!
    Please let me know if I am wrong. I have a query about the adjusted R-squared performance metric, explained around 1 hour into the video. According to the formula, when we substitute p=2, shouldn't the value of adjusted R-squared come out the same as R-squared? However, you've shown it as smaller in your example. Or is there a condition that we should only use adjusted R-squared when the number of predictors p is greater than 2?

    • @vira5995 · 5 months ago

      did you get a job ???

  • @a2yautomobile931 · 1 year ago · +2

    wow! very useful content❤❤

  • @nsgodgaming · 1 year ago · +7

    Hi Krish, thanks for making this. In this video you missed the PCA topic; can you please make a video on that? And some detailed videos on model selection, feature selection & feature engineering.

  • @deepakvdesale · 5 months ago

    Krish, you have been like a brother to me when it came to understanding machine learning. Hope to meet you some day.

  • @rupalikhare3330 · 10 months ago · +1

    You are a really good teacher.

  • @pratiknaikwade95 · 2 years ago · +2

    Very well explained....thank you sir 🥰💐💐

  • @ewnetuabebe5059 · 1 year ago · +14

    Thank you, Krish, for such an incredible tutorial. Have you made all the PDF files available?

  • @muhammadzakiahmad8069 · 2 years ago · +1

    Why is Random Forest not affected by outliers?
    Answer from Google:
    The intuitive answer is that a decision tree works on splits, and splits aren't sensitive to outliers: a split only has to fall anywhere between two groups of points to separate them.
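
    A tiny sketch of that intuition (toy data, made up for illustration): the split threshold lands between the two groups no matter how extreme the outlier is.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

X = np.array([[1.0], [2.0], [3.0], [10.0], [11.0], [1000.0]])  # 1000 is an extreme outlier
y = np.array([0, 0, 0, 1, 1, 1])

tree = DecisionTreeClassifier(max_depth=1).fit(X, y)
# The stump splits at the midpoint between 3 and 10 (x <= 6.5);
# replacing 1000 with 12 or 1e9 would not move the threshold.
print(export_text(tree))
```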

  • @tarabalam9962 · 1 year ago · +1

    great explanation of so many algorithms in a short time

  • @DevangiJuneja · 1 month ago

    Thank you sir for making it this concise

  • @footballfreez3846 · 3 months ago

    Amazing teaching skills
    🤝🤝

  • @rajmachawalanddahi3719 · 3 months ago · +2

    @krish, I think there is a mistake in the practical implementation of the linear and ridge regression. Since you used negative MSE, your ridge regression was actually better than your linear one. With normal MSE scores, the higher the value, the worse the model; with negative MSE scores this is flipped (values closer to zero are better).
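
    A minimal sketch of that sign convention (toy data; alpha arbitrary): scoring='neg_mean_squared_error' returns -MSE, so the larger value (closer to zero) is the better model.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=500, n_features=10, noise=15.0, random_state=0)

for model in (LinearRegression(), Ridge(alpha=1.0)):
    # cross_val_score negates the MSE so that "greater is better" holds
    # for every scorer; e.g. a mean score of -3500 beats -4200.
    scores = cross_val_score(model, X, y, cv=5, scoring='neg_mean_squared_error')
    print(type(model).__name__, np.round(scores.mean(), 2))
```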

  • @QasimsDesk · 1 year ago · +1

    Bravo, bravo - excellent video!

  • @ewnetuabebe5059 · 1 year ago · +2

    The most amazing tutorial I have ever seen. Thank you, Krish. But could you kindly put up all the PDF materials?

  • @sohamnaik8264 · 2 years ago · +7

    Sir, I just want to say thank you for helping us gain this knowledge and encouraging us to start our data science journey.

  • @mohomednafras8509 · 2 years ago · +6

    Clear information, clarifying every important point and covering all topics. Thanks Krish, I participated in your live sessions too... 👍

  • @drbrk8789 · 2 years ago · +2

    I really appreciate you, sir. Your explanation is very easy to understand, sir.
    Thank you, sir.

  • @littlecreative4097 · 9 months ago · +1

    Thank you for such an informative video, Krish Naik. Can you make a video on StandardScaler, feature transformation, and other preprocessing of data before model implementation?

  • @ryanondocin3731 · 1 year ago · +1

    Great video! I think your Bias definition is backwards @5:38:37

  • @explore.with.darshan · 1 month ago · +1

    2:32:33 That was brilliant, guru.

  • @pragneshsolanki8243 · 2 years ago · +2

    Please upload an in-depth NLP tutorial of 6-7 hours.

  • @chinmay4452 · 4 months ago · +1

    Definitely not beginner-friendly!! But you will get an idea of all the algorithms. Watch it if you are revising concepts. He never starts from the basics; I needed to do many searches while pausing this video.

    • @NKF-PARSHANT69 · 3 months ago

      Can I do it if I am a beginner?

    • @NKF-PARSHANT69 · 3 months ago

      Please suggest me some YouTube channels.

  • @sethusaim1250 · 2 years ago · +6

    Thank you for putting everything together ☺️

  • @jagankarukonda · 1 year ago · +1

    Definitely a good and great refresher for anyone who has exposure to ML, stats, and math (calculus and algebra), but not for absolute beginners... If you want to learn ML without prior knowledge, Andrew Ng's course on Coursera is the best; you can audit the course for free over there.

    • @ritamsantra2372 · 1 year ago

      I've completed the stats part. Should I watch this, or should I learn the extra math parts first and then start here? I mean the algebra and calculus parts.

    • @Kavi-learn · 1 year ago

      do you know any free resources to learn machine learning?

    • @Kavi-learn · 1 year ago

      @@ritamsantra2372 Did you watch this video, or how did you go about it? And where did you learn the stats part?

  • @kmishy · 2 years ago · +1

    Thanks sir, thank you for merging all videos

  • @arvinthsss7959 · 2 years ago · +2

    This is an excellent collection, thanks krish for this:)))

    • @Rahul-lg2xn · 1 year ago

      bro do you have the notes for this lecture?

  • @codewithemmaprime · 1 year ago

    Best Machine learning content out there.😊😊😊

    • @DeepuDeepu-wz4fe · 1 year ago

      Can this course help complete beginners?? Please reply 😊

  • @mdodamani642 · 2 years ago · +2

    Thank you Krish, so helpful. As I'm from a commerce background I find it tough, but I'm understanding the concepts.

  • @tanvirhossain5475 · 2 years ago · +1

    No option to give more likes... love from Bangladesh.

  • @srikanthnimmala4457 · 1 year ago · +1

    Thank you so much, sir, for your great explanation.

  • @chinmay4452 · 4 months ago

    05:38:37 If the model performs well on the training data, that is low bias, right?
    06:25:30 How are you multiplying the matrices? A 2×1 times a 1×2 will not give a constant value; it gives a 2×2 matrix.

    • @sam-uw3gf · 4 months ago

      your 1st question is right...dude

    • @sam-uw3gf · 4 months ago

      For your 2nd question, go through the vector operations; it is correct, dude.

    • @chinmay4452 · 4 months ago

      @@sam-uw3gf Vector multiplication, you mean?

    • @sam-uw3gf · 4 months ago

      @@chinmay4452 yes
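
      On the 06:25:30 point, the order of multiplication decides the shape; a tiny NumPy check (toy vectors):

```python
import numpy as np

a = np.array([[1.0], [2.0]])  # shape (2, 1): a column vector
b = np.array([[3.0, 4.0]])    # shape (1, 2): a row vector

print((b @ a).shape)  # (1, 1): row times column -> a single number (inner product)
print((a @ b).shape)  # (2, 2): column times row -> a 2x2 matrix (outer product)
```

      So if the board writes something like theta^T x, that is the row-times-column case and does give a single number.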

  • @adipurnomo5683 · 2 years ago · +1

    Clearly explained 👍

  • @chandramoulireddy9636 · 2 years ago · +1

    Sir, these are very useful algorithms. I am following along. Thanks.

  • @bajrangsharma3308 · 1 year ago · +4

    I am watching this video now but could not fetch the Boston housing prices dataset, as the scikit-learn maintainers strongly advise against using it and have removed it from recent versions. How can I complete this tutorial now?? @krishnaik sir
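
    One way through for anyone stuck at the same point: swap in another regression dataset and follow along, since the workflow is identical. A minimal sketch using the California housing data that ships with scikit-learn (the original Boston data can also still be fetched from OpenML via fetch_openml, though the maintainers discourage it):

```python
from sklearn.datasets import fetch_california_housing

# Stand-in for the removed Boston dataset: same regression workflow,
# just different feature names and target (median house value).
X, y = fetch_california_housing(return_X_y=True)
print(X.shape, y.shape)  # (20640, 8) (20640,)
```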

  • @Sanyat100 · 2 years ago · +1

    You are the best!!!!!

  • @akashyadav5891 · 2 years ago · +1

    Thank you so much, sir, for your efforts ☺

  • @harshitalalwani8127 · 3 months ago

    AMAZING CONTENT!!!

  • @ajaykushwaha4233 · 2 years ago · +3

    Hi Krish, we have the PyCaret library, which you showed in one of your videos. Is it advisable to use it, or do we need to create individual models, compare them, and finalise one? Kindly advise.

  • @ScirelSage · 11 months ago · +2

    This lecture series is amazing. A slight correction I found in the confusion matrix; please correct it (your TN should be FN and your FN should be TN)... I think so.
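
    For reference, scikit-learn's confusion_matrix puts true labels on rows and predictions on columns, so for binary labels it comes out as [[TN, FP], [FN, TP]]; a quick check with made-up labels:

```python
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 0, 1, 1, 1]
y_pred = [0, 1, 0, 1, 1, 0]

# Rows = actual class, columns = predicted class:
# [[TN, FP],
#  [FN, TP]]
print(confusion_matrix(y_true, y_pred))  # [[2 1]
                                         #  [1 2]]
```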

  • @maths_impact · 2 years ago · +2

    Hello sir, can you please make a video on the improved Gini index algorithm for feature selection? I have read many research papers where an improved Gini index algorithm is used, not simply the Gini index. I know you have very good knowledge and can make it easily. My friends and I will wait for the video.

  • @SantoshKumar-hr3jz · 2 years ago

    Yes The Best Video on ML

  • @tiyasadey2211 · 1 month ago

    Thank you so much Sir.

  • @eswarchandvuppala621 · 2 years ago

    Thanks a lot for these complete ML lectures

    • @anexocelisia9377 · 1 year ago

      Brother, can you tell me, does this ML video cover the whole syllabus of ML?

  • @CodeSnap01 · 2 years ago

    A thousand-dollar course, just free. Thank you, Krish sir.

  • @manishkumargupta4368 · 5 months ago

    48:57
    56:00
    1:02:00
    1:05:00
    1:30:00
    1:48:00
    1:52:00
    2:08:00
    3:19:00
    3:33:45
    3:51:20
    3:53:10
    3:57:10
    4:27:40
    4:56:20
    5:07:40
    5:27:16

  • @genai142Kumar · 1 year ago · +1

    Thank you Krish, this is very helpful. I'm a beginner; is it possible to get the notes for the video?

  • @mohammadashrafulhoque7231 · 2 years ago · +4

    Would you mind sharing the notes you used for this amazing video, the best I have ever seen on the internet? Please do, as it will help us a lot to follow your lecture.

    • @anexocelisia9377 · 1 year ago · +1

      Brother, can you tell me, does this ML video cover the whole syllabus of ML?

    • @kevinmitnick1301 · 1 year ago · +1

      @@anexocelisia9377 Does it ?

  • @jaishivaji6702 · 1 year ago · +1

    Sir, please upload an NLP tutorial for beginners.

  • @vaibhavkumarbiradar4776 · 23 days ago

    I have an interview for an associate decision scientist role; I am watching this for preparation.