Logistic Regression in Python - Machine Learning From Scratch 03 - Python Tutorial

  • Published on Sep 30, 2024
  • Get my Free NumPy Handbook:
    www.python-eng...
    In this Machine Learning from Scratch Tutorial, we are going to implement the Logistic Regression algorithm, using only built-in Python modules and numpy. We will also learn about the concept and the math behind this popular ML algorithm.
    ~~~~~~~~~~~~~~ GREAT PLUGINS FOR YOUR CODE EDITOR ~~~~~~~~~~~~~~
    ✅ Write cleaner code with Sourcery: sourcery.ai/?u... *
    📓 Notebooks available on Patreon:
    / patrickloeber
    ⭐ Join Our Discord : / discord
    If you enjoyed this video, please subscribe to the channel!
    The code can be found here:
    github.com/pat...
    Further readings:
    ml-cheatsheet....
    towardsdatasci...
    You can find me here:
    Website: www.python-eng...
    Twitter: / patloeber
    GitHub: github.com/pat...
    #Python #MachineLearning
    ----------------------------------------------------------------------------------------------------------
    * This is a sponsored link. By clicking on it you will not have any additional costs, instead you will support me and my project. Thank you so much for the support! 🙏

Comments • 132

  • @RaunakAgarwallawragAkanuaR · 6 months ago +1

    Although a logarithmic loss function was shown in the material, the gradient descent implementation is done using the square/entropy loss function's gradient.

  • @ubaidhunts · 3 years ago

    You saved me. Thank you!

  • @ireneashamoses4209 · 4 years ago +11

    This is the clearest explanation I have seen!! Thank you so much!! :)

    • @patloeber · 4 years ago +2

      thank you!

  • @satyakikc9152 · 3 years ago +1

    I have 2 questions:
    1. Why are we transposing X? (I checked the numpy documentation: transpose changes the dimensions, but I can't see the point of it here.)
    2. How are we getting the summation without applying np.sum?
    Can you please answer?

  • @kstyle8546 · 1 year ago +2

    There is a small typo in the sigmoid function (1:00).
    As-is: h_hat = 1 / (1 + e^(-wx+b))
    To-be: h_hat = 1 / (1 + e^(-(wx+b)))
    Always appreciate these great videos~

  • @global_southerner · 4 years ago +2

    -(wx+b) instead of -wx+b

  • @justAdancer · 3 years ago +1

    I just started learning this and tried running the code in a Jupyter notebook. It keeps saying "no module named logistic_regression".
    It might be a stupid question, but please let me know why it's happening.

    • @patloeber · 3 years ago

      The import statement might be different when using Windows and/or Jupyter notebook. Try: from .logistic_regression import LogisticRegression (with the dot).

  • @OK-bu2qf · 2 years ago +1

    You have explained this very easily. Keep it going. :)
    You saved my ass!!!

  • @PaulWalker-lk3gi · 4 years ago +2

    Is learning calculus a prerequisite for this series? I am learning, but feel a bit lost when it comes to the implementations, because it is difficult for me to understand the underlying mathematical concepts. I do appreciate the videos!

    • @patloeber · 4 years ago +1

      A little bit of calculus would be good here. Have a look at some free math courses here: www.python-engineer.com/posts/ml-study-guide/

  • @anmolvarshney8938 · 4 years ago +2

    Sir, my code (sigmoid function) is giving an exp overflow error during iteration. How can I overcome it?

    • @patloeber · 4 years ago +3

      This is because your x has very large values, so np.exp(-x) gets too big for your datatype. You could clip x in this case and set a maximum value, or try using np.float64 instead of np.float32.
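
      A minimal sketch of the clipping idea from the reply above (the bound of 500 is an arbitrary safe assumption; np.exp overflows float64 near exp(710)):

      import numpy as np

      def sigmoid(x):
          # clip the argument so np.exp never overflows
          x = np.clip(x, -500, 500)
          return 1 / (1 + np.exp(-x))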

  • @armaanzshaikh1958 · 4 months ago

    Guys, for those wondering: the gradient descent used here is the same as for linear regression, because the derivative of the log loss works out to the same expression (including the 1/n factor), with the prediction computed from X·weights + bias.

  • @marilyncancino4875 · 3 years ago +1

    thank you !!! your videos help me a lot :)

  • @sushil_kokil · 2 years ago +2

    Thank you buddy! This makes a lot of sense after my self-study of machine learning using the built-in sklearn models.

  • @dhivakarsomasundaram21 · 2 years ago

    So to evaluate test data we should not use fit_transform, only transform is required?

  • @robosergTV · 3 years ago

    Isn't it a single-layer neural net with a sigmoid activation function?

  • @nackyding · 4 years ago +1

    When I run this model on the "advertising data set" from Kaggle, the accuracy is only in the 40-50% range, while the sklearn LogisticRegression model is over 90%. I've tried varying the number of iterations and the learning rate, but I can't get an accuracy score above 50%.

    • @patloeber · 4 years ago +1

      Please note that this code is not optimized at all. You can try to apply feature scaling before training.

  • @TanmayShrivastava17 · 3 years ago +1

    Thanks for the video; now everything that's going on behind the scenes makes sense.

    • @patloeber · 3 years ago

      glad to hear that :)

  • @nadabu.274 · 4 years ago +2

    Excellent video. I'm really starting to get a good understanding of these ML algorithms after watching your videos.

    • @patloeber · 4 years ago

      that's great!

  • @akhadtop2067 · 3 years ago

    Congrats, because a lot of people don't do it from scratch.

  • @ranitbandyopadhyay · 5 years ago +1

    Things are explained much more elaborately

  • @jayantmalhotra1449 · 4 years ago +1

    Great work bro, I am sure you will reach 100K soon. Best of luck!

    • @patloeber · 4 years ago +1

      Thank you :)

  • @reetikagour1203 · 4 years ago

    Hi... I am using the code below to update the weights and bias, but it is giving me an error. Could you please help?
    Here is the code:
    w = w - (alpha_lr_rate * dw)
    b = b - (alpha_lr_rate * db)
    where w = np.random.normal(loc=0.0, scale=1, size=X.shape[1])
    b = 0
    error: operands could not be broadcast together with shapes (15,) (15,37500)

  • @arungovindarajan93 · 4 years ago +1

    Hi bro,
    you have used the squared loss function, but logistic regression has log loss. If we differentiate the squared loss w.r.t. w and b, do we get the same as the log loss derivative?

    • @patloeber · 4 years ago

      Hi. The loss I'm showing is the log loss (or better known as cross-entropy). However, the gradient is the same as for square loss in this case. You can check the further readings I provided in the description for a detailed gradient calculation :)

    • @armaanzshaikh1958 · 4 months ago

      No, you have used the linear regression cost function: you have shown the log loss, but the derivative shown is that of the linear regression loss.
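
      For reference, a minimal runnable numpy sketch of the update being debated (the toy data and variable names are illustrative assumptions, not the video's exact code). With y_predicted = sigmoid(X·w + b), the cross-entropy gradient takes the same form as the squared-loss gradient of linear regression:

      import numpy as np

      # toy data: 4 samples, 2 features (made up for illustration)
      X = np.array([[0.5, 1.0], [1.5, 2.0], [3.0, 0.5], [2.0, 2.5]])
      y = np.array([0, 0, 1, 1])
      n_samples, n_features = X.shape
      weights, bias, lr = np.zeros(n_features), 0.0, 0.1

      for _ in range(1000):
          # forward pass: y_predicted = sigmoid(X.w + b)
          y_predicted = 1 / (1 + np.exp(-(np.dot(X, weights) + bias)))
          # cross-entropy gradients; identical in form to the linear regression gradients
          dw = (1 / n_samples) * np.dot(X.T, (y_predicted - y))
          db = (1 / n_samples) * np.sum(y_predicted - y)
          weights -= lr * dw
          bias -= lr * db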

  • @yukiyoshimoto502 · 2 years ago

    I wonder how I can plot the logistic regression decision boundary calculated from this.

  • @hoami8320 · 1 year ago

    I was looking for a basic logistic regression model implemented algorithmically. Thank you very much, I like your video.

  • @mrfrozen97-despicable · 3 years ago +1

    Glad I discovered this video.))))))

    • @patloeber · 3 years ago +1

      glad to have you here :)

  • @jaimehumbertorinconespinos3790 · 3 years ago

    TypeError: unsupported operand type(s) for *: 'int' and 'NoneType'

  • @matthewking8468 · 3 years ago

    Where is the loss applied, please?

  • @shaikrasool1316 · 3 years ago

    How do I implement one-vs-rest from scratch and integrate it with logistic regression?

  • @4wanys · 3 years ago

    I think you use the linear regression cost function, not the logistic regression cost function, in your code.

  • @sat4GD · 3 years ago

    What about the loss function?

  • @_inetuser · 3 years ago

    Thanks dude, I was searching all over the web for whether you have to put the 0.5 thresholding into the function used by gradient descent / the cost function, but you showed me that it's just for the final prediction used afterwards.

  • @priyanj7010 · 2 years ago

    THANK YOU TO THE MOON AND BACK... BEST EVER EXPLANATION I HAD SEEN

  • @drrbalasubramanianmsu1593 · 1 year ago

    Very nice Explanation ... Thanks

  • @amirhosseintalebi6770 · 2 years ago

    Hello, in which environment is the Python code written?

  • @redouanebelkadi5068 · 3 years ago

    I thank you, good sir!

  • @gurekodok · 3 years ago +1

    Thank you for the summary.

    • @patloeber · 3 years ago

      glad you like it

  • @sz8558 · 3 years ago

    Great insight, thank you. It would be even better if you could have shown us the raw data and just explained the variables and what exactly we were trying to predict, etc. ... Thanks!

  • @umarmughal5922 · 2 years ago

    Where is the entropy loss implemented in this code?

  • @burcakotlu7858 · 3 years ago

    Thank you very much for your video. I wonder why you are not checking your model at each iteration and returning the model with the lowest error, instead of returning the model with the last w and b parameters of the for loop.

    • @patloeber · 3 years ago +1

      good point. I wanted to keep it simple here, but in practice of course you can/should check for the best model
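
      A sketch of the "keep the best model" idea from the reply above, as a fragment to add to the training loop (the loss expression and variable names are assumptions based on a typical implementation):

      # before the loop:
      best_loss, best_weights, best_bias = float('inf'), None, None

      # inside the loop, after computing y_predicted:
      eps = 1e-15  # avoid log(0) in the cross-entropy
      loss = -np.mean(y * np.log(y_predicted + eps) + (1 - y) * np.log(1 - y_predicted + eps))
      if loss < best_loss:
          best_loss, best_weights, best_bias = loss, weights.copy(), bias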

  • @KUNALSINGH-zk8su · 3 years ago

    Can you explain why I am getting this error?
    ValueError: not enough values to unpack (expected 2, got 1)
    def fit(self, X, y):
    ---> 12 n_samples, n_features = X.shape
    at line 12.
    Edit: when I do this with the normal logistic regression function from sklearn it works, but why not with the one we created?

    • @patloeber · 3 years ago

      Because sklearn can handle incorrect shapes and transforms them to the correct one for you. Here you have to do this yourself. X does not have the correct shape.

  • @kritamdangol5349 · 4 years ago

    I used this logistic regression model for disease prediction, and while pickling I got this error. Can you please explain what kind of error this is and how to overcome it? Please help me out.
    UnpicklingError: invalid load key, '\xe2'.

    • @patloeber · 4 years ago

      I guess you try to unpickle something that has not been pickled correctly...

  • @babaabba9348 · 3 years ago

    I'm a bit confused: my teacher showed us a different way, where the gradient descent is calculated from an ugly formula that involves a logarithm.

    • @patloeber · 3 years ago

      The cost function I showed also involves the log. I'm not sure which final formula your teacher uses, but it is very common to keep the logarithm in order to avoid overflow for large numbers...

  • @goodboytobi8202 · 3 years ago

    Thank you, sir. If we want to use elastic net regularization along with this logistic regression, how should we approach it?

  • @adbeelomiunu · 4 years ago

    Hello, is there any way we could have access to the Jupyter notebook you referenced in this video?

    • @patloeber · 4 years ago

      Hi, not yet, but I'm planning to release them on my website soon.

  • @unfinishedsentenc9864 · 3 years ago +1

    thank you so much.

    • @patloeber · 3 years ago

      glad you like it!

  • @adithyarajagopal1288 · 4 years ago

    SHOULDN'T DW HAVE AN NP.SUM OUTSIDE AS WELL? YOU HAVE SUMMED UP THE DB's BUT NOT THE DW's.

    • @patloeber · 4 years ago

      the dot product already includes a sum (np.dot applied for dw)
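
      A tiny self-contained check of this point (the shapes are made up): each entry of dw computed with np.dot is already a sum over all samples.

      import numpy as np

      X = np.array([[1., 2.], [3., 4.], [5., 6.]])  # 3 samples, 2 features
      err = np.array([0.1, -0.2, 0.3])              # (y_predicted - y), one entry per sample

      dw = np.dot(X.T, err)                         # shape (2,): already summed over the 3 samples
      manual = (X * err[:, None]).sum(axis=0)       # the same sum written out explicitly
      assert np.allclose(dw, manual)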

  • @passionatedevs8158 · 4 years ago

    Logistic regression minimizes log loss. You are reducing square loss... why is that?

    • @patloeber · 4 years ago

      Hi. The loss I'm showing is in fact the log loss (or better known as cross-entropy). However, the gradient is the same as for square loss in this case. You can check the further readings I provided in the description for a detailed gradient calculation :)

  • @abhisekhagarwala9501 · 4 years ago

    Hi,
    I wrote your code and tried applying it to the custom dataset below, but it raised an error. I also tried reshaping the vector into 3 features, which also gave me an error; it only works on the code you gave. Why is that? Please help me.
    I also tried loading the Boston dataset with your code and faced an error there too. Could you tell me why that is?
    X = np.arange(10)
    X_train = np.arange(7)
    y_train = np.arange(7)
    X_test = np.array([8,9,10])
    y_test = np.array([8.5,9.5,10.5])

    • @patloeber · 4 years ago +1

      Your X does not have the correct dimension! You have to add one more axis to X_train and X_test: X_test = np.array([[8,9,10]]). Or use np.newaxis to add a new one (have a look at my new numpy tutorial, where I show this).

    • @abhisekhagarwala9501 · 4 years ago

      @@patloeber Thank you so much for your response. I will use your suggestion and share my inputs with you, so you can show me where and why I am making the mistake. That way I will get more insight.
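
      A minimal sketch of the reshaping suggested above (values are illustrative):

      import numpy as np

      X_test = np.array([8, 9, 10])    # shape (3,): 1-D, so X.shape cannot unpack into two values
      X_test = X_test[:, np.newaxis]   # shape (3, 1): 3 samples with 1 feature each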

  • @nadabu.274 · 4 years ago

    I have questions regarding random_state: some sources set it to 42 or 95, and when I changed this number the accuracy changed as well. For example, on the make_blobs dataset, if I set it to 95 the classifier gave good accuracy (~99%), but when I set it to 42 it gave around 88%. Also, I got this error when I changed the learning rate value: RuntimeWarning: overflow encountered in exp, return 1 / (1 + np.exp(-x)).

    • @patloeber · 4 years ago +1

      random_state lets you reproduce your results. It does not matter which number you use (some people just like 42). It affects the training and test samples, and some splits work better than others. If your x is too large, you can get an overflow because exp(-x) is too large. Try clipping x to a maximum, or use datatype float64 instead of float32.

    • @nadabu.274 · 4 years ago

      @@patloeber Thank you very much, I will try these solutions. I tried absolute (abs), but the plot of the loss function looks weird (got values below zero).

  • @alexanderperegrinaochoa7491 · 4 years ago

    Hi, can this algorithm be extended to a multi-class problem?

    • @patloeber · 4 years ago

      Yes it can. For multinomial logistic regression you have to use the softmax function instead of the sigmoid function to approximate the y. So y_predicted = self._softmax(linear_model). Furthermore you have to apply the cross-entropy as loss function and then calculate the gradients. Then again you can use the update rule with the correct gradient: self.weights -= self.lr * gradient
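
      A sketch of the softmax substitution described above, in a numerically stabilized form (the scores and shapes are illustrative assumptions):

      import numpy as np

      def softmax(z):
          # subtract the row-wise max for numerical stability, then normalize
          z = z - z.max(axis=1, keepdims=True)
          exp_z = np.exp(z)
          return exp_z / exp_z.sum(axis=1, keepdims=True)

      # z: (n_samples, n_classes) scores, e.g. np.dot(X, W) + b with W of shape (n_features, n_classes)
      z = np.array([[2.0, 1.0, 0.1], [0.5, 2.5, 0.3]])
      probs = softmax(z)                  # each row sums to 1
      y_pred = np.argmax(probs, axis=1)   # predicted class per sample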

  • @arieljiang8198 · 2 years ago

    This is great, thank you!!!

  • @keerthyeran1742 · 4 years ago

    Can I use this for writer identification?
    Can you respond fast?

    • @patloeber · 4 years ago

      Yes LR can be used for classification tasks

  • @prashantsharmastunning · 4 years ago

    Hi, I am getting a runtime warning:
    RuntimeWarning: overflow encountered in exp
    return 1/(1+np.exp(-x))
    0.8947368421052632
    What should I do to avoid this?

    • @patloeber · 4 years ago +1

      you probably have values in x with very large negative numbers. Try applying a standard scaler

    • @prashantsharmastunning · 4 years ago

      @@patloeber you are the best, thanks..

    • @prashantsharmastunning · 4 years ago

      @@patloeber Thanks, that solved the issue.
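
      A sketch of the feature scaling suggested above, using standard sklearn calls (the dataset is just an example; note the scaler is fit on training data only, which also answers the fit_transform vs. transform question elsewhere in the comments):

      from sklearn.datasets import load_breast_cancer
      from sklearn.model_selection import train_test_split
      from sklearn.preprocessing import StandardScaler

      X, y = load_breast_cancer(return_X_y=True)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

      scaler = StandardScaler()
      X_train = scaler.fit_transform(X_train)  # fit on the training data only, then scale it
      X_test = scaler.transform(X_test)        # reuse the training statistics for the test data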

  • @dhananjaykansal8097 · 4 years ago +1

    Lovelyyyyy. Cheers!

  • @Mar-kb8yq · 3 years ago

    Hi, your videos are just awesome! One question: do the iterations of the fit method's for loop correspond to a neural network's hidden layers? Is that true?

    • @patloeber · 3 years ago

      No, this training loop has nothing to do with neural networks; it's simply how long the optimization should try to improve. However, you can compare it with the number of epochs when training a neural net.

  • @paulbrown5839 · 3 years ago

    Nice video, thanks a lot. Very good compared to similar ones I have looked at.

    • @patloeber · 3 years ago

      Thanks! Glad you like it

  • @nikolayandcards · 4 years ago

    How can I plot the sigmoid curve the same way you plotted the fitted line in linear regression at the end?

    • @patloeber · 4 years ago +1

      import numpy as np
      import matplotlib.pyplot as plt

      sigmoid = lambda x: 1 / (1 + np.exp(-x))
      x = np.linspace(-10, 10, 100)
      fig = plt.figure()
      plt.plot(x, sigmoid(x), 'b', label='linspace(-10,10,100)')
      plt.show()

    • @nikolayandcards · 4 years ago +1

      ​@@patloeber Thank you! I also added X and y to the plot to see how they go with the sigmoid curve. Works like a charm.

  • @damianwysokinski3285 · 4 years ago

    We don't need to define an accuracy function. We can use sklearn.metrics.accuracy_score instead.

    • @patloeber · 4 years ago

      Of course you can. But I want to implement it from scratch ;)

    • @damianwysokinski3285 · 4 years ago

      @@patloeber My bad :P I forgot about the main goal of the playlist.

    • @patloeber · 4 years ago

      @@damianwysokinski3285 No problem. It's actually good that you know about these sklearn functions :)

  • @kougamishinya6566 · 3 years ago

    The way you relate linear regression to logistic regression makes it so clear, thank you so much!

    • @patloeber · 3 years ago +1

      Glad it was helpful!

  • @thecros1076 · 4 years ago

    Why use gradient descent? Can't we set the derivative with respect to the parameters equal to zero and find the parameters w and b by solving the equations? Please answer this question, I really need the answer.

    • @patloeber · 4 years ago

      Good question! Gradient descent is an iterative approach to this solution. In theory your analytical method is optimal. However, in practice, this requires solving complex equations which is too expensive for higher dimensions. Moreover, in the real world, many cost functions don't have valid derivatives everywhere.

    • @thecros1076 · 4 years ago

      Are there any resources for this from you... any blog that can be read? I read the blog on linear regression but could not find the same method for logistic regression.

    • @thecros1076 · 4 years ago

      Please bro, do you have any blogs for this?

    • @patloeber · 4 years ago

      @@thecros1076 I have a blog at python-engineer.com. Unfortunately, at this moment I do not have articles for the machine learning tutorials, but they will be added in the future

    • @arungovindarajan93 · 4 years ago

      It is very hard to equate the derivative to zero and find the minimum, because in machine learning optimization the derivatives take complex forms. Say the derivative of some function is 5x+5: you can equate this to 0 and solve for x to find the minimum. That's why GD/SGD algorithms are used in ML.
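
      For the record, a sketch of why no closed form exists here (a standard result, not from the video): setting the cross-entropy gradient to zero gives

      \[ \nabla_w J(w, b) = \frac{1}{n} \sum_{i=1}^{n} \left( \sigma(w^\top x_i + b) - y_i \right) x_i = 0 \]

      and because \( \sigma \) is nonlinear in \( w \), these equations are transcendental and have no closed-form solution (unlike the normal equations \( w = (X^\top X)^{-1} X^\top y \) of linear regression), so an iterative method such as gradient descent is used.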

  • @bassamal-kaaki3253 · 4 years ago

    I followed the exact thing you did, yet I get an error that says "object has no attribute 'sigmoid'", although I typed the exact same thing. In addition, the code in the video and on GitHub is different and needs updating, for example learning_rate vs. lr :)

    • @patloeber · 4 years ago

      Compare again with the code on GitHub? There must be a typo somewhere. Do you use 'self'?

    • @bassamal-kaaki3253 · 4 years ago

      @@patloeber I copied it exactly!! Where is the code on GitHub? Can you kindly provide the link?

    • @patloeber · 4 years ago +1

      github.com/python-engineer/MLfromscratch

    • @bassamal-kaaki3253 · 4 years ago

      @@patloeber Thank you, I got the link. I love your videos by the way!

    • @patloeber · 4 years ago

      Thanks!

  • @stevewang5112 · 4 years ago

    Could you please tell me how to do logistic regression with L2 regularization?

    • @patloeber · 4 years ago

      Hello. Please check out my video about SVM to see how a regularization term is applied. Basically you add the regularization term to your cost function, and then calculate the gradients. You also have to use a regularization parameter to balance the effect of the regularization during optimization. You can also check out this code: github.com/pickus91/Logistic-Regression-Classifier-with-L2-Regularization

    • @bryanchambers1964 · 3 years ago

      @@patloeber Doesn't the bias term do the same thing as a regularization term, though?
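
      To make the reply above concrete, a sketch of L2-regularized gradients (lambda_param is a hypothetical name for the regularization strength; the bias is conventionally left unregularized):

      # cost: J = cross_entropy + (lambda_param / (2 * n_samples)) * np.sum(weights ** 2)
      # gradients with the added L2 term (the bias gradient is unchanged):
      dw = (1 / n_samples) * np.dot(X.T, (y_predicted - y)) + (lambda_param / n_samples) * weights
      db = (1 / n_samples) * np.sum(y_predicted - y)

      This also speaks to the follow-up question: the bias only shifts the decision boundary, while the L2 term shrinks the weights, so the two play different roles.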

  • @keshavarzpour · 4 years ago

    How can we update it to a multiclass version, with more than 2 labels?

    • @patloeber · 4 years ago

      Hello. For multinomial logistic regression you have to use the softmax function instead of the sigmoid function to approximate the y. So y_predicted = self._softmax(linear_model). Furthermore you have to apply the cross-entropy as loss function and then calculate the gradients. Then again you can use the update rule with the correct gradient: self.weights -= self.lr * gradient

    • @dattijomakama9703 · 4 years ago

      @@patloeber Hello, thank you for putting these great tutorials online. We're learning a lot from them.
      I have implemented the logistic regression using the softmax function as you described above, but it returns a score/accuracy of zero on the iris dataset.
      for i in range(self.n_iterations):
          model = np.dot(X, self.weights) + self.bias
          y_predicted = self.softmax(model)
          dw = (1/n_samples) * np.dot(X.T, (y_predicted - y))
          db = (1/n_samples) * np.sum(y_predicted - y)
          self.weights -= self.lr * dw
          self.bias -= self.lr * db

      def predict(self, X):
          model = np.dot(X, self.weights) + self.bias
          y_predicted = self.softmax(model)
          print(y_predicted)
          return y_predicted

      def softmax(self, x):
          return np.exp(x) / float(sum(np.exp(x)))

    • @PenAndPickaxe · 4 years ago

      @@dattijomakama9703 Same, it doesn't work for me either. My model predicts only one class despite having a balanced dataset.
      EDIT: Run it for about 1000 iterations; it worked for me.

  • @HuyNguyen-fp7oz · 3 years ago

    Love your video!!!

  • @fahimfaisal4660 · 4 years ago

    Excellent!

  • @lucasmrtiins_ · 4 years ago

    Why do we use the bias?

    • @patloeber · 4 years ago +1

      The approximation is w*x + b. You can think of this in 2D, then this is equivalent to a line equation m*x + t. The bias is the intercept. It can shift the whole data up or down. So if your data is not centered around the origin, then we need to shift it to get the correct prediction. Hence we also try to learn the bias/intercept.
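
      To make the intercept's role concrete (a standard observation, not from the video): the decision boundary of logistic regression is the set where w·x + b = 0, i.e. where sigmoid(w·x + b) = 0.5. With b = 0 this hyperplane is forced through the origin; the bias lets it shift away from the origin to fit data that is not centered.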

  • @ahmadtarawneh2990 · 3 years ago

    Very nice!

  • @DanielDaniel-hk9el · 4 years ago

    Very good!

  • @parismollo7016 · 4 years ago

    This is a great tutorial! Thank you!

    • @patloeber · 4 years ago

      Thank you! Glad you like it :)

    • @parismollo7016 · 4 years ago +1

      @@patloeber I am reading the book "Data Science from Scratch" by Joel Grus, which is really good too, but sometimes I have trouble understanding the use of some formulas. Your channel is great because it has the same "from scratch" approach, and I think it is really useful for those who are interested in the statistics/math behind all the magic of ML.

    • @patloeber · 4 years ago +1

      @@parismollo7016 Yes some topics can be challenging, but keep going! I'm happy if this is helpful :)

  • @bryanchambers1964 · 3 years ago

    Great job. Your code is so concise and logical, but it does require a solid background. I just wish I got better accuracy: I used it on the "Titanic" dataset on Kaggle and could only get 66%, the lowest of all the models I have tried. Markov Chain Monte Carlo gave me the best so far at 78%. Any idea how I can get a better score?

    • @patloeber · 3 years ago

      Thanks! Yes the model is not optimized at all. Titanic dataset is all about cleaning and preprocessing your data, so maybe that could improve it :)