09:00 - Linear model with an example of a non-linear feature
12:00 - A new engineered feature as an independent variable - h(x)
15:00 - SIFT features
16:00 - Deep learning algorithms - learning to do the feature engineering itself.
20:30 - If the given data is linearly separable, logistic regression (classification) is a good choice.
25:30 - Logistic regression is a ONE-CELL neural network - it's just ONE neuron (see the sketch after this list).
26:30 - Loss function - optimizing for w-hat: by picking the RIGHT w-hat, we ensure that y-hat_i is close to y_i for most points.
28:40 - ENTROPY - when both classes, 0 and 1, have equal probability 0.5 (50%), uncertainty is at its highest, and so is entropy (see the quick check below).
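On the 25:30 point, here is a minimal sketch (not from the lecture) of logistic regression as a single neuron: a weighted sum passed through a sigmoid, with w-hat found by gradient descent on the log-loss. The synthetic data and hyperparameters are made up purely for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Made-up, linearly separable toy data (100 points, 2 features)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)   # the w-hat we optimize
b = 0.0
lr = 0.1

for _ in range(500):
    y_hat = sigmoid(X @ w + b)          # the single neuron's output
    # Gradient of the average log-loss:
    # -mean(y*log(y_hat) + (1-y)*log(1-y_hat))
    grad_w = X.T @ (y_hat - y) / len(y)
    grad_b = np.mean(y_hat - y)
    w -= lr * grad_w
    b -= lr * grad_b

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print("learned w:", w, "accuracy:", acc)
```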
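And a quick check of the 28:40 entropy claim: binary entropy H(p) = -p*log2(p) - (1-p)*log2(1-p) peaks at exactly 1 bit when p = 0.5 (the binary_entropy helper below is just for illustration):

```python
import math

def binary_entropy(p):
    # H(p) in bits; defined as 0 at the endpoints
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for p in (0.1, 0.5, 0.9):
    print(f"H({p}) = {binary_entropy(p):.3f} bits")
# H(0.5) = 1.000 bits - maximum uncertainty, as stated at 28:40
```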
Thanks for the additional timestamps.