This material is mostly based on Tom Mitchell's book Machine Learning (Chapter 6: Bayesian Learning), including the cancer example. The book is honestly very hard for someone new to the field. Slightly more introductory and accessible material on Bayesian concept learning is in Kevin Murphy, Machine Learning: A Probabilistic Perspective, Chapter 3.
Correct. That book is not for beginners.
You can also use the probability chapters of Bishop's Pattern Recognition and Machine Learning to learn more about the prior and posterior probabilities mentioned in the example above.
thanks. this has been helpful
Thanks a lot, ma'am, for providing such quality lectures. Those who have to study proper theoretical ML will truly appreciate these lectures.
So ironic: I did engineering for four years and got real knowledge here. Indian institutions are in pathetic condition and structural changes are badly needed. It's high time to expose students to research-oriented, quality education, with no more than 5 subjects per year across the 4 years, chosen to be relevant to current industry trends. For instance, machine learning came to me as a subject in my last semester, when practically it should have been introduced in the third semester itself, making me a more competent part of the workforce with respect to the latest trends and opportunities.
couldn't agree more
@hannanbaig7888 IITs are doing well only because of their extremely talented students.
What the duck??? No concepts, no application background, nothing; just solving questions, which we have been doing since school. Will you please give some descriptive information in a practical manner?
Why doesn't YouTube have a ×10 fast-forward ⏩ option?
At 9:43 it should be:
P(~cancer)·P(+|~cancer) divided by P(+)
It is written incorrectly, but the ratios are calculated correctly.
Also, P(cancer) = 0.008, not 0.02.
At 8:47, P(cancer) is 0.008, not 0.02.
Guys, update accordingly.
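For anyone who wants to check the corrected numbers, here is a quick sketch using the standard figures from Mitchell's cancer example (P(cancer) = 0.008, P(+|cancer) = 0.98, P(+|~cancer) = 0.03; the exact values written in the lecture may differ):

```python
# Sanity check of the cancer example (Mitchell, Chapter 6).
p_cancer = 0.008
p_not_cancer = 1 - p_cancer          # 0.992
p_pos_given_cancer = 0.98
p_pos_given_not = 0.03

# Evidence: total probability of a positive test, by the law of total probability.
p_pos = p_pos_given_cancer * p_cancer + p_pos_given_not * p_not_cancer

# Posterior by Bayes' theorem.
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos
print(round(p_cancer_given_pos, 3))  # → 0.209
```

The takeaway: even after a positive test, the posterior probability of cancer is only about 21%, because the disease is so rare.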
It all went over my head 😣
Very nice... It helped me a lot... Thank you, madam.
Ma'am, you are teaching as if everyone in the audience is from IIT: no background on anything, just completing slides.
These video lectures are very good, but after mod04lec13, lectures mod04lec14 and mod04lec15 are missing. Please create the playlist in sequence.
Did they send you any link?
Go to www.nptel.ac.in and search for the machine learning course, or search by her name; you will get the full videos.
By the way, the link is: http://www.nptel.ac.in/courses/106105152/
Really, God knows what ma'am is saying...
Why?
No worries, bro. If you know the concepts of prior and posterior probabilities and the dependencies, then you can understand it much more easily.
lol😇
just watch it at 1.5x speed..
Wonderful video. Thank you!
You said P(D) is the likelihood; I think P(D|H) is the likelihood and P(D) is the evidence.
19:20 why is the variance (sigma squared) constant??
It's linear regression.
It is an assumption here; it could vary in general, but we are using a Gaussian (normal) distribution here.
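To see why holding σ² constant matters: with a fixed noise variance, σ only shifts and scales the Gaussian log-likelihood, so the hypothesis that maximizes the likelihood is exactly the one that minimizes the sum of squared errors, whatever σ is. A minimal sketch (the residual vectors are made up for illustration):

```python
import math

# Gaussian log-likelihood of residuals d_i - h(x_i), with fixed sigma:
# log L = -n/2 * log(2*pi*sigma^2) - SSE / (2*sigma^2)
def log_likelihood(residuals, sigma):
    n = len(residuals)
    sse = sum(r * r for r in residuals)
    return -0.5 * n * math.log(2 * math.pi * sigma ** 2) - sse / (2 * sigma ** 2)

# Two hypothetical candidate hypotheses, represented by their residuals.
res_h1 = [0.1, -0.2, 0.05]  # smaller squared error
res_h2 = [0.5, -0.4, 0.3]   # larger squared error

# Whichever constant sigma we pick, the ranking never changes:
for sigma in (0.5, 1.0, 2.0):
    assert log_likelihood(res_h1, sigma) > log_likelihood(res_h2, sigma)
print("same ranking for every sigma")
```

So the value of σ drops out of the arg-max, which is why the lecture can treat it as a constant.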
What is the difference between a hypothesis and classes, then?
A hypothesis is a function.
Many things are wrong here; please don't watch. Epsilon (the noise) is assumed to come from a zero-mean, unit-variance normal distribution, not d_i (which we call f_hat).
I think you need to revise probability theory.
8:48... The prior probability of cancer is 0.08, not 0.02... Don't know if I'm wrong...
correct I guess
You are correct.
0.008 instead of 0.08
Ma'am, you are going too fast... At that speed it is very difficult to follow the maths...
Try learning the basic maths involved, like i.i.d. events, Bayes' theorem, and the normal distribution. I'm sure you'll understand it much better then. I too had difficulty following the video.
8:51 it should be 0.008 for P(cancer).
Is anyone here working on mixture distributions in Bayesian inference? Please get in touch.
Thanks, that's awesome. Really useful.
Where did the Gaussian function suddenly come from in computing the maximum-likelihood hypothesis?
It's the normal distribution that arises; in machine learning the noise terms are assumed i.i.d., meaning they occur independently, without depending on the other terms. Noise occurs because what we predict does not match what actually happens: the data may contain unwanted noise or irrelevant terms. There are different techniques to deal with this, and the Gaussian/normal distribution handles it best.
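To make that concrete, here is a small simulation (the target f(x) = 2x and the sample size are made up for illustration): each observation is d_i = f(x_i) + eps_i with i.i.d. Gaussian noise, and under that model the maximum-likelihood line through the origin is just the least-squares fit.

```python
import random

random.seed(0)

# True target f(x) = 2x; observations d_i = f(x_i) + eps_i, eps_i ~ N(0, 1) i.i.d.
xs = [i / 10 for i in range(1, 101)]
ds = [2.0 * x + random.gauss(0.0, 1.0) for x in xs]

# Under i.i.d. Gaussian noise, maximizing the likelihood is the same as
# minimizing squared error; for a line through the origin the closed form is:
w = sum(x * d for x, d in zip(xs, ds)) / sum(x * x for x in xs)
print(round(w, 2))  # close to the true slope 2.0
```

Despite every single observation being corrupted, the fitted slope lands very near 2.0, which is the point of the Gaussian-noise assumption.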
If they had just shared a PDF of the notes in ma'am's hand, less time would be wasted looking at the mistakes.
The second one should be P(+|~cancer), not P(~|cancer).
You are right that the second one at 9:43 is wrong, but the answer you gave is also wrong. It should be P(~|~cancer)P(~cancer).
@likithkb8599 Yeah, you are right.
Before this class you did not teach probability; if you have taught probability somewhere, please give the link to that class.
th-cam.com/video/5Pck0Cqw-zc/w-d-xo.html&ab_channel=Ch-13ComputerScienceandEngineering
Thank you, ma'am 😊
8:47 it's not 0.008.
That was too much, ma'am.
good
She taught it so badly, what can I even say...
I will not be going for IIT now.
Dumb teacher.
This is not expected of an IIT teacher, on IIT platform. Disappointing.
script readers
A teacher who doesn't remember the basic Gaussian distribution in machine learning is teaching us machine learning.
And she is an IIT KGP professor of CS.
😂, true. I was looking for a solution, but here I came 😅
Writing down what is written in the book and reading the same thing out is not teaching.
Useless lecture. She is even referring to the materials and still writing the wrong thing on the board.