For the first time, I understood MLE to its core. Thank you so much, Prof Tamer.
You're most welcome
sir wonderful lecture
Thank you 😊
Thank you sir. I have one question, please: explain the second line of the maximum likelihood equation at 6:50. Why is it replaced with theta?
Because the probability distribution is also conditioned on its parameters theta (by definition), and theta is now a function of the input x (since we set the network to predict it), we can simply write the distribution as conditioned on theta for short.
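A minimal sketch of that substitution (my own illustration, not code from the lecture): a toy "network" `predict_theta` maps the input x to the distribution parameter theta (here, the mean of a Gaussian with fixed sigma), so evaluating Pr(y | x) is the same as evaluating Pr(y | theta). All function and variable names here are hypothetical.

```python
import numpy as np

def predict_theta(x, w, b):
    # Hypothetical one-parameter "network": predicts the Gaussian mean
    # theta = f(x, phi), where phi = (w, b) are the network weights.
    return w * x + b

def neg_log_likelihood(y, mu, sigma=1.0):
    # -log Pr(y | theta) for a Gaussian with fixed sigma.
    # Since theta (= mu here) is a function of x, -log Pr(y | x)
    # and -log Pr(y | theta) are the same quantity.
    return 0.5 * np.log(2 * np.pi * sigma**2) + (y - mu) ** 2 / (2 * sigma**2)

# Tiny made-up dataset for illustration.
x = np.array([0.0, 1.0, 2.0])
y = np.array([0.1, 0.9, 2.1])

mu = predict_theta(x, w=1.0, b=0.0)      # theta predicted from x
loss = neg_log_likelihood(y, mu).mean()  # average NLL over the dataset
```

Maximum likelihood training then amounts to minimizing this average negative log-likelihood with respect to the network weights phi.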
@tamer_elsayed Thank you sir
Sir, can you share the presentation (PDF)? It would immensely help with revision. Although I could see the slides on the UDL website, I felt your slides are more condensed.
Thank you once again
Sure, I will add a link to every lecture in the description. I already did in lecture 9.
@tamer_elsayed yes sir, thank you so much 🙏🏻