Thank you for such a nice video. There's still something not clear to me: when you have an imbalanced dataset, which is better, the macro or the micro F1-score?
Thanks man this helped out👊
Thank you so much, Neeraj. I am forever grateful
Hi Niraj!
I think there may be an error in the procedure (or some confusion on my part), because you will always get the same number for total FP and total FN: what you are doing is summing the row (or column) and then subtracting the diagonal element corresponding to the row (or column) you are working with.
If it were written out (in R I would just sum every FN_i, and do the same for FP):
FN_micro = sum_i (rowSum_i - diag_i), FP_micro = sum_i (colSum_i - diag_i),
and both reduce to the total count of off-diagonal entries, so they are always equal.
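As a quick check of this claim, here is a small NumPy sketch (Python rather than R; the matrix values are made up purely for illustration) showing that the summed false negatives and summed false positives of a multi-class confusion matrix always coincide:

    import numpy as np

    # Hypothetical 3-class confusion matrix: rows = true class, columns = predicted class.
    cm = np.array([[50,  3,  2],
                   [ 4, 60,  6],
                   [ 1,  5, 70]])

    tp = np.diag(cm)              # correct predictions per class
    fn = cm.sum(axis=1) - tp      # row sums minus the diagonal
    fp = cm.sum(axis=0) - tp      # column sums minus the diagonal

    print(fn.sum(), fp.sum())     # both equal the total number of off-diagonal entries

Both prints give 21 here, because each off-diagonal entry is counted once as a false negative for its row's class and once as a false positive for its column's class.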
Micro-averaging: in the micro-averaging method, we sum up the individual true positives, false positives, and false negatives of the system across the different sets and then compute precision and recall from those totals. The micro-averaged F1-score is simply the harmonic mean of those two quantities.
In macro-averaging, we just take the average of the precision and recall of the system over the different sets.
It may be due to the data selected in the current example that both give the same result; that is possible.
However, as also discussed in the video, both measures give useful information about performance:
Case-1: We generally use the macro-averaging method when we want to know the system's overall performance across the sets of data.
Case-2: On the other hand, micro-averaging can be a useful measure when your dataset varies in size.
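To make the two averaging schemes concrete, a minimal scikit-learn sketch (the labels below are invented purely for illustration) could look like this:

    from sklearn.metrics import f1_score

    # Hypothetical multi-class ground truth and predictions, only for illustration.
    y_true = [0, 0, 0, 1, 1, 2, 2, 2, 2, 2]
    y_pred = [0, 0, 1, 1, 2, 2, 2, 2, 0, 2]

    # Macro: compute F1 per class first, then average the per-class scores (every class weighs equally).
    print("macro F1:", f1_score(y_true, y_pred, average="macro"))

    # Micro: pool the TP/FP/FN counts over all classes first, then compute a single global F1.
    print("micro F1:", f1_score(y_true, y_pred, average="micro"))

With imbalanced classes the two can differ noticeably, since macro-averaging lets small classes pull the score down while micro-averaging is dominated by the large ones.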
@DrNirajRKumar Hi! Thank you for the answer.
Even so, I think maybe my point was not clear: the micro precision and micro recall (which I use for the micro F1) will always be equal.
Yeah, that's what I said in my comment, thanks for agreeing with me :). But also, precision_micro and recall_micro are always the same as the accuracy, since in both cases we divide the summed true positives by the complete sum of the confusion matrix. So why have the micro-averaging metrics at all?
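For what it's worth, a quick sketch (reusing the made-up labels from above) confirms this for the single-label multi-class case; in multi-label settings micro precision and micro recall can differ from accuracy, which is one reason the micro-averaged metrics still exist:

    from sklearn.metrics import accuracy_score, precision_score, recall_score

    # Same hypothetical labels as before. In single-label multi-class classification every sample
    # contributes exactly one prediction, so pooled TP/(TP+FP) and TP/(TP+FN) share the same denominator.
    y_true = [0, 0, 0, 1, 1, 2, 2, 2, 2, 2]
    y_pred = [0, 0, 1, 1, 2, 2, 2, 2, 0, 2]

    print(accuracy_score(y_true, y_pred))
    print(precision_score(y_true, y_pred, average="micro"))
    print(recall_score(y_true, y_pred, average="micro"))
    # All three print the same value here; with multi-label data they generally would not.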
Thank you very much, but I need to know how we can compute weighted precision, weighted recall, and weighted F1-score, please.
Please follow: scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_recall_fscore_support.html
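A minimal sketch of that function, again with invented labels: average="weighted" computes the per-class precision, recall, and F1 and then averages them using each class's support (its number of true samples) as the weight.

    from sklearn.metrics import precision_recall_fscore_support

    # Hypothetical labels, only for illustration.
    y_true = [0, 0, 0, 1, 1, 2, 2, 2, 2, 2]
    y_pred = [0, 0, 1, 1, 2, 2, 2, 2, 0, 2]

    # Per-class scores averaged with each class's support as the weight.
    precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="weighted")
    print(precision, recall, f1)

The same average="weighted" option also works directly in f1_score, precision_score, and recall_score.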
You saved me, thank you so much.
Just stumbled upon the fact that recall_micro and precision_micro are the same in my case, and was happy that this video confirmed it. But it should in fact always be the same, since in one case we divide by the row sums minus the diagonal and in the other by the column sums minus the diagonal, and those totals are the same for every matrix. Correct me if I'm wrong though :)
Micro-averaging: in the micro-averaging method, we sum up the individual true positives, false positives, and false negatives of the system across the different sets and then compute precision and recall from those totals. The micro-averaged F1-score is simply the harmonic mean of those two quantities.
In macro-averaging, we just take the average of the precision and recall of the system over the different sets.
It may be due to the data selected in the current example that both give the same result; that is possible.
However, as also discussed in the video, both measures give useful information about performance:
Case-1: We generally use the macro-averaging method when we want to know the system's overall performance across the sets of data.
Case-2: On the other hand, micro-averaging can be a useful measure when your dataset varies in size.
Nice video, sir. I am following your Google website, where you have created an ebook for ML and DL.
But it is only half done.
Do you have any plan to complete the ebook so I can follow along?
That is the old website. Please go through this new one: ai-leader.com/
I am trying to add more tutorials. However, everything depends on free time.
@DrNirajRKumar Thank you, sir. Keep up the good work. God bless.
Please, how do we calculate the weighted average?