Andrew is a blessing to folks taking this path on their own.
I've been trying to understand a dry run of the fundamental principle behind the operation, and this video was very helpful in clearing up my doubts... Thanks a lot!
Andrew Ng is the BEST.
Deep Explanation for Deep Learning, Thanks!
Very clear explanation, thumbs up.
This was a very nice hands on explanation. Thank you!
Thank you Andrew!
8:00 What if we have an m x (n-1) matrix? Can we broadcast it by copying one column at the end? And which column would that be?
At least one of the dimensions has to match up, because the operation is computed elementwise: each pair of dimensions must either be equal or one of them must be 1.
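A minimal sketch of that rule (the shapes here are just illustrative, not from the video):

import numpy as np

A = np.ones((3, 4))
B = np.ones((3, 3))
col = np.ones((3, 1))

print((A + col).shape)  # the size-1 dimension stretches across columns: (3, 4)
try:
    A + B               # (3, 4) vs (3, 3): trailing dims 4 and 3 don't match
except ValueError as err:
    print(err)          # "operands could not be broadcast together ..."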
I don't think we need to reshape cal, because it is already a 1x4 matrix. Simply use A/cal.
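For what it's worth, the division does work without the reshape; here is a minimal sketch, assuming the 3x4 calorie matrix from the lecture (values as best I recall). Note that cal actually comes out with shape (4,) rather than (1, 4), but broadcasting lines it up against A's columns either way.

import numpy as np

A = np.array([[56.0,   0.0,  4.4, 68.0],
              [ 1.2, 104.0, 52.0,  8.0],
              [ 1.8, 135.0, 99.0,  0.9]])

cal = A.sum(axis=0)          # shape (4,)
percentage = 100 * A / cal   # (3, 4) divided by (4,) broadcasts fine
print(percentage)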
4:54 "That's actually a little bit redundant" - Andrew Ng
He literally said that. Better safe than sorry. If you handle neural nets with various input and output sizes, you can quickly lose track!
3:40, isn't cal.reshape(1,4) redundant?
Yes, it is redundant. He notes this a bit further on in the video, but I was confused when he did this at first too.
4:43 I think that's incorrect, as cal has shape (4,) and not (1,4).
Nope, it is right. It is a 1x4 matrix, i.e. 1 row and 4 columns.
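An easy way to check is to print the shapes; a quick sketch with a stand-in matrix:

import numpy as np

A = np.ones((3, 4))
cal = A.sum(axis=0)
print(cal.shape)                # (4,): a rank-1 array
print(cal.reshape(1, 4).shape)  # (1, 4): an explicit row vector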
If this weren't an introduction to broadcasting, might percentage = 100*np.matmul(A, np.diag(1/cal)) be more intuitive, as an explicit matrix multiplication? And would it be a good convention to avoid broadcasting, since it is implicit?
How is that more intuitive? Expanding it into a whole extra matrix? Also, constructing a diagonal is more awkward for higher-rank tensors.
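The two spellings do agree numerically; a minimal sketch (A here is a random stand-in with nonzero column sums):

import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 4)) + 1.0

cal = A.sum(axis=0)
via_broadcast = 100 * A / cal                      # implicit broadcasting
via_matmul = 100 * np.matmul(A, np.diag(1 / cal))  # explicit column scaling
print(np.allclose(via_broadcast, via_matmul))      # True

The diag version materializes a full 4x4 matrix just to scale four columns, which is the overhead the reply is pointing at.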
Does broadcasting mean avoiding loops in matrix operations by reshaping matrices, which enables parallel computation?
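Roughly, yes, though the win comes less from the reshaping itself and more from letting NumPy run the elementwise work in optimized native code instead of a Python loop. A minimal sketch of the equivalence:

import numpy as np

A = np.ones((3, 4))
v = np.arange(4)

# Explicit Python loop over rows...
looped = np.empty_like(A)
for i in range(A.shape[0]):
    looped[i] = A[i] + v

# ...versus one broadcast expression doing the same work in C.
print(np.array_equal(looped, A + v))  # True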
Where can I get the Jupyter notebook file for this? (Any GitHub links?)
Download Anaconda.
thank you
How about when you apply an operation between a (m, 1) array and a (1, n) array? What will the result be? :>
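Both size-1 dimensions get stretched, so the result has shape (m, n): an "outer" version of the operation. A minimal sketch:

import numpy as np

col = np.arange(3).reshape(3, 1)  # shape (3, 1)
row = np.arange(4).reshape(1, 4)  # shape (1, 4)

out = col + row                   # each size-1 axis stretches to match
print(out.shape)                  # (3, 4)
print(out)                        # out[i, j] == col[i, 0] + row[0, j]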
NICE.. thank you
Is it a feature of Python or NumPy?
NumPy 😄
@@haakonvt 🤭
PLEASE REPLY 🙏 Instead of A.sum(axis=0), can we use np.sum(A, axis=0)?
Yes.
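They are interchangeable here; for an ndarray, np.sum(A, axis=0) performs the same reduction as A.sum(axis=0). A quick check:

import numpy as np

A = np.array([[1, 2],
              [3, 4]])
print(A.sum(axis=0))      # [4 6]
print(np.sum(A, axis=0))  # [4 6], same result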