Tensors for Deep Learning - Broadcasting and Element-wise Operations with PyTorch

  • Published Dec 17, 2024

Comments • 38

  • @ravihammond
    @ravihammond 6 years ago +32

    I think it's very humble of you to include clips from other streamers/speakers/youtubers in your videos. Rather than ripping off their exact explanation and delivery, you've recognized that some other person X explained something best, and you put X's explanation in your video directly instead of copying it.

  • @sytekd00d
    @sytekd00d 6 years ago +14

    Dude!! This is probably the best learning channel for anything Deep Learning with Python. The explanations with the visuals make things SO much easier to understand.

    • @deeplizard
      @deeplizard  6 years ago

      Thank you sytekd00d. Really appreciate you letting me know!

  • @deeplizard
    @deeplizard  6 years ago +1

    Check out the corresponding blog and other resources for this video at: deeplizard.com/learn/video/QscEWm0QTRY

  • @ShahxadAkram
    @ShahxadAkram 2 years ago +1

    This series is so underrated. I don't know why it has so few views and likes. It should be at the top.

  • @rubencg195
    @rubencg195 6 years ago +31

    I love this channel! Keep up the good work. It would be great if you could continue explaining more advanced architectures after going through deep NNs and convolutional NNs. Maybe LSTMs, GANs, and many other interesting and useful tools.

  • @aiahmed608
    @aiahmed608 2 years ago +1

    I see what's special here! The way you combine the math with how we write the code is what a comprehensive workflow means. Thank you so much for your efforts!

  • @BenjaminGolding
    @BenjaminGolding 5 years ago +1

    As someone who studied computer science, these are basic matrix transformations (scalar multiplications), and they are explained really intuitively in the video for people without any linear algebra knowledge.

  • @ravitejavarma1307
    @ravitejavarma1307 5 years ago +4

    I feel like you are a god... this channel literally saved me... I desperately needed someone who could explain PyTorch functionality to me, and this channel is the best of the best...
    Thank you so much, please post more videos...

  • @Brahma2012
    @Brahma2012 5 years ago +3

    Thank you for this exhaustive explanation of the important and critical concept of broadcasting. This really helps.

  • @abdelhakaissat7041
    @abdelhakaissat7041 4 years ago +1

    Very well explained, thank you

  • @mdafjalhossain
    @mdafjalhossain 6 years ago +3

    Great work! I love your PyTorch videos.

  • @李祥泰
    @李祥泰 6 years ago +3

    Great work!! Nice, detailed explanation

    • @deeplizard
      @deeplizard  6 years ago

      Hey 李祥泰 - Thank you!

  • @adarshkedia8074
    @adarshkedia8074 5 years ago +3

    Please add videos on GANs, autoencoders, etc. The videos are really good and the explanations are perfect.

    • @deeplizard
      @deeplizard  5 years ago +2

      Thanks, Adarsh! Will consider. In the meantime, we do have an overview of autoencoders in our Unsupervised Learning video and blog. Check it out!
      deeplizard.com/learn/video/lEfrr0Yr684

  • @biplobdas2560
    @biplobdas2560 5 years ago +2

    What is that minus zero from t.neg() at time 10:48? :)
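
    For anyone else wondering: the minus zero is IEEE-754 signed zero, a display artifact of negating a float zero. A quick sketch (with made-up values, not the exact tensor from the video):

```python
import torch

# Negating a float tensor that contains 0.0 yields IEEE-754 "negative zero".
t = torch.tensor([0.0, 1.0, 2.0])
neg = t.neg()
print(neg)  # tensor([-0., -1., -2.])

# -0. behaves like zero in arithmetic: it compares equal to 0.
print(neg[0] == 0.0)  # tensor(True)
```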

  • @felipealco
    @felipealco 5 years ago +1

    7:35 I decided to try another case with t3 = torch.tensor([[2],[4]], dtype=torch.float32). I expected t1 + t3 to equal tensor([[3., 3.], [5., 5.]]), but instead it returned the same result as t1 + t2.

    • @deeplizard
      @deeplizard  5 years ago

      Your expectation is correct. t1 + t3 is indeed tensor([[3., 3.], [5., 5.]]).
      The result of t1 + t2 is slightly different: tensor([[3., 5.],[3., 5.]]).
      Maybe double check your variable assignments?

    • @felipealco
      @felipealco 5 years ago

      @@deeplizard yeah I had a "typo". I thought I had written t1 + t3, but I wrote t1 + t2. 😅️
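
    The two broadcast cases in this thread can be sketched side by side (assuming t1 is the 2x2 tensor of ones used in the video):

```python
import torch

# Assumption: t1 is the 2x2 tensor of ones from the video.
t1 = torch.ones(2, 2)
t2 = torch.tensor([2., 4.])       # shape (2,):  broadcast down the rows
t3 = torch.tensor([[2.], [4.]])   # shape (2,1): broadcast across the columns

print(t1 + t2)  # [[3., 5.], [3., 5.]]
print(t1 + t3)  # [[3., 3.], [5., 5.]]
```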

  • @minimatamou8369
    @minimatamou8369 5 years ago +5

    Hi, thank you for your videos, they're really useful and I love them.
    Question though: is there a way to write our own element-wise function and ask PyTorch to apply it for us, like at 10:45 where the methods "t.neg()" and "t.sqrt()" are applied element-wise?
    Something like this:
    t.func(lambda x: x*x) # Output would be the same as t * t
    Or even including other tensors like so:
    t1 = torch.tensor([1, 1, 1])
    t2 = torch.tensor([2, 2, 2])
    add = lambda a, b: a + b
    t1.func(add, t2) # Output would be the same as t1 + t2
    Thanks.

    • @deeplizard
      @deeplizard  5 years ago +2

      Hey Minimata - You should be able to do it by extending the Tensor class. However, I'm not sure how it will work with autograd. I'd try digging around in the docs. Maybe here: pytorch.org/docs/stable/notes/extending.html

    • @minimatamou8369
      @minimatamou8369 5 years ago

      @@deeplizard Will check it out. Thanks!
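
    As a rough sketch of the idea in this thread (the `t.func` API from the question is hypothetical and does not exist): any function built from tensor operations is already applied element-wise, so no special machinery is needed in the common case.

```python
import torch

# Functions composed of tensor ops are applied element-wise automatically,
# stay vectorized, and remain visible to autograd.
square = lambda x: x * x
t = torch.tensor([1., 4., 9.])
print(square(t))  # tensor([ 1., 16., 81.])

# The binary case works the same way: plain arithmetic is element-wise.
t1 = torch.tensor([1, 1, 1])
t2 = torch.tensor([2, 2, 2])
add = lambda a, b: a + b
print(add(t1, t2))  # tensor([3, 3, 3])
```

    A truly arbitrary Python function (one not expressible in tensor ops) would need a loop or a custom extension, at the cost of losing the fused tensor kernels.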

  • @spearchew
    @spearchew 3 years ago +1

    Great vid, subscribed

  • @toremama
    @toremama 4 years ago +2

    Isn't the typing sound the smoothest and most awesome thing you've ever heard in your life?

  • @林盟政
    @林盟政 5 years ago +2

    Love this series XD

  • @stacksonchain9320
    @stacksonchain9320 6 years ago +2

    Thank you for introducing tensors; it's a topic many shy away from explaining, but it now seems very simple. A topic I still don't quite get is merge layers such as dot, more specifically the axes argument in Keras (not sure what the PyTorch equivalent is). Is it similar to the .cat function? Perhaps I should start using PyTorch; it seems more practical. Thanks again.

    • @deeplizard
      @deeplizard  6 years ago

      Hey Carl - You are welcome! Appreciate your feedback.
      PyTorch also has a dot function. Keras and PyTorch both compute a dot product.
      See: en.wikipedia.org/wiki/Dot_product
      Likewise, Keras has a concatenate function.
      Check it out here: keras.io/layers/merge/
      It does what the PyTorch one does. I do like PyTorch. #intuitive
      Keras is cool though! #cool
      Glad tensors are now #simple
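
    A minimal sketch of the two PyTorch functions mentioned in this reply:

```python
import torch

# torch.dot computes the dot product of two 1-D tensors.
a = torch.tensor([1., 2., 3.])
b = torch.tensor([4., 5., 6.])
print(torch.dot(a, b))  # tensor(32.)  ->  1*4 + 2*5 + 3*6

# torch.cat (like Keras concatenate) joins tensors along an existing dimension.
x = torch.ones(2, 2)
y = torch.zeros(2, 2)
print(torch.cat([x, y], dim=0).shape)  # torch.Size([4, 2])
print(torch.cat([x, y], dim=1).shape)  # torch.Size([2, 4])
```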

  • @xingchenzhao5331
    @xingchenzhao5331 4 years ago +2

    It is truly a great tutorial!

  • @donfeto7636
    @donfeto7636 2 years ago +1

    love this too

  • @hassaanahmad3970
    @hassaanahmad3970 4 years ago

    Hello, I have a question. I have a 100 x 768 matrix of test data and a 100 x 768 matrix of train data. I'm doing KNN, so I need to compute the Euclidean distance between the test and train data and map it into a 100 x 100 matrix. The trick is, I can't use any loops here, so I've got to do it completely through broadcasting. Any ideas how I might go about it?
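
    One possible loop-free approach (a sketch with random data standing in for the real matrices): insert singleton dimensions so the subtraction broadcasts to shape (100, 100, 768), then reduce over the feature axis.

```python
import torch

# Stand-in data; in practice these would be the real test/train matrices.
test = torch.randn(100, 768)
train = torch.randn(100, 768)

# (100, 1, 768) - (1, 100, 768) broadcasts to (100, 100, 768).
diff = test[:, None, :] - train[None, :, :]
dist = diff.pow(2).sum(dim=2).sqrt()  # (100, 100) pairwise Euclidean distances
print(dist.shape)  # torch.Size([100, 100])
```

    torch.cdist(test, train) computes the same matrix without materializing the intermediate (100, 100, 768) tensor.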

  • @antonmarshall5194
    @antonmarshall5194 5 years ago

    Great tutorial. How can I check whether every element in a tensor is True (not just truthy)? I already tried any(t.reshape(1, -1).numpy().squeeze()), but any() returns True if any element is truthy (nonzero), which isn't what I want.
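
    For what it's worth, a sketch of how torch.all handles this without the NumPy round-trip (hypothetical values):

```python
import torch

# torch.all reduces to True only if every element is True.
t = torch.tensor([[True, True], [True, False]])
print(t.all())     # tensor(False)
print(t[0].all())  # tensor(True)

# For a non-bool tensor, build an explicit bool mask first so only
# elements satisfying the comparison (not merely nonzero ones) count.
v = torch.tensor([1, 2, 3])
print((v == 1).all())  # tensor(False)
```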

  • @rizvanahmedrafsan
    @rizvanahmedrafsan 4 years ago +2

    When I tried the element-wise comparison operations in a Jupyter Notebook, they showed me True/False instead of 1/0 as output. I wrote exactly the same code shown here. Can anyone please explain why that happened?

    • @deeplizard
      @deeplizard  4 years ago +4

      Hey Rizvan - The difference you are seeing here is due to an update that was included in PyTorch version 1.2.0.
      Thank you for spotting this change. I've updated the text version of this video on the site. Anytime a change like this occurs, you can track it down by searching the release notes on PyTorch's GitHub page. See here (look at the top of the breaking changes section):
      github.com/pytorch/pytorch/releases/tag/v1.2.0
      See the comparison operation section here: deeplizard.com/learn/video/QscEWm0QTRY
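
    A quick sketch of the behavior change being described (comparison operations returning torch.bool since PyTorch 1.2.0):

```python
import torch

t = torch.tensor([1., 2., 3.])

# Since PyTorch 1.2.0, comparisons return a torch.bool tensor (True/False)...
mask = t.gt(1)
print(mask)        # tensor([False,  True,  True])
print(mask.dtype)  # torch.bool

# ...and casting recovers the older uint8-style 1/0 output.
print(mask.to(torch.uint8))  # tensor([0, 1, 1], dtype=torch.uint8)
```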

  • @grombly
    @grombly 5 years ago +3

    Kinda random, but can you link the audio file playing during the coding segments? The intense vacuum noise lol

    • @deeplizard
      @deeplizard  5 years ago

      🤣 I must know! Do you plan to run this on a loop while you code? Maybe white noise for sleeping? 🤣🤣🤣
      It's an awesome sound! Link: freesound.org/people/swiftoid/sounds/119782/

  • @ruchiagrawal600
    @ruchiagrawal600 4 years ago

    Can someone help me by explaining how the -1 in .reshape(1,-1) determines the shape of the tensor?

    • @deeplizard
      @deeplizard  4 years ago +1

      The number of elements inside the tensor is fixed. The -1 tells the reshape function to calculate that dimension from the other dimensions and the fixed element count. Suppose we have an array A of 12 elements and we call A.reshape(3,-1). Then A.shape would be (3,4), since 3 x 4 = 12. Hope this helps 😄
      Chris
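
    The explanation above as a short sketch:

```python
import torch

A = torch.arange(12)  # 12 elements

# -1 asks reshape to infer that dimension from the element count.
print(A.reshape(3, -1).shape)  # torch.Size([3, 4])   (12 / 3 = 4)
print(A.reshape(1, -1).shape)  # torch.Size([1, 12])  (12 / 1 = 12)
```

    Only one dimension may be -1, and the other dimensions must divide the element count evenly.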