PyTorch Neural Network example

  • Published on 1 Dec 2024

Comments • 152

  • @AladdinPersson
    @AladdinPersson  4 years ago +24

    In this video I assume you know the basics of how neural networks work; if you don't, I feel it goes a long way to watch the 3b1b series 3b1b.co/neural-networks. Also, Deeplearning.ai has free lectures available on YouTube, and I recommend checking those out for more in-depth explanations, specifically the videos from course 1 to learn about neural networks. For more specific course recommendations, two great ones that I started with are the ML course and the DL specialization, both by Andrew Ng.
    Links:
    ML Course (free course): bit.ly/3qq20Sx​
    DL Specialization (affiliate): bit.ly/30npNrw​
    Wish you an awesome journey in deep learning :)

    • @cerealpeer
      @cerealpeer 1 year ago

      'eah weaah, eah elloel:
      #chatgpt
      #openai
      import random
      import hashlib

      class Node:
          def __init__(self, public_key, private_key):
              self.public_key = public_key
              self.private_key = private_key
              self.children = []

      class NetworkHierarchy:
          def __init__(self, root_node):
              self.root = root_node

          def add_node(self, parent_node, new_node):
              parent_node.children.append(new_node)

          def expand_network(self, node, num_children):
              for _ in range(num_children):
                  new_public_key = hashlib.sha256(str(random.random()).encode()).hexdigest()
                  new_private_key = hashlib.sha256(str(random.random()).encode()).hexdigest()
                  new_node = Node(new_public_key, new_private_key)
                  self.add_node(node, new_node)

      def encrypt_message(message, session_key, recipient_public_key):
          encrypted_message = message + session_key + recipient_public_key  # Simplified encryption
          return encrypted_message

      def decrypt_message(encrypted_message, recipient_private_key):
          decrypted_message = encrypted_message  # Simplified decryption
          return decrypted_message

      # Initialize root node of the network hierarchy
      root_public_key = hashlib.sha256(str(random.random()).encode()).hexdigest()
      root_private_key = hashlib.sha256(str(random.random()).encode()).hexdigest()
      root_node = Node(root_public_key, root_private_key)
      network = NetworkHierarchy(root_node)

      # Simulate expanding the network
      network.expand_network(root_node, 3)

      while True:
          operation = input("Choose operation: Send / Receive / Expand / Exit: ")
          if operation == "Send":
              recipient_node = random.choice(root_node.children)
              message = input("Enter message: ")
              session_key = hashlib.sha256(str(random.random()).encode()).hexdigest()
              encrypted_message = encrypt_message(message, session_key, recipient_node.public_key)
              print("Message sent:", encrypted_message)
          elif operation == "Receive":
              encrypted_message = input("Enter encrypted message: ")
              recipient_private_key = input("Enter recipient's private key: ")
              decrypted_message = decrypt_message(encrypted_message, recipient_private_key)
              print("Decrypted message:", decrypted_message)
          elif operation == "Expand":
              num_children = int(input("Enter number of children: "))
              network.expand_network(root_node, num_children)
              print("Network expanded")
          elif operation == "Exit":
              break

  • @smileyley
    @smileyley 4 years ago +35

    I love how you explain everything line by line briefly, without digging deep into the math. Just a perfect tutorial on how to create a simple network. Will watch all of your vids!

    • @AladdinPersson
      @AladdinPersson  4 years ago +4

      That means a lot! Thank you so much for the comment 😊

  • @Iffythegreat
    @Iffythegreat 2 years ago +9

    This was a lifesaver for my 4th-year neural networks course. A lot of the other tutorials start out at a much higher level of using PyTorch; this takes it back to the basics. Great video!

  • @beizhou2488
    @beizhou2488 4 years ago +19

    For those who already have some experience with other DL frameworks, such as TensorFlow, this tutorial is probably the best way to get your hands dirty with the transition to the PyTorch framework, giving you the landscape of building a neural network in just one short video.

  • @parthchokhra7298
    @parthchokhra7298 4 years ago +3

    I am following your complete series for my 100 days of ML challenge. This is what I needed. Precise, accurate, and easy to understand.

    • @AladdinPersson
      @AladdinPersson  4 years ago

      Happy that you find them useful!

  • @thecros1076
    @thecros1076 4 years ago +5

    Man... thank you so much! I found whatever I wanted and I am so happy to have found you. Will be watching all of your videos, thank you so much!

    • @AladdinPersson
      @AladdinPersson  4 years ago

      I really appreciate the positive feedback, makes me want to make more of these videos. Thanks!

  • @suyashsingh675
    @suyashsingh675 4 years ago +3

    I love your page bro, appreciate your hustle. Don't stop your flow.

  • @stylish37
    @stylish37 10 months ago +1

    Thank you Aladdin, you are a genie!

  • @AbhayShuklaSilpara
    @AbhayShuklaSilpara 1 year ago +2

    Concise and clear, great tutorial!

  • @maryfrancesgleason9415
    @maryfrancesgleason9415 7 months ago

    Bravo. Very well done. Fastest and best-explained implementation of MNIST. When I coded along in Google Colab, most of the code autocompleted. Worked the first time with no errors. Amazing.

  • @karthik-ex4dm
    @karthik-ex4dm 4 years ago +1

    Clean video... No unnecessary BS... Pure content

    • @AladdinPersson
      @AladdinPersson  4 years ago

      Thank you, I appreciate that! I tried to make the explanations clear and to the point, although when watching this I wish I made the text bigger since it's kinda hard to see :\

    • @karthik-ex4dm
      @karthik-ex4dm 4 years ago

      @@AladdinPersson You're right

    • @AladdinPersson
      @AladdinPersson  4 years ago +1

      @@karthik-ex4dm At least it's improved in newer videos; compare this video to one of my newer ones and I'm a bit ashamed of the quality of this one. I think people wanting to learn about deep learning are committed, and hopefully the video can still be useful ;)

  • @quicano5542
    @quicano5542 1 year ago +1

    This was a great Video, straight to the point, very clear and very helpful, thanks!

  • @balicien
    @balicien 4 years ago

    I shared your tutorials on LinkedIn. Your channel is awesome for a newbie on these subjects. I appreciate you.

    • @AladdinPersson
      @AladdinPersson  4 years ago +1

      Thanks a lot, I appreciate that 🙏

  • @hackercop
    @hackercop 3 years ago

    I'm moving from TensorFlow to PyTorch; this is really helpful, thanks

  • @ADV_UA
    @ADV_UA 3 years ago

    Yey, my first pytorch tutorial is successfully completed! Thank you!

  • @osielvivar8804
    @osielvivar8804 2 years ago

    Fantastic, I followed the blogpost which was excellently written. Thank you!

  • @vidhyat7126
    @vidhyat7126 2 years ago

    Excellent complete walkthrough!

  • @neildutoit5177
    @neildutoit5177 3 years ago +1

    Thanks this was pretty straightforward.

  • @lattellieeeee
    @lattellieeeee 2 years ago

    Clear and concise! Thanks for this video, it helps a lot!!!

  • @dingusagar
    @dingusagar 3 years ago +2

    Nice video. Just one small place where I got confused was the model.eval() and model.train() code. I thought the model was actually getting trained here. I realised after googling that these are just switches for changing the mode to either training or eval.

    • @AladdinPersson
      @AladdinPersson  3 years ago +2

      Exactly right, they change the behavior of certain modules like Dropout or BatchNorm (which weren't used in the video), but nevertheless it's a good habit to get used to toggling eval mode when we evaluate our model
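
      A minimal sketch of that toggling pattern in code, assembled from details mentioned in this thread rather than copied from the video:

          import torch

          def check_accuracy(loader, model, device="cpu"):
              num_correct = 0
              num_samples = 0
              model.eval()                       # switch off training-only behavior (e.g. Dropout)
              with torch.no_grad():              # no gradients needed while evaluating
                  for x, y in loader:
                      x = x.to(device).reshape(x.shape[0], -1)
                      y = y.to(device)
                      scores = model(x)          # shape (batch_size, num_classes)
                      _, predictions = scores.max(1)
                      num_correct += (predictions == y).sum()
                      num_samples += predictions.size(0)
              model.train()                      # put the model back into training mode
              return float(num_correct) / float(num_samples)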

  • @gabriele3168
    @gabriele3168 4 years ago +2

    Wow, a perfect explanation. Thank you

  • @andyli2934
    @andyli2934 4 years ago +6

    The tutorial is great! Could you make your font larger in future videos? It would be much easier to follow.

    • @AladdinPersson
      @AladdinPersson  4 years ago +5

      Yeah if you watch my more recent videos this has been improved and as soon as I get a small amount of motivation I will redo my initial videos to upgrade the quality:)

  • @mostechroom9780
    @mostechroom9780 3 years ago +2

    So I'm currently preparing for my Master's (learning these things without help sucks) and you have such a simple approach to deep learning. Thank you

  • @programming_hut
    @programming_hut 4 years ago +1

    Thanks, you're really good.
    Really quality video.
    I was looking for something like this.

  • @xv0047
    @xv0047 2 years ago

    Exactly what I was looking for. Thank you.

  • @improvementbeyond2974
    @improvementbeyond2974 3 years ago +1

    Great job bro

  • @francomozo6096
    @francomozo6096 3 years ago

    Thank you for your content!! Really helpful!!

  • @nasirrahim5610
    @nasirrahim5610 3 years ago +1

    Great explanation. I have one question though: you did not use a softmax layer to output probabilities for the 10 classes?

  • @randanblabla
    @randanblabla 4 years ago +2

    Hi Aladdin,
    Thank you very much for this helpful playlist!
    Where did you call the forward method?

    • @AladdinPersson
      @AladdinPersson  4 years ago +3

      The forward method is the method that runs when you do model(input_tensor). This is how the __call__ method is set up inside the parent class nn.Module.
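
      A tiny sketch of that mechanism, assuming nothing beyond standard nn.Module behavior:

          import torch
          import torch.nn as nn

          class Tiny(nn.Module):
              def __init__(self):
                  super().__init__()
                  self.fc = nn.Linear(4, 2)

              def forward(self, x):
                  return self.fc(x)

          model = Tiny()
          x = torch.randn(3, 4)
          out = model(x)           # nn.Module.__call__ runs hooks and then calls self.forward(x)
          also = model.forward(x)  # works, but skips the hook machinery, so model(x) is preferred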

    • @shambhaviaggarwal9977
      @shambhaviaggarwal9977 3 years ago

      @@AladdinPersson I had the same doubt. Thanks!

  • @noamills1130
    @noamills1130 2 years ago

    Super helpful, thank you so much!

  • @juleswombat5309
    @juleswombat5309 4 years ago

    Many thanks. That was a nice, clear, brief description of PyTorch.
    The only bit I am still a little confused about is where we have to say .to(device). It looks like just putting the data and the model on it is sufficient to ensure all calculations are done on the GPU?
    It would be nice to have a similar style video on actor-critic models, and how one controls which parts we backpropagate errors through and which parts we don't.

  • @dystopiantanuki
    @dystopiantanuki 4 years ago +1

    Thank you so much for your tutorial! However, I want to ask: in the output layer, the model NN() doesn't seem to have any softmax or sigmoids... so is the softmax actually being implemented inside CrossEntropyLoss()?

    • @AladdinPersson
      @AladdinPersson  4 years ago +2

      Absolutely right! CrossEntropyLoss involves softmax + negative log likelihood :)
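
      A small sketch of that equivalence (my own check, not from the video): raw scores fed to CrossEntropyLoss match log_softmax followed by NLLLoss.

          import torch
          import torch.nn as nn
          import torch.nn.functional as F

          scores = torch.randn(64, 10)                # raw logits, no softmax applied
          targets = torch.randint(0, 10, (64,))

          loss_ce = nn.CrossEntropyLoss()(scores, targets)
          loss_manual = nn.NLLLoss()(F.log_softmax(scores, dim=1), targets)
          print(torch.isclose(loss_ce, loss_manual))  # tensor(True)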

    • @dystopiantanuki
      @dystopiantanuki 4 years ago

      @@AladdinPersson oh I see, thanks again !

  • @thetsairoutine
    @thetsairoutine 2 years ago +1

    thank you for explaining this

  • @atharvatekawade9002
    @atharvatekawade9002 2 years ago

    Should we not use an activation function like softmax at the end?

  • @chaoxi8966
    @chaoxi8966 2 years ago

    Hi, thanks for the concise and vivid lecture. Small question: the model takes an input size of 784, but in the training phase, data with shape [64, 784] is given. Why?

    • @kavinarasu6610
      @kavinarasu6610 2 years ago

      The 64 is the batch_size, and the model takes in a batch of inputs instead of a single input, hence (64, 784). If the model took a single input per iteration, in the case of the MNIST dataset it would have to do 60,000 iterations per epoch, whereas with 64 as the batch_size we only do 60000/64 iterations per epoch.
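
      A quick sketch of those shapes, assuming the standard torchvision MNIST loader this thread is about (the root path is arbitrary):

          import torch
          from torch.utils.data import DataLoader
          from torchvision import datasets, transforms

          train_dataset = datasets.MNIST(root="dataset/", train=True,
                                         transform=transforms.ToTensor(), download=True)
          train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)

          data, targets = next(iter(train_loader))
          print(data.shape)                             # torch.Size([64, 1, 28, 28])
          print(data.reshape(data.shape[0], -1).shape)  # torch.Size([64, 784]) -> what the model sees
          print(len(train_loader))                      # ceil(60000 / 64) = 938 batches per epoch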

  • @dan-c0d3
    @dan-c0d3 1 year ago

    Hi, thanks for the great tutorial. One query: at 00:14:02 you have reshaped to a single dimension. Do we always need to pass the input in a single dimension? What would the problem be if we don't reshape the data? I want to understand the concept behind this.

  • @1UniverseGames
    @1UniverseGames 3 years ago

    May I know what you used in Spyder that displays the syntax for all the PyTorch code when you write a few commands? Looks very interactive. Thank you

  • @trioduoxx
    @trioduoxx 4 years ago

    Great overview, thanks

  • @sirawitmahanin5679
    @sirawitmahanin5679 3 years ago

    Why did we put both model.eval() and model.train() in check_accuracy()?

  • @yanxu4968
    @yanxu4968 3 years ago +1

    Firstly, a very exciting and concise tutorial! I really enjoy watching your videos.
    Secondly, is there any way to predict class probability? E.g. max(softmax(dim=1)(scores))?
    Finally, is there any way to estimate prediction confidence (or variance)? So that if the model's prediction has low confidence, we could just output "not sure".

  • @mirceaandrei9633
    @mirceaandrei9633 2 years ago

    Hi, I do not understand, in the training part, how cross entropy is applied to all the scores, which contain 10 scores per example. Should it not have been applied only to the maximum of those 10, which is the prediction? Thank you!

  • @subi_rocks
    @subi_rocks 2 years ago

    Thanks for the tutorials. Just a question: for prediction, why are you not using argmax(dim=1)? max(1) should return the logits and not the index, right?

  • @habiburrehman00766
    @habiburrehman00766 4 years ago +1

    Hi, I have a problem here:
    scores = model(x)
    _, predictions = scores.max(1)
    What is the 1 doing here? I changed the value to 0, and to 2, 3. It gives different output. It also gives an index error.

    • @AladdinPersson
      @AladdinPersson  4 years ago +2

      The output from our model will be (batch_size, num_classes), where we have a probability of each training example belonging to each class. We want to take the class it predicts with the highest probability; to do this we take the argmax over the num_classes dimension, which is index 1. Here we use scores.max(1) rather than argmax because it also returns the argmax index. Using index 2 or 3 will give you an error because there aren't that many dimensions in the output tensor.
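
      A small sketch of what .max(1) returns, using a hand-made scores tensor (not from the video):

          import torch

          scores = torch.tensor([[0.1, 2.0, -1.0],    # pretend batch of 2 examples, 3 classes
                                 [1.5, 0.3,  0.2]])
          values, predictions = scores.max(1)  # max over the class dimension (dim=1)
          print(values)       # tensor([2.0000, 1.5000]) -> the highest score per example
          print(predictions)  # tensor([1, 0])           -> the class index of that score
          # scores.argmax(1) would give just the indices; max(1) gives both.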

    • @habiburrehman00766
      @habiburrehman00766 4 years ago

      @@AladdinPersson thanks for your reply

    • @akshaygoel2184
      @akshaygoel2184 4 years ago

      @@AladdinPersson Can you explain why scores.max(1) will return the argmax index?
      I would think it returns the value of the highest probability along the second dimension. Thanks for your videos!

    • @akshaygoel2184
      @akshaygoel2184 4 years ago

      Ah, nevermind!
      torch.max returns a tuple (max_values, indices)
      I was thinking about numpy

  • @Wanderlust1342
    @Wanderlust1342 3 years ago

    love the explanation, will share

  • @NoahElRhandour
    @NoahElRhandour 2 years ago

    thank you SO much

  • @joshlazor6208
    @joshlazor6208 4 years ago +1

    What is the input_size doing here? Is this parameter the number of inputs that the neural network is taking in? How do we determine this number? Is it arbitrary? Thanks, Aladdin!

    • @AladdinPersson
      @AladdinPersson  4 years ago +1

      The number of input features will be the number of pixels in the image, and it's also why we are doing x.reshape: we have a grid of pixels and we want everything to be a single vector, so we unroll the 28x28 pixels into a 784-long vector (28*28 = 784). It is not arbitrary and is dataset dependent, so if you use another dataset you need to check how large the images are and calculate input_size = number_of_channels * pixel_height * pixel_width

    • @joshlazor6208
      @joshlazor6208 4 years ago +1

      @@AladdinPersson ok, thank you for your explanation! What would happen if I had different image dimensions, say I had 10 images that were 50*50 and 10 that were 30*30. How would I go about the number of input features

    • @AladdinPersson
      @AladdinPersson  4 years ago +1

      @Josh Lazor Then it won't work; all images have to be the same shape, so you would need to resize them all to 30x30 in that case, and the input features would be 900.

    • @joshlazor6208
      @joshlazor6208 4 years ago +1

      @@AladdinPersson Ok good to know. Looking through your video, your predictions and num_correct and num_samples confused me... Especially the scores.max(1) and (predictions == y).sum(). Could you explain that a little more?

    • @RaviAnnaswamy
      @RaviAnnaswamy 4 years ago +1

      @@joshlazor6208 The scores data structure has the output from the network for each of the 64 examples in a minibatch, one score per class (64x10).
      We want to know the element with the maximum score. Since the class scores are along index 1 (the example id is index 0), scores.max(1) returns, for each of the 64 examples, the max value and the class with that max value (the winning prediction).
      Regarding the second expression, it is the count of how many predictions matched the targets (y).
      Hope this helps.

  • @user-or7ji5hv8y
    @user-or7ji5hv8y 4 years ago +1

    I think if you can make your fonts bigger, that would be a big help

    • @AladdinPersson
      @AladdinPersson  4 years ago

      Thanks for the feedback, in my newer videos you'll see that the font is much larger :)

  • @rubinluitel158
    @rubinluitel158 4 years ago

    How does loss.backward() know which model it is doing backpropagation for?
    Is it because "loss = criterion(scores, targets)" knows that scores came from some specific model, and loss.backward() does backpropagation for whatever model the scores came from?
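
    A minimal sketch of why that works (my own illustration, not from the video): autograd records which tensors produced scores, so loss.backward() fills in the .grad of that particular model's parameters.

        import torch
        import torch.nn as nn

        model = nn.Linear(4, 2)
        criterion = nn.CrossEntropyLoss()

        x = torch.randn(8, 4)
        targets = torch.randint(0, 2, (8,))

        scores = model(x)               # scores carries a graph back to model's weights
        loss = criterion(scores, targets)
        loss.backward()                 # gradients flow back along that graph
        print(model.weight.grad.shape)  # torch.Size([2, 4]) -> this model received the gradients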

  • @somayehseifi8269
    @somayehseifi8269 2 years ago

    Should we not have softmax in the last layer?

  • @mohdkashif7295
    @mohdkashif7295 3 years ago

    When you pass the test_loader through the check_accuracy function it will not be a true evaluation, as the model is still set to train() while on the test dataset. Am I saying it right?

  • @asmasulaiman6175
    @asmasulaiman6175 3 years ago

    VERY HELPFUL video, well explained. Thank you for sharing... Just wondering, at 14:00, what if it was a colored image? How would you reshape it?

    • @applepie7282
      @applepie7282 1 year ago

      (64, 3, 28, 28) for rgb

  • @amarjeetkushwaha4258
    @amarjeetkushwaha4258 3 years ago

    I am unable to download the MNIST data; I get a 'urllib\request.py", line 502, in _call_chain' error.

  • @benjaminfindon5028
    @benjaminfindon5028 2 years ago

    Can deep learning train on annotated examples? E.g. there are no image .jpg files, there's just a csv file with data in it (numbers and categories). Can a deep NN train on this, or is it necessary to have the images as well?

  • @HungDuong-dt3lg
    @HungDuong-dt3lg 2 years ago

    Can someone explain to me why we need model.train() at line 86, in the definition of checking accuracy? Thank you so much!

    • @xatirig
      @xatirig 2 years ago

      Had the same question earlier. It turns out, model.train() and model.eval() are like switches. With model.eval() we deactivate regularization methods such as Dropout and Batch Normalization which are only valuable during training and turn them back on with model.train().

  • @freeideas
    @freeideas 4 years ago

    I wonder why, when I follow along typing the code into my own pytorch/anaconda -- which I installed on the first video, I don't see the pop-up help text (4:32, for example), that I see in the video. Thank you for this wonderful tutorial, by the way.

    • @freeideas
      @freeideas 4 years ago

      Oh, I see; this is a pycharm "professional" feature.

    • @freeideas
      @freeideas 4 years ago

      Tried the same in vscode, and it pops-up the doc for free.

    • @AladdinPersson
      @AladdinPersson  4 years ago

      What I used in this video is Spyder but this should be available on most IDEs. I don't think this is a professional feature. Right now though I have completely removed those suggestions etc so if you watch any of my more recent videos they wont be there, personally I can find them distracting/annoying sometimes.

  • @bluedade2100
    @bluedade2100 1 year ago

    How do we use/call the forward function? Could someone explain?

  • @fenfenzhu6142
    @fenfenzhu6142 3 years ago

    RandLANet is preferable,thank you so much

  • @beizhou2488
    @beizhou2488 4 years ago

    Is batch matrix multiplication the only matrix multiplication method used in a neural network?

    • @AladdinPersson
      @AladdinPersson  4 years ago +1

      Just "normal" matrix multiplication is enough to build a neural network from scratch

    • @beizhou2488
      @beizhou2488 4 years ago

      @@AladdinPersson However, in effect, a neural network is more computationally efficient when built upon batch matrix multiplication and trained using a GPU, compared to normal matrix multiplication. Is my understanding correct? Thank you.
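
      A tiny sketch of that point (my own example, sizes borrowed from this thread): one ordinary matrix multiplication already handles a whole batch, because the batch dimension simply becomes the rows of the input matrix.

          import torch

          x = torch.randn(64, 784)   # a batch of 64 flattened MNIST images
          W = torch.randn(50, 784)   # weights of a 784 -> 50 linear layer
          b = torch.randn(50)

          hidden = x @ W.t() + b     # plain matmul, shape (64, 50); no torch.bmm needed
          print(hidden.shape)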

  • @Geoters
    @Geoters 1 year ago

    I am confused, why would you call model.train() in check_accuracy method??

    • @AladdinPersson
      @AladdinPersson  1 year ago

      I called model.eval() in the beginning, and then returned it back to model.train() when all the compute was complete

  • @mundeepcool
    @mundeepcool 4 years ago

    Do we need to use model.eval() when testing? I removed the model.eval() line and the accuracy function still worked. I'm confused as to what model.eval() actually does.

    • @AladdinPersson
      @AladdinPersson  4 years ago

      Some nn modules have different behavior during testing/training, for example batchnorm and dropout, but in our case we just built a very simple neural network where the behavior is the same in both modes

    • @yanxu4968
      @yanxu4968 3 years ago

      e.g. dropout during training has a high variance, which we do not want during testing

  • @teddybest02
    @teddybest02 3 years ago

    Thank you !!

  • @sepgorut2492
    @sepgorut2492 4 years ago

    It looks like you're using Spyder. I tried the exact piece of code as you have at 6:52 and highlighted the snippet but just got an error of size mismatch. I even tried the exact same code in Spyder rather than PyCharm as I usually use and couldn't get the snippet to run.

  • @楼心
    @楼心 4 years ago

    Thank you for your video! I ran into a problem when running the code: "with torch.no_grad:
    AttributeError: __enter__". Can anyone tell me how to solve it? Many thanks!
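
    The likely cause of that error (my guess from the message, not confirmed in the thread) is using the torch.no_grad class itself instead of creating an instance; it needs parentheses:

        import torch

        x = torch.randn(3, requires_grad=True)

        # with torch.no_grad:    # AttributeError: __enter__ -- the class object itself is not a context manager
        with torch.no_grad():    # note the (): this creates the context manager
            y = (x * 2).sum()
        print(y.requires_grad)   # False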

  • @ShashankShuklaBIS
    @ShashankShuklaBIS 4 years ago

    Hey Aladdin, I have the MNIST dataset already downloaded and split into test.csv and train.csv [as provided by default in Google Colab], and I used pandas.read_csv() to read/load them. How do I further transform the train data with ToTensor()? I mean, I used pandas.read_csv() instead of datasets.MNIST(), so how do I use a transform on that?

    • @AladdinPersson
      @AladdinPersson  4 years ago +1

      You could check out my video on loading custom datasets (Images), the neural network video is much more focused on getting started and implementing a basic neural net in pytorch
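
      For the csv case specifically, a rough sketch of one option (my own illustration; it assumes a Kaggle-style layout with the label in the first column and 784 pixel columns after it, which may not match your files):

          import pandas as pd
          import torch
          from torch.utils.data import Dataset, DataLoader

          class MNISTFromCSV(Dataset):
              def __init__(self, csv_path):
                  df = pd.read_csv(csv_path)
                  self.y = torch.tensor(df.iloc[:, 0].values, dtype=torch.int64)
                  self.x = torch.tensor(df.iloc[:, 1:].values, dtype=torch.float32) / 255.0  # same scaling ToTensor() would do

              def __len__(self):
                  return len(self.y)

              def __getitem__(self, idx):
                  return self.x[idx], self.y[idx]

          # train_loader = DataLoader(MNISTFromCSV("train.csv"), batch_size=64, shuffle=True)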

  • @abdulkareemabdullahhasanna6887
    @abdulkareemabdullahhasanna6887 3 years ago

    What should we use if we want to use our own dataset instead of MNIST?

    • @AladdinPersson
      @AladdinPersson  3 years ago

      You can check out my video on how to use a Custom dataset which would allow you to use your own dataset

  • @dirkneuhauser8213
    @dirkneuhauser8213 4 years ago

    I think at 16:00 you got things a little bit twisted:
    loss.backward() calculates the gradients with respect to the parameters
    and optimizer.step() actually updates your weights/parameters
    (see discuss.pytorch.org/t/how-are-optimizer-step-and-loss-backward-related/7350)

    • @AladdinPersson
      @AladdinPersson  4 years ago +1

      When watching it again I believe what I said was correct, but it was quite confusing why I marked loss.backward when I was talking about optimizer.step

    • @dirkneuhauser8213
      @dirkneuhauser8213 4 years ago

      @@AladdinPersson Ah I see! Thank you for the vids btw!
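
      A compact sketch of that ordering inside one training step (the standard PyTorch pattern, not lifted verbatim from the video; the learning rate is just a placeholder):

          import torch
          import torch.nn as nn
          import torch.optim as optim

          model = nn.Linear(784, 10)
          criterion = nn.CrossEntropyLoss()
          optimizer = optim.Adam(model.parameters(), lr=1e-3)

          data = torch.randn(64, 784)
          targets = torch.randint(0, 10, (64,))

          scores = model(data)
          loss = criterion(scores, targets)

          optimizer.zero_grad()   # clear gradients left over from the previous batch
          loss.backward()         # compute gradients w.r.t. the parameters
          optimizer.step()        # use those gradients to update the weights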

  • @marcel2711
    @marcel2711 3 years ago

    What is 50??? The number of neurons in the hidden layer for this feed-forward NN?
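
    For reference, a sketch of the kind of model being discussed (reconstructed from details in this thread, so it may differ from the video): 50 is simply the number of hidden units between the 784 inputs and the 10 output classes.

        import torch.nn as nn
        import torch.nn.functional as F

        class NN(nn.Module):
            def __init__(self, input_size=784, num_classes=10):
                super().__init__()
                self.fc1 = nn.Linear(input_size, 50)   # 50 = width of the single hidden layer
                self.fc2 = nn.Linear(50, num_classes)

            def forward(self, x):
                x = F.relu(self.fc1(x))
                return self.fc2(x)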

  • @buggi666
    @buggi666 3 years ago

    How would you compare this to the same approach in Keras? I know that PyTorch seems to have much more flexibility; however, this example would be only a couple of lines in Keras... Is the advantage for more complicated networks/approaches? Thanks for the awesome channel!

    • @AladdinPersson
      @AladdinPersson  3 years ago +1

      It's a little more verbose but once you get used to it I think you'll prefer it because of the clarity it provides. There's nothing really hidden from you

    • @buggi666
      @buggi666 3 years ago

      @@AladdinPersson Yes also the integration of keras into tensorflow has a lot of unclear legacy baggage!

  • @anurajmaurya7256
    @anurajmaurya7256 3 years ago

    I get a "module object is not callable" error in the data loader, help me

  • @anshulthakur3806
    @anshulthakur3806 3 years ago

    Thanks dude. This is really amazing and precise. 😃👍

  • @shaikrasool1316
    @shaikrasool1316 4 years ago

    Everything is awesome... please use a white background in your IDE

    • @AladdinPersson
      @AladdinPersson  4 years ago

      You want white IDE background? Could you check out one of my recent videos (with black background) and see if you think it's still hard to see?

  • @yannickpezeu3419
    @yannickpezeu3419 3 years ago

    Thanks !

  • @arjunsigdel8070
    @arjunsigdel8070 4 years ago

    I have a problem with [from torch.utils.data import Dataloader]. It says 'cannot import Dataloader from torch.utils.data'

    • @AladdinPersson
      @AladdinPersson  4 years ago +1

      Think you should have DataLoader

    • @arjunsigdel8070
      @arjunsigdel8070 4 years ago

      @@AladdinPersson thank you. I followed similar step but my model got accuracy of just 7.04. What may be the reason?

  • @FI-BB
    @FI-BB 4 years ago

    Hi, thanks for all your videos. I am new to the domain and I want to start learning deep learning. I have some foundations in ML. I want to buy a laptop but don't know which one; I prefer a laptop for portability rather than a desktop. I am planning to buy an external keyboard and screen. By the way, which keyboard do you use? And it would be nice if you could make a video on your desk setup :)

  • @VeliGeliyor
    @VeliGeliyor 14 days ago

    Can you teach DETR, Swin Transformer, SSD, Co-DETR with code implementation, pleaseeee... 😊

  • @1UniverseGames
    @1UniverseGames 3 years ago

    I am confused about the mean aggregator and the pooling aggregator. Can anyone share any resources to understand this topic? Also, I found the term "burnin"; would anyone like to explain this to me please? Thank you

  • @yoloswag6242
    @yoloswag6242 3 years ago

    Thanks but please zoom into the text next time. Half the screen is blank.

  • @Murmur1131
    @Murmur1131 3 years ago

    First of all, thanks! However, a more explanatory video, maybe 2x the length, would have been even more helpful!

    • @AladdinPersson
      @AladdinPersson  3 years ago

      What would you like to have been explained in more depth?

  • @jitendradhiman3460
    @jitendradhiman3460 4 years ago

    It would have been nice if you could have increased the font-size a bit.

    • @AladdinPersson
      @AladdinPersson  4 years ago

      I agree, it was one of my first videos, and I think my more recent ones are much better :) Actually thinking about re-doing the Neural Network Example video because the font is too small: do you think it's that bad or should I keep it as is?

    • @bonalareddy5339
      @bonalareddy5339 4 years ago

      @@AladdinPersson it would be great if you can re-do it, if you have the time.

    • @AladdinPersson
      @AladdinPersson  4 years ago

      @@bonalareddy5339 Yeah will do

  • @小火箭呜呜
    @小火箭呜呜 3 years ago

    Where is the data? How do I get it?

    • @mrinalsahu7564
      @mrinalsahu7564 3 years ago

      I also couldn't find the train/test images anywhere in his github repos

  • @rajukurapati1639
    @rajukurapati1639 4 years ago

    Everything is really good except the FONT SIZE, it is really irritating. Are all the other videos the same way for this playlist?

    • @AladdinPersson
      @AladdinPersson  4 years ago +1

      I'll re-record this video to make it better, maybe a few videos are like that but check out my more recent ones and you'll hopefully feel that they are improved in this regard :)

  • @gokulakrishnancandassamy4995
    @gokulakrishnancandassamy4995 3 years ago

    Hi Mr. Aladdin! Absolutely loved your tutorial. Thank you so much and keep up the good work! #Aladdin_Persson

  • @mustafabuyuk6425
    @mustafabuyuk6425 4 years ago

    Which editor is this

    • @AladdinPersson
      @AladdinPersson  4 years ago

      For this video I was using Spyder, although I would recommend using PyCharm (check out my video on setting up a deep learning environment if you're curious about how to set that up).

  • @bipanbhatta2736
    @bipanbhatta2736 4 years ago

    At 14:15, I think there is an issue. You mean to flatten it but it's not flattened. I think it should have been:
    data = data.reshape(-1, data.shape[0]) OR data.view(data.shape[0], -1). Same thing for checking accuracy too.

    • @AladdinPersson
      @AladdinPersson  3 years ago +2

      I think what I showed should work; perhaps you could elaborate on why you think it shouldn't? The way I think it works is that when we do data.reshape(data.shape[0], -1) it keeps the first dimension, which is the batch size, but flattens all the other dimensions, which in our case is 28*28*1 = 784. Also, if it hadn't worked I believe we would have received an error
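
      A quick sketch of that reshape on dummy data (not from the video):

          import torch

          data = torch.randn(64, 1, 28, 28)        # a batch as it comes out of the MNIST loader
          flat = data.reshape(data.shape[0], -1)   # keep the batch dim, flatten the rest
          print(flat.shape)                        # torch.Size([64, 784])

          wrong = data.reshape(-1, data.shape[0])  # the suggestion above: shape (784, 64), rows are 64-pixel chunks, not images
          print(wrong.shape)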

  • @sahil-7473
    @sahil-7473 4 years ago

    Hello Sir! I started learning PyTorch from your playlist series after learning the foundations of ML and DL. It's a great tutorial and I'm following it step by step :D
    Just one question: in the check_accuracy function, what are model.eval() and model.train() doing? I tried to play around with this; uncommenting and commenting the eval() and train() lines gives the same result. Kindly let me know, do these two lines have any impact?

    • @AladdinPersson
      @AladdinPersson  4 years ago +3

      I'm happy to hear you find it useful, in terms of model.eval() and model.train(), I'm going to assume you know what Dropout and/or BatchNorm is to give an example (otherwise just look at Andrew Ng lecture on it). So during training we want to drop let's say 50% of the nodes following the method of dropout but during testing/evaluating the model we obviously want to utilize the entire power of our model, i.e not drop any nodes. So effectively there are methods like Dropout and BatchNorm where the behavior changes during training vs test and very simply in PyTorch we will let the model know when we want to train it or evaluate it and do model.train() to set it in training mode, or model.eval() to set it in evaluation mode. In this specific video since it's very basic and not using any of these methods it will not impact the performance of the model

    • @sahil-7473
      @sahil-7473 4 years ago

      Aladdin Persson Thanks! I got it :D

  • @ehza
    @ehza 3 years ago

    Nice

  • @shambhaviaggarwal9977
    @shambhaviaggarwal9977 3 years ago

    I am getting 67% accuracy. What am I doing wrong? :/

  • @ethapel5782
    @ethapel5782 3 years ago

    very nice! How can I apply this to generate text?

  • @JamesBond-ux1uo
    @JamesBond-ux1uo 3 years ago

    The explanation was good, but the video quality was poor; even at 480p I can barely see the content on screen

  • @salarghaffarian4914
    @salarghaffarian4914 2 years ago

    1000000*LIKE

  • @ragibshahriar187
    @ragibshahriar187 3 years ago

    Great intro. Font too small.

  • @momotarodadumpling4065
    @momotarodadumpling4065 several months ago

    Sorry to say but I didn't understand anything you explained. Maybe it's because I'm a beginner 😅