PyTorch Image Captioning Tutorial

  • Published Oct 27, 2024

Comments • 112

  • @백이음 4 years ago +4

    u r such a great engineer!
    I found this vid sooo useful!!
    Thanks!!!!

  • @ashkankhademian600 1 year ago

    How is it that you are so good at explaining?
    Keep up the good work champ.

  • @nunenuh 4 years ago +1

    thank you very much for your videos, please continue your work, many people need your videos

  • @teetanrobotics5363 3 years ago +2

    one of the best channels evaaaaa

  • @Bobobhehe 1 year ago

    Awesome complete tutorial, thank you.

  • @rohinim7707 3 years ago +3

    Amazing tutorial!!
    Can we do it using a transformer instead of an LSTM?

    • @09-muneebunnabi42 2 years ago

      Did you find an answer? I'm also searching for the same thing.

  • @oskarjung6738 1 year ago

    That was a very Aladdin tutorial, thank you!

  • @hadjdaoudmomo9534 2 years ago

    Excellent explanation, thank you 👍

  • @ZobeirRaisi 4 years ago +1

    Thanks for the great tutorial :)

  • @Ihsan_almohsin 2 years ago

    that was super helpful man, thanks

  • @НиколайНовичков-е1э 3 years ago

    Thank you :) Excellent video!

  • @HARIS-q3n 2 months ago

    Since you feed the feature vector at timestep 0, at inference time we also only feed the feature vector at timestep 0; we don't have to provide the start token in the test phase.
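
    For reference, greedy decoding along those lines looks roughly like this (a minimal sketch; the encoderCNN/decoderRNN attributes and the vocabulary's itos mapping follow the video's naming and are assumptions, not verbatim repo code):

        def caption_image(self, image, vocabulary, max_length=50):
            # inference: the image feature is the input at timestep 0; no <SOS> is fed
            result_caption = []
            with torch.no_grad():
                x = self.encoderCNN(image).unsqueeze(0)  # (1, batch=1, embed_size)
                states = None
                for _ in range(max_length):
                    hiddens, states = self.decoderRNN.lstm(x, states)
                    output = self.decoderRNN.linear(hiddens.squeeze(0))
                    predicted = output.argmax(1)
                    result_caption.append(predicted.item())
                    # the predicted word becomes the input at the next timestep
                    x = self.decoderRNN.embed(predicted).unsqueeze(0)
                    if vocabulary.itos[predicted.item()] == "<EOS>":
                        break
            return [vocabulary.itos[idx] for idx in result_caption]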

  • @rahulseetharaman4525 2 years ago +3

    Hi Aladdin, thanks for the awesome tutorials.
    Could you please elaborate on 27:51, this statement:
    outputs = model(imgs, captions[:-1])
    Why are we ignoring the last row? The last row would mostly contain padded characters, and very few EOS indexes. Could you please explain how ignoring the last row works in this context?
    Thanks

    • @saurabhvarshneya4639 1 year ago

      Maybe it's very late to reply, but just for completeness: in the code, a feature from a CNN model is used as the first "word" of the input sequence to the LSTM. This increases the length of the LSTM's input and output sequences by 1. The cross-entropy loss will not work unless either the last word of the output sequence is ignored (output[:-1]) or, as done in the code, the input sequence length is reduced by 1 (captions[:-1]).
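
      In shapes (sequence-first tensors, as in the video; a sketch under those assumptions, not verbatim repo code):

          # captions (seq_len, batch): [<SOS>, a, dog, runs, <EOS>]
          # LSTM input:  [img_feature, <SOS>, a, dog, runs]  = feature + captions[:-1]
          # LSTM target: [<SOS>, a, dog, runs, <EOS>]        = captions, same length
          outputs = model(imgs, captions[:-1])  # (seq_len, batch, vocab_size)
          loss = criterion(outputs.reshape(-1, outputs.shape[2]), captions.reshape(-1))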

    • @Glitch40417 1 year ago

      @@saurabhvarshneya4639 Hey, did you get a good model out of this?
      If so, how many epochs did you run it for?
      The thing is, the model I built isn't making any difference; there's no major change in the error even after about 10 epochs of training in print_examples()

  • @soumyajahagirdar6708 4 years ago +3

    can we have a demo on visual question generation also?

    • @AladdinPersson 4 years ago +1

      I'm not familiar with visual question generation, will do some research on that! :)

  • @garikhakobyan3013 3 years ago

    looking forward to new videos. awesome!

  • @MatanFainzilber 10 months ago

    Thanks a lot! One important question:
    In the training loop the loss is calculated from the scores and the captions, which are the target.
    There is no shifting of the target captions to the right. Without that, how does the model still learn to predict the next word? Is there an internal PyTorch method that does it implicitly? I tried to look, and I don't understand how the loss can be calculated in a way such that the model learns to predict the next word.

  • @NutSorting 1 year ago

    Awesome tutorial, followed it till the end. I have a question: where do we split the training and test sets, and how, since there is both image data and caption data? Can you help me with that?

  • @muhammadzubairbaloch3224 4 years ago

    Very good work. Please make some videos on medical imaging. Thanks

    • @tawheedrony 4 years ago

      I am also interested in it

  • @ashishbhatnagar9590 4 years ago

    very nice tutorial. Awesome

  • @junhuajlake4119 4 years ago +1

    OK, I have found it. Excellent PyTorch tutorial.

    • @youssef42466 3 years ago

      Where can I find it, please?

  • @vansadiakartik 4 years ago +2

    Hey, thanks for this. These videos make it so much easier, as well as giving validation. One quick question though: when concatenating the features from the CNN to the embedding, we are actually adding one more timestep in front of the caption embeddings. Does this mean that at timestep 0 the LSTM has the image features as input, at timestep 1 the LSTM has token embeddings as input, and so on?

    • @AladdinPersson 4 years ago

      Yeah you're exactly right! Then we just compare those outputs to the full correct captions (including start token and end token).
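
      That concatenation is the heart of the decoder's forward pass; roughly (a sketch in the sequence-first layout the video uses, not verbatim repo code):

          def forward(self, features, captions):
              # features: (batch, embed_size); captions: (caption_len, batch)
              embeddings = self.dropout(self.embed(captions))  # (caption_len, batch, embed_size)
              # prepend the image feature as timestep 0
              embeddings = torch.cat((features.unsqueeze(0), embeddings), dim=0)
              hiddens, _ = self.lstm(embeddings)
              return self.linear(hiddens)  # (1 + caption_len, batch, vocab_size)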

  • @yasserahmed2781 3 years ago +1

    Shouldn't the concat and unsqueezing happen on dim=1? The output of the fc layer is of shape (batch_size, embed_size), and the shape of the embedded captions, I'm assuming, is supposed to be (batch_size, num_words, embed_size). Unsqueezing features on dim 0 results in (1, batch_size, embed_size), which cannot be concatenated with the embedded captions on dim 0. Am I missing something?

    • @mohdkashif7295 3 years ago +1

      In his data augmentation video he generated the caption tensor with shape (seq_len, batch_size); that's why he concats at dim=0. Hope this clarifies your doubt

  • @AIRobotica 3 years ago

    You are Excellent... Thanks a lot...

  • @gyanratna7357 1 year ago +1

    How to get the datasets?

  • @mkamp 4 years ago +1

    Aladdin, that is an excellent video. Very easy to follow along.
    You are setting which parameters are trainable in the forward function. Wouldn't this mean it is set again and again on each forward pass? I am wondering if you have thoughts on putting it into the constructor or into the configuration of the optimizer?

    • @AladdinPersson 4 years ago +1

      Thank you for the comment!
      I definitely understand your point, and I moved the requires_grad setting out of the forward function and into the train function, after initializing the model. I initially thought the speed improvements would be greater than they were: my tests show that the forward pass of the encoderCNN runs 1.6% faster with the change, although this is definitely an improvement. I updated the code with this change on GitHub.

    • @mkamp 4 years ago

      Aladdin Persson, just to clarify: my intention was not to improve performance, but conceptual clarity. As the parameters do not change between forward passes (they are not dependent on the forward-pass input), I would maintain them separately to communicate this to the reader. Allocating it with the training code, as you suggest, makes sense to me.
      Keep it up.

    • @AladdinPersson 4 years ago +1

      Yeah, it definitely makes things cleaner, and that's the primary goal for me. That it would be computationally inefficient just popped into my mind. Again, I appreciate your feedback and you taking the time to comment! :)
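
      Concretely, the change discussed above amounts to something like this right after building the model (a sketch; the train_CNN flag follows the video, the attribute path is an assumption):

          # fine-tune only the replaced fc layer unless train_CNN is True
          for name, param in model.encoderCNN.inception.named_parameters():
              if "fc.weight" in name or "fc.bias" in name:
                  param.requires_grad = True
              else:
                  param.requires_grad = train_CNN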

  • @yashchoudhari1613 4 years ago +5

    While training you took the output of the LSTM only once, but while testing you used a for loop to generate the whole sentence. Why didn't you do the same thing while training? Can you please explain?
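
    (This is teacher forcing: during training the ground-truth caption is already known, so the whole sequence can go through the LSTM in one call, while at inference each predicted word must be fed back in, hence the loop. A sketch of the contrast, assuming the names from the video:)

        outputs = model(imgs, captions[:-1])               # training: one pass, ground truth as input
        caption = model.caption_image(img, dataset.vocab)  # inference: loops internally until <EOS>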

  • @junhuajlake4119 4 years ago

    It can be run successfully. Thanks

  • @aboalifan123 3 years ago

    Hi Aladdin, thanks so much for this awesome series of videos. Could you please explain how to use BERT instead of the RNN in this model? Thanks in advance

  • @rishadkt9451 2 years ago

    In the forward function of the decoder, you are giving the output from the LSTM, of shape (batch, seq, hidden), directly to the linear layer. I'm confused... doesn't a linear layer expect a flat tensor?
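
    (For what it's worth, nn.Linear is applied over the last dimension, so a 3-D input works directly; a minimal sketch:)

        linear = nn.Linear(hidden_size, vocab_size)
        outputs = linear(hiddens)  # (seq_len, batch, hidden) -> (seq_len, batch, vocab_size)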

  • @kenyanow8274 1 year ago

    Hi Aladdin, thanks for the video and GitHub link. I've gone through your code and entered it into Jupyter. The program gets through just about everything, and then right before it trains it stops and gives me the following error message: "ValueError: too many values to unpack (expected 2)". I'm really at a loss here. Just wondering if you could provide a recommendation? I know this has happened to a few other people, but it's odd that it's not a universal issue. I am using my own dataset. Thanks a lot in advance.

  • @HARIS-q3n 2 months ago

    I have a question: if we are training the model in batches, then we cannot use the logic of breaking the loop when it predicts the end token, since the end-token position may vary for each caption within the batch. So what is the solution for that?
    Thanks
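
    (In batched training you don't break on <EOS> at all: captions are padded to a common length and the padded positions are masked out of the loss, e.g. with ignore_index; the stoi lookup below assumes the video's Vocabulary class:)

        pad_idx = dataset.vocab.stoi["<PAD>"]
        criterion = nn.CrossEntropyLoss(ignore_index=pad_idx)  # padded positions contribute no loss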

  • @verakorzhova1443 3 years ago

    Great tutorial!!! But how do you save the model?
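
    (Standard PyTorch checkpointing covers this; a minimal sketch:)

        checkpoint = {"state_dict": model.state_dict(), "optimizer": optimizer.state_dict()}
        torch.save(checkpoint, "my_checkpoint.pth.tar")
        # and to restore:
        checkpoint = torch.load("my_checkpoint.pth.tar")
        model.load_state_dict(checkpoint["state_dict"])
        optimizer.load_state_dict(checkpoint["optimizer"])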

  • @MrRohitpatwa 4 years ago +1

    Excellent video. Very well explained. Can you tell us what GPUs you used to train this model and how much time it took?
    Also, is your code open on git?

    • @AladdinPersson 4 years ago +2

      I only trained for a couple of hours on a 1060 and the model was very small (256 hidden & embed size, with 1 layer in the LSTM). Performance can definitely be made better; the goal of the video was only to convey the underlying ideas of image captioning. Everything is uploaded on GitHub: github.com/AladdinPerzon/Machine-Learning-Collection

  • @tarzanabcd1234 4 years ago +2

    Hi Aladdin, thanks for the excellent tutorials. One question though: in the Encoder code, wouldn't it be better to set requires_grad for the layers in the __init__() method instead of the forward() method? I guess this assignment is required only once, during initialisation. You are anyway getting the train_CNN value in the __init__() method and not in the forward() method.

    • @AladdinPersson 4 years ago +2

      Yeah, you're right about that; someone else brought this up as well, and I updated the GitHub code shortly after :)

  • @Moneykinda5295 2 years ago

    YOU'RE AWESOME

  • @Glitch40417 1 year ago

    Hey, did anyone get a good model?
    I ran about 40 epochs and print_examples gives me the same answer again and again. If anyone did get a good model, please reply with how many epochs you ran to get it.
    BTW awesome video, really helpful

    • @osmanalenbey5427 1 year ago

      Same here. I trained for 10 epochs and I'm getting the same output for any image I give. Is there something to change in the code, or is it just insufficient training? Thanks in advance.

  • @sairajdas6692 1 year ago

    Where and when does the caption_image method get called?

  • @thecros1076 4 years ago

    Wowwww.....just awesome ❤️❤️❤️❤️

    • @AladdinPersson 4 years ago

      You really are my biggest supporter, first comment on all my videos ❤️

    • @thecros1076 4 years ago

      @@AladdinPersson I always will be, forever... you taught me everything and I owe it all to you.

  • @pedramkhoshnevis 1 year ago

    Thank you. Can you add a requirements.txt file so we know the versions of each library?

  • @abhijitdeo2683 3 years ago

    thank you very much

  • @fish-j5q 5 months ago

    Can you show us how to run inference with this model? You did not show the code.

  • @Athulyanklife 2 years ago

    Hi Aladdin, thanks for the video 😍 I want to capture an image, check which class it belongs to, and say that class in audio format; so basically image classification plus image-to-speech conversion. How do I get the image's class, since the image doesn't contain any text? I can use this same code, right? I am a beginner, so...

  • @souravsaha1468 3 years ago

    I ran the same code, but I got an error. I have mentioned the error below; would you please tell me what this error is about?
    "ValueError: Expected input batch_size (1152) to match target batch_size (1120)."

  • @eng_ajy5091 2 years ago

    Thank you... what is your PC hardware? Can I run this code in real time?

  • @krishnachauhan2850 3 years ago

    Please make a video on attention in audio processing, e.g. speech emotion

  • @tianruiliu2883 2 years ago

    I like your color theme very much. Could you tell me which theme you are using?

    • @anmxdev 1 year ago

      You'll find the theme and how to set it up in the first video of this playlist

  • @AR22001144 1 year ago

    Why are the image and the captions concatenated and sent to the LSTM?

  • @vincentchong2647 3 years ago

    3:37 feeding predicted words as input; the connections differ between inference and training

  • @muntazirmehdi5250 3 months ago

    Where can I get the loader file?

  • @NaqqashDilshad 3 years ago

    I get an error related to spacy... I installed it, but I still get the same error. It says it's deprecated...

  • @yashchoudhari1613 4 years ago +1

    I am getting a ValueError at -> for idx, (imgs, caption) in tqdm... -> too many values to unpack

    • @AladdinPersson 4 years ago +1

      Did you manage to solve it? Try removing the tqdm part and see if it works; when running it on my machine from the GitHub repo I don't get an error. Have you downloaded the GitHub repository and run the image captioning script that's on there, and you still get the error? If so, what versions of PyTorch and tqdm are you using?
      EDIT: You can also try replacing it with enumerate(tqdm(train_loader)) and see if that works

    • @yashchoudhari1613 4 years ago

      @@AladdinPersson Thanks; it was actually my mistake, I accidentally deleted the other argument of get_loader. Thanks for replying though ✌️

  • @abhisekpanigrahi1033 1 year ago

    Hello Aladdin, I have a question. The number of images in the dataset is around 8090 and I selected a batch size of 32, so the total number of batches in each epoch should be 253. But when I load the data and check the length of the data loader, it shows 1265. I don't understand this. Can you please explain if you have any idea? I have never seen this.

    • @osmanalenbey5427 1 year ago

      I hope this is not too late. Although there are 8090 images, there are 40455 captions (about 5 captions for each image, not 1). This determines the length of the dataset, and with a batch size of 32 you get ceil(40455/32) = 1265 batches in total. I hope it will still be useful :)

  • @monikavarshney6225 2 years ago

    Getting this error: TypeError: relu(): argument 'input' (position 1) must be Tensor, not InceptionOutputs

  • @mandilquioxtenlp1202 3 years ago

    Hi, how do we check the model on custom data?

  • @mashup3594 3 years ago

    How do I extend the code to check validation and testing accuracy?

  • @minhct2511 1 year ago

    Hello sir, can you guide me on how to run this code? I'm new to Python so I'm still bad at it :(( (nice video anyway)

  • @andsoehd277 9 months ago

    Please make a video about automated hair removal from dermoscopy images

  • @sairampatil1664 2 years ago

    Can someone send me a link to the MSVD dataset, please?
    It's been removed from the website, so if anyone has it on Drive or something, that would be great

  • @junhuajlake4119 4 years ago

    Hey, thanks for this. Is the source code open? What is the GitHub address?

  • @navyagunji8591 3 years ago

    Could you please tell me where I can get my_checkpoint.pth.tar and the run files?

  • @arpitajain6747 1 year ago

    How do I run this in Colab?

  • @YasinShafiei86 2 years ago +1

    This is the error I get: TypeError: relu(): argument 'input' (position 1) must be Tensor, not InceptionOutputs

    • @jeremynguyen3669 1 year ago +1

      Newer versions of torchvision make you use
      ...
      self.inception = models.inception_v3(weights="DEFAULT")
      ...
      return self.dropout(self.relu(features[0]))
      This happens because inception_v3 now returns an InceptionOutputs (logits, aux_logits) tuple in training mode, so you index out the logits.
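
      (A slightly more defensive variant, since inception_v3 returns a plain Tensor in eval mode; a sketch, not verbatim repo code:)

          from torchvision.models.inception import InceptionOutputs

          features = self.inception(images)
          if isinstance(features, InceptionOutputs):
              features = features.logits  # training mode returns (logits, aux_logits)
          return self.dropout(self.relu(features))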

  • @piladivan7465 4 years ago

    When I run the train.py file it runs 0-1265, then it starts at 0 and goes to 1265 again, and it keeps going.
    Would you please explain how I can solve this problem?
    BTW thanks for the PyTorch videos

    • @AladdinPersson 4 years ago

      Have you tried running the one on Github? It works for me

    • @plusminuschirag 3 years ago

      There are 1265 batches, one pass per epoch; that's why it goes from 0 to 1265 and then starts over. If you change the batch size from 32 to something else you will see the 0 to 1265 range change as well.

    • @kolla_teja 3 years ago

      Same with me. Have you found a solution?

    • @sanjanadhingra1121 1 year ago

      I am also stuck on it; the training repeats over and over

  • @haideralishuvo4781 4 years ago

    I created almost the same model, only changing the backbone to EfficientNet. I don't know why it's giving the same caption for every image :(

    • @AladdinPersson 4 years ago

      I would assume it just has to do with tweaking the hyperparameters and training longer. If I recall correctly this was the case for me as well before I found hyperparameters that worked. Also make sure you're outputting the feature vector from EfficientNet so the connection between the CNN and RNN makes sense

    • @haideralishuvo4781 4 years ago

      @@AladdinPersson How did you tune the parameters?
      By random search?
      It looks like a difficult problem to tune to me.
      I'm thinking I might do a random search over 25 jobs for 2 epochs, then try the parameters with the lowest loss.
      Will that work?

    • @AladdinPersson 4 years ago

      @@haideralishuvo4781 Yeah, random search works. But let's get back to basics first: did you follow tip #1 from my common mistakes video and overfit a single batch first? Have you made sure your model can overfit a single batch of 1, 2, 32, 64, etc. before trying anything else? That's what I would check first; when you find something that works there, it will most likely work fine for the entire training set too, but random search can help in that process

  • @balachakradharrocksagayara2610 4 years ago

    Hi, training is completed but then it runs a 2nd time by itself. Why?

    • @AladdinPersson 4 years ago

      What do you mean it's running 2nd time?

    • @balachakradharrocksagayara2610 4 years ago

      @@AladdinPersson How much time does it require to complete training?

    • @AladdinPersson 4 years ago

      @@balachakradharrocksagayara2610 Depends a lot on your hardware; I think this was trained for a couple of hours on a GTX 1060

    • @balachakradharrocksagayara2610 4 years ago

      @@AladdinPersson Is it possible to train on one machine and copy the files to another machine?

    • @yashchoudhari1613 4 years ago

      @@balachakradharrocksagayara2610 Just copy the checkpoint file to the desired machine and load the model with it

  • @shahnazpc 3 years ago

    I'm regretting not watching this video a few weeks ago... :'(

  • @junhuajlake4119 4 years ago

    Could you provide the source code for this video? Thanks.

    • @AladdinPersson 4 years ago

      Hi, the GitHub link is in the description of the video, although it links to the machine learning repository; there you can find Image Captioning. Here is the direct link to the code in the video: github.com/AladdinPerzon/Machine-Learning-Collection/tree/master/ML/Pytorch/more_advanced/image_captioning

    • @junhuajlake4119 4 years ago

      @@AladdinPersson Got it. Thank you very much.

  • @jonatan01i 3 years ago

    6:44
    In Python 3.x you can just do
    super().__init__()

  • @ArunKumar-sg6jf 3 years ago

    How do I do this in TensorFlow?

    • @etlekmek 2 years ago

      u fuckin idiot go google it and watch the damn shit