How to Make Your Images Talk: The AI that Captions Any Image

  • Published on Jan 9, 2025

Comments • 135

  • @PritishMishra
    @PritishMishra  2 years ago +5

    Here's how I created a search engine for books using GPT3: th-cam.com/video/SXFP4nHAWN8/w-d-xo.html

  • @yashwantrana8229
    @yashwantrana8229 2 years ago +6

    I can't believe your view count, because your explanation is next level, dude. I thought it must have crossed at least 1 lakh by now, but I hope it does soon.

  • @محمدالفقى-ي4ب
    @محمدالفقى-ي4ب 1 year ago +5

    It's not a tutorial, it's a movie. I really enjoyed it 💙

  • @gabip265
    @gabip265 2 years ago +3

    Amazing video! You made it interesting and practical. The memes and effects were lit.

  • @hugehammer
    @hugehammer 6 months ago +1

    Awesome video, bro!! You explained image captioning in a simple and fun way.

  • @dhiraj223
    @dhiraj223 2 years ago +13

    Awesome video 🔥 and nice animation as always (or rather, this time it was more dramatic 😂😂😂). Way to go 👍🏻👍🏻👍🏻

    • @PritishMishra
      @PritishMishra  2 years ago +1

      Yeah, I overdid the Transformer Introduction XD.

  • @GolpokothokRaktim
    @GolpokothokRaktim 1 year ago +2

    I've just started to realize the potential of AI, and I already feel behind with all these new tools. Would love to see a future video about BlueWillow, which is completely free.

  • @Waliul_The_Wall-E
    @Waliul_The_Wall-E 1 year ago +1

    It's a good tutorial, but I have a question about the attention mechanism. At 4:50, how does it know to focus on the dog when it gets the word "dog" as input? If it knows by detecting objects, then how does it know where to focus when it receives words like "It/The/There/Eating/Water/Flying"? Please clarify.

  • @EM-nr9hj
    @EM-nr9hj 9 months ago +3

    Bro, I'm unable to get the image captioning with RNN code. The link is not working. Can you please check?

  • @RudranilBhattacharjee
    @RudranilBhattacharjee 2 years ago +1

    Just amazing! Loved this video. Keep more coming!

  • @sapnilpatel1645
    @sapnilpatel1645 1 year ago +2

    Learned so many new things. Thanks for making the video.

  • @venkatavivek2895
    @venkatavivek2895 8 months ago +2

    How do I use the saved model weights (model.h5) in another file to make inferences on new images?

  • @rasilmaharjan7307
    @rasilmaharjan7307 1 year ago +5

    When I run the code on Streamlit, it shows two errors:
    1. ValueError: axes don't match array.
    2. ValueError: The name "conv2d" is used 2 times in the model. All layer names should be unique.
    How can I solve the problem?

    • @sameermdr11
      @sameermdr11 1 year ago

      I had the same problem too.

  • @IamAmitBarman
    @IamAmitBarman 1 year ago +1

    Bro, let me salute you for doing an incredible job at this age. BTW, let's get to the main point: since I'm in your comment section, you must have guessed that I'm having trouble understanding the attention mechanism and TensorFlow overall. I have to submit my paper within the next month and I'm running into many problems; it would be great if you could work with me in the speech domain. Please respond soon.

  • @joycemalubay1443
    @joycemalubay1443 1 year ago +2

    How can I save the model and run it in Android Studio?

  • @GANGADHARTHOTAKURA
    @GANGADHARTHOTAKURA 9 months ago +1

    The image captioning with RNN source code is not opening, dude. Please re-upload it 😊.

  • @BoloFofoPT
    @BoloFofoPT 8 months ago

    Amazing video, where did you learn all of this? OMG, this just saved me so much time. Lifesaver!

  • @hellotherethere
    @hellotherethere 1 year ago +2

    Hey, I looked into your Kaggle notebook for the Transformer model with the COCO dataset. You mentioned that you only trained the model on 14k images. I'm a beginner in ML, so can you tell me what I should change in your code to increase the training dataset size beyond 14k?

    • @tounesnada4208
      @tounesnada4208 1 year ago

      Hello there, I'm having the same problem understanding this. Can you tell me if you found a solution? Thank you.

  • @dishadubey8568
    @dishadubey8568 6 months ago

    Hey, great lecture! Just need some help: the Google Colab link for image captioning with RNN isn't working. It would be a great help if you could provide a new link. Thank you!!

  • @shreyaladdha4303
    @shreyaladdha4303 2 years ago

    Wow, very nicely explained!!!

  • @akashbhavsar1510
    @akashbhavsar1510 1 year ago +3

    The link isn't working for "Image Captioning with RNN". @PritishMishra, can you please share the code?

  • @vamshynaidu
    @vamshynaidu 10 months ago

    You nailed it, bro.

  • @ujjawalagrawal
    @ujjawalagrawal 7 months ago

    Very nice explanation

  • @drafatkarim8631
    @drafatkarim8631 10 months ago

    Nice video. How long did it take you to train the Transformer model?

  • @sridharreddy5714
    @sridharreddy5714 10 months ago

    I want to do image captioning with unsupervised or semi-supervised learning, bro. If you have any reference code or an implementation you can share, it would be helpful to me.

  • @aady392
    @aady392 2 months ago

    Hi Pritish, amazing tutorials, thank you. While running the Transformer Colab notebook I get an error at:
    ----> 4 pred_caption = generate_caption(img_path). TypeError: `x` and `y` must have the same dtype, got tf.uint8 != tf.float32.
    Can you please help!

    • @PubG-bm7hv
      @PubG-bm7hv 2 months ago

      Did it get solved??

    • @aady392
      @aady392 2 months ago

      @PubG-bm7hv no

  • @vaishnavibadakere8235
    @vaishnavibadakere8235 1 year ago

    If the image already comes with 5 captions, then what's the use of generating a caption for it? We could just use those available captions, right?

  • @sohailgidwani5621
    @sohailgidwani5621 1 year ago +1

    I tried to use your project, but when I tried to run it, it showed this error ->
    File "h5py\_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
    File "h5py\_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
    File "h5py\h5f.pyx", line 106, in h5py.h5f.open
    OSError: Unable to open file (file signature not found)
    Can you please help me solve this error?
    Also, can you please share a link to the model.h5 file?

    • @sahiltaneja9021
      @sahiltaneja9021 1 year ago

      Same. I'm thinking of taking it as my major project.

  • @SHIKAI-du3tf
    @SHIKAI-du3tf 1 year ago +1

    Hi, when I try to run it on Streamlit it displays the error "ImportError: cannot import name 'get_caption_model' from 'model' (C:\Users\z\model.py)". What am I doing wrong? Sorry, I'm totally new to this, so can you please help? (I downloaded both H5 files too.)

    • @PritishMishra
      @PritishMishra  1 year ago

      Have you downloaded the model.py file?
      If not, you can download it here: github.com/pritishmishra703/Image-Captioning/blob/master/model.py
      The 'get_caption_model' function is present in this file.

  • @tounesnada4208
    @tounesnada4208 1 year ago

    Hello there. Very helpful, I'm so thankful. Can you please provide the code to evaluate the model (not just on one image) using the BLEU metric?

  • @sLntBtDdLy
    @sLntBtDdLy 1 year ago

    Awesome video. Is it possible to add multiple-image upload and generate a caption for each image?

    • @PritishMishra
      @PritishMishra  1 year ago

      Yes, you can do this with batching. See the HuggingFace documentation for more info; it's easy to do. Post here if you run into any errors or issues.
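
      For illustration, a minimal sketch of batched captioning with a HuggingFace image-to-text pipeline (the checkpoint name and file paths here are just placeholders, not the code from the video):

      from transformers import pipeline

      # Any image-captioning checkpoint works here; this one is only an example.
      captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")

      image_paths = ["photo1.jpg", "photo2.jpg", "photo3.jpg"]  # hypothetical uploads
      results = captioner(image_paths, batch_size=3)

      for path, result in zip(image_paths, results):
          # Each result is a list of dicts with a 'generated_text' field.
          print(path, "->", result[0]["generated_text"])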

  • @LinhHuynh-lr5bz
    @LinhHuynh-lr5bz 7 months ago

    The link for Image Captioning with RNN is dead. Can you update it to help me? Thank you. From Vietnam with love.

  • @thashinrahuman1976
    @thashinrahuman1976 1 year ago

    Kind of a dumb question:
    why do we train again if we are already using a CNN model pretrained on the COCO dataset as the encoder to extract the features? Still new to this area.

    • @PritishMishra
      @PritishMishra  1 year ago +1

      InceptionV3 is trained for image classification (cloud.google.com/tpu/docs/inception-v3-advanced#introduction), so we are *fine-tuning* it on our caption-generation task.
      In simple words: InceptionV3 is NOT specialized for image captioning, so fine-tuning helps the model learn task-specific features.
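
      For intuition, a minimal Keras sketch of reusing InceptionV3 as the image encoder and leaving it trainable for fine-tuning (an illustration of the idea, not the exact layers or names used in the notebook):

      import tensorflow as tf

      # InceptionV3 pretrained on ImageNet, without its classification head.
      base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet")
      base.trainable = True  # set to False to keep the encoder frozen instead

      # Map a (299, 299, 3) image to a sequence of feature vectors that the
      # caption decoder can attend over.
      image_input = tf.keras.Input(shape=(299, 299, 3))
      feature_map = base(image_input)                              # (8, 8, 2048)
      features = tf.keras.layers.Reshape((-1, 2048))(feature_map)  # (64, 2048)
      encoder = tf.keras.Model(image_input, features)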

  • @swastiktyagi8246
    @swastiktyagi8246 8 months ago

    Can you share the link for the pretrained model (.h5)? Please share it.

  • @ekosaputra5887
    @ekosaputra5887 1 year ago

    Sir, why can't I open the file that contains the image captioning with RNN model?

  • @jigsaw841
    @jigsaw841 1 month ago

    The Image Captioning with RNN code isn't available. Could you please fix it? :/

  • @beatx2173
    @beatx2173 1 year ago

    This is dope brotha!!

  • @RawFromCam
    @RawFromCam 1 year ago

    It keeps showing "connecting to runtime to enable file browsing" after uploading images.

  • @sasidharank372
    @sasidharank372 11 months ago

    I have a problem with the caption key and image signature. Can you please help me with it?

  • @rubabsiddiqui
    @rubabsiddiqui 1 year ago

    Hey, can you please share the code you wrote in Streamlit? Also, how is the huge COCO dataset processed on localhost? And how did you host it on HuggingFace? Which one is hosted on HuggingFace: the RNN one, the Transformer one, or the COCO-dataset one? Please tell me how to run it on localhost without downloading the whole dataset to my machine.

    • @PritishMishra
      @PritishMishra  1 year ago +1

      The code that I wrote in Streamlit:
      huggingface.co/spaces/pritish/BookGPT/tree/main
      How is the huge COCO dataset processed on localhost?
      Answer: I trained my model on the COCO dataset, loading the dataset once on Google Colab. Once the training was done, I saved the trained model weights to a file. Now, when I want to use the trained model for inference or fine-tuning (on my localhost), I only need to load the saved model from the file, not the entire COCO dataset.
      How did you host that on HuggingFace?
      Answer: I created an app.py file that includes a user interface (UI) made with Streamlit, then pushed it to HuggingFace Spaces. Here's how to do it: huggingface.co/docs/hub/spaces-overview#creating-a-new-space
      Which one is hosted on HuggingFace? The RNN one, the Transformer one, or the COCO-dataset one?
      Answer: The Transformer + COCO one is hosted on HuggingFace.
      Please tell me how to run it on localhost without downloading the whole dataset on my machine.
      Answer: As I said, there's no need to download the whole dataset. You just need to load the model file ('model.h5') and then you can give it any image and it will generate captions.
      First clone the repository:
      git clone huggingface.co/spaces/pritish/BookGPT
      Then run the `app.py` file. This will take some time as it imports all the modules and loads the saved model. It will raise an error if you don't have TensorFlow installed, so make sure it is installed!
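
      The "train once, reuse the weights" workflow above boils down to generic Keras calls like these (a toy model stands in for the caption model here; only the save/load pattern matters):

      import tensorflow as tf

      # Toy stand-in for the caption model; the point is the workflow, not the architecture.
      def build_model():
          return tf.keras.Sequential([
              tf.keras.Input(shape=(4,)),
              tf.keras.layers.Dense(8, activation="relu"),
              tf.keras.layers.Dense(1),
          ])

      # After training (e.g. on Colab/Kaggle), save only the weights.
      trained = build_model()
      trained.save_weights("model.h5")  # note: Keras 3 expects a .weights.h5 suffix

      # Later, on localhost, rebuild the same architecture and load the weights;
      # the training dataset is no longer needed for inference.
      restored = build_model()
      restored.load_weights("model.h5")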

  • @ayushjindal4981
    @ayushjindal4981 10 months ago

    Hi Pritish, is it possible to get your model's results via web API calls?

    • @PritishMishra
      @PritishMishra  10 months ago

      Yes, but you will need to deploy it. To my knowledge, HuggingFace has such features.

    • @ayushjindal4981
      @ayushjindal4981 10 months ago

      @@PritishMishra I want to use the model that you have deployed on HuggingFace. Is that possible? Or, if possible, can you please share your trained model with me?

  • @danushaaditya3630
    @danushaaditya3630 1 year ago

    Hey bro! Thanks for the video, learnt a lot. Your RNN Colab link seems to be broken. Can you please update it? Thanks!

  • @ayushpanwar6626
    @ayushpanwar6626 2 years ago +1

    Hey! It took a lot of time to process the Flickr30k file for so many epochs.
    How did you manage to get the COCO h5 file?? How long did it take to process?

    • @PritishMishra
      @PritishMishra  2 years ago +1

      I only trained on a random sample of 14K images from the COCO dataset, so the training time was only slightly greater than for Flickr8k.

    • @quotesofacuity6889
      @quotesofacuity6889 2 years ago +1

      Thanks for replying!
      I'm excited to see that we can use the model.h5 file and directly build the project without training it. Wouldn't it be nice if someone managed to get a model.h5 file trained on the full COCO dataset...

    • @sahiltaneja9021
      @sahiltaneja9021 1 year ago

      I am facing a lot of errors, kindly help me out

  • @AniKeth-wi7zb
    @AniKeth-wi7zb 8 months ago

    The GitHub link is not opening; it says it was uploaded from a suspended account.

  • @fung1459
    @fung1459 6 months ago

    Your RNN file is showing "Page Not Found". Can you re-upload the file?

  • @rohitchitte5614
    @rohitchitte5614 1 year ago

    How do I use the model.h5 file to make predictions? I tried using load_model but it expects a checkpoint file, and I also tried load_weights but it still gives an error. Can you please show how to use this model.h5 file to make predictions?

    • @PritishMishra
      @PritishMishra  1 year ago

      You can use the `get_caption_model` function to load the model: github.com/pritishmishra703/Image-Captioning/blob/master/model.py#L299
      Then, to make predictions, use the `generate_caption` function: github.com/pritishmishra703/Image-Captioning/blob/master/model.py#L270
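
      Putting those two together, the usage looks roughly like this (the import path and the argument order of `generate_caption` are assumptions; check their definitions in model.py):

      # Assumes model.py from the repository is importable and the downloaded
      # weight files sit where it expects them.
      from model import get_caption_model, generate_caption

      caption_model = get_caption_model()                    # rebuilds the model and loads the weights
      caption = generate_caption("test.jpg", caption_model)  # hypothetical image path; check the real signature
      print(caption)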

  • @reactjs1900
    @reactjs1900 1 year ago

    Good video editing skills

  • @ananda2918
    @ananda2918 1 year ago

    Can you tell me how you trained on the MS COCO dataset for this, and how many epochs you ran to get the results in your video? Waiting for your reply.

    • @PritishMishra
      @PritishMishra  1 year ago +1

      I have added the Jupyter notebook of the entire training run on MS COCO. You can find it in the description.

    • @ananda2918
      @ananda2918 1 year ago

      @@PritishMishra I couldn't find it; in the source code only the Flickr dataset is used. Can you please help me out?

    • @PritishMishra
      @PritishMishra  1 year ago +1

      The link I provided in the description was wrong 😅 Sorry for that!
      Here you go: www.kaggle.com/code/pritishmishra/image-captioning-on-coco-dataset

    • @ananda2918
      @ananda2918 1 year ago

      @@PritishMishra Thanks. I've been training this for 10 epochs, but it stops at 8 epochs and the results are not very accurate. BTW, is it possible to retrain the model from saved weights? I have weights from a run of 8 epochs with loss: 2.6367 - acc: 0.4514. Please reply ASAP.

    • @PritishMishra
      @PritishMishra  1 year ago

      An accuracy of 0.45 is great, I would say! If you want to increase it further, I recommend training on more data (I trained on 14K images; make it 24K or 30K). If you load my saved weights, you will also save some epochs of training.

  • @rahkkuwn
    @rahkkuwn 1 year ago

    What is the software at the end of the taskbar called? Just curious.

    • @PritishMishra
      @PritishMishra  1 year ago

      The one with the red dot? It's OBS Studio, the screen-recording software I use.

  • @baochill1504
    @baochill1504 1 year ago

    Can you use the BLEU score to evaluate the model?

    • @PritishMishra
      @PritishMishra  1 year ago

      Sure, I will keep this in mind next time.

  • @Vikramx123
    @Vikramx123 9 months ago

    How can we do it for videos, bro??

  • @Ooblec
    @Ooblec 1 year ago

    Great! Learned a lot.

  • @ayamohammed2583
    @ayamohammed2583 1 year ago

    Hi, can you help me please? When I call the get_caption_model() function I get the following error: "ValueError: axes don't match array". Do you have any ideas?

    • @PritishMishra
      @PritishMishra  1 year ago

      Can you please paste the entire error?

    • @kenzamehrin7635
      @kenzamehrin7635 11 months ago

      I also have this same error. Do you know how to solve it?

  • @Deep_photographer
    @Deep_photographer 1 year ago

    Can you share the GitHub link for making predictions with the loaded model?

  • @ghashianameen
    @ghashianameen 9 months ago

    Bro, can you help me with a video captioning project?

  • @blackplagueklan7246
    @blackplagueklan7246 1 year ago

    Hi, I tried testing your model and it was not giving correct captions most of the time; for example, whenever I uploaded a simple face image, it would always output "a man in a suit and tie". I am new to ML/DL and wanted to make my first project on this topic. How can I make it produce more accurate and diverse captions?

    • @blackplagueklan7246
      @blackplagueklan7246 1 year ago

      You mentioned that you used 14k images. Does using the whole COCO dataset increase its caption accuracy?

    • @PritishMishra
      @PritishMishra  1 year ago

      I haven't tried training it on the whole dataset, but I am sure the caption accuracy will increase if you do. Make sure the model doesn't overfit. Training on the whole dataset may also increase the generalization capabilities of the model.

    • @blackplagueklan7246
      @blackplagueklan7246 1 year ago +1

      @@PritishMishra Okay, I'll give that a try. For building the project, should I opt for a pretrained model like the ViT model from HuggingFace and use PyTorch for processing? The whole project is done within 30 lines of code and the accuracy is extremely high as well. Do let me know your thoughts on that.

    • @PritishMishra
      @PritishMishra  1 year ago

      @@blackplagueklan7246 Can you share the notebook with me? I want to see the performance. I will be glad to share the link with everyone in the description!

    • @beatx2173
      @beatx2173 1 year ago

      Dude, can you share the notebook with me please? @@blackplagueklan7246

  • @LeoPaulose-g6n
    @LeoPaulose-g6n 1 year ago

    Bro, the Image Captioning with RNN source code is not available.

    • @PritishMishra
      @PritishMishra  1 year ago

      I will fix this and get back to you.

  • @tounesnada4208
    @tounesnada4208 1 year ago

    Hello, was anyone able to train the model on all of the data? If so, can you please, please share the model.h5 file?

  • @Its_nivas
    @Its_nivas 1 year ago

    Bro, where can I get the full code?

  • @c_213sse
    @c_213sse 1 year ago

    Hey, how do I load a Kaggle dataset into Colab?

  • @sahiltaneja9021
    @sahiltaneja9021 1 year ago

    Can we make this work with real-time images from a webcam?

    • @PritishMishra
      @PritishMishra  1 year ago

      Yes. You can do it by using OpenCV.
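
      A rough sketch of what that could look like; caption_frame is a hypothetical placeholder for whatever captioning function you plug in (e.g. the project's generate_caption):

      import cv2

      def caption_frame(frame):
          # Placeholder: replace with a call to your caption model.
          return "caption goes here"

      cap = cv2.VideoCapture(0)                     # default webcam
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          caption = caption_frame(frame)
          cv2.putText(frame, caption, (10, 30),
                      cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
          cv2.imshow("caption", frame)
          if cv2.waitKey(1) & 0xFF == ord("q"):     # press q to quit
              break
      cap.release()
      cv2.destroyAllWindows()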

  • @lukeshpraveen4763
    @lukeshpraveen4763 8 months ago

    Your GitHub link says the account is suspended.

  • @pandoraowl7379
    @pandoraowl7379 1 month ago

    Bro, your source code link is not working.

  • @shrirampachpor6187
    @shrirampachpor6187 1 year ago

    Bro, where is the link for the repository?

  • @shrutijain4014
    @shrutijain4014 1 year ago

    Where is the code for this video?

  • @shrirampachpor6187
    @shrirampachpor6187 1 year ago

    How do I open a browser in Google Colab?

  • @aquilesmandoyo
    @aquilesmandoyo 2 years ago

    Hi, I just replicated your code with COCO and the Transformer, but the accuracy is no more than 43%. Do you know why?

    • @PritishMishra
      @PritishMishra  2 years ago +3

      The image captioning model is a generative model, which means it predicts a new caption for each image. You may be aware that the predictions are generated word by word: the model generates new words depending on the words it predicted previously, and generative models are highly chaotic; a minor change in their initial conditions can completely alter the structure of the predicted caption. That's why accuracy is a hard metric for evaluating such models: even a single extra word in the model's prediction can ruin the accuracy entirely. In short, 43% is moderately good accuracy for our model.
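
      To make that concrete, overlap-based metrics such as BLEU give partial credit for shared n-grams instead of demanding an exact match. A minimal sketch with NLTK (an illustration, not the notebook's own evaluation code):

      from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

      # One reference caption and a prediction that differs by a single word.
      reference = [["a", "dog", "is", "running", "on", "the", "beach"]]
      candidate = ["a", "dog", "is", "walking", "on", "the", "beach"]

      # Exact-match accuracy would call this caption simply wrong, but BLEU
      # still rewards the overlapping n-grams.
      smooth = SmoothingFunction().method1
      print(sentence_bleu(reference, candidate, smoothing_function=smooth))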

    • @ananda2918
      @ananda2918 1 year ago

      Can you help me out, man?

  • @kavalshinkar9872
    @kavalshinkar9872 1 year ago

    I want to know how you built the web app.

    • @PritishMishra
      @PritishMishra  1 year ago

      You can find the code on HuggingFace. I have used Streamlit.
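
      A stripped-down sketch of that kind of Streamlit app (the real app.py on HuggingFace Spaces does more; generate_caption here is a placeholder for the project's inference call):

      import streamlit as st
      from PIL import Image

      def generate_caption(image):
          # Placeholder: swap in the trained model's inference function here.
          return "a placeholder caption"

      st.title("Image Captioning Demo")
      uploaded = st.file_uploader("Upload an image", type=["jpg", "jpeg", "png"])

      if uploaded is not None:
          image = Image.open(uploaded)
          st.image(image)
          st.write(generate_caption(image))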

  • @s.dharanashs.dharanash5991
    @s.dharanashs.dharanash5991 11 months ago

    The RNN file does not exist, bro. Please upload it.

  • @swetanayak2005
    @swetanayak2005 9 months ago

    How do I get the code?

  • @satyamtiwari3839
    @satyamtiwari3839 7 months ago

    Hey, none of your links are working.

  • @someone5347
    @someone5347 1 year ago

    Can we use the same code for the Arabic language?

    • @PritishMishra
      @PritishMishra  1 year ago

      Definitely. You just have to train it on an Arabic dataset.

  • @YforYou2596
    @YforYou2596 10 months ago

    Bro, I'm unable to get the dataset, brooo.

  • @riyajose4564
    @riyajose4564 1 year ago

    How can I contact you?

  • @ayush1344
    @ayush1344 10 months ago

    Brother, this video is really great and I loved your explanation, but I am a beginner in AI/ML and want to learn this in detail.
    Can you please create a detailed video on this topic?

  • @kailashbalasubramaniyam230
    @kailashbalasubramaniyam230 9 months ago

    Good one, buddy.

  • @RAZZKIRAN
    @RAZZKIRAN 1 year ago

    Got an error with the context.txt file.

  • @shubhamilhe1452
    @shubhamilhe1452 1 month ago

    that was fucking amazing

  • @omkarkhandalkar8869
    @omkarkhandalkar8869 2 years ago

    Sexy explanation, bhai.
    Really enjoyed it 😌💫

  • @E_HarshitaGoydani
    @E_HarshitaGoydani 2 years ago

    What is this path in the code: "with open(f'{BASE_PATH}/annotations/captions_train2017.json', 'r') as f:"? I can't figure it out and it gives me a directory error.
    Please reply, I've been stuck for a long time!!
    I'm getting a directory error in every code cell.

    • @PritishMishra
      @PritishMishra  2 years ago

      Hello, I originally made that notebook on Kaggle, so I forgot to include the download code for Colab. I'm adding it right now. However, I strongly advise you to run that file on Kaggle, because the COCO dataset is 27 GB and downloading it on Colab will take forever. To run the file on Kaggle, do the following:
      1. Download the notebook from Colab. (Go to File -> Download -> Download .ipynb)
      2. Go to Kaggle and sign in.
      3. Then, on the left menu, click the big "+" button.
      4. Select "Create Notebook."
      5. You should now be able to see the newly created notebook. Now go to File -> Import Notebook.
      6. Upload the file you downloaded in Step 1.
      7. You should now be able to see the entire notebook. Now, in the right pane, click the "Add Data" button.
      8. Look for Awsaf's "Coco 2017 Dataset" and add that dataset. (This one: bit.ly/3Vcst64)
      You're good to go! Run the notebook now and everything should work, with the path setup shown in the sketch below. If you encounter any new errors, please reply here and I will help you.
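
      Once the dataset is attached, the path setup in the notebook looks roughly like this (the exact folder name under /kaggle/input is an assumption; check the Input panel on the right before running):

      import json

      BASE_PATH = "/kaggle/input/coco-2017-dataset/coco2017"  # adjust to the attached dataset's path

      with open(f"{BASE_PATH}/annotations/captions_train2017.json", "r") as f:
          annotations = json.load(f)

      # Standard COCO caption files keep the captions under the "annotations" key.
      print(len(annotations["annotations"]), "captions loaded")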

    • @RawFromCam
      @RawFromCam 1 year ago

      Hello

    • @RawFromCam
      @RawFromCam 1 year ago

      Please reply

  • @beatx2173
    @beatx2173 1 year ago

    nailed it

  • @beatx2173
    @beatx2173 1 year ago

    cool

  • @shafqatkhiraam7343
    @shafqatkhiraam7343 5 months ago

    The RNN source code link is not working 😢😢

  • @witchergaming5796
    @witchergaming5796 9 months ago

    The RNN source code link is not working. Please provide a link.