I am a beginner. After many hours of failed attempts with other YouTube videos, yours worked. THANK YOU, you are the best!
Glad I could help!
Thank you so much! My teacher asked me to make a similar tutorial and I based it on yours, but don't worry, I put your video as a reference. Greetings from Brazil!
Wow, so nice of you, thanks. Please also share the channel with your teacher and class. Thanks!
What was the exact code you used? The one you linked to produces different output. Thanks.
Sir, I admire you for making such a great vid.
Appreciate it!
How can we be sure about the model we pick for the repo ID, and at what quantization type? Many models come in several quantizations.
I want to deploy the OpenFace model for my application, but I'm unable to do this; it's not even showing the deploy option. Can you tell me how to deploy this model? The model you use can be deployed.
Can I use this API just like the OpenAI API, and maybe use it for my projects?
Fahd, great video! When you say we can work with any model on Hugging Face, does this also include models that accept multimodal inputs? I'm on a project where I need to make calls to models like InstructBLIP and LLaVA-Next, which are found in the Hugging Face transformers library. However, I'm stuck on how I would call the actual model, how I would structure the prompt and the JSON packet to send to it (given that I need to send multiple images and data from a CSV file), and how I'd integrate all of this into a proper prompt for a multimodal model. Any tips or guidance would help. Thank you!
I would have to check.
Same question
Can you provide the code used, on GitHub or on your website, please?
Sure, I have put the link to the code in the video's description.
Thank you !!! 😊
Sir, I want to use the "openai/whisper-large-v3-turbo" model, but I can't find any tutorial on this.
Why'd you use a giant model locally?
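For reference, a minimal sketch of calling that model over the hosted Inference API instead of running it locally. This assumes the classic `api-inference.huggingface.co` endpoint and a token in an `HF_TOKEN` environment variable; check the current Hugging Face docs for the endpoint your account uses.

```python
import json
import os
import urllib.request

MODEL_ID = "openai/whisper-large-v3-turbo"
API_URL = f"https://api-inference.huggingface.co/models/{MODEL_ID}"


def build_request(audio_bytes: bytes, token: str) -> urllib.request.Request:
    """Build the POST request; the body is the raw audio file bytes."""
    return urllib.request.Request(
        API_URL,
        data=audio_bytes,
        headers={"Authorization": f"Bearer {token}"},
        method="POST",
    )


def transcribe(audio_path: str, token: str) -> dict:
    """Send an audio file and return the parsed JSON, e.g. {"text": "..."}."""
    with open(audio_path, "rb") as f:
        req = build_request(f.read(), token)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


if __name__ == "__main__":
    token = os.environ.get("HF_TOKEN")
    if token:  # only hit the network when a token is actually configured
        print(transcribe("sample.flac", token))
```

The same pattern works for any hosted model: the URL path is the repo ID, and the bearer token is your Hugging Face access token.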
I'm having difficulty with exporting the token. I think in Windows it's done with "set" instead of "export", but it still doesn't accept this.
Yes, set or setx.
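The equivalent commands, assuming the script reads the token from a variable named HF_TOKEN (your script may expect a different name, and "hf_xxx" is a placeholder for your real token):

```shell
# Windows cmd.exe, current session only:    set HF_TOKEN=hf_xxx
# Windows cmd.exe, persisted (note: setx only affects NEW terminal windows):
#                                           setx HF_TOKEN "hf_xxx"
# Windows PowerShell uses its own syntax:   $Env:HF_TOKEN = "hf_xxx"
# Linux/macOS (bash/zsh), current session:
export HF_TOKEN="hf_xxx"
echo "$HF_TOKEN"
```

A common gotcha on Windows is running setx and then testing in the same window; the variable only appears in terminals opened afterwards.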
Hi, does the free Inference API work for custom/private models, or do we need to use the paid endpoints? Thanks.
I think there are some free credits available.
Very often when I try to run code this way, it does not run at all; it just sits blankly waiting for output... Is there a solution to this significant waiting time? microsoft/Phi-3-vision-128k-instruct is the model I was using.
Try running it in a parallel thread using async/await.
Hi Mirza,
Can we do tool calls through this HF Inference API? Reply fast if possible.
Yes, you can, if the model supports it.
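A sketch of what a tool-call request looks like. The get_weather tool below is hypothetical, and the commented-out call assumes huggingface_hub's InferenceClient.chat_completion, which accepts OpenAI-style tools; the model ID shown is just an example of a chat model that supports tool calling:

```python
# Hypothetical tool schema -- get_weather is an illustration, not a real API.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

messages = [{"role": "user", "content": "What's the weather in Lahore?"}]

# With huggingface_hub installed and a valid HF_TOKEN, the call looks like:
#
#   from huggingface_hub import InferenceClient
#   client = InferenceClient("meta-llama/Meta-Llama-3-8B-Instruct")
#   out = client.chat_completion(messages, tools=tools, tool_choice="auto")
#   tool_call = out.choices[0].message.tool_calls[0]
#
# A model that supports tool calling returns the function name and JSON
# arguments for your code to execute; one that doesn't will answer in text.
```

Whether you get a structured tool call back depends entirely on the model, so check the model card before relying on this.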
I know you explained it, but I just want to make sure: how do I know if a model is paid or free? And can the Inference API be used for free forever, or will it stop one day?
Thank you for the video; I would appreciate your reply.
@merakshot9277 You need to pay for the endpoint.
thanks
Is that an AI agent?
No