Nice video, it helped a lot. I follow you everywhere. Thanks for this, Abhishek Bhai.
Thank you for all the videos and all the PyTorch knowledge :)
Hello Thakur, thanks for your awesome video, but I have one problem when converting my model. It's a BERT-like model; when I attempt to convert it, this error is raised: "RuntimeError: Unsupported: ONNX export of Slice with dynamic inputs. DynamicSlice is a deprecated experimental op." Could you tell me what that means and what I should do to handle it? Thank you a lot!
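For what it's worth, that error usually means the exporter is falling back to the old experimental DynamicSlice op; exporting with a newer opset (10+) typically fixes it. A minimal sketch, assuming a generic Hugging Face BERT-like model (model name and axis names here are placeholders):

```python
# Sketch: export a BERT-like model with opset >= 10 so Slice with
# dynamic inputs is supported natively instead of via DynamicSlice.
import torch
from transformers import AutoModel, AutoTokenizer

# return_dict=False makes the model return a plain tuple, which the
# ONNX exporter handles more cleanly than a ModelOutput/dict.
model = AutoModel.from_pretrained("bert-base-uncased", return_dict=False)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model.eval()

inputs = tokenizer("a sample sentence", return_tensors="pt")

torch.onnx.export(
    model,
    (inputs["input_ids"], inputs["attention_mask"]),
    "model.onnx",
    opset_version=11,  # key change: avoids the deprecated DynamicSlice path
    input_names=["input_ids", "attention_mask"],
    output_names=["last_hidden_state", "pooler_output"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "sequence"},
        "attention_mask": {0: "batch", 1: "sequence"},
    },
)
```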
Thanks for another good, informative video. Which PyTorch extension do you use? It seems very good.
The video is really helpful, but I already have the fine-tuned model (PEGASUS) on Colab, so is it going to be any different from this scenario?
Note: the model was fine-tuned using Hugging Face.
Hello Abhishek, good stuff. Could you take on a Kaggle competition using TensorFlow, with an end-to-end flow like the one you did for PyTorch (Bengali Handwritten Dataset)? That would be super helpful. BTW, excellent stuff.
Thanks for yet another applied ML video. Wondering if we can get batch predictions from a model served on Flask?
Thank you. Yes, you definitely can. A simple way would be to use a separator like a semicolon: split the sentences, then change the sentence-predict function to handle multiple sentences.
I can possibly show how to do it in one of the future videos, or maybe just push a batch API to the GitHub repo.
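A minimal sketch of that idea, assuming a single-sentence predict function already exists (the endpoint and function names here are hypothetical):

```python
# Sketch: batch endpoint that splits the input on a semicolon separator
# and runs the existing single-sentence predictor on each piece.
from flask import Flask, request, jsonify

app = Flask(__name__)

def predict_sentence(sentence):
    # placeholder for the real single-sentence predict function
    return {"sentence": sentence, "positive": 0.5}

@app.route("/predict_batch")
def predict_batch():
    text = request.args.get("sentence", "")
    sentences = [s.strip() for s in text.split(";") if s.strip()]
    return jsonify([predict_sentence(s) for s in sentences])
```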
@abhishekkrthakur Thank you. It would definitely help! Guessing pushing to your GitHub repo would be faster :)
Thanks for the video
Thanks, great content!
If my PyTorch model has a dictionary as output, with keys 'classification' and 'regression', can I export it to .onnx? Thanks!
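Not sure how every exporter version handles dict outputs, but a common workaround is to wrap the model so it returns a tuple instead; a hedged sketch (the wrapper and key names mirror the question, everything else is hypothetical):

```python
# Sketch: the ONNX exporter prefers tensor/tuple outputs, so wrap a model
# that returns {"classification": ..., "regression": ...} in a module
# that unpacks the dict into a tuple before export.
import torch

class ExportWrapper(torch.nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        out = self.model(x)  # original model returns a dict
        return out["classification"], out["regression"]

# torch.onnx.export(ExportWrapper(model), dummy_input, "model.onnx",
#                   output_names=["classification", "regression"])
```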
Thanks for sharing.
So... with ONNX, we don't need to use jit.trace for logic in PyTorch?
I had the same question; I saw this in one of the issues raised in PyTorch.
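From what I understand, torch.onnx.export traces the model by default, so data-dependent control flow gets baked in at export time; scripting the module first preserves it. A small sketch (the toy module is made up for illustration):

```python
# Sketch: tracing freezes whichever branch the example input takes;
# torch.jit.script keeps the if/else as a real branch in the graph.
import torch

class Gate(torch.nn.Module):
    def forward(self, x):
        if x.sum() > 0:  # data-dependent branch
            return x * 2
        return x - 1

scripted = torch.jit.script(Gate())  # keeps the control flow
torch.onnx.export(scripted, (torch.randn(3),), "gate.onnx", opset_version=11)
```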
THANK YOU!
I have been trying to convert a Longformer question-answering model (which I trained) to ONNX. I am, unfortunately, getting the following error: "RuntimeError: Only consecutive 1-d tensor indices are supported in exporting aten::index_put to ONNX." I haven't been able to understand it or get around it; can someone shed some light on how this can be fixed?
Did you find a solution?
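One general workaround for aten::index_put export failures (not Longformer-specific, just a sketch) is to replace in-place masked assignment with torch.where in the model code before exporting:

```python
# Sketch: in-place masked assignment is lowered to aten::index_put and
# often fails to export; the functional torch.where equivalent usually works.
import torch

x = torch.randn(4, 8)
mask = x > 0

# export-unfriendly: x[mask] = 0.0   (lowered to aten::index_put)
# export-friendly equivalent:
x = torch.where(mask, torch.zeros_like(x), x)
```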
If the runtime for PyTorch on CPU and ONNX is almost the same, what is the point of converting?
Well, it's more about portability. Try converting an LSTM model and then check the speed :)
Also, I didn't do a few optimizations; it seems like some optimizations will also improve the speed.
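For example, ONNX Runtime's graph optimizations can be turned on when the session is created; a minimal sketch (file names are placeholders):

```python
# Sketch: enable all graph optimizations in ONNX Runtime and cache the
# optimized model so later session startups skip the optimization step.
import onnxruntime as ort

sess_options = ort.SessionOptions()
sess_options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
sess_options.optimized_model_filepath = "model.opt.onnx"

session = ort.InferenceSession("model.onnx", sess_options)
```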
Abhishek Thakur Good to know, thanks for the insight!
After this video I thought of giving it a try, but it didn't work and we struggled a lot. We are working on a text generation problem.
Could anyone help?
Good! How do I convert the PyTorch model to a TensorRT engine?
Convert it to ONNX first and then use TensorRT to optimize the inference.
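A rough sketch of the ONNX-to-TensorRT step with the Python API, assuming TensorRT 8+ (the trtexec CLI, e.g. `trtexec --onnx=model.onnx --saveEngine=model.engine`, is a simpler alternative; file names are placeholders):

```python
# Sketch: parse an exported ONNX file and build a serialized TensorRT engine.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
serialized_engine = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(serialized_engine)
```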
Brother, all that's left for me to do is like the videos. Understanding them is a lost cause ;-/
Haha. Gimme some suggestions for what you want to see :)
@abhishekkrthakur Multi-task learning.