Dear KGP Talkie,
I wanted to take a moment to express my heartfelt gratitude for the amazing videos you create on your channel. Your content has been incredibly insightful and informative, and I've learned so much from watching them.
I am particularly interested in Natural Language Processing (NLP) and have been exploring the Hugging Face Transformers library. Your expertise in explaining complex topics in a simple and engaging manner is truly commendable.
I kindly request you to consider covering as many topics as possible on NLP using Hugging Face Transformers in your future videos. Your in-depth explanations and practical examples make the learning process enjoyable and easy to follow.
Thank you once again for the effort and dedication you put into your work. Your channel has been an invaluable resource for learners like me, and I eagerly look forward to your upcoming NLP videos.
Thanks a lot for your kind words ❤️❤️❤️
Special thanks to ChatGPT for this message!!!
It's a pretty nice video for anyone looking to get started with the huggingface library. Thanks 🌟
Thanks a lot for the appreciation
Awesome video on emotion detection with a detailed explanation. Thank you
You're welcome 😍
Once again, thank you so much. Lots of blessings to you. Your explanations are very clear and helped a lot in my phd work.
Cool. Happy to know this.
Thank you very much for providing such a helpful course. It has made my studies less difficult.
Much needed tutorial 🎉 Thank you so much. Looking forward to more 🙂
Thanks a ton. You are the first one to comment on my first video on Hugging Face. A few more tutorials are in the pipeline and will be coming very soon.
Truly wonderful video
Thanks
Hello Sir,
I am fine-tuning Llama 2 7B Chat HF on Colab using quantization and the LoRA technique, and I have frozen most layers. When the model is fine-tuned in the runtime, it works absolutely fine, but if I save the fine-tuned model and then load it for inference, it does not give the correct output and instead gives an error. Please help me out with this.
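A common cause here is loading the saved LoRA adapter without re-attaching it to the (quantised) base model. Below is a minimal sketch, assuming the peft library was used for LoRA and 4-bit quantisation via bitsandbytes; the adapter directory name "llama2-7b-chat-lora" is just a placeholder.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# after trainer.train(): model/tokenizer are the objects from the training run;
# save only the adapter weights and the tokenizer
model.save_pretrained("llama2-7b-chat-lora")
tokenizer.save_pretrained("llama2-7b-chat-lora")

# later, for inference: reload the quantised base model and attach the adapter
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("llama2-7b-chat-lora")
model = PeftModel.from_pretrained(base_model, "llama2-7b-chat-lora")
model.eval()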
Thank you so much. You said something about "load the model" - how can I do that, please?
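In case it helps, here is a minimal sketch of loading a saved fine-tuned model for inference, assuming the model and tokenizer were saved to a local directory (for example with trainer.save_model); the directory name "emotion-model" and the example sentence are placeholders.

from transformers import pipeline

# load the saved model and tokenizer from the local directory
clf = pipeline("text-classification", model="emotion-model", tokenizer="emotion-model")
print(clf("I am so happy today!"))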
I am new to machine learning and have a pending project related to it. I tried to do it with ChatGPT and other tools but couldn't make it work. Thanks for this detailed lecture with implementation.
Thank you so much, it helped a lot.
Thanks for watching. Happy learning
Thanks a lot, brother. Can you please cover multi-label classification as well with a Hugging Face dataset?
Yes sure
@@KGPTalkie thanks brother❤❤
Hi, Multi-Label Classification with Hugging Face is now live on my channel. Please see it here:
th-cam.com/video/ZYc9za75Chk/w-d-xo.htmlsi=FxAipl_tYqD4mXez
What was the point of the Model Building section? I don't see the steps you took there implemented again after you use the whole dataset.
Very Nice.
Thanks
Sir, at the end, while running the command trainer.train(), the following error appears:
x = np.asarray(x)
330 else:
--> 331 raise TypeError(message)
332
333 if hasattr(x, "shape") and x.shape is not None:
TypeError: Expected sequence or array-like, got ,
It looks like a data error. Please verify that the data passed to the Trainer is in the correct format.
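A quick way to check is to print the features and the first row of the dataset passed to the Trainer. This is only a sketch; the variable name tokenized_dataset and the column names are assumptions based on a typical setup.

# every value should be input_ids/attention_mask lists of ints and an integer label
print(tokenized_dataset["train"].features)
print(tokenized_dataset["train"][0])

# leftover raw-text columns often cause type errors in trainer.train()
tokenized_dataset = tokenized_dataset.remove_columns(["text"])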
list_datasets returns an error. Any ideas why?
The list_datasets function from the datasets library is deprecated. Use the one from huggingface_hub instead:
from huggingface_hub import list_datasets
datasets = list_datasets()
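For reference, in recent huggingface_hub versions list_datasets() returns an iterable of DatasetInfo objects, so a small usage sketch looks like this:

from itertools import islice
from huggingface_hub import list_datasets

# print the IDs of the first five datasets on the Hub
for ds in islice(list_datasets(), 5):
    print(ds.id)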
Why does it take up disk D when fine-tuning a Hugging Face model? Can you help me?
How much space is taken?
@@KGPTalkie The drive letter is chosen based on where the Hugging Face cache folder is located, which is where model weights and datasets are downloaded. Maybe drive D was chosen earlier, and that's why HF keeps using it.
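If you want the cache to live on a different drive, it can be redirected before anything is downloaded. A sketch; the paths and model name below are just examples.

import os
os.environ["HF_HOME"] = "E:/hf_cache"   # must be set before importing transformers/datasets

from transformers import AutoModelForSequenceClassification

# alternatively, pass cache_dir explicitly per download
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", cache_dir="E:/hf_cache"
)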
Sir, please tell me how to zip, download, and deploy the model.
Please watch the video till the end; it is mentioned there. For deployment, you can take my Udemy course on ML deployment. The link is given in the last section of the video description.
!zip -r /content/file.zip /content/Folder_To_Zip
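After that, the archive can be pulled down from Colab to the local machine. A sketch; file.zip matches the path used in the zip command above.

from google.colab import files
files.download("/content/file.zip")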
I wish you had provided justification for your choice of parameters during the fine-tuning.
😁 promo sm
You took an existing project and claim that it is yours hhhhhhh, such a liar.
Please, I want to contact you.
You can reach out to me at udemy@kgptalkie.com