Dr. Niraj Kumar (PhD, Computer Science)
India
Joined 25 Mar 2016
Welcome to a collection of newly designed tutorials, insights, surveys, and research discussions (100% free and not for commercial use)
About Myself: (Passionate about Research and Teaching)
Visit my website for more technical material: www.nirajai.com/
Copyright © 2022 Deep Learning & AI (The content used on this website is not for commercial use.)
DoRA: Weight-Decomposed Low-Rank Adaptation
Contains:
1. DoRA Finetuning Step-by-Step
2. Code Walkthrough: DoRA Finetuning
Note: Code is available at: www.quantacosmos.com/2024/07/finetune-large-language-models-with.html
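As a rough illustration of the step-by-step idea (not the code from the linked post), DoRA re-parameterizes each pretrained weight matrix into a trainable per-column magnitude and a direction, and applies the LoRA-style low-rank update only to the direction. A minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 6, 4, 2

W0 = rng.normal(size=(d_out, d_in))   # frozen pretrained weight
B = np.zeros((d_out, r))              # LoRA "up" matrix, zero-initialized
A = rng.normal(size=(r, d_in))        # LoRA "down" matrix

# DoRA decomposes W0 into magnitude (per-column norm) and direction.
m = np.linalg.norm(W0, axis=0, keepdims=True)   # trainable magnitude, shape (1, d_in)

def dora_weight(W0, m, B, A):
    """Merged DoRA weight: magnitude times the unit-norm direction of (W0 + BA)."""
    V = W0 + B @ A                                   # low-rank-updated direction
    return m * V / np.linalg.norm(V, axis=0, keepdims=True)

W = dora_weight(W0, m, B, A)
# Column norms of the merged weight equal the magnitude vector m.
print(np.allclose(np.linalg.norm(W, axis=0, keepdims=True), m))  # True
```

With B initialized to zero the merged weight equals W0 exactly, so training starts from the pretrained model, just as in plain LoRA.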
Views: 146
Videos
Quantized Low-Rank Adaptation ‘QLORA’
Views: 96 · 3 months ago
Contains: 1. LoRA vs QLoRA. 2. QLoRA Explanation. References: 1. Hu, Edward J., Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. "LoRA: Low-rank adaptation of large language models." arXiv preprint arXiv:2106.09685 (2021). 2. Dettmers, Tim, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. "QLoRA: Efficient finetuning of quantized LLMs." Advances...
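QLoRA's key idea is storing the frozen base weights in a 4-bit data type while training LoRA adapters on top. As a rough illustration only (using uniform absmax int4 instead of the actual NF4 data type from the QLoRA paper), blockwise 4-bit quantization can be sketched as:

```python
import numpy as np

def quantize_4bit_absmax(w, block_size=64):
    """Blockwise absmax 4-bit quantization (simplified stand-in for NF4).
    Each block stores 4-bit integer codes plus one float scale."""
    w = w.reshape(-1, block_size)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0   # int4 range: -7..7
    codes = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
    return codes, scale

def dequantize(codes, scale):
    return (codes * scale).reshape(-1)

rng = np.random.default_rng(0)
w = rng.normal(0, 0.02, size=4096)       # pretend layer weights
codes, scale = quantize_4bit_absmax(w)
w_hat = dequantize(codes, scale)
err = np.abs(w - w_hat).max()            # rounding error bounded by scale/2
print(err < 0.01)                        # True
```

The real QLoRA additionally uses a non-uniform (normal-quantile) code book, double quantization of the scales, and paged optimizers; this sketch only shows the block/scale bookkeeping.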
Finetuning LLMs with LORA
Views: 99 · 3 months ago
Contains: 1. Finetuning Strategies. 2. Finetuning Code Demonstration. Code is available at: www.quantacosmos.com/2024/06/lora-qlora-and-fine-tuning-large.html References: 1. Hu, Edward J., Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. "LoRA: Low-rank adaptation of large language models." arXiv preprint arXiv:2106.09685 (2021). 2. Dettmers, Tim, A...
Low-Rank Adaptation 'LORA'
Views: 154 · 3 months ago
Contains: 1. Basics and Requirements of Fine-Tuning. 2. LoRA Step-By-Step. References: 1. Hu, Edward J., Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. "LoRA: Low-rank adaptation of large language models." arXiv preprint arXiv:2106.09685 (2021). 2. Dettmers, Tim, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. "QLoRA: Efficient finetuning of...
Fine-Tuning Pretrained LLMs Locally
Views: 298 · 3 months ago
Contains: 1. Basics and Requirements of Fine-Tuning. 2. System Work-Flow: Fine-Tune Llama-3. 3. Code-Demonstration.
Download and Use Llama-3 Locally
Views: 252 · 3 months ago
Contains: one of the easiest processes (other than using Llama-3 through Ollama). Code-Demonstration. Steps to download Meta-Llama3: 1. Install the Hugging Face CLI: pip install -U "huggingface_hub[cli]" 2. Create a Hugging Face account if you don’t have one (huggingface.co/) 3. Accept the model’s conditions and privacy policy for Llama-3-8B-Instruct (huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct). Wa...
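The steps above can be condensed into a short command sequence (a sketch, assuming you have already accepted the model licence on the Hugging Face model page and created a read token):

```shell
# 1. Install the Hugging Face CLI
pip install -U "huggingface_hub[cli]"

# 2. Authenticate with a read token from huggingface.co/settings/tokens
huggingface-cli login

# 3. Download the gated weights into a local folder
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct \
    --local-dir ./Meta-Llama-3-8B-Instruct
```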
One-Shot LLM + RAG with Knowledge Graph
Views: 85 · 3 months ago
Contains: 1. Basics of Knowledge Hyper-Graph based One-Shot LLM-RAG Techniques. 2. System Work-Flow. 3. Code-Demonstration. Source code is available at: www.quantacosmos.com/2024/06/one-shot-llm-rag-with-knowledge-graph.html
Zero-Shot LLM-RAG With Knowledge Graph
Views: 74 · 3 months ago
Contains: 1. Basics of Knowledge Hyper-Graph based Zero-Shot LLM-RAG Techniques. 2. System Work-Flow. 3. Code-Demonstration. Source code is available at: www.quantacosmos.com/2024/06/zero-shot-llm-rag-with-knowledge-graph.html
Knowledge Hyper Graph with LLM-RAG
Views: 126 · 3 months ago
Contains: 1. Basics of Knowledge Hyper-Graph based RAG Techniques. 2. System Work-Flow. 3. Code-Demonstration. Source code is available at: www.quantacosmos.com/2024/06/knowledge-hyper-graph-with-llm-rag.html
Using Knowledge Graph with LLM-RAG
Views: 75 · 3 months ago
Contains: 1. Basics of Knowledge Graph based RAG Techniques. 2. System Work-Flow. 3. Code-Demonstration. Source code is available at: www.quantacosmos.com/2024/06/using-knowledge-graph-with-llm-rag.html
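The core retrieval step in knowledge-graph RAG can be sketched in pure Python (an illustration only; the videos use Llama-3 for the generation step, which is stubbed out here as a prompt string). Facts live as subject/predicate/object triples, and retrieval pulls the triples whose entities appear in the question:

```python
# Toy knowledge graph as (subject, predicate, object) triples.
triples = [
    ("LoRA", "is_a", "fine-tuning method"),
    ("LoRA", "freezes", "pretrained weights"),
    ("QLoRA", "extends", "LoRA"),
    ("QLoRA", "uses", "4-bit quantization"),
]

def retrieve(query, triples):
    """Return every triple whose subject or object is mentioned in the query."""
    q = query.lower()
    return [t for t in triples if t[0].lower() in q or t[2].lower() in q]

def build_prompt(query, facts):
    """Assemble the retrieved facts into a grounding prompt for the LLM."""
    context = "\n".join(f"{s} {p.replace('_', ' ')} {o}" for s, p, o in facts)
    return f"Answer using these facts:\n{context}\n\nQuestion: {query}"

facts = retrieve("What does QLoRA add on top of LoRA?", triples)
print(len(facts))  # 4: every triple mentions LoRA or QLoRA
```

A real pipeline would match entities with embeddings or graph traversal rather than substring tests, but the shape (extract entities, fetch connected triples, ground the prompt) is the same.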
Graph Based RAG (Retrieval Augmented Generation) Techniques PART-2
Views: 128 · 4 months ago
Contains (Implementation): 1. Easy offline demonstration of Graph-Based RAG using an LLM (Llama-3). Please find the code at: www.quantacosmos.com/2024/06/rag-retrieval-augmented-generation-llm.html
Graph Based RAG (Retrieval Augmented Generation) Techniques PART-1
Views: 197 · 4 months ago
Contains: 1. Basics of Graph-based RAG Techniques. 2. GraphRAG Introduction. Please find the code at: www.quantacosmos.com/2024/06/rag-retrieval-augmented-generation-llm.html
How to construct Flow-Diagram by Using LLM + RAG
Views: 361 · 4 months ago
Contains: 1. Generating Mermaid code for a given text using an LLM (Llama-3) and RAG. 2. Converting Mermaid code to a flow diagram. Please find the code at: www.quantacosmos.com/2024/06/rag-retrieval-augmented-generation-llm.html
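The conversion target here is just text: Mermaid flowcharts are plain strings that a renderer turns into diagrams. As a minimal sketch (in the video an LLM emits this code from a document; here we build it directly from an ordered list of steps):

```python
def steps_to_mermaid(steps):
    """Turn an ordered list of steps into Mermaid flowchart code."""
    lines = ["graph TD"]
    for i in range(len(steps) - 1):
        # Each step becomes a node; consecutive steps are linked with an arrow.
        lines.append(f'    S{i}["{steps[i]}"] --> S{i + 1}["{steps[i + 1]}"]')
    return "\n".join(lines)

code = steps_to_mermaid(["Load text", "Chunk + embed", "Retrieve", "Generate answer"])
print(code)
```

Pasting the printed string into any Mermaid renderer (e.g. mermaid.live) draws the four-node flow diagram.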
How to use LLM + RAG to Construct Knowledge Graph
Views: 148 · 4 months ago
Contains: 1. Simplest way to construct a Knowledge Graph using an LLM (Llama-3) and RAG (Retrieval Augmented Generation). 2. Display of the Knowledge Graph. 3. Complete coding demo. Please find the code at: www.quantacosmos.com/2024/06/rag-retrieval-augmented-generation-llm.html
RAG (Retrieval Augmented Generation) with LLM
Views: 109 · 4 months ago
Contains: 1. Why do we need RAG? 2. Working of RAG. 3. Detailed coding demonstration of RAG using Llama-3 (offline LLM system). Please find the code at: www.quantacosmos.com/2024/06/rag-retrieval-augmented-generation-llm.html
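The "working of RAG" boils down to: score stored chunks against the question, keep the top ones, and hand them to the LLM. A self-contained sketch using bag-of-words cosine similarity in place of a real embedding model (an illustration of the retrieval step only, not the video's code):

```python
import math
from collections import Counter

docs = [
    "LoRA adds low-rank adapter matrices to a frozen model.",
    "Mermaid renders flow diagrams from plain-text code.",
    "RAG retrieves relevant chunks and feeds them to the LLM.",
]

def bow(text):
    """Bag-of-words vector as a word -> count mapping."""
    return Counter(text.lower().replace(".", "").replace("?", "").split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def retrieve(query, docs, k=1):
    """Rank documents by similarity to the query, keep the top k."""
    q = bow(query)
    return sorted(docs, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

best = retrieve("How does RAG feed retrieved chunks to the LLM?", docs)[0]
print(best)  # the RAG document scores highest
```

In the real pipeline the word counts are replaced by dense embeddings and the top chunks are prepended to the prompt, but the retrieve-then-generate shape is identical.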
Wasserstein GAN Part-3 (Architecture and Implementation)
Views: 76 · 6 months ago
Wasserstein GAN Part-2(Wasserstein Distance - Details)
Views: 69 · 6 months ago
Wasserstein GAN Part-1(KL-Divergence Vs Jensen-Shannon Divergence Vs Wasserstein Distance)
Views: 219 · 6 months ago
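The comparison in the Part-1 title can be made concrete with a small numeric example (an illustration with point-mass distributions, not the video's material). For distributions with disjoint supports, KL divergence is infinite and JS divergence saturates at log 2 regardless of how far apart the masses are, while the Wasserstein-1 distance still grows with the distance the mass must travel, which is exactly why WGAN training gets a useful gradient:

```python
import math

def kl(p, q):
    """KL divergence for discrete distributions (infinite if q=0 where p>0)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js(p, q):
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def w1(p, q, support):
    """1-D Wasserstein-1 distance: integral of |CDF_p - CDF_q|."""
    dist, cp, cq = 0.0, 0.0, 0.0
    for i in range(len(support) - 1):
        cp += p[i]; cq += q[i]
        dist += abs(cp - cq) * (support[i + 1] - support[i])
    return dist

support = [0, 1, 2, 3]
p = [1.0, 0.0, 0.0, 0.0]       # all mass at x=0
q_near = [0.0, 1.0, 0.0, 0.0]  # all mass at x=1
q_far = [0.0, 0.0, 0.0, 1.0]   # all mass at x=3

# JS cannot tell "near" from "far" once supports are disjoint...
print(js(p, q_near), js(p, q_far))            # both log(2)
# ...but W1 grows with the transport distance.
print(w1(p, q_near, support), w1(p, q_far, support))  # 1.0 3.0
```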
Use of Long Text Sequences with LLM’s Trained on Shorter Part-3 RoFormer-Rotary Positional Embedding
Views: 96 · 6 months ago
Use of Long Text Sequences with LLM’s Trained on Shorter, Part-2 (Attention with Linear Biases)
Views: 151 · 6 months ago
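ALiBi (Attention with Linear Biases), the Part-2 topic, replaces positional embeddings with a fixed penalty added to attention scores: score(i, j) gets -m·(i - j), so distant keys are down-weighted, and because the penalty is defined for any distance, the model extrapolates to sequences longer than it was trained on. A small NumPy sketch of the bias matrix (slopes follow the paper's geometric sequence):

```python
import numpy as np

def alibi_bias(seq_len, num_heads):
    """Per-head ALiBi bias tensor to add to raw attention scores.
    Head h uses slope 2^(-8(h+1)/num_heads); bias is 0 on the diagonal
    and increasingly negative for more distant (earlier) keys."""
    slopes = np.array([2 ** (-8 * (h + 1) / num_heads) for h in range(num_heads)])
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    rel = np.where(j <= i, j - i, 0)                 # 0, -1, -2, ... along each row
    return slopes[:, None, None] * rel[None, :, :]   # shape (heads, seq, seq)

bias = alibi_bias(seq_len=5, num_heads=4)
print(bias.shape)       # (4, 5, 5)
print(bias[0, 4, 0])    # most distant key, first head: -1.0
```

At inference you simply build a larger bias matrix for the longer sequence; no learned position table needs to be resized.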
Use of Long Text Sequences with LLM’s Trained on Shorter Text Sequences Part-1
Views: 146 · 6 months ago
Generative Adversarial Network GAN Part-1
Views: 113 · 7 months ago
Generative Adversarial Network GAN Part-3
Views: 84 · 7 months ago
Generative Adversarial Network GAN Part-2
Views: 55 · 7 months ago
"Improved Multi-Step, Multi-Variate, and Spatiotemporal 5G Data Usage Forecasting.."
Views: 78 · 9 months ago
L2-Norm and Unit-Sphere for Contrastive/Self Supervised Learning
Views: 120 · 1 year ago
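The point of the L2-norm/unit-sphere video's topic is that contrastive losses (SimCLR, MoCo, and similar) L2-normalize embeddings first, which places every vector on the unit hypersphere so that a plain dot product equals cosine similarity and the temperature-scaled logits stay bounded. A minimal sketch of the normalization step:

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    """Project embeddings onto the unit hypersphere; after this,
    dot product == cosine similarity."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))   # a batch of raw embeddings
z = l2_normalize(z)

print(np.allclose(np.linalg.norm(z, axis=1), 1.0))   # all on the unit sphere
sim = z @ z.T                   # cosine similarity matrix, entries in [-1, 1]
print(np.allclose(np.diag(sim), 1.0))                # self-similarity is 1
```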
Very good, sir
Dr. Niraj, good video. Now that it is fine-tuned, could you also extend this video to cover evaluation of the model, along with metrics and an explanation?
Great tutorial😊. Could you please make a similar video on QnA fine tuning?
fantastic
Share these steps sir Thank you
Awesome content, professor. Could you please order the videos chronologically in the playlist, with the first video of the course at the top and the last video at the bottom? Same for all the other playlists.
Thanks @Teetanthegamer. I have tried to order them through my website: www.nirajai.com/home/llm
@@DrNirajRKumar Can you please do it on YouTube as well?
Insanely useful
Helpful
thank you very much sir... very much enjoyed
Thank you very much.
Please make a coding video with Python
Thank you sir for your effort. 🙏🙏 Clear explanation
Good explanation
Sir, thank you for all your efforts. Your videos are very good and useful to understand difficult concepts.
good explanation
Thanks and welcome
Amazing! Thank you. Do you know if there is any way to add subtitles? The video is not generating them. Thank you.
Thanks. I am trying to solve the subtitle related issue.
thank you so much. I was so confused but now got a clear idea
Thank you sir
very detailed explanation
Awesome tutorial
Thanks man this helped out👊
this video is gold
Could you please reverse the playlist ? it is very difficult to autoplay.
Please go through the following links for - listing of all topics in highly organized way: 1. Deep Learning: www.nirajai.com/home/deep-learning 2. Advanced Deep Learning: www.nirajai.com/home/advanced-deep-learning 3. Deep Learning for Graph: www.nirajai.com/home/deep-learning-for-graph 4. Quantum Deep learning: www.nirajai.com/home/quantum-deep-learning 5. Machine Learning: www.nirajai.com/home/machine-learning
Your way of speaking is professional and just like IIT professors.
Thank you Sir!
Thank you Doctor. The video I have been waiting for long time.
Very nice presentation, informative. Thanks. Can you explain what limits the reversibility in electrical and optical systems?
Noted Thanks
@@DrNirajRKumar Can you please tell what limits the reversibility?
@@Krishna16789 I think this person has explained things in a very simple way: www.linkedin.com/pulse/computational-reversibility-quantum-computing-sa%C5%A1a-savi%C4%87 Due to time constraints, I may not be able to go into more depth on this topic, but in the future I will try to compile this topic with appropriate depth. Other resources: 1. arxiv.org/pdf/2301.09679.pdf 2. www.nature.com/articles/s41567-022-01873-9 3. arxiv.org/pdf/2301.06838.pdf Many more deal with the limits and reversibility of entanglement and other quantum components (useful for quantum computation).
While calculating the revised probability, some people add learning_rate * weight to the previous log-odds. Is that correct? Do we add to the residuals or to the log-odds?
Please, can you share the Python code?
can we implement HAN on sentence classification?
Nice explanation! I was looking for such an explanation of this paper. Thank you!
Great explanation! Thanks
Fantastic explanation!! Thanks so much. Unbelievable you got so few likes for the best self-attention explanation on the internet.
Very good explanation, thanks
Very good sir but please try to take example of dataset
Good
Thank you for such a nice video, But there's something not yet clear to me, when you have an imbalanced dataset, which is better macro or micro f1-score?
Nice work sir
Sir, please make videos on the VGG16/19, ResNet, and Inception architectures 🙏 sir
During inferencing (predicting for new test data points), can we use parallelization to get the output from each individually constructed tree?
Generally, boosting is known for sequential learning, but you can achieve parallelization during the prediction process, and even during training. For example: 1. ieeexplore.ieee.org/document/8890990 2. scikit-learn.org/stable/modules/ensemble.html Many other references are available.
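The parallel-prediction point in the reply above can be sketched in a few lines: once a boosted ensemble is trained, each tree's contribution to a prediction is independent of the others, so they can be evaluated concurrently and summed. (The "trees" here are stand-in functions, not real fitted decision trees.)

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in "trees": in a real model these would be fitted decision trees
# whose individual predict() calls are independent at inference time.
trees = [lambda x, s=s: s * x for s in (0.5, 0.25, 0.125)]

def boosted_predict(x, trees, learning_rate=1.0):
    """Evaluate every tree concurrently, then sum the contributions."""
    with ThreadPoolExecutor() as pool:
        contributions = pool.map(lambda t: t(x), trees)
        return learning_rate * sum(contributions)

print(boosted_predict(10.0, trees))  # 0.5*10 + 0.25*10 + 0.125*10 = 8.75
```

Training remains sequential (each tree fits the residuals of the previous ones), which is why the parallelism shows up most naturally at prediction time.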
Great explanation sir!
Sir, please make detailed videos on the VGG16/19, ResNet, and Inception architectures 🙏 sir
Thanks Piyush.
Sir, please make detailed videos on the VGG16/19, ResNet, and Inception architectures 🙏 sir
Thanks, I will try, but I am very busy these days. I hope to start posting videos soon. Thanks again.
Excellent explanation!
Nice one sir
Thanks Rimjhim
Can i get these slides sir
how can I plot a DBN model in Python?