Complete Generative AI With Azure Cloud Open AI Services Crash Course
- Published Jul 8, 2024
- Azure OpenAI Service provides access to OpenAI's powerful large language models, such as the GPT, Codex, and Embeddings model families that power ChatGPT. These models enable natural language processing (NLP) solutions that understand, converse, and generate content. Users can access the service through REST APIs, SDKs, and Azure OpenAI Studio.
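As a rough illustration of the REST access path mentioned above, here is a minimal sketch of building an Azure OpenAI chat-completions request. The resource name, deployment name, and API version below are placeholders you would replace with your own values; note that Azure OpenAI routes requests per *deployment*, unlike the plain OpenAI API, which takes a model name directly.

```python
import json

# Placeholder values: RESOURCE and DEPLOYMENT are whatever you named your
# Azure OpenAI resource and model deployment in the portal.
RESOURCE = "my-resource"
DEPLOYMENT = "my-gpt-deployment"
API_VERSION = "2024-02-01"

# Azure OpenAI endpoints are scoped to a deployment, not a raw model id.
url = (
    f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
    f"{DEPLOYMENT}/chat/completions?api-version={API_VERSION}"
)

payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is Azure OpenAI Service?"},
    ],
    "temperature": 0.7,
}

# Send this body with any HTTP client; an "api-key" header is required.
body = json.dumps(payload)
```

The same request can be made through the `openai` Python SDK's Azure client, which is what the course's SDK sections use.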
Code github: github.com/sunnysavita10/azur...
Subscribe to Sunny YT Channel: @sunnysavita10
Timestamp:
00:00:00 Introduction
00:01:00 Introduction of the Mentor
00:01:38 Overview of the Syllabus
00:03:30 Azure AI Service Introduction
00:09:44 Differences Between the Azure OpenAI API and the OpenAI API
00:14:23 Creating an Azure OpenAI Resource
00:29:04 Model Deployment
00:41:00 Using DALL-E for Image Generation
00:49:06 Chat Completion API Using the Python SDK with the Azure OpenAI Client
00:52:58 Creating a QA Chatbot Using Your Own Custom Dataset
01:07:45 Audio or Video to Text Using the Whisper Model
01:20:10 Function Calling Using the Azure OpenAI API
01:23:52 Function Calling Implementation Using Python
01:50:04 Fine-Tuning Using an Azure OpenAI Model
02:09:35 Introduction to Azure AI Studio
02:24:23 Creating a Custom RAG Chatbot Using Azure AI Studio
02:29:50 Setting Up Flask for the Custom RAG Chatbot
02:36:38 RAG Architecture Explained
02:48:17 Deployment of the RAG Chatbot Application Using Azure AI Studio
-------------------------------------------------------------------------------------------------
Support me by joining membership so that I can upload these kinds of videos
/ @krishnaik06
-----------------------------------------------------------------------------------
►GenAI on AWS Cloud Playlist: • Generative AI In AWS-A...
►LlamaIndex Playlist: • Announcing LlamaIndex ...
►Google Gemini Playlist: • Google Is On Another L...
►Langchain Playlist: • Amazing Langchain Seri...
►Data Science Projects:
• Now you Can Crack Any ...
►Learn In One Tutorials
Statistics in 6 hours: • Complete Statistics Fo...
Machine Learning In 6 Hours: • Complete Machine Learn...
Deep Learning in 5 Hours: • Deep Learning Indepth ...
►Learn In a Week Playlist
Statistics: • Live Day 1- Introducti...
Machine Learning : • Announcing 7 Days Live...
Deep Learning: • 5 Days Live Deep Learn...
NLP : • Announcing NLP Live co...
---------------------------------------------------------------------------------------------------
My Recording Gear
Laptop: amzn.to/4886inY
Office Desk : amzn.to/48nAWcO
Camera: amzn.to/3vcEIHS
Writing Pad: amzn.to/3OuXq41
Monitor: amzn.to/3vcEIHS
Audio Accessories: amzn.to/48nbgxD
Audio Mic: amzn.to/48nbgxD
Let's keep the target of 2,000 likes. Also, don't forget to subscribe to his YT channel.
Hi sir, I want to create an AI-powered chat application that queries an MS SQL Server database. Is it possible to implement this using free AI models? I also don't want to download models on my local system. Please make a video on this if feasible.
He should be awarded Bharat Ratna for his outstanding contribution to AI learning in India
He should be made PM
@@yogeshyts that works too
Worth the time... a good course covering most of the concepts overall...
This is what I have been looking for for a long time. Thank you guys
really ... me too
Thanks Sunny and Krish =)
❤ from Malaysia!!!! , never stop learning and always Day1
needed this for a project
On point Knowledge ❤
Whoever's target is to complete this within 3 days, hit like and share your thoughts. I am learning
I am listening in one go and will then revisit it topic by topic
Thanks krish , It helps us a lot :)
OMG, so cool, Thanks a lot :)
Krish, you have been showing us so many things created by others, like LangChain, LlamaIndex, CrewAI, etc. I think now it's time for you to create a tool or a framework yourself. I am sure you will do great... Good luck!
Hi Krish,
Thank you so much for sharing your knowledge through your videos. They have been incredibly helpful in my learning journey. I appreciate all the effort you put into them. I have a few questions that I believe will also benefit the community:
Best Practices for Deployment: What are the best practices for deploying an open-source LLM (Large Language Model) to handle multiple requests from different users simultaneously?
Handling Multiple Requests: How can I deploy an open-source LLM to efficiently handle many requests from different users at the same time?
Reducing Response Time: What strategies can I use to reduce the response time of the deployed LLM?
Accuracy with RAG Implementation: How can I check the accuracy of an LLM model when using Retrieval-Augmented Generation (RAG) implementation?
Creating Instruction-Response Pairs: How can I convert raw text into instruction and response pairs for fine-tuning models like Mistral, LLaMA, or any other open-source model?
I believe a video addressing these questions and providing solutions would be very helpful for the community.
Thank you once again for your support and guidance.
Best regards,
Jay
1. Best Practices for Deployment: Deploying a Large Language Model (LLM) in a production environment to handle multiple requests simultaneously requires careful planning. Here are some best practices:
Load Balancing: Use a load balancer to distribute incoming requests across multiple instances of your model. This can help ensure that no single instance becomes a bottleneck.
Scaling: Use auto-scaling to automatically adjust the number of model instances based on the load. This can help ensure that you have enough capacity to handle peaks in demand, while not wasting resources during quieter periods.
Monitoring and Logging: Implement comprehensive monitoring and logging to track the performance of your model and to quickly identify and resolve any issues.
2. Handling Multiple Requests: To handle multiple requests efficiently, consider using asynchronous processing or queuing systems. This allows your service to accept a request, place it in a queue, and then process it when resources are available, rather than forcing each request to wait for the previous one to complete.
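The queue-based approach described in point 2 can be sketched with `asyncio`: requests are accepted immediately, queued, and processed by a fixed pool of workers, so no single slow inference call blocks the others. `fake_llm_call` is a hypothetical stand-in for a real inference client.

```python
import asyncio

async def fake_llm_call(prompt: str) -> str:
    # Stand-in for a real model call; replace with your inference client.
    await asyncio.sleep(0.01)
    return f"answer to: {prompt}"

async def worker(queue: asyncio.Queue, results: dict) -> None:
    # Each worker pulls prompts off the shared queue until cancelled.
    while True:
        prompt = await queue.get()
        results[prompt] = await fake_llm_call(prompt)
        queue.task_done()

async def serve(prompts: list, n_workers: int = 4) -> dict:
    queue: asyncio.Queue = asyncio.Queue()
    results: dict = {}
    workers = [asyncio.create_task(worker(queue, results)) for _ in range(n_workers)]
    for p in prompts:
        queue.put_nowait(p)   # accept the request without blocking the caller
    await queue.join()        # wait until every queued request is processed
    for w in workers:
        w.cancel()
    return results

results = asyncio.run(serve(["q1", "q2", "q3"]))
```

In production the in-memory queue would typically be replaced by a broker such as a message queue, but the accept-then-process pattern is the same.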
3. Reducing Response Time: To reduce response time, consider the following strategies:
Optimization: Optimize your model for inference. This could involve techniques like quantization, pruning, or using a more efficient implementation of the model.
Caching: Implement caching for common requests. If you notice that certain requests are made frequently, you can store the results in a cache and return them immediately on subsequent requests.
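The caching strategy above can be sketched with `functools.lru_cache`, which keys on the exact prompt string; a production cache (e.g. Redis) would also handle TTLs and prompt normalization. `expensive_llm_call` here is a hypothetical stand-in for the real model call.

```python
from functools import lru_cache

CALLS = 0  # counts actual model invocations, to show the cache working

def expensive_llm_call(prompt: str) -> str:
    # Stand-in for a real (slow, billed) model call.
    return f"response for: {prompt}"

@lru_cache(maxsize=1024)
def cached_answer(prompt: str) -> str:
    global CALLS
    CALLS += 1
    return expensive_llm_call(prompt)

cached_answer("what is RAG?")
cached_answer("what is RAG?")   # served from cache, no second model call
```

Note that caching only helps when identical prompts recur; for paraphrased queries a semantic cache keyed on embeddings is a common extension.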
4. Accuracy with RAG Implementation: To check the accuracy of an LLM when using Retrieval-Augmented Generation (RAG), you can use a validation set of data. Run the model on the validation data and compare the model's outputs to the expected outputs. You can then calculate accuracy metrics based on this comparison.
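The validation-set comparison in point 4 can be sketched as a simple exact-match accuracy loop. Real RAG evaluation usually adds softer metrics (token F1, semantic similarity, faithfulness), but the structure of the loop is the same.

```python
def exact_match_accuracy(predictions: list, references: list) -> float:
    """Fraction of predictions matching the reference, ignoring case/whitespace."""
    assert len(predictions) == len(references)
    hits = sum(
        p.strip().lower() == r.strip().lower()
        for p, r in zip(predictions, references)
    )
    return hits / len(references)

# Toy validation set: model outputs vs. expected answers.
preds = ["Paris", "42", "blue whale"]
refs = ["paris", "41", "Blue Whale"]
acc = exact_match_accuracy(preds, refs)  # 2 of 3 match ignoring case
```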
5. Creating Instruction-Response Pairs: To convert raw text into instruction and response pairs for fine-tuning models, you can use a script that processes the raw text and splits it into pairs. The specifics will depend on the format of your raw text and the requirements of the model you're using. Typically, you would use a natural language processing (NLP) library like NLTK or spaCy to help with this.
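As a minimal sketch of point 5: the script below turns raw `Q: ... A: ...` text into instruction/response pairs serialized as JSONL, the line-per-example format fine-tuning endpoints commonly expect. The `Q:`/`A:` marker format is an assumption about the corpus; adapt the regex to whatever structure your raw text actually has.

```python
import json
import re

RAW = """Q: What is Azure OpenAI? A: A managed service exposing OpenAI models.
Q: What is RAG? A: Retrieval-Augmented Generation."""

def to_pairs(raw: str) -> list:
    """Extract {"instruction": ..., "response": ...} dicts from Q:/A: text."""
    pairs = []
    # Non-greedy capture up to the "A:" marker; "." stops at line ends.
    for m in re.finditer(r"Q:\s*(.+?)\s*A:\s*(.+)", raw):
        pairs.append({"instruction": m.group(1), "response": m.group(2)})
    return pairs

pairs = to_pairs(RAW)
jsonl = "\n".join(json.dumps(p) for p in pairs)  # one training example per line
```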
Thanks for the help
Liking the session🤙
Thank you I requested.
Good work ❤
Brilliant
First comment ..🎉 great work sunny Savita
Hey am here too!
Thank you for the detailed session; it covered all the points. Could you please also provide a session on Copilot? It would be really helpful.
Can we use Azure for free, just for project development to build a better resume?
Please guide me if possible. I am a big fan of Krish's channel and his immense contribution. Thanks from the core of my heart for teaching us ❤
What are most people currently learning, development or AI/ML? Which will have better job prospects in the future?
It is asking for a company email address for creating an Azure OpenAI account, which students can't have. Is there any other way, sir?
What should I do if I don't have a private email? I can't create an MS Azure OpenAI account :(
Hi Krish, I am a MERN full-stack developer. What should I learn, and from where, to become a full-stack AI developer and match the job market?
I see you are first creating a model via the GUI and then writing code to call it via the API. Do we have to create a model via the GUI first? I am wondering how to create the infrastructure and deploy automatically from one environment to another via scripts, with nothing manual in the GUI.
2:34:17 and 2:36:13 - Where is it storing vectors or embeddings? Cosmos DB? There is no reference of Cosmos DB in RAG application built in this video?
We are using Azure Blob Storage; it gives the capability of vector search.
@@sunnysavita10 Thanks. Is it using Integrated Vectorization?
@@Momentum_Option_Buyer yup
2:17:33 Is Azure AI Studio a framework like LangChain?
How much is the subscription cost for using Azure OpenAI (in INR)?
We need more courses where we are integrating open-source LLM models on HF/Colab.
Already on track to upload the videos
Can we give images as an input from the user to this chatbot???
Are you planning to upload an Azure GenAI course created by you? You said you are going to work on it in parallel with AWS.
Sir, requesting you to create a video on guardrails using Guardrails AI and NeMo Guardrails with Mistral or Llama models.
🙏💯👍
Azure OpenAI has restricted access; I am not getting any access with my regular Azure account.
Sir, can I apply for the AI-900 certification after this course?
Can you do one on GCP?
Will be uploaded soon.
Please sir, make a Hindi playlist too.
Are freshers getting jobs in generative AI?
Good job. However, it's too lengthy, and I felt the trainer was overacting and dragging a bit. Make it small nuggets so it will be more interesting and to the point. It took just 5 minutes for me to get the big picture from here: th-cam.com/video/3SRh2nzN2DM/w-d-xo.html. No offense, it's my viewpoint.
Why do you post a cringe face on a thumbnail; is it just to annoy people? I just clicked on the video to say this.