Deploy LLMs (Large Language Models) on AWS SageMaker using DLC

  • Published Nov 23, 2024

Comments • 98

  • @UpendraKumar-zc8lm • 1 year ago +6

    Great video, sir. I was looking for exactly this for my client POC and fortunately landed on your videos. Thanks a ton for your efforts.

    • @AIAnytime • 1 year ago +1

      Glad to hear that... keep learning and growing.

    • @sravantipris3544 • 6 months ago +1

      Can anyone please tell me how much doing all of this will cost, or is it free?

  • @PrasadPrasad-hi7pl • 1 year ago +4

    Excellent tutorial. Could you please make a tutorial on how to feed in a document, extract answers from it, and then deploy it? Thank you in advance❤

  • @nikhilkomakula • 1 year ago +3

    Great video Sonu. Thanks for sharing 🙏

    • @AIAnytime • 1 year ago

      My pleasure 😊

  • @aillminsimplelanguage • 8 months ago +1

    A very nice video.
    A small suggestion: maybe at the end of the video you could also show how to terminate all three things (notebook, model, and endpoint) so that people don't incur a lot of cost.
    Keep up the good work!

    • @AIAnytime • 8 months ago +1

      Noted

    • @FaizAhmed-l2y • 8 months ago

      If I do everything shown in the video, will it bill me 24 hours a day? If yes, how can I save cost by only spinning it up when a request comes in and then terminating it?

    • @sravantipris3544 • 6 months ago

      Can anyone please tell me how much doing all of this will cost, or is it free?

    • @aillminsimplelanguage • 6 months ago

      @@sravantipris3544 If you use the initial free credits, it won't cost you any money. However, make sure to disable all the services immediately, or it could run to USD 600+.

  • @SharmaManoj-pl5jy • 7 months ago

    Great video, brother. I was looking for exactly this and luckily landed on your channel. Keep up the good work.

  • @VIVEKKUMAR-kx1up • 1 year ago +2

    Really love to see this. I will definitely follow this video and get it done today!!

    • @AIAnytime • 1 year ago +1

      Thanks Vivek for your kind words.....

  • @amitt9053 • 7 months ago

    I think I almost missed where you referred to "AWS SageMaker DLCs"; maybe you could emphasize the DLCs more.

  • @Phas0ruk • 1 year ago +2

    Great tutorial, thanks. The only bit I got lost on was creating the policy to let Lambda call the SageMaker endpoint. GPT-4 helped :)

    • @AIAnytime • 1 year ago

      Glad to hear that ..... Thanks!

    • @Artificialintelligencia • 1 year ago

      @@AIAnytime It would have been better if you had also shown how to create that policy.

  • @anandsahu6767 • 8 days ago

    Great video. I need some information about cloud deployment of an LLM: if I deploy an LLM on AWS SageMaker and use it via API Gateway and Lambda, how will Amazon charge me? Is the charge 24/7 or per API hit? It would be very helpful if anyone could share their insights.

  • @surya-z9e • 4 months ago

    What permissions do we need to add to the IAM role? We are getting an "AccessDeniedException when calling InvokeEndpoint" error in the Lambda function.

  • @luizfelipevercosa • 5 months ago

    Was the API Gateway used for anything? Thank you for the video again! Very useful!

  • @safaelaqrichi9096 • 9 months ago

    Great content and excellent tutorial! Thank you.

    • @AIAnytime • 9 months ago

      Glad it was helpful!

  • @usmanthechamp123 • 11 months ago

    Excellent tutorial

    • @AIAnytime • 11 months ago

      Thank you! Cheers!

  • @VIVEKKUMAR-kx1up • 1 year ago +2

    I have 2 doubts about the video.
    1) At 48:48 you created some IAM > Policy entries like AWSlambdaBasicExecutionRole-30e....... and AmzonSagemaker-ExecutionPolicy... How did you do that?!
    2) At 46:40, what is that "path": "\example"? Can you please explain?

    • @AIAnytime • 1 year ago +1

      1. You can create policies in IAM. Search for IAM in the search box, open it, look for Policies on the left-hand side, go in, and add policies.
      2. "path" typically refers to the URL path of the incoming HTTP request, specifically when working with API Gateway. Mainly, you would configure API Gateway in front of the Lambda function; that's why it is there, but you can ignore it. You can just define queryStringParameters, e.g. param1=query or similar, depending on how you write your Lambda code.

    • @VIVEKKUMAR-kx1up • 1 year ago

      @@AIAnytime Thank you!!
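A minimal sketch of the Lambda handler described in the reply above, assuming the function is invoked through API Gateway or a Lambda function URL; the parameter name `query` and the response shape are illustrative, not taken from the video:

```python
import json

def lambda_handler(event, context):
    # API Gateway / function URLs put URL parameters under "queryStringParameters";
    # the key can be absent or null, so read it defensively.
    params = event.get("queryStringParameters") or {}
    query = params.get("query", "")
    if not query:
        return {"statusCode": 400,
                "body": json.dumps({"error": "missing 'query' parameter"})}
    # Here you would call sagemaker-runtime's invoke_endpoint with `query`
    # as the prompt and return the model's generation instead.
    return {"statusCode": 200, "body": json.dumps({"received": query})}
```

With this shape, a request like `?query=hello` maps cleanly onto the event, and a missing parameter yields a 400 instead of a crash.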

  • 11 months ago

    Very instructive video. I would like to know if it is possible to upload the model directly to AWS without going through HF. Thank you in advance.

    • @AIAnytime • 11 months ago +1

      Absolutely, you can do that. It will be a bit more of a manual deployment: you push the model weights to S3 and deploy via a SageMaker script. The recommended way is to deploy through a DLC, i.e. the prebuilt images.
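That manual route can be sketched as follows, assuming the weights have already been packaged as a `model.tar.gz` archive; the bucket, prefix, role, instance type, and framework versions are all illustrative and must match a DLC combination that actually exists in the SageMaker SDK:

```python
def model_tarball_uri(bucket: str, prefix: str) -> str:
    # SageMaker expects model_data to point at a tar.gz archive in S3
    return f"s3://{bucket}/{prefix}/model.tar.gz"

def deploy_from_s3(role_arn: str, bucket: str, prefix: str):
    # Imported here so the URI helper above works even without the SDK installed
    from sagemaker.huggingface import HuggingFaceModel

    model = HuggingFaceModel(
        model_data=model_tarball_uri(bucket, prefix),  # weights pushed to S3
        role=role_arn,
        transformers_version="4.37",  # illustrative; must match an available DLC
        pytorch_version="2.1",
        py_version="py310",
    )
    return model.deploy(initial_instance_count=1, instance_type="ml.g5.2xlarge")
```

The archive should contain the model files (and optionally a `code/inference.py`) at its root, which is the layout the Hugging Face inference containers expect.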

  • @pablogonzalezrobles803 • 1 year ago +4

    How much is the monthly cost of keeping the service up?

    • @aillminsimplelanguage • 8 months ago

      It could go up to $500 or more. You need to terminate it if you don't want to incur this cost.
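A rough back-of-envelope for the cost questions in this thread, assuming an illustrative on-demand rate of $1.20/hour for a small GPU instance (real prices vary by region and instance type; check the SageMaker pricing page):

```python
# Assumed on-demand rate; replace with the real price for your region/instance.
HOURLY_RATE_USD = 1.2

always_on = HOURLY_RATE_USD * 24 * 30       # endpoint left running all month
two_hours_a_day = HOURLY_RATE_USD * 2 * 30  # spun up only while experimenting

print(f"always-on month: ~${always_on:.0f}, 2h/day: ~${two_hours_a_day:.0f}")
```

Real-time endpoints bill per instance-hour while they exist, not per request, which is why deleting them when idle matters so much.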

  • @AnkitKumar-sz3by • 1 year ago

    Thanks, got to know how to increase the output length using hyperparameters....

    • @AIAnytime • 1 year ago

      Glad it helped

  • @programwithpradhan • 9 months ago +1

    I fine-tuned TinyLlama on my own dataset. Can I deploy my fine-tuned model with the steps you showed in this video?

    • @AIAnytime • 9 months ago

      Absolutely....

  • @ashwanikumar1690 • 1 year ago

    Very informative video... I have a query: there is some provision to disable the notebook and endpoint when not in use, right??

    • @AIAnytime • 1 year ago +2

      Yes, correct, Ashwani. You have full control: you can stop the endpoint, delete the endpoint, etc.! You can also set budget limits.
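A sketch of that teardown using the boto3 SageMaker API; the resource names are illustrative, and the client is passed in as a parameter so the function is easy to test:

```python
def cleanup(sm_client, endpoint_name, notebook_instance_name=None):
    """Delete an endpoint, its config, and its model(s); optionally stop a notebook."""
    desc = sm_client.describe_endpoint(EndpointName=endpoint_name)
    config_name = desc["EndpointConfigName"]
    config = sm_client.describe_endpoint_config(EndpointConfigName=config_name)

    sm_client.delete_endpoint(EndpointName=endpoint_name)  # this stops the billing
    sm_client.delete_endpoint_config(EndpointConfigName=config_name)
    for variant in config["ProductionVariants"]:
        sm_client.delete_model(ModelName=variant["ModelName"])

    if notebook_instance_name:
        sm_client.stop_notebook_instance(NotebookInstanceName=notebook_instance_name)

# Real usage: cleanup(boto3.client("sagemaker"), "my-llm-endpoint", "my-notebook")
```

Deleting the endpoint alone already stops the hourly charge; removing the config and model just keeps the account tidy.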

  • @naturalmeditation2241 • 9 months ago

    An error shows when choosing the 70B model. How do I fix it? The error says:
    jumpstart-dft-meta-textgenerationneuron-llama-2-70b-f
    Something went wrong
    We encountered an error while preparing to deploy your endpoint. You can get more details below.
    operation deployAsync failed: handler error

  • @pluckthelivingflowers • 1 year ago +1

    How are you querying the model in JupyterLab before you have ever deployed it? I am confused by that. Amazing video, I just want some clarification if possible. Also, why is the instance type configured in the deployment code different from the T5.2xlarge you configured in SageMaker?

    • @alekseiwolff • 11 months ago

      He first configured the notebook instance used to run the Jupyter Notebook code; later in the video he configures the predictor (i.e., the inference endpoint) that hosts the model and can be called from AWS Lambda.

  • @VenkatesanVenkat-fd4hg • 1 year ago +1

    Hi, thanks for the valuable video. My doubts are:
    1. How did you handle the Lambda error related to the IAM policy? Is it specifically for accessing SageMaker endpoints here?
    2. For getting an API response, do we not need any Flask or FastAPI implementation?
    Can you guide me on this? Waiting for your responses and videos....

    • @AIAnytime • 1 year ago +1

      Hi Venkatesan, you have to attach the policies in IAM for Lambda, S3, SageMaker, etc. For getting an API response, you can also deploy a microservice. I created a function URL from Lambda that I can use in any of my apps through a backend like FastAPI, Flask, Streamlit, etc.

    • @VenkatesanVenkat-fd4hg • 1 year ago

      @@AIAnytime Thanks for the kind response. Is Lambda mandatory here, or can I use the inference endpoint from AWS SageMaker directly in FastAPI?

  • @utkarshtiwari7947 • 9 months ago +1

    I am getting an error here:
    model = llm_pipeline()
    generated_text = model(input_prompt)
    print(generated_text)
    ValueError: The following `model_kwargs` are not used by the model: ['return_full_text'] (note: typos in the generate arguments will also show up in this list)

    • @ssss-u7w • 8 months ago

      I am also getting an error here.

  • @royriver23 • 1 year ago

    I learned a lot here, thanks a lot! 🙌

    • @AIAnytime • 1 year ago +1

      Glad it was helpful!

  • @SloanMosley • 1 year ago +1

    How does it compare with hosting on a cheaper cloud provider or GPU such as Lambda Labs?

    • @AIAnytime • 1 year ago +1

      It depends..... AWS is the primary cloud provider; if you work in IT you will probably work with AWS, Azure, or GCP. AWS provides different ways of deploying these models, such as one-click deployment using a DLC, and the hourly / pay-as-you-go rate is quite affordable. But yes, you have to choose your options based on many things: data protection, privacy, governance, scaling, etc.

  • @gurjotsingh3726 • 11 months ago

    Thanks a lot brother! Means a lot

    • @AIAnytime • 11 months ago

      No problem

  • @ricand5498 • 6 months ago

    Great video. I followed along until 48:46. Please go into depth on the policies error and how you fixed it. I have no experience with AWS and got the same error, but you skipped over why it occurred and the detailed steps to solve it.

  • @pagadishyam7049 • 1 year ago +1

    Hi, I'm getting the error below when creating an endpoint. Can anyone help, please? Error message: "UnexpectedStatusException: Error hosting endpoint huggingface-pytorch-tgi-inference-2023-09-04-16-49-09-918: Failed. Reason: The primary container for production variant AllTraffic did not pass the ping health check. Please check CloudWatch logs for this endpoint."

    • @AIAnytime • 1 year ago

      Can you check whether you are using the right model? Do you need to authenticate with the Hugging Face model repo? Please look at the logs in CloudWatch in the AWS console.

    • @pagadishyam7049 • 1 year ago

      checkpoint = "MBZUAI/LaMini-T5-738M"

  • @FaizAhmed-l2y • 8 months ago

    I am stuck on policy creation. Can anybody help, or does anyone have a guide on how to create that policy?

  • @satyamtiwari3839 • 1 year ago +1

    How to fix the permissions error:
    Go to IAM > Access management > Policies, create a new policy that grants the permissions you need for S3, Lambda, and SageMaker, and save it. Then attach the policy to your SageMaker role: under Access management > Policies, select the policy you created, go to "Entities attached", attach your SageMaker role, and save. Problem fixed.

    • @SubodhDeolekar • 1 year ago

      Great. Thanks for the detailed steps.
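As a reference for the steps above, a minimal customer-managed policy that lets a Lambda role call a SageMaker endpoint might look like the following; in practice, narrow `Resource` to your specific endpoint's ARN rather than a wildcard:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sagemaker:InvokeEndpoint",
      "Resource": "arn:aws:sagemaker:*:*:endpoint/*"
    }
  ]
}
```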

  • @akshay_raut • 1 year ago

    Great video ❤

    • @AIAnytime • 1 year ago +1

      Glad you liked it!!

  • @siddhardhselvam6861 • 1 year ago

    Hi! I am trying to deploy Llama 2 in SageMaker. Not sure how to use the HF tokens. The endpoint is failing, saying that the repo is gated.

    • @AIAnytime • 1 year ago

      Maybe you have to log in to the Hugging Face Hub using your access token. Just do a login from a notebook cell in SageMaker; then you can deploy. FYI, a DLC for the official Llama 2 is still not available for deployment. You can deploy manually or from JumpStart.
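For the DLC route specifically, a common pattern is to pass the access token to the container through its environment rather than logging in interactively; the env var names below are the ones the Hugging Face TGI container reads, and the model id and token value are placeholders:

```python
# Container environment for deploying a gated Hugging Face repo via the DLC.
# The token value is a placeholder; in production, pull it from Secrets Manager.
hub_config = {
    "HF_MODEL_ID": "meta-llama/Llama-2-7b-chat-hf",  # gated repo
    "HUGGING_FACE_HUB_TOKEN": "hf_your_token_here",  # grants download access
    "SM_NUM_GPUS": "1",
}
# Pass this as env=hub_config when constructing HuggingFaceModel(...).
```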

  • @parasetamol6261 • 1 year ago

    That's a great video, thanks!

    • @AIAnytime • 1 year ago

      You're welcome!

    • @parasetamol6261 • 1 year ago

      @@AIAnytime Can you deploy a Hugging Face model on Azure, please 🙏

    • @AIAnytime • 1 year ago

      Very soon. I am working on it.

  • @kevinyuan2735 • 1 year ago +1

    Thank you! (谢谢!)

    • @AIAnytime • 1 year ago

      Thank you so much for the support.

  • @deepthireddy1789 • 8 months ago

    Unable to understand clearly because of the video quality; please provide a higher-quality video.

    • @AIAnytime • 8 months ago

      Sure... Thanks for the feedback

  • @raghurana1 • 1 year ago +1

    How can you fine-tune this model with your own data?

    • @AIAnytime • 1 year ago +1

      1. Prepare the data in Alpaca format. 2. Spin up a machine like g5.2xlarge or above. 3. Fine-tune using PEFT and QLoRA.
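For step 1, a single Alpaca-format record looks like the following (the content itself is illustrative):

```json
{
  "instruction": "Answer the question based on the given context.",
  "input": "Context: SageMaker real-time endpoints are billed per instance-hour.",
  "output": "You are charged for every hour the endpoint instance is running."
}
```

The dataset is a list of such records; the `input` field may be an empty string for instruction-only examples.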

  • @Imran-Alii • 1 year ago

    Thanks for sharing, Sonu!!!!!

  • @shivendratiwari-u5t • 1 year ago

    just great .....

  • @gurjotsingh3726 • 11 months ago

    Bro, I am not able to add the policy. Can you help?

    • @AIAnytime • 11 months ago

      Why are you not able to attach policies? Can you open an issue on the GitHub repo for this video and add some screenshots so I can help you debug?

  • @Ryan-yj4sd • 1 year ago +1

    Where is the code?

  • @SubodhDeolekar • 1 year ago

    How do I fix "Internal Server Error"?

    • @AIAnytime • 11 months ago

      Can you paste the complete error trace?

  • @sndrstpnv8419 • 8 months ago

    Where is the code repo on your GitHub?

    • @AIAnytime • 8 months ago

      This should be

  • @bingolio • 1 year ago

    When will you begin to look like your Avatar photo? 😝

    • @AIAnytime • 1 year ago

      Haha... I usually keep it like that. Let's see 🔜

  • @prithviprakash1110 • 1 year ago +1

    Is this serverless?

    • @AIAnytime • 1 year ago

      Yes it is.....

  • @SMSHRIYOKESHST • 1 year ago

    Hi, thanks for the video, it teaches a lot. I just want to know: what is the ideal notebook instance to load and deploy the StarCoder 15B model? At first I tried an ml.g4dn.xlarge instance but got an "out of memory" error.

  • @user-wr4yl7tx3w • 1 year ago

    How about on Google Cloud?

    • @AIAnytime • 1 year ago

      Very soon.....

  • @febyferdinansyah8896 • 1 year ago

    I've got this error; how do I solve it?
    Test Event Name: generateTestResponse
    Response:
    {
      "errorMessage": "'queryStringParameters'",
      "errorType": "KeyError",
      "requestId": "4196dde2-b2e7-4863-afa7-f2a67129021b",
      "stackTrace": [
        " File \"/var/task/lambda_function.py\", line 10, in lambda_handler
          query_params = event['queryStringParameters']
        "
      ]
    }

    • @ashutoshroutaray7080 • 1 year ago

      I'm getting the same error. Did you find anything? @AIAnytime can you please check?
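The `KeyError` above means the Lambda test event has no `queryStringParameters` key; the default console test event does not include one. The two usual fixes, sketched with a hypothetical parameter name:

```python
# Fix 1: read the key defensively instead of indexing it directly.
def get_query_params(event):
    # .get() returns None instead of raising KeyError; "or {}" also covers
    # the case where the key exists but is null (no query string in the URL).
    return event.get("queryStringParameters") or {}

# Fix 2: use a console test event shaped like a real function-URL request.
sample_event = {"queryStringParameters": {"query": "What is SageMaker?"}}
```

Replacing the failing `event['queryStringParameters']` line with `get_query_params(event)` makes the function tolerate both the console's default event and requests without a query string.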