How does AWS Lambda scale - Reserve Capacity - AWS Service Deep Dive

  • Published 21 Aug 2024

Comments • 36

  • @SeanLarge1 • 2 years ago

    This might be the only video on YouTube without a thumbs down. Great breakdown!

    • @CompleteCoding • 2 years ago

      1 year, 2k views and no thumbs down. Pretty happy with that :D

  • @savanvadalia • 3 years ago

    Very helpful, please keep making these in-depth videos.

  • @josealbert7806 • 3 years ago

    Great lineup explanation, highly appreciated.

  • @sparrowestes962 • 2 years ago

    Thanks for the clear explanations!

  • @giridharansubramanian3923 • 1 year ago

    Great video again!

  • @Naveenkumar-hs7ec • 1 year ago

    awesome explanation

  • @amrhossam8058 • 2 years ago

    very good explanation bro

  • @sonia10647 • 3 years ago

    Great video. Easy to understand. Thanks.

  • @karthikvenkataram4790 • 1 year ago

    👏👏👏👏👏

  • @piyushraj7901 • 3 years ago

    In-depth and great content, thanks for sharing.

  • @ray811030 • 2 years ago

    Thanks for sharing!

  • @praneeth0820 • 3 years ago

    Great content, very useful!
    One suggestion - it would be great if you could bump up your speech rate. Nothing serious, but it sounds very scripted and cautious.

    • @CompleteCoding • 3 years ago • +3

      If you want, you can speed up YouTube videos using the settings cog on the player. I tend to watch a lot of videos at 1.5x speed and pause if I need to.

  • @florianb2572 • 1 year ago

    Hmm, how does it work with an async request without an await in it? Like, I return a 200 to the client but still have some code running. Does the Lambda function shut down as soon as there is a return, or does it keep working while the JS event loop is not empty?
    The answer seems clear, but I just want to be sure.

    • @CompleteCoding • 1 year ago

      There is a configuration option in Lambda to either kill any processes that are still running when you return, or to leave the Lambda running until the JS event loop is empty.
      In general you should never return until all of your processes have completed.
      If you need something that runs a lot longer but you still need to return an API response, it is best to kick off another process (fire an EventBridge event, add an SQS message, call another Lambda, trigger a Step Function, etc.) and then return your response. You handle that long-running task in another process (see the sketch below).
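
A minimal Node.js sketch of both points in the reply above, not taken from the video: it assumes an API Gateway trigger, the AWS SDK v3 SQS client, and a made-up queue for the offloaded work. In the Node.js runtime the configuration option being described is presumably `context.callbackWaitsForEmptyEventLoop`.

```typescript
// Sketch only: assumes a Node.js 18+ Lambda runtime, @types/aws-lambda,
// and an SQS queue whose URL arrives via an environment variable.
// The queue name, env var, and payload shape are illustrative placeholders.
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";
import type { APIGatewayProxyEvent, APIGatewayProxyResult, Context } from "aws-lambda";

const sqs = new SQSClient({});

export const handler = async (
  event: APIGatewayProxyEvent,
  context: Context
): Promise<APIGatewayProxyResult> => {
  // With callback-style handlers, this flag decides whether Lambda waits for
  // the JS event loop to drain before freezing the environment. With an async
  // handler, the environment is frozen as soon as the returned promise
  // resolves, so any un-awaited work may never finish.
  context.callbackWaitsForEmptyEventLoop = false;

  // Offload the long-running task instead of leaving it dangling:
  // queue it, then return the API response immediately.
  await sqs.send(
    new SendMessageCommand({
      QueueUrl: process.env.SLOW_TASK_QUEUE_URL!, // hypothetical queue
      MessageBody: JSON.stringify({ requestId: event.requestContext.requestId }),
    })
  );

  return { statusCode: 200, body: JSON.stringify({ accepted: true }) };
};
```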

  • @ray811030 • 2 years ago

    Can I say the QPS for Lambda is at most 1000 by default?

    • @CompleteCoding • 2 years ago

      You can have 1000 Lambdas running at any one time by default. That doesn't mean 1000 QPS though.
      If a Lambda only runs for 100ms, you could do 1000 requests and then another 1000 requests 100ms later => 10K QPS.
      On the other hand, if your Lambda takes 5 seconds to run, you could start 1000 at the same time but would have to wait 5s before starting the next 1000. That averages out to 200 QPS (see the sketch below).
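
A quick sketch of that arithmetic; the 1000 is the default account-level concurrency limit mentioned above, and the durations are the two examples from the reply.

```typescript
// Max sustained QPS for a given concurrency limit and average duration:
// each concurrent execution slot can serve (1 / duration) requests per second.
const maxQps = (concurrencyLimit: number, avgDurationSeconds: number): number =>
  concurrencyLimit / avgDurationSeconds;

console.log(maxQps(1000, 0.1)); // 100ms Lambdas -> 10,000 QPS
console.log(maxQps(1000, 5));   // 5s Lambdas    ->    200 QPS
```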

  • @mohammedramadan3480 • 3 years ago

    Can I ask for 100k requests per second (a 100k concurrency limit)? If yes, do you know how much it would cost?

    • @CompleteCoding • 3 years ago • +2

      You could ask for a Lambda concurrency of 100K, but I think AWS might say no - or at least ask for a very good reason.
      Do you mean 100K requests/second or 100K concurrent requests? These numbers are only the same if every request takes exactly 1s to run.
      Cost - assuming a constant 100K requests/s at 0.3s per request:
      100K/s * 60s * 60m * 24h * 30d = 259,200 million Lambda invokes => $51,840 in request charges
      259.2B invokes * 128 MB (0.125 GB) * 0.3s * the Lambda GB-second price ≈ $162,000 in compute charges
      These are just estimated costs (see the sketch below). Things that could change this:
      100k/s is probably the peak rate - what is the total invokes/month?
      How long would the Lambda need to run for? What size memory does it need?
      If you're triggering with API Gateway then that's an extra cost.
      It sounds like your use case might not be well suited to serverless.
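
A sketch of that back-of-the-envelope maths, assuming the reply's numbers (100K requests/s sustained over a 30-day month, 128 MB, 0.3s per invoke) and the public on-demand prices of $0.20 per million requests and $0.0000166667 per GB-second; actual prices vary by region and architecture.

```typescript
// Rough monthly Lambda cost at a sustained request rate.
// Prices are assumed public on-demand rates; check your region before relying on them.
const REQUEST_PRICE_PER_MILLION = 0.2;  // USD per million requests
const GB_SECOND_PRICE = 0.0000166667;   // USD per GB-second

function monthlyLambdaCost(requestsPerSecond: number, memoryGb: number, durationSeconds: number) {
  const invokesPerMonth = requestsPerSecond * 60 * 60 * 24 * 30; // 30-day month
  const requestCost = (invokesPerMonth / 1_000_000) * REQUEST_PRICE_PER_MILLION;
  const computeCost = invokesPerMonth * memoryGb * durationSeconds * GB_SECOND_PRICE;
  return { invokesPerMonth, requestCost, computeCost, total: requestCost + computeCost };
}

// 100K req/s, 128 MB (0.125 GB), 300 ms:
// ~259.2 billion invokes, ~$51,840 request charges, ~$162,000 compute charges.
console.log(monthlyLambdaCost(100_000, 0.125, 0.3));
```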

    • @mohammedramadan3480 • 3 years ago

      @CompleteCoding Thank you so much for that.
      And yes, serverless might not be good for my use case, but the thing is I expect to receive about 2 million requests/day minimum (for 100k users) - let's say 1m reads and 1m writes to databases.
      If the 100k users are connected at the same time (and that is indeed going to happen), I'm sure they won't all do the same action that calls the same Lambda, but I want to make sure I don't have slow performance.
      This is a copy-paste from the AWS calculator. I have increased it to 3 million requests/day, and I won't need more than 128MB of memory:
      Unit conversions
      Amount of memory allocated: 128 MB x 0.0009765625 GB in a MB = 0.125 GB
      Pricing calculations
      90,000,000 requests x 1,000 ms x 0.001 ms to sec conversion factor = 90,000,000.00 total compute (seconds)
      0.125 GB x 90,000,000.00 seconds = 11,250,000.00 total compute (GB-s)
      11,250,000.00 GB-s x 0.0000166667 USD = 187.50 USD (monthly compute charges)
      90,000,000 requests x 0.0000002 USD = 18.00 USD (monthly request charges)
      187.50 USD + 18.00 USD = 205.50 USD
      Lambda costs - Without Free Tier (monthly): 205.50 USD

    • @CompleteCoding • 3 years ago

      @mohammedramadan3480 Those numbers look about right. The only real unknown at the moment is how long the Lambda will need to run to perform the task. If it runs for less than 1s it will be cheaper; if it runs longer, it will cost more.

    • @swaminathbera6407 • 10 months ago

      @CompleteCoding Nice perspective.
      What would you suggest for, say, 2000 users who open my app and I need 6000 concurrent executions (3 Lambdas x 2000 users), but after that I don't need that many concurrent executions until all the app users open the app again at the same time?

    • @CompleteCoding • 10 months ago

      If you do have such highly spiky traffic then you have two choices.
      Always have enough capacity - this could be using Lambda reserved capacity, Fargate, or EC2 (probably as an ECS cluster). None of these scale down.
      Or, if you know that users open the app at a certain time (e.g. at half-time in a football game), use Fargate and program it to scale up just before the spike in traffic (see the sketch below).
      Going from zero to 6000 requests is never going to have a nice, neat solution.
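
For the "Lambda reserved capacity" option, a hedged sketch using the AWS SDK v3 Lambda client: reserved concurrency carves the slots out of the account limit, and provisioned concurrency keeps them initialised ahead of a predictable spike. The function name, alias, and trigger (e.g. an EventBridge schedule running this shortly before users arrive) are placeholders, not settings from the video.

```typescript
// Sketch: pre-warming Lambda capacity ahead of a predictable spike.
// Assumes a published alias ("live") and an account concurrency quota raised
// above the default 1000 so that 6000 slots can actually be reserved.
import {
  LambdaClient,
  PutFunctionConcurrencyCommand,
  PutProvisionedConcurrencyConfigCommand,
} from "@aws-sdk/client-lambda";

const lambda = new LambdaClient({});
const FUNCTION_NAME = "my-app-handler"; // placeholder function name

async function preWarm(slots: number) {
  // Reserve the concurrency so other functions in the account can't consume it.
  await lambda.send(
    new PutFunctionConcurrencyCommand({
      FunctionName: FUNCTION_NAME,
      ReservedConcurrentExecutions: slots,
    })
  );

  // Keep that many execution environments warm on the alias; provisioned
  // concurrency counts against the reserved pool, so it cannot exceed it.
  await lambda.send(
    new PutProvisionedConcurrencyConfigCommand({
      FunctionName: FUNCTION_NAME,
      Qualifier: "live", // placeholder alias
      ProvisionedConcurrentExecutions: slots,
    })
  );
}

preWarm(6000).catch(console.error);
```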

  • @maxpapirovnyk4304 • 1 year ago

    informative, but boring :) informative wins, thanks