Nir Feigelshtein
Using AWS Lambda with Amazon Kinesis - Create a Streaming Application using Serverless Framework
Learn how to set up a Lambda function as a Kinesis Data Stream consumer by building a real-time stock market streaming application with the Serverless Framework.
00:00 - Intro
00:48 - Architecture
03:17 - Code
12:56 - Execution
14:59 - Outro
GitHub - github.com/nirf/using-aws-lambda-with-amazon-kinesis-serverless-framework-demo
LinkedIn - www.linkedin.com/in/nir-feigelshtein/
Views: 405

Videos

Using AWS Lambda with Amazon Kinesis - Error Handling using Event Source Mapping
432 views · 2 years ago
When you attach a Lambda function to consume a Kinesis stream, you are in fact creating an event source mapping and pointing your Lambda function at it. The event source mapping handles the polling, checkpointing, and error-handling complexities, which lets you focus on your business logic. A good understanding of the different event source mapping parameters will help you utilize the L...
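The video's exact configuration isn't shown in this description. As an illustration of the error-handling knobs an event source mapping exposes, here is a sketch shaped as the parameters you would pass to boto3's `create_event_source_mapping` — the stream name, function name, ARNs, and specific values below are assumptions, not taken from the video:

```python
# Sketch (assumed values, not the video's config): error-handling parameters
# of a Kinesis event source mapping, as accepted by boto3's
# lambda_client.create_event_source_mapping(). ARNs are placeholders.
esm_params = {
    "EventSourceArn": "arn:aws:kinesis:us-east-1:123456789012:stream/demo-stream",
    "FunctionName": "stock-consumer",
    "StartingPosition": "LATEST",
    "BatchSize": 100,                    # records handed to each invocation
    "BisectBatchOnFunctionError": True,  # split a failing batch to isolate the bad record
    "MaximumRetryAttempts": 2,           # stop retrying a batch after 2 attempts
    "MaximumRecordAgeInSeconds": 3600,   # give up on records older than 1 hour
    "DestinationConfig": {               # send failed-batch metadata to a DLQ
        "OnFailure": {"Destination": "arn:aws:sqs:us-east-1:123456789012:demo-dlq"}
    },
}

# With credentials configured, this dict would be passed as:
# lambda_client.create_event_source_mapping(**esm_params)
print(sorted(esm_params))
```

The retry/bisect/destination trio is what lets a bad record fail fast instead of blocking the shard indefinitely.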
Using AWS Lambda with Amazon Kinesis - Event Source Mapping Application Parameters
618 views · 2 years ago
When you attach a Lambda function to consume a Kinesis stream, you are in fact creating an event source mapping and pointing your Lambda function at it. The event source mapping handles the polling, checkpointing, and error-handling complexities, which lets you focus on your business logic. A good understanding of the different event source mapping parameters will help you utilize the L...
Using AWS Lambda With Amazon Kinesis - Shared Throughput Consumer vs Enhanced fan-out
1.4K views · 2 years ago
Consuming a Kinesis Data Stream with a Lambda function offers two options: 1. Shared Throughput Consumer (Standard Iterator) 2. Dedicated Throughput Consumer (Enhanced fan-out). Kinesis can be used with Amazon EC2-based and container-based workloads. However, its integration with AWS Lambda can make it a useful data source for serverless applications. When choosing Lambda as a Kinesis stream consume...
Scaling Based On Amazon SQS
3.2K views · 2 years ago
How to scale based on the number of messages in an SQS queue. Writing a Lambda function that emits custom metrics to CloudWatch. Understanding dynamic scaling and learning what a target tracking scaling policy is. 0:00 - Intro (Theory) 04:33 - Code 09:46 - Demo 15:09 - Further Topics 16:40 - Outro GitHub - github.com/nirf/scaling-based-on-aws-sqs LinkedIn - www.linkedin.com/in/nir-feigelshtein/
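The custom metric the video describes — backlog per instance — is just queue depth divided by running instances. A minimal sketch of that calculation (function and parameter names are assumed, not taken from the repo; the SQS attribute name is the real `ApproximateNumberOfMessages`):

```python
def backlog_per_instance(approximate_number_of_messages: int,
                         running_instances: int) -> float:
    """Backlog per instance: the custom CloudWatch metric a target
    tracking policy can scale on. The message count comes from the SQS
    ApproximateNumberOfMessages attribute; the instance count comes from
    the Auto Scaling group."""
    # Guard against division by zero when the group has scaled in to 0.
    instances = max(running_instances, 1)
    return approximate_number_of_messages / instances

print(backlog_per_instance(150, 1))  # 150.0
print(backlog_per_instance(100, 4))  # 25.0
```

In the video's setup this value would be published to CloudWatch on a schedule (e.g. via `put_metric_data`) so the scaling policy can track it.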

Comments

  • @vbodduluri
@vbodduluri 5 months ago

    Thank you

    • @MrNir1234
@MrNir1234 5 months ago

      My pleasure!

  • @mdmoniruzzaman703
@mdmoniruzzaman703 5 months ago

Great explanations. Keep making videos. Thanks!

  • @_skyyskater
@_skyyskater 8 months ago

Excellent video! Also love how you pronounce "gigabyte" as "jigabyte" like Doc Brown's 1.21 jigawatts 😂

  • @stevenalves7506
@stevenalves7506 1 year ago

Great video, thank you for sharing. One thing I would like to know is why we need LocalStack running?

    • @nirfeigelshtein8787
@nirfeigelshtein8787 1 year ago

Hi Steven, you're welcome. LocalStack is only used for local development of the sqs-worker; it's not required for running the code.

  • @RajaRaviVarman
@RajaRaviVarman 1 year ago

Thank you so much for this video. It helped me understand enhanced fan-out better.

    • @MrNir1234
@MrNir1234 1 year ago

      Hi Raja, I'm glad it helped you :)

  • @yuming123
@yuming123 1 year ago

Great explanation, thanks a million for preparing this video.

  • @vladbunin8994
@vladbunin8994 1 year ago

    What if we publish 150 messages? How many instances will be running?

    • @nirfeigelshtein8787
@nirfeigelshtein8787 1 year ago

Hi Vlad, good question! It depends on multiple factors. As illustrated in the code: let's assume you currently have an Auto Scaling group with 1 instance and the number of visible messages in the queue (ApproximateNumberOfMessages) is 150. The average message processing time is 1 second [07:55] and the longest acceptable latency is 10 seconds, so the acceptable backlog per instance is 10 / 1, which equals 10 messages [09:21]. This means that 10 is the target value for your target tracking policy. When the **current** backlog per instance exceeds the target value, a scale-out event happens. Because the **current** backlog per instance is already 150 messages (150 messages / 1 instance), your Auto Scaling group scales out, adding ~14 instances to bring the backlog per instance back to at or near the 10-message target.
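The arithmetic in that reply can be sketched in a few lines. This is the proportional idea behind target tracking, not the exact CloudWatch scaling algorithm, and the function and parameter names are assumed for illustration:

```python
import math

def desired_capacity(messages: int, instances: int,
                     acceptable_latency_s: float,
                     avg_processing_s: float) -> int:
    """Target-tracking arithmetic from the example above (a sketch, not
    CloudWatch's exact algorithm). Target backlog per instance = longest
    acceptable latency / average processing time; capacity is scaled so
    the backlog per instance lands at or near that target."""
    target = acceptable_latency_s / avg_processing_s        # 10 / 1 = 10 messages
    current_backlog = messages / instances                  # 150 / 1 = 150 messages
    return math.ceil(instances * current_backlog / target)  # ceil(150 / 10) = 15

# 150 visible messages, 1 running instance, 10 s latency budget, 1 s/message:
print(desired_capacity(150, 1, 10.0, 1.0))  # 15 -> scale out by ~14 instances
```

Going from 1 instance to a desired capacity of 15 is exactly the "~14 additional instances" in the reply.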

  • @mayursoni2433
@mayursoni2433 1 year ago

Hi Nir, thanks for the explanation. I had a question: since enhanced fan-out is a push-based approach, is polling possible here?

    • @nirfeigelshtein8787
@nirfeigelshtein8787 1 year ago

Hi Mayur, you're welcome. An enhanced fan-out consumer uses HTTP/2, which maintains persistent connections and pushes records to consumers via the SubscribeToShard API. Polling is possible with the shared throughput consumer; that option uses HTTP to poll records from the shard via the GetRecords API.

    • @mayursoni2433
@mayursoni2433 1 year ago

@@nirfeigelshtein8787 Thanks!

  • @MegaTh123
@MegaTh123 2 years ago

    Great video, thanks