Scaling Based On Amazon SQS
- Published Jul 8, 2024
- How to scale based on the number of messages in an SQS queue.
Write a Lambda function that emits custom metrics to CloudWatch.
Understand dynamic scaling and learn what a target tracking scaling policy is.
0:00 - Intro (Theory)
04:33 - Code
09:46 - Demo
15:09 - Further Topics
16:40 - Outro
GitHub - github.com/nirf/scaling-based...
LinkedIn - / nir-feigelshtein
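For readers who want to follow along in code, here is a minimal sketch of the kind of Lambda function the video describes: it reads the queue depth, divides by the number of running instances, and publishes the result as a custom CloudWatch metric. The environment variable names, metric name, and namespace are my assumptions for illustration; the actual repository may use different ones.

```python
import json
import os


def backlog_per_instance(visible_messages: int, instance_count: int) -> float:
    """Backlog per instance: visible messages divided by running instances.
    Guards against division by zero when the group has scaled in to zero."""
    return visible_messages / max(instance_count, 1)


def handler(event, context):
    # boto3 ships with the Lambda Python runtime; imported lazily here so the
    # pure arithmetic above stays testable without AWS credentials.
    import boto3

    sqs = boto3.client("sqs")
    cw = boto3.client("cloudwatch")
    asg = boto3.client("autoscaling")

    queue_url = os.environ["QUEUE_URL"]  # assumption: passed via env var
    asg_name = os.environ["ASG_NAME"]    # assumption: passed via env var

    attrs = sqs.get_queue_attributes(
        QueueUrl=queue_url,
        AttributeNames=["ApproximateNumberOfMessages"],
    )
    visible = int(attrs["Attributes"]["ApproximateNumberOfMessages"])

    groups = asg.describe_auto_scaling_groups(AutoScalingGroupNames=[asg_name])
    instances = len(groups["AutoScalingGroups"][0]["Instances"])

    value = backlog_per_instance(visible, instances)

    # Publish the custom metric that the target tracking policy will track.
    cw.put_metric_data(
        Namespace="SQS/Scaling",                 # assumed namespace
        MetricData=[{
            "MetricName": "BacklogPerInstance",  # assumed metric name
            "Value": value,
            "Unit": "None",
        }],
    )
    return {"statusCode": 200, "body": json.dumps({"BacklogPerInstance": value})}
```

The handler would typically be triggered on a schedule (e.g. an EventBridge rule every minute), since the target tracking policy needs a fresh data point to evaluate.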
Great video, thanks
My pleasure!
Thank you
My pleasure!
Great video, thank you for sharing. One thing I would like to know is why we need LocalStack to be running?
Hi Steven, you're welcome.
LocalStack is only used for local development of the sqs-worker; it's not required to run the code.
What if we publish 150 messages? How many instances will be running?
Hi Vlad, good question!
It depends on multiple factors. As illustrated in the code: let's assume that you currently have an Auto Scaling group with 1 instance and the number of visible messages in the queue (ApproximateNumberOfMessages) is 150. The average message processing time is 1 second [07:55] and the longest acceptable latency is 10 seconds, therefore the acceptable backlog per instance is 10 / 1, which equals 10 messages [09:21]. This means that 10 is the target value for your target tracking policy. When the **current** backlog per instance is greater than the target value, a scale-out event happens. Because the **current** backlog per instance is already 150 messages (150 messages / 1 instance), your Auto Scaling group scales out, and it scales out by ~14 instances (to 15 total) to bring the backlog per instance back to, or near, the 10-message target.
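The arithmetic in that answer can be sketched as a few lines of Python. The numbers match the example above; the function names are mine, not from the repository:

```python
import math


def acceptable_backlog_per_instance(longest_acceptable_latency_s: float,
                                    avg_processing_time_s: float) -> float:
    """Target value for the target tracking policy:
    how many messages one instance may have queued and still meet latency."""
    return longest_acceptable_latency_s / avg_processing_time_s


def desired_capacity(visible_messages: int, target: float) -> int:
    """Instances needed to bring the backlog per instance down to the target.
    Rounded up, since a fraction of an instance cannot be launched."""
    return math.ceil(visible_messages / target)


target = acceptable_backlog_per_instance(10, 1)  # 10 messages per instance
desired = desired_capacity(150, target)          # 15 instances in total
scale_out_by = desired - 1                       # 14 new instances, 1 already running
print(target, desired, scale_out_by)             # → 10.0 15 14
```

Note that target tracking does this proportional calculation for you; the sketch only makes the "why ~14" visible.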