Thank you for a great video and explanation on concepts!
I would like to add that you can now configure the Lambda trigger to support partial batch responses. You change the response object, and only the messages that failed processing will be reprocessed by SQS, without having to delete the successful messages manually.
You are correct Julian, this enhancement came out recently. Comment pinned.
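For anyone looking for the shape of that response object, here is a minimal handler sketch. It assumes "Report batch item failures" is enabled on the event source mapping; `process_record` is a hypothetical stand-in for real business logic.

```python
# Sketch of a Lambda handler using SQS partial batch responses.
# process_record is a hypothetical stand-in for business logic.

def process_record(record):
    # Raise to signal that this message should be retried.
    if record["body"] == "bad":
        raise ValueError("cannot process")

def handler(event, context):
    failures = []
    for record in event["Records"]:
        try:
            process_record(record)
        except Exception:
            # Report only the failed message IDs; SQS redelivers just
            # these after the visibility timeout and deletes the rest.
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}
```

Returning an empty `batchItemFailures` list tells Lambda the whole batch succeeded.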
Honestly, this video is better than 99% of the tutorials and paid courses. With only 4.4k subscribers at the time of this comment... you deserve much more than that. Keep up the amazing work!
Thanks Viktor for the kind words :). Feedback like this energizes me to keep making more videos and motivates me to do better. Thanks for watching!
@@cloudwithraj I second that! great video, thanks a lot.
“This video is better than my PhD professor's lecture” is a quote that gets used a lot, too.
This is so helpful man, This channel is really underrated.
Your explanation is far better than the rest of the videos I watched. I believe you have very solid architecture design experience, to make out which AWS service is for what.
Thanks Sarir, glad it was helpful!
I usually don't comment on videos. But for you, I must. You have done a beautiful job. Well done!! Please post more.
Thank you so much gokulagiridaran for your kind words! I am so glad you found the video helpful 🙏
Excellent explanation, and on point. I figured it out the hard way with those batched messages. I solved this by adding a database table to record each message.
Great explanation! Would love to hear you throw real world use cases into the mix as well
Deserve 5 stars!
Just like the Holy Trinity of Serverless - API Gateway, Lambda and DynamoDB!
Can we use an SNS->SQS1, SQS2, SQS3->Lambda1, Lambda1, Lambda1 architecture? That is, SNS sends messages to multiple queues, but we want a single/same Lambda to process the requests instead of creating multiple Lambdas? Please advise.
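One common answer to this question: a single Lambda function can be the target of several SQS event source mappings, one per queue. A hedged sketch using the boto3 `create_event_source_mapping` call (the client is injected; the queue ARNs and function name are placeholders):

```python
# Sketch: pointing several SQS queues at one Lambda function by
# creating one event source mapping per queue. The ARNs and the
# function name below are placeholders, not real resources.

def attach_queues(lambda_client, function_name, queue_arns):
    uuids = []
    for arn in queue_arns:
        resp = lambda_client.create_event_source_mapping(
            EventSourceArn=arn,
            FunctionName=function_name,
            BatchSize=10,
        )
        uuids.append(resp["UUID"])
    return uuids
```

Inside the function, the record's `eventSourceARN` tells you which queue a message came from, if the processing needs to differ per queue.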
Congratulations for the explanation.
Glad it was helpful!
I have a question:
How frequently does AWS Lambda poll the SQS queue?
Explained in very simple language. Thank you so much.
You are most welcome
Very helpful, thanks. Fills in a lot of gaps in knowledge between system design questions that just hand wave through these abstract concepts like fanout and pubsub
Glad it was helpful! Thanks for watching!
Best explanation! Thank you!
Glad it was helpful!
Hi Raj, you mentioned the Lambda function code can explicitly delete the successful records from SQS using the boto3 SDK. How is this different from the Lambda property "Report batch item failures"? Also, what if there are 10k records in a single batch? Do we have to delete each one of them from SQS in the Lambda function code?
Hi Raj, can you please reply to this?
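For context on the difference: with "Report batch item failures" the Lambda service deletes the successful messages for you, based on the returned `batchItemFailures` list. The older explicit pattern calls the SQS `DeleteMessageBatch` API from the function code itself, which accepts at most 10 entries per call, so a large batch needs a loop. A sketch of that explicit pattern (the queue URL is a placeholder and the client is injected for testability):

```python
# Sketch of the manual pattern: deleting successfully processed
# records with SQS DeleteMessageBatch. Each API call accepts at most
# 10 entries, so records are deleted in chunks of 10. The record
# shape follows the Lambda SQS event format (receiptHandle field).
import itertools

def delete_processed(sqs_client, queue_url, records):
    deleted = 0
    it = iter(records)
    while chunk := list(itertools.islice(it, 10)):
        sqs_client.delete_message_batch(
            QueueUrl=queue_url,
            Entries=[
                {"Id": str(i), "ReceiptHandle": r["receiptHandle"]}
                for i, r in enumerate(chunk)
            ],
        )
        deleted += len(chunk)
    return deleted
```

So yes, 10k records would mean 1,000 delete calls with this pattern, which is one reason the partial batch response feature is the nicer option today.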
I hardly subscribe but for this one video you've got one subscription and it's from me....
Explained so well and in an understandable way. Thank you.
Glad it was helpful!
amazing job explaining - 5 stars
Raj, at 4:28 you say messages get deleted from SQS once processed by the consumer, but did you mean that the consumer deletes the message, and the message persists otherwise?
Thanks Raj for a great tutorial video. I hope you can also do a comparison of SNS vs SQS vs EventBridge vs Kinesis.
Noted
Perfect timing
I have a situation that is pretty much like the example at 12:20. My first instinct was to, rather than sending the messages one at a time to the Lambda, send them to a queue, batch them, and then send them to the Lambda. But that was not enough.
My team and I fiddled with a lot of things to get it to work, and it is working now, but I don’t feel like I know enough about it to optimize it or to make further changes to it if they become necessary.
Does anyone know of a good place to get information about that? I have been reading a lot about it and a lot of the stuff that I find is not related to the problem that I have even though I feel it is a common use case.
We have a similar scenario. Were you able to find an optimal solution? If yes, could you please share?
@@franklinantony Sadly no. We fiddled with the settings a lot and eventually it became stable enough that it doesn't cause any issues. Other things came up and we never had the time to go back and look at it in more detail. I still feel annoyed that I don't understand enough about it.
I want to say that AWS released a feature that sounded like it would help for our particular case though. I only read about it briefly a while back so I might be completely wrong.
In our case the issue was the number of connections to the database and it sounded like a feature was added to lambdas to manage the connection to the databases as in, if there are no connections available, no more lambdas would spawn. That would have addressed our problem perfectly. But that is if that feature actually does what I think it does. I really hope that I can put in the time to look at it soon.
Incredible explanation! This video is pure gold, thanks a lot really appreciate this content!
Glad it helped!
Great video🎉
Hi Rajdeep,
Is there any way to get the PPT which was demoed/shared?
Hi Raj, can you share the PPT?
Thank you for this nice video. Do you have a video comparing SES and SNS, and when to use them?
Hi Raj, one question. In the example you shared in the video at 5:51, you said a batch can have 10 messages, and the total size cannot exceed 256KB. Is this the total size of the batch, or the size of each message in the batch? (To my knowledge, the maximum size of an individual message in SQS cannot exceed 256KB.)
Hi Raj, can you please reply to this?
Very nice and concise details on SNS and SQS.
Thanks vs! I am glad you found this useful.
Deep dive was very helpful . thanks :)
Glad it was helpful!
Better than the best.
Thank you for the awesome summary of SNS vs SQS among AWS services. It was really helpful for me. While I was watching the video, I was curious why the POST method could be async while the GET method was sync? Thank you! :)
Super Explanation
Thank you 🙂
Nice Work Raj!
Thanks brother!
Are you sure it doesn't have a dead letter queue?
long awaited..
I know you waited long, hope you liked this video and found it useful Arivu :)
Amazing!
Thank you! Cheers!
Awesome in-depth video...
Thanks Atul!
you are awesome
14:18 Could you not directly connect S3 to SQS?
Yes, you can now. At the time this video was made, that feature was not there.
Thanks a lot 😀
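For reference, a rough sketch of that direct S3-to-SQS wiring using the S3 notification configuration API (the bucket name and queue ARN are placeholders, and the client is injected; the queue's access policy must also allow the S3 service principal to send messages):

```python
# Sketch: routing S3 object-created events straight to an SQS queue,
# with no Lambda in between. The ARN and bucket name are placeholders.

NOTIFICATION_CONFIG = {
    "QueueConfigurations": [
        {
            "QueueArn": "arn:aws:sqs:us-east-1:123456789012:uploads-queue",
            "Events": ["s3:ObjectCreated:*"],
        }
    ]
}

def configure(s3_client, bucket):
    # Replaces the bucket's existing notification configuration.
    s3_client.put_bucket_notification_configuration(
        Bucket=bucket,
        NotificationConfiguration=NOTIFICATION_CONFIG,
    )
```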
Great!
Can you please share ppt
Hi, do you have a practical example of how to use an SQS queue to reduce DynamoDB cost?
Hey Denys, do you have the use case? Like, are you trying to use SQS with DynamoDB Streams? Here is a blog - aws.amazon.com/blogs/database/dynamodb-streams-use-cases-and-design-patterns/. Let me know your use case and I can suggest some ways accordingly.
@@cloudwithraj I was thinking about the e-commerce case where we push orders to the queue, then process them with a Lambda and save to DynamoDB. If we trigger the Lambda each time the queue has a message, I do not see a big difference here. But if we use something like dashbird.io/blog/cost-efficient-ways-to-run-dynamodb-tables, it is more interesting, but still, I do not know if it is the best way to handle such a scenario. They take messages from the queue and increase DynamoDB provisioned capacity if there are a lot of messages. Let me know what you think, please.
@@cloud_architector Thanks for the background, Denys. This has a lot of factors, depending on your traffic. Depending on the traffic rate, provisioned capacity will work. On-demand handles spiky traffic better, however it can be more expensive. One strategy is to use on-demand first, learn the traffic pattern, then set provisioned capacity accordingly. I am skeptical about changing the RCU/WCU on the fly based on SQS depth, for three reasons:
1. There will be some delay changing the capacity before you can start processing messages. Instead, you can manage the batch size and concurrency of the consumer Lambda, based on the provisioned capacity of your table, to control the rate of consumption. Since you are consuming in a batch fashion anyway, there is no SLA pressure.
2. The Lambda will now need IAM permission to change the DynamoDB table configuration, which is not recommended. The application Lambda should do application logic, and a separate process should change configuration.
3. Ideally, if you can solve a design problem with AWS components, that's one less headache to worry about. Nothing against Dashbird, but if you use native AWS components, then when AWS enhances a capability it is much easier to adopt. And you don't have to think about the security compliance of third-party vendors.
Sorry for the long response, let me know what you think. Happy New Year Denys.
@@cloudwithraj Same for you Rajdeep!!! Thank you for your help!!!
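Point 1 in the reply above (throttling the consumer Lambda instead of resizing table capacity) can be sketched roughly like this. The mapping UUID is a placeholder, and note that the `ScalingConfig`/`MaximumConcurrency` option for SQS event sources (minimum value 2) was added by AWS after this thread:

```python
# Sketch: capping how fast a consumer Lambda drains an SQS queue by
# tuning its event source mapping, rather than resizing DynamoDB
# capacity on the fly. The UUID below is a placeholder.

def throttle_consumer(lambda_client, mapping_uuid, batch_size, max_concurrency):
    return lambda_client.update_event_source_mapping(
        UUID=mapping_uuid,
        BatchSize=batch_size,
        # Limits how many concurrent function instances the queue
        # can invoke, bounding the write rate against the table.
        ScalingConfig={"MaximumConcurrency": max_concurrency},
    )
```

This keeps the rate-control knob on the consumer side, so the application Lambda never needs IAM permission to modify the table.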
Thanks Raj! one request: Can you make some condensed videos in your style covering the Serverless First event? Many thanks.
Just saw this Bitter Lime. Do you mean the "Serverless First" Twitch event? Or do you mean something else?
@@cloudwithraj Yes Raj the twitch event :)
Thanks
Really nice, thanks. BTW, this guy talks like the Nigerian guy from Facejacker.
Thanks! 😃
th-cam.com/video/LzFuXvhA5xk/w-d-xo.html - this needs to be updated with the partial batch response feature.
You are correct. I will put a pinned comment. Unfortunately there is no way to update a YouTube video.