Get my highly-rated Udemy courses at a discount here: michaelguay.dev/udemy/
Hi Michael, very nice content!! Just want to point out that you need to create a load balancer (e.g. nginx) to randomly pick a pod inside the k8s cluster and make sure that you are indeed using a distributed system.
Such quality content. I'm familiar with most English-speaking YouTubers in this space and you are simply unmatched.
Totally agree with you
Brilliant teaching quality and the amount of knowledge you have on the topics you teach is phenomenal.
Thank you!
Amazing tutorial, a new NestJS skill added. I've done the same but using PM2 instead of Kubernetes/Docker, for the sake of simplicity.
This is a really incredible video. Thanks, bro.
As a newbie in Nest I didn't quite get a number of things, but somehow this is looking so cool and helpful.
Can't wait to dive into Nest and deployments with K8s like you did in this project.
Thanks mate :)
Thanks for the lessons 🙏
Thank you for making this.
One remark here. Bull is not exactly the same thing as BullMQ. BullMQ is a newer version of Bull written in TS. NestJS provides separate packages for Bull and BullMQ.
exactly !
100% 👍
exactly, and the syntax is not completely the same.
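To illustrate the syntax difference mentioned in this thread, here is a minimal sketch (the `transcode` queue name and payload are placeholders, not taken from the video) contrasting a classic Bull processor with a BullMQ processor in NestJS. In a real project you would pick one package; both are shown side by side purely for comparison.

```ts
import { Processor as BullProcessor, Process } from '@nestjs/bull';
import { Job as BullJob } from 'bull';
import { Processor, WorkerHost } from '@nestjs/bullmq';
import { Job } from 'bullmq';

// Registration also differs:
//   @nestjs/bull:   BullModule.forRoot({ redis: { host: 'localhost', port: 6379 } })
//   @nestjs/bullmq: BullModule.forRoot({ connection: { host: 'localhost', port: 6379 } })

// Classic Bull via @nestjs/bull: method-level @Process() handlers on a queue-scoped class.
@BullProcessor('transcode')
export class BullTranscodeConsumer {
  @Process()
  async handleTranscode(job: BullJob<{ fileName: string }>) {
    console.log('Bull processing', job.data.fileName);
  }
}

// BullMQ via @nestjs/bullmq: the class extends WorkerHost and implements process().
@Processor('transcode')
export class BullMqTranscodeConsumer extends WorkerHost {
  async process(job: Job<{ fileName: string }>): Promise<void> {
    console.log('BullMQ processing', job.data.fileName);
  }
}
```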
Unique content, keep rocking!!
Would love to see how you are going to write end-to-end tests for this.
Great guide. Liked and subscribed.
Subscribed. Great content, keep it up!
Hi Michael, great video!!
Is it possible to run NestJS Bull Queues in a separate process on AWS/Heroku? If so, please provide an example.
Just one thing to mention, nestjs/bull !== nestjs/bullmq!
Very useful, thank you!
Great content. Thanks
Hello, thanks for your excellent content and explanation. I have a question: if I write a processor and then write a cron job to call it at a specific time, will that give me the same functionality?
Thanks
great content keep rocking🚀
Great tutorial, thank you... But how do you set up the Kubernetes cluster you're using? Can you help me with that?
This is great! I see, however, that usage of Bull and Redis is mostly recommended for intensive tasks. Would implementing this queue architecture on, say, a REST service be beneficial?
I'm thinking the queue system could help with handling errors and restarting failed tasks that run daily on the application, such as a data validation task or a database mutation.
Agreed.
Hi Michael, could you explain the difference between Bull and RabbitMQ? I'm new to backend development. Thanks!
I want to attach bull-board to monitor the queues.
How can I do that? I can't find any helpful material.
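For the bull-board question above, here is a rough sketch of how the dashboard is commonly wired up in a NestJS bootstrap file, assuming the `@bull-board/api` and `@bull-board/express` packages and a queue registered as `transcode` (the queue name and `/admin/queues` route are assumptions, not from the video):

```ts
// main.ts — hypothetical bootstrap that mounts bull-board on the Nest HTTP adapter.
import { NestFactory } from '@nestjs/core';
import { getQueueToken } from '@nestjs/bullmq';
import { Queue } from 'bullmq';
import { createBullBoard } from '@bull-board/api';
import { BullMQAdapter } from '@bull-board/api/bullMQAdapter';
import { ExpressAdapter } from '@bull-board/express';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);

  // Grab the queue instance that BullModule.registerQueue({ name: 'transcode' }) registered.
  const transcodeQueue = app.get<Queue>(getQueueToken('transcode'));

  const serverAdapter = new ExpressAdapter();
  serverAdapter.setBasePath('/admin/queues');
  createBullBoard({
    queues: [new BullMQAdapter(transcodeQueue)],
    serverAdapter,
  });

  // Mount the dashboard; consider protecting this route in production.
  app.use('/admin/queues', serverAdapter.getRouter());

  await app.listen(3000);
}
bootstrap();
```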
Hi Michael, great video again. Very well put together, straight to the point with a real example.
If you don't mind me asking, what are the reasons someone would choose to use BullMQ over, say, RabbitMQ? Is there a particular reason you have started looking into it?
So I think they're different solutions to the same problem of achieving scale through distributed async processing. They obviously have their own differences, and it's a matter of finding what works best for you. BullMQ is "simpler" in my opinion: no need to ack messages, etc.
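On the "no need to ack" point: in BullMQ a job is marked completed when the processor's promise resolves and failed (and retried, if its attempts allow) when it throws, so there is no explicit acknowledgement step. A minimal sketch with the plain bullmq package (queue name and connection details are assumed):

```ts
import { Worker } from 'bullmq';

// No explicit ack step: resolving the handler marks the job completed,
// throwing marks it failed (and retried according to its attempts/backoff options).
const worker = new Worker(
  'transcode',
  async (job) => {
    if (!job.data.fileName) {
      throw new Error('missing fileName'); // -> job moves to failed / retry
    }
    return { transcoded: true };           // -> job moves to completed
  },
  { connection: { host: 'localhost', port: 6379 } },
);

worker.on('completed', (job) => console.log(`job ${job.id} done`));
worker.on('failed', (job, err) => console.log(`job ${job?.id} failed: ${err.message}`));
```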
So instead of using Docker and K8s orchestration, what if we're using a load balancer with multiple EC2 instances (like in AWS)? Would the setup still pick only one of the consumers to process the message, rather than multiple consumers (from other servers) fighting over it?
I mean, is it the nature of a distributed queuing system that allows this to happen, or something else?
It's a distributed queue; it doesn't matter who or what consumes it. Each job is delivered to only one consumer, and it is guaranteed (to a very high certainty) that processing is going to be sequential (FIFO).
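A quick way to see this behaviour is to start two workers on the same queue: each job is picked up by exactly one of them, no matter which server the worker runs on. A sketch with the plain bullmq package (queue name and Redis connection are assumptions):

```ts
import { Queue, Worker } from 'bullmq';

const connection = { host: 'localhost', port: 6379 };

async function main() {
  // Two competing consumers on the same queue, e.g. running on two different servers.
  for (const name of ['worker-a', 'worker-b']) {
    new Worker(
      'transcode',
      async (job) => console.log(`${name} handled job ${job.id}`),
      { connection },
    );
  }

  // Each job added here is delivered to exactly one of the two workers.
  const queue = new Queue('transcode', { connection });
  for (let i = 0; i < 5; i++) {
    await queue.add('transcode', { index: i });
  }
}

main();
```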
Very nice content, but I would like to ask you: how do you return data from the queue to the front end via websocket?
Your tutorial is very good and advanced. If possible, make a Udemy course covering all these topics.
My new Ultimate Nest.js Microservices course will cover this! Stay tuned for its release in May.
How do you think this great solution compares to using an orchestrator like Netflix Conductor or Netflix Maestro?
Thanks for sharing your knowledge with us.
Waiting for a full course, from fundamentals to advanced.
Great content again.
Quick question though: with your first example, "transcode an audio file", why choose to go for a job with BullMQ (the queueing system with Redis) instead of an event with EventEmitter (apart from showcasing it, of course)?
Both would achieve the same result, right? Not blocking the thread and decoupling the producer/emitter from the consumer/listener?
My question is thus: are those 2 patterns just different ways to implement a distributed system? Why go for one or the other then? What are the main differences?
There is a fundamental difference between a QUEUE and PUB/SUB or EVENTS.
A queue (just like the name implies) strictly follows a FIFO sequence, meaning that, regardless of the number of replicas or instances there are, only one of those instances can dequeue and process a given item, in sequence (based on how the items were enqueued). In the case of events (or pub/sub), however, there is a producer of the event and potentially multiple subscribers (one to many). Hence, when an event is published, multiple instances could subscribe and react to that event.
In essence, it depends on your architecture and what you're trying to achieve, but that is basically the difference.
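To make the distinction concrete in NestJS terms, here is a hedged sketch (the event name, queue name, and payload are made up for illustration): a BullMQ job is dequeued by exactly one processor instance, while an EventEmitter2 event is handled by every registered listener. Note also that Nest's built-in EventEmitter2 is in-process, whereas the BullMQ queue is backed by Redis and shared across replicas.

```ts
import { Injectable } from '@nestjs/common';
import { InjectQueue } from '@nestjs/bullmq';
import { Queue } from 'bullmq';
import { EventEmitter2, OnEvent } from '@nestjs/event-emitter';

@Injectable()
export class AudioService {
  constructor(
    @InjectQueue('transcode') private readonly transcodeQueue: Queue,
    private readonly eventEmitter: EventEmitter2,
  ) {}

  async upload(fileName: string) {
    // Queue: exactly one consumer instance will dequeue and process this job.
    await this.transcodeQueue.add('transcode', { fileName });

    // Event: every listener registered for 'audio.uploaded' reacts to it (in this process).
    this.eventEmitter.emit('audio.uploaded', { fileName });
  }
}

@Injectable()
export class AudioUploadedListener {
  @OnEvent('audio.uploaded')
  handle(payload: { fileName: string }) {
    console.log('reacting to upload of', payload.fileName);
  }
}
```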
Can we use Kafka or RabbitMQ instead of BullMQ? Is that the right way?
Can you build an Amazon clone or another big project using a Node microservice architecture and MongoDB as the DB?
My new Ultimate Nest.js Microservices course will cover this! Stay tuned for its release in May.
When will you be launching your Udemy course? Please give us an update on that.
I have about 3 hours recorded so far.
...need to set TTL on those bull:transcode..
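Presumably this refers to the per-job Redis keys that Bull/BullMQ leaves behind after jobs finish. Rather than setting raw TTLs on those keys, one common approach (shown here with BullMQ-style options; the retention numbers are arbitrary, and classic Bull's options differ slightly) is to let the queue clean up via removeOnComplete/removeOnFail:

```ts
import { Queue } from 'bullmq';

// Age/count based cleanup so completed and failed job keys don't pile up in Redis.
const transcodeQueue = new Queue('transcode', {
  connection: { host: 'localhost', port: 6379 },
  defaultJobOptions: {
    removeOnComplete: { age: 3600, count: 1000 }, // keep at most 1000 completed jobs, max 1 hour
    removeOnFail: { age: 24 * 3600 },             // keep failed jobs for a day for debugging
  },
});
```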
Why not use Kafka?
You are using Bull, not BullMQ.
How do you scale a big PostgreSQL database with 100+ million records?
I believe the consumers should be an app separate from the API, no? The way you are doing it in the video, if you have 5 replicas and send 5 requests, a 6th request will hang because the transcoding processes are blocking all 5 API nodes.
Definitely - nice add. If we want to produce as fast as possible and not be affected by the consumers, this is a great architecture you describe.
Well, actually, you can use the same app but offload the transcoding to a worker thread, which is designed for CPU-intensive tasks, or you can spawn a process if you are using ffmpeg, for example. This way you put the code that spawns the new thread/process inside the Bull job, and you will be able to scale to a practically unlimited number of concurrent transcodings, because Bull will hand the processor the next job only after it finishes the previous one (unless you override the default concurrency).
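A sketch of that idea, assuming @nestjs/bullmq and that ffmpeg is available on the PATH (the queue name, job payload, ffmpeg arguments, and concurrency value are illustrative, not from the video): the heavy work runs in a child process, so the Node event loop stays free to serve API requests, and the worker's concurrency option bounds how many transcodes run at once per replica.

```ts
import { Processor, WorkerHost } from '@nestjs/bullmq';
import { Job } from 'bullmq';
import { spawn } from 'node:child_process';

@Processor('transcode', { concurrency: 2 }) // at most 2 ffmpeg processes per replica
export class TranscodeConsumer extends WorkerHost {
  async process(job: Job<{ input: string; output: string }>): Promise<void> {
    const { input, output } = job.data;

    // Offload the CPU-heavy work to an ffmpeg child process; the event loop stays free.
    await new Promise<void>((resolve, reject) => {
      const ffmpeg = spawn('ffmpeg', ['-y', '-i', input, output]);
      ffmpeg.on('error', reject);
      ffmpeg.on('close', (code) =>
        code === 0 ? resolve() : reject(new Error(`ffmpeg exited with code ${code}`)),
      );
    });
  }
}
```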
BullMQ should be used instead of Bull.
This is not BullMQ
You are using Bull, but the video title says BullMQ. This is not good.
Thanks for pointing that out. I've updated it
I don't like Nest.js. Over-engineered and complicated.