You come up with things that no one else makes.
You are awesome Piyush.
Hey, you are doing great work! A request: can you please continue such tutorials and also teach about scalability, microservices, chat servers with rooms, video calls, deployment on Docker/K8s, etc.?
Basically, what software engineering looks like in real life. People on YouTube are just doing Next.js stuff and I can't understand anything...
Amazing tutorial, helping us do better engineering. Industry-level standards!!
Thanks Piyush, I was waiting for this video. I love your scalability and system design videos.
What an amazing video this is. 😮 Greatly done.
Thank you so much
Finally completed this project and gained a lot of knowledge. Thank you Piyush Sir ❤️👍🎉
Do provide a GH link.
Man! So much of learning in these 2 videos. Thanks!
Really something awesome. Practically answers all the system design questions.
Just wow 🤩 🤩 . I have learned something new that no one teaches us. Highly appreciable work. Thank you . 🙏
Please make a full video about RabbitMQ 🙏
That is something I was planning to build and had a lot of confusion about; now everything is clear. Thank you, brother.
Fresh, unique stuff; no one is teaching this on YouTube.
Also, can you teach more about Turborepo in detail, like testing, linting, etc. in a Turborepo?
Man, I love how professionally you do your work ❤
You are the best, Piyush bhaiya ❤
Love you brother ❤❤❤
And my one request: please make a series on a microservices project, showing how to build a project using this architecture.
Great video, please continue this series.
I like your teaching style. ❤
Awesome course and thank you so much.
Make more awesome, valuable content with monorepo architecture in Node.js.
God bless you sir and thank you once again.
Very nice learning ❤🎉 It will definitely have an impact on the community.
Great content !! Keep sharing your experience ❤
Better than paid course ❤
Thanks man. Learned a lot.
Love you ❤❤❤
Awesome content, brother. You can further extend this project ❤
Love you man, you are just amazing.
Hi, great work Piyush. Can you please create a video on how we can deploy a Turborepo project, like the current scalable realtime chat app, to servers (Vercel)?
Bro is literally creating his own empire in backend mastery.
Why are we using Redis along with Kafka? Can't we simply use Kafka's pub/sub for the two servers to communicate instead of Redis? Can someone please explain the advantages or tradeoffs of doing so?
I'll give an example based on Google Pub/Sub or a storage queue, which I use in place of Kafka. The reason is that if one consumer takes a message and ACKs it, that message is gone from the topic, and other instances with the same subscription won't get it. Of course, we can create a separate subscription for each instance, but that is a manual process, unlike Redis pub/sub.
Very helpful videos Piyush.
Just have one doubt: how do we retrieve data in case the user refreshes the page and we have to fetch the last few messages? Because we can't query the DB in that case.
Awesome video ❤🎉
Piyush, please make an introduction video on what Turborepo is; I'm confused by it.
You got a new subscriber.
What if the user wants to see old messages while they are still inside Kafka?
Keep posting such content please
Awesome content, Piyush Bhai.
One question: why produce the message to Kafka when we get it from Redis? Why not produce it when publishing to Redis?
Please make a video on MVC as well, brother.
Why do you use both Redis and Kafka? Can we use Kafka only?
You are making a new entry in the database every time a new message is produced, and as we know, databases have low throughput. So can we run the consumer, or a second consumer, at a certain interval of time to store the produced data in the DB?
If Kafka too is inserting one message at a time into the DB, then where is the benefit, except that when the DB goes down the messages will still be there in Kafka and will be inserted into the DB once it is up again?
Top-notch content.
One question ---> So, if we refresh the browser, the values get removed. So can we get the values again from the database, by making a simple GET request, to keep the messages even after refreshing the browser?
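For anyone curious, a minimal sketch of that GET-on-refresh idea with Express and Prisma; the Message model, the createdAt column, and the port are assumptions for illustration, not code from the video:

```ts
import express from "express";
import { PrismaClient } from "@prisma/client";

const app = express();
const prisma = new PrismaClient();

// On page load/refresh the client calls this once to hydrate the chat history,
// then keeps listening on the socket for new messages.
app.get("/messages", async (_req, res) => {
  const messages = await prisma.message.findMany({
    orderBy: { createdAt: "desc" }, // newest first from the DB
    take: 50,                       // only the last few messages
  });
  res.json(messages.reverse());     // send oldest-first for rendering
});

app.listen(8000, () => console.log("HTTP server on :8000"));
```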
You earlier said that the consumer is a separate Node.js server, but you defined the consumer in the primary server itself. Why? Is that for the sake of simplicity? If so, then how will we get the same Prisma instance if we had a standalone consumer server?
You can just create a new instance of the Prisma client, and if you are using Turborepo then you can just import it from a shared package.
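Roughly what that shared-package approach could look like in a Turborepo; the package path and name ("@repo/db") are hypothetical:

```ts
// packages/db/src/index.ts — a small shared package inside the monorepo, e.g. "@repo/db".
import { PrismaClient } from "@prisma/client";

// One client per process; every service that imports this package gets its own
// connection pool but shares the same schema and generated client.
export const prisma = new PrismaClient();
```

A standalone consumer service would then just `import { prisma } from "@repo/db"` instead of constructing its own client, so the socket server and the consumer stay in sync on schema and migrations.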
@opsingh861 A new instance would probably lead to a new connection, I guess, and probably new migrations... Yeah, Turbo might be a better option.
Please make more videos as a continuation of this!!
Here I was curious whether Kafka could have replaced Redis. I am not sure if Redis is required here if we are using Kafka? Please let me know your thoughts.
If it is a group chat, then we would have to model more data in Postgres, like the room ID and all the users of that group.
Sir, please make a backend project using microservices.
Sir, why did you not use a Redis list to store the messages?
Hi Piyush. Thanks for the amazing video!!!
Just one question: couldn't we use Kafka directly as a pub/sub instead of using Redis separately, where all the servers and the processing server (running write queries on Postgres) subscribe to the 'MESSAGES' Kafka topic?
amazing video
Brother, one request: please also make a video on pagination and infinite scroll. This topic is asked a lot in React.js and Node.js interviews. Also explain which one to use when; please make it in-depth, brother.
Where is the video Piyush sir mentioned, i.e. the first part of this video?
Love it, bhai.
I had one doubt, can anyone please help: where is the complete frontend and deployment part of this project?
Hey @piyushgargdev,
Is Kafka's consumption of messages incremental, or will all the data be delivered every time? If the latter, how can we handle only the incremental data and not the whole set?
Hello Piyush,
I hope you are doing well,
Can you please make a video on using AWS MSK with Kafka? I am really trying hard to make Kafka work with AWS MSK, but things are not getting solved.
Can I run two databases with Prisma in the same project, like PostgreSQL and MySQL?
Can you help us learn how to deploy monorepo applications?
Piyush, you are sending data into the database one by one, not in bulk, right?
Just one question: why are you using cloud services for Postgres and Kafka? Isn't using a Docker container locally free and less time-consuming as well?
awesome value
A doubt: you are able to receive messages in Kafka at high velocity because Kafka is meant for that, but when you insert into the DB inside eachMessage, how does that make any difference? In the event of a high velocity of messages, the eachMessage function will run an insert query against the DB for every message. For example, if you receive 100,000 messages within a 1-2 second interval, your DB will get 100,000 insert operations, which will bring it down. And if that happens, what is the benefit of using Kafka? I understand there will be no downtime because Kafka will still be active, but there should be something that reduces the insert operations against the DB.
Actually, the consumer should be an altogether different microservice. It would consume the messages in batches and do batch insertion. Let's say we configured the DB to support 10k WPS; then we'd consume 10k messages at a time and insert them into the DB. This is known as async processing.
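A minimal sketch of such a batch-insert consumer with kafkajs and Prisma; the Message model, topic name, group id, and broker address are assumptions, not the video's exact code:

```ts
import { Kafka } from "kafkajs";
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();
const kafka = new Kafka({ clientId: "db-writer", brokers: ["localhost:9092"] });
const consumer = kafka.consumer({ groupId: "db-writers" });

async function startBatchConsumer() {
  await consumer.connect();
  await consumer.subscribe({ topic: "MESSAGES", fromBeginning: true });

  await consumer.run({
    // eachBatch hands us many messages at once instead of one insert per message.
    eachBatch: async ({ batch, resolveOffset, heartbeat }) => {
      const rows = batch.messages
        .filter((m) => m.value)
        .map((m) => ({ text: m.value!.toString() }));

      // One bulk insert for the whole batch instead of N single-row inserts.
      await prisma.message.createMany({ data: rows });

      for (const m of batch.messages) {
        resolveOffset(m.offset); // record progress so a crash doesn't re-insert the whole batch
      }
      await heartbeat();
    },
  });
}

startBatchConsumer();
```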
When the DB is down or not able to insert that message, will the consumer resume from that message or from the next one?
From that message, because the message is still stored inside Kafka.
One more like... Hey mate, I have a query: how does Kafka understand which consumer it needs to send the message to? I did see your Kafka video... struggling to understand this... And how can I build the same for a mobile app?
Can you show the deployment process?
Brother, please also do the deployment; it causes a lot of problems.
Can anyone give me the previous video link?
Kafka itself has a pub/sub model. Rather than saving data in two places (Redis and Kafka), can we create a data aggregation function that caters to the user messaging service, working alongside yours for handling the DB update queries?
We should use either Kafka or Redis, right? Not both. @rohitpandey4411
One thing I didn't understand is why you ran the consumer function in the init function of the index.ts file. I couldn't understand the logic behind it. Everything else is top-notch.
Make a video on PostgreSQL.
This is a beneficial video, but I didn't like the ending. With the try/catch, if the DB crashes or something goes wrong with it, we pause for 1 minute and restart from the beginning. Can we do something else that would always be good for the DB?
DB crashing is possible but not likely. If your DB is managed by a cloud provider like AWS, then you have 99.99999% durability. Cloud providers usually have robust infrastructure, which should be the least of your worries.
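For context, the pause-and-retry pattern being discussed looks roughly like this with kafkajs 2.x (which exposes a pause callback in the eachMessage payload) and Prisma; the model, topic, group, and broker names are assumptions, not a copy of the video's code:

```ts
import { Kafka } from "kafkajs";
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();
const kafka = new Kafka({ clientId: "db-writer", brokers: ["localhost:9092"] });
const consumer = kafka.consumer({ groupId: "db-writers" });

async function startConsumer() {
  await consumer.connect();
  await consumer.subscribe({ topic: "MESSAGES", fromBeginning: true });

  await consumer.run({
    eachMessage: async ({ message, pause }) => {
      if (!message.value) return;
      try {
        await prisma.message.create({ data: { text: message.value.toString() } });
      } catch (err) {
        console.error("DB write failed, backing off for 60s", err);
        // Pause this partition so messages stay safely in Kafka, resume after a minute.
        const resume = pause();
        setTimeout(resume, 60 * 1000);
        // Re-throw so the offset is not committed and this message is retried after resume.
        throw err;
      }
    },
  });
}

startConsumer();
```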
How do you handle real-time notifications in Vue & Node, like FB handles its post notifications? For example, I have an assignment management system and I'm logged in as an admin; when someone uploads/sends a new assignment, the notification should arrive in real time and show in a toast. How can this be done?
love you.
When the messages are consumed, they should be deleted from the topics in Kafka, right? But they are still present there. Is it supposed to be like this?
Yes, the messages stay there in case of a failure; you will need them to replay events back to the current state. The events are stored in Kafka but are marked as read (via the consumer's committed offset) once they have been consumed, so those events won't be read twice.
Hi Piyush, since we are already using Redis, would it be better to use Redis Streams instead of Kafka? Please reply.
Yes. Redis Streams is an alternative to Kafka because it also uses consumer groups and offsets. However, Kafka can handle very large throughput, on the order of a million requests per second.
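A tiny sketch of the Redis Streams equivalent with ioredis; the stream, group, and consumer names are made up for illustration:

```ts
import Redis from "ioredis";

const redis = new Redis();

async function demoStreams() {
  // Producer side: append an entry to the stream (like producing to a topic).
  await redis.xadd("chat:messages", "*", "text", "hello from streams");

  // Create the consumer group once; ignore the error if it already exists.
  await redis
    .xgroup("CREATE", "chat:messages", "db-writers", "0", "MKSTREAM")
    .catch(() => {});

  // Consumer side: members of the same group split entries between them,
  // similar to a Kafka consumer group, and track progress via entry IDs.
  const entries = await redis.xreadgroup(
    "GROUP", "db-writers", "writer-1",
    "COUNT", 10,
    "STREAMS", "chat:messages", ">"
  );
  console.log(entries);
}

demoStreams().then(() => redis.quit());
```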
Hi Piyush and everyone, I have a doubt. When there are multiple servers, each of them will be consuming messages from Kafka and writing to Postgres, thereby creating as many entries in the DB for every message as there are servers. Is that desired?
Spin up one more server on a different port, send a message, and there will be two entries for this message in the DB.
Yes, you are correct. If the logic for consuming messages and writing to PostgreSQL is placed directly within the message-receiving event, it can lead to duplicated message entries. In my implementation, I've used Redis for inter-server communication; I have not used Kafka yet. When a message is sent (triggered by the "send" event), I publish it to the "MESSAGES" channel in Redis, and on the "receive" event I broadcast the message to all connected clients.
Regarding the storage of messages in PostgreSQL, I've introduced a global array named "messageBatch". When a message is sent ("send" event), I push the message into this array. The important aspect is the use of setInterval to periodically process this array (take a copy of messageBatch and empty it so new messages can accumulate) and write its contents to PostgreSQL. The data is successfully stored.
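Roughly what that buffering approach looks like, sketched from the comment's description; the event name, Message model, port, and 5-second interval are assumptions:

```ts
import { Server } from "socket.io";
import Redis from "ioredis";
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();
const pub = new Redis();
const io = new Server(8000);

// In-memory buffer of messages waiting to be persisted.
let messageBatch: { text: string }[] = [];

io.on("connection", (socket) => {
  socket.on("event:message", async ({ message }: { message: string }) => {
    await pub.publish("MESSAGES", JSON.stringify({ message })); // fan-out as before
    messageBatch.push({ text: message });                       // buffer instead of inserting now
  });
});

// Every 5 seconds, swap the buffer out and write it in one bulk insert.
setInterval(async () => {
  if (messageBatch.length === 0) return;
  const toWrite = messageBatch; // take the current batch
  messageBatch = [];            // new messages accumulate in a fresh array
  await prisma.message.createMany({ data: toWrite });
}, 5000);
```

The tradeoff is that messages buffered in memory are lost if the process dies before the next flush, which is exactly the durability gap Kafka closes.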
This is a valid issue. It won't occur if we put the produceMessageForKafka logic right after publishing to Redis (on the send path) instead of putting it in the subscribe handler.
Please make the same videos for Python.
First comment 😁
I have one question. Since you introduced Kafka into the project, couldn't we remove Redis from it? Because Redis was being used for pub/sub, which Kafka can do as well.
You mean we can subscribe the servers to Kafka topics?
Redis supports push-based delivery of messages, which means messages published to Redis are delivered to subscribers automatically and immediately. Kafka, on the other hand, supports pull-based delivery: messages published to Kafka are never pushed directly to consumers; consumers subscribe to topics and ask for messages when they are ready to deal with them.
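To make the push vs. pull contrast concrete, a small sketch; the channel/topic names and broker address are illustrative:

```ts
import Redis from "ioredis";
import { Kafka } from "kafkajs";

async function demo() {
  // Redis pub/sub: push-based. The callback fires the moment something is published;
  // if no subscriber is listening at that moment, the message is simply lost.
  const sub = new Redis();
  await sub.subscribe("MESSAGES");
  sub.on("message", (_channel, payload) => {
    console.log("pushed by redis:", payload);
  });

  // Kafka: pull-based. Messages sit in the topic (retained on disk) and the consumer
  // fetches them when it is ready, tracking its position via the consumer group's offset.
  const kafka = new Kafka({ clientId: "demo", brokers: ["localhost:9092"] });
  const consumer = kafka.consumer({ groupId: "demo-group" });
  await consumer.connect();
  await consumer.subscribe({ topic: "MESSAGES", fromBeginning: true });
  await consumer.run({
    eachMessage: async ({ message }) => {
      console.log("pulled from kafka:", message.value?.toString());
    },
  });
}

demo();
```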
Yes, one might think of using only Kafka or Redis. But here we need both.
Here we have 2 requirements:
1. Inter-Server Communication.
Meaning a message sent by user1 on server1 should be received by all the users present on different servers. Here we can use the Redis pub/sub model: the Redis publisher publishes the message to the channel "MESSAGES", and all the Redis subscribers of this channel receive it, including the server which sent the message. Thus inter-server communication is achieved.
If we used Kafka in this case, the Kafka producer would produce the message to the topic "MESSAGES". All the Kafka consumers (on all servers) belong to the same consumer group because they have the same groupId, so only one consumer would receive the message on the "MESSAGES" topic, and the other Kafka consumers (servers) would not receive it.
2. Storage of Messages in Database.
Here we can use Kafka. The Kafka producer produces the message to the topic "MESSAGES", and only one Kafka consumer of this topic receives it; that consumer stores it in the database. Like I said earlier, all the consumers here have the same groupId, hence only one of them receives each message.
If we used Redis here, all the Redis subscribers would receive the messages and store them in the database, resulting in duplicates.
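Putting the two requirements together, a compact sketch; the event, channel, and topic names are assumptions, and the Kafka produce is placed on the send path so only the originating server produces (which also avoids the duplicate-produce concern raised earlier in the thread):

```ts
import { Server } from "socket.io";
import Redis from "ioredis";
import { Kafka } from "kafkajs";

const pub = new Redis();   // publishing connection
const sub = new Redis();   // separate connection for subscribing
const kafka = new Kafka({ clientId: "chat-server", brokers: ["localhost:9092"] });
const producer = kafka.producer();
const io = new Server(8000);

async function init() {
  await producer.connect();
  await sub.subscribe("MESSAGES");

  io.on("connection", (socket) => {
    socket.on("event:message", async ({ message }: { message: string }) => {
      const payload = JSON.stringify({ message });
      // Requirement 1: inter-server fan-out via Redis pub/sub.
      await pub.publish("MESSAGES", payload);
      // Requirement 2: persistence via Kafka; one consumer-group member writes to Postgres.
      await producer.send({ topic: "MESSAGES", messages: [{ value: payload }] });
    });
  });

  // Every instance (including this one) receives the Redis message and
  // emits it to its own connected sockets.
  sub.on("message", (_channel, payload) => {
    io.emit("message", payload);
  });
}

init();
```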
Here, all the server instances subscribe to a Redis channel for incoming messages. I think we could simply remove Redis and make every server long-poll Kafka for messages.
@catchroniclesbyanik Yes, because Kafka also provides a pub/sub mechanism. I think Piyush bhai made the first video just to solve the problem, and this one for full scalability.
Can I do this with Mongo?
Yes
Please provide English subtitles 😢.
Please reply, anyone: can I make a chat app using Java networking concepts? Is it possible? Please reply.
Yes. Explore Netty.
👏👏
A small problem turned into a very big one 😂
🙏👍
Didn't enjoy it, bro.
Video title in English... video audio in Hindi... No offense, but bruh... what are you doing?
But at least the comments are in English 😊