Scaling Websockets with Redis, HAProxy and Node JS - High-availability Group Chat Application

  • Published Nov 24, 2024

Comments • 169

  • @hnasr
    @hnasr  4 years ago +11

    More resources
    1:00 websocket th-cam.com/video/2Nt-ZrNP22A/w-d-xo.html
    9:25 Redis th-cam.com/video/sVCZo5B8ghE/w-d-xo.html
    9:45 pub/sub th-cam.com/video/O1PgqUqZKTA/w-d-xo.html
    11:24 microservices th-cam.com/video/9sAg7RooEDc/w-d-xo.html
    11:30 haproxy th-cam.com/video/qYnA2DFEELw/w-d-xo.html

  • @johnstorm589
    @johnstorm589 3 years ago +37

    This hits a sweet spot between a few things: a complex topic like load balancing, docker and docker compose (just the tip), and sockets, all under a practical example. This is great. Thank you!

    • @twitchizle
      @twitchizle 1 year ago

      It's like the G-spot

  • @sunnyrajwadi
    @sunnyrajwadi 4 years ago +24

    Solves real-life problems. Thank you.

  • @YGNCode
    @YGNCode 4 years ago +16

    This is really awesome. My current company uses WebSockets and doesn't need to scale yet, but it might in the future, so I was checking around. Your video explains it very well. Thanks!

  • @DiaryOfMuhib
    @DiaryOfMuhib 3 years ago +8

    I was really struggling with WebSocket scaling. Nicely explained!

  • @dearvivekkumar
    @dearvivekkumar 4 years ago +2

    Hi Hussein,
    Thanks for making all these great videos. These days I check daily to see whether you have uploaded a new video. All your videos are very useful and answer a lot of my questions.

  • @hoxorious
    @hoxorious 3 years ago +2

    By far one of the best channels I have ever subscribed to 👍

  • @jackykwan8214
    @jackykwan8214 2 years ago +2

    Really wonderful video, keep going!!
    I love how you simplify the talk, with a practical POC example!

  • @letsflow.oficial
    @letsflow.oficial 9 months ago

    Hey Hussein, first of all, I need to say that I love your videos; they are very informative and clear, even satisfying for relaxing purposes, haha. Relax while we learn :) Thank you for this video on WebSockets and Redis. Could you please explain how we could use this architecture for model handling? Suppose a database stores all the messages along with a central copy of the model, with distributed copies of the model in each client. We would then use the command pattern to alter the model based on commands, keeping a stack of commands, and maybe a snapshot, to replay commands and have the ability to do and undo changes to the model. I'm facing this challenge right now and would love to hear from you on that.

  • @zcong3402
    @zcong3402 2 years ago

    Very nice video. It provides reasonably good depth on the architectural details of how to build a real-time application, and especially on how Redis (or any application that can act as a broker) fits into this architecture. Thank you!

  • @ryanquinn1257
    @ryanquinn1257 1 year ago

    Such a quick powerful demo.
    If you’re breaking Redis you’re already gonna need to be doing more advanced stuff than this haha.

  • @jackcurrie4805
    @jackcurrie4805 2 years ago +1

    Your channel is fantastic Hussein, thanks for making such great content!

    • @hnasr
      @hnasr  2 years ago

      Thanks Jack

  • @basselturky4027
    @basselturky4027 2 years ago

    This channel is a gold mine.

  • @jongxina3595
    @jongxina3595 4 years ago

    Dude you have no idea how GLAD I am to have found this video! Amazing 😀

    • @hnasr
      @hnasr  4 years ago

      Ben Sharpie enjoy! 😊

  • @peterlau9731
    @peterlau9731 1 year ago +1

    Really appreciate the video! Perhaps you could also cover the DB design/optimization for a chat app?
    I believe many interesting topics like sharding and database selection could be covered; thanks, and looking forward to future videos!

  • @mytheens6652
    @mytheens6652 3 years ago +5

    I wish I could get you as my senior developer.

  • @M.......A
    @M.......A 2 years ago +4

    At the end of the video, you mentioned that Redis is a single point of failure. Isn't it also the case with HAProxy? Thanks for the video.

    • @peterhindes56
      @peterhindes56 2 years ago +1

      Yes. If you host at multiple sites, you could replicate Redis across them, and then DNS will handle your load balancing.

  • @neketavorotnikov6743
    @neketavorotnikov6743 1 year ago +1

    So as I understand it, our WS proxy server holds every WS connection from the clients. So the question is: if our WS app server needs to be scaled out to hold N WS connections, why is our proxy able to hold them all by itself? Why is there such a big performance difference between the WS proxy server and the WS app server?

  • @lonewolf2547
    @lonewolf2547 3 years ago

    You just solved one of my biggest problems... thanks a ton!

  • @ZoraciousDCree
    @ZoraciousDCree 4 years ago +19

    Really appreciate all that you have to offer! Good pace in presentation, interesting side notes, and keeping it fun. Thanks.

    • @hnasr
      @hnasr  4 years ago

      Thank you 🙏 glad you liked the content 😍

  • @uneq9589
    @uneq9589 2 years ago +1

    That was a really nice explanation. Just one question on the reverse proxy: what is the limit on the number of WebSocket connections the reverse proxy can handle?

  • @shailysangwan3977
    @shailysangwan3977 3 years ago

    The content is explained well and spontaneously enough to follow, but the pitch of the voice varies too much for the volume to stay constant through the video. (I'm using earphones, so it might just be me.)

  • @programmer1356
    @programmer1356 2 years ago

    Brilliant. Inspirational. Thank you very much.

  • @anthonyfarias321
    @anthonyfarias321 4 years ago +1

    I recently implemented something very similar for a phone dialer. I used Socket.IO and a library for connecting Socket.IO with Redis, the Socket.IO Redis adapter. It works smoothly.

  • @vewmet
    @vewmet 4 years ago +1

    Love your content bro! Awesome

  • @ragavkb2597
    @ragavkb2597 3 years ago +1

    Good video, and I enjoyed it. In your example you stored the connections in an array in Node.js. Is this typically how real-world applications do it, or are there other patterns? It would be nice to have tutorials on connection drops from a client and how things eventually get cleaned up on the server.

  • @sanderluis3652
    @sanderluis3652 4 years ago +2

    wow, very clear tutorial

    • @hnasr
      @hnasr  4 years ago

      Thanks Sander!

  • @rajatahuja4720
    @rajatahuja4720 4 years ago

    I was looking for the same. You rock :)

    • @hnasr
      @hnasr  4 years ago

      Thanks, glad you found it!

  • @lucas_badico
    @lucas_badico 4 years ago

    Just built one like this using Go. It was really satisfying!

    • @hnasr
      @hnasr  4 years ago +1

      Lucas Gomes de Santana nice work! It does feel satisfying when you finish a project.

    • @lucas_badico
      @lucas_badico 4 years ago

      I really wanted to discuss my approach with you. I built my WebSocket server in Go, and I have a feeling that I don't need a Redis connection because my pub/sub is inside the application. Anyway, thanks for the videos; I'm learning a lot from them.

  • @saidkorseir192
    @saidkorseir192 3 years ago +1

    Great work, Hussein. Super clean. I have a question: what if I create a docker-compose.yml with only ws1 and run "docker-compose up --scale ws1=4"? What does the HAProxy config file need to look like?
    I couldn't find a way. I also tried balancing with nginx.

  • @hichem6555
    @hichem6555 1 year ago

    Thank you, this video solves a big problem I had!!!! 💪

  • @kiranparajuli6724
    @kiranparajuli6724 2 years ago

    Hi Hussein, really nice video. It was very helpful and informative. At one point you talked about a drawback of Redis: that a single server has to register two clients, one as subscriber and one as publisher. What software did you mention that solves this problem? It was a little unclear in the video.

  • @UzairAhmad.
    @UzairAhmad. several months ago

    I implemented the same thing in Django, but now I understand why we use Redis.

  • @sthirumalai
    @sthirumalai 4 years ago +2

    Hi Nasser, thanks for the video; it's pretty informative.
    What if one of the WebSocket servers crashes while serving traffic? How can we guarantee delivery to the clients connected to that WS server?
    Also, how is HA guaranteed in Redis?
    Awaiting your response.

    • @hnasr
      @hnasr  4 years ago +3

      Santhoshkumar Thirumalai since WebSockets are stateful, if a server crashes the client MUST restart the connection with the reverse proxy so it gets routed to another server.
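A sketch of the client-side reconnect this reply describes (the backoff constants and URL are illustrative assumptions, not from the video):

```javascript
// Reconnect helper: when the WebSocket closes (e.g. its backend
// crashed), dial the reverse proxy again so HAProxy can route the
// new connection to a healthy server.
function backoffMs(attempt) {
  // Exponential backoff, capped at 30 seconds.
  return Math.min(1000 * 2 ** attempt, 30000);
}

function connect(url, attempt = 0) {
  const ws = new WebSocket(url); // global in browsers and Node >= 22
  ws.onopen = () => { attempt = 0; };
  ws.onclose = () =>
    setTimeout(() => connect(url, attempt + 1), backoffMs(attempt));
  return ws;
}
```

Usage would be something like connect("ws://localhost:8080") against the HAProxy frontend.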

    • @sthirumalai
      @sthirumalai 4 years ago

      @@hnasr: Thanks for the response. I did some research and found an interesting article on session management using AWS ElastiCache for Redis to persist the sessions. The solution you gave may not scale well, I suppose.
      aws.amazon.com/caching/session-management/

  • @ashuthe1
    @ashuthe1 4 months ago

    Very Informative :)

  • @giangviet5155
    @giangviet5155 2 years ago

    This video only explains load balancing for something stateful like WS, not scaling. When you talk about scaling, you must solve both the scale-out and scale-in problems, but with round-robin and a HAProxy config file like that, it seems impossible to scale in/out. Anyway, thanks for a great video.

  • @kailashyogeshwar8492
    @kailashyogeshwar8492 2 years ago

    Very nice explanation and demo.
    One question, though: the demo shows broadcasting of messages to all connected clients. For delivery to a single client, does the backend the user is connected to also subscribe to a user-specific topic?
    E.g. User 1 is connected to backend 4444; will that backend also subscribe to a channel based on the userId (or something else) to receive direct messages? Is there an alternative approach to doing the subscription?

  • @ciubancantheb3st
    @ciubancantheb3st 3 years ago +1

    Can you do a tutorial on the same thing but with a Redis cluster? Since Redis is single-threaded, it might throttle the processes when you are as big as Facebook.

  • @vilmarMartins
    @vilmarMartins 2 years ago +2

    Would the number of connections in HAProxy be a problem?

    • @hnasr
      @hnasr  2 years ago +1

      It can at large scale (hundreds of thousands). That's when you would have two HAProxy instances and either use keepalived with a virtual IP or load balance them on the app client side through DNS.
      I wouldn't go there unless absolutely necessary, of course.

    • @vilmarMartins
      @vilmarMartins 2 years ago

      @@hnasr Excellent! Thanks a lot!!!

  • @ProgrammerRajaa
    @ProgrammerRajaa several months ago

    Thanks for the awesome content, but I have a doubt:
    we have a reverse proxy that needs to keep every WS connection active. Won't the proxy get overloaded?
    If so, what is the purpose of using a reverse proxy? We could just use a single WS server.
    Can you clear up my doubt?

  • @gurjarc1
    @gurjarc1 2 years ago

    Nice video. I have one question: if there are a thousand users, how will the load balancer know which user's call maps to which stateful server? Will we refer to some DB that holds the users and do the mapping?

  • @stormilha
    @stormilha 3 years ago

    Awesome content!

  • @sreevishal2223
    @sreevishal2223 4 years ago

    Awesome 👌👌, all I wanted at the moment!! Also, instead of building the same container multiple times with different ports, can I spin up a Docker swarm?

    • @hnasr
      @hnasr  4 years ago +1

      Sure you can!

  • @shoebpatel4027
    @shoebpatel4027 4 years ago +4

    Hey Hussein, make a video on Elasticsearch in detail.

  • @earlvhingabuat8984
    @earlvhingabuat8984 3 years ago

    New Subscriber Here! Thanks for this awesome video!

    • @hnasr
      @hnasr  3 years ago +1

      🙏🙏🙏

  • @sezif3157
    @sezif3157 2 years ago

    Thanks for the video, Hussein. One question: at 13:02, all the backend servers in haproxy.cfg are linked to 8080 (ws1:8080, ws2:8080, and so on), but in docker-compose you gave them APPID, different from 8080, so inside the docker-compose network those servers will be on the port you gave via the environment. Should this be ws1:APPID1, ws2:APPID2, etc.?
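The commenter's reading is right in spirit: the port in haproxy.cfg must match whatever port each container actually listens on. A hedged sketch of the two consistent options (server names are illustrative, not the video's exact file):

```
# Option 1: every container binds the same internal port (e.g. 8080);
# the compose environment variable is then only an app identifier.
backend ws_servers
    balance roundrobin
    server ws1 ws1:8080
    server ws2 ws2:8080

# Option 2: if each app really binds its APPID as its port, the
# config must reference those ports instead:
#   server ws1 ws1:1111
#   server ws2 ws2:2222
```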

  • @developerjas
    @developerjas 3 years ago

    You saved my life!

  • @sariksiddiqui6059
    @sariksiddiqui6059 4 years ago +1

    What does load balancing look like for a WebSocket? Are sticky sessions at layer 7 enough? Since it's a WebSocket, the TCP connection would remain open anyway, no?

    • @hnasr
      @hnasr  4 years ago +1

      Good question. A WebSocket starts as layer 7 proxying (the upgrade), then funnels back to layer 4 at the stream level.

  • @EhSUN37
    @EhSUN37 2 years ago

    We subscribe and publish to "livechat", but we are receiving from "message"? What is "message", and what happened to "livechat" then? Very nice explanation, dude!
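For anyone else stuck here: in the node-redis v3 API (the style used in the video), "message" is the event name the client emits for every channel it is subscribed to, while "livechat" is the channel name; the callback receives the channel so you can tell them apart. A minimal sketch (the handler wiring is illustrative):

```javascript
// "message" fires for ANY subscribed channel; the first argument
// tells you which channel the payload came from.
function wireSubscriber(subscriber, onChat) {
  subscriber.on("message", (channel, payload) => {
    if (channel === "livechat") onChat(payload);
  });
  subscriber.subscribe("livechat");
}

// Typical use with node-redis v3 (requires a running Redis):
//   const redis = require("redis");
//   wireSubscriber(redis.createClient(), (m) => console.log(m));
//   redis.createClient().publish("livechat", "hello");
```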

  • @trollgg777
    @trollgg777 3 years ago

    Let's say you have an API gateway, and behind that an auth microservice that validates requests. You also have a cluster behind a load balancer with a WebSocket instance. How do you connect your clients to the WebSocket? lol, I'm struggling with this!!!

  • @arbaztyagi123
    @arbaztyagi123 2 years ago

    I have one doubt: is the way you stored the connections in an array a good way? And how can I store these connections in a central store or memory where all the other servers (machines) can access them? Thanks.

  • @denisrazumnyi6456
    @denisrazumnyi6456 4 years ago

    Well done !!!

    • @hnasr
      @hnasr  4 years ago

      🙏

  • @TheNayanava
    @TheNayanava 3 years ago

    Hi Nasser, I have never implemented WebSockets, but here is something I want to understand.
    When a persistent TCP connection is established between the client and the server, how do we decide which ports to open on the server side?
    For example: in a normal HTTP scenario, on the edge we would enable 443 to allow only HTTPS, and then on the actual servers open up 443 or 80 depending on whether or not we have a zero-trust architecture. But how is it done in the case of WebSockets? I understand we maintain a registry to store which connection a server event should be pushed to, so it can be routed correctly to the client. How many ports do we open on the server side? In short, when anyone says "we scaled to 1 million connections on a single machine," how is that achieved?

  • @sergiosandoval3821
    @sergiosandoval3821 3 years ago

    Master !!!!!!!!

  • @robinranabhat3125
    @robinranabhat3125 2 years ago

    Just curious: in this particular example, would clients from different tabs (not windows) be considered the same or not?

  • @houssemchr1539
    @houssemchr1539 4 years ago

    Well explained, thanks. Can you explain how push notifications work, like FCM, and whether there is any open-source alternative?

    • @hnasr
      @hnasr  4 years ago +1

      houssem chr thanks! I made a video on push notifications here: th-cam.com/video/8D1NAezC-Dk/w-d-xo.html

  • @animatrix1851
    @animatrix1851 4 years ago +1

    Could you give a situation where you'd need to scale? When do you do this: when the socket has >64k connections, or when you've maxed out RAM because of a high load of messages?

    • @hnasr
      @hnasr  4 years ago +4

      Adithya angara one example is when one server can no longer handle all your users. This needs to be tested because it depends on the app. Your app might be very CPU/memory hungry and only handle 10k WebSocket connections; or your app might be light and efficient and handle 100k.
      You need to monitor your server and your clients and see if the experience starts to degrade.

  • @momensalah8497
    @momensalah8497 4 years ago

    Well explained, thanks.
    But I have a question: how can all these Node apps listen on one port (8080) without an error?
    Should they be mapped or exposed on different ports from each other?

    • @hnasr
      @hnasr  4 years ago +1

      Momen Salah thanks, Momen!
      They listen on the same port without any error because they are different containers, each of which has a unique IP address. If they were on the same host network then, correct, you would have to pick different ports.
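A sketch of that answer in docker-compose terms (service names and values are illustrative, not the video's exact file):

```
# Each service gets its own container and network namespace, so all
# of them can bind 8080 internally without conflict; only the proxy
# publishes a port on the host.
version: "3"
services:
  ws1:
    build: .
    environment:
      - APPID=1111
  ws2:
    build: .
    environment:
      - APPID=2222
  haproxy:
    image: haproxy:latest
    ports:
      - "8080:8080"
```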

  • @ahmeddaraz8494
    @ahmeddaraz8494 4 years ago

    Inspiring video, Hussein, thanks. But I have a question: can we add an HA mode for HAProxy (e.g. by using keepalived) that has no impact on the established TCP WebSocket connections?

    • @hnasr
      @hnasr  4 years ago +1

      Interesting question, Ahmed!
      It really depends on whether it's active/active or active/passive. If you use keepalived with HAProxy, keepalived will make sure there is only one active HAProxy node and all your sockets will go through it. If that HAProxy goes down, keepalived will switch to the other HAProxy node and all connections will be dropped (because WebSockets are stateful).
      Active/active is a better-balanced configuration and less likely to fail, but failures can still happen, and unfortunately the client then has to manually re-establish the connection.

    • @ahmeddaraz8494
      @ahmeddaraz8494 4 years ago

      @@hnasr I was thinking that the TCP connections could somehow be shifted, since the virtual IP is the same and TCP deals with IP/port (maybe I'm wrong here). I'm still not quite sure about that, and I haven't done any research either, but your answer makes more sense!

  • @localghost3000
    @localghost3000 1 year ago

    How would you gracefully handle if one of your server instances with an active connection goes down?

  • @davidmontdajonc6332
    @davidmontdajonc6332 4 years ago +1

    I'm trying to figure out how to do this in AWS with auto-scaling groups in case I need it. No idea how I will find out which servers are subscribed... Can I code that Redis stuff in PHP, or do I need to port all my Ratchet WS logic to a Node.js app? Thanks for the video!!!

    • @vewmet
      @vewmet 4 years ago

      Hey David, we are also doing this on AWS.

    • @davidmontdajonc6332
      @davidmontdajonc6332 4 years ago

      @@vewmet Cool, how is it going? Have you found any good documentation or tutorials? Are you using ElastiCache for Redis? Cheers!

  • @abhimanyuraizada7713
    @abhimanyuraizada7713 2 years ago

    Hi Hussein, since you created a simple WebSocket server here, couldn't we spin it up with the cluster module? In most production cases the servers use Node.js clustering, so in that case would we connect our WebSockets to different worker IDs?

  • @FAROOQ95123
    @FAROOQ95123 4 years ago +1

    Please make a video on the Elastic Stack.

  • @962tushar
    @962tushar 3 years ago

    A dumb question: can we not persist these connections somewhere like Redis? (It would have some cost due to serialization and deserialization; would it be negligible?) It would let the load balancer avoid sticky sessions.

  • @karthikrangaraju9421
    @karthikrangaraju9421 3 years ago

    Hi Hussein, pub/sub is not real time, no? It's pull-based. Instead, I think we should use Redis only for bookkeeping which server has which connections, and have the servers themselves push messages to other servers directly.

    • @hnasr
      @hnasr  3 years ago +1

      You can implement pub/sub as push, pull, or long polling.

  • @angeliquereader
    @angeliquereader 1 year ago

    Great content! Just a doubt: we're spinning up 4 different instances, and each instance will have its own "connections" variable. So if a client is connected to instance 1 and another client to instance 3, how does the message sent by client 1 reach client 3?

    • @anchalsharma0843
      @anchalsharma0843 3 months ago

      Redis pub/sub can be used here again.
      Hussein took the example of a group chat. To make it 1:1 messaging, here's what you can do:
      1. The server setup remains the same.
      2. When client 1 connects to a web server, that server subscribes to a Redis channel named 'client1'.
      3. When the other client connects to some other server, that server likewise subscribes to the Redis channel named 'client2'.
      4. Suppose client 1 sends a message to client 2. Upon receiving the message on client 1's server, you publish it to the channel of the intended recipient, i.e. channel 'client2'.
      5. As client 2's server is already subscribed to the channel 'client2', it will get the message published by client 1, and you ferry it to client 2 via the WebSocket connection.
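The steps above can be sketched roughly like this (node-redis v3-style API; the channel names and helper functions are illustrative):

```javascript
// Each server tracks only ITS OWN sockets and subscribes to a Redis
// channel per locally connected user; a publish reaches whichever
// server currently holds the recipient.
const localSockets = new Map(); // userId -> WebSocket on this server

function onUserConnected(subscriber, userId, ws) {
  localSockets.set(userId, ws);
  subscriber.subscribe(`user:${userId}`); // e.g. "user:client1"
}

function sendDirect(publisher, toUserId, text) {
  // Any server may publish; only the recipient's server relays it.
  publisher.publish(`user:${toUserId}`, text);
}

function wireRelay(subscriber) {
  subscriber.on("message", (channel, text) => {
    const ws = localSockets.get(channel.slice("user:".length));
    if (ws) ws.send(text); // ferry to the user over the WebSocket
  });
}
```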

    • @angeliquereader
      @angeliquereader 3 months ago

      @@anchalsharma0843 my doubt was about this group chat application only!
      So basically, if we also do console.log(connections.length), will it be 1 or 4? (I guess 1.)

  • @dmitrychernivetsky5876
    @dmitrychernivetsky5876 2 years ago

    "Scaling" with a single point of failure: Redis.
    FYI, most of the libraries, and therefore the code for connecting to clustered Redis, are entirely different from what was presented.

  • @mahmoudsabrah5158
    @mahmoudsabrah5158 3 years ago

    Is there a source-port limitation between the reverse proxy and the WebSocket server? The reverse proxy has to reserve a source port for each WebSocket connection to the WebSocket server, and the WebSocket connection stays alive for a long time, so won't we run out of source ports really quickly at the reverse proxy?

  • @m_t_t_
    @m_t_t_ 1 year ago

    Is it a good idea to store all of the messages in an in-memory database, though?

  • @nailgilaziev
    @nailgilaziev 4 years ago

    Hello and thanks! You say there are implementations of reverse proxies (gateways) that can really create one physical TCP connection, but that this is another story. Can you tell it? At least as an answer to this question. Thanks!

    • @hnasr
      @hnasr  4 years ago +1

      If the client of the reverse proxy is within the same subnet, the client can set its gateway IP address to the reverse proxy's IP address. This way any packets immediately go to the gateway (reverse proxy) through the power of ARP, and the reverse proxy simply uses pure NAT to rewrite the packet with its own public IP address before sending it to the backend.
      This is exactly how your phone works when connected through a Wi-Fi router: all packets go through your router by default because it's the default gateway. You can actually see this in your Wi-Fi settings.

  • @diboracle123
    @diboracle123 2 years ago

    Hi Hussein,
    No doubt this is a good, informative video, but one doubt: here the bottleneck is the load balancer. If we have millions of users, is one load balancer sufficient to handle that many TCP connections?
    One more doubt (in a different context): let's say I have a trading application like Upstox or Zerodha, where users can create a watchlist of stocks. Those stock prices update frequently; if the UI sends requests to the server to fetch the latest price, the server will be bombarded with requests, and that is not scalable either. How can we do this? Please share some thoughts.

    • @m_t_t_
      @m_t_t_ 1 year ago

      If the load balancer started to be the bottleneck, then another cluster would be created and the traffic distributed through DNS.

  • @adb86
    @adb86 3 years ago

    Hussein,
    Awesome explanation of HAProxy. Can you please tell us how to run HAProxy in a container with HTTPS? Creating the certificate on the host machine works great when HAProxy is also started on the host machine, but when HAProxy runs as a Docker container, certificates created on the host machine do not work. I did not find a way to create the cert from the container itself. Your input is valuable; please respond.

  • @5mintech567
    @5mintech567 4 years ago

    Hi, first of all, I like your videos and watch the stuff you are creating; it's awesome. But I have a doubt regarding the Dockerfile WORKDIR path:
    while creating the Dockerfile, I am unable to link the volumes to a path like /home/node/app.
    Can you tell me how I can bind the volumes for the images? I mostly use Ubuntu for my development, so could that change the folder structure?

  • @MidhunDarvin625
    @MidhunDarvin625 3 years ago

    What is the connection limit on the load balancer? And how will we scale the load balancer if there is such a limit?

    • @m_t_t_
      @m_t_t_ 1 year ago

      There won't be a limit because the load balancer's job is so small. But if we started getting Google-like traffic, then we would need multiple datacenters, and DNS would do the load balancing between the load balancers.

  • @MAURO28ize
    @MAURO28ize 2 years ago

    Hi, how could I share the connections across 2 servers? For example, 2 users could connect to different servers, so if one server has to respond to both clients, it won't find the connection data needed to respond to them. Help me, please.

  • @zummotv1013
    @zummotv1013 4 years ago

    Does Google Keep (the note-taking app) use WebSockets? What are the things to keep in mind if I am making a clone of Google Keep?

    • @hnasr
      @hnasr  4 years ago +1

      zummotv not sure what they are using, but since it's Google, probably gRPC instead of WebSockets. That being said, you get the same result.
      Notes are a little tougher, especially if you want to reconcile changes.

  • @bisakhmondal8371
    @bisakhmondal8371 3 years ago +1

    Hey Hussein,
    Thanks for the awesome content, man. I am extending the application to a multi-room chat server, kind of like Discord, and also to person-to-person unicast. In this highly distributed environment I am choosing Apache Kafka for pub/sub (one reason being the connectors for persistence). But I am still thinking about how to serve the pub/sub system, because creating a single topic for all chat rooms (with meta information on each message marking the room it is meant for) is a disaster, but creating an individual topic per chat room is also a disaster (because I have no idea how to consume messages when the number of topics is humongous).
    My main goal is a selective broadcast to the users connected to each Node.js server who have joined a particular room.
    Any thoughts here? I would love to hear them.
    If possible, could you please provide references to articles/blogs related to this?

    • @manglani87
      @manglani87 3 years ago

      Hi Hussein,
      I have a similar question/doubt; can you please help here?

    • @mti2fw
      @mti2fw 2 years ago +2

      Hey! I imagine you would want to save the user's chat group IDs in your database, for example. Am I right? If so, you could try subscribing your user to each of them, so each chat group would have a different channel for its messages. I'm not sure whether this is scalable, but it's an idea you could try.

  • @fxstreamer238
    @fxstreamer238 2 years ago

    I ran into a redis npm library error on the Redis publish event in docker-compose; it seems to be an incompatibility between the latest Node version and the latest Docker. That's what happens when a bunch of noobs have access to open-source code and can contribute whatever they want: not only do they change the way the library was configured, but they also mess with all kinds of Node.js arguments (coding with the new Node.js syntax just to be fancy) to make it suitable or unsuitable for a particular version of Windows or Node, and sometimes, like here, even when everything is the latest version, something breaks.

  • @XForbide
    @XForbide 2 years ago

    Can someone help me understand something?
    From what I understand, load balancers like NGINX have a max connection limit of about 66k due to the limit on the number of open file descriptors you can have.
    So if connections are long-lived, doesn't that mean that in such an architecture you're going to get bottlenecked at 66k at the load-balancer level (or at any intermediate proxy)? So regardless of how many machines you have behind the load balancer, it will always be capped at that amount.
    So what is the correct way to scale to, say, 100k concurrent connections? I've read somewhere about DNS load balancing; is that the way to go?

  • @esu7116
    @esu7116 4 years ago

    Do you have any ideas on how to scale the reverse proxy too, or is that not necessary?

    • @hnasr
      @hnasr  4 years ago

      Esu, you can. If your monitoring shows that the reverse proxy can't handle the load, you can deploy another reverse proxy in an active/active cluster and put them behind a DNS SRV record.
      Check out the video here:
      Active-Active vs Active-Passive Cluster to Achieve High Availability in Scaling Systems
      th-cam.com/video/d-Bfi5qywFo/w-d-xo.html

  • @mayankkumawat8802
    @mayankkumawat8802 4 years ago

    How would this work if there are multiple channels with different users in them?

  • @pickuphappiness5027
    @pickuphappiness5027 2 years ago

    In the one-to-one chat case, we can keep a user-to-server mapping in the Redis DB; when multiple servers receive a message from server 1, they check whether they are connected to the intended user, and the specific server connected to that user processes the message. Is this possible?

  • @implemented2
    @implemented2 4 years ago

    How does the proxy know which server to send data to? Does it have a mapping from clients to servers?

    • @hnasr
      @hnasr  4 years ago +1

      Great question. You specifically asked about the proxy (not the reverse proxy), right?
      The proxy knows because the client actually wants to go to the final destination server, which in this example is google.com.
      Let's say you want to go to google.com and you have configured your client to use 1.2.3.4 as a proxy.
      In HTTP, at least, the client adds a header called "Host: google.com", and that is how the proxy knows where to forward the traffic.
      Looking at the layer 4 content of this packet, the client puts the proxy's IP address (1.2.3.4) as the destination, not google.com's IP address.
      The proxy is the final destination from a layer 4 perspective, but at layer 7 the real final destination is google.com.

  • @abdallahelkasass6332
    @abdallahelkasass6332 3 years ago

    How do you save open connections after the servers reload?

  • @alshameerb
    @alshameerb 4 years ago

    How can we send some data when we connect? It's like the client wants to store data in a certain location, and I need to send this location to the client during connection. How can we do that?

    • @alshameerb
      @alshameerb 4 years ago

      I mean, send the location to the server...

  • @OneOmot
    @OneOmot 3 years ago

    What if, instead of Redis, you had just another WebSocket server that is connected to all the other WS servers?
    Each server sends every message to its clients, one of which is that extra WS server, which in turn sends it to the other servers. So no WS server needs to know about a Redis server; only the one connector WS server is configured to know the others, and in case of its failure the other WS servers can still operate fine. You could scale this by just adding two or more connector WS servers!?

    • @hnasr
      @hnasr  3 years ago +1

      Yes, that is possible for sure; it's just that you would be building your own version of a pub/sub system using WebSockets. Assuming it's synchronous. Possible, and it has its own use cases.

  • @saurabhahuja6707
    @saurabhahuja6707 3 years ago

    Here HAProxy is maintaining the connections between backend and frontend. Will that cause a bottleneck? If yes, how do we solve it?

    • @kozie928
      @kozie928 3 years ago

      You can create multiple HAProxy/nginx instances with docker-compose, for example.

  • @vibekdutta6539
    @vibekdutta6539 3 years ago

    A big fan of your channel, always have been. Can you please explain the difference between subscriber.on('subscribe') and subscriber.on('message')? I didn't understand the direction of the data flow here.

  • @anuragvohra5519
    @anuragvohra5519 4 years ago +1

    Aren't the load balancer and Redis the bottlenecks of your application's scaling?

    • @hnasr
      @hnasr  4 years ago +1

      Anurag Vohra there will always be bottlenecks for sure. No system is perfect.
      I would however relieve that bottleneck by introducing many load balancers and throwing them behind an active/active cluster.
      Active-Active vs Active-Passive Cluster Pros & Cons
      th-cam.com/video/d-Bfi5qywFo/w-d-xo.html

    • @anuragvohra5519
      @anuragvohra5519 4 years ago

      @@hnasr Thanks, that covers what I was searching for!

    • @anuragvohra5519
      @anuragvohra5519 4 years ago

      @@hnasr Do you have any portal where one can reach you for job offers? [kind of freelancing]

  • @yelgabs
    @yelgabs a year ago

    Isn't the load balancer here a single point of failure?

  • @nit50000
    @nit50000 2 years ago +1

    Thank you for the great article. It is very useful indeed. (Sorry but I feel your voice is very annoying. 😣😂🤣 )

  • @Samsonkwakunkrumah
    @Samsonkwakunkrumah 3 years ago

    How do you handle offline users in this architecture?

    • @jeyfus
      @jeyfus 2 years ago

      One way to handle this could be to persist the messages of the related topic(s) in a database. When your (formerly) offline client comes back online, they can fetch the whole history using a regular HTTP request.
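That catch-up idea can be sketched as follows: each message on a topic is persisted with a sequence number, and a reconnecting client fetches everything after the last sequence number it saw. An in-memory Map stands in for a real database here, and the class and method names are illustrative:

```javascript
// Sketch: persist per-topic message history so offline clients can catch up.
class MessageHistory {
  constructor() { this.byTopic = new Map(); }
  append(topic, message) {
    if (!this.byTopic.has(topic)) this.byTopic.set(topic, []);
    const log = this.byTopic.get(topic);
    log.push({ seq: log.length + 1, message });
  }
  // What a "GET /history?topic=...&after=N" HTTP handler would return.
  fetchAfter(topic, lastSeenSeq) {
    return (this.byTopic.get(topic) || []).filter((e) => e.seq > lastSeenSeq);
  }
}

const history = new MessageHistory();
history.append('lobby', 'first');
history.append('lobby', 'second');
history.append('lobby', 'third');
// A client that last saw seq 1 catches up on the rest:
console.log(history.fetchAfter('lobby', 1).map((e) => e.message)); // → [ 'second', 'third' ]
```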

  • @wassim5622
    @wassim5622 4 years ago

    I don't get this multiple-servers thing. Does it mean buying more hosting plans, or what exactly does "multiple servers" mean?

    • @hnasr
      @hnasr  4 years ago +3

      wassim could be multiple physical machines, or multiple virtual machines in a single physical machine, or multiple containers in a single machine.. really depends how far you want to go with scaling

    • @wassim5622
      @wassim5622 4 years ago

      @@hnasr Thanks !!

  • @HM_Milan
    @HM_Milan 3 years ago

    Can we redirect all websockets to another available Docker container in a different AWS availability zone?

    • @hnasr
      @hnasr  3 years ago +2

      Yes! You can set a rule in haproxy to redirect traffic to another backend based on the source ip for example. Better approach is to use geoDNS

    • @HM_Milan
      @HM_Milan 3 years ago

      @@hnasr thanks

  • @predcr
    @predcr 2 years ago

    Can you please help me with scaling up my Redis server?

  • @gerooq
    @gerooq a year ago

    But why have multiple WS servers and then use Redis to share messages when you can just run a single WS server that uses in-process memory to store a map of "channel name" to list of sockets that requested to subscribe to that channel. Then it's trivial to simply divvy emitted messages among other sockets in the same channel 🤷‍♂️. I mean it's way more performant especially if done multithreaded.
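The in-process map this comment describes can be sketched as below (sockets are any objects with a `send()` method; the class and names are illustrative). It is indeed simpler and faster on one box, but it caps out at what a single server can hold in connections, which is exactly the limit the Redis fan-out across multiple servers removes:

```javascript
// Sketch: single-server pub/sub via an in-process map of
// channel name -> set of subscribed sockets.
class ChannelRegistry {
  constructor() { this.channels = new Map(); }
  subscribe(channel, socket) {
    if (!this.channels.has(channel)) this.channels.set(channel, new Set());
    this.channels.get(channel).add(socket);
  }
  publish(channel, message, sender) {
    // Divvy the message among the other sockets in the same channel.
    for (const s of this.channels.get(channel) || []) {
      if (s !== sender) s.send(message);
    }
  }
}

// Usage with fake sockets:
const delivered = [];
const sock = (name) => ({ send: (m) => delivered.push(`${name}<-${m}`) });
const reg = new ChannelRegistry();
const s1 = sock('s1'), s2 = sock('s2'), s3 = sock('s3');
reg.subscribe('news', s1);
reg.subscribe('news', s2);
reg.subscribe('sports', s3);
reg.publish('news', 'hi', s1); // only s2 receives it
console.log(delivered);
```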

  • @dgalaa5850
    @dgalaa5850 4 years ago

    When I use nginx servers like this, can I access other services by socket id?

    • @hnasr
      @hnasr  4 years ago

      I'm not sure there is a socket id, but you can certainly create an id and use it in rules, I think

  • @randomlettersqzkebkw
    @randomlettersqzkebkw 2 years ago

    I do not understand how this is scaling when the middle load balancer is actually connected to the clients as well. If it merely routed the request directly to the websocket servers, then OK, but it's not doing that :/

  • @RahulSoni-vc8kv
    @RahulSoni-vc8kv 4 years ago

    Doesn't the HAProxy become a bottleneck?

    • @hnasr
      @hnasr  4 years ago

      Rahul Soni it does of course, that is why you need to scale the haproxy itself; you can either use an active/active or active/passive cluster
      Active-Active vs Active-Passive Cluster Pros & Cons th-cam.com/video/d-Bfi5qywFo/w-d-xo.html

  • @praneetpushpal1410
    @praneetpushpal1410 4 years ago

    Nice tutorial! Thanks!
    If you have any free time, could you please share your insights on this:
    "Twitter account of top celebrity hacked". How this would have happened even after so much security at twitter.