Scaling Websockets with Redis, HAProxy and Node JS - High-availability Group Chat Application

  • Published: 3 Jul 2024
  • In this video I want to demonstrate how to scale WebSocket connections across multiple servers using a load balancer such as HAProxy.
    0:00 Intro
    1:00 What are WebSockets?
    2:40 WebSockets Scaling
    7:44 Chat WebSocket App Code
    * WebSockets
    * WebSockets Scaling
    * Live chat application (microservices)
    * Demo
    Source Code
    github.com/hnasr/javascript_p...
    🏭 Software Architecture Videos
    • Software Architecture
    💾 Database Engineering Videos
    • Database Engineering
    🛰 Network Engineering Videos
    • Network Engineering
    🏰 Load Balancing and Proxies Videos
    • Proxies
    🐘 Postgres Videos
    • PostgreSQL
    🚢Docker
    • Docker
    🧮 Programming Pattern Videos
    • Programming Patterns
    🛡 Web Security Videos
    • Web Security
    🦠 HTTP Videos
    • HTTP
    🐍 Python Videos
    • Python by Example
    🔆 Javascript Videos
    • Javascript by Example
    👾Discord Server / discord
    Support me on PayPal
    bit.ly/33ENps4
    Become a Patreon
    / hnasr
    Stay Awesome,
    Hussein
  • Science
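For readers skimming without watching: the HAProxy side of the demo amounts to round-robin balancing of WebSocket upgrades across several identical backends. A minimal sketch of such a config (hostnames, ports, and timeout values here are assumptions, not copied from the linked repo):

```
# haproxy.cfg — minimal sketch; ws1..ws4 are hypothetical backend hostnames
defaults
    mode http
    timeout connect 5s
    timeout client  1h
    timeout server  1h
    timeout tunnel  1h   # keeps idle, upgraded WebSocket tunnels open

frontend ws_in
    bind *:8080
    default_backend ws_pool

backend ws_pool
    balance roundrobin
    server ws1 ws1:8080 check
    server ws2 ws2:8080 check
    server ws3 ws3:8080 check
    server ws4 ws4:8080 check
```

`timeout tunnel` is the setting that matters most for WebSockets: after the HTTP upgrade, HAProxy treats the connection as a bidirectional tunnel, and without a long tunnel timeout idle chat connections would be cut.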

Comments • 163

  • @hnasr  4 years ago +10

    More resources
    1:00 websocket ruclips.net/video/2Nt-ZrNP22A/видео.html
    9:25 Redis ruclips.net/video/sVCZo5B8ghE/видео.html
    9:45 pub/sub ruclips.net/video/O1PgqUqZKTA/видео.html
    11:24 microservices ruclips.net/video/9sAg7RooEDc/видео.html
    11:30 haproxy ruclips.net/video/qYnA2DFEELw/видео.html

  • @johnstorm589  2 years ago +34

    This hits a sweet spot between a few things: a complex topic like load balancing, docker and docker compose (just the tip), and sockets, all under a practical example. This is great. Thank you!

    • @twitchizle  8 months ago

      It's like the G-spot

  • @sunnyrajwadi  4 years ago +22

    Solves real life problems. Thank you.

  • @ZoraciousDCree  4 years ago +18

    Really appreciate all that you have to offer! Good pace in presentation, interesting side notes, and keeping it fun. Thanks.

    • @hnasr  4 years ago

      Thank you 🙏 glad you liked the content 😍

  • @YGNCode  4 years ago +14

    This is really awesome. My current company uses websockets and doesn't need to scale yet, but it might in the future, so I was checking around. Your video explains it very well. Thanks

  • @dearvivekkumar  3 years ago +2

    Hi Hussein,
    Thanks for making all these great videos. These days I check daily whether you have uploaded a new video. All your videos are very useful and answer lots of my doubts.

  • @DiaryOfMuhib  3 years ago +8

    I was really struggling with WebSocket scaling. Nicely explained!

  • @jackykwan8214  2 years ago +2

    Really wonderful video, keep going !!
    I love how you simplify the talk, and with a practical POC example !

  • @hoxorious  3 years ago +2

    By far one of the best channels I have ever subscribed to 👍

  • @jongxina3595  3 years ago

    Dude you have no idea how GLAD I am to have found this video! Amazing 😀

    • @hnasr  3 years ago

      Ben Sharpie enjoy! 😊

  • @zcong3402  2 years ago

    Very nice video. This provides reasonably good depth on the architectural details of how to build a real-time application, and especially on how Redis (or any application that can work as a broker) fits into this architecture. Thank you!

  • @jackcurrie4805  1 year ago +1

    Your channel is fantastic Hussein, thanks for making such great content!

    • @hnasr  1 year ago

      Thanks Jack

  • @basselturky4027  2 years ago

    This channel is a gold mine.

  • @lonewolf2547  3 years ago

    You just solved one of my biggest problems...thanx a ton

  • @vewmet  4 years ago +1

    Love your content bro! Awesome

  • @programmer1356  2 years ago

    Brilliant. Inspirational. Thank you very much.

  • @rajatahuja4720  4 years ago

    I was looking for the same. You rock :)

    • @hnasr  4 years ago

      Thanks glad you found it!

  • @ryanquinn1257  1 year ago

    Such a quick powerful demo.
    If you’re breaking Redis you’re already gonna need to be doing more advanced stuff than this haha.

  • @sanderluis3652  4 years ago +2

    wow, very clear tutorial

    • @hnasr  4 years ago

      Thanks Sander!

  • @hichem6555  1 year ago

    Thank you, this video solves the big problem that I had!!! 💪

  • @letsflow.oficial  4 months ago

    Hey Hussein, first of all, I need to say that I love your videos; they are very informative and very clear, even satisfying for relaxing purposes, haha, relax while we learn :) Thank you for this video on websockets and Redis. Could you please explain how we could use this architecture to handle a shared model? Let's suppose a database stores all the messages and a central copy of the model, with distributed copies of the model in each client. Then we would use the command pattern to alter the model based on commands, keeping a stack of commands, and maybe a snapshot, so we can replay commands and have the ability to do and undo changes to the model. I'm facing this challenge right now and would love to hear from you on that.

  • @peterlau9731  10 months ago +1

    Really appreciate the video! Perhaps you could also cover the DB design/optimization for a chat app?
    I believe many interesting topics like sharding and database selection could be covered; thanks, and looking forward to future videos!

  • @stormilha  3 years ago

    Awesome content!

  • @lucas_badico  4 years ago

    Just built one like this using Go. It was really satisfying!

    • @hnasr  4 years ago +1

      Lucas gomes de santana nice work! It does feel satisfying when you finish a project

    • @lucas_badico  4 years ago

      I really wanted to discuss my approach with you. I built my WebSocket server in Go, and I have a feeling that I don't need a Redis connection because my pub/sub is inside the application. Anyway, thanks for the videos; I'm learning a lot from them.

  • @earlvhingabuat8984  3 years ago

    New Subscriber Here! Thanks for this awesome video!

    • @hnasr  3 years ago +1

      🙏🙏🙏

  • @mytheens6652  3 years ago +5

    I wish I could get you as my senior developer.

  • @developerjas  3 years ago

    You saved my life!

  • @denisrazumnyi6456  4 years ago

    Well done !!!

    • @hnasr  4 years ago

      🙏

  • @ragavkb2597  3 years ago +1

    Good video, and I enjoyed it. In your example you stored the connections in an array in Node.js. Is this typically how real-world applications do it, or are there other patterns? It would be nice to have tutorials on connection drops from a client and how things eventually get cleaned up on the server.

  • @sergiosandoval3821  2 years ago

    Master !!!!!!!!

  • @kiranparajuli6724  2 years ago

    Hi Hussein, really nice video. It was very helpful and informative. At one point in the video, you talked about the drawback of Redis that a single server has to register two clients, one as subscriber and one as publisher. What software did you mention to solve this problem? It was a little unclear in the video.

  • @anthonyfarias321  4 years ago +1

    I recently implemented something very similar for a phone dialer. I used Socket.IO, and a library for connecting Socket.IO with Redis, the Socket.IO Redis adapter. It works smoothly.

  • @uneq9589  2 years ago +1

    That was a really nice explanation. Just one question on the reverse proxy: what is the limit on the number of WebSocket connections the reverse proxy can handle?

  • @saidkorseir192  2 years ago +1

    Great work Hussein. Super clean. I have a question. What if I create a docker-compose.yml with only ws1 and run "docker-compose up --scale ws1=4"; what would the haproxy config file need to look like?
    I couldn't find a way. I also tried balancing with nginx.

  • @M.......A  2 years ago +4

    At the end of the video, you mentioned that Redis is a single point of failure. Isn't it also the case with HAProxy? Thanks for the video.

    • @peterhindes56  2 years ago +1

      Yes. If you host at multiple sites, you could replicate Redis across them, and then DNS will handle your load balancing.

  • @shailysangwan3977  3 years ago

    The content is explained well and spontaneously enough to follow, but the pitch of the voice varies too much for the volume to stay constant through the video. (I'm using earphones, so it might just be me.)

  • @FAROOQ95123  4 years ago +1

    Please make a video on the Elastic Stack

  • @sezif3157  2 years ago

    Thanks for the video Hussein. One question: at 13:02, all the backend servers in haproxy.cfg are linked to port 8080 (ws1:8080, ws2:8080, and so on), but in docker-compose you gave them an APPID different from 8080, so inside the docker-compose network those servers will listen on the port you set via the environment. Should this be ws1:APPID1, ws2:APPID2, etc.?

  • @kailashyogeshwar8492  2 years ago

    Very nice explanation and demo.
    One question though: the demo shows broadcasting of messages to all the connected clients. In the case of delivery to a single client, does the backend the user is connected to also subscribe to a user-specific topic?
    E.g., if User 1 is connected to backend 4444, will that backend also subscribe to a channel based on the userId (or something else) to receive direct messages? Is there an alternative approach for doing the subscription?

  • @shoebpatel4027  3 years ago +4

    Hey Hussein, make a detailed video on Elasticsearch.

  • @giangviet5155  1 year ago

    This video only explains load balancing for something stateful like WS, not scaling as such. When you talk about scaling, you must solve both the scale-out and scale-in problems, and with round robin and a static HAProxy config file like that, it seems impossible to scale in/out. Anyway, thanks for a great video.

  • @ciubancantheb3st  2 years ago +1

    Can you do a tutorial on doing the same thing but with a Redis cluster? Redis is single threaded, and it might throttle the processes when you are as big as Facebook.

  • @sreevishal2223  4 years ago

    Awesome 👌👌, all I wanted at the moment!! Also, instead of building the same container multiple times with different ports, can I spin up a Docker swarm?

    • @hnasr  4 years ago +1

      Sure you can!

  • @abhimanyuraizada7713  2 years ago

    Hi Hussein, since you created a simple websocket server here, couldn't we spin it up with the cluster module? In most production cases the servers use Node.js clustering, so would we connect our websocket to different worker ids in that case?

  • @houssemchr1539  4 years ago

    Well explained, thanks. Can you explain how push notifications work, like FCM, and whether there is any open-source alternative?

    • @hnasr  4 years ago +1

      houssem chr thanks! Made a video on push notifications here ruclips.net/video/8D1NAezC-Dk/видео.html

  • @angeliquereader  1 year ago

    Great content! Just a doubt: we're spinning up 4 different instances, and each instance will have its own "connections" variable. So if one client is connected to instance 1 and another client to instance 3, how does the message sent by client1 reach client3?

  • @mahmoudsabrah5158  2 years ago

    Is there a source-port limitation between the reverse proxy and the websocket server? The reverse proxy has to reserve a source port for each websocket connection to the websocket server, and websocket connections stay alive for a long time, so wouldn't we run out of source ports really quickly at the reverse proxy?

  • @gurjarc1  2 years ago

    Nice video. I have one question: if there are a thousand users, how will the load balancer know which user's call to map to which stateful server? Will we refer to some DB that holds the users and do the mapping?

  • @robinranabhat3125  1 year ago

    Just curious. In this particular example, would clients from different tabs (not windows) be considered the same or not ?

  • @localghost3000  9 months ago

    How would you gracefully handle if one of your server instances with an active connection goes down?

  • @arbaztyagi123  2 years ago

    I have one doubt: the way you stored the connections in an array, is that a good way? And how can I store these connections in a central store or memory where all the other servers (machines) can access them? Thanks

  • @5mintech567  4 years ago

    Hi, first of all I like your videos and watch the stuff you are creating, which is awesome, but I have a doubt regarding the Dockerfile workdir path.
    My question is that while creating the Dockerfile I am unable to link the volumes or the path, like /home/node/app.
    Can you tell me how I can bind the volumes for the images? I mostly use Ubuntu for my development, so could that change the folder structure?

  • @animatrix1851  4 years ago +1

    Could you give a situation where you'd need to scale? When do you do this: when the socket server has >64k connections, or when RAM is maxed out because of a high load of messages?

    • @hnasr  4 years ago +4

      Adithya angara one example is when one server can no longer handle all your users. This needs to be tested because it depends on the app. Your app might be very CPU/memory hungry and only able to handle 10k websocket connections; however, your app might be light and efficient and able to handle 100k.
      You need to monitor your server and your clients and see if the experience starts to degrade.

  • @abdallahelkasass6332  2 years ago

    How do you preserve open connections after the servers are restarted?

  • @pickuphappiness5027  1 year ago

    In the one-to-one chat case, could we keep a user-to-server mapping in the Redis DB? When the servers receive a message from server 1, they check whether they are connected to the intended user, and only the specific server connected to the intended user processes that message. Is this possible?

  • @bisakhmondal8371  3 years ago +1

    Hey Hussein,
    Thanks for the awesome content, man. I am extending the application to a multiroom chat server, kind of like Discord, and also to person-to-person unicast. But in this highly distributed environment I am choosing Apache Kafka for pub/sub (one reason is the connectors for persistence). I am still thinking about how to serve the pub/sub system, because creating a single topic for all chat rooms (with some meta information on each message for its room) is a disaster, but creating individual topics for individual chatrooms is also a disaster (because I have no idea how to consume messages when the number of topics is humongous).
    My main goal is a selective broadcast to all the users connected to each Node.js server who have joined a particular room.
    Any thoughts here? I would love to hear them.
    If possible, could you please provide any references to articles/blogs related to this?

    • @manglani87  3 years ago

      Hi Hussein,
      I have a similar question/doubt; can you please help here?

    • @mti2fw  2 years ago +2

      Hey! I imagine you would want to save the user's chat group ids in your database, for example. Am I right? If yes, you could try subscribing your user to each of them, so each chat group would have a different channel for its messages. I'm not sure whether this is scalable, but it's an idea you could try.

  • @mayankkumawat8802  3 years ago

    How would this work if there are multiple channels with different users in them?

  • @962tushar  3 years ago

    A dumb question: can we not persist these connections somewhere like Redis? (There would be some cost from serialization and deserialization; would it be negligible?) It would let the load balancer avoid sticky sessions.

  • @m_t_t_  1 year ago

    Is it a good idea to store all of the messages in an in-memory database, though?

  • @ahmeddaraz8494  4 years ago

    Inspiring video Hussein, thanks, but I have a question: can we add an HA mode for HAProxy (e.g., by using keepalived) with no impact on the established TCP websocket connections?

    • @hnasr  4 years ago +1

      Interesting question Ahmed!
      It really depends whether it's active/active or active/passive. If you use keepalived with HAProxy, keepalived will make sure there is only one active HAProxy node and all your sockets will go through it. If that HAProxy goes down, keepalived will switch to the other HAProxy node and all connections will be dropped (because websockets are stateful).
      Active/active gives a better-balanced configuration that is less likely to fail, but failures can still happen, and unfortunately the client then has to re-establish the connection itself.

    • @ahmeddaraz8494  4 years ago

      @@hnasr I was thinking that the TCP connections could probably be shifted somehow, since the virtual IP is the same and TCP deals with IP/port (probably I am wrong here). I am still not quite sure about that and I also did not do any research, but your answer makes more sense!

  • @XForbide  2 years ago

    Can someone help me understand something?
    From what I understand, load balancers like NGINX have a max connection limit of around 66k due to the limit on the number of open file descriptors you can have.
    So if connections are long lived, doesn't that mean that in such an architecture you're going to get bottlenecked at ~66k at the load balancer level (or any intermediate proxy)? So regardless of how many machines you have behind the load balancer, it will always be capped at that amount.
    So what is the correct way to scale to, say, 100k concurrent connections? I've read somewhere about DNS load balancing; is this the way to go?

  • @EhSUN37  2 years ago

    We subscribe and publish to "livechat", but we are receiving from "message"? What is "message", and what happened to "livechat" then? Very nice explanation otherwise, dude!

  • @TheNayanava  2 years ago

    Hi Nasser, I have never implemented websockets, but here is something I want to understand.
    When a persistent TCP connection is established between the client and the server, how do we decide what ports to open on the server side?
    For example, in a normal HTTP scenario, on the edge we would enable 443 to allow only secure traffic, and then on the actual servers open 443 or 80 depending on whether or not we have a zero-trust architecture. But how is it done in the case of websockets? I understand we maintain a registry storing which connection a server event should be pushed to, so it can be routed correctly to the client. How many ports do we open on the server side? In short, when anyone says "we scaled up to 1 million connections on a single machine", how is that achieved?

  • @MAURO28ize  1 year ago

    Hi, how could I share the connections of 2 servers? For example, 2 users could connect to different servers, so if one server has to respond to both clients, it wouldn't find the connection data needed to respond to them. Help me please.

  • @neketavorotnikov6743  1 year ago

    So as I understand it, our ws proxy server holds each ws connection from the clients. So the question is: if our ws app server needs to be scaled out to hold N ws connections, why is our proxy able to hold them all by itself? Why is there such a big difference in performance between the ws proxy server and the ws app server?

  • @adb86  3 years ago

    Hussein,
    Awesome explanation of HAProxy. Can you please tell us how to run HAProxy in a container with HTTPS? Creating the certificate on the host machine works great when HAProxy is also started on the host machine, but when HAProxy runs as a Docker container with certificates created on the host machine, it does not work. I did not find a way to create the cert from the container itself. Your input is valuable; please respond.

  • @erlangparasu6339  2 years ago

    How about Apache Ignite?

  • @sariksiddiqui6059  4 years ago +1

    What does load balancing look like for a websocket? Are sticky sessions at layer 7 enough? Since it's a websocket, the TCP connection would remain open anyway, no?

    • @hnasr  4 years ago +1

      Good question. A websocket starts as layer 7 proxying (the upgrade), then funnels back to layer 4 at the stream level.

  • @momensalah8497  3 years ago

    Well explained, thanks.
    But I have a question: how can all these Node apps listen on one port (8080) without an error?
    Should they be mapped or exposed on different ports from each other?

    • @hnasr  3 years ago +1

      Momen Salah thanks Momen!
      They listen on the same port without any error because they are different containers, each of which has a unique IP address. If they were on the same host network then, correct, you would have to pick different ports.
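The reply above is the key point: on a Compose network every service gets its own IP, so all the app containers can bind the same port and only the proxy is published to the host. A docker-compose sketch of that layout (service names, the APPID variable, and the two-instance count are assumptions following the video's convention; the real file is in the linked repo):

```
# docker-compose.yml — sketch, not the repo's exact file
version: "3"
services:
  redis:
    image: redis
  ws1:
    build: .
    environment:
      APPID: "1111"   # identifies this instance in its replies
  ws2:
    build: .
    environment:
      APPID: "2222"
  haproxy:
    image: haproxy
    ports:
      - "8080:8080"   # only the proxy is exposed to the host
    depends_on:
      - ws1
      - ws2
```

Each `wsN` container resolves by its service name inside the network, which is why an HAProxy backend can refer to `ws1:8080` and `ws2:8080` without any host port mappings.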

  • @fxstreamer238  2 years ago

    I ran into a redis npm library error on the Redis publish event in docker compose, which seems to be an incompatibility issue between the latest Node version and the latest Docker. When a bunch of noobs have access to open source code and can contribute and write whatever they want, that happens: not only do they change the way the library was configured, they also mess with all kinds of Node.js arguments (coding with new Node.js syntax just to be fancy) to make it suitable or unsuitable for one version of Windows or Node, and sometimes, as here, even when everything is the latest version, something breaks.

  • @sthirumalai  3 years ago +2

    Hi Nasser. Thanks for the video; it is pretty informative.
    What if one of the websocket servers crashes while serving traffic? How can we guarantee delivery to the clients connected to that WS server?
    Also, how is HA guaranteed in Redis?
    Awaiting your response

    • @hnasr  3 years ago +3

      Santhoshkumar Thirumalai since websockets are stateful, if a server crashes the client MUST establish the connection again with the reverse proxy so it goes to another server.

    • @sthirumalai  3 years ago

      @@hnasr: Thanks for the response. I did some research and found an interesting article on session management using AWS ElastiCache for Redis to persist the sessions. The solution you gave may not scale well, I suppose.
      aws.amazon.com/caching/session-management/
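Since, as this thread notes, a client must re-establish the connection itself after a backend crash, clients typically retry with exponential backoff rather than hammering the proxy. A sketch of the delay schedule such a reconnect loop would use (the base, cap, and function name are arbitrary choices, not from the video):

```javascript
// Exponential backoff with a cap: 500 ms, 1 s, 2 s, 4 s, ... up to 30 s.
function backoffDelay(attempt, baseMs = 500, capMs = 30000) {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

// A reconnect loop would open the WebSocket, and on its 'close' event
// schedule the next try with setTimeout(reconnect, backoffDelay(attempt++)),
// resetting attempt to 0 once a connection succeeds.
console.log([0, 1, 2, 3, 7].map((n) => backoffDelay(n)));
// [ 500, 1000, 2000, 4000, 30000 ]
```

Adding random jitter to each delay is a common refinement, so that thousands of clients dropped by one crashed backend do not all reconnect in the same instant.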

  • @nailgilaziev  3 years ago

    Hello and thanks! You said there are implementations of reverse proxies (gateways) that can keep really just one physical TCP connection, "but that is another story". Can you tell that story, at least as an answer to this question? Thanks!

    • @hnasr  3 years ago +1

      If the client of the reverse proxy is within the same subnet, the client can set its gateway IP address to the reverse proxy's IP address. This way any packets will immediately go to the gateway (reverse proxy) through the power of ARP, and the reverse proxy simply uses NAT to rewrite the packet's source as its own public IP address before sending it to the backend.
      This is exactly how your phone works when connected through a WiFi router. All packets go through your router by default because it is the default gateway. You can actually see this in your WiFi settings.

  • @davidmontdajonc6332  3 years ago +1

    I'm trying to figure out how to do this in AWS with autoscaling groups, in case I need it. No idea how I will get the "which servers are subscribed" info... Can I code that Redis stuff in PHP, or do I need to port all my Ratchet ws logic to a Node.js app? Thanks for the video!!!

    • @vewmet  3 years ago

      Hey David, we are also doing this on AWS

    • @davidmontdajonc6332  3 years ago

      @@vewmet Cool, how is it going? Have you found any good documentation or tutorials? Are you using ElastiCache for Redis? Cheers!

  • @esu7116  3 years ago

    Do you have any ideas on how to scale the reverse proxy too, or is that not necessary?

    • @hnasr  3 years ago

      Esu you can. If your monitoring shows that the reverse proxy can't handle the load, you can deploy another reverse proxy in an active/active cluster and put them behind a DNS SRV record.
      Check out the video here:
      Active-Active vs Active-Passive Cluster to Achieve High Availability in Scaling Systems
      ruclips.net/video/d-Bfi5qywFo/видео.html

  • @implemented2  3 years ago

    How does the proxy know which server to send data to? Does it have a mapping from clients to servers?

    • @hnasr  3 years ago +1

      Great question. You specifically asked about the proxy (not the reverse proxy), right?
      The proxy knows because the client actually wants to reach the final destination server, which in this example is google.com.
      Let us say you want to go to google.com and you have configured your client to use "1.2.3.4" as a proxy.
      In HTTP, at least, the client adds a header "Host: google.com", and that is how the proxy knows where to forward the traffic.
      Looking at the layer 4 content of this packet, the client puts the PROXY's address (1.2.3.4) as the destination IP, not google.com's IP address.
      So the proxy is the final destination from a layer 4 perspective, but at layer 7 the real final destination is google.com.

  • @vibekdutta6539  3 years ago

    A big fan of your channel, always have been. Can you please explain the difference between subscriber.on('subscribe') and subscriber.on('message')? I didn't understand the direction of the data flow here.

  • @zummotv1013  4 years ago

    Does Google Keep (the note-taking app) use websockets? What are the things to keep in mind if I am making a clone of Google Keep?

    • @hnasr  4 years ago +1

      zummotv not sure what they are using, but since it's Google, probably gRPC instead of websockets. That being said, you get the same result.
      Notes are a little tougher, especially if you want to reconcile changes.

  • @trollgg777  2 years ago

    Let's say you have an API gateway; after that, you have an Auth microservice that validates requests; and you also have a cluster behind a load balancer with a WebSocket instance. How do you connect your clients to the WebSocket? lol, I'm struggling with this!!!

  • @predcr  1 year ago

    Can you please help me with scaling up my Redis server?

  • @dgalaa5850  3 years ago

    When I use nginx servers like this, can I access other services by socket id?

    • @hnasr  3 years ago

      I am not sure there is a socket id, but you can certainly create an id and use it in rules, I think.

  • @MidhunDarvin625  2 years ago

    What is the connection limit on the load balancer? And how would we scale the load balancer if there is such a limit?

    • @m_t_t_  1 year ago

      There won't be a limit in practice because the load balancer's job is so small. But if we started getting Google-like traffic, then we would need multiple datacentres, and DNS would do the load balancing between the load balancers.

  • @OneOmot  3 years ago

    What if you had just another websocket server connected to all the other ws servers, instead of Redis?
    Each message would be sent to a server's clients, one of which is the connector ws server, which then sends it on to the other servers. Each ws server wouldn't need to know about a Redis server; just the one connector ws server is configured to know the other ws servers, and in case of failure the other ws servers can still operate fine. You could scale this by just putting up two or more connector ws servers!?

    • @hnasr  3 years ago +1

      Yes, that is possible for sure; it's just that you would be building your own version of a pub/sub system using websockets, presumably synchronous. Possible, and it has its own use cases.

  • @diboracle123  2 years ago

    Hi Hussein,
    No doubt it is a good, informative video, but one doubt: here the bottleneck is the load balancer. If we have millions of users, is only one load balancer sufficient to handle that many TCP connections?
    One more doubt (in a different context): let's say I have a trading application like Upstox or Zerodha, where we can create a watchlist of stocks. Those stock prices update frequently. If the UI sends requests to the server to fetch the latest price, the server will be bombarded with lots of requests, and that is not scalable either. How can we do this? Please give some thoughts here.

    • @m_t_t_  1 year ago

      If the load balancer became the bottleneck, then another cluster would be made and traffic distributed through DNS.

  • @saurabhahuja6707  3 years ago

    Here HAProxy is maintaining the connections between backend and frontend; will that cause a bottleneck? If yes, then how do we solve it?

    • @kozie928  3 years ago

      You can create multiple haproxy/nginx instances, with docker compose for example

  • @alshameerb  3 years ago

    How can we send some data when we connect? It's like the client wants to store data in a certain location, and I need to send this location to the client during connection. How can we do that?

    • @alshameerb  3 years ago

      I mean, send the location to the server...

  • @HM_Milan  3 years ago

    Can we redirect all websockets to another available Docker container in a different AWS availability zone?

    • @hnasr  3 years ago +2

      Yes! You can set a rule in haproxy to redirect traffic to another backend based on the source ip for example. Better approach is to use geoDNS

    • @HM_Milan  3 years ago

      @@hnasr thanks

  • @praneetpushpal1410  4 years ago

    Nice tutorial! Thanks!
    If you have any free time, could you please share your insights on this:
    "Twitter account of top celebrity hacked". How could this have happened even with so much security at Twitter?

  • @yelnil  7 months ago

    Isn't the load balancer here a single point of failure?

  • @karthikrangaraju9421  3 years ago

    Hi Hussein, pub/sub is not real time, no? It's pull based. Instead, I think we should use Redis only for bookkeeping which server has which connections, and have the servers themselves push messages to other servers directly.

    • @hnasr
      @hnasr  3 years ago +1

      You can implement pub/sub as push, pull, or long polling.

  • @wassim5622
    @wassim5622 4 years ago

    I don't get this multiple-servers thing. Does it mean buying more hosting plans, or what exactly is meant by multiple servers?

    • @hnasr
      @hnasr  4 years ago +3

      wassim it could be multiple physical machines, multiple virtual machines on a single physical machine, or multiple containers on a single machine... it really depends how far you want to go with scaling

    • @wassim5622
      @wassim5622 4 years ago

      @@hnasr Thanks !!

  • @shei69
    @shei69 4 years ago

    Instead of running a websocket server, can't you run a gRPC server connected to Redis and use Redis Streams?

    • @hnasr
      @hnasr  4 years ago

      Nice idea, for sure you can. The only limitation of gRPC is that we can't use it natively in web applications in the browser... I know there is a grpc-web proxy you can use that might work. Nice idea

  • @vilmarMartins
    @vilmarMartins 1 year ago +2

    Would the number of connections in HAProxy be a problem?

    • @hnasr
      @hnasr  1 year ago +1

      It can at a large scale (hundreds of thousands). That's when you would have two HAProxy instances and either use keepalived with a virtual IP or load balance them at the app client side through DNS.
      I wouldn't go there unless absolutely necessary, of course
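
The keepalived side of that setup is a small VRRP config; a sketch for the primary node (interface name, router id, priorities, and the VIP are illustrative):

```
# /etc/keepalived/keepalived.conf on the primary HAProxy node
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 101          # the standby node uses a lower value, e.g. 100
    virtual_ipaddress {
        10.0.0.100        # the VIP that clients connect to
    }
}
```

If the primary dies, the standby wins the VRRP election and takes over the VIP, so clients keep connecting to the same address.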

    • @vilmarMartins
      @vilmarMartins 1 year ago

      @@hnasr Excellent! Thanks a lot!!!

  • @RahulSoni-vc8kv
    @RahulSoni-vc8kv 4 years ago

    Doesn't the HAProxy become a bottleneck?

    • @hnasr
      @hnasr  4 years ago

      Rahul Soni it does of course, and that is why you need to scale the HAProxy itself; you can use either an active-active or an active-passive cluster
      Active-Active vs Active-Passive Cluster Pros & Cons ruclips.net/video/d-Bfi5qywFo/видео.html

  • @Samsonkwakunkrumah
    @Samsonkwakunkrumah 2 years ago

    How do you handle offline users in this architecture?

    • @jeyfus
      @jeyfus 2 years ago

      One way to handle this is to persist the messages of the related topic(s) in a database. When your (formerly) offline client comes back online, it can fetch the whole history using a regular HTTP request.
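
That backfill can be sketched as: persist each message with a sequence number, and let a reconnecting client ask over HTTP for everything after the last sequence it saw. All names are made up, and a Map stands in for the database:

```javascript
// In-memory stand-in for the message store: topic -> [{ seq, text }]
const history = new Map();

// Called wherever a message is published (e.g. alongside the Redis PUBLISH).
function persist(topic, text) {
  const log = history.get(topic) ?? [];
  log.push({ seq: log.length + 1, text });
  history.set(topic, log);
}

// HTTP handler body: return everything after the client's last seen seq,
// so the client can backfill, then resume the live WebSocket stream.
function fetchHistory(topic, afterSeq = 0) {
  return (history.get(topic) ?? []).filter((m) => m.seq > afterSeq);
}

persist("general", "hi");
persist("general", "anyone here?");
console.log(fetchHistory("general", 1));
// [ { seq: 2, text: 'anyone here?' } ]
```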

  • @dmitrychernivetsky5876
    @dmitrychernivetsky5876 2 years ago

    "Scaling" with Redis as a single point of failure.
    FYI, most of the libraries, and therefore the code for connecting to clustered Redis, are entirely different from what was presented.

  • @randomlettersqzkebkw
    @randomlettersqzkebkw 2 years ago

    I do not understand how this is scaling when the load balancer in the middle also holds connections to the clients. If it merely routed the requests directly to the websocket servers, then OK, but it's not doing that :/

  • @anuragvohra5519
    @anuragvohra5519 3 years ago +1

    Aren't the load balancer and Redis the bottlenecks of your application's scaling?

    • @hnasr
      @hnasr  3 years ago +1

      Anurag Vohra there will always be bottlenecks for sure. No system is perfect.
      I would however relieve that bottleneck by introducing many load balancers and throwing them behind an active/active cluster.
      Active-Active vs Active-Passive Cluster Pros & Cons
      ruclips.net/video/d-Bfi5qywFo/видео.html

    • @anuragvohra5519
      @anuragvohra5519 3 years ago

      @@hnasr Thanks, that covers what I was searching for!

    • @anuragvohra5519
      @anuragvohra5519 3 years ago

      @@hnasr Do you have any portal where one can reach you for job offers? [kind of freelancing]

  • @nit50000
    @nit50000 2 years ago +1

    Thank you for the great article. It is very useful indeed. (Sorry but I feel your voice is very annoying. 😣😂🤣 )

  • @gerooq
    @gerooq 1 year ago

    But why have multiple WS servers and then use Redis to share messages, when you can just run a single WS server that uses in-process memory to store a map from channel name to the list of sockets subscribed to that channel? Then it's trivial to divvy emitted messages among the other sockets in the same channel 🤷‍♂️. I mean, it's way more performant, especially if done multithreaded.
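
For reference, the single-server version the comment describes is indeed short (stub sockets stand in for real ones below); the catch is that the channel map lives in one process, so capacity is capped at what a single process can handle, which is exactly the limit the Redis fan-out in the video removes:

```javascript
// Single-process channel map: channel name -> Set of subscribed sockets.
const channels = new Map();

function subscribe(channel, socket) {
  if (!channels.has(channel)) channels.set(channel, new Set());
  channels.get(channel).add(socket);
}

// Fan a message out to every subscriber except the sender.
function publish(channel, message, sender) {
  for (const sock of channels.get(channel) ?? []) {
    if (sock !== sender) sock.send(message);
  }
}

// Stub sockets for illustration; real ones come from the WS library.
const a = { inbox: [], send(m) { this.inbox.push(m); } };
const b = { inbox: [], send(m) { this.inbox.push(m); } };
subscribe("general", a);
subscribe("general", b);
publish("general", "hello", a);
console.log(b.inbox); // [ 'hello' ]  (a, the sender, receives nothing)
```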

  • @akhilsharma1808
    @akhilsharma1808 6 days ago

    The video's heading and its content are completely different