Thanks Nasser for picking up the topic, wishing you all the success in life
Thanks for sharing the article!
Nice video. You should make more of these, where you debate and contrast blog posts and articles.
Hussein, right on: "Do we really need to send so many requests?" Though not as easy as a networking change, modify the app so it doesn't download 500 files per page. Relying on HTTP/1 to slow down and spread out page load time is bad design.
I wonder if TCP's slow-start property does some effective throttling. It seems like putting HTTP/2 after the load balancer doesn't solve spiking requests; it might even make them worse.
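[Editor's note on the slow-start point: TCP's congestion window does start small and roughly doubles each round trip until a threshold is reached, so fresh connections are indeed throttled at first. A minimal sketch of that ramp, with illustrative numbers rather than any real stack's defaults:]

```python
# Toy simulation of TCP slow start: the congestion window (cwnd)
# doubles every RTT until it reaches the slow-start threshold
# (ssthresh), after which growth becomes linear (congestion
# avoidance). Values are in MSS units and purely illustrative.

def slow_start(ssthresh=16, rtts=8, initial_cwnd=1):
    """Return the cwnd (in MSS) at the start of each RTT."""
    cwnd = initial_cwnd
    history = []
    for _ in range(rtts):
        history.append(cwnd)
        if cwnd < ssthresh:
            cwnd *= 2          # exponential growth during slow start
        else:
            cwnd += 1          # linear growth in congestion avoidance
    return history

print(slow_start())  # cwnd ramps: [1, 2, 4, 8, 16, 17, 18, 19]
```

[So a burst of new connections is self-throttled for the first few RTTs, but a long-lived HTTP/2 connection that has already ramped up is not.]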
It's like driving your Ferrari in Los Angeles at 5 pm on a Friday in April 2019. I guess we need a COVID for software: shelter your data in place at the server, and only send essential info. No large gatherings of JavaScript and CSS libraries and include files.
The OS at Layer 7 needs to catch up to these new protocols. An increase in efficiency in one layer does not automatically mean efficiency gains in another layer. In fact, it may create a new bottleneck for something that was specifically designed to handle a certain frequency/load of instruction data. If one end of the communication is designed to handle a certain load at a certain frequency and things change on one of the communication partners, it may require a re-think. The algorithms that handle these instructions need to be optimised for the new pattern coming their way. There may even be a requirement for hardware architecture changes to make the most of the new protocols. Usually, if the proxies/load balancers are optimised to handle this and reframe the data so that the end node gets it in its most optimal form (what it's "used to"), this can be a work-around, but it is not efficient. It is better that the algorithms are optimised for a specific protocol; if that is not possible, then something needs to change at the ASIC-level architecture. Otherwise you have a situation where people deploy more infrastructure to spread the load over more hardware, trying to get near the performance they used to get, which is completely self-defeating in terms of design methodology.
But in this case, I think that if both server and client can support the same protocol, then why not turn it on on the server side as well? Or is there some negative impact that would have?
Having said all that, I have no idea what I just said but it sure does sound clever right?
Just scale the backend horizontally and let the load balancer balance the load :D
Hi Hussein. So, wouldn't the right way to switch from HTTP/1.1 to H2 be to start from the end and work toward the front? That is, first migrate our back-end services, and then migrate our reverse proxy. In that case everything seems right, doesn't it?
Does HTTP/2 also multiplex the segments inside each request, or does it send them in sync and wait for each segment's ACK?
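[Editor's note on the question above: HTTP/2 splits each request/response into frames tagged with a stream ID and interleaves frames from different streams over one connection; segment-level ACKs remain TCP's job underneath, independent of the framing. A toy sketch of the interleaving, with made-up frame sizes and function names:]

```python
# Toy illustration of HTTP/2 frame multiplexing: each stream's
# payload is split into (stream_id, chunk) DATA frames, and frames
# from different streams are interleaved on one connection.
# TCP-level ACKs happen per segment underneath, not per frame.

from itertools import zip_longest

def frames(stream_id, payload, size=4):
    """Split a payload into (stream_id, chunk) frames of `size` bytes."""
    return [(stream_id, payload[i:i + size])
            for i in range(0, len(payload), size)]

def multiplex(*streams):
    """Interleave frames from several streams round-robin onto the wire."""
    wire = []
    for group in zip_longest(*streams):
        wire.extend(f for f in group if f is not None)
    return wire

s1 = frames(1, "AAAAAAAA")   # two frames for stream 1
s3 = frames(3, "BBBB")       # one frame for stream 3
print(multiplex(s1, s3))
# [(1, 'AAAA'), (3, 'BBBB'), (1, 'AAAA')]
```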
Hussein, did you try Entity Framework? Your opinion!
I have not! I'll check it out.
Please, Hussein, can you explain the Kerberos protocol?
I will! It's a complex protocol and requires some more research. Thanks!
Hey, nice video, although I would say that enabling throttling at the load balancer can have advantages, such as dynamically changing the concurrency at times of peak and lower flow. Also, if we have caching at the load balancer, it CAN be a win-win, I think.
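[Editor's note: the kind of throttling discussed here is often a token bucket, which lets short bursts through while capping the sustained rate. A minimal sketch; the rates and capacity are invented for illustration, and real proxies such as nginx or HAProxy have their own configuration for this:]

```python
# Minimal token-bucket throttle of the sort a load balancer might
# apply per backend: a request passes while tokens remain and is
# rejected (or queued) otherwise. Tokens refill continuously at
# `rate` per second, capped at `capacity` (the allowed burst size).

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=2, capacity=2)   # 2 req/s, burst of 2
print([bucket.allow(t) for t in (0.0, 0.1, 0.2, 1.2)])
# [True, True, False, True]
```

[The burst at t=0.0 and t=0.1 passes, the third request is throttled, and by t=1.2 the bucket has refilled.]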
AAYUSH KUMAR AGARWAL, nice use case, Aayush. I can see that if scaling is not an option at the backend, throttling helps for sure.
Hey, Hussein! Last month, you made a video about load balancing in HTTP/2 ( th-cam.com/video/0avOYByiTRQ/w-d-xo.htmlm ), where you explained how a reverse proxy would distribute requests (from an HTTP/2 stream in the front-end) to each back-end server (running on HTTP/1 or /1.1).
If the suddenly higher bandwidth was the short-term issue expressed in the article, wouldn't it have been more efficient to horizontally scale the back-end servers instead of outright throttling the proxy? Since their back-end was supposedly designed to be "stateless", wouldn't it be arguably "easier" to temporarily add more instances (of the back-end services) until the architectural issues have been resolved?
Personally, the thought of throttling proxies raised my eyebrows because it defeated the whole point of the HTTP/2 migration. By throttling their bandwidth, it seemed like they opted out of the potential benefits of HTTP/2 altogether. In the end, it would be the users who suffered the lower capacity. If the back-end services were horizontally scaled (at least for the time being), then the users wouldn't have been inconvenienced (assuming that this was a "mission-critical" application).
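[Editor's note: the horizontal-scaling idea above can be sketched simply. With stateless backends, scaling out is just adding targets to the balancer's rotation; the hostnames here are hypothetical:]

```python
# Sketch of "just add instances": a stateless backend pool behind a
# round-robin balancer, where scaling out simply registers more
# targets. Statelessness is what makes this safe: any instance can
# serve any request, so new targets need no warm-up state.

from itertools import cycle

class Pool:
    def __init__(self, backends):
        self.backends = list(backends)
        self._rr = cycle(self.backends)

    def scale_out(self, backend):
        self.backends.append(backend)
        self._rr = cycle(self.backends)   # restart rotation with the new set

    def route(self):
        return next(self._rr)

pool = Pool(["app-1", "app-2"])
pool.scale_out("app-3")                   # temporary extra capacity
print([pool.route() for _ in range(6)])
# ['app-1', 'app-2', 'app-3', 'app-1', 'app-2', 'app-3']
```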
Basti Ortiz, I agree with your thinking. Yeah, to me it's easier to add more instances, but for whatever reason (cost, perhaps) they decided to go the throttling route.