Get my Fundamentals of Networking for Effective Backends Udemy course. Head to network.husseinnasser.com for a discount coupon (the link redirects to Udemy with the coupon applied).
Netflix for developers :)
❤️
LOL great comment
HAHA! literally
And you don’t have to use someone else’s account
hard pass on the chill
I loved the picture grid loading example to nicely visualize the performance and difference in number of connections.
I am a network engineer, but I love the backend engineering stuff you show; it makes it easier for me to understand application problems while troubleshooting the network side. Thank you, and keep posting!!!
Thanks!! Glad to see more network engineers in the channel. Cheers
This channel is so addictive that even while I'm focusing on frontend stuff, I end up watching these videos.
Thank you Hussein! I want you to know that this is great content, and I love the way you deliver it with such energy and excitement. I frequently download videos from RUclips for offline use, and I just downloaded this one because I think it's great content.
One of the best RUclips channels for programmers...
I like this guy. This guy is smart and also has a talent for teaching...
HTTP/2: We gotta do it fast.
Michael Scott: That's what she said.
First time on your channel. Subscribed after this first video. I like your style of content delivery; such themes become much more entertaining. Thank you!
Joined your channel to support you. Excellent work
❤️ thanks Subham ! Appreciate your support
Your content is great! Thanks Hussein!
Excellent!! Very clear presentation and easy to understand.
Thank you 🙏
Very good demo.. I loved it.
So many advances for the internet that I was not aware of!
This was very insightful. Thanks for this video.
Thanks for your comment Nataraj
More reasons to love L4 LBs. Thanks, Hussein.
Awesome explanation 👍🏻
The more I learn about software, the more I find that each new upgrade boils down to parallelization.
Awesome Presentation!...
😊🙏 thanks
Thank you! In school, people don't teach this stuff.
It was so fun watching!!
It's a huge performance difference - that's what she said
Loved the demo.
Subscribed!! Great content, man!!
Your content is great! Thank you for posting.
Hi @husseon Nassar: In the HTTP/2 demo, did you use the server push mechanism to push the 100 images, or was only the protocol set to HTTP/2 (without enabling server push)?
Hey, great info, thanks! Can you also make a comparison between HTTP/2 server push and WebSocket-based push messages? I understand that they don't serve the same purposes, but a short video explaining when to use which would be great.
Liked your demo!!
Thanks, Hussein! :)
Michael Scott..! That's what she said.. haha :p :p
Nice explanation..!
Those cons of HTTP/2 are certainly a stretch, given that a server could return an HTML page with all sorts of unused assets in it anyway, which effectively creates a push in terms of network load ... and the chances of having an H1 load balancer in front of an H2 backend are almost zero.
Thanks so much for this video tutorial.
I have watched a lot of your videos. Thank you very much for sharing your knowledge.
Fantastic explanation, loved it
Seriously great content, very very interesting!!
1. Thanks, great video.
2. It would be helpful if you showed the stream IDs in the HTTP request/response and the TLS connection mechanism 😎
Thanks Chen!
That is a great idea diving deep into the belly of the beast of HTTP/2.. will consider it in the future.
@@hnasr you are the man
excellent video! Thanks! ;)
Amazing demo of HTTP/1.1 and HTTP/2.0.
Thank you very, very much.
thanks Roman for all the love and comments!!!
@@hnasr 😃😉
very good , thanks
HTTP/2 resolves head-of-line blocking at the application layer, but at the transport layer (TCP) it still exists; QUIC is trying to solve it at the transport layer as well.
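For intuition, here is a toy Python sketch (my own illustration, not from the video) of transport-level head-of-line blocking: TCP delivers bytes strictly in order, so one lost packet holds back later packets even when they carry an unrelated H2 stream. QUIC avoids this by giving each stream independent delivery.

```python
# Toy model of TCP head-of-line blocking under HTTP/2 (illustration only).
# One TCP byte stream carries two independent streams, A and B.
packets = [(1, "A1"), (2, "B1"), (3, "A2"), (4, "B2")]  # (seq, payload)
arrived = {1, 3, 4}  # packet 2 (stream B's data) was lost in transit

deliverable = []
for seq, payload in packets:
    if seq not in arrived:
        break  # TCP waits for the retransmit before releasing anything later
    deliverable.append(payload)

print(deliverable)  # stream A's second chunk is stuck behind B's lost packet
```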
How do I host a local server with HTTP/2 support? Do Node's http-server and Apache Tomcat support it?
Palaniappan RM Sure, all of those support it! Check out this video where I show H2 on Caddy, because it's the easiest: Getting started with Caddy the HTTPS Web Server from scratch
ruclips.net/video/t4naLFSlBpQ/видео.html
cURL supports HTTP/3 (experimental) as well.
Nice! Didn't know that, looks like I need to make a video about cURL, thanks !
@@hnasr Thanks to you.
How can I check whether my web server supports HTTP/1.1, H2, or H3? How can I force H1 or H2 on either the client side or the server side, if they support both?
Hey Harash, I recommend Chrome dev tools. Good idea, I'll make a video on this.
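As a complement to dev tools, here is a small Python sketch (my own, under the assumption that the server speaks TLS on port 443 and uses ALPN to negotiate the protocol) that asks a server which HTTP version it will speak:

```python
import socket
import ssl

def negotiated_http_version(host: str, port: int = 443):
    """Ask a TLS server which HTTP version it will speak, via ALPN."""
    ctx = ssl.create_default_context()
    # Offer both h2 and http/1.1; the server picks the best one it supports.
    ctx.set_alpn_protocols(["h2", "http/1.1"])
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.selected_alpn_protocol()  # "h2", "http/1.1", or None
```

Note that HTTP/3 runs over QUIC/UDP, so this trick can't detect H3; servers typically advertise it in an Alt-Svc response header. From the client side, curl can force a version with its `--http1.1` and `--http2` flags (and `--http3` if your build supports it).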
Thanks a lot Hussein. One question: for example, if the client says keep-alive and the server responds without keep-alive, what happens to the TCP connection? Will the client try to re-establish it with another 3-way handshake?
If the server doesn't support keep-alive (which would surprise me; it would mean the server is so old it's from 1997 or earlier), the connection will be terminated after the response is sent.
In HTTP/2 the connection is always kept alive.
@@hnasr thank you very much for the great content that you offer us! Keep going.
Sorry, but what's the difference between the keep-alive header and multiplexing?
hhellohhello No problem! The Keep-Alive header instructs the server not to close the TCP connection immediately after a response, so we can continue sending more data through it. This has been available since HTTP/1.1.
Multiplexing is the ability to send multiple requests in parallel over that TCP connection, something we couldn't do in HTTP/1.1; it's only available in HTTP/2.
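A self-contained sketch of HTTP/1.1 keep-alive using only the Python standard library: two sequential requests reuse one TCP socket. They are sequential, not parallel — that sequential limit is exactly what H2's multiplexing removes.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 keeps connections alive by default

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), Handler)  # ephemeral port
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/first")
r1 = conn.getresponse()
r1.read()
sock_after_first = conn.sock  # socket survives thanks to keep-alive

conn.request("GET", "/second")  # second request rides the same TCP connection
r2 = conn.getresponse()
r2.read()
same_socket = conn.sock is sock_after_first

server.shutdown()
print(r1.status, r2.status, same_socket)
```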
powerful stuff :v
I checked again, brother.
Is there any update on the series?
When you showed the benchmarks, where was it shown that there were 6 TCP connections? In the network panel?
From Google paper
Popular Web browsers, including IE8 [2], Firefox 3 and Google's Chrome, open up to six TCP connections per domain, partly to increase parallelism and avoid head-of-line blocking of independent HTTP requests/responses, but mostly to boost start-up performance when downloading a Web page.
static.googleusercontent.com/media/research.google.com/en//pubs/archive/36640.pdf
Amazing
Hey, that is a brilliant explanation.
I have one question.
How did you tell at 18:40 that there are 6 TCP connections sending images in parallel? I couldn't make it out from the video.
Siddartha Reddy Thanks 🙏 Six connections are opened in the case of HTTP/1 in browsers. I explain this here: Why Browsers have 6 active TCP Connections for each website?
ruclips.net/video/Xkr2nm6UPN8/видео.html
Suppose my configuration is client -> gateway -> server. How will HTTP/2 behave between each layer in this scenario? Or should I keep HTTP/2 only between the client and the gateway?
Does HTTP/2 also multiplex the segments inside each request, or does it send them in sequence and wait for each segment's ACK?!
Aww yes, another 1k like!
Either there is an ongoing meme in the community of leaving the like count at 999 and I'm ruining it,
or I'm on fire! XD
PS: Appreciate the explanations, both high level and low level.
🙏🙏🙏
Not much new to learn for me, but still good for beginners.
How do I become a member of this channel?
great content :)
How would I set up HTTP/2 in my backend network behind a reverse proxy?
HAProxy supports HTTP/2.
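For example, a minimal hypothetical haproxy.cfg fragment (the certificate path and frontend/backend names are placeholders) that advertises h2 to clients via ALPN while speaking HTTP/1.1 to the backend might look like:

```
frontend fe_web
    mode http
    bind :443 ssl crt /etc/haproxy/site.pem alpn h2,http/1.1
    default_backend be_app

backend be_app
    mode http
    server app1 127.0.0.1:8080
```

With `alpn h2,http/1.1` on the bind line, each client negotiates the best version it supports, and the proxy translates to plain HTTP/1.1 toward the application server.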
The idea here: if the server and client keep this connection open, isn't that a kind of waste of resources? If the client doesn't request anything, the server still has to maintain the connection even though the user isn't using it. We could probably set a TTL on the open connection, so that after that much time without activity, it gets closed.
What happens when a file is already cached in the client but the server pushes the file because of its configuration?
Thanks 🙏
What is the difference between an HTTP/1.1 persistent connection and H2 multiplexing?
Both HTTP/1.1 and H2 have persistent connections (they are not closed as long as requests are being sent).
The difference is that in HTTP/1.1 you can only send one request at a time: you can't send another request until you get a response. This is made slightly better with pipelining, where you can send multiple requests on the same connection, but the server must respond in the order the requests were sent. This causes problems with proxies and bad implementations.
So browsers in HTTP/1.1 open multiple connections to get around this limitation.
In H2 you can send any number of requests on the same connection in parallel. The reason is that each request is uniquely identified with a stream ID, so when we get a response we know which request it belongs to.
All of this is hidden from us programmers, but it's good to understand.
Watch my HTTP/2 playlist here to learn more:
HTTP/2
ruclips.net/p/PLQnljOFTspQWbBegaU790WhH7gNKcMAl-
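The stream-ID matching described above can be sketched in a few lines of Python. This is a toy illustration (not real H2 framing): interleaved frames tagged with stream IDs get reassembled into per-request responses, and client-initiated streams use odd IDs (1, 3, 5, ...).

```python
# Frames arrive interleaved on one TCP connection: (stream_id, payload).
frames = [
    (1, b"<html>"),   # part of the HTML response on stream 1
    (3, b"\x89PNG"),  # part of an image response on stream 3
    (1, b"</html>"),  # rest of stream 1, interleaved with stream 3
    (3, b"IEND"),     # rest of stream 3
]

# The stream ID tells the receiver which request each chunk belongs to.
responses = {}
for stream_id, chunk in frames:
    responses[stream_id] = responses.get(stream_id, b"") + chunk

print(sorted(responses))  # two complete responses, matched by stream ID
```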
good. thanks.
What? Where does it say in the spec that I have to wait for the response before sending another request? I hope no actual client does that.
How can we check that there are 6 TCP connections in the case of HTTP/1.1 in the developer tools?
Connection ID; I illustrate this here ruclips.net/video/LBgfSwX4GDI/видео.html
@@hnasr cool thanks!
Why does HTTP/1.1 use 6 TCP connections? Is there a technical reason it is 6, or is it just a common standard?
kidsWillSeeGhosts Why 6 in particular? I think it was anecdotal evidence that more than 6 doesn't give more performance, just more TCP overhead.
If you are asking why H1 opens many connections: it's to be able to send multiple requests in parallel (you can't do that in a single TCP connection). H2 allows sending multiple requests in a single TCP connection using streams. Hope that helps 😊
If many objects are loaded in parallel, each object just competes for the limited bandwidth, so each object loads proportionally slower.
Also, clients might be able to open hundreds of connections, but few web servers will want that, because they are often processing requests for many other users at the same time. A hundred simultaneous users, each opening 100 connections, put the burden of 10,000 connections on the server. This can cause significant server slowdown.
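The arithmetic above, spelled out (the bandwidth figure is a hypothetical value for illustration):

```python
# Connection burden on the server from the comment above.
users = 100
connections_per_user = 100
server_connections = users * connections_per_user  # sockets the server tracks

# Bandwidth sharing: parallel objects split one pipe, so each loads slower.
link_mbps = 100          # hypothetical client bandwidth
parallel_objects = 100
per_object_mbps = link_mbps / parallel_objects

print(server_connections, per_object_mbps)  # 10000 connections, 1.0 Mbps each
```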
@ Thanks
thanks
The geodatabase-related videos have been removed from your channel.
Could you please re-upload them?
The videos were helpful for us.
Thanks
Muhammad Abnan Hey Muhammad, can you try again? Let me know.
Hi,
it's not there, brother.
Last week I saw them, but I've been looking since this afternoon
and I cannot find them at all.
There is nothing about the geodatabase series and versioning.
Muhammad Abnan Multi-User Geodatabase
ruclips.net/p/PLQnljOFTspQWseiNSmOMgsR5lgugKg_KP
If someone asks me "what is an API?" I'll send them this one.
10:41 I see what you did there xD
This is not right; a layer 7 load balancer supports HTTP/2. Recently I created a Flask app that only supports HTTP/1.1, running on Gunicorn, which also does not support HTTP/2. Then I used Cloud Run, deployed as a backend to a classic application LB in GCP. My API locally shows as HTTP/1.1, but the same API behind the load balancer shows as HTTP/2, so I do not think what you said is relevant now.
21:57 that's what she said😂😂😂
🎉👏🏼👏🏼👏🏼👏🏼👏🏼👏🏼
Thanks, YouTube has 2x speed :)
amflearning by doing
That's what she said. 😂
Can you make the videos in Arabic too?
matse matse I was considering making another channel as a test, or at least including subtitles, but it is a lot of work.
I am Rafiq Ali Ahmed
Hello
The stuff is good, you just don't need to do the voices;
a simple and straightforward way of talking would be clearer, I presume.
Helllozzzz
Hahaha, that's what she said
Hello
A time-consuming video; it could be made a bit shorter.