Thank you Chris. Waiting for more and more content on TCP, QUIC & TLS ..
Big big big thanks to you ❤️❤️
Working on more as we speak! thank you for watching and commenting.
I want to say that your channel and content is one of the best I could see related to the TCP, keep going please! Thank you!
Thank you for the comment! I appreciate the positive feedback.
A lifesaver for my Introduction to Computer Science homework, thank you! Your explanation is super clear :)
Hi Chris..
you did an amazing job explaining congestion control. I watched a lot of your TCP videos. You did an amazing job helping me understand TCP throughout this playlist. Thank you so much.
😀
Clear, concise, articulate explanations. Very nicely done squire.
Thank you!
Thank you!
Amazing! I had written SLOW-START, CONGESTION AVOIDANCE, and FAST-RETRANSMIT in my notes, but never understood them. Thanks to Chris I am in a better position for my upcoming job interview.
Awesome! Glad the video helped you!
dude thanks, best explanation on congestion window I've seen so far!
Thanks for the comment!
This is good stuff, explained in an easy way. I have been following your sessions since I attended one of them at SharkFest. You really know how to simplify complex concepts and explain them to a larger audience.
thank you for the comment!
You can't imagine how much you have helped me. Thanks a lot
Happy to help!
God bless this awesome guy and prevent any mess in sequence numbers. :)
I'm so impressed, I bought the T-Shirt!!
Came over from David Bombal's channel and so pleased I did! 😊
Welcome Alex!
Just found your channel, love all your videos! Thanks for sharing your vast knowledge.
Glad you stopped by Mike!
Very good explanation of this topic. I think I finally got it. A very important point you make is that not all TCP stacks handle this the same way / as aggressively.
Great to have you! Yeah congestion control is a tough thing to get straight.
Always love your videos, thank you for making difficult things easy
Thanks for the comment!
These are excellent contents, Chris. I'd like to know more about TCP Optimization.
This is a very good explanation. I deal with this every day and you nailed it. Great work
Wow thanks for the comment Colin!
Thanks a lot for this Chris. I am able to troubleshoot a lot better because of the knowledge shared in your videos. You are the best Keep it coming.
Thanks for the comment!
you explain this so well. i am so grateful for you and this channel
Thank you so much Dear Chris, You are such a Good Mentor.
Great video Chris!! waiting for your deep dive in SACK explanation
You got it!
Awesome explanation 🤯. Thank you very much for making this material available and please keep making more.
Thank you! Will do!
Ohhhh .. you made my day. Explaining this congestion window...
Yikes, yeah it's a tough one to understand. I'm glad it helped!
Really great..Hope you do more tcp congestion analysis. Thanks!
I love that stuff. Good idea!
Thank you for the awesome explanation. Loving all the TCP content!
Thank you for the comment!
..as always Chris, thank you for wireshark content!
You bet! Glad it helps.
Great as always Chris!
Glad you enjoyed it!
I know i'm late to the party, but thanks for another gem.
Good illustrative example and explanation. Thank you!
your videos are awesome sir
Thanks for watching and commenting!
You are good at this dude! Thanks so much
Exceptional explanations. ❤️
Outstanding performance. Could you please add more security materials.
Definitely!
lovely explanation 🔥❤️
Great video. It was straight to the vein!!
Glad you liked it!
Thanks for taking time to deep dive into these things.
My pleasure!
Thank you Chris !! Great content ..My concepts got cleared
Thx for the content! Learning a lot from your packet analysis.
Glad to hear it!
Thank you Chris, nicely explained.
thank you for the brilliant explanation chris.. much appreciated
My pleasure!
Hi!
"The initial value of ssthresh SHOULD be set arbitrarily high (e.g., to the size of the largest possible advertised window)" (rfc5681), as i understand it, that means Window Scale * Window = 262140 bytes, so why slow start threshold is 32 (46720 bytes)? (time = 07:14)
" If (SMSS > 1095 bytes) and (SMSS
Hey, great comment. That is because not all OSes and applications tune things according to the RFC. iPerf is especially picky about it. SHOULD is always a fun word to navigate.
Nowadays, IW is up to 10, and it is frequently tunable (as root) - RFC 6928.
Also, virtually all full-featured stacks cache some path information between sessions (the Host Cache), so that slow start does not overshoot the previously recorded capabilities of the network.
During slow start, ABC (Appropriate Byte Counting) - RFC 3465 - is in common use: an ACK that covers 2*MSS or more of new data allows cwnd to grow by up to 2 MSS, but no more. (Delayed ACKs are the normal trigger; the extreme gaps in the shown trace look more like ACK thinning by the network, or ACK compression due to receiver-side LRO, though the timings make that less likely.) The cap exists to address ACK-splitting attacks, where a receiver could once drive the sender to ramp up slow start extremely quickly, causing massive congestion close to the server - an effective DoS attack variant. A rough sketch of this growth rule follows at the end of this comment.
I would really like to see truly advanced topics covered - such as timestamps for troubleshooting; how to spot common signatures of TSO, LRO, ACK compression, ACK thinning, HyStart [++], SACK+RTO, Lost Retransmission (Detection), ECN (AccECN), PRR, Rescue Retransmission, TLP, RACK, and BBR; and what the typical misbehaviors associated with each look like.
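A minimal sketch of that ABC growth rule (not any particular OS stack; the SMSS value and the 2*SMSS cap are just the RFC 3465 defaults assumed for illustration):

```python
# Toy sketch of slow-start growth with Appropriate Byte Counting (RFC 3465).
# Assumption: SMSS = 1460 bytes and the common L = 2 * SMSS cap.
SMSS = 1460
L = 2 * SMSS  # ABC limit during slow start

def on_ack_slow_start(cwnd: int, newly_acked: int) -> int:
    """New cwnd after one ACK while still below ssthresh."""
    return cwnd + min(newly_acked, L)

# A delayed/thinned ACK covering 8 segments still only buys 2 * SMSS,
# which is what blunts ACK-splitting (and extreme ACK-thinning) effects.
cwnd = 10 * SMSS
cwnd = on_ack_slow_start(cwnd, 8 * SMSS)
print(cwnd // SMSS)  # 12 segments, not 18
```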
Great video Chris! At 11:20 when we lost data, how would we go about finding out where in the network this happened? Could a show command on a Cisco device in the path help find where something was lost, or would you go about determining this by running more pcaps across the path? Sometimes when I am troubleshooting I have access to some network devices in the path, but usually closer to the clients than the servers.
Great content in all your TCP videos - very informative, clear, concise, and professionally presented. Haven't seen one yet that wasn't excellent. Thanks for your generous contributions. In your presentation, iperf implements congestion control with an Initial Window of 8 and increases CWND by 2 segments each round trip, but could double it each RTT as you point out. But RFC 5681 states that the upper bound of the IW MUST be set to no more than 2, 3, or 4 segments depending on the value of MSS. It also states that "During Slow-Start, a TCP increments cwnd by at most SMSS bytes for each ACK received that cumulatively acknowledges new data.", which as written means by one segment per round trip unless it meant to say "by at most SMSS * the cumulative ACK count". Could you resolve the confusion regarding IW and the CWND increase? Thanks much!
Very informative video. Thank you Chris.
Glad you enjoyed it!
This is really great! Thank you!
You're very welcome!
Amazing video!
Thanks!
Hello Chris,
Thanks for this new useful video.
What's the difference between the congestion window and the sliding window, as both are maintained by the sender?
BR
Hey thanks for the comment! They are very closely related concepts so it's easy to mix them up. Here is an awesome article about sliding windows that should help - www.extrahop.com/company/blog/2017/tcp-windowing/
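One rough way to picture how the two relate (a toy sketch, not any specific stack's code): the receiver advertises its window (rwnd), the sender maintains the congestion window (cwnd), and the sender may only keep min(cwnd, rwnd) bytes unacknowledged on the wire.

```python
# Toy sketch: the receive (sliding) window and the congestion window both
# cap what the sender may have in flight; the effective limit is the smaller.
def sendable_bytes(cwnd: int, rwnd: int, bytes_in_flight: int) -> int:
    """Unsent bytes the sender could put on the wire right now."""
    return max(0, min(cwnd, rwnd) - bytes_in_flight)

# A huge advertised window doesn't help while cwnd is still small.
print(sendable_bytes(cwnd=14600, rwnd=262140, bytes_in_flight=11680))  # 2920
```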
Hi, I really loved your video. Could you pick a pcap for an IMSI from a telecom node and explain how the handshake happens with the server, and why sometimes the IMSI cannot access the server in IPv6 mode?
Thanks for the comment. I haven't seen a pcap like that yet. If you have one let me know and I could feature it.
Great video. Thank you. Subscribing right away!
Hi Chris. You explain it with such ease, thanks for that. I am a newbie in the networking field and one of my mentors told me to practice TCP/IP protocols with Wireshark. I was searching for videos and stumbled upon your channel. Can you please guide me on how to start as a beginner? I am focusing my career on testing and automation. Please help me.
That is great to hear. Have you checked out my Wireshark course? www.bit.ly/wiresharkintro - it is a good overview of Wireshark.
Cool channel. Thank you for what you're doing.
Thanks for the comment!
Thanks Chris! I still have a question about the receive window: what decides the size of the receive window, and how? The program's code or something else?
The TCP stack itself usually has a default window size option that is used. However it is possible for an application to override this option with a different value.
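As a hedged illustration of that override path (standard socket options; the buffer size here is just an example value, and the OS may round or clamp what it actually grants):

```python
# Sketch: an application asking for a bigger receive buffer (which feeds the
# advertised receive window). The kernel may adjust the granted size.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 262144)  # request ~256 KB
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))   # what was granted
sock.close()
```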
Thank you for the video. The question I have is: what makes .196 ACK only after 8/10/12... segments? In other words, what determines how many segments the receiving party acknowledges at once? Why don't we see more ACKs there? Does it have to do with the Delayed ACK mechanism? If so, why is the number (in ms) fluctuating?
Hey! Great question. In short, kinda. ACK frequency is usually dependent on the stack settings and how sensitive it is to incoming data. In this case we do see the receiver delaying the ack for several large ingress MSS's, but it is not time based. We can see that the receiver is striking a balance - ACKing quickly, but not every other packet which isn't really necessary in this case. I've seen this behavior with throughput tests like these where a large amount of data quickly arrives at the receiver.
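For a sense of what "stack settings" can mean in practice, here is a Linux-only sketch (TCP_QUICKACK is not exposed on every platform, so this is just an illustration, not a recommendation):

```python
# Linux-only sketch: an application can hint the stack to ACK immediately
# instead of delaying. The constant only exists on platforms that support it.
import socket

def request_quick_acks(sock: socket.socket) -> None:
    if hasattr(socket, "TCP_QUICKACK"):
        # The kernel may reset this hint, so apps often re-apply it per read.
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_QUICKACK, 1)
```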
@@ChrisGreer Thanks, I appreciate your answer.
Chris, I am going to drop out of college and watch your videos instead. Thank you so much for creating this content.
Haha! Please don’t drop out for my content! 😜 honestly just keep analyzing traffic no matter what and you will have a bright future in this field, college or not.
Which CCA are you using in the example lab? Because I get different results with mine, using CUBIC. Thank you!
Gotta check, but I am pretty sure it was cubic as well.
Superb!
Thanks a lot!
What odd stacks created the trace in this example?
ACK compression/thinning like this by the receiver is by itself a massive performance restriction. That the cwnd collapses down to 4 MSS instead of 50% (NewReno) or 70% (CUBIC) is also very curious - I'm inclined to suspect this is an artificially created trace with dummynet (100 ms delay) and some time-based ACK thinning...
Good spot @richardscheffenegger9138 - this trace was created using iPerf. I tweaked a few of the window size settings so it would be easier to see the collapse of the congestion window. When you are teaching cwnd, doing so in a real, uncontrolled environment is very tricky. Not the best way to introduce new learners to the concept.
@@ChrisGreer Sure. I meant which OS (and version) this was captured on. Linux is supposed to infer the slow-start phase and ACK every segment then (non-RFC behavior). TSO/LRO/GRO can interfere and create the extreme ACK thinning you showed. So for a clean (textbook) example, disabling TSO/GRO and clearing the Host Cache are key.
For the highest alignment of the TCP implementation with the RFCs, you may want to use FreeBSD or another BSD variant.
If you want to show some fancy effects like continuous cwnd growth while the application limits its sending speed and then provides a huge chunk of data, you want to use uperf. That could demonstrate what happens after idle, or when the cwnd is kept open with little data transfer until a massive burst of data is provided - a line-rate jump in the TCP sending rate, usually massive induced losses in the tail-drop queues, and even lost retransmissions (which, again, Linux does a very good job handling, even though the RFCs are silent on that).
You may want to look into RACK/TLP (Netflix traffic) though, since the loss recovery there is quite outstanding.
And as I mentioned elsewhere, a good demonstration of the TCP ECN control loop is nowhere to be found, even though many large players (MSFT, Apple, ...) have been deploying ECN for a few years now. (Also check DCTCP.) If you want more details, contact me ;)
@@richardscheffenegger9138 As I recall this was between a Win11 and a Mac 11.1.x or something like that. A lot of what you are describing is far beyond the intent of this video and even beyond the interest of the standard YouTube viewer. Good fodder for a TCP congestion course though.
Thank you for sharing, always top.
Thanks for the comment Yohan!
Hello Chris. Loving the content. I was trying to look at the file but I cannot reproduce the TCP segment column. I found what seemed to be the MSS value under TCP Options - Maximum segment size, but it does not show the same results as what you display on your screen. Can you point to how that column is created? Thanks.
Look above the "Sequence Number" field in the TCP header. You should see TCP Segment Length. That is the one you want to add as a column. Right click - Apply As Column.
@@ChrisGreer Found it. Thank you.
Till when will the congestion window keep doubling? How do I find the threshold?
It will usually go up quickly until it reaches the ssthresh value (slow start threshold), which is an internal value in the stack.
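Very roughly, in code (a toy Reno-style sketch; real stacks like CUBIC grow differently, and the ssthresh value here is just an assumed example):

```python
# Toy Reno-style sketch: below ssthresh cwnd roughly doubles per RTT (slow
# start); at or above it, growth slows to about one SMSS per RTT.
SMSS = 1460
ssthresh = 32 * SMSS  # assumed example value, like the one shown in the video

def next_cwnd_per_rtt(cwnd: int) -> int:
    if cwnd < ssthresh:
        return cwnd * 2      # slow start: exponential growth
    return cwnd + SMSS       # congestion avoidance: roughly linear growth

cwnd = 4 * SMSS
for _ in range(8):
    cwnd = next_cwnd_per_rtt(cwnd)
print(cwnd // SMSS, "segments")  # growth flattens once ssthresh is reached
```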
Hey Chris, this is really great stuff - just found your channel!
Had a question/confirmation: the [TCP Fast Retransmission] goes out in #3361 in response to 3 DUP ACKs. Then in #3388, after another 27 DUP ACKs come in, we see a [TCP Out-of-Order] go out.
I assume #3388 was in response to the SACK Left Edge indicating 2 segments were missing, and thus it was re-sending that second one (first obviously going out from that [TCP Fast RT] ), and that the 20ms gap was just due to the calculating/processing of that SACK data?
Interestingly, there's only < 1 ms delay between getting the ACK for first missing segment (#3503) and the ACK for second missing segment (#3505, which also got everything caught up) - presumably the other side was piecing some things back together and then shot out both ACK's back to back despite them coming in 20ms apart?
Hey Nathan, nice job! Great analysis and great question. Without having a pcap on the other side, we can't be 100% sure, but we definitely can measure that 20 ms delta between the RT and the Out-of-Order (which is also technically a retransmission, but the Wireshark TCP analysis bits can sometimes get that confused). Based on the behavior, I would say there are two possibilities: 1. Like you mentioned, the server could have halted 20 ms before transmitting the second retrans. 2. The network could have buffered it along the path. I say that because we already see high latency and congestion-induced loss on this network, so it's possible it got stuck in a buffer for a few ms en route.
Either way - great spot and thank you for the comment! This is definitely a case in point of why I ask my clients for dual-side captures when I am doing deep TCP analysis. Takes the guessing out of the way. Keep on capturing Nathan!
Hi Sir, you are great 👍 your videos help us a lot.
Thanks and welcome!
Reading the RFC alone can be confusing; your videos make things very clear. Great job!
In your experience, does increasing the initial MSS value to an extreme value like 64 cause more issues?
I would say so because that is a super small MSS! Are you thinking of the initial window? I haven't ever increased it that much manually. I'm sure the super-smart people who designed the TCP stack did though....
@@ChrisGreer
Yeah the value which defines the initial window size.
I performed performance tests over TCP to boost throughput over lines with high latency at 100-1000 Mbps ("long fat pipes").
Before sending the app data, which is very heavy (MBs-GBs of data), it sends a few MB (1-10).
I found that with latency it took about 3x as long because of slow start.
So I boosted it up to an initial window of 64 MSS and got the best result because it no longer slow-started. [It was very visible in Wireshark too, thanks to your videos :)]
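Rough math behind that ramp (a toy sketch assuming classic doubling, a 1460-byte MSS, and ignoring ssthresh, losses, and pacing):

```python
# Toy model: round trips needed to deliver a transfer under classic slow
# start (doubling each RTT), ignoring ssthresh, losses, and pacing.
import math

def rtts_to_send(total_bytes: int, iw_segments: int, mss: int = 1460) -> int:
    segments_left = math.ceil(total_bytes / mss)
    cwnd, rtts = iw_segments, 0
    while segments_left > 0:
        segments_left -= cwnd  # send one full window per round trip
        cwnd *= 2              # slow start roughly doubles the window
        rtts += 1
    return rtts

# A ~5 MB pre-send like the one described above:
print(rtts_to_send(5_000_000, iw_segments=4))   # ~10 round trips
print(rtts_to_send(5_000_000, iw_segments=64))  # ~6 round trips
```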
Great Video thx.
Thank you Chris
Very welcome!
Thank you Chris. Can you help me with the issue below?
I am stuck with an issue observed in Wireshark: after the complete handshake between server and client is over, application data is transferred between the server and client. After some time the server sends an encryption alert message with alert code 21, followed by a FIN. The client ACKs it and sends its own encryption alert message with alert code 21 along with a FIN, which is ACKed by the server. But along with this, the client also sends an RST. I am not understanding why the client is sending an RST.
This issue happens after every disconnect between the client and server. The server has a keep-alive of 6 seconds, so if there is no data to be transferred for 6 seconds, the server sends an encryption alert message along with a FIN.
Without the packets it's tough to say for sure. But I am guessing that the server is timing out the encrypted session first, and the client is just reacting. After the FIN that the client sends, it considers anything other than the ACK to that FIN to be further activity, and it will reset. It could be just how the client's TCP stack settings are configured. Unless it is actually breaking anything with the application, you should be able to ignore it. It's just ending the connection with a reset.
Thanks for the initial analysis Chris. I have the Wireshark captures with me. Can I mail those to you? Would you help me with the further analysis?
@@anubhavkumar2059 The RST may only be a courtesy notification to the server, to quickly move on from the TIME-WAIT state, releasing some still-held resources on the server side... allowing for a quick recycling of the 4-tuple for a new session (some OSes don't have much leeway in their ephemeral port selections, and re-using the same one in quick succession could otherwise result in a very delayed reconnection time).
Hey there! Hopefully someone can help me out on this one. I'm currently learning how TCP works and I have a question related to this capture: why does .196's Seq number not increase each time it sends an ACK packet to .184? I understand it isn't sending any large amount of data, but isn't the ACK packet itself still counted as 1 byte and, therefore, wouldn't it increase the Seq number by 1?
Hello! That is a great question. So an ACK carries no data and does not have the SYN or FIN bit set, both of which will increase the seq number by one (the ghost byte). So in this case, .196 is not sending any data, so there is no need to move the seq number forward in that direction. Hope that helps!
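The accounting rule in a few lines (just an illustration, not any stack's actual code):

```python
# Illustration of which packets consume sequence space: SYN and FIN count as
# one "ghost byte" each, payload counts byte-for-byte, a pure ACK counts as 0.
def seq_consumed(payload_len: int, syn: bool = False, fin: bool = False) -> int:
    return payload_len + int(syn) + int(fin)

print(seq_consumed(0, syn=True))  # 1    -> SYN advances the sequence number
print(seq_consumed(1460))         # 1460 -> a full data segment
print(seq_consumed(0))            # 0    -> a pure ACK advances nothing
```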
@@ChrisGreer thank you!
Thanks a lot
Most welcome!
Thanks
I don't see the file in GitHub!
Got it! One of the best, probably the best video on TCP congestion.
Wireshark says the pcap is damaged.
It's not actually looking for a "successful ACK". It's timing the round-trip time (RTT) to receive the ACK. If you're on a really slow link (think the old modem days) you can't just send 100K bytes or it will take too long to get the handshake. So as long as the ACKs are received in an acceptable time, it can send a larger payload. If you're on a wireless network and there are a bunch of people sharing the "wire", you don't want one app dominating the queues for too long.
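The arithmetic behind that modem example, for scale (a rough sketch that ignores protocol overhead):

```python
# Rough serialization math for the "old modem days" point, ignoring overhead.
payload_bits = 100_000 * 8  # a 100 KB burst
link_bps = 56_000           # 56 kbit/s modem
print(payload_bits / link_bps)  # ~14.3 seconds just to put it on the wire
```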
You seem to enjoy these "clarifications". Could you please post a link to your channel? Might be a better way to go.
@@ChrisGreer I find it amusing that someone who clearly doesn't understand network protocols in any sort of comprehensive way has a "channel". The modern one-eyed man in the land of the blind. Good for you.
It’s ok if you are afraid of creating your own channel and posting content.
I really did feel like that for a long time. I felt like I had nothing to share. I felt like everyone would hate my content, think I was stupid, would over criticize the smallest detail. It’s normal.
I’m so blessed to have experienced the kindness and good will of the overwhelming majority. You will too. Give it a try. I’ll be happy to reference your stuff!
@@ChrisGreer Right. The guy who reads stuff off a wireshark display without knowing what he's talking about is brave and I'm afraid.
Ever seen ESP packet spam from an iPhone shut down a network?
Talking multiple GB/min.
(Non malicious and not isolated to a single device.)
I have capture files if you want to see them. Lol
I haven't seen that! Sounds interesting though....
@@ChrisGreer i am smelling some new content! :D