How to Calculate TCP throughput for long distance WAN links

Filed in Uncategorized on December 19, 2008 | 116 Comments

So you just lit up your new high-speed link between Data Centers, but you are unpleasantly surprised to see relatively slow file transfers across this high speed, long distance link. Bummer! Before you call Cisco TAC and start troubleshooting your network, do a quick calculation of what you should realistically expect in terms of TCP throughput from one host to another over this long distance link.

When using TCP to transfer data the two most important factors are the TCP window size and the round trip latency. If you know the TCP window size and the round trip latency you can calculate the maximum possible throughput of a data transfer between two hosts, regardless of how much bandwidth you have.

Formula to Calculate TCP throughput

TCP-Window-Size-in-bits / Latency-in-seconds = Bits-per-second-throughput

So let's work through a simple example. I have a 1Gig Ethernet link from Chicago to New York with a round trip latency of 30 milliseconds. If I try to transfer a large file from a server in Chicago to a server in New York using FTP, what is the best throughput I can expect?

First, let's convert the TCP window size from bytes to bits.  In this case we are using the standard 64KB TCP window size of a Windows machine.

64KB = 65536 Bytes.   65536 * 8 = 524288 bits

Next, let's take the TCP window in bits and divide it by the round trip latency of our link in seconds.  So if our latency is 30 milliseconds, we will use 0.030 in our calculation.

524288 bits / 0.030 seconds = 17476266 bits per second throughput = 17.4 Mbps maximum possible throughput
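This formula is easy to encode as a quick sanity check before blaming the network; a minimal sketch (the function name is mine, purely illustrative):

```python
def max_tcp_throughput_bps(window_bytes: int, rtt_seconds: float) -> float:
    """Maximum theoretical TCP throughput: window size (in bits) divided by RTT."""
    return (window_bytes * 8) / rtt_seconds

# The worked example above: 64KB window, 30 ms round trip latency
bps = max_tcp_throughput_bps(65536, 0.030)
print(f"{bps / 1_000_000:.1f} Mbps")  # prints 17.5 Mbps
```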

So, although I may have a 1GE link between these Data Centers I should not expect any more than 17Mbps when transferring a file between two servers, given the TCP window size and latency.

What can you do to make it faster?   Increase the TCP window size and/or reduce latency.

To increase the TCP window size you can make manual adjustments on each individual server to negotiate a larger window size.  This leads to the obvious question:  What size TCP window should you use?  We can use the reverse of the calculation above to determine optimal TCP window size.

Formula to calculate the optimal TCP window size:

Bandwidth-in-bits-per-second * Round-trip-latency-in-seconds = TCP-window-size-in-bits

TCP-window-size-in-bits / 8 = TCP-window-size-in-bytes

So in our example of a 1GE link between Chicago and New York with 30 milliseconds round trip latency we would work the numbers like this…

1,000,000,000 bps * 0.030 seconds = 30,000,000 bits / 8 = 3,750,000 Bytes

Therefore if we configured our servers for a 3750KB TCP Window size our FTP connection would be able to fill the pipe and achieve 1Gbps throughput.
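The reverse calculation, the bandwidth-delay product, can be sketched the same way (again, the function name is just illustrative):

```python
def optimal_window_bytes(bandwidth_bps: float, rtt_seconds: float) -> float:
    """Bandwidth-delay product: the TCP window needed to keep the pipe full."""
    return (bandwidth_bps * rtt_seconds) / 8

# 1 Gbps link, 30 ms RTT -- the Chicago to New York example
print(f"{optimal_window_bytes(1_000_000_000, 0.030):,.0f} bytes")  # prints 3,750,000 bytes
```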

One downside to increasing the TCP window size on your servers is that it requires more memory for buffering on the server, because all outstanding unacknowledged data must be held in memory in case it needs to be retransmitted.  Another potential pitfall is (ironically) worse performance when there is packet loss, because any lost packet within a window requires that the entire window be retransmitted, unless the TCP/IP stack on the server employs a TCP enhancement called "selective acknowledgements", which most do not.

Another option is to place a WAN accelerator at each end that uses a larger TCP window and other TCP optimizations such as TCP selective acknowledgements just between the accelerators on each end of the link, and does not require any special tuning or extra memory on the servers.  The accelerators may also be able to employ Layer 7 application specific optimizations to reduce round trips required by the application.

Reduce latency?  How is that possible?  Unless you can figure out how to overcome the speed of light, there is nothing you can do to reduce the real latency between sites.  One option is, again, placing a WAN accelerator at each end that locally acknowledges the TCP segments to the local server, thereby fooling the servers into seeing very low, LAN-like latency for the TCP data transfers.  Because each local server sees very fast local acknowledgments, rather than waiting for the far-end server to acknowledge, there is no need to adjust the TCP window size on the servers.

In this example the perfect WAN accelerator would be the Cisco 7371 WAAS Appliance, as it is rated for 1GE of optimized throughput.

WAAS stands for:  Wide Area Application Services

The two WAAS appliances on each end would use TCP optimizations over the link, such as large TCP windows and selective acknowledgements.  Additionally, the WAAS appliances would remove redundant data from the TCP stream, resulting in potentially very high levels of compression.  Each appliance remembers previously seen data, and if the same chunk of data is seen again, that data is removed and replaced with a tiny 2 Byte label.  The remote WAAS appliance recognizes that tiny label and replaces it with the original data before sending the traffic to the local server.

The result of all this optimization would be higher, LAN-like throughput between the servers in Chicago and New York without any special TCP tuning on the servers.

Formula to calculate Maximum Latency for a desired throughput

You might want to achieve 10 Gbps FTP throughput between two servers using standard 64KB TCP window sizes.  What is the maximum latency you can have between these two servers to achieve 10 Gbps?

TCP-window-size-bits / Desired-throughput-in-bits-per-second = Maximum RTT Latency

524288 bits / 10,000,000,000 bits per second = 52.4 microseconds
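And the third formula, maximum tolerable latency for a target throughput, as a sketch:

```python
def max_rtt_seconds(window_bytes: int, desired_bps: float) -> float:
    """Largest round-trip time that still permits the desired throughput."""
    return (window_bytes * 8) / desired_bps

# 64KB window, 10 Gbps target throughput
rtt = max_rtt_seconds(65536, 10_000_000_000)
print(f"{rtt * 1_000_000:.1f} microseconds")  # prints 52.4 microseconds
```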



About the Author

Brad Hedlund (CCIE Emeritus #5530) is an Engineering Architect in the CTO office of VMware’s Networking and Security Business Unit (NSBU). Brad’s background in data center networking begins in the mid-1990s with a variety of experience in roles such as IT customer, value added reseller, and vendor, including Cisco and Dell. Brad also writes at the VMware corporate networking virtualization blog at blogs.vmware.com/networkvirtualization

Comments (116)


  1. Serge says:

    Excellent material, Brad, thank you very much. The only thing that confuses me is this:

    When using TCP to transfer data the two most important factors are the TCP window size and the round trip latency. If you know the TCP window size and the round trip latency you can calculate the maximum possible throughput of a data transfer between two hosts, regardless of how much bandwidth you have.

    Why don’t we use one-way latency?
    Or for the reverse calculation, when we are using the BW*RTT formula, what do we receive as a result? The amount of data on the wire (in flight), so why do we need a double-sized buffer?

  2. Brad Hedlund says:

    Serge,
    We use round trip latency because we need to account for the time it takes the TCP sender to receive an acknowledgment from the receiver. In our example here, we want the server in Chicago to continue sending data while the acknowledgment from the New York server is traveling back to Chicago.

    Make sense?

  3. Serge says:

    Well, yes, but …
    Let's assume that the one-way latency is 15ms – that means Chicago sent 1MB of data and after 15ms NY received it. NY immediately sends an ACK, which is received by Chicago after another 15ms. That means we really have a throughput of 1MB per 30ms, but in fact half of the time the sender waits for the ACK to arrive. Is that really the way TCP works?

  4. Brad Hedlund says:

    That is correct. Once the TCP window has been transmitted the TCP sender will stop transmitting data until an acknowledgement is received. If we use one-way latency in our calculations the WAN link would be idle 50% of the time while the sender is waiting for acknowledgements from the receiver.

  5. Tingli Pan says:

    I vaguely remember that in the TCP protocol, the sender won't wait for the first acknowledgement before sending the second segment. It will send several segments continuously before getting the first acknowledgement, to speed up the transfer rate.

  6. Brad Hedlund says:

    A fundamental principle of TCP is the “congestion avoidance window” which represents the maximum amount of unacknowledged data. This window is precisely what we are using for the calculations in this article.

    Some variations and enhancements to TCP optimize the behavior of the congestion avoidance window, such as dynamically adjusting its size in varying conditions. However the fundamental concept of managing a maximum amount of unacknowledged data never changes.

    Most standard Windows machines use a basic TCP/IP stack without all the additional enhancements.

    You can read more about fundamental TCP behavior and its variations here:
    http://en.wikipedia.org/wiki/TCP_congestion_avoidance_algorithm

  7. Emmanuel Courreges says:

    Tingli Pan: you are referring to the acknowledgement of TCP segments, whose size is usually around your MTU, so about 1400 bytes. ACKs may be delayed and sent on every other segment, but acknowledgement still happens from the beginning of a 64kB burst of segments, which is the window size.
    Hope that doesn’t add to confusion…

  8. Ned says:

    Hi, just want to confirm some behavior. If there are physical machines connected to switches at 1000Mb on either side of a WAN link which is only 100Mb, would the servers throttle down to 100Mbps, or would they continue sending at 1000Mb with packets being dropped because the WAAN link is only 100Mb? Does the TCP window size affect the rate at which the machines send and receive packets? Please confirm.

    • Brad Hedlund says:

      Ned,
      Two possible things could happen here, assuming a standard TCP/IP implementation on each server…

      1) the latency of your 100Mb WAN link is high enough and/or the TCP window is small enough that TCP windowing is never able to reach 100Mb of throughput.

      2) if the latency is very low and the window sizes are large enough, TCP will ramp up to 100Mb, at which point congestion will occur and packets will begin to drop. When packet loss is detected TCP will cut throughput in half and slowly ramp back up to 100Mb again, and the cycle will repeat. This is called the sawtooth effect.

  9. Ned says:

    Hi, thanks for replying. Can you please explain why the devices will only send at 100Mb even though their connection is set to 1000Mb? The window size, from what I read, is the number of bytes that can be sent before getting an acknowledgement; it is negotiated at session startup and keeps changing during the session. Correct?

    Another confusion: is there a calculation to check how many packets need to be transferred to get a throughput of 100Mb and 1000Mb, given that each packet carries 1460 (1500-20-20) Bytes of data = 11680 bits? 1Gb = 1000000000b. Hence if each packet is 11680 bits, then to send 1000000000 bits it would take 1000000000/11680 = 85616 packets of 1460 bytes each. Given that the packet size can only be 1460 and the window is usually 65KB=512000b, it would take approx 43 segments to fill the window. So it's not really possible to send packets at 1Gb. Is this a correct calculation? Because it looks like something is wrong. Thank you.

    • Brad Hedlund says:

      Ned,

      Can you please explain why the devices will only send at 100Mb even though their connection is set to 1000Mb?

      TCP pays no attention to the physical LAN connection speed of the host (1000Mb in your scenario). The only three factors that matter in TCP throughput are window size, latency, and packet loss. Of course the underlying link speed sets the maximum possible throughput under ideal conditions.

      Given that the packet size can only be 1460 and the window is usually 65KB=512000b, it would take approx 43 segments to fill the window. So it's not really possible to send packets at 1Gb.

      Yes, it is possible. In your scenario, if the latency is low enough the sender may receive the first ACK before all 43 segments have been sent, at which point the window is replenished by the number of bytes acknowledged in the ACK. This is called a sliding window, and it allows the sender to continuously send data at the rate of the link speed until there is packet loss.

  10. lisa says:

    Brad,

    Can you clarify one of the equations for me?

    In the section on optimizing window size, you have

    “1,000,000,000 bps * 0.030 seconds = 30,000,000 bits / 8 = 3,750,000 Bytes Therefore if we configured our servers for a 3750KB TCP Window size …”

    So you are taking 3,750,000 Bytes and dividing by 1000 to get 3750KB.

    My confusion relates to what side of the fence ‘window size’ sits on – is it a data storage number (since its size affects memory allocation and storage amounts), or is it a network number (since its size affects the throughput of the link)?

    I assumed it landed on the data storage side of the fence, so I was wondering why you didn’t divide by 1024 instead of 1000, where the window size would turn out to be 3662.11 KB. Since I’m not that familiar with networking, I want to make sure I know when to multiply/divide by 1000 vs 1024.

    BTW Thanks for the excellent, clear explanation. It’s the type of thing that’s really helpful for newbies like myself.

  11. Brad Hedlund says:

    Lisa,
    You are right. To be 100% accurate I should have divided by 1024, not 1000.
    Notice that I started the post by calculating how many bits were in a 64KB window size by multiplying 64 * 1024.
    When talking about bytes, as we are in TCP window sizes, we should be using the 1024 number to represent a KB.
    With serial communications such as in networking, when we are talking about bits, it is normal to use 1000 as the number that represents a Kb (kilobit).
    A TCP window size is more of a storage number than it is a bit rate number.

    Nice catch. You get extra credit points :-)

    Cheers,
    Brad

    • Jack Leverette says:

      This is precisely why I have the value 0.953674316 taped to the side of my monitor.
      Q: Do you see why?
      .
      A: This is the ratio 10^6 / 2^20 = (1000 * 1000) / (1024 * 1024) = Mbit / Megabyte. My term for it is the ‘network to storage correction coefficient.’

      I also just realized that if you remember Little’s Law from queueing theory, Queue = Rate * Wait, it’s congruent to all these calculations, where the ‘queue’ value is the TCP buffer size. Elegant.
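      Quick arithmetic confirms Jack's coefficient:

```python
# Network rates use decimal prefixes (10^6 bits per Mbit);
# memory-style sizes use binary prefixes (2^20 bytes per MB).
coefficient = 10**6 / 2**20
print(round(coefficient, 9))  # prints 0.953674316
```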

  12. jeff says:

    Brad,

    Our branch and the main office are also in Chicago and NY. I‘ve done a lot of low-level WAN research using Wireshark. What I found is that, without any optimization, the acknowledgement sometimes comes back after 3 frames of 1460 bytes, sometimes after 6, 12, 23, and 45. The window size is always the same – 64KB. How can this be explained?

    I also know about the so-called windowing mechanism that requests an acknowledgment after 6 frames. How does this fit into the picture?

    Now, in one of our tests we used compression within the application. We achieved a 30x improvement in the transfer speed.
    The throughput was 4 MB/sec, which exceeds your calculated limit. How can this be explained?

    • Brad Hedlund says:

      Hi Jeff,

      the acknowledgement sometimes comes back after 3 frames of 1460 bytes, sometimes after 6, 12, 23, and 45. The window size is always the same – 64KB. How can this be explained?

      TCP is a very conservative protocol. This is likely the TCP Slow Start mechanism. New TCP flows will start slow and gradually ramp up until packet loss is detected, at which point the Slow Start cycle repeats.

      Now, in one of our tests we used compression within the application. We achieved a 30x improvement in the transfer speed.
      The throughput was 4 MB/sec, which exceeds your calculated limit. How can this be explained?

      The 4MB/sec was likely the throughput as observed by the application because of the compression. However, the actual load on the network should still fit within the TCP throughput calculations.

  13. Bruce H. says:

    TCP-Window-Size-in-bits / Latency-in-seconds = Bits-per-second-throughput

    Brad,

    The calculation sounds plausible. However, we have a large ISP network with thousands of routers, switches, layer 1 transport devices, customer firewalls, etc. along one path. In our scenario, are we supposed to modify the TCP window size on all devices? What an effort! Should we standardize the window size? Hmmm…

    The only thing we can truly control is latency, in which case you must look at each link to reduce latency as much as possible. Also, along the path, the packet may hit 100Mbps, 1000Mbps, 10Gbps links, or higher. The link may also be saturated or oversubscribed, increasing latency. It sounds like reducing latency by standardizing (for example) on gigabit links, low link utilization, fast CPUs at each hop, and a clean, error-free network is the best way to reduce latency, and hence increase throughput.

    What is the best practical method to get the highest throughput with a very large ISP type network?

    • Brad Hedlund says:

      Bruce,
      Adjusting TCP window settings to improve throughput is only effective on the client and server machines, not the intermediate devices carrying the traffic. Adjusting any such TCP settings on your gear carrying the customer traffic will be a futile effort.

  14. Mario says:

    Hi Brad,

    First off, thank you for the well-written article and the replies posted above. It’s been a very interesting read.

    Regarding your last comment on the TCP slow-start mechanism, is there a way to optimize this to cause TCP to send flows as quickly as possible from the start?
    I realise this is a mechanism of TCP, but am interested to know if there is a calculation available or setting to optimise this.

    Thanks,
    Mario

    • Brad Hedlund says:

      Mario,
      WAN optimization appliances such as Cisco WAAS employ several optimizations for TCP traffic over WAN links and one of those is bypassing TCP Slow Start mechanism and employing large initial windows. The nice thing about using WAN optimization appliances like WAAS is that no TCP modifications are required on the client or server machines.

      Without WAN optimization appliances you can try loading a HS-TCP (high speed TCP) compliant TCP/IP stack on your client and server machines.

      http://www.faqs.org/rfcs/rfc3649.html

      Cheers,
      Brad

  15. Vaclav Molik says:

    Hi Brad, your article is really helpful, thanks for it. Let me ask if you also have experience with IPsec? Say you have an MPLS VPN with access circuits mostly E1 or T1, but you encrypt all traffic between HQ and a specific branch using an IPsec tunnel, e.g. NY-Hong Kong or NY-Moscow. What would you advise for the best utilization of available capacity in this case? Thanks a lot, Vaclav.

    • Brad Hedlund says:

      Vaclav,

      I have several customers with IPSec WAN environments that are using Cisco WAAS with great success. Some are using the Cisco ISR router with the WAAS network module, others are using a WAAS appliance deployed inline between the existing router and LAN switch.

      Cheers,
      Brad

  16. Alab says:

    Hello,
    if one is using a low bandwidth WAN link on which the bandwidth-delay product (call it B1) is significantly less than the TCP receive window, what is the effective throughput formula?

    Is it
    (1) TCP-Window-Size-in-bits / Latency-in-seconds = Bits-per-second-throughput

    or just
    (2) B1

    • Brad Hedlund says:

      In your case, if the bandwidth delay product is significantly less than the TCP window size then throughput is constrained by the speed of the link, not by TCP.

      Cheers,
      Brad

  17. chris stand says:

    go grab “TCPOPTIMIZER” if you are running Win2k or XP.
    Then you can turn on SACKs. Vista, Win2k3 and Win2k8 all provide this feature.

  18. invisible says:

    Hmm, if I remember correctly there are TCP/IP implementations allowing sliding TCP window sizes. In other words, if either side started with 64K TCP window it does not mean that window size will remain the same all the time – it will increase and decrease depending on underlying conditions.

    • Brad Hedlund says:

      invisible,
      You are correct. In fact, this is how the TCP slow start phase works: start with a small window and gradually increase throughput by increasing the window size. When packet loss is detected, throughput is cut in half and then ramps up again by adjusting window sizes.
      There is usually a set limit of how large the window will go, which would be the “max window size”. You can’t keep increasing the window size without a limit as larger window sizes require larger memory resources on the host.

      Cheers,
      Brad
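      As an aside (not from Brad's reply): on most stacks, an application can influence its maximum window from the socket side by requesting larger buffers before connecting. A minimal sketch, noting that the kernel may clamp or adjust the value requested (Linux, for example, doubles it and caps it at system-wide limits), and that window scaling is negotiated during the handshake:

```python
import socket

REQUESTED_BUF = 3_750_000  # bytes; illustrative value from the article's BDP example

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Request larger buffers *before* connecting, so the window scale
# option can be negotiated accordingly during the TCP handshake.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, REQUESTED_BUF)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, REQUESTED_BUF)

# The kernel reports what it actually granted, which may differ
# from the value requested above.
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("receive buffer granted:", granted)
```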

  19. cp says:

    Brad,

    Thanks for the write-up. My question is with the BDP formula using actual metrics for RTT, window size, and throughput.

    A lot of people give example on how the BDP formula work to give the estimated throughput including the ideal window size. Like in your example.

    The problem is that the examples they provide are either vague or do not provide real world results for determining some of these things.

    Here is my example and what I have done:

    CLIENT———-SERVER (http)

    Downloading a 6.33MB file from the SERVER.

    ======================
    WINDOW SIZE from BDP:
    ======================

    First, what is the RTT. When I do a “ping” to the server I get 32ms for the RTT.

    Second, during the file transfer (using Vista or XP) it tells me that my data rate is around 270KB/s. Converted to bits, that’s around 2.1Mbps. Which is correct, because many bandwidth tests give me the same thing. Perfect.

    Therefore based on that info my RWIN size should be this:

    Bw (270KB) x RTT (.032) = 8640 bytes.

    Well, when I ran Wireshark during the download session my RWIN on my client, receiving the download was 66,780 bytes. Well that’s not 8640 bytes. Not that it matters, but the SERVER window size was 6816KB. Not close to 8640 bytes.

    Let’s try this in a different way

    ======================
    THROUGHPUT from BDP:
    ======================

    What’s my throughput then,

    RWIN (66,780 bytes) / RTT (.032) = 2,086,875 bytes (or 16.7Mbps)

    Umm, my DSL is only up to 2Mbps, so that is NOT my throughput nor correct.

    ======================
    LATENCY from BDP:
    ======================

    And when I calculate the latency from the BDP formula:

    Bw (270KB) / RWIN (66,780 bytes) = 4 seconds (4043 ms) for the RTT

    Not true either.

    I understand that the BDP is used to help engineers know what window size should be used to fill up or use a circuit (LAN or WAN) efficiently. Well, using the actual data that I showed above, I’m not getting results that prove the BDP works in the real world. Thus, there is something I am not understanding at all about relating it to actual/estimated throughput results that can be calculated.

    Or maybe I’m obtaining the wrong type of data for the window size (through Wireshark, where I see the RWIN in the SYN packet and in the data sessions, decreasing as data is being received), the RTT (using ping), or the throughput (by looking at the file transfer window on Vista or XP).

    Can you and others please help me fill in the gap on what is going on here?

    FYI – it would also be super nice if there was a simple GUI app (that can be installed on a client and server, of course) that would show the RTT, throughput, and window size for the connection between them. Heck, maybe also packet loss, etc. The numbers would be variable, but it would be a useful tool for troubleshooting and understanding performance conditions. I don’t think there is anything simplified like that out there.

    Thank you very much!

    cp

    • Brad Hedlund says:

      CP,

      Your math is correct, and proves my statement from a few comments above when I said: “…if the bandwidth delay product is significantly less than the TCP window size then throughput is constrained by the speed of the link, not by TCP.”

      The speed of your link is 2.1mbps

      You could have the largest window size in the world and the lowest possible latency — and running the BDP formulas as you have done will give you a very high max theoretical throughput — but if your link speed is only 2.1 mbps, throughput cannot go higher than 2.1 mbps, obviously.
      So, what your math has told you is that your TCP window size needs no adjustment to improve your throughput. The only adjustment that will make your file transfer go faster than 2.1 mbps would be to get a faster link, OR, compress the TCP data at each end to provide the illusion of a faster link to the servers and clients — Cisco WAAS.

      Cheers,
      Brad

  20. Duke says:

    Dear Brad,
    Thanks for your article. It really helps me a lot in improving network performance.
    But there is still something I can’t figure out in an experiment. I hope you can help me out. Thanks in advance.
    Experiment environment:
    A——B (A downloads from B.) The bandwidth from A to B is 50Mbps and the latency is 22ms. A and B are both Linux systems with kernel 2.6.18.
    BDP = 50000Kbps * 22ms / 8 = 137.5 KBytes
    I tuned the maximum value of tcp_rmem on the client and tcp_wmem on the server to the BDP, 137500, and got about a 1.2MBps download speed. From the formula “throughput = window_size / RTT”, I should theoretically have gotten about 6.25MBps (137.5K / 22ms = 6.25MBps). I don’t know why I only get 1.2MBps. But when I set the max values to 1375000 and 26214000 on the client and server, the speed comes to 2.3MBps and 4.42MBps.
    So I guess there is something I don’t take into consideration when calculating throughput.

    Thanks and regards,
    Duke

    • Brad Hedlund says:

      Duke,
      There are other factors; for example, packet loss has a detrimental effect on TCP throughput. The formulas discussed in this article calculate the max *theoretical* throughput and assume a best case scenario of zero packet loss. Most well engineered networks have very low packet loss, in the .01% range; however, some links may have higher packet loss for a variety of reasons (too much congestion, lack of QoS, rate-limiting, etc.)
      Your WAN bandwidth of 50Mbps is interesting; that’s an odd number that doesn’t match a typical physical circuit speed. Why 50Mbps? Is it perhaps a physical link much faster than 50Mbps (such as 100Mbps or 1GE) with something rate-limiting the throughput to 50Mbps?

      Cheers,
      Brad
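      A commonly cited rule of thumb for loss-limited TCP throughput, not covered in the article, is the Mathis et al. approximation, rate ≈ MSS / (RTT * sqrt(p)). A quick sketch using Duke's RTT and an assumed 0.01% loss rate (both the function name and the loss figure are illustrative):

```python
import math

def mathis_throughput_bps(mss_bytes: int, rtt_seconds: float, loss_rate: float) -> float:
    """Approximate ceiling on loss-limited TCP throughput:
    rate <= (MSS / RTT) * (1 / sqrt(p)), with MSS converted to bits."""
    return (mss_bytes * 8 / rtt_seconds) / math.sqrt(loss_rate)

# 1460-byte MSS, 22 ms RTT, assumed 0.01% packet loss
print(f"{mathis_throughput_bps(1460, 0.022, 0.0001) / 1e6:.1f} Mbps")  # prints 53.1 Mbps
```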

  21. Duke says:

    Thanks Brad.
    Actually, 50Mbps is the SDH physical link bandwidth.
    There are no QoS or rate-limiting policies applied on either side of the routers.

  22. Duke says:

    Hi Brad,
    If we set the window size to the BDP, we could get the maximum theoretical throughput. The maximum theoretical throughput should be bandwidth/8 (in bytes per second). Am I correct?
    If we put the same client and server on another link with the same bandwidth but higher latency, we should get the same maximum theoretical throughput (bandwidth/8) because the bandwidth is the same, right?
    Is the reason I get a much smaller throughput at higher latency that higher latency comes with more packet loss, or something else?
    Thanks.
    Duke

  23. Deepak Vyas says:

    Can we change the TCP window size on a router to get better throughput?

    • Brad Hedlund says:

      Deepak,
      The TCP session state is between the client and server machines; the routers just pass the TCP packets between client and server untouched. Therefore, changing any TCP-related settings on your routers will have no effect.

  24. CT says:

    I found this blog while researching how long a 1.7G file transfer should take from HQ (45M) to a remote site (2xT1) over MPLS GRE. My file transfer test indicates that I am getting about 2 Mbps. How much should I accommodate for the overhead of GRE and T1 bundling? I already have the tcp-mss size on the GRE tunnel interface set to 1436 per one of the Cisco articles I read…1500-40 (TCP/IP header)-24 (GRE header). When I remove this command from the tunnel interface the performance is even worse. So, are you saying that changing the MSS helps, but changing the TCP window size on a router will not? Any suggestions on how I can optimize my setup further so that I use the full 3M? Or what data do I give to our ISP to prove that the traffic is available to be sent but the pipe is not carrying it? Thanks in advance.

  25. Pete says:

    Hi

    I’m wondering about the measurement of the RTT on a link with ping. What should the correct packet size be in the ping when measuring the RTT ?

    regs. pete

    • Brad Hedlund says:

      Pete,
      A very good question. To get the most precise, worst case measurement for your calculations, you would want to know how much latency the TCP stream will see when the pipe is full. The most accurate way to do this would be to saturate the link with a UDP stream of MTU sized packets and measure how long it takes for the packets to get from point A to point B. Then do the same measurement from point B to point A, and add the two measurements together for the worst case round-trip latency (full pipe, large packets).

      Cheers,
      Brad

  26. prabhu says:

    Hi Brad,
    I am new to Wireshark and Linux TCP tuning, and I used the BIC TCP algorithm.
    I ran an experiment: I downloaded a 100MB file and captured the packets using Wireshark.
    From this I need to calculate the:
    1. Link delay
    2. Bandwidth
    3. Slow start phase
    4. Window size
    Then I need to tune the TCP parameters like rmem and wmem, download the file again, and measure these things again.
    I am pretty confused about this concept. Can you please explain it?

    What will happen when we tune the TCP parameters, and what will happen to the window size?

  27. Michael says:

    Hi Brad,

    Thank you for your post. We have two 1Gb Ethernet links to a site, going through two different circuits. To check the links we currently just run an FTP transfer of a very large file. Doing this, however, we found that we can only get a maximum of 30% of the fast link's throughput on the slower link. We use the same notebook to do this test. Any suggestion on how to do further troubleshooting? Thank you in advance for your inputs.

    Michael

  28. KK says:

    Hi Brad,

    Thank you for a well written article and the subsequent follow-up discussions. Very informative.

    I’ve got the following situation and am wondering if you could comment:
    I’ve got a 1.5Mb long distance link between HQ and a branch. The average RTT on the link is around 760msec. When I tried a 100Mb file transfer (while the link was idle), I got ~330Kbps throughput. Which means I’m probably using 20% of the maximum (theoretically) possible throughput, right?

    Now the question is – if I had 3 more such sessions in parallel (trying to transfer the 100Mb file), would I still get ~330Kbps throughput per session (as opposed to overall link throughput)? I believe it would be 330/4 = 82.5Kbps per session. I’d think the overall link throughput stays the same at 330Kbps irrespective of the number of sessions or the amount of data being transferred. Is this correct?

    In case I want to increase the throughput on the link (in order to reduce the data transfer time – say, by half), which option would help me do that?
    - Double the bandwidth of the existing link from 1.5Mbps to 3Mbps
    - Or take an additional 1.5Mbps link and load balance the traffic
    Though both these options look the same (as the cumulative bandwidth will be 3Mbps), will TCP behavior be different in these two situations?

    Please clarify.

    Thanks much in advance.

    Regards,
    KK
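
    Running KK's numbers through the article's formula (a sketch only; the 64KB window is an assumption, since KK does not state his window size):

```python
# KK's link: ~760 ms average RTT, 1.5 Mbps bandwidth
window_bits = 64 * 1024 * 8        # 524288 bits, assumed default 64KB window
rtt = 0.760                        # seconds
tcp_limit = window_bits / rtt      # window-limited ceiling, ~690 kbps
link_bps = 1.5e6                   # 1.5 Mbps link
max_per_session = min(tcp_limit, link_bps)
print(round(max_per_session))      # roughly 690 kbps per session
```

    On these assumptions the window, not the 1.5 Mbps link, is the per-session ceiling, and KK's measured ~330 Kbps sits below it, so a second parallel session could add throughput before the link itself saturates.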

  29. Brett says:

    Brad,

    How does the ACK delay timer figure into your equation if the latency is greater than 70 ms between the two communication partners?

    Brett

    • Brad Hedlund says:

      Brett,

      Good question. If the Nagle algorithm is in use (RFC 896) along with the Delayed ACK algorithm (RFC 2581), then our calculations here are basically useless, as the communication partners will only allow one packet outstanding on the wire. If Nagle is disabled with the TCP_NODELAY socket option, then the Delayed ACK timer does not need to be factored into our equation, because as soon as the receiver has 2 outstanding ACKs it will begin sending ACKs that acknowledge multiple packets. With Nagle disabled on the sender (thus sending multiple packets without waiting for an ACK), the receiver will have 2 outstanding ACKs almost immediately after receiving the first few packets. Therefore, in my opinion, as long as the Nagle algorithm is disabled for the session, Delayed ACK should have minimal impact on the throughput calculations discussed here. In fact, the Delayed ACK timer is a *good thing*, as it reduces traffic consumed just for sending ACKs.
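
      A minimal Python sketch of the TCP_NODELAY socket option mentioned above (illustration only; the connect call to a real endpoint is omitted):

```python
import socket

# Create a TCP socket and disable the Nagle algorithm, so small writes
# are sent immediately rather than coalesced while waiting for ACKs.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print(nodelay != 0)   # True: Nagle is disabled on this socket
s.close()
```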

      Cheers,
      Brad

  30. sajid mahmood says:

    Hi,
    I want to calculate link load using SNMP. Can you help me with a formula for
    calculating link load?

  31. James says:

    Hi,

    I need your help with the following scenario. A centralized antivirus server is scheduled for a version upgrade. There are about 1000 client connections to that antivirus server over various WAN links; the fastest is about 2Mbps, the slowest a 64Kbps VSAT link. The total upgrade is about 70 MB, which means the server will have to distribute 70 MB of updated files to all clients. Pinging the furthest clients gives intermittent replies between 100ms and 600ms, so the worst case is 600ms. My fear is that once I perform the upgrade on the server, it will cause a massive jam on the WAN links, as the files are too large to distribute even to one client.

    Questions:-

    1) What would be the ideal throughput to transmit 70 MB of files over the WAN links mentioned above? How is this calculated?

    Appreciate your feedback

    Best Regards
    James

    • Brad Hedlund says:

      James,
      Per your Question #1 and the scenario, I think it would be best to steer your thought process in a more productive direction. Here's why:

      I'm afraid trying to figure out the ideal bandwidth for transferring 70MB files is not going to get you any closer to solving your problem. Let's imagine you worked the numbers and figured out that a 100Mbps link to every site would be great: A) that won't be cheap, and B) you still have a latency problem, so much latency that a high-bandwidth link to your sites may still deliver poor results, which is the whole point of this post. Allocating more bandwidth is just a matter of money (oftentimes lots of it). On the other hand, no amount of money can reduce latency, as we are dealing with the laws of physics here, namely 'the speed of light'.

      With that in mind, it would be best to address the problem at hand with a solution designed to work with your current bandwidth and latency. First and foremost would be implementing a WAN optimization and content caching solution, such as Cisco WAAS. Your problem is handled mostly by the virus update files needing to be downloaded only once per site. This single download pre-positions the update files on the local Cisco WAAS appliance at that site. The many client machines then download their update files from their local Cisco WAAS appliance at LAN-like speeds. The solution is very transparent: the client machines believe they are downloading the files from the central antivirus server as they always have, but in reality they are getting the update files locally from the Cisco WAAS appliance, so reconfiguration of the client machines may not be necessary. The Cisco WAAS appliances can also provide impressive optimizations for all of the other traffic traversing your slow WAN links, improving application performance and responsiveness for any other TCP-based applications, not just the antivirus updates.

      Cheers,
      Brad

  32. shivlu jain says:

    Brad

    The article is awesome. Let's assume we have a latency of 100ms and I want to transfer 1 Gb of data over a 10Mbps pipe. How long will the transfer take?
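
    Working shivlu's numbers with the article's formula (a sketch; it assumes the default 64KB window and that "1 Gb" means one gigabit, since neither is stated):

```python
window_bits = 64 * 1024 * 8          # 524288 bits, standard 64KB window
rtt = 0.100                          # 100 ms round trip
tcp_limit = window_bits / rtt        # ~5.24 Mbps, window-limited
link_bps = 10e6                      # 10 Mbps pipe
throughput = min(tcp_limit, link_bps)  # the TCP window is the bottleneck here
data_bits = 1e9                      # 1 Gb of data (assumed gigabits)
seconds = data_bits / throughput     # ~191 seconds, a bit over 3 minutes
```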

  33. peter says:

    Then for 802.11n, what is the maximum TCP window size that can be used, for up to 300Mbps?

  34. michael says:

    Please, I need clarification on this:
    Assuming a sliding window size (SWS) of 10 packets is used on a 50kbps communication link, it is observed that when 100-byte packets are transmitted the throughput is close to the maximum (50kbps), but when 80-byte packets are used the throughput drops considerably.
    Why? And what should be done to transmit smaller packets efficiently?
    Thanks

  35. Munir K says:

    Dear Brad,

    Thanks for this wonderful information, which has in fact attracted a lot of attention.

    We are actually facing the same issue on our DC-DR replication link. The scenario is as follows:

    We have a 10 Mbps replication link between DC and DR which is used for host-to-host replication (Host A to Host B). The traffic flowing on the link is pure TCP. Of late we have seen the link choking at peak hours, hampering business activities, and the application team has been asking for an immediate upgrade from 10 Mbps to 45 Mbps in view of the data growth trend. I should mention that the RTT on the link is 40ms.

    Since, per your document, we may not be able to reach this throughput, and the application team does not want to change the TCP window (the default 64KB), kindly suggest what options we are left with for getting the desired throughput.

    If we are left with the option of only using WAAS or another WAN Optimization Controller, I would like to ask if 3745 routers support WAAS controller cards?

    Looking forward to your expert comments.

    Munir K.
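
    Running Munir's stated numbers through the article's formula (a sketch, assuming the default 64KB window he mentions) suggests the window is not the bottleneck at the current link speed, but it would become one after an upgrade:

```python
window_bits = 64 * 1024 * 8      # default 64KB window
rtt = 0.040                      # 40 ms RTT
tcp_limit = window_bits / rtt    # ~13.1 Mbps per TCP session

# On the current 10 Mbps link the window allows ~13.1 Mbps, so the link
# itself is the limit. On a 45 Mbps upgrade, a single session would still
# cap out near 13.1 Mbps unless the window is enlarged.
print(round(tcp_limit))
```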

  36. Munir K says:

    Hi Brad,

    It is host-to-host replication, and in the DC there is only one host, which replicates to a host in DR. The catch is that every minute around 3 files of 100MB each are created on the DC host, and they need immediate replication to the DR host.

    • Brad Hedlund says:

      Munir,
      In your case, I would recommend that you look at deploying a Cisco WAAS appliance at each end of the link. The Cisco 3745 router does not support WAAS modules, but I think the performance profile of an appliance would be better for you anyway … such as the Cisco WAVE-674. You can simply place the WAAS appliance inline between your 3745 and LAN switch.

      http://www.cisco.com/go/waas

  37. Andy says:

    Hi Brad,

    Thanks a lot for your excellent post here.
    I am facing a very critical issue and need your help (anyone is welcome to help).

    I am in China, and my company has a WAN link to the US (4 bundled T1s); we then have an IPsec tunnel to our customer's network through the Internet. We use MRDP to access the customer's VM and some applications. The application is based on Flash, and it is extremely slow as we work over MRDP.
    The utilization on our WAN link is not above 30%.
    Latency is 300ms between the client and server.

    Is changing the window size going to help us?
    Most of the desktops in my company run Vista, and I read we can tune TCP by setting
    netsh int tcp set global autotuninglevel=experimental so that the window size can be up to 16MB.
    After setting this, we don't see any performance difference.
    This issue is really hurting me. Kindly suggest a suitable solution.
    I also want to know how we can change the window size. Is it by creating a new registry value?

    These are the parameters on my Windows XP

    MSS: 1440
    MTU: 1480
    TCP Window: 17280 (multiple of MSS)
    RWIN Scaling: 2 bits (2^2=4)
    Unscaled RWIN : 4320
    Recommended RWINs: 63360, 126720, 253440, 506880, 1013760
    BDP limit (200ms): 691kbps (86KBytes/s)
    BDP limit (500ms): 276kbps (35KBytes/s)

    Please help

    Andy

  38. Munir K says:

    Hi Brad,

    Thanks a lot for this. I have already lined up with Cisco for POC. I will surely share the results with you on this forum.

    Regards,

    Munir K.

  39. Andy says:

    Hi Brad,

    This is a continuation of my previous post.

    Throughput is RWIN / latency in seconds.

    Say I have a 2Mbps link, and per the throughput calculation I get a throughput of 1Mbps. Can I get a transfer rate of 1Mbps for all file transfers: FTP traffic, HTTP traffic, MS-DS traffic, MRDP traffic, etc.?
    How is this decided?
    Please help.
    I am desperately looking for an answer…

  40. dheeraj says:

    Dear Brad,
    I am doing research on multipath routing.
    How can we prove that the throughput of multipath routing is better than that of single-path routing?

    thanks and regards

  41. TQ says:

    Brad Hedlund,
    I just wanted to thank you for your effort in publishing this article.

    Regards

  42. jorge luis obregon says:

    Hi Brad:
    Could you help me with a model that calculates throughput, including loss? I need a formula to estimate throughput with loss on the same topology. Please help me if you can.

  43. slimer says:

    Brad,

    Thanks for the very interesting explanation here! I have a question and hope you can help.

    We want to perform a data consolidation for some of our applications. We already know the number of servers, and we will be able to get the volume of traffic. However, we are not sure how much WAN bandwidth is needed at the data center where we will do the consolidation.

    Let's say I have the following parameters:
    - 120GB of aggregated traffic over the LAN (this is the server/client traffic)
    - WAN latency is 75ms (RTD 150ms)

    Objective: To know the WAN bandwidth to be used

    Thanks…Slimer

  44. vince says:

    How about UDP throughput? Do you use the same formula?

  45. Ashwin says:

    Hello Brad,

    1. As I understand it, Windows systems on the LAN use a TCP window size of 17K+ bytes. In your example, each server has a default setting of 17K+ bytes. Now, while replicating the data between the two servers at remote sites, will the TCP window size be changed by the routers to 65K? Or will it remain 17K+ through the entire path? Assume we are not using any bandwidth optimizers.

    2. Assume there are two servers at the source site replicating to two servers at the remote site via a common router pair. Each replication context will then constitute a separate stream, and hence we will get double the bandwidth over the WAN link [assuming the WAN link is a fat pipe]. Is my understanding correct?

    Ashwin

    • Brad Hedlund says:

      Ashwin,

      1) The routers are operating at Layer 3 of the OSI model and therefore pass IP packets paying no attention to the upper layer TCP information. A standard router never changes window sizes, etc. TCP windowing is managed by the end hosts participating in the TCP exchange.

      2) Correct. The TCP throughput calculations discussed in this article are for the purposes of calculating the throughput potential of an individual TCP session. If you have multiple TCP sessions, each session would add its own bandwidth load to the link. If the link does not have enough bandwidth to carry the potential bandwidth of all the TCP sessions, congestion will occur, packets will get dropped, TCP will detect that and dynamically scale back the window size in half, then slowly increase the window size until packet loss happens again, and the cycle repeats. The result of this behavior is all TCP flows evenly and fairly balancing throughput on the link.
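
      To sketch that sharing behavior numerically (illustrative numbers only: 64KB windows, 30 ms RTT, and a 45 Mbps link, none of which come from Ashwin's scenario):

```python
window_bits = 64 * 1024 * 8
rtt = 0.030
per_session = window_bits / rtt          # ~17.5 Mbps window-limited ceiling per session

def aggregate(n_sessions, link_bps):
    # Total offered load is capped by the link; beyond that point, TCP's
    # loss-driven window reduction shares the link among the sessions.
    return min(n_sessions * per_session, link_bps)

print(aggregate(1, 45e6))   # ~17.5 Mbps: one session, window-limited
print(aggregate(3, 45e6))   # 45 Mbps: link saturated, sessions share it
```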

  46. Mina says:

    Dear Brad,
    Thanks a lot for your great effort.
    My question relates to the quoted question below:

    =====================
    THROUGHPUT from BDP:
    ======================

    What’s my throughput then,

    RWIN (66,780 bytes) / RTT (.032) = 2,086,875 bytes (or 16.7Mbps)

    Umm, my DSL is only up to 2Mbps, so that is NOT my throughput nor correct

    So my questions are:
    1- Does the server really send at 16M while I drop the rest, since my physical speed is 2M (meaning I drop 14M)?

    2- What if my speed was 24M? Does that mean my max speed will be 16M?

    I hope I have put my questions across correctly.

    Thanks again for your effort,
    Mina

    • mina says:

      any update please??

    • Brad Hedlund says:

      Mina,
      The throughput calculations will tell you the maximum possible throughput. If your DSL line is much slower than what your calculation says, well, of course your actual throughput will be limited to your link speed.
      If your link speed is much faster than the result of your calculations, you will not transmit any faster than what your calculation states, because that is the maximum possible throughput. This is the entire point this article tries to make.
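
      Mina's two cases can be expressed as the simple minimum of the window-limited rate and the link rate (a sketch using the RWIN and RTT figures from the quoted question):

```python
def max_tcp_throughput(window_bytes, rtt_s, link_bps):
    # The actual ceiling is the lower of the window-limited rate and the link rate.
    return min(window_bytes * 8 / rtt_s, link_bps)

# Mina's RWIN of 66,780 bytes and 32 ms RTT:
print(max_tcp_throughput(66780, 0.032, 2e6))    # 2 Mbps DSL: the link is the limit
print(max_tcp_throughput(66780, 0.032, 24e6))   # 24 Mbps link: the window caps at ~16.7 Mbps
```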

      Cheers,
      Brad

  47. Brady says:

    My only comment would be: be careful when looking at TCP window sizes and assuming one size fits all. Stating that all applications will operate with the same throughput is not necessarily accurate.

    Technologies in the middle can and do have an impact on the overall TCP window size. For instance, if you were to look at a "legacy" implementation of site-to-site VPN across a 6500 using a VPN IPSEC module, you would most certainly find TCP window/MTU manipulation on the router that impacts Windows servers and clients. This will also, without a doubt, be application dependent. Don't assume that all applications operate under the same conditions with window sizes, etc. Your browser may operate at a certain throughput and MS Outlook at another.

    If you are using MPLS to connect your sites, don't always assume that all the carrier gear is set up the same way to handle a particular MTU size end to end. For instance, Carrier "X" may have legacy devices that do in fact impact the MTU size, and the TCP stack is therefore impacted indirectly.

    Granted, I am not stating that IP MTU and TCP window size are the same animal. But what you will find is that there is a very close relationship, and the MTU will in fact impact TCP applications.
    Most of the Cisco VPN documentation will refer to this as MSS size, but a closer look will reveal it's window sizing.

    I would just caution that you need to look at the underlying technologies that connect the locations and see if they could potentially impact TCP window size, throughput, etc.

    I would highly suggest you get a software package that tests throughput end to end on server/client before promising results, as over time you will see TCP window sizes and throughput begin to degrade on networks.

    That's my two cents.
    Regards,

  48. Mina says:

    Hello Brad,

    I'd like to thank you a lot for this nice forum. My questions are:

    1- When TCP detects packet loss, will it begin the slow start phase from the very beginning (1, 2, 4, 8, …)? Or will it begin from the last window size before the drop?

    2- Is there any way to overcome packet loss due to congestion?

    I know my questions may be silly, but this really confuses me.
    Thanks
    Mina

    • Mina says:

      dear Brad ,
      any Update please?

    • sean says:

      Mina,

      It will be dependent on the operating system.

      If the stack is based on Tahoe, you start right back at 1 and do slow start over again.

      If the stack is Reno, you will cut the window by 50% and continue in congestion avoidance from there (fast recovery rather than a full slow start). If you drop again, you go to 50% of the new window, and so on…

      There are a whole slew of other algorithms for TCP, above are most common though.

      sean

  49. Mina says:

    Dear Brad ,
    any update ?

  50. Jim says:

    Brad:

    I worked through the following calculation and was wondering if it makes sense. It sums the time required to place the bits on the wire, includes latency, and then calculates an effective throughput based on total time and data moved. The problem is that it basically negates the calculation for optimal TCP window size.

    I apologize for the poor format but a quick cut and paste into Excel will reformat in columns. I would appreciate if you could find a moment to review and respond.

    Thanks

    Jim

    Variable Amount Unit
    TCP Window 65,536 Bytes
    1 way latency 0.015000 Seconds
    Theoretical Maximum (WindowSize/rtt) 17,476,267 bits per second

    All Interface Speeds 100,000,000 bits per second
    Time to put 1 x Window (8 x 65,536 bits) on wire 0.005243 seconds
    Plus 1 Way Latency for last bit in Window to reach client 0.015000 seconds
    Time to Put ACK (100 Bytes) on Wire 0.000008 seconds
    Plus 1 Way Latency for last ACK bit to reach server 0.015000 seconds
    Total Time for 1 Window plus 1 ACK 0.035251 seconds
    Better Max speed calculation 14,873,047 bits per second

    Best Window Size (using link speed x rtt) 3,000,000 Bits
    Best Window Size (Bytes) 375,000 Bytes
    1 way latency 0.015000 Seconds
    Theoretical Maximum (WindowSize/rtt) 100,000,000 bits per second

    All Interface Speeds 100,000,000 bits per second
    Time to put 1 x Window (8 x 375,000 bits) on wire 0.030000 seconds
    Plus 1 Way Latency for last bit in Window to reach client 0.015000 seconds
    Time to Put ACK (100 Bytes) on Wire 0.000008 seconds
    Plus 1 Way Latency for last ACK bit to reach server 0.015000 seconds
    Total Time for 1 Window plus 1 ACK 0.060008 seconds
    Better Max speed calculation 49,993,334 bits per second
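
    Jim's spreadsheet can be reproduced directly (a sketch following his stated assumptions: 100 Mbps interfaces, 15 ms one-way latency, a 100-byte ACK, and the stop-and-wait model of one full window per ACK):

```python
def effective_throughput(window_bytes, one_way_s, link_bps, ack_bytes=100):
    # Total time for one window plus one ACK: serialize the window,
    # propagate it one way, serialize the ACK, propagate it back.
    window_bits = window_bytes * 8
    total = (window_bits / link_bps) + one_way_s + (ack_bytes * 8 / link_bps) + one_way_s
    return window_bits / total

print(round(effective_throughput(65536, 0.015, 100e6)))    # ~14,873,047 bits/s
print(round(effective_throughput(375000, 0.015, 100e6)))   # ~49,993,334 bits/s
```

    The serialization and ACK terms are why Jim's "better max speed" lands below the plain window/RTT figure.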

  51. Abid says:

    Hi Brad,
    Thanks for the excellent material you have posted. We have a 45Mbps WAN link between point A and point B. We use ODG (Oracle Data Guard) to transfer archive files between points A and B. We have put WAAS devices at both points but are still getting only 26Mbps of utilization. My network vendor/specialist asked me to increase the number of sessions ODG makes; we increased it from 4 to 9. However, the problem persists. What I understand is:
    1: The throughput is dependent on latency and the TCP window (the WAAS vendor says he has tuned the device for the max TCP window).
    2: With the WAAS devices in place, even one session should have shown utilization of 45Mbps.

    Please let us know if we are missing anything.

    Thanks
    Abid

  52. soulhacker says:

    I just wonder how the latency comes out if the TCP MSS is taken into account, because the latency rises as the TCP MSS grows.

  53. Rolf Wiklund says:

    Hi Brad.
    It looks like the window size is more important than the MTU?
    Do you have any calculation of the MTU size impact?

    I'm thinking mostly of how to solve throughput issues in a DCI (40km).

    Thanks
    Rolf

  54. knuckles says:

    Hi… I'm expecting about 1.5 Mbps on a link that I have. UDP works fine; however, TCP has yielded results close to 0.022 Mbps (essentially nothing!). Would the above tweaks be done on both ends of the network (i.e., on both PCs)? And also, should a TcpWindowSize value be added to the interface registry key where the network interface details exist? (HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters and …\Tcpip\Parameters\Interfaces)

    Thanks!

  55. It is worth mentioning that besides WAAS, there are a number of other commercial software vendors that provide accelerated file transfer software. At FileCatalyst, we use a UDP-based protocol to send data at the maximum available link speed. Unlike other UDP-based file transfer protocols, we use an efficient algorithm to keep track of lost packets, and we re-transmit only the missing data.

    With far fewer acknowledgments than any TCP-based protocol, the file transfer speed is not affected by latency, and the speed loss is linear with the packet loss (which is impossible with large window sizes).

    We also use our own built-in congestion control that is immune to latency and takes into account the average latency of the link before slowing down.

    We have an on-line calculator on our web site that provides a comparative of TCP over our UDP protocol. http://www.filecatalyst.com/web_demos/comparison_tool.html

    I recognize that this is plug for a commercial product however this article explains exactly the same problem that we have been trying to fix for the last 5 years.

  56. uday says:

    How do I find the throughput, end-to-end delay, and delivery ratio for protocols using an MCBR application, as it is a single-host application?
    What properties have to be set for that in the nodes, subnet, MCBR, and file statistics
    of a scenario in the QualNet 5.0 environment?

  57. Georgie says:

    ” unless your TCP/IP stack on the server employs a TCP enhancement called “selective acknowledgements”, which most do not”

    In my experience any recent Linux 2.4/2.6 kernel and any modern Window$ system have SACK enabled by default in the kernel.

  58. nima0102 says:

    Thanks a lot for your article.
    Is there any formula for non-TCP traffic, such as GRE or UDP?

    Thanks in advance

  59. The TCP header contains a 16-bit field to identify the window size. So far as I know, TCP does not support any standard extension that allows for an extended TCP window size, though there may be some option field I'm unaware of. Can you please point me to an RFC which covers this enhancement?

    On the other hand, TCP is terrible for file transfer, and a UDP protocol is generally better suited, as TCP does not lend itself well to punching holes in PAT "firewalls". Typically a better approach is a UDP protocol based on RTP/RTCP with SIP negotiation assisted by STUN, ICE or TURN. Then bandwidth can be throttled by standard routers inspecting the rate. Using RTCP video conferencing extensions, dropped packets can be recovered by performing an RTCP request for retransmission of the drops, referencing a "slice number". In addition, the jitter buffer architecture will provide the majority of the same services we expect from TCP.

    In short, payloading file transfer data as video data is probably more efficient than TCP; the overhead would be an 8-byte UDP header + a 12-byte RTP header, as opposed to the larger overhead involved with TCP + a word-aligned TCP option. In addition, tracking packets and performing retransmits would provide a stable and efficient way of continuing aborted sessions. Even better, since the checksum is optional in IPv4 UDP, it would be more efficient processor-wise to perform authentication on larger chunks and binary search for smaller blocks when failures occur. We could even use generic FEC to compensate for inevitable packet loss. No new protocol needed, just recycle an oldie but goodie.

    Just my two cents.. That link please?

  60. joe says:

    “unless your TCP/IP stack on the server employs a TCP enhancement called “selective acknowledgements”, which most do not”

    When was this article written? 2008? I can’t remember the last time I looked at a packet capture where SACK wasn’t permitted.

  61. RAUSHAN says:

    Actually, they have given the interval between packets as 0.005, which means
    that 200 packets (the inverse of 0.005) could be sent per second. They
    have calculated a data rate of 0.01Mb. If it is a UDP packet its size
    is 552 bytes, and if it is a TCP packet its default size is 1000
    bytes. Can you tell me how they calculated the bandwidth as 0.01Mb using
    the interval of 0.005 for UDP?

  62. RBNetEngr says:

    In reading through the comments, I wanted to add a few notes…

    1) When you use PING to measure round trip delay, keep in mind that your standard Windows OS will use a 32 byte packet for the payload. This will not give you an accurate measure of round trip latency, as your FTP file transfer will normally use full payload packets (1500 bytes). It’s best to ping with a full payload, to get an accurate measure. Also, use the ‘-f’ option, to not fragment the packet, in order to find out the minimum MTU along the path. If you don’t do this, you may find that there is a place along the path where every full size packet gets fragmented, adding to the processing delay at the receiving end.

    2) Windows 7 and Server 2008 R2, among others, claim to dynamically adjust the RWIN size dynamically, based on RTT of TCP ACKs. Whether it works properly in all cases is another story. It can be disabled by editing the registry.

    3) The most effective way to optimize for a LFN (Long Fat Network) is to use one of the WAN optimizer appliances available from Cisco (WAAS), Riverbed (Steelhead), etc. Yes, it does make the link cost higher, but you’ll be able to use all of the bandwidth that you’re already paying for. Cisco recently announced a new ISR-AX router, which contains a WAAS processor, in order to optimize WAN links. Of course, in order for it to work, you’ll need another WAAS at the other end.
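
    As a concrete illustration of point 1 above (the host name is hypothetical; 1472 data bytes plus 28 bytes of IP/ICMP headers make a full 1500-byte packet):

```shell
# Windows: -f sets Don't Fragment, -l sets the payload size
ping -f -l 1472 remote-host

# Linux equivalent: -M do sets Don't Fragment, -s sets the payload size
ping -M do -s 1472 remote-host
```

    If these pings fail while smaller payloads succeed, a hop along the path has an MTU below 1500.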

    -rb

  63. dennis says:

    Hi Brad,

    Thanks for the great post. Also, I was wondering: should we put the MTU size into the calculation? Does the MTU size affect TCP throughput?

    Thanks

    • Brad Hedlund says:

      Dennis,
      What matters most is how much unacknowledged data can be in flight. The size of each packet (MSS) and the percentage of packet loss are additional things you could add to the equation to tinker around the margins.
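
      One common way to tinker with those margins is the Mathis et al. approximation, which bounds loss-limited TCP throughput at roughly MSS / (RTT × √loss). A sketch with illustrative numbers (not from the article):

```python
import math

def mathis_throughput(mss_bytes, rtt_s, loss_rate):
    # Loss-limited TCP throughput estimate in bits/s: MSS / (RTT * sqrt(p))
    return (mss_bytes * 8) / (rtt_s * math.sqrt(loss_rate))

# A 1460-byte MSS, 30 ms RTT, and 0.01% packet loss:
print(mathis_throughput(1460, 0.030, 0.0001) / 1e6)  # ~38.9 Mbps ceiling
```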

  64. zeba says:

    Hey,
    how can we calculate the latency for TCP from a workstation on LAN1 to one on LAN2, with various switches in between, given the various processing delays of the workstations as well as the switches? I would like a generic formula for the above.

  65. Umesh Shetty says:

    Hi Jeremy,

    How would I calculate the same if I have a 10 Mbps link, where it would take almost 50 msecs to transfer 524288 bits over the WAN link?

    Thanks in advance
    Umesh

  66. Usman says:

    Hi Brad,
    I have one question.
    Is throughput affected by delayed acknowledgements? I.e., if I delay the acknowledgement from every packet to every other packet, will throughput increase or decrease?

    /Usman

  67. omar says:

    Hi Brad,

    I have a 45 Mbps link on which, whenever the load crosses 35 Mbps, I can see drops on the link. When we took this up with the ISP, they said: "whenever traffic goes above 85% of the total bandwidth, the subscriber may experience packet drops/latency in the network, because traffic is not a constant parameter; it is always varying", although my peaks don't touch 45Mbps at any point in time.
    I want to check my 45 Mbps link's performance and maximum sustained throughput.

    please guide.

  68. omar says:

    Hi Brad,
    Thanks for your quick response. I have done that, but when the traffic crosses 35 Mbps, drops start appearing on the link. Does this mean my usable bandwidth is only 35Mbps? Type of connectivity: P2P Ethernet last mile.

  69. Anil says:

    Hi,

    Do you have any tool readily available to input the data and see the desired results? :-)

  70. Johnny says:

    Hello Brad,

    I live on an island which is geographically far away from Europe. The RTT to a server in Paris is around 240 ms. I would assume that the best throughput I could get would be around 2.1 mbps, according to the 64KB/RTT calculation.

    I have a 4mbps ADSL link at home, and when I FTP to that server in Paris, I use the full 4mbps. There are no multiple parallel connections, just a plain FTP session that I run from the command prompt. How can that be?

    Thanks

    • Simon Leinen says:

      Fortunately for you, the standard TCP window size has grown quite a bit from the 64KB that were indeed common when the article was written. All major operating systems including Windows now support and use “window scaling”, which allows windows (much) larger than that. Most of them (again including Windows) also do a good job at “auto-tuning” the size of TCP parameters including window size according to available bandwidth and delays, so you don’t have to worry about these values anymore.
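
      Johnny's observation also checks out numerically: at 240 ms, an unscaled 64KB window would indeed cap near 2.2 Mbps, and filling his 4 Mbps link needs a window larger than 64KB, which is exactly what window scaling provides (a sketch):

```python
window_bits = 64 * 1024 * 8
rtt = 0.240
classic_limit = window_bits / rtt      # ~2.18 Mbps with an unscaled 64KB window
link_bps = 4e6
bdp_bytes = link_bps * rtt / 8         # 120,000 bytes needed to keep the link full
print(round(classic_limit), round(bdp_bytes))
```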

  71. jugal says:

    Can anybody solve this for me?

    Consider a TCP application sending data, only 5 bytes per segment and a TCP stack that does not implement a Silly Window Avoidance scheme. Assume that IPv4 and Ethernet are used for the lower layers. What is the maximum percentage of the network bandwidth that can be used for the application data?

  72. Ali Raza says:

    Dear Author,
    There is a mistake that the majority of people, technical and non-technical alike, usually make: using *per sec* with bandwidth. Bandwidth is the "capacity of a channel", so using per sec is not correct. Thank you!

  73. GG says:

    How do I do this?

    Compare the time required for a data transfer, assuming that a 6 Mbyte file is transmitted via a 44 kbps modem. On average, the line goes down once every 5 minutes after the connection has been established. It takes 20 seconds to make a new connection. Consider the three following cases:
    i. Without any specific service from the session layer.

    ii. With the session layer providing bookmarks every 100 kbytes.

    iii. With the session layer providing bookmarks every 1 Mbytes.

  74. karan says:

    Hi Brad, thank you for providing such a wonderful article. However, I have a doubt. I have an FTP server-client environment where I can basically agree with the formula for calculating the TCP throughput rate while downloading a file from server to client.

    My question is whether the formula holds good during uploads from client to server.

    In other words, would the download and upload transfer speeds be the same on a single session for each transaction?

    • Brad Hedlund says:

      Sometimes latency may differ in send vs. receive directions, such as where your receive bandwidth may be larger than your send bandwidth. So you would run a calculation for each direction of traffic.

      • karan says:

        Brad,

        In my case latency hardly differs either way. Moreover, capacity availability is not a concern. However, I am getting ~4MB/s while downloading and ~350kB/s while uploading. I can't expect them to be identical, but such a huge difference is a point of concern here. Please advise.

  75. karan says:

    As I am downloading from and uploading to the same server, do you think anything could be causing an issue here?
