Feb. 19, 2011, 9:29 a.m.
posted by mp
Latency Versus Link Utilization
Queuing delay increases rapidly as link utilization approaches 100 percent. Intuitively, congestion should not arise as long as the link utilization remains below 100 percent. In reality, the rate at which packets reach an output interface varies significantly over time, and some level of congestion takes place even when the average utilization is well below 100 percent. The figure below illustrates this relationship. It uses normalized latency time units and assumes random packet arrivals according to a Poisson distribution. The exact shape of the curve depends on the statistical nature of the arrivals and the distribution of packet sizes; however, it illustrates the rapid growth of latency as the utilization of a link increases.
Congestion Latency as a Function of Link Utilization
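For Poisson arrivals, the shape of this curve has a simple closed form in the classic M/M/1 queuing model: the mean time a packet spends in the system, measured in units of the mean service time, is 1/(1 − ρ), where ρ is the link utilization. This is a minimal sketch under that illustrative assumption, not necessarily the exact model behind the figure:

```python
# Normalized queuing latency versus link utilization for an M/M/1 queue
# (Poisson arrivals, exponential service times). Latency is expressed in
# units of the mean service time: W = 1 / (1 - rho).

def normalized_latency(utilization):
    """Mean time in system, in service-time units, at the given utilization."""
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return 1.0 / (1.0 - utilization)

for rho in (0.50, 0.80, 0.90, 0.95, 0.99):
    print(f"utilization {rho:4.0%} -> latency {normalized_latency(rho):6.1f}x")
```

At 50 percent utilization a packet waits about twice its own service time; at 99 percent, about 100 times. This knee near full utilization is what the figure shows.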
In the past decade, many studies have questioned the validity of the Poisson assumption and have suggested that packet-switched traffic exhibits self-similar behavior. This finding suggests that the traffic variability (burstiness) remains the same across different measuring intervals. Figures 5-5 and 5-6 show the appearance of Poisson and self-similar traffic when you measure them at different time scales. These figures show the same traffic pattern at 10-second and 1-second measuring intervals. Self-similar traffic results in a latency behavior similar to the curve shown previously, but latency grows even more rapidly as the link utilization increases.
Poisson Traffic Pattern
Self-Similar Traffic Pattern
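The difference between the two traffic types shows up when you aggregate arrival counts over larger time bins: Poisson traffic smooths out quickly, while heavy-tailed (self-similar-like) traffic stays bursty. The sketch below contrasts the coefficient of variation of per-bin packet counts at two time scales; the `bursty_arrivals` generator is a crude stand-in for self-similar traffic (Pareto-sized packet bursts), an assumption for illustration only:

```python
import random
import statistics

random.seed(42)

def bin_counts(arrivals, bin_width, horizon):
    """Count arrivals per time bin of the given width."""
    counts = [0] * int(horizon / bin_width)
    for t in arrivals:
        i = int(t / bin_width)
        if i < len(counts):
            counts[i] += 1
    return counts

def poisson_arrivals(rate, horizon):
    """Arrival times of a Poisson process (exponential inter-arrivals)."""
    t, out = 0.0, []
    while True:
        t += random.expovariate(rate)
        if t >= horizon:
            return out
        out.append(t)

def bursty_arrivals(gap_rate, horizon, alpha=1.5):
    """Heavy-tailed stand-in: Pareto-sized bursts of back-to-back packets
    separated by exponential idle gaps. Heavy-tailed burst sizes are one
    mechanism behind self-similar traffic."""
    t, out = 0.0, []
    while t < horizon:
        burst = int(random.paretovariate(alpha))  # heavy-tailed burst size
        for _ in range(burst):
            if t >= horizon:
                break
            out.append(t)
            t += 0.001  # packets back to back
        t += random.expovariate(gap_rate)  # idle gap between bursts
    return out

def cv(counts):
    """Coefficient of variation: burstiness of the per-bin counts."""
    return statistics.pstdev(counts) / statistics.mean(counts)

horizon = 10_000.0
for name, arrivals in (("Poisson", poisson_arrivals(10.0, horizon)),
                       ("bursty ", bursty_arrivals(5.0, horizon))):
    for width in (1.0, 10.0):
        print(f"{name} CV at {width:4.0f}s bins:"
              f" {cv(bin_counts(arrivals, width, horizon)):.2f}")
```

For the Poisson source, moving from 1-second to 10-second bins cuts the coefficient of variation by roughly a factor of sqrt(10); the heavy-tailed source retains much more of its variability, which is the self-similar signature the figures illustrate.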
A detailed discussion about traffic patterns is beyond the scope of this book. The "References" section at the end of this chapter lists additional sources of information on the subject.
Minimizing latency and maximizing link utilization are opposing goals. On one hand, bandwidth has a significant cost and should not sit idle. On the other hand, maximizing its use can lead to congestion and a negative impact on latency, jitter, and packet loss. You can manage utilization with adjustments to traffic load, traffic capacity, or both. You can make these adjustments at the link or class level with the different designs that this chapter describes. The upcoming sections show you multiple alternatives to achieve your latency, jitter, and loss targets. Risk tolerance, operational costs, and bandwidth costs, among other factors, will determine which approach is most appropriate in a particular network.
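One way to reason about this trade-off quantitatively: under a simple M/M/1 queuing assumption (an illustrative model, not a prescription from this chapter), mean latency in service-time units is 1/(1 − ρ), so a latency budget of k service times caps utilization at ρ ≤ 1 − 1/k. A small sketch of that inversion:

```python
# Maximum link utilization that keeps mean queuing latency within a budget,
# under an illustrative M/M/1 model: W = 1/(1 - rho)  =>  rho <= 1 - 1/W_max.
# Real traffic is burstier (self-similar), so operators leave extra headroom.

def max_utilization(latency_budget):
    """Highest utilization whose mean latency, in units of the mean
    service time, stays at or below latency_budget."""
    if latency_budget < 1.0:
        raise ValueError("latency cannot be below one service time")
    return 1.0 - 1.0 / latency_budget

# A budget of 5x the service time allows at most 80% utilization:
print(max_utilization(5.0))  # -> 0.8
```

Tightening the latency budget forces utilization down, which is exactly the cost trade-off described above: the stricter the latency, jitter, and loss targets, the more bandwidth must be left unused as headroom.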