Backbone Performance

This section discusses multiple aspects of the performance of a network backbone. The section "Performance Requirements for Different Applications" explains how applications determine the performance requirements for the backbone. Identifying these requirements is crucial before discussing QoS design alternatives. The section "Segmentation of Performance Targets" covers how you can divide performance targets to facilitate their implementation. Finally, the section "Factors Affecting Performance Targets" discusses the different components that contribute to latency, jitter, and loss.

Performance Requirements for Different Applications

A network backbone needs a scalable and flexible QoS design that can satisfy a wide range of services. Some of these services can transport a range of different applications themselves. Most networks have the short- or long-term goal of transporting a varied mix of traffic. Think of voice, video, and data as the three main components. However, these applications can have different requirements depending on the nature of the service. From a QoS perspective, latency, jitter, and packet loss are the main performance metrics. They constitute the main criteria to specify the requirements of a specific type of traffic.

The table that follows illustrates the traffic classification that the Transport Area working group at the Internet Engineering Task Force (IETF) is currently discussing. This list proposes a comprehensive enumeration of different types of traffic. Packet size, burstiness, elasticity, bandwidth consumption, and flow duration are the main criteria used to classify all traffic. This list is useful for understanding the different nature of the traffic that the network is likely to transport. The original document details how to treat these categories as individual classes. However, deployment experience shows that, with some assumptions, your backbone QoS design can abstract this level of detail and still meet the application requirements.

Service Class Characteristics Proposal at the IETF Transport Area Working Group

Traffic | Characteristics | Latency Tolerance | Jitter Tolerance | Loss Tolerance
Network control | Variable-size packets, mostly inelastic short messages, but traffic can also burst (BGP[*]) | Low | Yes | Low
Telephony | Fixed-size small packets, constant emission rate, inelastic and low-rate flows | Very low | Very low | Very low
Signaling | Variable-size packets, somewhat bursty short-lived flows | Low | Yes | Low
Multimedia conferencing | Variable-size packets, constant send interval, rate adaptive, reacts to loss | Very low | Low | Low to medium
Real-time interactive | RTP/UDP streams, inelastic, mostly variable rate | Very low | Low | Low
Multimedia streaming | Variable-size packets, elastic with variable rate | Medium | Yes | Low to medium
Broadcast video | Constant and variable rate, inelastic, nonbursty flows | Medium | Low | Very low
Low-latency data | Variable rate, bursty short-lived elastic flows | Low to medium | Yes | Low
OAM[*] | Variable-size packets, elastic and inelastic flows | Medium | Yes | Low
High-throughput data | Variable rate, bursty long-lived elastic flows | Medium to high | Yes | Low
Standard | A bit of everything | Not specified | Not specified | Not specified
Low-priority data | Non-real-time and elastic | High | Yes | High

[*] BGP = Border Gateway Protocol

[*] OAM = Operations and maintenance

Note

Elastic traffic (for example, a file transfer) experiences increasing or decreasing levels of performance according to the bandwidth available. On the other hand, inelastic traffic (for example, a voice call) has hard requirements. Performance remains constant while the requirements are met (regardless of any additional bandwidth available), and it drops drastically otherwise.


This traffic classification also describes the packet latency, jitter, and loss tolerance for each class. The original specification only provides a relative performance characterization (high, medium, low). For packet jitter in particular, the classification identifies some classes as jitter tolerant. This characterization implies that a moderate level of jitter does not affect the application performance. Those applications rely on buffering at the application endpoint, which allows them to adapt to latency variation. In many cases, TCP provides such buffering.

ITU-T Recommendation G.1010 includes a definition of traffic types with general delay targets from a user perspective. The first table that follows lists the different traffic categories. This classification uses error tolerance and latency requirements (interactive, responsive, timely, and noncritical) as its main criteria. The delay targets represent the broad end-to-end user requirements. The second table provides a summarized version of the more specific performance targets that the same ITU-T recommendation provides. These targets represent the end-to-end application requirements regardless of particular network technologies or designs. That table focuses on those applications with subsecond latency targets. You can use these values as a guideline to define your own targets.

ITU-T Rec. G.1010 Model for User-Centric QoS Categories

Traffic | Characteristics | Delay Target
Conversational voice and video | Interactive, error tolerant | << 1 s
Command/control (for example, Telnet, interactive games) | Interactive, error intolerant | << 1 s
Voice/video messaging | Responsive, error tolerant | ~2 s
Transactions (for example, e-commerce, Internet browsing, e-mail access) | Responsive, error intolerant | ~2 s
Streaming audio and video | Timely, error tolerant | ~10 s
Messaging, downloads (for example, FTP, still image) | Timely, error intolerant | ~10 s
Fax | Noncritical, error tolerant | >>10 s
Background (for example, Usenet) | Noncritical, error intolerant | >>10 s


ITU-T Rec. G.1010 Performance Targets for Sensitive Audio, Video, and Data Applications

Application | Latency | Jitter | Loss
Conversational voice | <150 ms preferred, <400 ms limit | <1 ms | <3%
Videophone | <150 ms preferred, <400 ms limit | Not specified | <1%
Command/control | <250 ms | Not specified | 0%
Interactive games | <200 ms | Not specified | 0%
Telnet | <200 ms | Not specified | 0%
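
If you use these values as a starting point for your own targets, it can help to express them in machine-readable form. The following Python sketch is a minimal example of that idea: it encodes a few of the targets from the preceding table and flags measured flow statistics that exceed them. The dictionary layout, function name, and example measurements are illustrative assumptions rather than part of the recommendation.

```python
# Hypothetical sketch: check measured flow statistics against guideline targets.
# Target numbers come from the preceding ITU-T G.1010 summary; the structure
# and names are illustrative assumptions.

GUIDELINE_TARGETS = {
    # application: (max latency in ms, max jitter in ms or None, max loss ratio)
    "conversational-voice": (150.0, 1.0, 0.03),
    "videophone":           (150.0, None, 0.01),
    "interactive-games":    (200.0, None, 0.0),
}

def violations(application, latency_ms, jitter_ms, loss_ratio):
    """Return the metrics that exceed the guideline for the given application."""
    max_latency, max_jitter, max_loss = GUIDELINE_TARGETS[application]
    failed = []
    if latency_ms > max_latency:
        failed.append("latency")
    if max_jitter is not None and jitter_ms > max_jitter:
        failed.append("jitter")
    if loss_ratio > max_loss:
        failed.append("loss")
    return failed

# Example: a voice flow measured at 170 ms latency, 0.5 ms jitter, 1% loss
print(violations("conversational-voice", 170.0, 0.5, 0.01))  # ['latency']
```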


Note

Remember that the performance targets in the preceding table represent the end-to-end application requirements. The network does not necessarily have to meet all of those targets by itself. For instance, buffering at the application endpoint can help reduce jitter at the expense of additional latency. Also, the application may use error-correcting mechanisms to compensate for packet loss.
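
To illustrate the trade-off mentioned in the note, the following Python sketch models a simple fixed playout (de-jitter) buffer: packets are released a fixed offset after they were sent, which removes delay variation but makes every surviving packet pay the full offset as latency. The function name and the sample delays are assumptions for illustration only.

```python
# Illustrative sketch: a fixed playout (de-jitter) buffer at the endpoint
# removes delay variation at the expense of additional latency.

def playout_results(arrival_delays_ms, playout_offset_ms):
    """Return (per-packet latency after buffering, late packets).

    Each packet is held until playout_offset_ms after its send time, so the
    surviving packets all see the same latency; packets that arrive later
    than the offset miss their playout time and are effectively lost.
    """
    latencies = [playout_offset_ms for d in arrival_delays_ms if d <= playout_offset_ms]
    late = [d for d in arrival_delays_ms if d > playout_offset_ms]
    return latencies, late

# Example: network delays between 20 and 60 ms, 60-ms playout offset
print(playout_results([20, 35, 60, 25, 45], playout_offset_ms=60))
# -> ([60, 60, 60, 60, 60], [])  zero jitter, but 60 ms of latency for every packet
```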


Segmentation of Performance Targets

In most cases, several network segments contribute to the final delay, jitter, and packet loss that an application experiences. The figure that follows illustrates an example in which you can easily identify five network segments between application endpoints. In this case, the network backbone is only one of them.

Segmentation of End-to-End Performance Targets


The specific targets for latency, jitter, and packet loss in the backbone should factor in the impact of these other components. Therefore, the backbone targets will ultimately be significantly lower than the end-to-end targets listed earlier. By the end of the book, it should be obvious that, under normal operation and with proper engineering, the backbone should make a relatively small contribution to the total end-to-end latency, jitter, and packet loss.

You can further segment your performance targets within your backbone. A simple approach divides the target equally among the hops (see the figure that follows). If you focus on latency for a moment, you can define your per-hop latency target as the edge-to-edge latency target divided by the number of hops. This calculation should yield a conservative result, because a packet is more likely to face congestion at a single hop than at every hop along the path. When following the same approach for jitter, be aware of the nonadditive nature of this metric. Amendment 2 of ITU-T Recommendation Y.1541 provides details about how to concatenate performance parameters.

Segmentation of Backbone Performance Targets
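
To make the equal-division approach concrete, here is a minimal Python sketch that splits an edge-to-edge latency budget evenly among the hops. The function name and the example numbers (a 20-ms edge-to-edge budget across five hops) are assumptions for illustration, and, as noted above, jitter does not divide in the same simple way.

```python
# Minimal sketch of equal per-hop segmentation of an edge-to-edge latency budget.
# Example values are assumptions for illustration only.

def per_hop_latency_budget_ms(edge_to_edge_budget_ms, hop_count):
    """Split the edge-to-edge latency target equally among the hops."""
    if hop_count < 1:
        raise ValueError("hop_count must be at least 1")
    return edge_to_edge_budget_ms / hop_count

# Example: a 20-ms edge-to-edge target across 5 backbone hops
print(per_hop_latency_budget_ms(20.0, 5))   # 4.0 ms per hop (a conservative allocation)
```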


Factors Affecting Performance Targets

Propagation, serialization, processing, and queuing are the sources of latency in a network. Propagation delay is a function of the distance a signal travels on physical links and its propagation speed. Serialization delay represents the time between the first bit and the last bit of a packet entering a link; as link rates increase, serialization becomes less significant. Processing delay, also called switching delay, represents the time a node takes to process a packet from its arrival until it is ready for transmission. The last component is queuing delay, which results from any buffering attributable to congestion. This congestion happens mainly at the output interface, but some nodes may have other congestion points (for instance, switch fabrics). Queuing can be the main delay source, but you can control it through careful network design. Fluctuations in these delay sources produce jitter.
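
As a rough illustration of the first two delay components, the following Python sketch estimates propagation and serialization delay for a single link. The assumed propagation speed (about two thirds of the speed of light, a common rule of thumb for fiber) and the example link parameters are assumptions, not values from this chapter.

```python
# Illustrative estimate of per-link propagation and serialization delay.
# Example values only; actual numbers depend on the medium and equipment.

SPEED_IN_FIBER_M_PER_S = 2.0e8   # ~2/3 of the speed of light, a common rule of thumb

def propagation_delay_ms(distance_km):
    """Time for a signal to traverse the link distance."""
    return (distance_km * 1000.0) / SPEED_IN_FIBER_M_PER_S * 1000.0

def serialization_delay_ms(packet_bytes, link_bps):
    """Time between the first and last bit of a packet entering the link."""
    return (packet_bytes * 8.0) / link_bps * 1000.0

# A 1500-byte packet on a 1000-km, 10-Gbps link:
print(round(propagation_delay_ms(1000), 3))           # ~5.0 ms of propagation delay
print(round(serialization_delay_ms(1500, 10e9), 6))   # ~0.0012 ms of serialization delay
```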

Physical link errors, routing failures, node processing errors, and queue drops are the main sources of packet loss in a network. In the first place, nodes discard packets when they detect bit errors. In addition, packet loss can result from transient routing failures during topology changes. Nodes also drop packets because of malfunctioning hardware or improper configuration. Queue drops (whether the result of tail dropping or active queue management [AQM]) can introduce packet loss in the presence of congestion. As with latency and jitter, careful network design enables you to control the negative impact that congestion can cause in the network.

Network failures can also significantly impact the latency, jitter, and packet loss that a packet may experience at a particular point in the network. A link or node failure will obviously have an impact on traffic while routing converges. However, the performance impact can remain significant after convergence because some links will receive higher traffic loads. The exact impact of a link or node failure is highly dependent on the topology of the network. The figure that follows shows an example of how the failure of one link (between P1 and P3) can result in another link (between P2 and P4) potentially receiving twice as much traffic. In comparison, a failure of the link between PE1 and P1 is likely to have a lesser impact on that same link.

Impact of Network Failures on Traffic Load Distribution


An unexpected traffic surge can also affect backbone performance. These surges can result from a denial-of-service (DoS) attack or from overwhelming application traffic during an extraordinary event whose load is difficult to predict. In most cases, the backbone QoS design needs to be complemented with other security and admission-control mechanisms. Without a holistic approach, these sources of unexpected traffic could degrade backbone performance and thus affect multiple network services.


