After reading this article you will learn about the principles of congestion control.

A subnet is a complex system, and congestion control in a subnet can be treated as a control problem. The control may be either open loop or closed loop; the two take different approaches to the problem, and control-system specialists will be familiar with these terms.

In open-loop systems, the aim is to ensure by good design that the problem never occurs. Once the system is built and running, no further corrections are made.

For open-loop systems, the decisions to be made at the time of the design are:


1. Deciding when to accept new traffic

2. Deciding when to discard packets

3. Deciding which packets to discard and

4. Deciding on a scheduling policy at various points in the network.


The key point is that all of these decisions are made without reference to the present state of the system. In contrast, closed-loop decisions are based on feedback obtained from the system about the current state of affairs.

The approach used is usually in three parts:

1. Monitor the system to detect when and where congestion occurs.

2. Send this information to the places where action can be taken.


3. Adjust system operation to correct the problem.
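The three steps above can be sketched in code. This is a minimal illustration, not a real protocol: the `Router` class, the queue-occupancy threshold, and the rate-halving adjustment are all assumptions chosen for clarity.

```python
from dataclasses import dataclass, field
from collections import deque

@dataclass
class Router:
    name: str
    queue: deque = field(default_factory=deque)
    capacity: int = 100  # packets the queue can hold

    def congestion_level(self) -> float:
        # Step 1: detect congestion locally, here measured as queue occupancy.
        return len(self.queue) / self.capacity

def feedback(routers, threshold=0.8):
    # Step 2: report congested locations to a point where action can be taken.
    return [r.name for r in routers if r.congestion_level() > threshold]

def adjust(send_rate: float, congested: list) -> float:
    # Step 3: adjust operation to correct the problem, here by halving the
    # source's sending rate whenever any router reports congestion.
    return send_rate / 2 if congested else send_rate
```

For example, a source sending at 64 units with one router's queue 90% full would have its rate cut to 32 on the next feedback round.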

In order to take corrective action, various measures are possible.

The most important measures are:

1. The percentage of all packets discarded for lack of buffer space


2. The average queue length

3. The number of packets that are timed out and have to be retransmitted

4. The average packet delay and

5. Standard deviation of packet delay.
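The five measures above can be computed directly from routine router statistics. The function below is an illustrative sketch; its name and parameters are not from any real router API.

```python
from statistics import mean, pstdev

def congestion_measures(discarded, total, queue_lengths, timeouts, delays):
    """Compute the five congestion measures listed above (illustrative)."""
    return {
        "discard_pct": 100.0 * discarded / total,   # 1. packets lost to full buffers
        "avg_queue_len": mean(queue_lengths),        # 2. average queue length
        "retransmissions": timeouts,                 # 3. timed-out, retransmitted packets
        "avg_delay": mean(delays),                   # 4. average packet delay
        "delay_stddev": pstdev(delays),              # 5. standard deviation of delay
    }
```

For instance, 5 discards out of 100 packets gives a discard percentage of 5.0, and delays of 10, 20 and 30 ms give an average delay of 20 ms.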


A rise in any of these measures is a sign of increasing congestion, and it also reveals where the congestion is occurring. In the next step, this information must be sent to a place where some action can be taken to reduce the problem.

One way is to send this information to the source of the traffic. In other methods, a bit or a field can be reserved in each packet for routers to fill in whenever congestion crosses a certain predetermined undesirable level.

When a router detects congestion it fills in this field or bit so that other routers in the neighbourhood are warned. Another method could be for hosts or routers to send out probe packets from time to time to find out the status about congestion. This information may help to direct traffic around problem areas.
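The warning-bit scheme described above can be sketched as follows. The packet format and threshold here are hypothetical, though the idea resembles the Explicit Congestion Notification (ECN) bits in real IP headers.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    payload: bytes
    congestion_warning: bool = False  # the reserved bit routers may fill in

QUEUE_THRESHOLD = 0.8  # mark packets once the queue is over 80% full (assumed)

def forward(packet: Packet, queue_len: int, queue_cap: int) -> Packet:
    # A router fills in the warning bit whenever congestion crosses the
    # predetermined level, so that downstream routers and the source are warned.
    if queue_len / queue_cap > QUEUE_THRESHOLD:
        packet.congestion_warning = True
    return packet
```

A packet passing through a router whose queue is 90% full comes out marked; one passing through a lightly loaded router does not.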

In feedback schemes, it is hoped that knowledge of the congestion will cause the hosts or routers to take some corrective action. The time factor must also be taken into account. The presence of congestion implies that, at that instant, the offered load is greater than the available resources in some part of the system.

Therefore, the ultimate solutions are to increase the resources or to reduce the load.

For example, the bandwidth could be increased temporarily by using a dial-up phone line between certain points. In a system like SMDS, the provider may be approached for additional bandwidth for a short time. On satellite systems, increasing transmission power usually yields higher bandwidth. Splitting traffic over multiple routes, instead of always using the best route, may also increase the effective bandwidth.
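The traffic-splitting idea can be illustrated with a simple round-robin distributor. A real router would weight routes by capacity; this sketch splits evenly, and all names are illustrative.

```python
from itertools import cycle

def split_traffic(packets, routes):
    """Distribute packets over several routes round-robin, instead of
    always sending everything down the single best route (illustrative)."""
    assignment = {r: [] for r in routes}
    chooser = cycle(routes)  # cycle endlessly through the available routes
    for p in packets:
        assignment[next(chooser)].append(p)
    return assignment
```

Six packets over two routes end up three on each, halving the load placed on the best route.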

Finally, using spare routers, which are usually kept to make the system fault tolerant, may also help temporarily. They usually take over whenever there is a problem with some router. However, sometimes it is not possible to increase resources. The only solution left in that case is to reduce the load.

This can be done by denying service to some users, downgrading service to some or all users, or having users reschedule their demands in a more predictable way. Some of these methods are best applied internally to virtual circuits. Such methods exist at the transport layer, the network layer and the data link layer. A list of congestion prevention policies is suggested in Table 8.3.

Congestion Prevention Policies

Jitter and Jitter Control:

In addition to the above issues, there are certain performance needs that depend on the type of material being transmitted. So far it has been assumed that application programs have simple needs: they want as much bandwidth as the network can spare.

However, some applications, for example video applications, can specify an upper limit to their requirements. A video image displayed at a resolution of 352 x 240 pixels, with each pixel represented by 24 bits of information (as in 24-bit colour), has a frame size of (352 x 240 x 24)/8 = 247.5 KB. At 30 frames per second, such an application might request a throughput of roughly 60 Mbps.
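The arithmetic can be checked directly. The 30 frames-per-second rate is an assumption for illustration; the resolution and colour depth are from the text.

```python
# Frame size for 352 x 240 pixels at 24 bits per pixel.
width, height, bits_per_pixel = 352, 240, 24
frame_bits = width * height * bits_per_pixel       # 2,027,520 bits per frame
frame_kb = frame_bits / 8 / 1024                   # 247.5 KB per frame

fps = 30                                           # assumed frame rate
throughput_mbps = frame_bits * fps / 1_000_000     # roughly 60 Mbps uncompressed
```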

If the network can provide more bandwidth, it will not matter to this application. Since the difference between any two adjacent frames is usually small, it is possible to compress the video by transmitting only the differences between adjacent frames. This compressed video will not flow at a constant rate and will vary with time.

Therefore, it is possible to state an average bandwidth requirement, but the instantaneous rate may be higher or lower. Clearly, merely knowing the average rate may not suffice.

There are bound to be bursts, that is, peak rates that are sustained for some period of time. Knowing the peak rate is important because, if it exceeds the available channel capacity, the excess data must be buffered somewhere so that it can be transmitted later.
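A back-of-the-envelope sizing of that buffer follows directly: the excess of peak rate over channel capacity, accumulated over the burst duration, is what must be stored. All numbers in the example are illustrative.

```python
def buffer_needed(peak_rate_mbps, capacity_mbps, burst_seconds):
    """Bytes that must be buffered while a burst exceeds channel capacity."""
    excess_mbps = max(0.0, peak_rate_mbps - capacity_mbps)
    return excess_mbps * burst_seconds * 1_000_000 / 8  # convert Mbit to bytes
```

A 100 Mbps burst lasting half a second on a 60 Mbps channel leaves 40 Mbps x 0.5 s = 20 Mbit, i.e. 2,500,000 bytes, to be buffered; a burst below capacity needs no buffering at all.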

This knowledge lets the network designer allocate sufficient buffer capacity to hold these peak bursts. An application's delay requirements also matter to the designer: whether the one-way latency is 100 ms or 399 ms can matter as much as how the latency varies from packet to packet.

This variation is called jitter. Suppose that a video source sends a packet every 33 ms. If every packet arrives 33 ms after its predecessor, the delay experienced by each packet was the same. If this inter-packet gap varies, we say that the network has introduced jitter into the packet stream.

To control this jitter, it has to be bounded. This is done by calculating the expected transit time for each hop along the route. When a packet arrives at a router, the router can check whether the packet is ahead of or behind its schedule. This information is carried in the packet and updated at each hop.

If the packet is ahead of schedule, it is held back just long enough to put it back on schedule. If it is behind schedule, it is forwarded immediately. Thus packets ahead of schedule are delayed and packets behind schedule are rushed. This is known as jitter control.
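The hold-back rule above can be sketched as follows. This is a simplified single-node model, not a real router implementation: the schedule assumes packet i is due at i times the nominal inter-packet gap.

```python
def jitter_control(arrivals, expected_gap=33.0):
    """Forward packets on a fixed schedule to bound jitter.

    arrivals: list of (packet_id, arrival_time_ms) pairs.
    Packet i is due at i * expected_gap ms.
    Returns (packet_id, forward_time_ms) pairs."""
    out = []
    for i, (pid, arrival) in enumerate(arrivals):
        due = i * expected_gap
        # Ahead of schedule: hold until the due time. Behind: forward at once.
        out.append((pid, max(arrival, due)))
    return out
```

With a 33 ms gap, a packet arriving at 20 ms (13 ms early) is held until 33 ms, while one arriving at 70 ms (4 ms late, due at 66 ms) is forwarded immediately.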
