Congestion Control Techniques in Computer Networks

Last Updated : 10 Feb, 2026

Congestion control is a mechanism that controls a sender’s data transmission rate so the network remains stable under varying traffic loads.

  • It keeps network delay under control by limiting how much data builds up in router queues.
  • It reduces jitter by preventing queues from growing and shrinking unpredictably during traffic bursts.
  • It lowers packet loss by avoiding buffer overflow in routers and switches.
  • It improves fairness by ensuring that multiple connections share available bandwidth more evenly.
  • It maintains stable throughput by reducing retransmissions and avoiding repeated overload cycles.
[Figure: Congestion control techniques]

Types of Congestion Control

These techniques are broadly classified into two categories: Open-Loop (preventive) methods and Closed-Loop (reactive) methods.

1. Open-loop congestion control

Open-loop congestion control prevents congestion by using predefined rules and design-time policies, instead of relying on real-time feedback from the network during transmission.

Open Loop Congestion Control Policies

1. Retransmission Policy: Lost or corrupted packets are retransmitted by the sender. To prevent congestion, retransmission timers must be carefully set to avoid excessive retries, which could worsen network load.
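A common way to set retransmission timers conservatively is exponential backoff, where each failed attempt doubles the wait before the next retry. The sketch below is illustrative (the function name and cap value are assumptions, not tied to any specific protocol stack):

```python
def next_timeout(base_rto, attempt, max_rto=60.0):
    """Double the retransmission timeout on each failed attempt,
    capped at max_rto, so retries do not flood a congested network."""
    return min(base_rto * (2 ** attempt), max_rto)

# Successive timeouts starting from a 1-second base RTO.
timeouts = [next_timeout(1.0, a) for a in range(7)]
print(timeouts)  # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 60.0]
```

Because the retry interval grows quickly, a sender backs off sharply instead of piling more traffic onto an already congested path.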

2. Window Policy: The type of sliding window protocol affects congestion. In Go-Back-N ARQ, multiple packets may be retransmitted for a single loss, increasing network load. Selective Repeat ARQ is preferred, as it retransmits only the missing packets, reducing unnecessary congestion.
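The difference in retransmission load between the two ARQ schemes can be counted directly. This is a minimal sketch with assumed names; it models a single loss inside one window:

```python
def retransmissions(window, lost_index, policy):
    """Packets resent after one loss inside a window of size `window`.
    Go-Back-N resends the lost packet and everything sent after it;
    Selective Repeat resends only the lost packet."""
    if policy == "go-back-n":
        return window - lost_index
    elif policy == "selective-repeat":
        return 1
    raise ValueError(f"unknown policy: {policy}")

print(retransmissions(8, 2, "go-back-n"))         # 6 packets resent
print(retransmissions(8, 2, "selective-repeat"))  # 1 packet resent
```

The earlier a packet is lost in a Go-Back-N window, the more unnecessary retransmissions it triggers, which is exactly the extra load the window policy tries to avoid.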

3. Discarding Policy: Routers can selectively discard less important, damaged, or delayed packets to prevent congestion. This is useful in multimedia applications like audio or video streaming, where losing some packets may not noticeably affect quality.

4. Acknowledgment Policy: Acknowledgment traffic adds load to the network. Congestion can be reduced by employing strategies such as sending a single acknowledgement for multiple packets or delaying acknowledgements until a packet is ready to be sent or a timer expires.
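The saving from cumulative acknowledgements can be illustrated with a simple count (the function below is a sketch, assuming one ACK covers a fixed batch of packets):

```python
def acks_sent(num_packets, ack_every):
    """Cumulative acknowledgements: one ACK covers `ack_every` packets,
    plus a final ACK for any leftover packets."""
    full, rest = divmod(num_packets, ack_every)
    return full + (1 if rest else 0)

print(acks_sent(10, 1))  # 10 ACKs: one per packet
print(acks_sent(10, 4))  # 3 ACKs: two full batches plus a final ACK
```

Cutting the ACK count from 10 to 3 for the same data reduces the reverse-direction traffic that would otherwise contribute to congestion.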

5. Admission Policy: Admission control allows new connections only if sufficient network resources are available. Before establishing a connection, routers or switches check bandwidth and buffer availability. If adding a new flow would risk congestion, the request is denied to protect existing traffic.
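The admission check itself reduces to a capacity comparison. The sketch below uses assumed parameter names and considers only bandwidth (a real admission controller would also check buffers and other resources):

```python
def admit(requested_bw, link_capacity, reserved_bw):
    """Admit a new flow only if its bandwidth demand fits within
    the capacity not already reserved by existing flows."""
    return reserved_bw + requested_bw <= link_capacity

print(admit(10, 100, 85))  # True: 10 Mbps fits in the remaining 15 Mbps
print(admit(20, 100, 85))  # False: admitting would risk congestion
```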

6. Traffic Shaping and Rate Control: Open-loop systems may also include traffic shaping, where sources smooth the rate at which packets enter the network, and rate control, which ensures that traffic conforms to agreed limits before it is admitted.
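A classic traffic-shaping mechanism is the token bucket: tokens accumulate at a fixed rate up to a burst limit, and a packet may be sent only if enough tokens are available. The class below is a minimal sketch (names and units are assumptions; time is passed in explicitly to keep it deterministic):

```python
class TokenBucket:
    """Token-bucket shaper: tokens accrue at `rate` units per second,
    up to `capacity`; sending a packet spends tokens equal to its size."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity  # start with a full bucket
        self.last = 0.0

    def allow(self, now, size):
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

tb = TokenBucket(rate=100.0, capacity=200.0)
print(tb.allow(0.0, 150))  # True: the burst fits the full bucket
print(tb.allow(0.0, 150))  # False: the bucket is nearly empty
print(tb.allow(2.0, 150))  # True: tokens have refilled after 2 seconds
```

The bucket permits short bursts up to `capacity` while holding the long-term rate to `rate`, which is exactly the "conform to agreed limits" behaviour described above.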

2. Closed-loop congestion control

Closed-loop congestion control detects congestion during communication using feedback from the network and then reacts by adjusting the sending rate or network behavior to reduce overload.

1. Backpressure: A congested node temporarily stops accepting packets from its upstream neighbors, slowing traffic back toward the source. This method propagates congestion signals backward through the network. It works primarily in virtual circuit networks, where nodes know their upstream neighbors. For example, if the 3rd node is congested, it stops receiving packets, causing congestion at the 2nd node, which then affects the 1st node and eventually informs the source to slow down.

[Figure: Backpressure propagating from the congested 3rd node back to the source]

In the above diagram, the 3rd node is congested and stops receiving packets. As a result, the 2nd node's output flow slows and it may become congested in turn; similarly, the 1st node may become congested and finally inform the source to slow down.
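The hop-by-hop propagation above can be sketched as a backward walk along the path. This is an illustrative model (node queues are plain numbers, and a node pauses if it is itself congested or its downstream neighbour has paused):

```python
def apply_backpressure(queues, limit):
    """Return, for each node on the path, whether it must pause.
    A node pauses when its own queue exceeds `limit`, or when the node
    after it has paused, so pressure ripples back toward the source."""
    paused = [False] * len(queues)
    for i in range(len(queues) - 1, -1, -1):  # walk from last hop to source
        downstream_blocked = i + 1 < len(queues) and paused[i + 1]
        paused[i] = queues[i] > limit or downstream_blocked
    return paused

# Node index 2 (the 3rd node) is congested; nodes 1 and 0 pause as well.
print(apply_backpressure([3, 4, 12, 2], limit=10))
# [True, True, True, False]
```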

2. Choke Packet Technique: When a router experiences congestion, it sends a special choke packet directly to the source, instructing it to reduce its transmission rate. The router monitors its resource usage and triggers the choke packet when utilization exceeds a threshold. Only the source is notified, while intermediate nodes remain unaware of the congestion.
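The router's side of this technique is a simple threshold check. The sketch below uses invented field names for the choke message; it only shows the trigger logic, not real packet formats:

```python
def check_utilization(util, threshold=0.8):
    """A router monitors its own utilization; above the threshold it
    emits a choke packet addressed directly to the source."""
    if util > threshold:
        return {"type": "choke", "to": "source", "action": "reduce-rate"}
    return None  # no congestion: nothing to send

print(check_utilization(0.95))  # choke packet sent straight to the source
print(check_utilization(0.40))  # None: utilization is healthy
```

Note that intermediate nodes never see this decision: the choke packet travels only between the congested router and the source.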

[Figure: Choke packet sent from the congested router directly to the source]

3. Implicit Signaling: In this method, no explicit congestion message is sent. Instead, the source infers congestion from network behavior, such as delayed or missing acknowledgments, and adjusts its sending rate accordingly.
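TCP's additive-increase/multiplicative-decrease (AIMD) rule is the best-known example of implicit signaling: a good acknowledgment grows the window, while a timeout is read as congestion and halves it. A minimal sketch, with assumed constants:

```python
def aimd_update(cwnd, ack_received, add=1.0, mult=0.5):
    """Additive-increase/multiplicative-decrease: grow the congestion
    window on a good ACK; halve it (never below 1) when a missing ACK
    implies congestion."""
    if ack_received:
        return cwnd + add
    return max(1.0, cwnd * mult)

cwnd = 1.0
for ack in [True, True, True, False, True]:  # a timeout after three ACKs
    cwnd = aimd_update(cwnd, ack)
print(cwnd)  # 1 -> 2 -> 3 -> 4 -> 2 -> 3
```

The sender never receives an explicit congestion message; the missing acknowledgment alone drives the rate reduction.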

4. Explicit Signaling: A congested node directly informs the source or destination about congestion by embedding congestion information inside data packets. This differs from choke packets, which are separate control packets.

  • Forward Signaling: Congestion information is sent to the destination, which then takes measures to handle or avoid congestion.
  • Backward Signaling: Congestion information is sent to the source, instructing it to slow down its transmission rate.

Network-assisted Congestion Control

Network-assisted congestion control is a method where routers actively support congestion handling by signaling congestion early or controlling queue buildup, instead of leaving everything to the sender’s loss-based guessing.

Network-assisted Techniques

1. ECN (Explicit Congestion Notification): ECN allows routers to mark packets instead of dropping them when congestion is starting. The receiver echoes this mark back to the sender, and the sender reduces its sending rate, which helps control congestion without waiting for packet loss to occur.
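The mark-and-echo loop can be sketched in a few lines. Packets here are plain dictionaries with invented keys (`ect` for "ECN-capable transport", `ce` for "Congestion Experienced"); real ECN uses two bits in the IP header as defined in RFC 3168:

```python
def router_forward(packet, queue_len, mark_threshold):
    """An ECN-capable router marks (rather than drops) packets from
    ECN-capable senders once its queue grows past a threshold."""
    if packet.get("ect") and queue_len > mark_threshold:
        packet["ce"] = True  # set the Congestion Experienced mark
    return packet

def sender_react(echoed_ce, cwnd):
    """The sender halves its window when the receiver echoes a CE mark."""
    return max(1.0, cwnd / 2) if echoed_ce else cwnd

pkt = router_forward({"ect": True}, queue_len=30, mark_threshold=20)
print(pkt)                                             # packet now carries the CE mark
print(sender_react(pkt.get("ce", False), cwnd=16.0))   # 8.0
```

The key point is that no packet was lost: the sender slowed down purely because of the echoed mark.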

2. AQM (Active Queue Management): AQM is a queue management approach where routers start dropping or marking packets before the buffer becomes full. This prevents long queues from building up, which reduces queueing delay and makes network performance smoother during peak traffic.
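A well-known AQM scheme is Random Early Detection (RED), which drops or marks with a probability that rises linearly between two queue thresholds. The sketch below shows only the probability curve, with assumed threshold values:

```python
def red_drop_probability(avg_queue, min_th, max_th, max_p=0.1):
    """RED-style drop probability: zero below min_th, certain drop at or
    above max_th, and linearly increasing in between."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

print(red_drop_probability(3, 5, 15))   # 0.0: queue is short, no drops
print(red_drop_probability(10, 5, 15))  # 0.05: early, gentle dropping
print(red_drop_probability(20, 5, 15))  # 1.0: queue is saturated
```

Because a few packets are dropped while the queue is still moderate, senders back off before the buffer ever fills, which is the whole point of acting early.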

3. CoDel (Controlled Delay): CoDel is an AQM technique that focuses on controlling delay rather than just buffer size. It monitors how long packets stay in the queue, and if the delay remains high for a sustained time, it drops or marks packets to force senders to slow down and bring the queue delay back to a healthy level.
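CoDel's core test can be sketched as: drop only when the minimum queueing delay has stayed above a target for a full interval. This is a heavily simplified model with assumed defaults (real CoDel, per RFC 8289, uses 5 ms / 100 ms and a more elaborate drop schedule):

```python
def codel_decision(min_sojourn, above_since, now, target=0.005, interval=0.100):
    """Simplified CoDel check. `min_sojourn` is the smallest queue delay
    seen recently; `above_since` is when delay first exceeded `target`
    (None if it has not). Returns (should_drop, new_above_since)."""
    if min_sojourn <= target:
        return False, None        # delay is healthy: reset the bad-period timer
    if above_since is None:
        return False, now         # delay just went bad: start timing
    return now - above_since >= interval, above_since

drop, since = codel_decision(0.008, None, now=0.0)    # start timing the bad period
drop, since = codel_decision(0.009, since, now=0.12)  # 120 ms of sustained high delay
print(drop)  # True: a drop is triggered to force senders to slow down
```

Monitoring the *time spent* in the queue, rather than the queue length, is what lets CoDel distinguish a harmless burst from a standing queue.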

Why it is useful: Network-assisted methods reduce the chance of buffer overflow and reduce sudden bursts of packet loss, because congestion is handled earlier through marks or controlled drops.

Key benefit for real networks: These methods keep latency-sensitive applications like voice and video more stable, because they prevent queues from becoming excessively long during heavy load.
