Essay: Congestion in a network – proposal

Published: 16 February 2016

CHAPTER-1
GENERAL INTRODUCTION
Presently the Internet accommodates simultaneous audio, video, and data traffic. This requires the Internet to keep packet loss within guaranteed bounds, which in turn depends heavily on congestion control. A series of protocols have been introduced to supplement the insufficient TCP mechanism for controlling network congestion. Core-Stateless Fair Queuing (CSFQ) was designed as an open-loop controller to provide a fair best-effort service by supervising per-flow bandwidth consumption, but it became ineffective when P2P flows started to dominate Internet traffic. Token-Based Congestion Control (TBCC) is based on a closed-loop congestion control principle: it restricts the token resources consumed by an end-user and provides a fair best-effort service with O(1) complexity. Like Self-Verifying CSFQ and Re-feedback, it carries a heavy load when policing inter-domain traffic, owing to the lack of trust between domains. In this work, Stable Token-Limited Congestion Control (STLCC) is introduced as a new protocol that appends inter-domain congestion control to TBCC and makes the congestion control system stable. STLCC is able to shape output and input traffic at the inter-domain link with O(1) complexity.
STLCC produces a congestion index, pushes packet loss to the network edge and improves network performance. Finally, a simple version of STLCC is introduced. This version is deployable in the Internet without any modification to the IP protocol and also preserves the packet datagram. Modern IP network services provide for the simultaneous digital transmission of voice, video, and data. These services require congestion control protocols and algorithms that keep the packet loss parameter under control. Congestion control is therefore the cornerstone of packet-switching networks [28]. It should prevent congestion collapse, provide fairness to competing flows and optimize transport performance indexes such as throughput, delay and loss.
Congestion control of the best-effort service in the Internet was originally designed for a cooperative environment. It is still mainly dependent on the TCP congestion control algorithm at terminals, supplemented with load shedding [1] at congested links. This model is called the Terminal Dependent Congestion Control case.
Despite this vast literature, congestion control in telecommunication networks struggles with two major problems that are not completely solved. The first is the time-varying delay between the control point and the traffic sources. The second is related to the possibility that the traffic sources do not follow the feedback signal; this may happen because some sources are silent, as they have nothing to transmit.
Core-Stateless Fair Queuing (CSFQ) [3] sets up an open-loop control system at the network layer, which inserts a label for the flow arrival rate in the packet header at edge routers and drops packets at core routers based on that rate label if congestion happens. CSFQ was the first to achieve approximately fair bandwidth allocation among flows with O(1) complexity at core routers.
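The dropping rule at a CSFQ core router can be sketched as follows. The function names are illustrative, and the fair share is passed in as a parameter for simplicity (in CSFQ the core router estimates the fair share itself from aggregate measurements):

```python
import random

def csfq_drop_prob(rate_label: float, fair_share: float) -> float:
    """CSFQ-style core-router drop probability: packets from flows whose
    labelled arrival rate exceeds the fair share are dropped with
    probability 1 - fair_share/rate_label, so each flow's accepted
    rate converges toward the fair share."""
    if rate_label <= fair_share:
        return 0.0
    return 1.0 - fair_share / rate_label

def forward(rate_label: float, fair_share: float, rng=random.random) -> bool:
    """Return True if the packet is forwarded, False if dropped."""
    return rng() >= csfq_drop_prob(rate_label, fair_share)
```

A flow labelled at twice the fair share thus loses about half its packets, which is exactly what restores fairness without any per-flow state at the core.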
According to a Cache Logic report, P2P traffic made up 60% of all Internet traffic in 2004, of which BitTorrent [4] was responsible for about 30%, although the report generated quite a lot of discussion around the real numbers. In networks with P2P traffic, CSFQ can provide fairness to competing flows, but per-flow fairness is unfortunately not what end-users and operators really want, since a single user can open many flows. Token-Based Congestion Control (TBCC) [5] restricts the total token resource consumed by an end-user, so no matter how many connections the end-user has set up, it cannot obtain extra bandwidth resources when TBCC is used.
In this dissertation work a new and better mechanism for congestion control, with application to packet loss in networks with P2P traffic, is proposed and implemented. In this new method the edge and core routers record a measure of the quality of service guaranteed by the router by writing a number in the Option Field of the packet's datagram. This number is called a token. The token is read by the routers on the path and interpreted: its value gives a measure of the congestion, especially at the edge routers. Based on the token value, the edge router throttles traffic at the source, thus reducing the congestion on the path. In Token-Limited Congestion Control (TLCC) [9], the inter-domain router restricts the total output token rate to peer domains. When the output token rate exceeds the threshold, TLCC decreases the Token-Level of output packets, and the output token rate then decreases.
1.1: Definition of Congestion in a Network
Congestion can be defined as a network state in which the total demand for resources, e.g. bandwidth, among competing users exceeds the available capacity, leading to packet loss and resulting in packet retransmissions.
1.2: What Causes Congestion in a Network
• Congestion occurs when the number of packets being transmitted through the network approaches the packet-handling capacity of the network.
• Congestion occurs when bandwidth is insufficient and network data traffic exceeds capacity.
• When the number of packets in a router's buffer exceeds the router's capacity, packets are dropped.
• A router buffer too small to hold the waiting packets leads to packet loss.
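The last two causes can be illustrated with a minimal drop-tail router buffer. This class is a sketch for illustration only, not part of the proposed system:

```python
from collections import deque

class DropTailQueue:
    """Minimal drop-tail router buffer: arrivals beyond the buffer
    capacity are simply lost, which is the packet-drop behaviour
    described in the causes above."""

    def __init__(self, capacity: int):
        self.buf = deque()
        self.capacity = capacity
        self.dropped = 0

    def enqueue(self, pkt) -> bool:
        """Return True if the packet was buffered, False if dropped."""
        if len(self.buf) >= self.capacity:
            self.dropped += 1  # buffer full: the packet is lost
            return False
        self.buf.append(pkt)
        return True
```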
1.3: Effect of Congestion on Network
Congestion in a network can cause serious damage and may render the network unusable to its users at times. Serious effects of congestion on the network include:
• Increase in the queuing delay
• Packet loss
• Blocking of new connections
• Decrease in the throughput
1.4: Congestion Control
Congestion control deals with adapting the source send rate to the bandwidth available to the transport connection, which varies over time in a non-predictable way because the network is shared by many applications. Consider the data flow from a TCP source to a TCP receiver. The goal of TCP congestion control is three-fold:
• To reduce the source send rate when the network is congested [more precisely, when the links on the network path of the TCP connection have queues that are growing very large].
• To increase the source send rate when the network is not congested [so as to exploit bandwidth when it becomes available].
• To share the network resources (i.e., link bandwidth and buffer) with other TCP flows in a “fair” way
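These three goals are realised in TCP by the additive-increase/multiplicative-decrease (AIMD) window update, sketched below in simplified form; real TCP implementations (Reno, NewReno) add fast retransmit and fast recovery details that are omitted here:

```python
def aimd_step(cwnd: float, ssthresh: float, loss: bool):
    """One round-trip step of TCP-style window control (simplified).
    Returns the new (cwnd, ssthresh) pair.
    - On loss: multiplicative decrease (halve the window).
    - Below ssthresh: slow start (double the window per RTT).
    - Above ssthresh: congestion avoidance (add one segment per RTT)."""
    if loss:
        ssthresh = max(cwnd / 2, 1.0)
        return ssthresh, ssthresh
    if cwnd < ssthresh:
        return cwnd * 2, ssthresh
    return cwnd + 1, ssthresh
```

Halving on loss serves the first goal, growth in the absence of loss serves the second, and the shared halve-and-probe dynamic is what lets competing flows converge to a fair share, serving the third.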
1.5.1: End-to-End approach (Source base)
Source-based congestion control methods are reactive in nature, i.e. the source host reacts after getting congestion signals from the network by reducing its transmission speed. TCP uses implicit congestion signals: packet loss, delay, or a combination of both. Based on the type of congestion signal, source-based approaches are further categorized as loss-based, delay-based, and hybrid approaches.
1.5.2: Router Approach
Router-based congestion control methods are proactive in nature. The router continuously measures the traffic load and, if there are symptoms of the traffic overload that causes congestion, it signals the source hosts to slow down their transmission speed before congestion occurs. Router-based methods send an incipient congestion signal by marking packets with a mark probability. Based on the criteria used to calculate the mark probability, router-based methods are categorized as queue-length-based, rate-based and hybrid approaches. Based on the nature of the solution, router-based approaches are classified as heuristic, optimization and control-theoretic approaches (Adams, 2013).
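A queue-length-based mark probability of the kind described here can be sketched in the style of RED (Random Early Detection); the threshold values are illustrative parameters:

```python
def red_mark_prob(avg_q: float, min_th: float, max_th: float,
                  max_p: float = 0.1) -> float:
    """RED-style marking: probability is 0 below min_th, rises
    linearly to max_p at max_th, and every packet is marked (or
    dropped) once the average queue exceeds max_th."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)
```

Marking early and probabilistically, rather than waiting for the buffer to overflow, is what makes the router approach proactive: sources are told to slow down while the queue is still manageable.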
STABLE TOKEN-LIMITED CONGESTION CONTROL (STLCC):
STLCC is able to shape output and input traffic at the inter-domain link with O(1) complexity. STLCC produces a congestion index, pushes packet loss to the network edge and improves network performance. To solve the oscillation problem, Stable Token-Limited Congestion Control (STLCC) integrates the algorithms of TLCC and XCP [10] together. In STLCC, the output rate of the sender is controlled according to the XCP algorithm, so there is almost no packet loss at the congested link. At the same time, the edge router allocates all the access token resources to the incoming flows equally. When congestion happens, the incoming token rate increases at the core router, and the congestion level of the congested link then also increases. Thus STLCC can measure the congestion level analytically, allocate network resources according to the access link, and so keep the congestion control system stable.
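The Token-Level adjustment at the inter-domain router can be sketched as follows. The exact update rule used by TLCC/STLCC is not reproduced in this chapter, so the unit step sizes here are illustrative assumptions:

```python
def update_token_level(token_level: int, output_token_rate: float,
                       threshold: float, max_level: int = 15) -> int:
    """TLCC-style control at the inter-domain router: when the output
    token rate to a peer domain exceeds the threshold, lower the
    Token-Level stamped on outgoing packets so the token rate falls;
    otherwise let the level recover toward its maximum."""
    if output_token_rate > threshold:
        return max(token_level - 1, 0)
    return min(token_level + 1, max_level)
```

Lowering the Token-Level when the token rate is too high, and raising it again when the rate subsides, is the closed loop that keeps the inter-domain token rate near the threshold.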
TOKEN
In this work a new and better mechanism for congestion control, with application to packet loss in networks with P2P traffic, is proposed. In this new method the edge and core routers record a measure of the quality of service guaranteed by the router by writing a number in the Option Field of the packet's datagram. This number is called a token. The token is read by the routers on the path and interpreted: its value gives a measure of the congestion, especially at the edge routers. Based on the token value, the edge router throttles traffic at the source, thus reducing the congestion on the path.
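A minimal sketch of the token mechanism follows, assuming the token is carried as a field on the packet and that the edge router applies a simple threshold policy. Both the field layout and the rate-adjustment rule are illustrative assumptions, not the thesis's exact design:

```python
def stamp_token(packet: dict, local_congestion: int) -> None:
    """Each router on the path writes its congestion measure into the
    packet's option field (the 'token'), keeping the worst level seen
    so far so the token reflects the most congested hop."""
    packet["token"] = max(packet.get("token", 0), local_congestion)

def adjust_source_rate(current_rate: float, token: int,
                       threshold: int = 4) -> float:
    """Edge router at the source: if the token read back from the path
    signals congestion beyond a threshold, halve the send rate;
    otherwise probe gently upward (illustrative policy only)."""
    if token > threshold:
        return current_rate / 2
    return current_rate * 1.1
```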
CORE ROUTER:
A core router is a router designed to operate in the Internet backbone, or core. To fulfil this role, a router must be able to support multiple telecommunications interfaces of the highest speed in use in the core Internet, and must be able to forward IP packets at full speed on all of them. It must also support the routing protocols used in the core. A core router is distinct from an edge router.
EDGE ROUTER:
Edge routers sit at the edge of a backbone network and connect to core routers. The token is read by the routers on the path and interpreted: its value gives a measure of the congestion, especially at the edge routers. Based on the token value, the edge router throttles traffic at the source, thus reducing the congestion on the path.
1.5.3: Performance Measure of congestion control
• Efficiency: sending packets faster than the bottleneck capacity ensures utilization of all available network resources between source and destination; with appropriate use of erasure codes, almost all delivered packets will be useful.
• Simplicity: because coding renders packet drops (and reordering) inconsequential, it may be possible to simplify the design of routers and dispense with the need for expensive, power-hungry fast line-card memory.
• Stability: decongestion transforms a sender's main task from adjusting its transmission rate to ensuring an appropriate encoding. Unlike the former, one can design a protocol that adjusts the latter without impacting other flows.
• Robustness: existing congestion control protocols are susceptible to a variety of sender misbehaviours, many of which cannot be mitigated by router fairness enforcement. Because end points are already forced to cope with high levels of loss and reordering in steady state, decongestion is inherently more tolerant. The transmit time for data is usually dependent upon internal network parameters such as communication media data rates, buffering and signalling strategies, routing, and propagation delays.
2.1 Problem Statement
Despite the different solutions, congestion control in telecommunication networks struggles with two major problems that are not completely solved. The first is the time-varying delay between the control point and the traffic sources. The second is related to the possibility that the traffic sources do not follow the feedback signal; this may happen because some sources are silent, as they have nothing to transmit. Congestion control of the best-effort service in the Internet was originally designed for a cooperative environment. It is still mainly dependent on the TCP congestion control algorithm at terminals.
Also, limited evidence shows that the large majority of end-points on the Internet comply with a TCP-friendly response to congestion. But if they did not, it would be hard to force them to, given that path congestion is only known at the last egress of an internetwork, while policing is most useful at the first ingress.
Without knowing what makes the current cooperative consensus stable, we may unwittingly destabilize it. At the most alarmist, if this were to lead to congestion collapse [7] there would be no obvious way back. Even now, applications that need to be unresponsive to congestion can effectively steal whatever share of bottleneck resources they want from responsive flows. In the existing system, moreover, the sender sends packets without any control at intermediate stations: many data packets are lost, time is wasted, and retransmission of the lost packets is difficult.
2.2 Proposed Work (solution)
The solution proposed in this dissertation work uses router-based control. To alleviate the scalability problems that have plagued per-flow solutions such as Fair Queueing [1], new architectures have recently been proposed: Stateless Core (SCORE) [5], [6], [7], [8]. A key common feature of these architectures is the distinction between edge and core routers in a trusted network domain (see Figure 1). Scalability is achieved by not requiring core routers to maintain any per-flow state, and by keeping per-flow or per-aggregate state only at the edge routers.
System model
Fig. 1
With SCORE, edge routers maintain per flow state and use this state to label the packets [5], [6], [7], [8]. Core routers maintain no per flow state. Instead they process packets based on the state carried by the packets, and based on some internal aggregate state.
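The per-flow state kept at a SCORE/CSFQ edge router is essentially a rate estimate that is stamped into each packet as its label. A sketch of the exponential-averaging estimator used for this in CSFQ follows; the averaging constant `k` is a tunable parameter:

```python
import math

def update_rate_label(prev_rate: float, pkt_bytes: float,
                      interarrival_s: float, k: float = 0.1) -> float:
    """Edge-router per-flow rate estimate, CSFQ style: the new sample
    (packet size over interarrival time) is blended with the previous
    estimate using a weight exp(-T/K) that decays with the
    interarrival time T, so bursty flows are smoothed while long-idle
    flows quickly forget their old rate."""
    w = math.exp(-interarrival_s / k)
    return (1 - w) * (pkt_bytes / interarrival_s) + w * prev_rate
```

The resulting label travels in the packet, which is precisely the "state carried by the packets" that lets core routers stay stateless.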
In particular, we propose a design called Self-Verifying CSFQ (SVCSFQ) that uses statistical flow verification to check the rate estimates. It is important to note that what makes our approach possible in the case of SCORE solutions is the state carried by the packets. Routers no longer blindly trust the incoming rate estimates; instead they statistically verify them and contain flows whose packets are incorrectly labelled.
2.3 Objective
The main objective of this work is to implement a system that uses statistical verification to identify and contain the flows whose packets carry incorrect information. Thus, flows with incorrect estimates cannot seriously impact other traffic.
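The verification step can be sketched as a comparison between a sampled flow's measured arrival rate and the rate label its packets carry. The tolerance value and the policing response below are illustrative assumptions, not the system's exact parameters:

```python
def verify_flow(labelled_rate: float, measured_rate: float,
                tolerance: float = 0.2) -> bool:
    """A sampled flow passes verification if its measured arrival rate
    does not exceed its carried rate label by more than the tolerance."""
    return measured_rate <= labelled_rate * (1 + tolerance)

def contain(labelled_rate: float, measured_rate: float) -> float:
    """Return the rate the flow is allowed to keep: mislabelled flows
    are contained by policing them down to their own labelled rate, so
    an incorrect label cannot seriously impact other traffic."""
    if verify_flow(labelled_rate, measured_rate):
        return measured_rate   # honest flow: leave it alone
    return labelled_rate       # mislabelled flow: cap at its label
```

Because a cheating flow gains nothing by under-labelling (it is policed at the label it chose), statistical verification aligns incentives without per-flow state at every core router.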
Thesis structure
