
Evolution Over Time Of Network Multiple Chapters

4 round-trip times, regardless of the connection speed. HSTCP induces packet losses at a slower rate than STCP, but still much faster than TCP-Reno.

3. Problems of the Existing Delay-based TCP Versions

In contrast, TCP Vegas, Enhanced TCP Vegas, and FAST TCP are delay-based protocols. By relying on changes in queuing delay measurements to detect changes in available bandwidth, these delay-based protocols achieve higher average throughput with good intra-protocol RTT fairness (C. Jin, 2004). However, they have several deficiencies. For instance, both Vegas and FAST suffer from the reverse-path congestion problem, in which simultaneous forward- and reverse-path traffic on a single bidirectional bottleneck link cannot attain full link utilization. In addition, both Vegas and Enhanced Vegas employ a conservative window-increase strategy of at most one packet per RTT, leading to slow convergence to equilibrium when ample bandwidth is available. Although FAST has an aggressive window-increase strategy that yields faster convergence in high-speed networks, we shall see that it has trouble coping with uncertainty in the networking infrastructure.

Similar to Vegas and Enhanced Vegas, FAST TCP attempts to buffer a fixed number, α, of packets in the router queues on the network loop path. In high-speed networks, α must be sufficiently large to allow a delay-based protocol to measure the queuing delay. But with large values of α, the delay-based protocol imposes additional buffering requirements on the network routers as the number of flows increases; the router queues may not be able to handle the demand. If the buffering requirements are not fulfilled, the delay-based protocols suffer losses, which degrades their performance. Conversely, if α is too small, the queuing delay may not be detectable, and convergence to high throughput may be slow.
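The buffering pressure that a fixed set-point places on a shared queue can be illustrated with back-of-envelope arithmetic; the queue capacity, per-flow set-point, and flow counts below are illustrative assumptions, not values from any cited study.

```python
# Back-of-envelope check of the buffering demand a delay-based protocol
# places on a shared router queue. All numbers are illustrative.

def buffer_demand_packets(alpha, num_flows):
    """Each flow tries to keep ~alpha packets queued, so the shared
    queue must absorb roughly alpha * num_flows packets."""
    return alpha * num_flows

buffer_capacity = 5000   # router queue size in packets (assumed)
alpha = 30               # per-flow set-point (assumed FAST-style value)

for flows in (50, 100, 200):
    demand = buffer_demand_packets(alpha, flows)
    status = "OK" if demand <= buffer_capacity else "overflow"
    print(flows, demand, status)   # 200 flows already overflow the queue
```

The linear growth in demand with the number of flows is exactly why a fixed α breaks down as connections multiply.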

Ideally, in delay-based schemes a source's set-point α should be dynamically adjusted according to the link capacities, queuing resources, and the number of simultaneous connections sharing the queues. Determining a sensible and effective technique for dynamically setting a possibly time-varying set-point α(t) has remained an open problem. Examples of delay-based schemes include TCP Vegas (1), Enhanced TCP Vegas, and FAST TCP (C. Jin, 2004). While providing higher throughput than Reno and exhibiting good intra-RTT fairness, the delay-based schemes still have shortcomings in terms of throughput and the selection of a suitable α. In contrast to the marking/loss-based schemes, delay-based schemes primarily do not use marking/loss within their control strategies, often choosing to fall back to the tactics of TCP Reno when marking or loss is detected.
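The role of the set-point in the Vegas family can be sketched as follows; this is a minimal per-RTT update, not a complete Vegas implementation, and the thresholds and RTT values are illustrative assumptions.

```python
# Sketch of a Vegas-style window update around a set-point.
# diff estimates the number of packets the flow has queued in the network.

def vegas_update(cwnd, base_rtt, rtt, alpha=2, beta=4):
    """One per-RTT window adjustment (illustrative, not full Vegas)."""
    expected = cwnd / base_rtt              # throughput if no queueing
    actual = cwnd / rtt                     # measured throughput
    diff = (expected - actual) * base_rtt   # estimated queued packets
    if diff < alpha:
        return cwnd + 1     # under-utilizing: grow by one packet per RTT
    elif diff > beta:
        return cwnd - 1     # too many packets queued: shrink
    return cwnd             # within [alpha, beta]: hold steady

# Example: base RTT 100 ms, current RTT 120 ms, cwnd of 50 packets.
# diff = (500 - 416.7) * 0.1 ≈ 8.3 > beta, so the window shrinks.
print(vegas_update(50, 0.100, 0.120))   # -> 49
```

The conservative at-most-one-packet-per-RTT growth criticized above is visible directly in the `cwnd + 1` branch.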

4. Analytical Approaches

In terms of characterizing and providing analytical understanding of TCP congestion avoidance and control, several approaches based on stochastic modeling, control theory, game theory, and optimization theory have been presented (S. Kunniyur, 2003).

In particular, Frank Kelly gave a general analytical framework based on distributed optimization theory. In terms of providing analytical guidance to TCP congestion-avoidance methods utilizing delay-based feedback, Low (S. H. Low, 2002) developed a duality model of TCP Vegas, interpreting TCP congestion control as a distributed algorithm to solve a global optimization problem with the round-trip delays acting as pricing information. Within this framework, the resulting performance improvements of TCP Vegas and FAST TCP are better understood. Nonetheless, the development of additional analytical frameworks for TCP congestion avoidance remains necessary (S. Moscolo, 2006).
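The duality view can be made concrete with a minimal gradient sketch in the spirit of the Kelly/Low framework: each source maximizes its utility minus a price, and the link price (playing the role of queuing delay) rises when demand exceeds capacity. The log utilities, capacity, and step size are illustrative assumptions.

```python
# Minimal primal-dual sketch of price-based rate control.
# With U(x) = log(x), each source's best response is x = 1 / price.

capacity = 10.0        # single bottleneck link capacity (assumed units)
price = 0.1            # dual variable, interpreted as queueing delay
rates = [1.0, 1.0]     # two sources sharing the link
step = 0.01            # dual gradient step size

for _ in range(5000):
    # Source (primal) update: maximize log(x) - price * x  =>  x = 1/price.
    rates = [1.0 / price for _ in rates]
    # Link (dual) update: raise price if overloaded, lower it if idle.
    price = max(1e-6, price + step * (sum(rates) - capacity))

# With identical log utilities, equilibrium splits capacity equally.
print([round(r, 2) for r in rates])   # -> [5.0, 5.0]
```

At the fixed point the sum of rates equals the link capacity, which is the sense in which the decentralized sources "solve" the global optimization problem.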

Network calculus (NC) offers a mathematically rigorous approach to analyzing network performance, permitting a system-theoretic method of decomposing network demands into impulse responses and service curves by using a notion of convolution developed within the context of a certain min-plus algebra. Earlier, in (R. Agrawal, 1999), a window flow-control strategy based on NC with a feedback mechanism was developed, providing results concerning the impact of the window size on the performance of the session. With respect to determining the optimal window size, the work by R. Agrawal (1999) merely recognizes that the window size ought to be reduced when the network is congested and increased when extra resources are available. In (C. S. Chang, 2002), the authors extend NC analysis to time-variant settings, providing a framework useful for window flow control. However, they do not develop an optimal controller. In (F. Baccelli, 2000), a (max, +) approach similar to NC-based techniques is utilized to describe the packet-level dynamics of the loss-based TCP Reno (S. Moscolo, 2006) and Tahoe, and to calculate the TCP throughput. The work in…

In (J. Zhang, 2002), several NC-based analytical tools useful for general resource allocation and congestion control in time-varying networks are developed. In particular, the concept of an impulse response in a certain min-plus algebra has been used and extended to characterize each network element, and the methods are applied within a distributed sensor network scenario.
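The convolution at the heart of network calculus can be demonstrated in a few lines; the discrete time grid, the token-bucket arrival curve, and the rate-latency service curve below are illustrative choices, not taken from the cited works.

```python
# Tiny min-plus convolution, the core operation of network calculus:
# (f ⊗ g)(t) = min over 0 <= s <= t of f(s) + g(t - s).

def min_plus_conv(f, g):
    """Discrete-time min-plus convolution of two curves of equal length."""
    n = min(len(f), len(g))
    return [min(f[s] + g[t - s] for s in range(t + 1)) for t in range(n)]

# Arrival curve: token bucket with burst 5 and rate 2 packets per tick.
arrival = [5 + 2 * t for t in range(8)]
# Service curve: rate-latency server with latency 2 ticks, rate 3 per tick.
service = [max(0, 3 * (t - 2)) for t in range(8)]

# Their min-plus convolution lower-bounds the departures of the session.
print(min_plus_conv(arrival, service))   # -> [5, 5, 5, 7, 9, 11, 13, 15]
```

The flat prefix reflects the server's latency; afterwards departures grow at the slower of the two rates, which is exactly the bottleneck intuition the calculus formalizes.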

In a study of Internet traffic published in 1998, the dominant processes transmitting over TCP were file transfer, web, remote login, email, and network news. The applications related to these processes were the File Transfer Protocol (FTP), the Hypertext Transfer Protocol (HTTP), and TELNET (Willinger, Paxson, and Taqqu, 1998). This study focused on arrival patterns, data load, and duration with respect to packet transfer. The most frequent flow size for HTTP was about 1 KB or less. At the same frequency, FTP flow sizes were about 10 times larger than HTTP's.

Six years later, a flow-based traffic study of Internet applications at a university campus found that the bulk of the data was transferred over TCP. Two sets of data were collected for this study during a year. For each set, TCP dominated the byte and packet counts over the other observed protocols by about 90%. However, in terms of flows, UDP almost doubled the flow count of TCP for each set. The study also found that TCP flows were, on average, over five times larger than UDP flows, and that over 50% of the collected flows had a duration of less than 1 second. In addition to FTP, new file-transfer-type applications had emerged: peer-to-peer (P2P) file sharing and instant messaging (IM) had taken over as the most popular applications in terms of flows, packets, and bytes. HTTP was one of the most popular applications in terms of bytes transmitted, and IM applications dominated in terms of flow duration (Kim, 2004).

In a similar study conducted in 2006, some of the authors of the previous article found that TCP was still the dominant protocol in terms of byte and packet counts. UDP was still the dominant protocol in terms of flows, with roughly twice the flow count of TCP. At the application level, the applications transmitting over TCP had changed slightly. HTTP was the dominant application, but abnormal traffic over port 80 might have been the cause of the excess bytes. One of the most popular P2P applications was eDonkey. They also found that 50% of the traffic flows were composed of 3 packets, 500 bytes or less, and a duration of 1 second or less (Kim, Won, and Hong, 2006).

One year later, an hourly analysis of user-based network utilization from two Internet providers found that Internet applications transmitting over TCP were dominant. File-sharing applications over TCP dominated in terms of flow frequency and duration, and HTTP processes were displaced to second place (De Oliveira, 2007). The same year, a 3-year study on inbound and outbound network flows showed that overall network traffic was dominated by HTTP flows. This study was done at a university campus where students were discouraged from using file-sharing applications such as P2P. Data for this study were collected in 2000, 2003, and 2006. For every year of collected data, the TCP packet count significantly dominated those of UDP and the Internet Control Message Protocol (ICMP). They found that flow bytes and packets were highly correlated, and that flow size and duration were independent of each other (Lee and Brownlee, 2007).

In 2006, a study conducted in a campus-wide wireless network showed that the dominant applications were web and P2P, with web applications contributing over 40% more of the total bytes than P2P applications. The study does not mention whether P2P applications were blocked by campus network administrators. The study also categorizes other types of network processes and finds that although many applications do not contribute a significant percentage of the total bytes transferred, their contribution to the total flows has an impact on network performance (Ploumidis, Papadopouli, and Karagiannis, 2006).

These studies have examined the behavior of Internet protocols and popular applications in terms of flows, bytes, packets, and duration. Across the different studies, the datasets collected included data from Internet providers and university campus networks.

A weakness of the current TCP slow-start mechanism becomes apparent when the path has a large delay-bandwidth product (delay × bandwidth). In a network path with a large round-trip time (RTT) and high bandwidth, slow start is not fast enough. For example, it takes a long time to increase cwnd for…
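The scale of the problem can be sketched numerically: cwnd doubles once per RTT during slow start, so filling the pipe takes roughly log2(BDP in packets) round trips. The path parameters below are illustrative assumptions.

```python
# Why slow start is slow on large bandwidth-delay-product paths:
# cwnd doubles each RTT, so reaching the BDP takes ~ceil(log2(BDP)) RTTs.
import math

bandwidth_bps = 10e9   # 10 Gb/s path (assumed)
rtt_s = 0.2            # 200 ms round-trip time (assumed)
mss_bytes = 1500       # typical maximum segment size

# Bandwidth-delay product, expressed in MSS-sized packets.
bdp_packets = bandwidth_bps * rtt_s / 8 / mss_bytes
rtts_needed = math.ceil(math.log2(bdp_packets))

print(round(bdp_packets), rtts_needed)   # -> 166667 18
```

Eighteen RTTs at 200 ms each is about 3.6 seconds before the window first fills the pipe, which is why high-speed variants modify the slow-start and increase rules.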

References

B. Melander, M. Bjorkman, and P. Gunningberg, 2000. A new end-to-end probing and analysis method for estimating bandwidth bottlenecks. In IEEE GLOBECOM '00, volume 1, pages 415-420.

C. Dovrolis, P. Ramanathan, and D. Moore, 2001. What do packet dispersion techniques measure? In Proceedings of IEEE INFOCOM '01, volume 2, pages 905-914.

Cisco Systems Inc., 2008. NetFlow Introduction. http://www.cisco.com/en/U.S./tech/tk812/tsd_technology_support_protocol_home.html (accessed August 10, 2011)

C.-S. Chang, R. L. Cruz, J.-Y. Le Boudec, and P. Thiran, 2002. "A min-plus system theory for constrained traffic regulation and dynamic service guarantees," IEEE/ACM Transactions on Networking, vol. 10, no. 6, pp. 805-817.

Lancope, 2008. StealthWatch Management Console. http://www.lancope.com/products/stealthwatch-management-console/ (accessed August 10, 2011)