Scalable TCP
Scalable TCP is a variant of the Transmission Control Protocol (TCP) designed to provide much higher throughput and scalability. Standard TCP, per the recommendations of RFC 2581 and RFC 5681, halves the congestion window for each packet lost. Effectively, this process keeps halving the throughput until packet loss stops. Once the packet loss subsides, slow start kicks in to ramp the speed back up. When window sizes are small, say at 1 Mbit/s with a 200 ms round-trip time, where the window is about 20 packets, this recovery is quite fast, on the order of a few seconds. But as transfer speeds approach 1 Gbit/s, the recovery time becomes half an hour, and for 10 Gbit/s it is over 4 hours.

Procedure
Scalable TCP modifies the congestion control algorithm. Instead of halving the congestion window size, each packet loss decreases the congestion window by a small fraction (a factor of 1/8 instead of standard TCP's 1/2) until packet loss stops. When packet loss stops, the rate is ramped up at a slow f ...
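To make the contrast concrete, here is a toy per-round-trip model of window recovery after a single loss. It is an illustrative sketch, not a real TCP implementation, using the a = 0.01 per-ACK increase and b = 1/8 loss decrease proposed for Scalable TCP:

```python
def rtts_to_recover_standard(w):
    """Standard TCP: a loss halves cwnd, which then grows by roughly
    one packet per round trip, so recovery takes about w/2 RTTs."""
    cwnd, rtts = w / 2.0, 0
    while cwnd < w:
        cwnd += 1.0              # additive increase: +1 packet per RTT
        rtts += 1
    return rtts

def rtts_to_recover_scalable(w, a=0.01, b=0.125):
    """Scalable TCP: a loss shrinks cwnd by the fraction b; each ACK then
    adds a packets, and one RTT carries about cwnd ACKs, so cwnd grows by
    a factor of (1 + a) per RTT. Recovery time is independent of w."""
    cwnd, rtts = (1.0 - b) * w, 0
    while cwnd < w:
        cwnd *= 1.0 + a
        rtts += 1
    return rtts

for w in (20, 1_000, 100_000):   # window sizes in packets
    print(f"w={w:>7}: standard ~{rtts_to_recover_standard(w)} RTTs, "
          f"scalable ~{rtts_to_recover_scalable(w)} RTTs")
```

The output shows standard TCP's recovery time growing linearly with the window (and hence with the bandwidth-delay product), while Scalable TCP recovers in about 14 round trips regardless of window size, which is exactly the scalability the protocol is named for.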



Transmission Control Protocol
The Transmission Control Protocol (TCP) is one of the main protocols of the Internet protocol suite. It originated in the initial network implementation in which it complemented the Internet Protocol (IP). Therefore, the entire suite is commonly referred to as TCP/IP. TCP provides reliable, ordered, and error-checked delivery of a stream of octets (bytes) between applications running on hosts communicating via an IP network. Major Internet applications such as the World Wide Web, email, remote administration, and file transfer rely on TCP, which is part of the transport layer of the TCP/IP suite. SSL/TLS often runs on top of TCP. TCP is connection-oriented, meaning that sender and receiver first need to establish a connection based on agreed parameters; they do this through a three-way handshake ...
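As a minimal illustration of this connection-oriented, reliable byte-stream service, the following Python sketch opens a TCP connection (the operating system performs the three-way handshake inside the connect call) and exchanges a few bytes; the host and port are just placeholder endpoints:

```python
import socket

# The three-way handshake (SYN, SYN-ACK, ACK) happens inside
# create_connection(); afterwards the socket behaves as a reliable,
# ordered byte stream, or raises an error if delivery fails.
with socket.create_connection(("example.com", 80), timeout=5) as sock:
    sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    reply = sock.recv(4096)   # bytes arrive in order; losses are retransmitted
    print(reply.split(b"\r\n")[0].decode("ascii", errors="replace"))
```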


Scalability
Scalability is the property of a system to handle a growing amount of work. One definition for software systems specifies that this may be done by adding resources to the system. In an economic context, a scalable business model implies that a company can increase sales given increased resources. For example, a package delivery system is scalable because more packages can be delivered by adding more delivery vehicles. However, if all packages had to first pass through a single warehouse for sorting, the system would not be as scalable, because one warehouse can handle only a limited number of packages. In computing, scalability is a characteristic of computers, networks, algorithms, networking protocols, programs and applications. An example is a search engine, which must support increasing numbers of users and a growing number of indexed topics. Webscale is a computer architectural approach that brings the capabilities of large-scale cloud computing companies into enterprise ...


Packet Loss
Packet loss occurs when one or more packets of data travelling across a computer network fail to reach their destination. Packet loss is caused either by errors in data transmission, typically across wireless networks, or by network congestion (Kurose, J.F. & Ross, K.W. (2010). Computer Networking: A Top-Down Approach. New York: Addison-Wesley). Packet loss is measured as a percentage of packets lost with respect to packets sent. The Transmission Control Protocol (TCP) detects packet loss and performs retransmissions to ensure reliable messaging. Packet loss in a TCP connection is also used to avoid congestion and thus produces an intentionally reduced throughput for the connection. In real-time applications like streaming media or online games, packet loss can affect a user's quality of experience (QoE).

Causes
The Internet Protocol (IP) is designed according to the end-to-end principle as a best-effort delivery service, with the intention of keeping the logic routers ...
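Since loss is reported relative to packets sent, the measurement itself is a one-line calculation; a trivial illustrative helper:

```python
def packet_loss_percent(sent, received):
    """Packet loss as a percentage of packets sent."""
    if sent == 0:
        raise ValueError("no packets were sent")
    return 100.0 * (sent - received) / sent

print(packet_loss_percent(1000, 987))   # 1000 sent, 987 delivered -> 1.3
```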


Network Congestion
Network congestion in data networking and queueing theory is the reduced quality of service that occurs when a network node or link is carrying more data than it can handle. Typical effects include queueing delay, packet loss or the blocking of new connections. A consequence of congestion is that an incremental increase in offered load leads to only a small increase in network throughput, or even a decrease. Network protocols that use aggressive retransmissions to compensate for packet loss due to congestion can increase congestion, even after the initial load has been reduced to a level that would not normally have induced it. Such networks exhibit two stable states under the same level of load. The stable state with low throughput is known as congestive collapse. Networks use congestion control and congestion avoidance techniques to try to avoid collapse. These include: exponential backoff in protocols such as CSMA/CA in 802.11 and the similar CSMA/C ...
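The exponential backoff mentioned above can be sketched in a few lines. This toy version uses seconds and randomized ("jittered") waits, whereas real CSMA protocols count discrete slot times, so treat the constants as illustrative:

```python
import random

def backoff_delays(failures, base=1.0, cap=60.0):
    """Truncated exponential backoff with jitter: after the n-th failed
    attempt, wait a random time up to base * 2**n, never exceeding cap."""
    return [random.uniform(0, min(cap, base * 2 ** n)) for n in range(failures)]

print([round(d, 2) for d in backoff_delays(6)])   # waits grow roughly 2x per failure
```

Randomizing the wait keeps colliding senders from retrying in lockstep, and doubling it backs the offered load off quickly once congestion appears.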


UDP-based Data Transfer Protocol
UDP-based Data Transfer Protocol (UDT) is a high-performance data transfer protocol designed for transferring large volumetric datasets over high-speed wide area networks. Such settings are typically disadvantageous for the more common TCP. Initial versions were developed and tested on very high-speed networks (1 Gbit/s, 10 Gbit/s, etc.); however, recent versions of the protocol have been updated to support the commodity Internet as well. For example, the protocol now supports rendezvous connection setup, which is a desirable feature for traversing NAT firewalls using UDP. UDT has an open source implementation which can be found on SourceForge. It is one of the most popular solutions for supporting high-speed data transfer and is part of many research projects and commercial products.

Background
UDT was developed by Yunhong Gu during his PhD studies at the National Center for Data Mining (NCDM) of the University of Illinois at Chicago, in the laboratory of Dr. Ro ...
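UDT implements rendezvous mode inside its own protocol; purely to illustrate the underlying idea, here is a generic sketch of rendezvous-style setup over plain UDP sockets (not UDT's actual API), in which both peers bind a known port and send to each other simultaneously, so that each side's outbound packet opens a NAT mapping the other's packets can traverse:

```python
import socket

def rendezvous(local_port, peer_addr, payload=b"hello", tries=10, timeout=1.0):
    """Both peers run this at the same time with roles swapped."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", local_port))
    sock.settimeout(timeout)
    for _ in range(tries):
        sock.sendto(payload, peer_addr)       # punch/refresh the NAT mapping
        try:
            data, addr = sock.recvfrom(2048)  # peer's packet got through
            return sock, addr
        except socket.timeout:
            continue                          # retry in lockstep with the peer
    raise TimeoutError("rendezvous failed")

# Usage on one peer (203.0.113.7 is a documentation-range placeholder):
# sock, peer = rendezvous(9000, ("203.0.113.7", 9000))
```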



Internet Protocols
The Internet protocol suite, commonly known as TCP/IP, is a framework for organizing the communication protocols used in the Internet and similar computer networks according to functional criteria. The foundational protocols in the suite are the Transmission Control Protocol (TCP), the User Datagram Protocol (UDP), and the Internet Protocol (IP). Early versions of this networking model were known as the Department of Defense (DoD) model because the research and development were funded by the United States Department of Defense through DARPA. The Internet protocol suite provides end-to-end data communication specifying how data should be packetized, addressed, transmitted, routed, and received. This functionality is organized into four abstraction layers, which classify all related protocols according to each protocol's scope of networking. An implementation of the layers for a particular application forms a protocol stack. From lowest to highest, the laye ...


TCP Congestion Control
Transmission Control Protocol (TCP) uses a congestion control algorithm that includes various aspects of an additive increase/multiplicative decrease (AIMD) scheme, along with other schemes including slow start and a congestion window (CWND), to achieve congestion avoidance. The TCP congestion-avoidance algorithm is the primary basis for congestion control in the Internet. Per the end-to-end principle, congestion control is largely a function of Internet hosts, not the network itself. There are several variations and versions of the algorithm implemented in protocol stacks of operating systems of computers that connect to the Internet. To avoid congestive collapse, TCP uses a multi-faceted congestion-control strategy. For each connection, TCP maintains a CWND, limiting the total number of unacknowledged packets that may be in transit end-to-end. This is somewhat analogous to TCP's sliding window used for flow control.

Additive increase/multiplicative decrease
The ad ...
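A minimal sketch of the AIMD rule in units of whole packets (real stacks track CWND in bytes and add slow start, fast retransmit, and other mechanisms omitted here):

```python
def aimd_update(cwnd, loss, alpha=1.0, beta=0.5):
    """One congestion-avoidance step: add alpha packets per RTT while
    deliveries succeed, multiply by beta on loss, never below 1 packet."""
    if loss:
        return max(1.0, cwnd * beta)   # multiplicative decrease
    return cwnd + alpha                # additive increase

cwnd = 10.0
for loss in (False, False, True, False, False):
    cwnd = aimd_update(cwnd, loss)
    print(cwnd)                        # 11.0, 12.0, 6.0, 7.0, 8.0
```

The sawtooth this produces (linear climb, sharp halving) is the characteristic throughput pattern of AIMD flows.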