Generalized Processor Sharing

Generalized processor sharing (GPS) is an ideal scheduling algorithm for process schedulers and network schedulers. It is related to the fair-queuing principle, which groups packets into classes and shares the service capacity between them; GPS shares this capacity ''according to some fixed weights''. In process scheduling, GPS is "an idealized scheduling algorithm that achieves perfect fairness. All practical schedulers approximate GPS and use it as a reference to measure fairness." Generalized processor sharing assumes that traffic is fluid (infinitesimal packet sizes) and can be arbitrarily split. There are several service disciplines which track the performance of GPS quite closely, such as weighted fair queuing (WFQ), also known as packet-by-packet generalized processor sharing (PGPS).


Justification

In a network such as the internet, different application types require different levels of performance. For example, email is a genuinely store-and-forward kind of application, but videoconferencing is not, since it requires low latency. When packets are queued up on one end of a congested link, the node usually has some freedom in deciding the order in which it should send the queued packets. One example ordering is simply first-come, first-served, which works fine if the queues are small, but can result in problems if latency-sensitive packets are blocked by packets from bursty, higher-bandwidth applications.


Details

In GPS, a scheduler handling N flows (also called "classes", or "sessions") is configured with one weight w_i for each flow. Then, GPS ensures that, considering one flow i and some time interval (s,t] such that flow i is continuously backlogged on this interval (''i.e.'' its queue is never empty), then, for any other flow j, the following relation holds :w_j O_i(s,t) \geq w_i O_j(s,t) where O_k(s,t) denotes the amount of bits of flow k output on the interval (s,t]. It can then be proved that each flow i will receive at least the rate :R_i = \frac{w_i}{w_1 + \dots + w_N} R where R is the rate of the server. This is a minimal rate: if some flow does not use its bandwidth during some period, the remaining capacity is shared among the active flows in proportion to their respective weights. For example, consider a GPS server with w_1=2, w_2=w_3=1. The first flow will receive at least half of the capacity, whereas the other two get at least a quarter each. Nevertheless, if on some time interval (s,t] only the second and third flows are active, they will each receive half of the capacity.
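
The rate formula above can be illustrated with a short fluid-model computation. The following Python sketch (illustrative code, with assumed names such as gps_rates; it is not taken from any reference implementation) distributes the server rate R among the currently backlogged flows in proportion to their weights, which reproduces both the guaranteed minimal rates and the redistribution of unused capacity described above.

    def gps_rates(weights, backlogged, capacity):
        """Instantaneous service rates under fluid GPS.
        weights    -- positive weight w_i of each flow
        backlogged -- True if flow i currently has a non-empty queue
        capacity   -- total server rate R
        """
        active = sum(w for w, b in zip(weights, backlogged) if b)
        if active == 0:
            return [0.0] * len(weights)  # no flow is backlogged, nothing is served
        # Each backlogged flow gets the fraction w_i / (sum of active weights);
        # the shares of idle flows are automatically redistributed to active ones.
        return [capacity * w / active if b else 0.0
                for w, b in zip(weights, backlogged)]

    # Example from the text: w_1 = 2, w_2 = w_3 = 1, server rate R = 1.
    print(gps_rates([2, 1, 1], [True, True, True], 1.0))   # [0.5, 0.25, 0.25]
    print(gps_rates([2, 1, 1], [False, True, True], 1.0))  # [0.0, 0.5, 0.5]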


Implementations, parametrization and fairness

In GPS, and all protocols inspired by GPS, the choice of the weights is left to the network administrator. Generalized processor sharing assumes that the traffic is fluid, ''i.e.'', infinitely divisible, so that whenever an application type has packets in the queue, it will receive exactly the fraction of the server given by the formula above. However, real traffic is not fluid: it consists of packets, possibly of variable sizes. Therefore, GPS is mostly a theoretical idea, and several scheduling algorithms have been developed to approximate this GPS ideal: PGPS, also known as weighted fair queuing (WFQ), is the best-known implementation of GPS, but it has some drawbacks, and several other implementations have been proposed, such as deficit round robin (DRR) or WF2Q. GPS is considered as a fair ideal, and all its approximations "use it as a reference to measure fairness." Nevertheless, several fairness measures exist. GPS is insensitive to packet sizes, since it assumes a fluid model.
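
To illustrate how a packet-by-packet scheme can track the GPS ideal, the sketch below (hypothetical code, not a production WFQ/PGPS implementation) assigns each packet a finish tag equal to the time it would complete under fluid GPS and transmits packets in increasing tag order. For simplicity it assumes all packets are already queued at time 0, so the virtual-time bookkeeping that a real WFQ scheduler needs for packets arriving over time is omitted.

    def pgps_order(flows, weights):
        """flows: one list of packet lengths per flow; weights: w_i per flow.
        Returns the transmission order as (flow, packet) index pairs."""
        tags = []
        for i, packets in enumerate(flows):
            finish = 0.0
            for k, length in enumerate(packets):
                finish += length / weights[i]  # fluid-GPS finish time of this packet
                tags.append((finish, i, k))
        tags.sort()  # serve packets in increasing finish-tag order
        return [(i, k) for _, i, k in tags]

    # Two flows with equal weights: packets are interleaved instead of letting
    # the flow with the longer queue monopolize the link.
    print(pgps_order([[100, 100, 100], [100]], [1, 1]))  # [(0,0), (1,0), (0,1), (0,2)]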


See also

* Network scheduler
* Fair queuing
* Processor sharing
* Weighted fair queuing
* Deficit round robin
* Weighted round robin
* Statistical multiplexing
* Fairness measure

