Quality Of Service — QoS | Part 2

Obtaining QoS on the Internet

When the Internet was created, no need was perceived for QoS at the application level. In fact, the entire Internet follows a best-effort philosophy: the system promises to do everything possible to complete an operation, but gives no guarantee that the transaction will be completed, nor how. Although the IP header provides 4 bits for the type of service and 3 bits for the precedence (priority) of each packet, these bits are largely unused.
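These bits can in fact be set from user space. Below is a minimal sketch, assuming a Linux-style socket API: it marks outgoing UDP datagrams with the DSCP Expedited Forwarding code point (46), an example value chosen here purely for illustration (modern practice redefines the old ToS/precedence bits as a 6-bit DSCP field). Routers are free to ignore the marking.

    import socket

    # Minimal sketch: mark outgoing datagrams with a DSCP value via the IP ToS byte.
    # The DSCP value (46, "Expedited Forwarding") is an illustrative choice.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    dscp_ef = 46
    # DSCP occupies the upper 6 bits of the former ToS byte, hence the shift by 2.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp_ef << 2)
    sock.sendto(b"hello", ("192.0.2.1", 5004))  # example destination address/port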

There are basically two ways to provide quality-of-service guarantees:

Quality of Service — Over-Provisioning

The first method, called over-provisioning, is to provide resources in abundance: enough to meet the expected peak demand, with a substantial safety margin. It is a simple solution, but some believe that in practice it is too expensive, and it does not work if peak demand grows faster than predicted, since provisioning new resources takes time.

Priority

The alternative is to manage the available bandwidth, ensuring that packets that must be guaranteed a certain QoS receive preferential treatment. To achieve this, two problems must be solved:

  • Identify packets that should receive preferential treatment.
  • Apply to these packets a queue discipline that ensures the required performance, applied at the output ports of the router.

Quality of Service — Classification

Structured methods for identifying traffic priority are:

  • Integrated Services, based on reservations: before starting a session that has QoS requirements, the application must “ask” whether the network can guarantee the required performance. The network assesses whether it has adequate resources and, if so, accepts the reservation.
  • Differentiated Services, in which network users first enter into a contract that defines the maximum amount of “privileged” traffic they can generate, and mark the traffic to be given priority using the Type of Service field of the IP header. Reservations are therefore “static”.

Especially in small networks, simpler methods can be used, in which the traffic to be prioritized is identified manually on the routers, typically using access control lists (ACLs).
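As an illustration, the sketch below mimics ACL-style classification in Python; the rule format, port numbers, and queue indices are hypothetical and do not follow any vendor’s ACL syntax.

    # Hypothetical ACL-like rules: match on protocol and destination port,
    # assign the matching packet to an output queue (0 = highest priority).
    ACL_RULES = [
        {"protocol": "udp", "dst_port": 5060, "queue": 0},  # e.g. VoIP signalling
        {"protocol": "tcp", "dst_port": 22,   "queue": 1},  # e.g. interactive SSH
    ]
    DEFAULT_QUEUE = 2  # everything else is treated as best effort

    def classify(packet):
        """Return the output queue index for a packet described as a dict."""
        for rule in ACL_RULES:
            if (packet["protocol"] == rule["protocol"]
                    and packet["dst_port"] == rule["dst_port"]):
                return rule["queue"]
        return DEFAULT_QUEUE

    print(classify({"protocol": "udp", "dst_port": 5060}))  # -> 0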

Queue disciplines

In a router that does not apply QoS policies, packets are transmitted on the outgoing ports in the order in which they arrived. A queue discipline essentially consists of managing several queues for each outgoing port, into which packets are classified. The queue discipline then decides in what order packets are taken from the different queues.

Examples of queue disciplines (a sketch of the first two follows the list):

  • Priority queuing: the queues are ordered by priority. Each time a packet must be transmitted, it is taken from the highest-priority queue that has a packet ready. In this way, a higher-priority application can monopolize the entire available bandwidth to the detriment of lower-priority ones (starvation).
  • Weighted round robin: a packet is taken in turn from each queue. In this way, all classes of applications are guaranteed to be transmitted. “Weighted” means that each queue can be assigned a weight, i.e. a fraction of the available bandwidth, and packets are taken so as to respect that allocation. If a traffic class at a certain moment does not use its allocated bandwidth, the bandwidth can be used by others (bandwidth borrowing).
  • Advanced queue disciplines, such as Hierarchical Packet Fair Queueing (H-PFQ) and Hierarchical Fair Service Curve (H-FSC), which allow each queue to express requirements on both bandwidth and delay. These are currently available only on software routers based on BSD or Linux.
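The sketch below illustrates the first two disciplines in Python; queue counts, weights, and the packet representation are assumptions made for the example.

    from collections import deque

    class StrictPriorityScheduler:
        """Priority queuing: always serve the highest-priority non-empty queue."""
        def __init__(self, num_queues):
            self.queues = [deque() for _ in range(num_queues)]  # index 0 = highest priority

        def enqueue(self, packet, priority):
            self.queues[priority].append(packet)

        def dequeue(self):
            for q in self.queues:
                if q:
                    return q.popleft()
            return None  # note: lower-priority queues can starve if queue 0 is never empty

    class WeightedRoundRobinScheduler:
        """Weighted round robin: each queue gets up to `weight` packets per round."""
        def __init__(self, weights):
            self.weights = weights
            self.queues = [deque() for _ in weights]

        def enqueue(self, packet, class_index):
            self.queues[class_index].append(packet)

        def serve_round(self):
            served = []
            for q, weight in zip(self.queues, self.weights):
                for _ in range(weight):
                    if not q:
                        break  # an unused share is implicitly available to other classes
                    served.append(q.popleft())
            return served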

Other tools used to manage the available bandwidth:

  • RED (Random Early Detection, also known as Random Early Drop): when congestion approaches, the network randomly discards a small percentage of traffic. TCP interprets this as an indication of congestion and reduces the amount of traffic sent. A special case of this technique, called WRED (Weighted Random Early Detection), makes it possible to choose the traffic flows from which to start dropping packets in the presence of congestion. With WRED, thresholds of link utilization can be defined which, once reached, cause packets belonging to specific classes of traffic to be dropped. When the first threshold is reached, only packets of unimportant flows are discarded, but as ever-higher thresholds are reached, packets belonging to more important traffic flows are discarded as well. “Weighted” means that the traffic class that will experience the largest number of dropped packets is the one associated with the lowest threshold. The thresholds and their association with the different traffic flows are defined through configuration (see the sketch after this list).
  • Rate limiting: a class of traffic can be restricted so that it does not use more than a certain amount of bandwidth.
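A minimal sketch of a WRED-style drop decision follows, assuming a linear drop probability between a per-class minimum and maximum average queue depth; the threshold values and class names are illustrative, not taken from any particular implementation.

    import random

    class WredClass:
        def __init__(self, min_threshold, max_threshold, max_drop_prob):
            self.min_threshold = min_threshold  # start dropping above this average queue depth
            self.max_threshold = max_threshold  # drop everything above this average queue depth
            self.max_drop_prob = max_drop_prob  # drop probability just below max_threshold

    def should_drop(avg_queue_depth, cls):
        """Decide whether an arriving packet of the given class is dropped."""
        if avg_queue_depth < cls.min_threshold:
            return False
        if avg_queue_depth >= cls.max_threshold:
            return True
        # Between the thresholds, the drop probability grows linearly.
        span = cls.max_threshold - cls.min_threshold
        prob = cls.max_drop_prob * (avg_queue_depth - cls.min_threshold) / span
        return random.random() < prob

    # Unimportant traffic is associated with the lower threshold, so it is dropped first.
    best_effort = WredClass(min_threshold=20, max_threshold=40, max_drop_prob=0.10)
    priority    = WredClass(min_threshold=35, max_threshold=50, max_drop_prob=0.02)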

The market has not favored the adoption of QoS. Some believe that a “dumb” network providing enough bandwidth for most applications, most of the time, is already economically the best possible solution, and show little interest in supporting non-standard QoS-capable applications.

The Internet already involves complex arrangements between providers, and there seems to be little enthusiasm for supporting QoS across connections that span networks belonging to different providers, or for agreements on the policies that would have to be adopted to support it.

QoS skeptics point out that if too many packets are being discarded on elastic connections with low QoS, the network is already dangerously close to the point of congestion for inelastic applications with high QoS, since there is no longer any way to discard additional packets without violating traffic contracts.

QoS Problems With Some Technologies

The following features are acceptable only on end (edge) ports, but not on server, backbone, or other ports that carry many competing flows:

  • Half duplex: collisions on the link can cause variable delays (jitter), because after each collision packets are delayed by a back-off time.
  • Ports with queue buffering via IEEE 802.3x (flow control).

IEEE 802.3x flow control is not real flow control, but queue control. An example of the problems it causes is head-of-line blocking. Many of today’s switches enable IEEE 802.3x by default, even on uplink/backbone ports.

Source: Wikipedia, the free encyclopedia. The text is available under a Creative Commons license.
