Tuesday, January 12, 2016

Quantifying traffic policer rate with Burst-Size value - test method and calculation

Quality of Service (QoS) dictates how a packet or flow is handled in the networking world. The task is not as simple as this magic three-letter word sounds: many QoS functions must work together to accomplish it. QoS complexity is dictated by the number of modifications performed to return the intended result, and QoS concepts are hard to grasp for many networking professionals.

Let's look at one QoS function: "POLICING/RATE-LIMITING".

Traffic policing is one of the most commonly used QoS features. A policer (rate-limiter) admits only a defined rate for the flow of interest. Various sub-actions are also possible for conforming and violating traffic, such as setting a queue number or marking a specific bit in the packet.

A typical traffic policing configuration looks similar to:
 police cir 100mbps bc 200ms pir 200mbps be 20ms conform-action transmit exceed-action set-dscp 3 violate-action drop

The above configuration allows a traffic rate of 100 Mbps, sets DSCP value 3 for traffic between 100 Mbps and 200 Mbps, and drops any traffic beyond that.

Most network engineers don't realize that traffic is not actually policed at exactly 100 Mbps. There is more to it, and BC is what defines it.

What does "BC - Burst Count" do?
Why is my actual policer rate higher than the defined rate?
How does it help with practical traffic flows?

Let's find the answers here.

Burst count adjusts the policer to absorb traffic bursts. Real-world traffic is bursty by nature, and handling bursts gives you control over the policer rate without lowering the drop point. It is very rare to see a constant-rate data flow; even when you observe a high traffic rate, it is almost certainly made up of several mice flows rather than one giant elephant flow.

A BC expressed in time translates to bytes based on the policer rate.

For a 100 Mbps policer rate with a 200 ms BC, the policer burst size in bytes is:

(100 x 1000 x 1000 bps x 0.2 sec) / (8 bits per byte) = 2,500,000 bytes = 2500 KB
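The conversion above is easy to script. Here is a minimal Python sketch; the helper name `bc_to_bytes` is my own for illustration, not a vendor tool:

```python
def bc_to_bytes(cir_bps: float, bc_seconds: float) -> float:
    """Convert a burst count (BC) expressed in time to bytes.

    cir_bps    -- committed information rate in bits per second
    bc_seconds -- burst duration in seconds
    """
    return cir_bps * bc_seconds / 8  # 8 bits in a byte

# 100 Mbps CIR with a 200 ms BC -> 2500 KB of burst
burst_bytes = bc_to_bytes(100 * 1000 * 1000, 0.2)
print(round(burst_bytes))  # 2500000
```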

On a 10 Gbps port, this 2500 KB burst drains at line rate in (2500 KB x 8) / 10 Gbps = 2 ms.

That 2 ms of line-rate traffic rides on top of the CIR: over a measurement interval T, the effective policer rate is CIR + (BC in bits) / T. For example, over a 100-second test the 2500 KB (20 Mbit) burst adds 0.2 Mbps, so the effective policer rate is 100.2 Mbps.
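The two calculations can be put together in a short sketch. This assumes the token bucket starts full and the measurement runs for a fixed duration; both helper names are mine:

```python
def burst_drain_ms(burst_bytes: float, port_bps: float) -> float:
    """Time in ms for a full burst to drain at port line rate."""
    return burst_bytes * 8 * 1000 / port_bps

def effective_rate_mbps(cir_bps: float, burst_bytes: float,
                        test_seconds: float) -> float:
    """Measured policer rate over a test of the given duration,
    assuming the bucket is full when the test starts."""
    return (cir_bps + burst_bytes * 8 / test_seconds) / 1e6

# 2500 KB burst drains in 2 ms on a 10 Gbps port
print(burst_drain_ms(2_500_000, 10 * 10**9))             # 2.0
# Over a 100-second test the burst adds 0.2 Mbps on top of CIR
print(effective_rate_mbps(100 * 10**6, 2_500_000, 100))  # 100.2
```

Note that the shorter the test, the larger the burst's contribution: the same 2500 KB burst measured over 1 second would add 20 Mbps instead of 0.2 Mbps.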

Typical BC metrics are expressed either in time (ms/microseconds/sec) or in bytes. Now that you know the conversion, the effective policer rate can easily be determined.
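To see why a full bucket lets extra traffic through, here is a minimal single-rate token-bucket policer simulation. This is an illustrative sketch under my own assumptions (continuous token refill, bucket initially full), not any vendor's implementation:

```python
class TokenBucketPolicer:
    """Minimal single-rate token-bucket policer (illustrative sketch)."""

    def __init__(self, cir_bps: float, bc_bytes: float):
        self.rate = cir_bps / 8   # token refill rate, bytes per second
        self.depth = bc_bytes     # bucket depth = BC in bytes
        self.tokens = bc_bytes    # assume the bucket starts full
        self.last = 0.0           # timestamp of the previous packet

    def conform(self, size_bytes: int, now: float) -> bool:
        """Return True if the packet conforms (is transmitted)."""
        # Refill tokens for the elapsed time, capped at bucket depth.
        self.tokens = min(self.depth,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size_bytes:
            self.tokens -= size_bytes
            return True
        return False  # violate action, e.g. drop

# A 100 Mbps policer with a 2500 KB BC: a back-to-back burst of
# 1500-byte packets at time 0 conforms until the bucket is empty,
# even though its instantaneous rate far exceeds 100 Mbps.
p = TokenBucketPolicer(cir_bps=100e6, bc_bytes=2_500_000)
passed = sum(p.conform(1500, now=0.0) for _ in range(2000))
print(passed)  # 1666 packets (~2500 KB) pass before drops begin
```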

Friday, January 1, 2016

Data Center Switch Market - Black magic in White Box

Deploy, manage, and operate data center switches much the way you operate a server. Buy commodity hardware and run the operating system you like. These two ideas gave birth to WHITE BOX switches in the switching market.

In campus and data center networking, Top of Rack (ToR) or leaf switches are the most widely deployed networking infrastructure. Every network interconnect goes through LAN switches, and in the data center the ToR is the first server interconnect point: every server gets a link to a ToR switch, either directly or through extended links. The white box evolution promises to make a big impact on the ToR market with low-priced commodity switches. Data center switching has made giant strides in recent times; switches are now purpose-built rather than general-purpose, and there has been a quantum jump in the amount of traffic they handle. I have used 100 Mbps switches to connect servers - gone are those days!

The white box ecosystem consists of:

  • Merchant silicon companies (networking ASICs) - e.g. Broadcom, XPliant, NXP, Intel
  • Bare-metal switch providers - e.g. Quanta Computer Inc., Pica8, Accton, Celestica
  • Network OS vendors - e.g. Cumulus, Pica8, Big Switch Networks, Juniper, Dell, etc.

The ASIC, the hardware, and the OS together make a switch. Having a choice of vendors for each of these makes life easier for customers, and this healthy competition is laying the foundation stones for a fruitful future for white box switches.

I am neither a supporter nor an opponent of white box solutions. My data center domain expertise simply leaves me with the following questions.

  1. Server OS != Switch OS - I can't accept that the two are equivalent.
    • The x86 architecture has been around for a long time - practically ever since the computer industry went mainstream.
    • The server OS market is very large, big enough to keep multiple providers busy (desktop, application hosting, cloud, campus, cell phones, and labs).
    • Networking processors still have to go through multiple sprints to reach maturity equivalent to PC processors, and evolving protocols need new capabilities in the ASIC.
  2. Support onus - ASIC vendor, switch manufacturer, and OS vendor: out of these three pillars, who takes ownership when an issue arises?
  3. Catching up with standards - networking giants like Cisco, Juniper, and Brocade have an edge over merchant silicon vendors in many new protocols.
  4. Support cost is directly proportional to the knowledge base - a new box/OS means a new training cycle for IT engineers.
Certainly, WHITE BOX solutions are an important catalyst for the SDN and NFV evolution. Data center and enterprise networking is going through a big consolidation phase, and the white box battle will certainly put a big dent in existing networking vendors' revenue. I am sure they have plans to sail through this headwind.

Open standards are a must to increase innovation and thereby tackle the digital divide across the globe. Only time can reveal the effectiveness of this black magic.