Overview
This article discusses how to improve video over wireless performance. Many companies now use a range of video technologies such as live video streaming, webcasting, video conferencing and web conferencing. YouTube services remain popular and consume increasingly more bandwidth as companies use video for training and marketing purposes. Google's language translation service can now translate text, making English text-based videos accessible to other countries as well. Keep in mind that with the proliferation of company VPNs for security purposes, employees can work over wireless at the office as seamlessly as at home or on a public network. They can access the same video services from anywhere. This is why wireless is so popular now and why video-grade wireless infrastructure performance is needed.
The current 802.11a/g wireless access point is easily swamped when several clients start downloading large files and running video applications. The best solution for guaranteeing acceptable video performance is now the 802.11n wireless standard. According to a Cisco forecast study, the number of wireless devices on the internet will exceed wired devices by 2015 and account for 54% of IP traffic. In addition, video will account for 90% of consumer internet traffic by 2015.
Video Basics
Video and voice are real-time traffic streams that are sensitive to network congestion and the latency (delay) it causes. Video has both a data and an audio component. The same performance metrics that affect voice traffic across the internet and the company network, such as jitter, latency, packet loss and throughput, affect video as well. Packet loss has a greater effect on video, while latency affects voice much more. Guaranteeing specific service levels for video on the network can involve implementing QOS, increasing network bandwidth, network design changes and equipment changes. All of these improvements serve the purpose of making the network "video ready". Companies increasingly use web conferencing, webcasts and video conferencing for meetings and training, and colleges use them to deliver courses. It is a very cost effective way to decrease company travel costs.
Types of Video
It is worth discussing the various types of video services popular today and where, from a networking perspective, consumers source the content. Note that most of these services are delivered across the internet.
- Live Video Streaming over the internet of company webcasts and TV broadcasts, typically delivered to the desktop.
- Web Conferencing to the desktop with applications such as Skype and the very popular GoToMeeting service.
- Video Conferencing that runs from and across the company network, with Cisco TelePresence and equipment from vendors such as Tandberg and Polycom.
- Progressive Video Download from services such as YouTube to the desktop.
- Broadcast Video multicast of one-to-many video streams, with services such as Netflix.
Video Performance
H.323 defines a suite of protocols for audio and video traffic, including the H.264 and G.729 codecs. It is a framework for developing multimedia applications on a company network. G.729 is a popular audio codec that compresses audio traffic to 8 Kbps with a 10 ms delay. The H.264 video codec is the most recently adopted video compression standard. It specifies 24, 30 and 60 frames per second (fps) for high definition (HD) video conferencing, compressing roughly 1.5 Gbps of raw video down to about 4 Mbps at a resolution of 1920 x 1080 and 30 fps.
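To see where the 1.5 Gbps figure comes from, here is a rough back-of-the-envelope calculation in Python (my own illustration, assuming 24-bit color; not from any vendor specification):

    # Uncompressed 1080p30 video bitrate versus the ~4 Mbps H.264 figure above.
    width, height = 1920, 1080        # pixels
    bits_per_pixel = 24               # assumes 8 bits per RGB channel
    fps = 30                          # frames per second

    raw_bps = width * height * bits_per_pixel * fps
    print(f"Uncompressed: {raw_bps / 1e9:.2f} Gbps")               # ~1.49 Gbps

    compressed_bps = 4e6              # ~4 Mbps after H.264 compression
    print(f"Compression ratio: {raw_bps / compressed_bps:.0f}:1")  # ~373:1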
It is important to understand the performance metrics that affect video performance, including packet loss, latency, jitter and throughput. Video is sent as a constant stream of traffic, in contrast to data traffic such as email that can be re-transmitted with some delay and no significant effect on service level. Congestion is the basic symptom of a busy network experiencing performance problems. The queues fill during times of increased network activity, which causes increased latency, jitter and packet loss, decreased throughput and re-transmission of packets. Implementing quality of service (QOS) will sometimes deliberately drop data packets to prevent voice/video packet loss; the data packets are then re-transmitted with some delay. The following defines these industry standard performance metrics; a short sketch after the definitions shows how they can be derived from packet timestamps.
Latency: Amount of time for a packet to travel from source to destination
Jitter: Amount of average variation in latency of each packet
Packet Loss: Percent of packets dropped from source to destination
Throughput: Average amount of data successfully delivered during a fixed period of time
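A minimal Python sketch of how these four metrics could be computed from per-packet send/receive records (an illustration only; in practice tools such as iperf or Cisco IP SLA report them directly):

    def compute_metrics(sent, received):
        # sent: {seq: send_time_s}, received: {seq: (recv_time_s, size_bytes)}
        ordered = sorted(seq for seq in sent if seq in received)
        latencies = [received[seq][0] - sent[seq] for seq in ordered]
        loss_pct = 100.0 * (len(sent) - len(ordered)) / len(sent)
        # jitter: average variation in latency between consecutive packets
        jitter = (sum(abs(a - b) for a, b in zip(latencies, latencies[1:]))
                  / max(len(latencies) - 1, 1))
        duration = max(received[seq][0] for seq in ordered) - min(sent.values())
        throughput_bps = 8 * sum(received[seq][1] for seq in ordered) / duration
        return latencies, loss_pct, jitter, throughput_bps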
Each video service requires a different amount of bandwidth. Some services such as video conferencing are more affected by increased latency, packet loss and jitter than desktop applications. For acceptable video conferencing performance, packet loss should not exceed 1%, jitter should stay under 30 ms and one-way latency under 300 ms (150 ms for high definition video conferencing). When these thresholds are exceeded the picture can deteriorate. Bandwidth requirements for video are linked to the specific type of service, the resolution and the frames per second. For example, a standard video conferencing resolution of 704 x 576 at 30 fps requires 768 Kbps - 1 Mbps of bandwidth, while a high definition (HD) resolution of 1920 x 1080 at 30 fps requires 4 Mbps - 12 Mbps. Desktop services such as streaming video and web conferencing have lower bandwidth requirements than video conferencing; however, the same latency, jitter and packet loss problems affect video performance. In addition, with all services you should add an average of 20% of additional bandwidth for Ethernet and IP protocol overhead.
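As a quick illustration of the 20% overhead rule mentioned above (my own sketch, not a provisioning tool):

    def provisioned_bandwidth(video_bps, overhead=0.20):
        """Add the ~20% Ethernet/IP protocol overhead to a nominal video bitrate."""
        return video_bps * (1 + overhead)

    print(provisioned_bandwidth(1e6) / 1e6)   # 1 Mbps standard conference -> 1.2 Mbps
    print(provisioned_bandwidth(4e6) / 1e6)   # 4 Mbps HD conference       -> 4.8 Mbps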
Video Quality of Service (QOS)
Implementing quality of service (QOS) on a company network is an end-to-end process starting with the video stream source. Video conferencing endpoints are often connected to a company edge switch, while video streaming to the desktop is internet based. Implementing any QOS involves prioritizing traffic for preferential service. For video conferencing, Cisco 3560 and 3750 access edge switches are often used to connect video equipment. The layer 2 data frame has an 802.1p header with 3 bits that can be set to 8 different class of service (CoS) values from 0 to 7, where a higher number receives better service. For instance, video is assigned a CoS of 4 while voice packets are assigned a CoS of 5, and high priority data is often assigned a CoS of 2.
DSCP is a layer 3 QOS marking used to specify various type of service (ToS) classes for data, voice and video traffic. DSCP values are set in the first 6 bits of the ToS byte of the IP header (the bits that also carry the older IP Precedence value). The best practice recommendation from Cisco for marking video is DSCP AF41. Data traffic is assigned a lower priority such as AF21, while voice is assigned the higher priority DSCP EF. Video traffic is classified with access lists that define the video traffic; a class map matches the access list and is referenced by a policy map. The policy map marks the video traffic with the DSCP value, and the DSCP value is mapped to a queue. Class of service (CoS) can also be set with a policy map; however, traffic is typically queued by CoS with SRR at access switches and with WRR at distribution/core switches.
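For reference, the DSCP code points mentioned above are just small integers carried in the IP header. The hedged sketch below shows a host marking its own outbound video socket with AF41; this is only an illustration of the values (in the article's design the marking is done by switch policy maps, not the host), and it assumes a Linux/IPv4 system where the IP_TOS socket option is honored:

    import socket

    # DSCP values from the QOS scheme above: video AF41, data AF21, voice EF.
    DSCP_AF41, DSCP_AF21, DSCP_EF = 34, 18, 46

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # DSCP occupies the top 6 bits of the ToS byte, hence the 2-bit shift.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_AF41 << 2)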
Shaped Round Robin (SRR) is a hardware based queuing technique deployed on access switches. SRR maps layer 2 class of service (CoS) and layer 3 ToS (DSCP) values to queues. The distribution and core network layers typically use Cisco 6500 switches, which use Weighted Round Robin (WRR) hardware queuing. WRR is the same idea; however, the queuing architecture is somewhat different and only layer 2 class of service values are mapped to queues.
WAN routers are deployed with Low Latency Queuing (LLQ) and Class Based Weighted Fair Queuing (CBWFQ), which assign video traffic to the priority queue with a specific bandwidth percentage such as 15%. That guarantees video traffic 15% of the link bandwidth. For instance, a 1 Gbps Metro Ethernet circuit would allocate 150 Mbps of bandwidth to video traffic, minus protocol overhead. As a best practice, company WAN links should never allocate more than approximately 33% of available bandwidth to all voice and video traffic combined. That leaves room for protocol overhead and data packets; otherwise data traffic performance worsens as packets are dropped and video QOS becomes less effective.
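The arithmetic behind that example, written out as a short sketch (illustrative only):

    link_bps = 1e9                              # 1 Gbps Metro Ethernet circuit
    video_priority_bps = 0.15 * link_bps        # 15% LLQ priority queue for video
    voice_video_ceiling_bps = 0.33 * link_bps   # ~33% best-practice cap for voice + video

    print(video_priority_bps / 1e6)             # 150.0 Mbps reserved for video
    print(voice_video_ceiling_bps / 1e6)        # 330.0 Mbps maximum for all real-time traffic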
Desktop applications use the same QOS tools; however, the company internet connection and the wireless network factor into the design. In addition, any public wireless network you happen to be using affects overall video performance. The bandwidth of your home internet connection and any congestion points across the network affect performance as well. The wireless network is most often where video performance degrades, particularly on an 802.11b public network.
Wireless Standards
The following describes the industry standard wireless protocols currently deployed.
802.11b
This wireless standard, approved in 1999, specifies a maximum data rate of 11 Mbps using the 2.4 GHz unlicensed band in the United States. The band experiences a lot of interference from commercial devices using that frequency. In the United States the standard assigns 11 channels, spaced 5 MHz apart across roughly 80 MHz of spectrum. The three non-overlapping channels are 1, 6 and 11, with a 25 MHz separation between center frequencies. The modulation scheme used with 802.11b is Direct Sequence Spread Spectrum (DSSS) with CCK, which has characteristics that minimize the effects of interference. The additional 802.11b data rates are 1, 2 and 5.5 Mbps.
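The channel plan is easy to verify with a small sketch (my own illustration; the channel width is the approximate 22 MHz used by DSSS):

    def center_mhz(channel):
        """Center frequency of a 2.4 GHz channel (US channels 1-11)."""
        return 2407 + 5 * channel

    CHANNEL_WIDTH_MHZ = 22      # approximate DSSS channel width

    for a, b in [(1, 6), (6, 11), (1, 2)]:
        sep = abs(center_mhz(a) - center_mhz(b))
        print(f"ch {a} vs ch {b}: {sep} MHz apart, overlap: {sep < CHANNEL_WIDTH_MHZ}")
    # channels 1/6 and 6/11 are 25 MHz apart (no overlap); channels 1/2 are only 5 MHz apart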
802.11g
This wireless standard, approved in 2003, specifies a maximum data rate of 54 Mbps using the same 2.4 GHz band as 802.11b. The 802.11g standard is popular for its higher throughput and increased coverage. The same interference problems occur, however, with the 2.4 GHz band. 802.11g is compatible with the 802.11b standard and assigns the same 11 channels with 1, 6 and 11 as non-overlapping. The modulation scheme used with 802.11g is OFDM, which enables the higher data rates. The additional 802.11g data rates include 1, 2, 5.5, 6, 9, 11, 12, 18, 24, 36 and 48 Mbps.
802.11a
This wireless standard, approved in 1999, specifies a maximum data rate of 54 Mbps using the 5 GHz unlicensed band in the United States. The advantage of 802.11a is higher throughput; however, the cell coverage is smaller and additional access points are needed to match 802.11g coverage. There is much less interference from devices such as cordless phones, bluetooth devices, microwaves and commercial devices that use the 2.4 GHz band. There are 23 non-overlapping channels with the current 802.11h specification. Some Cisco devices support both 2.4 GHz and 5 GHz radios on the same access point. The modulation scheme used with 802.11a is OFDM, which provides higher data rates and minimizes the effects of interference. Each country specifies the number of channels and frequencies it allows in the 5 GHz band.
802.16
This is a metropolitan area network (MAN) wireless standard that provides home and business clients with wireless access across a city. The line of sight technology specifies a distance of around 27 miles and speeds of up to 120 Mbps. The point to multipoint specification operates in the 10-66 GHz range. The 802.16a specification adds mesh topologies and non-line of sight operation using licensed and unlicensed frequencies between 2 GHz and 11 GHz at speeds of 70 Mbps. The key problem with any MAN implementation using unlicensed frequencies is interference from similar devices.
802.11n
The new 802.11n wireless standard, approved in 2009, defines much faster data rates of 300 - 600 Mbps from client to access point and 1000 Mbps from access point to network switch, increasing throughput on both segments. It operates in both the 2.4 GHz and 5 GHz bands with effective new performance enhancements such as multiple input multiple output (MIMO) and channel bonding.
Wireless Contention
Access points are essentially less efficient, hub-style shared media devices with a flat broadcast domain. Contrast that with a Cisco Ethernet switch, which provides 100/1000 Mbps of bandwidth per port and broadcast segmentation with VLANs. The switch uses a much more effective media access contention scheme than wireless access points. The wireless network employs the older, less effective carrier sense multiple access with collision avoidance (CSMA/CA) process to manage client access to the network.
The effect of CSMA/CA on this shared media is increased bandwidth usage, packet loss and packet re-transmits. In addition there are the standard wireless problems of 2.4 GHz band interference and multipath signal fade, which occurs when the signal bends or is distorted by the building structure. From a practical perspective, 15-25 wireless clients can associate with a single access point at any time and still maintain good performance. This of course changes as more video and high bandwidth applications are used. An 802.11n access point can actually support all of those clients running simultaneous live video streams, with 14 of them running high definition video streams.
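To show where the contention overhead comes from, here is a deliberately simplified toy model of CSMA/CA backoff (an illustration only; the real 802.11 DCF also senses the channel, uses inter-frame spacing and doubles the contention window after collisions):

    import random

    def contention_round(clients, cw=15):
        """Each client defers a random number of slots; a shared minimum models a collision."""
        backoffs = {c: random.randint(0, cw) for c in range(clients)}
        winning_slot = min(backoffs.values())
        winners = [c for c, b in backoffs.items() if b == winning_slot]
        return winners[0] if len(winners) == 1 else None   # None = collision, forcing retries

    rounds = 10000
    collisions = sum(contention_round(clients=20) is None for _ in range(rounds))
    print(f"collision rate with 20 contending clients: {collisions / rounds:.1%}")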
Data Rate, Distance and Frequency
So by now you know there is no warp speed with older wireless. Data rate (speed) and performance decrease as wireless clients move further from the access point. Beyond an average of 50-60 feet, the speed decreases while latency, packet loss and jitter increase. The wireless network site survey determines where and how many access points should be deployed so that each cell (defined coverage area) has enough signal strength to support 54 Mbps. The coverage area can be extended with a stronger directional antenna. For instance, these are approximate rated distance, speed and frequency specifications indoors for the Cisco 1240AG access point. Note that 802.11a range is typically about half that of an 802.11g radio; the rating below is closer because it was measured with a stronger 3.5 dBi antenna.
802.11a (5 GHz): 54 Mbps @ 60 ft - 80 ft with 3.5 dBi omnidirectional antenna
802.11g (2.4 GHz): 54 Mbps @ 80 ft - 100 ft with 2.2 dBi dipole antenna
As the data rate increases, your effective network range decreases. Clients that want continuous maximum bandwidth will need more access points in the design. Increasing transmit power will actually decrease network range at the higher data rates while increasing range at lower data rates, as is the case with 802.11g access points. The problem is that with increased transmit power, receiver sensitivity effectively decreases due to a phenomenon measured as error vector magnitude (EVM). That doesn't apply to the wireless clients, where transmit power should be set at maximum for best results. The wireless maximum distance is around 100 meters from client to access point, and with wired Ethernet designs 100 meters from access point to switch. The campus design can of course be extended with additional switch-to-switch connectivity.
Wireless data rates specify a theoretical maximum throughput, which isn't a practical value. Mixed environments such as 802.11b and 802.11g will decrease throughput for both types of clients on the same network segment. As mentioned, 802.11b and 802.11g clients are compatible and can associate with the same access point using the 2.4 GHz band. Throughput for 802.11b is around 6 Mbps; however, that will vary with antenna type, distance from the access point and transmit power. Configure the access point with 54 Mbps for 802.11g clients and a basic rate of 11 Mbps for the 802.11b clients.
That prevents the access point from operating at less than 11 Mbps. Some access points can operate with dual band 802.11a and 802.11g; however, they are separate logical networks and must have separate wireless site surveys. The 802.11a access point uses the 5 GHz frequency band. As frequency increases (and wavelength decreases), network range decreases. A design with 802.11a therefore covers much less distance than 802.11g at the same data rates, because the higher frequency (5 GHz) signals don't pass through the building structure as easily as lower frequencies.
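The frequency effect can be illustrated with the standard free-space path loss formula (a sketch only; indoor losses through walls and floors are considerably worse than free space):

    import math

    def fspl_db(distance_m, freq_mhz):
        """Free-space path loss in dB for a distance in meters and a frequency in MHz."""
        return 20 * math.log10(distance_m) + 20 * math.log10(freq_mhz) - 27.55

    for d in (10, 30):
        loss_24 = fspl_db(d, 2437)   # 802.11g channel 6
        loss_5 = fspl_db(d, 5240)    # an 802.11a channel
        print(f"{d} m: {loss_24:.1f} dB @ 2.4 GHz, {loss_5:.1f} dB @ 5 GHz")
    # the 5 GHz signal loses roughly 6-7 dB more at any given distance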
These are some average throughput values for the associated wireless standards. From a practical perspective, not all of the channels will be available with the 802.11h specification and 802.11a access points due to channel overrun interference. Note the decreased throughput when mixed wireless equipment such as 802.11b/g shares the same network; the same occurs when 802.11n access points share a network with older access points.
802.11b - 6 Mbps x 3 channels
802.11g - 22 Mbps x 3 channels
802.11b/g - 8 Mbps x 3 channels
802.11a - 25 Mbps x 21 channels
802.11n - 150 Mbps/300 Mbps x 21 channels
Decreasing the transmit power of an access point will minimize channel interference. The effective network range can be extended with repeater access points, by increasing access point transmit power or by adjusting the access point position. Using a higher gain antenna on the access point is an option as well, and Cisco access points offer a lot of options for antennas with higher gain and sensitivity. Note that you should minimize the cable length to any antenna, since longer antenna cabling attenuates the signal. Some countries limit the maximum access point transmit power setting.
RF Propagation
As mentioned, signal attenuation is worse at higher frequencies. There are, however, many environmental factors that distort, bend and weaken the signal. The result is something called multipath fading, where a signal takes several paths to the destination. These are some examples.
• Diffraction - signal bending due to building structure angles
• Refraction - environmental factors such as humidity can cause the signal to bend
• Reflection - water, glass or any smooth surface can bounce a signal, distorting or fading it
• Absorption - structures absorbing signal (trees)
• EMI interference - cordless phones, microwave ovens, electrical motors, bluetooth devices
Fade margin is the amount by which the received signal can drop toward the receiver sensitivity threshold while still maintaining acceptable network performance. It is a factor when deploying outside wireless bridges in point-to-point topologies, such as between buildings on a campus. Rain will attenuate signals, and knowing the fade margin helps avoid performance issues. Polarization is the orientation of the radiated pattern from the antenna and, like a key, must match between the transmitting and receiving antennas. The polarization most often used with access point antennas is linear; an antenna can transmit a horizontally or vertically polarized signal.
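A hedged link-budget sketch for an outdoor point-to-point bridge shows how the fade margin falls out of the numbers. Every value below is an assumption chosen for illustration, not a vendor specification:

    tx_power_dbm = 17            # bridge transmit power (assumed)
    tx_antenna_gain_dbi = 13     # directional antenna at each end (assumed)
    rx_antenna_gain_dbi = 13
    path_loss_db = 110           # free-space loss plus cable losses over the link (assumed)
    rx_sensitivity_dbm = -85     # receiver sensitivity at the chosen data rate (assumed)

    rx_signal_dbm = tx_power_dbm + tx_antenna_gain_dbi + rx_antenna_gain_dbi - path_loss_db
    fade_margin_db = rx_signal_dbm - rx_sensitivity_dbm
    print(rx_signal_dbm, fade_margin_db)   # -67 dBm received, 18 dB of margin for rain fade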
Improving Video over Wireless Performance
When discussing bandwidth requirements for the various video services, it is important to note that a wireless network will always require much more bandwidth than your company LAN or your home internet connection for the same video service. An example is high definition live video streaming, where the actual wireless bandwidth needed is much higher than on the LAN or a home cable/DSL internet connection. The home internet connection would require 500 Kbps - 1 Mbps, which is not a problem even at home where the average cable download speed is 10 Mbps. The wireless network, with its access contention and multipath fading problems, is not as efficient and would consume an effective bandwidth of 5 - 10 Mbps. In addition, note that packet loss affects video over wireless performance more than latency and jitter; however, all of these metrics can be improved with the following recommendations.
1. Deploy the new 802.11n Access Point and Client Adapters
The new 802.11n wireless access point is now rated at 300 Mbps with the new feature enhancements. That is roughly 6x faster than the nearest 802.11g standard. Deploy 802.11n in the 5 GHz band and you have 21 non-overlapping channels available as well, which allows for higher data rates per coverage area. The new enhancements include multiple input multiple output (MIMO), channel bonding, MAC block acknowledgment, payload optimization, multicast-to-unicast conversion and QOS prioritization of traffic classes.
MIMO Explained
802.11n uses multiple input/output antennas on the access point and wireless client to increase data rates and decrease re-transmits and packet loss. The access point and clients can send simultaneous traffic streams, increasing the amount of data and extending the network range (distance). The currently most popular Cisco 1250 AP uses what is called 2T x 3R MIMO, that is, 2 transmit antennas and 3 receive antennas on the radio. The best results occur when all wireless clients use 802.11n adapters and all access points are 802.11n, with no mixed environment of 802.11a/g access points.
Channel Bonding
The technique of channel bonding allows combining 2 adjacent non-overlapping channels in the 5 GHz band to send data at twice the standard data rate, for a theoretical 300 Mbps. In practice the average data rate has been tested at 180 Mbps, and 140 Mbps for video streaming. That is pretty impressive compared with the 802.11g average throughput of 22 Mbps.
Payload Optimization
Payload optimization, or packet aggregation, is basically putting more data in each frame sent, resulting in more effective use of the transport medium.
MAC Block Acknowledgment
Previous access points required that each MAC layer MPDU was separately acknowledged with an ACK frame. The new 802.11n standard uses a single block ACK to acknowledge multiple MPDUs. This decreases protocol overhead and the bandwidth required.
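A simplified comparison of the acknowledgment overhead (the frame sizes are approximations for illustration, and the sketch ignores the preamble and inter-frame spacing that each extra ACK would also cost):

    mpdus = 16                 # MPDUs sent in one aggregate
    ack_bytes = 14             # legacy 802.11 ACK frame, approximate size
    block_ack_bytes = 32       # compressed block ACK frame, approximate size

    legacy_overhead = mpdus * ack_bytes   # one ACK per MPDU
    block_overhead = block_ack_bytes      # one block ACK for the whole aggregate
    print(legacy_overhead, block_overhead)   # 224 vs 32 bytes of acknowledgment frames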
Multicast to Unicast Traffic
Video over wireless presents a specific problem with multicasting that the wired world doesn't have. Wireless access points do not handle multicast traffic efficiently; however, 802.11n can now convert multicast to unicast streams per wireless client at layer 2.
2. Network Design
Wireless access points should always be connected to a 100 Mbps full duplex switch port, and 802.11n access points should be connected to a 1 Gbps or 10 Gbps switch port. Video end points should be connected closer to the distribution layer and on a less busy line card. The end point video source equipment can be located at the network edge as well; however, you should select a switch with all the performance features, preferably located in the data center. Multiple wireless SSIDs should always be defined to segment traffic, with assigned VLANs that match the VLAN schema implemented on the wired network.
Use a hierarchical design with any new wireless/wired deployments and, where possible, spread access points across multiple network switches instead of a single switch. Consider doing some performance monitoring on the network to eliminate media mismatches, for example a network switch Gigabit port uplinked to a switch with a 100 Mbps interface. As well, WAN circuits are most often the slowest links compared with the switch infrastructure.
- Have a proper wireless network site survey done for each band to minimize signal overrun and optimize coverage.
- Deploy internal client adapters instead of external USB style at your laptop for best performance.
- When deploying 802.11a/b/g access points (mixed environment) with 802.11n access points, it is better to assign the 802.11n access points and clients to the 5 GHz band, where there are more non-overlapping channels and less interference.
- Use all 802.11n access points and clients where possible instead of a mixed environment, with at least a 2T x 3R antenna configuration and 2 spatial streams (2T x 3R x 2S).
- Use additional access points per coverage area with 802.11n at 5 GHz for increased data rate, range (distance), number of clients and network availability.
- Deploy more powerful extended range antennas to increase the data rate and range.
- Clean up problems with any sub-optimal routing on the network.
- Consider deploying the WLC 4400 WLAN controllers. This requires a firmware upgrade on all 1100 and 1200 series autonomous access points, however there are advantages such as advanced RF management features.
3. End to End Quality of Service (QOS)
Any good quality of service deployment must consider both wired and wireless QOS techniques to guarantee end to end performance. The wired QOS has already been discussed here: Shaped Round Robin (SRR) and Weighted Round Robin (WRR) hardware queuing on switches, as well as Low Latency Queuing (LLQ) and Class Based Weighted Fair Queuing (CBWFQ) implemented on WAN routers. DSCP and CoS packet marking is used to prioritize specific traffic types for preferential queuing. Wireless now has Wireless Multimedia Extensions (WMM), which classify traffic into 4 categories according to traffic type: voice, video, best effort and background. This provides a guaranteed service level for video traffic during times of network congestion.
The layer 2 data frame from the switch has an 802.1p field where the class of service (CoS) bits are set. The access point examines that field and sends traffic with a specific CoS setting to the assigned queue. The voice traffic queue is the highest priority queue, and any traffic queued there is serviced before video and data. For wireless clients not using VoIP, video is prioritized first. Note that although queue 3 (best effort) has a CoS of 0, that queue is still higher priority than background traffic. The Cisco VideoStream application layer enhancement allows assignment of video traffic to a priority stream, according to VLAN or SSID, for preferential queuing. A simple lookup-table sketch of this queue mapping follows the list below.
Access Point Priority Queuing:
Queue 1: Voice Traffic CoS = 6,7
Queue 2: Video Traffic CoS = 4,5
Queue 3: Best Effort (Transactional Data) CoS = 0,3
Queue 4: Background Traffic (Email) CoS = 1,2
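Expressed as a lookup table, the mapping above might look like the following sketch (illustrative only; actual CoS-to-queue mappings are configurable per platform):

    # CoS value -> access point queue, matching the list above.
    COS_TO_QUEUE = {
        6: "voice", 7: "voice",               # queue 1: highest priority
        4: "video", 5: "video",               # queue 2
        0: "best-effort", 3: "best-effort",   # queue 3: transactional data
        1: "background", 2: "background",     # queue 4: email, bulk traffic
    }

    SERVICE_ORDER = ["voice", "video", "best-effort", "background"]

    def classify(cos):
        """Return the queue an incoming frame would be placed in, defaulting to best effort."""
        return COS_TO_QUEUE.get(cos, "best-effort")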
Call admission control is a type of QOS that limits the number of video sessions to avoid oversubscribing the priority queue at the switches and routers. A gatekeeper service monitors the number of video sessions and denies any additional sessions based on the bandwidth setting of the queue. The priority queue is configured with enough bandwidth for a specific number of sessions, and any request that would exceed the queue size is denied.
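The gatekeeper logic reduces to a simple bandwidth check, sketched below with assumed values (a 150 Mbps priority queue and 4 Mbps HD sessions, echoing the earlier figures):

    class Gatekeeper:
        """Minimal call admission control sketch: admit a session only if the queue has room."""
        def __init__(self, queue_bps, per_session_bps):
            self.queue_bps = queue_bps
            self.per_session_bps = per_session_bps
            self.active = 0

        def request_session(self):
            if (self.active + 1) * self.per_session_bps <= self.queue_bps:
                self.active += 1
                return True        # session admitted
            return False           # denied: the priority queue would be oversubscribed

    gk = Gatekeeper(queue_bps=150e6, per_session_bps=4e6)
    print(sum(gk.request_session() for _ in range(40)))   # 37 of 40 requests admitted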
4. Bandwidth
As mentioned, doing a performance assessment of the current network will identify where additional bandwidth is needed. The company WAN is the most common source of bandwidth problems. The prevalence and low cost of Metro Ethernet Gigabit circuits today make them a great opportunity for the company network.
Copyright 2011 Shaun Hummel All Rights Reserved