What you need to know about Adaptive Bitrate Video Streaming
Post by Bill Weir, Developer Advocate and Mobile Performance Expert at AT&T
Video is some of the fastest-growing content on mobile networks, and the appetite for it keeps growing. However, streaming video is complicated, and when done poorly it usually leads to a bad user experience.
As demand for video grows, developers, producers, and content providers need to keep pace. That means embracing change, learning continually, and adapting to new techniques. To understand how to deliver the best video streaming experience, it helps to know some history.
Streaming Video: How It All Began
Streaming is an alternative to downloading content, where the entire video file must be transferred before playback can begin. With streaming, the beginning of a video starts playing while the rest of the file is still being transferred.
In the 1990s and into the 2000s, most Internet streaming was delivered using the User Datagram Protocol (UDP). But UDP has some disadvantages when it comes to tracking packet loss and transferring data across firewalls.
In 2002, an alternative option was proposed: Adaptive Bitrate Streaming (ABR or ABS). This method streams content using Transmission Control Protocol (TCP) and Hypertext Transfer Protocol (HTTP), two standards widely used on the web. This solved the issues around packet loss and firewall blocking. A number of ABR implementations followed, including Apple’s HTTP Live Streaming (HLS), Microsoft’s Smooth Streaming, Adobe’s HTTP Dynamic Streaming, and MPEG-DASH.
MPEG-DASH is an adaptive bitrate solution recognized as an international standard. Work began on it in 2010, and in 2012 ISO, the international standards body, ratified it. DASH supports both on-demand and live streaming content. It has specific provisions for the MPEG-4 file format and MPEG-2 Transport Streams, but it can be used with any media format.
How Adaptive Bitrate Streaming Works for Video
When delivering video with adaptive bitrate streaming, multiple copies of the video are encoded at various quality levels. The server then breaks each of the copies into smaller parts called segments, or chunks. The segment length typically varies between two and ten seconds. The segments are stored on a streaming server, ready for delivery. A manifest file, delivered alongside the video segments, records information about the available segments and bitrates. In MPEG-DASH, the manifest is called a Media Presentation Description (MPD).
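To make the structure concrete, here is a minimal Python sketch of the data an ABR manifest records. The bitrate ladder, file names, and function name are all hypothetical; a real MPD is an XML document, but the information it carries boils down to this: several renditions of the same video, each sliced into equal-length segments.

```python
# Hypothetical example: the server-side picture of an ABR package.
# Each rendition is the same video encoded at a different bitrate,
# split into fixed-length segments.

SEGMENT_SECONDS = 4           # typical segment lengths run 2-10 seconds
VIDEO_SECONDS = 60            # total duration of this example video

# Assumed bitrate ladder, in kbps, for the encoded copies of the video.
RENDITION_BITRATES_KBPS = [400, 1200, 3000]

def build_manifest(duration_s, segment_s, bitrates_kbps):
    """Return a dict listing every segment of every rendition."""
    segment_count = -(-duration_s // segment_s)   # ceiling division
    return {
        "segment_seconds": segment_s,
        "renditions": [
            {
                "bitrate_kbps": kbps,
                "segments": [f"video_{kbps}k_{i:04d}.m4s"
                             for i in range(segment_count)],
            }
            for kbps in bitrates_kbps
        ],
    }

manifest = build_manifest(VIDEO_SECONDS, SEGMENT_SECONDS,
                          RENDITION_BITRATES_KBPS)
print(len(manifest["renditions"]))                  # 3 quality levels
print(len(manifest["renditions"][0]["segments"]))   # 15 segments each
```

Because every rendition is cut at the same segment boundaries, a player can switch quality levels at any segment without interrupting playback.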
When a request for a video is sent by an end user, the server sends the manifest. Then the player determines the current network conditions and requests the video segments with the quality level that best matches. The player is able to adapt to changing network conditions by requesting lower quality content when network speeds are slower and using higher quality content when network speeds are faster.
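The player-side decision described above can be sketched in a few lines. This is a simplified illustration, not how any particular player implements it: the bitrate ladder, function name, and safety factor are assumptions. The idea is to pick the highest rendition whose bitrate fits within the measured network throughput, with some headroom so the playback buffer does not drain.

```python
# Hypothetical sketch of a player's bitrate-selection step.

RENDITION_BITRATES_KBPS = [400, 1200, 3000]   # assumed ladder from the manifest
SAFETY_FACTOR = 0.8   # only budget ~80% of measured throughput as headroom

def choose_bitrate(measured_kbps, ladder=RENDITION_BITRATES_KBPS,
                   safety=SAFETY_FACTOR):
    """Highest rendition that fits within the throughput budget."""
    budget = measured_kbps * safety
    fitting = [b for b in ladder if b <= budget]
    # If even the lowest rendition exceeds the budget, fall back to it
    # anyway; it is the best chance of keeping playback going.
    return max(fitting) if fitting else min(ladder)

print(choose_bitrate(5000))   # fast network: picks the 3000 kbps rendition
print(choose_bitrate(1000))   # slower network: drops to 400 kbps
print(choose_bitrate(200))    # very slow: falls back to the lowest rendition
```

A real player re-runs a decision like this before each segment request, which is what lets the stream adapt continuously as network conditions change.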
All of this is designed to help deliver a smooth streaming experience. Our Best Practice recommendation is to use an adaptive bitrate strategy for delivering streaming video over wireless networks.
AT&T supports developers, content providers, and video producers by offering video Best Practices guidelines, as well as by adding new features to the AT&T Video Optimizer that can analyze video streaming within mobile applications and pinpoint potential ways to improve it.
Besides helping developers improve video streaming, AT&T Video Optimizer also helps mobile app developers analyze and optimize other application resources, such as battery drain, data usage, file and image compression, and security, to help improve the user experience. Learn more about these mobile app development best practices and recommendations by visiting our Best Practices center.
For more articles on all things video, see our new AT&T Video and VR site.