HLS vs. HDS - What Is the Difference and Why You Should Care
Unless you work in the streaming business every day, it can be hard to keep up with the nuances of these technologies and their impact on your long-term strategy. HLS and HDS are both HTTP-based streaming protocols, and they sound very similar, but they are fundamentally different.
HLS stands for HTTP Live Streaming and is Apple’s proprietary streaming format, based on MPEG-2 TS. It is popular because it is the only way to deliver adaptive streaming to iOS devices. It is often mistakenly described as HTML5 streaming, but it is not part of HTML5.
Apple has documented HTTP Live Streaming as an Internet-Draft (Individual Submission), the first stage in the process of submitting it to the IETF as an Informational Standard. However, while Apple has submitted occasional minor updates to the draft, no additional steps appear to have been taken towards IETF standardization [Wikipedia].
HDS stands for HTTP Dynamic Streaming and is Adobe’s format for delivering fragmented MP4 (fMP4) files. HLS uses MPEG-2 Part 1 (transport streams), while HDS uses MPEG-4 Part 14 and Part 12.
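To make the difference concrete, here is a minimal sketch of the box framing that MPEG-4 Part 12 (ISO BMFF, the basis of fMP4) defines: every box starts with a 4-byte size and a 4-byte type code, so metadata boxes (`moov`, `moof`) sit separately from media data (`mdat`). The box names are real; the file layout below is a toy example, not a valid playable file.

```python
import struct

def parse_boxes(data):
    """Walk top-level ISO BMFF boxes: each begins with a 4-byte
    big-endian size field followed by a 4-byte ASCII type code."""
    boxes = []
    offset = 0
    while offset < len(data):
        size, = struct.unpack(">I", data[offset:offset + 4])
        box_type = data[offset + 4:offset + 8].decode("ascii")
        boxes.append((box_type, size))
        offset += size  # the size field covers the whole box, header included
    return boxes

def make_box(box_type, payload=b""):
    """Build a minimal box: size = 8-byte header + payload length."""
    return struct.pack(">I", 8 + len(payload)) + box_type + payload

# A toy fragmented-MP4 layout: file-type box, movie-level metadata,
# then movie-fragment metadata followed by the media data it describes.
sample = (make_box(b"ftyp", b"iso5....")
          + make_box(b"moov")
          + make_box(b"moof")
          + make_box(b"mdat", b"\x00" * 32))

print(parse_boxes(sample))
# → [('ftyp', 16), ('moov', 8), ('moof', 8), ('mdat', 40)]
```

Because the framing is this simple, a server or packager can address and recombine fragments without touching the media payload itself.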
Both formats are MPEG-based, so why should you care? Adobe, Microsoft and Transitions wrote an interesting white paper highlighting the advantages of fMP4 (HDS) over MPEG-2 TS (HLS).
Separating Content and Metadata
With MP4 files, metadata can be stored separately from audio and video content (or “media data”). Conversely, M2TS multiplexes (combines) multiple elementary streams along with header/metadata information [...] This separation of content and metadata allows specification of a format using movie fragments optimized specifically for adaptive switching between different media data, whether that media data is varying video qualities, multiple camera angles, different language tracks or even different captioning or timed-text data.
These are all crucial features: multiple camera angles for interactive TV and live sports, language tracks, and especially effective captioning support given the upcoming government requirements.
Independent Track Storage
In fragmented MP4 (fMP4), elementary streams are not multiplexed and are stored independently, separate from metadata, as noted above. As such, fMP4 file storage is location agnostic and can store audio- or video-only elementary streams in separate locations that are addressable from XML-based manifest files.
A must-have feature, especially for large media libraries like Netflix’s:
This plethora of combinations can quickly become unwieldy, with upwards of 4000 possible transport-stream combinations - a multiple far above the 40 audio, video and subtitle tracks required for the DVD example. Netflix® estimates that using an M2TS / HLS approach for their library could result in several billion assets, due to combinatorial complexity—a situation they avoid by using an fMP4 elementary-stream approach, selecting and combining tracks on the player.
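The combinatorial arithmetic behind that quote is easy to sketch. The per-track counts below are illustrative assumptions (not Netflix's actual numbers): with muxed M2TS, every video quality × audio language × subtitle track becomes its own stored asset, while with separate fMP4 elementary streams the counts merely add.

```python
# Hypothetical catalogue shape for one title (illustrative numbers only):
video_bitrates = 10
audio_languages = 20
subtitle_tracks = 20

# Muxed M2TS: one pre-combined asset per (video, audio, subtitle) triple.
muxed_m2ts_assets = video_bitrates * audio_languages * subtitle_tracks

# Separate fMP4 elementary streams: store each track once,
# and let the player select and combine them at playback time.
separate_fmp4_tracks = video_bitrates + audio_languages + subtitle_tracks

print(muxed_m2ts_assets)     # → 4000 pre-muxed combinations
print(separate_fmp4_tracks)  # → 50 independently stored tracks
```

Multiply a gap like that across a catalogue of tens of thousands of titles and the "several billion assets" estimate stops looking like an exaggeration.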
Trick-play modes—such as fast-forward or reverse, slow motion, and random access to chapter points—can be accomplished via intelligent use of metadata.
More of an optional feature, but certainly useful.
Backward compatibility with MPEG-2 Transport Streams
Separating timing information from elementary video and audio streams allows conversion from several fragmented MP4 elementary streams into a multiplexed M2TS.
HLS is in a position similar to FLV with On2 VP6 before Adobe endorsed H.264 – a risky dead-end format for media libraries.
Seamless stream splicing
Fragmented MP4 audio or video elementary streams can be requested separately because each movie fragment only contains one media type.[...] Using independent sync points for media types along with simple concatenation splicing yields shortened segment lengths: M2TS segment lengths are typically ten seconds, limiting the frequency of switching and requiring extra bandwidth and processing. Segment lengths for fMP4 can be as low as 1 or 2 seconds, as they don’t consume more bandwidth or require extra processing.
Absolutely crucial for live sports – it means the difference between close to real-time (TV broadcast), and tens of seconds delayed.
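A rough back-of-the-envelope sketch of why segment length drives live delay: players typically buffer a few segments before starting playback, so the minimum distance behind the live edge scales with segment duration. The buffer depth of three segments is an assumption for illustration, not a spec requirement.

```python
def min_live_delay(segment_seconds, buffered_segments=3):
    """Approximate minimum live delay: the player waits for
    `buffered_segments` full segments before it starts playing."""
    return segment_seconds * buffered_segments

print(min_live_delay(10))  # → 30 s behind live with 10-second M2TS segments
print(min_live_delay(2))   # → 6 s behind live with 2-second fMP4 segments
```

Real-world delay also includes encoding, packaging and CDN propagation, but the segment-length term is the one the container choice directly affects.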
Integrated Digital Rights Management (DRM)
Inherent to the MPEG-4 specification, digital rights and—if necessary—encryption can both be applied at the packet level. In addition, the emerging MPEG Common Encryption (ISO/IEC 23001-7 CENC) scheme can be combined with MPEG-4 Part 12 to enable DRM and elementary stream encryption.
Without DRM, there is no premium content.
When considering comparative complexity, noted in the sidebar above, it’s also worth considering the effect that all the possible combinations have on cache efficiency. Each track and bitrate combination required for M2TS delivery means a higher likelihood that an edge cache will not have the particular combination, resulting in a request back to the origin server.
Important for HD content and ever-increasing traffic volumes. It also directly reduces delivery costs.
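A toy model of the cache effect, under loudly stated assumptions: requests are spread uniformly over all distinct variants, and an edge cache holds a fixed number of them. Neither assumption is realistic (real caches exploit popularity skew), but it shows the direction: the more pre-muxed combinations exist, the lower the hit rate.

```python
def hit_rate(distinct_variants, cache_slots):
    """Fraction of requests served from the edge cache, assuming uniform
    request distribution over all distinct variants (toy model)."""
    return min(1.0, cache_slots / distinct_variants)

# Reusing the illustrative counts from earlier: 50 separate fMP4
# tracks versus 4000 pre-muxed M2TS combinations, with room for
# 50 variants at the edge.
print(hit_rate(50, 50))    # → 1.0 (every fMP4 track fits at the edge)
print(hit_rate(4000, 50))  # → 0.0125 (most M2TS requests go to origin)
```

Every cache miss is a round trip to the origin server, which is exactly the delivery cost the commentary above is pointing at.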
Bandwidth efficiencies in segment alignment
A key factor when generating multiple files at different bitrates is segment alignment. HTTP adaptive streaming via fMP4 relies on movie fragments each being aligned to exactly the same keyframe, whether for on-demand or live streaming (live requires simultaneous encoding of all bitrates to guarantee time alignment). A packaging tool that is fragment-alignment aware maintains elementary stream alignment for previously recorded content.
As above, these efficiencies reduce bandwidth use, improve the user experience and extend battery life.
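The alignment requirement itself is simple to state in code: two renditions are switchable only if their fragments start on the same keyframe timestamps. This is a sketch of the check a fragment-alignment-aware packager has to guarantee; the timestamps are made-up examples.

```python
def aligned(fragment_starts_a, fragment_starts_b):
    """True if both renditions fragment on identical keyframe
    timestamps, the precondition for seamless bitrate switching."""
    return fragment_starts_a == fragment_starts_b

high = [0.0, 2.0, 4.0, 6.0]      # fragment start times (s), high bitrate
low = [0.0, 2.0, 4.0, 6.0]       # same keyframe grid, low bitrate
drifted = [0.0, 2.1, 4.2, 6.3]   # a rendition whose keyframes drift

print(aligned(high, low))      # → True: player can switch at any boundary
print(aligned(high, drifted))  # → False: switching mid-stream would glitch
```

For live streams this is why all bitrates must be encoded simultaneously against one clock: there is no second pass to re-align the keyframe grid.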
And what does this mean for open standards? A standard is certainly quite a while away, especially since codec licensing costs make HTML5 codec standardization very challenging, but the most promising initiative is MPEG-DASH.
MPEG-DASH. MPEG, the standards body responsible for MPEG-2 and MPEG-4, is addressing dynamic adaptive streaming over HTTP (MPEG-DASH) through the use of four key profiles—two around CFF for fMP4 and two for MPEG-2 TS. For fMP4, both the live (ISO LIVE) and on-demand (ISO ON DEMAND) profiles reference Common Encryption (CENC or ISO 23001-7). ISO LIVE, a superset of the live and on-demand profiles, is close to Microsoft Smooth Streaming, meaning Microsoft could easily update its server and client players to meet DASH compliance.
(01/03/2012 – Update) In addition, when comparing the two different approaches, it’s important to understand the following:
- TV is moving to “the web”. That means watching more content in traditional online venues (browsers, mobile apps, etc.), and also the development of “hybrid” TV models where MVPDs use web/OTT TV technologies to send broadcast TV to connected TVs as well as browsers and mobile devices
- While MPEG-2 is “the” technology stack for legacy IPTV, the stack that will be used for this new OTT/web/hybrid model is still being determined. This is why the white paper is interesting.
- One thing that is clear is that HTTP streaming will be the backbone of the stack. This is why it’s important that the container chosen for this new model work well in the context of HTTP streaming. How well it works for legacy streaming is not relevant.
HDS (fMP4) has some key advantages over HLS, and is recognized by the industry as the better future solution:
- Reduces delivery costs
- Supports more features, most of them critical
- Better user experience
- Future-proof format
iOS does not yet support fMP4 and it’s hard to predict Apple’s strategy, but given fMP4’s full backward compatibility with HLS, and the fact that media servers like Flash Media Server support both HDS and HLS in one workflow, fMP4 is the best investment for your future media delivery strategy.