It’s well known that creating well-functioning software can involve a great deal of complexity. This is especially true of live streaming, where a variety of methods and protocols are available.

Every decision, from back-end architecture to front-end UI, can have far-reaching effects. Without careful planning and research, there is a strong possibility that an unanticipated result will negatively affect your product.

Luckily for you, the team at Red5 Pro has already made those decisions over the past 14 years of building our live-streaming solution. With a specific focus on reducing latency and increasing scalability, we used those principles to guide our decisions as we designed and configured our platform.

One such decision was not to use low-latency HLS variants such as LHLS. As part of our ongoing series of technical articles, this post explains that decision and the factors that disqualified HLS, ultimately leading to our implementation of WebRTC.


What is HLS?

HLS (HTTP Live Streaming) is a delivery technology used for sending and receiving live video. Originally developed by Apple for use with their devices, it has enjoyed widespread adoption, expanding to browsers, smart TVs, set-top boxes, and game consoles.

To deliver a video stream, HLS splits it into small media segments. Transmitting many small files of the same length over HTTP is much more efficient than sending out one continuous file. The maximum length of such a segment, usually between 2 and 6 seconds, is called the Target Duration.
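As a concrete (and entirely hypothetical) example, a live HLS manifest might look like the following, with the EXT-X-TARGETDURATION tag declaring the Target Duration and each EXTINF entry describing one segment:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:4
#EXT-X-MEDIA-SEQUENCE:120
#EXTINF:4.000,
segment120.ts
#EXTINF:4.000,
segment121.ts
#EXTINF:4.000,
segment122.ts
```

As the live stream progresses, the server appends new segment entries and advances EXT-X-MEDIA-SEQUENCE.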

To view the stream, a player downloads those HLS segments one after the other and plays them in the correct order according to a playlist called the manifest file. For live streams, new segments are added at the end of the manifest file. When a player updates the manifest file (the protocol dictates it should be reloaded about every Target Duration), it will see the new segments listed for download and playback.
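The download loop described above can be sketched in a few lines of Python. This is a minimal illustration, not a production HLS client: the names parse_manifest and poll_once are hypothetical, and a real player would also handle #EXTINF durations, discontinuities, and network errors.

```python
def parse_manifest(text):
    """Extract the Target Duration and segment URIs from an m3u8 playlist."""
    target_duration = None
    segments = []
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("#EXT-X-TARGETDURATION:"):
            target_duration = int(line.split(":", 1)[1])
        elif line and not line.startswith("#"):
            segments.append(line)  # URI lines follow their #EXTINF tags
    return target_duration, segments

def poll_once(fetch_manifest, seen):
    """One iteration of the live loop: reload the manifest and return
    the segments that are new since the last poll."""
    target, segments = parse_manifest(fetch_manifest())
    fresh = [s for s in segments if s not in seen]
    seen.update(fresh)
    return target, fresh
```

A real player would call poll_once roughly once per Target Duration, downloading each fresh URI and appending it to the playback buffer.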


What Kind of Latency Does This Produce?

While HLS provides solid performance, it comes at a high cost in latency, due in part to the HLS specification, which states that a player should maintain a good-sized buffer and start playback three Target Durations from the end of the latest manifest. The reason for this buffer is to allow for better performance in case of network or server issues. Furthermore, an additional fourth segment can be buffered to help avoid playback stalls.
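Plugging in typical segment lengths shows where the oft-quoted latency range comes from. This is a back-of-the-envelope estimate, not a measurement, and the breakdown of terms is an assumption made for illustration:

```python
def hls_latency_estimate(target_duration):
    """Rough glass-to-glass latency for standard HLS, in seconds."""
    spec_offset = 3 * target_duration      # spec: start 3 Target Durations behind live
    extra_buffer = target_duration         # optional fourth buffered segment
    encode_and_publish = target_duration   # a segment is only listed once complete
    return spec_offset + extra_buffer + encode_and_publish

print(hls_latency_estimate(2))  # 2 s segments -> about 10 s behind live
print(hls_latency_estimate(6))  # 6 s segments -> about 30 s behind live
```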

All this usually results in 10 to 30 seconds of latency. That is quite a long delay between when the actual event takes place and when the viewer sees it. Considering the high-speed flow of information, with spoiler-inducing text messages and alerts, something better was needed.


How to Fix HLS High Latency

You might think that you could simply reduce the segment size. This works up to a point, as long as you correspondingly shorten the interval between key frames in the video. Unfortunately, the gains are eventually erased by the cost of those extra key frames. Each segment starts with an intra-coded frame (IDR frame), which is much larger than the predicted frames (P-frames) that follow it. Inserting key frames more frequently reduces the compression efficiency of the video, so each additional key frame adds bandwidth without adding value. At that point it would be easier, and nearly as effective, to just send JPEG images in sequence.
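A toy model makes this trade-off visible. The frame sizes and frame rate below are invented for illustration; real encoder output varies widely:

```python
FPS = 30                  # assumed frame rate
IDR_BITS = 400_000        # assumed size of a key frame (IDR)
P_BITS = 40_000           # assumed size of a predicted frame (P)

def bitrate_kbps(segment_seconds):
    """Average bitrate when every segment must start with one IDR frame."""
    frames = int(FPS * segment_seconds)
    bits = IDR_BITS + (frames - 1) * P_BITS
    return bits / segment_seconds / 1000

for seconds in (6, 2, 0.5):
    print(f"{seconds} s segments: ~{bitrate_kbps(seconds):.0f} kbps")
```

In this model, shrinking segments from 6 seconds to half a second raises the average bitrate by roughly half, purely from the extra key frames, while the picture quality gains nothing.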

Instead, the HLS.js community joined forces with others, including Mux, JW Player, Wowza, Elemental, and Akamai, to collaborate on a community-driven approach to low-latency streaming over HLS. The result was Community Low-Latency HLS (LHLS).

The first iteration was deployed by Twitter’s Periscope in mid-2017 for use on their own platform. The goal of LHLS is to provide latency in the 2-to-5-second range while remaining scalable and backward compatible with standard HLS, so players can fall back to it if needed.

LHLS is able to reduce the latency by using two approaches:


#1: Leveraging HTTP/1.1 Chunked Transport for Segments

Chunked transport allows segments to be downloaded while they are still being created. Accessing the frames as the encoder delivers them avoids having to wait for multiple seconds of video to accumulate in a buffer. Furthermore, if a player requests a segment that is unavailable or does not yet exist, that segment will automatically be sent as soon as it is available.
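On the player side, incremental delivery boils down to reading the response in small pieces as they arrive instead of waiting for the whole file. A minimal sketch in Python (the stream_segment helper is hypothetical; a real player would hand each piece to its demuxer and decoder):

```python
import urllib.request

def stream_segment(url, on_chunk, chunk_size=16 * 1024):
    """Download a segment incrementally, passing each piece to on_chunk
    as soon as it arrives rather than buffering the whole file."""
    received = 0
    with urllib.request.urlopen(url) as resp:
        while True:
            piece = resp.read(chunk_size)  # returns whatever has arrived
            if not piece:                  # empty read means end of stream
                break
            on_chunk(piece)                # e.g. feed the decoder
            received += len(piece)
    return received
```

With HTTP/1.1 chunked transfer encoding, resp.read can return data before the server has finished writing the segment, which is exactly what lets playback start early.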

This approach is very similar to CMAF’s chunked delivery, with the main difference being that LHLS uses the MPEG Transport Stream container.


#2: Announcing Segments Before They Are Available

This approach looks to decrease the latency added by the buffer offset. One reason for this offset is that a player needs to load both the manifest and the actual segment before it can fill up its buffer. As such, a player should know the location of a segment as soon as possible.

By anticipating segment creation and listing the locations of future segments in advance, a player knows which files it will need to load. As it downloads the current segment over chunked transport, it can request the next segment as soon as the current one has fully downloaded. Because segments are announced early, players that are not LHLS-capable can still play the stream as a normal HLS stream while still seeing an improvement in latency.
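In manifest terms (a hypothetical example in the spirit of the community LHLS drafts, rather than any one implementation), segments can appear in the playlist before the encoder has finished them:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:4
#EXT-X-MEDIA-SEQUENCE:120
#EXTINF:4.000,
segment120.ts
#EXTINF:4.000,
segment121.ts
#EXTINF:4.000,
segment122.ts
#EXTINF:4.000,
segment123.ts
```

Here segment122.ts might still be mid-encode (delivered via chunked transfer as it is produced) and segment123.ts may not exist yet at all; a legacy HLS player simply requests the entries in order and still benefits from the fresher playlist.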

The result of these two approaches is simple but effective. It removes both the latency introduced by segmenting the video stream and the latency introduced by the manifest. Additionally, the approach is scalable over standard CDNs, provided they support HTTP/1.1 chunked transfer.


What about Real-Time Latency?

While LHLS is a step toward low-latency video streaming and a considerable improvement over the latencies attainable with standard HLS, it is still far from suitable for real-time live video experiences.

As we’ve mentioned before, you can only have real-time streaming if the latency is under 500 milliseconds. In today’s world of instant information and constant communication, every second (even every partial second) counts. Drone surveillance, social media chat, live auctions, and live event broadcasting, among many other use cases, all require real-time latency.

That’s why Red5 Pro integrated WebRTC. Our sub-500-millisecond latency is the only way to achieve true real-time live streaming. Importantly, Red5 Pro maintains that performance even when scaling to millions of broadcasters and subscribers.

For a more in-depth view of how Red5 Pro works along with all the live streaming protocols in general (WebRTC, WebSockets and MSE, CMAF, and more), take a look at our whitepaper.


Looking for something else? Send an email to info@red5pro.com or schedule a call.
