As streaming broadcasters know, end-users don’t care about how their content reaches them; they care about what it looks like. They judge the quality of their OTT, premium pay TV and broadcast services on their actual viewing experience: whether there is high latency, poor image quality or buffering. Delivering a poor OTT live streaming service to customers paying premium prices can therefore severely damage a brand’s image and hurt revenues.
What’s more, customers are increasingly paying for OTT. In fact, Conviva’s 2018 All-Screen Streaming Census report indicated that global TV streaming audience growth more than doubled in 2018 compared to 2017. With streaming audiences growing at an accelerating pace, companies need to come as close as possible to matching broadcast quality, or even exceed it, if they want to survive and thrive in the fiercely competitive OTT live streaming market.
How do we measure ‘near broadcast quality’ OTT live streaming?
The three key factors in judging the quality of an OTT stream are latency, quality of experience (QoE) and quality of service (QoS). On the first, glass-to-glass latency in broadcasting is typically around 8-10 seconds, so OTT content providers need to match that figure or better it. QoE and QoS, meanwhile, are interlinked, and most large broadcasting corporations have long deployed monitoring solutions of varying complexity to measure both, improve stream quality and give their customers the best possible experience. Let’s take a closer look at these two main quality indicators below:
What are QoE and QoS?
QoE is what the customer experiences on their stream (e.g. image quality, buffering). It can be measured via customer satisfaction surveys or other subjective metrics, and it is extremely important to a broadcaster’s profits. The global reach of online content means broadcasters are serving a more international, widely dispersed customer base, making QoE more important than ever. The accelerating mass consumption of OTT content, combined with continued improvement in streaming technology, also makes QoE the big differentiating factor for consumers and providers alike.
QoS is a more objective measurement. It refers to the performance of a broadcasting network and covers factors such as uptime, the probability of downtime, error rates, bandwidth and latency. QoS focuses on what happens between the IP network and the streaming application rather than on the end-users themselves. When it comes to customer loyalty and brand reputation, QoS is crucial for providers of traffic-heavy content such as live-streamed sports and music events, and poor QoS greatly impacts QoE.
Why is QoE so important to the broadcasting industry?
The ever-evolving landscape of OTT broadcasting means viewers now take the experience with them: audience figures for Super Bowl LIII, for instance, show the device landscape continually shifting towards mobile devices and connected TVs. If content stalls, buffers or fails to play, consumers have a wealth of alternatives to turn to. The result is that a broadcaster providing poor QoE will see users churn away in high numbers, leading to a serious downturn in revenue. Even if disgruntled customers don’t churn, the sizeable call-centre cost that comes with customer complaints is likely to put a dent in profits. Forrester estimates that servicing a customer complaint costs $12 per call on average, so a serious QoE issue during a major sporting event that prompts 100,000 people to complain could cost $1.2 million in call-centre expenditure alone.
Let’s look at the next big opportunity in OTT cloud streaming: the high-quality live broadcast event. Just a few minutes or even seconds of downtime during a sporting event could see a huge chunk of the viewership lose faith and go elsewhere. Imagine a viewer paying $35 a month to watch their favourite football team, only to miss the game-winning penalty in the Champions League final because of a broken stream. To fight this, a comprehensive monitoring system can proactively monitor and troubleshoot the stream, as well as trace and help resolve issues from glass to glass. Having a monitoring system in place is therefore critical to preventing these damaging outcomes and maintaining high QoE. Proper OTT live streaming analysis allows providers to detect problems before they reach the end-user, safeguarding the quality of live events that can reach millions of viewers. This will become the key to a seamless QoE in a challenging climate, given ever-more demanding audiences and an increasingly competitive streaming market.
How do we get closer to broadcast quality with OTT live streaming?
While streaming feels ubiquitous in 2019, many early providers that set out to test customer uptake and establish market share still run basic development-grade monitoring infrastructure. In other words, they only monitor basic metrics that simply confirm whether content is being delivered. This approach is like waiting until your car breaks down before addressing its problems. Such low visibility into content quality, where major errors cannot be spotted and prevented before they happen, is unsustainable.
Access to consistent metrics for diagnosing the delivery chain, and the ability to cut the time needed to fix complex issues, is crucial for providing users with the QoE they demand and deserve. The question is:
Active monitoring or passive in-player monitoring?
There are two prevalent types of monitoring: passive in-player monitoring and active synthetic monitoring (such as Touchstream’s own StreamCAM content availability monitoring). Each has its benefits when it comes to achieving near-broadcast quality.
Passive in-player monitoring software inserts trackers into the player module to collect data on content playback on the client device in real time. This information is then sent back to an online collection point for processing, and collection is usually only active during video playback. This type of monitoring lets you see your viewers’ playback performance in real time: if an error occurs, it shows how many viewers are affected and the extent of the issue.
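To illustrate the collection side, counting how many viewers each error affects reduces to grouping error beacons by distinct playback session. The sketch below assumes a hypothetical beacon schema (session_id, event, detail); it is not any particular player SDK’s format.

```python
from collections import defaultdict

# Minimal sketch of a passive in-player monitoring collection point.
# The beacon fields (session_id, event, detail) are illustrative only.

def aggregate_beacons(beacons):
    """Group playback-error beacons by error type, counting distinct sessions."""
    affected = defaultdict(set)
    for beacon in beacons:
        if beacon["event"] == "error":
            affected[beacon["detail"]].add(beacon["session_id"])
    return {error: len(sessions) for error, sessions in affected.items()}

# Example beacons reported by three viewer sessions.
beacons = [
    {"session_id": "s1", "event": "play", "detail": ""},
    {"session_id": "s1", "event": "error", "detail": "buffering"},
    {"session_id": "s2", "event": "error", "detail": "buffering"},
    {"session_id": "s3", "event": "error", "detail": "manifest_404"},
]
print(aggregate_beacons(beacons))  # {'buffering': 2, 'manifest_404': 1}
```

In practice the same aggregate would be broken down further (by CDN, region, device class) to scope the extent of an incident.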
The drawback, however, is that passive in-player monitoring operates only while an end-user is viewing the content. This means you’re only notified of problems once users are already experiencing them, or have been for some time. In addition, passive in-player monitoring doesn’t identify the root cause of an issue, which could originate anywhere in the streaming chain: you’ll never know whether the problem arose at the origin, in the encoder, within the CDN, or at the end-user’s ISP or Wi-Fi router.
Active monitoring essentially simulates a viewer continuously playing the broadcaster’s content from a specific location so that the stream can be tested 24/7. The sequence a player goes through during video playback is recorded and then executed repeatedly by ‘robots’, and the resulting data is collected centrally for processing.
With active monitoring, end-users don’t have to be viewing content for issues to be detected, so problems can be spotted early and fixed before they have a lasting impact on a broadcaster’s QoE. Touchstream’s active monitoring, for example, checks every bitrate of every channel and encoded format, all monitored from high-quality PoPs on diverse transit networks.
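The core of an active check on an HLS channel can be sketched as: parse the bitrate renditions out of the master playlist, then probe each one on a schedule. This is a minimal sketch, not Touchstream’s implementation; the probe function here is a stand-in for a real HTTP request issued from a monitoring PoP.

```python
# Minimal sketch of an active (synthetic) check across every bitrate of an
# HLS channel. The playlist text and probe function are illustrative.

def parse_variants(master_playlist: str):
    """Return the rendition URIs listed after each #EXT-X-STREAM-INF tag."""
    lines = master_playlist.strip().splitlines()
    return [lines[i + 1] for i, line in enumerate(lines)
            if line.startswith("#EXT-X-STREAM-INF")]

def check_channel(master_playlist: str, probe):
    """Probe every bitrate rendition; return the URIs that failed."""
    return [uri for uri in parse_variants(master_playlist) if not probe(uri)]

master = """#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000
low.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000
mid.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=6000000
high.m3u8"""

# Simulated probe: pretend the mid-bitrate rendition is unreachable.
failures = check_channel(master, probe=lambda uri: uri != "mid.m3u8")
print(failures)  # ['mid.m3u8']
```

Because the check runs whether or not anyone is watching, a failing rendition surfaces immediately rather than when viewers start complaining.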
So, when it comes to active monitoring versus passive in-player monitoring, it isn’t as simple as choosing one over the other. The best approach is, in fact, to deploy both, as the two complement rather than compete with each other. By leveraging the benefits of both, you can monitor as many points as possible across an OTT live streaming workflow, and be confident of a QoE that rivals even broadcast.
Other issues that can be fixed fast
Another useful way to prevent playback disruption in the first place is to build redundancy into the production workflow using multiple source encoders. With automated failover, if an encoder stops working or disconnects from the transcoder, another source can continue to supply the video. By monitoring both redundant paths, operators can fail over either automatically or very quickly, and then repair the failed path, ensuring no impact on viewers.
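The failover decision itself is simple priority logic: take the first source whose health check passes. The sketch below assumes hypothetical encoder names and an out-of-band health check; a real deployment would wire this to the transcoder’s actual input-switching mechanism.

```python
# Minimal sketch of automated failover between redundant source encoders.
# Source names and the health-map shape are illustrative assumptions.

class FailoverSelector:
    """Pick the first healthy source, in priority order (primary first)."""

    def __init__(self, sources):
        self.sources = sources  # e.g. ["encoder-a", "encoder-b"]

    def select(self, health):
        """health maps source name -> bool from a separate health check."""
        for source in self.sources:
            if health.get(source, False):
                return source
        return None  # no healthy path left: alert the operations team

selector = FailoverSelector(["encoder-a", "encoder-b"])
print(selector.select({"encoder-a": True, "encoder-b": True}))   # encoder-a
print(selector.select({"encoder-a": False, "encoder-b": True}))  # encoder-b
```

Running this selection on every health-check tick means the switch to the backup path happens as soon as the primary is reported down, and the stream switches back once the primary is repaired.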
A key expectation for viewers in 2019 will be low latency. Broadcast glass-to-glass latency is typically 8-10 seconds, and applying a best-in-class low-latency CMAF solution can help OTT live streaming reach a similar figure. Low-latency CMAF delivers the lowest latency currently achievable, but it makes the process more prone to error, which makes monitoring and redundancy doubly important.
Finally, it is important to bring all data together visually in the monitoring process, something that has historically been difficult given the complex workflow of OTT live streaming. Touchstream’s end-to-end live stream monitoring provides one dashboard for the entire delivery chain, capable of integrating data from external sources including other monitoring tools, allowing a quick response and a 24/7 view of issues that could impact QoE.
In OTT live streaming, time is everything. That is why it is crucial to pinpoint problems and fix them fast. If we fail to do so, we risk losing viewers to any one of a multitude of streaming alternatives. Thankfully, today we have enough options for high-quality redundant encoders coupled with specialist monitoring so that we can track these problems and, crucially, solve them even before they become an on-screen issue for the viewer.