The Broadcasters' Desktop Resource

Successful Live-OTT Streaming Begins with Monitoring

Cyrus Uible
[October 2022] Whether your stream is audio or video or both, it is critical that the streamer know how it sounds or looks at the listener or viewer’s end. Good monitoring includes knowing what to look for and how to hear/see the results.

If you are involved in the engineering side of Live OTT production, you will know this competency requires extensive working knowledge across many systems and the ability to solve complex problems.


If you think about the digital supply chain from acquisition all the way to delivery out to the consumer, your touch points are numerous.

For example: in order to broadcast a live sporting event, multiple facilities (on-site and remote), vendors, technology providers and end-user consumer devices are all at play. You may have twenty plus cameras feeding streams to a remote production truck that in turn is sending feeds to a broadcast facility potentially hundreds of miles away.

Once processed at the broadcast facility, the content is encoded into numerous feeds of streamable data and prepared as deliverables compatible with various screen sizes and different bandwidth capabilities. The packaged content is then sent over IP via a content delivery network (CDN) to end-user consumer devices such as Fire sticks, Roku boxes, Apple TVs, Smart TVs, and mobile devices, completing the chain.


In addition, there could be 20-plus technology vendors involved across the various applications and workflows.

A capable engineering team will require the system and product knowledge for many types of technology including but not limited to cameras, switches, encoders, origin servers, IP video probes, Windows servers, Linux servers, virtual servers, etc.

The exponential growth of OTT platforms, as well as the insatiable demand for content, has also helped drive cloud infrastructures and their associated solutions by virtualizing equipment (except maybe for cameras) and supporting workflows that are more easily scalable at considerably reduced operational costs.

This is a rather over-simplified summary and example of the Live OTT production chain, but it helps illustrate the type of workflow system and process where and how media files are produced, managed, and delivered.


As the media and entertainment industry continues to evolve and shift to more all-IP or mostly-IP workflows, monitoring, which was once an afterthought, is now a mandatory part of the digital media supply chain.

This realization ensures that fundamental information and insights are shared across all stakeholders in the Live OTT production workflow.

What does this mean in practice? In the case of technology vendors and the engineering and maintenance teams that work with the equipment, the requirement is a cohesive monitoring environment that offers a central dashboard to monitor and analyze devices, track the availability of services, and provide the information needed to determine the health of the flows themselves.


This visibility should also include the tools involved along the way, and the underlying infrastructure that is making it all possible.

Plus, all of the information should be accessible remotely. Regardless of whether the information is delivered by SNMP, web API, message bus, syslog, or any other protocol, it is key that an NMS (Network Management System) or other entity can access the information and alert the appropriate team members if and when problems are discovered.
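To make that idea concrete, here is a minimal Python sketch of how alarms arriving over different protocols might be normalized into one shape an NMS can route. All field names, severity labels, and message layouts below are illustrative assumptions, not any vendor's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Alarm:
    """Protocol-agnostic alarm record an NMS could route to on-call staff."""
    source: str      # device or service that raised the alarm
    severity: str    # e.g. "critical", "major", "minor"
    message: str

def from_snmp_trap(trap: dict) -> Alarm:
    # Map a decoded SNMP trap (field names here are hypothetical)
    # onto the common alarm shape.
    return Alarm(source=trap["agent"], severity=trap["level"], message=trap["text"])

def from_syslog(line: str) -> Alarm:
    # Assume a simplified "host severity message" layout for this sketch.
    host, severity, message = line.split(" ", 2)
    return Alarm(source=host, severity=severity, message=message)

def should_page(alarm: Alarm) -> bool:
    # Only notify a human for the highest severities.
    return alarm.severity in ("critical", "major")
```

Once every source is mapped into the same record, one routing rule covers the whole plant instead of one per vendor.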

Long gone is the luxury of multiple screens in an outside broadcast truck or center for each vendor and their associated alarms – there are simply too many data points. A core requirement is a centralized ‘pane-of-glass’ (if you will) that presents all alarms and performance data across all technologies throughout the entire operation, applications, and workflows.

Being able to monitor every part of a network is crucial

Since there are so many different technologies and vendors involved in a Live OTT Production, let us break it down into workflow segments.


When we discuss infrastructure in a Live OTT Production, we refer to all the physical servers and switches (among other signal processing devices and software).

Monitoring basics often include information on CPU load, free memory, available disk space, power supply health, and fan health, among others. For example: if a disk is full, it can cause havoc on the software applications that rely on it.
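The disk-space example above can be sketched in a few lines of Python using only the standard library. The 10% free-space threshold is an illustrative assumption; real thresholds are site-specific.

```python
import shutil

def disk_alarm(path: str, min_free_fraction: float = 0.10):
    """Return an alarm string if free space on `path` drops below the given
    fraction of total capacity, else None. Threshold is illustrative."""
    usage = shutil.disk_usage(path)
    free_fraction = usage.free / usage.total
    if free_fraction < min_free_fraction:
        return f"{path}: only {free_fraction:.1%} free (threshold {min_free_fraction:.0%})"
    return None
```

An NMS agent would run a check like this on a schedule and raise the returned string as an alarm before the full disk takes an application down with it.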

Other monitoring metrics may include specific appliance hardware devices such as VPN concentrators, fiber encapsulators, satellite receivers, compliance monitors, graphics engines, production switchers, and firewalls to name a few.


Content is flowing from cameras all the way to the home – every part of the digital media content chain.

What kind of monitoring are we talking about here? Well, it is the alarms related to the content itself: the audio, the video, and, not to forget, the ancillary data.

Regardless of whether the signal is SDI baseband or multicast IP, or a combination of both, content alarms require a rigorous monitoring set. Whether they detect loss of signal, video freeze, video black, audio silence, or QoE (quality of experience) issues, these alarms provide the operator with a quick health check of the video without actually having to dig around and look at a feed on a screen.

Looking at feeds manually is, at best, a last resort in a facility with hundreds or even thousands of streams – far from ideal. Content alarms themselves can come from some of the devices already being monitored for infrastructure alarms, such as IRDs, encoders, decoders, and compression systems.
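One of the simplest content alarms mentioned above, audio silence, can be sketched as an RMS-level check over a block of samples. This is a minimal Python illustration, not any product's detector; the -60 dBFS threshold is an assumption, and real detectors also require the silence to persist for a configured duration before alarming.

```python
import math

def is_silent(samples, threshold_db: float = -60.0) -> bool:
    """Flag a block of normalized audio samples (-1.0..1.0) whose RMS level
    falls below `threshold_db` dBFS. The -60 dB default is illustrative."""
    if not samples:
        return True
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms == 0.0:
        return True  # digital silence; log10 would be undefined
    return 20 * math.log10(rms) < threshold_db
```

Video black and freeze detection follow the same pattern: compute a cheap per-frame statistic (mean luma, frame-to-frame difference) and alarm when it stays past a threshold.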

Especially popular in multicasting environments, IP video probes analyze multicast traffic and communicate to the operator immense amounts of detail as it relates to each stream or the audio/video quality in those streams.


The end deliverable to the consumer is directly related to the health of the networks, which lies in between the content source and the at-home (consumer) experience.

Always a consideration for Live OTT Production, capacity and latency are great concerns along the digital media chain. It is no longer a matter of if a lack of bandwidth will impact that consumer, but how badly. Hence, a priority strategy is to ensure ample bandwidth across the entire digital media chain.

Live OTT Production planning phase

An obvious data point to know during the planning phase is the number of streams and at what bitrates they are streaming – a truly critical piece of information. Once content goes into production it is a matter of monitoring and being alerted of any bottlenecks in the workflow.
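The planning arithmetic above is simple but worth writing down: sum the per-stream bitrates and leave headroom. A minimal Python sketch follows; the 30% headroom figure is an assumption for illustration, not an industry rule.

```python
def required_bandwidth_mbps(stream_bitrates_mbps, headroom: float = 0.3) -> float:
    """Sum per-stream bitrates (Mbps) and add a safety headroom.
    The 30% headroom default is an assumption, not a standard."""
    return sum(stream_bitrates_mbps) * (1 + headroom)

def fits_capacity(stream_bitrates_mbps, link_capacity_mbps: float,
                  headroom: float = 0.3) -> bool:
    """True if the planned streams, plus headroom, fit on the link."""
    return required_bandwidth_mbps(stream_bitrates_mbps, headroom) <= link_capacity_mbps
```

For example, twenty 50 Mbps camera feeds need about 1,300 Mbps with 30% headroom – so a 1 Gbps link is already a bottleneck before the event starts.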


Bandwidth utilization is the most straightforward way to monitor network health.

Monitoring this element of the network should cover as many devices and probes within a network as possible, to provide a strong set of metrics to analyze and follow.

If an operation can visually monitor metrics such as packet loss percentage, jitter, round trip latency, and be alerted of any that are creeping outside of nominal ranges, then you have a pretty good sign of impending issues. This type of monitoring varies, whether it is through independent hardware devices or network devices – all designed to specifically monitor these types of metrics.
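The jitter metric mentioned above has a standard running estimate: RFC 3550 moves the estimate 1/16 of the way toward each new transit-time difference. Here is a minimal Python sketch of that update plus a threshold check; the nominal limits in the test are illustrative, since acceptable ranges are site-specific.

```python
def update_jitter(jitter: float, transit_prev: float, transit_now: float) -> float:
    """RFC 3550 running interarrival-jitter estimate: move 1/16 of the way
    toward the latest absolute transit-time difference (units: seconds)."""
    d = abs(transit_now - transit_prev)
    return jitter + (d - jitter) / 16.0

def out_of_range(value: float, nominal_max: float) -> bool:
    """Alert when a metric creeps past its nominal ceiling."""
    return value > nominal_max
```

Packet-loss percentage works the same way: compute it per interval, compare against a nominal ceiling, and alarm on the trend rather than waiting for a visible failure.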

In addition, operators may choose to analyze network traffic directly using flow-export technologies such as NetFlow from Cisco or sFlow, to name two examples.

Monitoring network traffic gives the user a good idea of which applications or streams are generating the traffic, and where they are coming from and heading to – this kind of information is most useful when bandwidth capacities are nearing their limits.
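The typical first question asked of flow data is "who are the top talkers?" Here is a minimal Python sketch of that aggregation over exported flow records; the `(source, destination, bytes)` record shape is an illustrative simplification of what NetFlow or sFlow collectors actually export.

```python
from collections import defaultdict

def top_talkers(flow_records, n: int = 3):
    """Aggregate flow records (source, destination, byte_count) by source
    and return the n heaviest senders. Record shape is illustrative."""
    totals = defaultdict(int)
    for src, _dst, nbytes in flow_records:
        totals[src] += nbytes
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]
```

When a link nears capacity, a ranking like this points directly at the stream or application to investigate.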


Anyone who has troubleshot an outage knows that the root cause is often an error in the workflow's configuration.

This could be anything from entering an incorrect compression rate, port name, IP address, label description, or destination address – the list goes on! With all of the moving pieces in Live OTT operations, time is of the essence.

Remote monitoring of all the configurations can save enormous amounts of time and stress, leading to smoother and more reliable operations. There simply is not enough time to manually check all the configurations before a live event starts, which means it has to be done automatically.
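An automated pre-event config check can be as simple as validating each field against its legal range. The sketch below is a minimal Python illustration; the field names (`destination`, `port`, `bitrate_mbps`) are hypothetical, not a standard schema.

```python
import ipaddress

def validate_stream_config(cfg: dict) -> list:
    """Collect human-readable problems in a stream configuration.
    Field names and limits here are illustrative, not a standard schema."""
    problems = []
    try:
        ipaddress.ip_address(cfg.get("destination", ""))
    except ValueError:
        problems.append(f"bad destination address: {cfg.get('destination')!r}")
    port = cfg.get("port", 0)
    if not (1 <= port <= 65535):
        problems.append(f"port out of range: {port}")
    if cfg.get("bitrate_mbps", 0) <= 0:
        problems.append("bitrate must be positive")
    return problems
```

Run across every device before air, a check like this catches the fat-fingered IP address while there is still time to fix it.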

ST2110 Specific Configurations

To get at the real health of the flows in each content stream, operators require the capability to monitor and inspect the SDP (Session Description Protocol) – this includes feed and flow discovery in the network and comprehensive ST2110 monitoring, including main and redundant signals.

This addresses the diverse complexities of an IP workflow.

Other Data

The above segments cover a wide range of monitoring requirements for a successful Live OTT production environment and workflow, yet there are other areas of monitoring that can still be addressed.

For example: PTP (Precision Time Protocol), a synchronization requirement for ST2110; SCTE data, including ad-insertion; custom databases; environmental metrics such as power availability from smart PDUs, temperature or AC health; and secure-area access – all examples of data that could be important to an organization.


To bring a monitoring environment together in a single, centralized ‘pane-of-glass’ view and deliver real-time monitoring of an entire (and often complex) ecosystem requires an enterprise-class NMS (Network Management System), such as Kybio.

Ideally, such a tool should be vendor agnostic, protocol agnostic, secure, customizable, user-friendly, configurable, and cloud capable. This type of operational continuity for the entire digital media chain, from acquisition all the way to delivery to the end user, empowers businesses to centrally monitor and control devices, applications, and network health.

Monitoring is no longer an after-thought; it is imperative that a network management system and all of its monitoring capabilities be integrated at the beginning of a project and scaled along the way, rather than bolted on later.

Addressing monitoring requirements in the early stages of an infrastructure expansion or new build-out will flush out monitoring blind spots while they can still be addressed – with the goal to bring visibility to the entire workflow in a single and centralized screen.

– – –

For more information on Kybio, click here.

– – –

Cyrus Uible is a Solution Architect at WorldCast Systems.

– – –
