The Broadcasters' Desktop Resource

Preparing for Streaming Services

By Mark Shander

[February 2024] One very popular conversation regarding streaming, whether from a broadcast company, a university website, a corporate site, or a streaming service (from YouTube and Spotify to Netflix, Apple Music, StreamGuys, and others), is the audio quality of the stream.

Keeping clean, consistent audio running on your service and server(s) starts with giving your program producers and providers specific guidelines for preparing audio for your streaming workflow.
Issues ranging from phase cancellation to peak distortion, artifacts, and many other aural challenges can make even the prettiest video seem unwatchable, or unlistenable.

MINIMIZING YOUR AUDIO CHALLENGES IN STREAMING

Understanding some of the most common challenges – and, more specifically, how your experience with analog audio can be of value here – is critical to potentially modifying your workflow for streaming.
As with audio production for what is widely being called “linear media” (terrestrial audio and video broadcasting), winning protocols are often kept close to the vest by many successful engineers and producers, especially in the music industry. We recently read in the BROADCAST (BC) email forum, and heard in the Q&A of a spectacular StreamGuys-presented Thursday Lunch Gathering here on TheBDR.net, how audio quality continues to be a predominant concern when streaming content.

Using proactive techniques to help ensure your files arrive optimized addresses many of the causes of poor audio, and it is always better to fix the signal flow path from point to point than to treat the symptom in post-production or the QC workflow. It also strengthens quality control – which is exactly what audio processing does in our air chain when we transmit content.

TOOLS TO CONTROL THE AUDIO FLOW

In this column, I will share a few important approaches you may not have been aware of, from methods for making sure all your audio files are consistent in volume and loudness, to what to do when they are not and setting standards is outside of your control.
One of the best procedures, especially if you are a broadcast engineer, is to build on the example above, where I compared streaming audio and video to transmitting it. In broadcasting, receiving a poor quality “dub” from a distributor or physical music source meant that reordering or requesting a replacement was necessary. From a workflow perspective, that led to airings of re-feeds, bicycled programming from other stations when time-to-air was cut close, and other “quick fixes” that, in some cases, required remastering.

In streaming, by comparison with broadcasting, audio hitting your servers is like programming feeding the transmitter, and automating the process of level control can be inexpensive if planned for at the workflow stage.

MEETING CHALLENGES

The challenge is that, since an encoding origin server is like a transmitter, it should be considered to sit after the audio processing chain.

Yes, some transmitters give engineers audio dynamics control within the transmitter itself. However, streaming servers often do not for several reasons, including digital transparency, synchronization of audio and video, and maximizing control of content quality over their network.
Publishing a guide to the production-phase workflow for preparing the audio portion of a digital file is critical to unifying audio results, with a minimum of artifacts and a maximum of clear, clean audio that the human ear perceives as listenable over the long term. It increases CUME and ultimately helps minimize negative issues detected in the quality control phase.

QUALITY CONTROL

Quality Control should always be a step in your workflow, even if it consists only of loading a tool or plugin (I like to use StereoTool, for example) to help make sure that phase, bass and treble balance, compression and limiting, dynamic range, clipping, and many other aspects of the audio are dealt with.
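
As a sketch of what such a QC pass might check automatically, here is a short Python example – assuming the numpy and soundfile packages, with a hypothetical file name – that reports the sample peak and flags potential clipping:

```python
import numpy as np
import soundfile as sf  # pip install soundfile

# Load the file to QC (the file name here is hypothetical)
data, rate = sf.read("program_segment.wav")

# Sample peak in dBFS; audio creeping up to full scale suggests clipping
peak = np.max(np.abs(data))
peak_dbfs = 20 * np.log10(peak) if peak > 0 else float("-inf")

# Count samples at or near full scale, a common clipping symptom
clipped = int(np.sum(np.abs(data) >= 0.999))

print(f"Sample peak: {peak_dbfs:.2f} dBFS")
print(f"Samples at or near full scale: {clipped}")
```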

Some tools that offer Command Line Interfaces (CLI) can even make sure that everything you queue for uploading directly to a server or streaming service is pre-processed before you QC it.

In fact, this is part of the workflow built into a lot of audio-only radio automation software, which processes all uploads for audio dynamics control.
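
As one illustration of the idea, here is a minimal Python sketch – the folder names are hypothetical, and it assumes FFmpeg (with its loudnorm filter) is installed – that pre-processes a queue of files to a consistent loudness before QC:

```python
import subprocess
from pathlib import Path

# Hypothetical queue and output folders
QUEUE = Path("upload_queue")
READY = Path("ready_for_qc")
READY.mkdir(exist_ok=True)

for src in QUEUE.glob("*.wav"):
    dst = READY / src.name
    # Single-pass loudnorm: -14 LUFS integrated, -1 dBTP true peak ceiling
    subprocess.run([
        "ffmpeg", "-y", "-i", str(src),
        "-af", "loudnorm=I=-14:TP=-1:LRA=11",
        "-ar", "48000",  # keep a sane output sample rate
        str(dst),
    ], check=True)
```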

LEVELS

It may surprise you to know that there are standards for consistent level control of all audio content uploaded to services like Spotify, Apple Music, YouTube, Amazon, and the like – and these services will frequently enforce those standards if you do not.

With a search for “Preparing for Streaming Services” followed by the name of the service you need specifics for, you can find these settings for wherever you stream content (when available).

Many of these services use LUFS rather than dB because LUFS is a perceptually weighted unit. When searching for software to prepare content for streaming services, software that measures the whole file, as opposed to moments in time, is considered to have the advantage, since it best captures perceived loudness. The service listings also frequently note whether limiting is used, whether the service auto-normalizes, whether it applies automatic gain control, and whether whole programs or individual tracks are normalized.
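
If you want to measure whole-file loudness yourself, here is a minimal sketch using the pyloudnorm package, which implements the ITU-R BS.1770 weighting behind LUFS; the file names and the -14 LUFS target (a commonly cited streaming figure) are illustrative assumptions, not any service's official requirement:

```python
import soundfile as sf
import pyloudnorm as pyln  # pip install pyloudnorm

data, rate = sf.read("episode.wav")  # hypothetical file

# BS.1770 meter: K-weighted, gated, measured over the whole file
meter = pyln.Meter(rate)
loudness = meter.integrated_loudness(data)
print(f"Integrated loudness: {loudness:.1f} LUFS")

# Apply one gain adjustment toward a -14 LUFS target
normalized = pyln.normalize.loudness(data, loudness, -14.0)
sf.write("episode_-14LUFS.wav", normalized, rate)
```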



NORMALIZATION VERSUS COMPRESSION

It is important to note that Normalization and Compression are different.

Consider Normalization to be like setting gain, often using peak levels, which helps ensure the audio signal does not distort by creeping too high into the headroom of your system. Consider Compression to be dynamic range control: it raises the perceived loudness of the span from the softest sound to the loudest sound, rather than forcing a listener to ride the volume control to compensate.
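
The difference is easy to see in code. Below is an illustrative numpy sketch – not any service's actual algorithm, and with made-up parameter values – showing that normalization applies one fixed gain to the whole file, while compression applies a gain that varies with the signal itself:

```python
import numpy as np

def peak_normalize(x, target_dbfs=-1.0):
    """Scale the whole signal so its highest peak sits at target_dbfs."""
    peak = np.max(np.abs(x))
    if peak == 0:
        return x
    gain = 10 ** (target_dbfs / 20) / peak
    return x * gain  # one fixed gain; dynamics are untouched

def compress(x, threshold_dbfs=-20.0, ratio=4.0):
    """Reduce level above the threshold, shrinking dynamic range.

    Simplified static curve: no attack/release smoothing.
    """
    thresh = 10 ** (threshold_dbfs / 20)
    mag = np.abs(x)
    over = mag > thresh
    out = x.copy()
    # Above the threshold, dB over the threshold is divided by the ratio
    out[over] = np.sign(x[over]) * thresh * (mag[over] / thresh) ** (1 / ratio)
    return out
```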

Monitoring the stereo imaging is important too. Stereo streaming does not have the same multipath issues as receiving analog stereo broadcasts, but it does have problems such as phase cancellation, and some programming encodes additional audio signals into multichannel content by out-of-phase encoding.
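
One quick screen for out-of-phase content is the correlation between the left and right channels: values near +1 are mono-compatible, while values near -1 will largely cancel when summed to mono. A minimal sketch, assuming a stereo file with a hypothetical name:

```python
import numpy as np
import soundfile as sf

data, rate = sf.read("stereo_mix.wav")  # hypothetical stereo file
left, right = data[:, 0], data[:, 1]

# Pearson correlation between channels, a rough phase/correlation meter
corr = np.corrcoef(left, right)[0, 1]
print(f"L/R correlation: {corr:+.2f}")
if corr < 0:
    print("Warning: significant out-of-phase content; a mono fold-down will cancel.")
```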

This is less of a challenge than in the past, thanks to updated codecs. However, we are looking at more of a global approach, and this is where the question of MP3 or AAC encoding comes into play.

MP3? AAC? OR WHAT?

How are your listeners getting the stream?

Do they listen on a cellphone speaker? Or in their music room? There is a wide range of possibilities.

Consider the least common denominator within your target audience; however, consider your most common denominator as well.

From mobile devices like phones to home theater systems and company meeting rooms, your target audience is important in deciding how best to prepare your signal. Each person involved should understand the differences between MP3 and AAC. Note that AAC supports up to 48 audio channels, sounds better at lower bit rates, and is part of the MPEG-2 and MPEG-4 ISO/IEC specifications.
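
If you settle on AAC, the encode can be scripted the same way as the level-control step earlier; here is a minimal sketch using FFmpeg's native AAC encoder, with the file names and the 128 kbps bit rate chosen purely for illustration:

```python
import subprocess

# Encode a prepared WAV to AAC in an M4A container (names hypothetical)
subprocess.run([
    "ffmpeg", "-y", "-i", "episode_-14LUFS.wav",
    "-c:a", "aac", "-b:a", "128k",  # FFmpeg's built-in AAC encoder
    "episode.m4a",
], check=True)
```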

Using the right tool to correct all uploaded audio helps keep your sound consistent regardless of program source – and of which codec you are using.

– – –

Mark Shander is a Contributing Editor of TheBDR.net. Based in Phoenix, AZ, Mark has experience in both on-air broadcasting and streaming, and in putting together the right equipment to produce the best possible program audio. Contact Mark at mark@thebdr.net

– – –

 
