Learn what bitrate, audio quality, audio codecs, sample rate, and bandwidth mean for live streaming in this blog.
What is Bitrate, Audio Quality, Audio Codecs, Sample Rate & Bandwidth?
Audio quality is an assessment of the accuracy, fidelity, or intelligibility of audio output from an electronic device. It depends on the bit rate, sample rate, file format, and encoding method, and also on the encoder's ability to get the essential bits right.
Bitrate describes the audio quality of the stream. We measure it in kilobits per second (kbps or k). Bitrate is the number of bits encoded per second, or the number of bits transferred or received per second.
A higher bitrate with a higher sample rate needs more bandwidth and gives better audio quality. A lower bitrate means a smaller file size and less bandwidth, with a drop in audio quality.
Audio takes up far less of your total bandwidth than your video bitrate. The generally accepted quality range is 96 to 320 kbps.
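To get a feel for these numbers, the data an audio stream consumes follows directly from its bitrate. A minimal Python sketch (the function name is illustrative):

```python
def audio_megabytes_per_minute(bitrate_kbps: float) -> float:
    """Data used by an audio stream per minute, in megabytes (1 MB = 10**6 bytes)."""
    bits_per_minute = bitrate_kbps * 1000 * 60
    return bits_per_minute / 8 / 1_000_000

# 128 kbps uses just under 1 MB per minute; 320 kbps (the high end
# of the quoted range) uses 2.4 MB per minute.
print(audio_megabytes_per_minute(128))  # 0.96
print(audio_megabytes_per_minute(320))  # 2.4
```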
Most encoders on the market use variable bitrate encoding. With a variable bitrate encoder, you set a target bitrate. Based on the amount of motion in your video content and your keyframe interval, the encoded bitrate of the stream will fluctuate above and below that target.
This is one of the main reasons why having sufficient headroom in your bandwidth is so important.
Higher-motion content needs a higher bitrate to achieve the same perceived quality. Using too low a bitrate can lead to poor or fluctuating picture quality for your viewers, and you will see a visible jump at each group of pictures (GOP).
If your available bandwidth is restricted, you should decrease both your resolution and your bitrate accordingly.
Audio Sample Rate
We measure sample rate as the number of samples taken per unit of time. Each sample records the amplitude of the signal waveform at an instant, so the sample rate determines how much information about the waveform is captured over a particular period.
The sample rate is also known as the sampling frequency. The higher the sampling frequency, the closer the digitized signal is to the original analog signal, and the better the audio quality; file size also grows with the sampling frequency. Bit depth is the number of bits in each sample, and it determines the maximum signal-to-noise ratio.
The bit depth may be 16-bit, 24-bit, or 32-bit; for audio CDs, 16-bit is standard. We measure the sample rate in hertz (Hz). According to the Nyquist sampling theorem, the sampling frequency needed to reconstruct the original waveform must be at least double the highest frequency in the signal.
Human hearing typically spans 20 Hz to 20 kHz, so audio should be sampled at a rate above 40 kHz (usually, 44.1 kHz is preferred).
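The Nyquist rule above can be sketched in a couple of lines (the function name is illustrative):

```python
def min_sample_rate_hz(max_signal_hz: float) -> float:
    """Nyquist: the sample rate must be at least twice the highest frequency captured."""
    return 2 * max_signal_hz

# Human hearing tops out around 20 kHz, so anything above 40 kHz works;
# both common production rates clear that bar.
print(min_sample_rate_hz(20_000))  # 40000
print(44_100 >= min_sample_rate_hz(20_000))  # True
print(48_000 >= min_sample_rate_hz(20_000))  # True
```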
48,000 Hz (48 kHz) is the highest widely supported sample rate. Most production equipment works at either 44,100 Hz or 48,000 Hz. It is recommended that you match the sample rate of your stream to that of your production tools. A mismatch in sample rates causes audio artifacts, including dropouts, clicking, pitch changes, and other problems.
Bandwidth measures the speed at which you send or receive data, and it depends on the bitrate at which the data is sent or received. The higher the bitrate, the more bandwidth is consumed, and the higher the cost to broadcasters.
As the bitrate rises, the amount of data streamed per second grows proportionally to reproduce the analog signal at greater bit depth, enlarging both the bandwidth and the file size required for high audio quality.
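For uncompressed PCM, the relationship between sample rate, bit depth, channel count, and bandwidth is straightforward multiplication. A small sketch (names are illustrative):

```python
def pcm_bitrate_kbps(sample_rate_hz: int, bit_depth: int, channels: int) -> float:
    """Raw (uncompressed) PCM bitrate: samples/s times bits per sample times channels."""
    return sample_rate_hz * bit_depth * channels / 1000

# CD-quality stereo: 44,100 Hz at 16 bits over 2 channels is about 1,411 kbps,
# roughly ten times a typical 128 kbps compressed stream.
print(pcm_bitrate_kbps(44_100, 16, 2))  # 1411.2
```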
Frame rate must always match the frame rate of the video source. NTSC-standard equipment commonly works at 30 fps, and in that case you must set the encoding parameters to match 30 fps. PAL-standard equipment usually works at 25 fps, and in that case the encoding parameters should match the source frame rate of 25 fps.
IBM Watson Media supports high frame rate (HFR) video: the platform can ingest streams with frame rates above 30 fps, up to 60 fps, and can also pass the original HFR stream through. If you use cloud transcoding, all lower-resolution renditions are generated at the lower frame rate specified in the configuration, for example 30 or 25 fps.
HFR video places extra stress on playback devices. Lower-end laptops or smartphones may stutter or stall when playing HFR video. As a result, many end users watching on low-powered computers and mobile devices will not be able to decode 60 fps streams correctly, which can cause playback issues.
- IBM Watson Media requires a keyframe interval of 1 or 2 seconds. The default setting on some encoders may differ, so you need to edit it to meet this specification for optimal adaptive bitrate performance and stream quality.
- Some encoders have settings such as auto keyframe interval or scene change detection. It is essential to disable these modes, as they can result in a random keyframe interval.
- Sending a stream with keyframes at intervals of more than 2 seconds, or at irregular intervals, can result in stream or recording failures. Please make sure you have a suitable keyframe setting to avoid this.
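Encoders express the keyframe interval either in seconds or in frames; the conversion is just interval times frame rate. A small sketch (names are illustrative):

```python
def keyframe_interval_frames(interval_seconds: float, fps: float) -> int:
    """Convert a keyframe interval in seconds to the frame count many encoders expect."""
    return round(interval_seconds * fps)

# A 2-second interval is 60 frames at 30 fps (NTSC) or 50 frames at 25 fps (PAL).
print(keyframe_interval_frames(2, 30))  # 60
print(keyframe_interval_frames(2, 25))  # 50
```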
Common Audio Codecs
There is a broad range of audio codecs available today. However, not all audio codecs are equally well supported; some devices may support one audio codec but not another.
Some favor quality, while others prioritize compression above all else. These are essential considerations when it comes to selecting the best audio codec for a particular situation. Here are some of the most common and best audio codecs.
The most popular audio format is probably MP3, formally known as MPEG-1/2 Audio Layer III. First introduced in the 1990s, MP3 transformed digital audio: files were much smaller than in older formats, allowing them to be streamed and downloaded over the internet.
A few years after MP3, AAC built on that format's success while improving compression efficiency. AAC generally gives better audio quality at the same bitrate as MP3, or similar quality at lower bitrates. AAC has been updated many times; the latest version is HE-AAC. Although not an open format, AAC is the most widely used audio codec on the internet today.
- WAV (LPCM)
WAV stands for Waveform Audio File Format and was first released more than 25 years ago. It is used mainly on Windows computers to store uncompressed audio in the LPCM format.
AIFF is a Mac format similar to WAV. It stores uncompressed audio using PCM (Pulse-Code Modulation). Like WAV files, AIFF files are huge, around 10 MB for one minute of a standard audio recording.
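The roughly 10 MB per minute figure can be checked from the sample rate, bit depth, and channel count. A small sketch (names are illustrative):

```python
def uncompressed_audio_mb(sample_rate_hz: int, bit_depth: int, channels: int,
                          minutes: float) -> float:
    """Size of uncompressed PCM audio (as stored in WAV/AIFF), in megabytes."""
    bits = sample_rate_hz * bit_depth * channels * minutes * 60
    return bits / 8 / 1_000_000

# One minute of CD-quality stereo (44,100 Hz, 16-bit, 2 channels)
# comes to about 10.6 MB, matching the figure above.
print(round(uncompressed_audio_mb(44_100, 16, 2, 1), 1))  # 10.6
```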
Another codec on the market is WMA (Windows Media Audio). This codec was developed as an alternative to MP3 but has since become somewhat of a legacy format.
Opus is not in widespread use yet, but it is viewed as a next-generation codec. It gives higher audio quality at all bitrates than the other codecs listed here. Opus also has the advantage of being royalty-free and open source.
The Best Audio Codecs
We believe that AAC is an excellent audio codec for most circumstances. AAC is supported by a broad range of devices and software platforms, including iOS, Android, macOS, Windows, and Linux. Other devices such as smart TVs and set-top boxes also support AAC.
Beyond its broad support, AAC also has the advantage of better audio quality than MP3. Formal listening tests usually show that AAC is the best codec available for general use. This may change in the future as Opus becomes more widely supported; however, hardware and software ecosystems change slowly. For internet video, AAC is the best audio codec for live streaming as well as video on demand. You configure it via settings in your hardware or software encoder.
Interlaced vs Progressive
- IBM Watson Media does not support the ingestion of interlaced video. You must deinterlace the picture before sending the stream to the IBM Watson Media ingest server.
- Sending interlaced video can result in stream or recording failures, or in reduced image quality.
- If your camera can only output an interlaced image, many encoders offer an option to deinterlace the video. Select this option before sending the stream to IBM Watson Media to avoid streaming problems.
Protocols for Streaming
IBM Watson Media currently uses three different types of protocols.
- The RTMP protocol is used mainly for ingesting streams from the source encoder.
- A proprietary fragmented MP4 protocol is used to deliver streams to the IBM Watson Media HTML5 player in some playback environments.
- HTTP Live Streaming (HLS) is used to deliver streams to iOS, Android, some desktop browsers, and other connected devices.
- Direct ingest of HLS streams is not supported at this time. Instead, IBM Watson Media cloud transcoding is used to generate the HLS versions from the incoming RTMP stream.
- See Delivery to desktop players and mobile devices for details on IBM Watson Media cloud transcoding and adaptive bitrate streaming, including the recommended resolutions and bitrates.
Recommended Network Settings
- For successful live streaming, you require a high-quality internet connection. A connection that is adequate for checking email or loading web pages may not be good enough for streaming; uninterrupted HD streaming in particular demands a fast, stable connection.
- Not all connections are of equal quality. Use a wired Ethernet connection rather than Wi-Fi; Wi-Fi connections are more sensitive to variations in quality and can drop more easily.
- Cellular (3G/4G/LTE) connections can be very unreliable. We strongly suggest using a hardwired Ethernet connection, or failing that Wi-Fi, rather than a cellular connection. But the type of connection is only one factor: in every context, it is most important to perform bandwidth tests ahead of time so you know you have enough bandwidth to stream.
- It is best to use a connection that is not shared with other users. For example, when streaming from a typical corporate office or event venue, the amount of bandwidth available for your stream may be inconsistent depending on the number of other users on the same network.
- We suggest asking your IT department to reserve bandwidth solely for the stream. If you have plenty of bandwidth and few users sharing it, this may not be needed. But if you find you are on a shared connection and cannot consistently get sufficient bandwidth, you may need to request this, or try to reduce how many other users are on the same link at the time of your stream.
- If you do not have a corporate network or an IT department to talk to, you can ask your internet service provider about purchasing a plan with a level of service suitable for streaming.
- When selecting your encoding settings, take your available upload bandwidth into account.
- A good rule is that the bitrate of your stream should use no more than 50% of your available upload bandwidth on a dedicated line. For example, if a speed test shows you have 2 Mbps of upload speed available, your combined audio and video bitrate should not exceed 1 Mbps.
- To stream in HD, you will need at least 3 to 8 Mbps of upload speed available.
- To measure available bandwidth, use a standard speed test.
- If you find that your stream regularly rebuffers, pauses, or disconnects, try a lower bitrate and resolution on your encoder.
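The 50% headroom rule above is simple arithmetic. A small sketch (names are illustrative):

```python
def max_stream_bitrate_kbps(upload_kbps: float, headroom_fraction: float = 0.5) -> float:
    """Cap the combined audio+video bitrate at a fraction of the measured upload speed."""
    return upload_kbps * headroom_fraction

# A speed test showing 2 Mbps (2,000 kbps) upload leaves room for a
# combined audio and video bitrate of at most 1,000 kbps (1 Mbps).
print(max_stream_bitrate_kbps(2000))  # 1000.0
```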
Encoder Hardware Recommendations: CPU Resources
- Make sure the encoding CPU/GPU can handle your encoding settings.
- HD streams and high-bitrate streams take significantly more CPU and GPU resources to process and encode.
- If your streams are choppy, pause and resume, show encoding artifacts, drop frames, or appear to play back at a lower frame rate than expected, these can all be signs that your CPU cannot keep up with the live video encoding.
- Decreasing the input resolution, or decreasing your stream's output resolution and bitrate, can fix these problems.
- Most encoders have an indicator that shows how much of the available resources you are using. Keep an eye on it and lower your settings accordingly if it shows you are nearing the maximum resources available.
- Low frame rate streams look bad. Unless you have very low-motion content, such as static slide images, it is usually better to stream at full frame rate with a reduced resolution, for example 640×360 at 30 fps, rather than 720p HD at only 12 frames per second.
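The resolution versus frame rate trade-off in the last point can be checked with simple arithmetic: 640×360 at a full 30 fps actually pushes fewer pixels per second through the encoder than 720p limping along at 12 fps, while looking far smoother. A small sketch (names are illustrative):

```python
def pixels_per_second(width: int, height: int, fps: int) -> int:
    """Raw pixel throughput the encoder must handle each second."""
    return width * height * fps

print(pixels_per_second(640, 360, 30))   # 6912000
print(pixels_per_second(1280, 720, 12))  # 11059200
```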
Choosing Live Encoder Settings, Bitrates, and Resolutions
It is essential to ensure your live stream is of high quality. Make sure you select settings that will result in a good stream given your internet connection.
We suggest running a speed test to check your upload bitrate. If you are using Events, you can select a variable-resolution stream key to gain the advantages of Stream now. You can also specify your desired resolution and frame rate manually.
YouTube will automatically transcode your live stream to generate various output formats so all of your viewers, across many devices and networks, can watch.
Make sure to test before you begin your live stream. Tests should include audio, and motion in the video, similar to what you will be doing during the stream. While the test is running, monitor the stream health and review any messages.
Although it is essential to live streaming, audio can be a confusing topic. Rest assured that once you start to understand the concepts, they will all come together.