Necessary Compromises in the Development of Live Video Broadcasting Systems
Handling and distributing video sources involves encoding, decoding, converting between formats, and adapting to different bit rates and resolutions. The Zynq UltraScale+ MPSoC is designed to respond flexibly to these needs.
The use of streaming video is growing steadily, reflected in the expansion of the live Internet video broadcasting market, driven by giants such as YouTube and Facebook. At the same time, the emerging eSports category has created a new kind of broadcast source, with thousands of live computer-game streams. All of these need encoding, decoding, and conversion between formats, bit rates, and resolutions, which calls for many conversions that bridge the incoming and outgoing signals.
Market trends dictate the requirements
At the same time, the transition to 4K high-definition video is accelerating. 4K screens are already found in many homes, and they pose a particular challenge to live IP streaming because of limited bandwidth. While 1080p is expected to remain the most common streaming format, the growing share of 4K-capable cameras and smartphones requires capturing 4K streams and reprocessing them for broadcast in a different, more compressed format. It is worth remembering that transcoding a 4K HEVC stream to other formats requires roughly five times the processing power of transcoding H.264 (also known as MPEG-4 AVC) signals.
Finally, the effort to reduce the total end-to-end latency of live broadcasts continues. In many cases the delay reaches a full minute, which calls for measures to shorten delay times in every network component. Many service providers must support multiple video streams and multiple encoding technologies. The H.264 standard is very common for compression, and its successor, H.265, is already entering the market.
Today's systems are expected to encode signals with one or both of these standards, and to encode the same signal simultaneously at different resolutions and transmission rates. On the performance side, the relevant metric is SWaP, short for "size, weight, and power consumption". Using dedicated, hardened encoding cores dramatically improves the SWaP figure. In addition, a smart solution is needed that can adapt to the network and be reprogrammed on demand.
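Encoding the same source simultaneously at several resolutions and bit rates is commonly organized as an "ladder" of renditions. A minimal sketch of the idea follows; the specific resolutions, bit rates, and codec pairings below are illustrative assumptions, not values from this article:

```python
# Sketch of a multi-rendition encoding ladder: one incoming stream is
# encoded in parallel at several resolution/bit-rate pairs. All values
# here are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rendition:
    width: int
    height: int
    bitrate_kbps: int
    codec: str  # "h264" or "h265"

# A typical ladder pairs each resolution with a target bit rate; HEVC is
# often reserved for the 4K rung, where its bandwidth savings matter most.
LADDER = [
    Rendition(3840, 2160, 12000, "h265"),
    Rendition(1920, 1080, 4500, "h264"),
    Rendition(1280, 720, 2500, "h264"),
    Rendition(640, 360, 800, "h264"),
]

def renditions_for_uplink(available_kbps: int) -> list[Rendition]:
    """Keep only the ladder rungs the outgoing link can actually carry."""
    return [r for r in LADDER if r.bitrate_kbps <= available_kbps]
```

For instance, with a 5 Mbit/s uplink, `renditions_for_uplink(5000)` drops the 4K rung and keeps the 1080p, 720p, and 360p encodes.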
The best solution is a successful compromise
In other words, we are looking for the flexibility to support many video streams, in different formats, under different network conditions, and for different needs. Take, for example, a camera covering a live sporting event. The available bandwidth in this case may be very limited, so image quality can be sacrificed to fit it. In a two-way video conference, where latency is critically important, the video is compressed more aggressively to meet the available bandwidth conditions.
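The trade-off described above can be made concrete: a latency-critical session gives up some compression efficiency, while a quality-critical broadcast tolerates more delay. The sketch below shows one hypothetical way to derive encoder settings from the constraints; the parameter names and thresholds are assumptions for illustration only:

```python
# Sketch of deriving encoder settings from application constraints.
# The 500 ms threshold, GOP lengths, and headroom factor are assumptions.

def pick_encoder_settings(max_latency_ms: int, bandwidth_kbps: int) -> dict:
    """Return a hypothetical settings dictionary for one encode session."""
    low_latency = max_latency_ms < 500
    return {
        # Long GOPs compress better; short GOPs cut delay.
        "gop_frames": 30 if low_latency else 120,
        # B-frames improve quality but add reordering delay.
        "b_frames": 0 if low_latency else 3,
        # Leave headroom so momentary peaks do not exceed the link.
        "target_bitrate_kbps": int(bandwidth_kbps * 0.8),
    }
```

A video conference capped at 150 ms would get short GOPs and no B-frames, while a sports broadcast allowed several seconds of delay would get the quality-oriented settings.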
The result is that no single encoder suits every use case; we need an encoder flexible enough to cover as many requirements as possible. Off-the-shelf ASIC or ASSP components can be purchased, but they rarely match the exact requirements and come with additional drawbacks.
For example, you may get excellent video quality, but the delay will be too long. Alternatively, the latency may be low and the video quality excellent, but the power consumption of the end device will be too high. A flexible approach is needed, one that lets video handling be tuned to the application's requirements while striking the best compromise between the conflicting demands.
Introducing the Zynq UltraScale+ MPSoC
Xilinx's Zynq UltraScale+ MPSoC (multiprocessor system-on-chip) devices combine real-time processor cores, programmable logic (FPGA), peripherals, and fast communication interfaces, and come in several versions, including dual- and quad-core application processors and a GPU. The EV members of the family include a quad-core Arm Cortex-A53 application processor and a hardened video codec unit (VCU) alongside the programmable logic, providing the necessary functional flexibility.