Bench Talk for Design Engineers | The Official Blog of Mouser Electronics
Adaptable High Dynamic Range Streaming


High Dynamic Range (HDR) and Why It Is So Attractive

We've all come to expect high quality when viewing video streams for work or play. The delivery of media over IP networks, known as AV-over-IP, helps quench the demand for content, equipment, and bandwidth. In most use cases, it's expected that compressed videos are delivered with higher detail, greater contrast, and lifelike colors. Ultra-high-definition (UHD) video combines improvements in resolution, wide color gamut (WCG), increased frame rates, higher bit depths, and high dynamic range (HDR) to deliver more realistic visual experiences.

HDR is a relatively new feature in the video ecosystem that focuses on increasing the contrast between black and white, resulting in detailed shadows and brighter reflections. But the overall visual effect of HDR is much more compelling: pictures appear richer and more lifelike, and the improved contrast delivers sharper, more detailed images. Viewers often perceive a greater visual impact from HDR than from an increase in resolution. It's no surprise that after experiencing the visual benefits, both content developers and viewers want this same feature when distributing or consuming content. Because transmission bandwidth and storage are constraints throughout the chain, it's notable that HDR provides a remarkable visual impact while consuming only marginally more bandwidth or storage.

How HDR Delivers Improved Detail and Image Quality

Figure 1 shows a high-level view of an HDR system. At the start of the HDR process, captured scene light is converted to an electrical representation using an opto-electronic transfer function (OETF). Sensors and cameras capture a wide range of scene light, which is often managed in post-production, where further processing can optimize color and light levels. That electrical representation is distributed to one or more displays, where it's converted back into light using an electro-optical transfer function (EOTF). The original EOTF, designed for CRTs (cathode ray tubes), is known as gamma.

Figure 1: The image provides a high-level illustration of the HDR scene capture, transmission, and display process. (Source: AMD Xilinx)
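To make the OETF/EOTF pairing concrete, here is a minimal Python sketch using a pure power-law gamma. This is an illustrative simplification: real SDR standards such as BT.709/BT.1886 add a linear segment near black, and the gamma value of 2.4 is an assumption for this example, not a value from the article.

```python
# Minimal sketch of the OETF/EOTF pairing described above, using a
# simple power-law gamma (roughly what legacy CRT "gamma" implies).
# Real standards (BT.709/BT.1886) add a linear toe near black.

GAMMA = 2.4  # illustrative display gamma, an assumption for this sketch

def oetf(scene_light: float) -> float:
    """Camera side: normalized linear scene light -> electrical signal."""
    return scene_light ** (1.0 / GAMMA)

def eotf(signal: float) -> float:
    """Display side: electrical signal -> normalized display light."""
    return signal ** GAMMA

if __name__ == "__main__":
    for light in (0.0, 0.18, 0.5, 1.0):
        encoded = oetf(light)
        print(f"scene {light:.2f} -> signal {encoded:.3f} -> display {eotf(encoded):.2f}")
```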

We are now in a period of transition, where most sources and displays continue to support standard dynamic range (SDR) and the original gamma EOTF, while the market adopts HDR.

The hybrid log gamma (HLG) EOTF was developed to be backward compatible with the existing majority of SDR displays. Its hybrid gamma curve can exploit the higher luminance capabilities of newer displays without the complication of additional metadata, producing a usable image on both SDR and HDR displays.
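As an illustration of how HLG achieves that compatibility, the sketch below implements the HLG OETF as published in ITU-R BT.2100 (reference math, not Xilinx code): below a scene-light level of 1/12 it is a simple square-root curve that legacy gamma displays render acceptably, and above that a logarithmic segment carries the extra highlight range for HDR displays.

```python
import math

# HLG OETF constants from ITU-R BT.2100
A = 0.17883277
B = 1.0 - 4.0 * A                 # 0.28466892
C = 0.5 - A * math.log(4.0 * A)   # 0.55991073

def hlg_oetf(e: float) -> float:
    """Normalized linear scene light e in [0,1] -> HLG signal in [0,1].

    The square-root segment below 1/12 is gamma-like, which is why SDR
    displays can show HLG content acceptably; the log segment above it
    carries the extra highlight range for HDR displays.
    """
    if e <= 1.0 / 12.0:
        return math.sqrt(3.0 * e)
    return A * math.log(12.0 * e - B) + C
```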

Other HDR formats are based on the perceptual quantizer (PQ) EOTF, a non-linear transfer function designed around human perception to allocate more code values where the eye is most sensitive to light. These PQ-based formats require metadata as defined by SMPTE-2086, which standardizes parameters such as the color primaries and luminance range of the original mastering display, potentially allowing all viewers to have the same experience. Each format delivers HDR content with tradeoffs in content fidelity, workflow impact, metadata requirements, and royalty obligations that need to be considered when deploying HDR systems and workflows.
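For comparison, this sketch implements the PQ EOTF from SMPTE ST 2084 / ITU-R BT.2100 (again, published reference math rather than anything Xilinx-specific). Unlike gamma or HLG, PQ maps code values to absolute luminance up to 10,000cd/m², which is part of why mastering-display metadata matters for PQ-based formats.

```python
# PQ (SMPTE ST 2084) EOTF constants as published in ITU-R BT.2100
M1 = 2610 / 16384        # 0.1593017578125
M2 = 2523 / 4096 * 128   # 78.84375
C1 = 3424 / 4096         # 0.8359375
C2 = 2413 / 4096 * 32    # 18.8515625
C3 = 2392 / 4096 * 32    # 18.6875

def pq_eotf(signal: float) -> float:
    """PQ-coded signal in [0,1] -> absolute display luminance in cd/m^2."""
    p = signal ** (1.0 / M2)
    return 10000.0 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1.0 / M1)

# Sanity check: full-scale code maps to the 10,000-nit ceiling.
assert abs(pq_eotf(1.0) - 10000.0) < 1e-6
```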

Streaming HDR Content Using AMD Xilinx Platforms

Streaming enables high-quality media distribution and consumption to happen virtually anywhere, resulting in ever-increasing demand. Streaming also adds another layer of complexity, as bandwidth is both a physical limitation and an additional cost. For streaming content, HDR adds perceived detail and resolution impact with a negligible change in bandwidth, even with the additional metadata required by some formats.

The Zynq® UltraScale+ MPSoC provides a low-power, single-chip solution that combines a full-featured multicore Arm® processing subsystem with an embedded real-time 4kp60 4:2:2 10-bit video codec unit (VCU) capable of simultaneous H.264/H.265 encoding and decoding. Built on common industry tools and frameworks such as Linux, V4L2, and GStreamer, the AMD Xilinx driver and application stacks enable customers to evaluate functionality and develop customized solutions. The Zynq UltraScale+ MPSoC is flexible enough to serve as either a single-chip capture/encode device or a decode/display device, supporting existing HDR formats such as HLG and HDR10.
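As a sketch of what such a V4L2/GStreamer capture-and-encode flow can look like in practice, the Python snippet below builds a simple pipeline with PyGObject. The element name omxh265enc comes from the Xilinx VCU OpenMAX plugins, and the device path and caps are assumptions that vary by board and BSP release; treat it as a starting point, not the exact pipeline used in any Xilinx design.

```python
# Hedged sketch: drive the VCU from Python via GStreamer (PyGObject).
# "omxh265enc" is from the Xilinx VCU OpenMAX plugins; /dev/video0 and
# the caps below are assumptions that depend on the board and BSP.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Capture from a V4L2 source, encode H.265 on the VCU, write an MP4.
pipeline = Gst.parse_launch(
    "v4l2src device=/dev/video0 ! "
    "video/x-raw,width=3840,height=2160,framerate=60/1 ! "
    "omxh265enc ! h265parse ! qtmux ! filesink location=capture.mp4"
)
pipeline.set_state(Gst.State.PLAYING)

# Run until end-of-stream or error, then clean up.
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```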

For HLG EOTF HDR formats that do not require metadata, the appropriate colorimetry information is extracted from physical connections such as SDI or HDMI, and the data is saved in the encoded bitstream within the standardized video usability information (VUI) fields defined by the ITU for H.264 and H.265 bitstreams. For PQ EOTF HDR formats, the key parameters defined by SMPTE-2086 are captured from the physical connection, such as the color primaries and elements associated with the mastering display, including the mastering display color volume (MDCV) and content light level (CLL). These are stored in the appropriate standardized VUI and supplemental enhancement information (SEI) fields of the compressed bitstream so that the data can be distributed, decoded, and extracted for the display element(s) of the system (Figure 2). AMD Xilinx supports transporting any required HDR metadata via an open, standardized mechanism, allowing proper interoperability and flexibility for future HDR formats.

Figure 2: Conceptual flow of PQ HDR metadata within the AMD Xilinx multimedia stack. (Source: AMD Xilinx)
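To illustrate the kind of conversion involved, this hypothetical Python sketch (not the Xilinx driver API) packs SMPTE-2086-style mastering-display values into the integer units used by the H.265 MDCV SEI message: chromaticities in increments of 0.00002 and luminance in units of 0.0001cd/m². The example values, BT.2020 primaries with a D65 white point and a 1,000-nit master, are chosen for illustration and are not from the article.

```python
# Illustrative sketch: convert SMPTE ST 2086 mastering-display values
# into the integer fields carried by the H.265 mastering display colour
# volume (MDCV) SEI message. Chromaticities are coded in increments of
# 0.00002; luminance is coded in units of 0.0001 cd/m^2.

def to_sei_chromaticity(coord: float) -> int:
    return round(coord / 0.00002)

def to_sei_luminance(nits: float) -> int:
    return round(nits / 0.0001)

# Example values (assumed, not from the article): BT.2020 primaries,
# D65 white point, 1000-nit max / 0.0001-nit min mastering luminance.
mdcv = {
    "green": (to_sei_chromaticity(0.170), to_sei_chromaticity(0.797)),
    "blue":  (to_sei_chromaticity(0.131), to_sei_chromaticity(0.046)),
    "red":   (to_sei_chromaticity(0.708), to_sei_chromaticity(0.292)),
    "white": (to_sei_chromaticity(0.3127), to_sei_chromaticity(0.3290)),
    "max_luminance": to_sei_luminance(1000.0),
    "min_luminance": to_sei_luminance(0.0001),
}
print(mdcv)
```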

Getting Started with Future-Proof HDR Streaming

AMD Xilinx created targeted reference designs (TRDs) that provide all the required sources and project files to recreate systems operating at up to 4kp60 with industry-standard interfaces such as SDI and HDMI, so you can evaluate the system and rapidly move to customization. Two TRD variants demonstrate both HLG and PQ type HDR formats (Table 1), implemented on the ZCU106 Evaluation Kit.

Table 1: The ZCU106 Evaluation Kit from AMD Xilinx implements two TRD variants to show examples of both HLG and PQ type HDR formats. (Source: AMD Xilinx)

Design Module | Description
PL DDR HLG SDI Audio Video Capture and Display | HLG/non-HLG video + 2/8-channel audio capture and display via SDI, with VCU encoding from PS DDR and decoding from PL DDR
PL DDR HDR10 HDMI Video Capture and Display | HDMI design showcasing encoding with PS DDR and decoding with PL DDR; supports HDR10 static metadata for HDMI, as well as DCI4K

The adaptable hardware and software architecture of the Zynq UltraScale+ MPSoC enables multimedia system developers to implement new HDR formats as the ecosystem evolves, so they will always be ready for the future!

Gordon Lau authored the Adaptable High Dynamic Range Streaming blog, which is repurposed here with permission.

Author

Gordon Lau is a Video Systems Architect for AMD’s Adaptive and Embedded Computing Group (AECG), based in Toronto, Ontario, Canada. He has an Electrical Engineering degree from Ryerson Polytechnic University and a 20+ year career in electronics and FPGAs, with a focus on video interfaces, CODECs, and broadcast workflows.






