Somewhere along the way, “compression” in the context of HDMI technology became a bad word. Why? It probably has something to do with all the years of HDMI connectivity being promoted as uncompressed. In part this was about quality, but mostly it was about interoperability. But things are changing, and integrators should embrace compression as a friend that will increase capabilities while reducing cable stress due to ever-increasing bandwidth demands.
Keep in mind that the only truly uncompressed video we see is from a graphics processing unit (GPU), such as in a gaming console. Everything else is delivered compressed. A LOT. Broadcast, streaming, disc: it's all heavily compressed on its way to the source device, then decompressed for transmission through the HDMI system. I think we can all attest to the extraordinary performance potential of these sources, so compression itself is not a problem for quality. Well, it is if done badly, but we should all be avoiding anything done badly anyway!
The key is interoperability. There’s an enormous range of permutations with video compression methods and processing capacity, and devices need to all be on the same page. Furthermore, many methods take some time to perform compression, and that means latency. That’s a problem too. Since its inception, HDMI transmission has been resolutely uncompressed so the devices didn’t have to deal with that one extra layer of complexity, and to always remain latency-free. But with the way video has evolved, quadrupling in bandwidth every time resolution steps up, compression becomes an inevitable necessity.
The HDMI 2.1 specification introduces compression by way of VESA's Display Stream Compression (DSC). This is an ultra-fast, line-based "mezzanine" compression codec with very light, variable ratios ranging from 1.3:1 up to around 3.5:1. I've crunched the numbers and found that latency was low double-digit microseconds at 4K/60. It's completely imperceptible, as is any impact on picture quality. By the way, this is the same codec included in the DisplayPort 1.4 and 2.0 specs, and also available for optional use with HDBaseT 2.0 to enable 18Gbps support.
The use of DSC in HDMI transmission can achieve two things:
- It enables formats that would otherwise require beyond 48Gbps, such as 8K/60 4:4:4 or 8K/120.
- It can be used to reduce cable stress where bandwidth demands exceed the capabilities of the link. For example, to support 4K/120 HDR over an 18Gbps link. In fact, it can reduce the load right down to 9Gbps!
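To see why those numbers work out, here is a minimal back-of-the-envelope sketch. It computes only the active video payload, deliberately ignoring blanking intervals and FRL encoding overhead (so real link requirements run somewhat higher), and the function name and the ~3.3:1 ratio chosen here are illustrative assumptions, not anything from the HDMI spec:

```python
# Approximate active-pixel data rates for uncompressed video.
# Assumption: blanking and FRL encoding overhead are ignored, so these
# figures understate the true link requirement somewhat.
# Chroma subsampling: 4:4:4 carries 3 samples per pixel, 4:2:2
# effectively 2, and 4:2:0 effectively 1.5.
CHROMA_SAMPLES = {"4:4:4": 3.0, "4:2:2": 2.0, "4:2:0": 1.5}

def active_rate_gbps(width, height, refresh_hz, bit_depth, chroma):
    """Approximate uncompressed video payload in Gbps (illustrative)."""
    bits_per_pixel = bit_depth * CHROMA_SAMPLES[chroma]
    return width * height * refresh_hz * bits_per_pixel / 1e9

# 4K/120 10-bit 4:4:4 (an HDR-capable format): roughly 30 Gbps of
# active video, well beyond an 18Gbps link uncompressed...
rate_4k120 = active_rate_gbps(3840, 2160, 120, 10, "4:4:4")
print(f"4K/120 10-bit 4:4:4: {rate_4k120:.1f} Gbps")

# ...but a light DSC ratio in the ~3.3:1 region brings the payload
# down to roughly 9 Gbps, comfortably inside an 18Gbps link.
print(f"with ~3.3:1 DSC:     {rate_4k120 / 3.3:.1f} Gbps")
```

Run the arithmetic yourself with other formats and you'll see how quickly the totals escalate as resolution, refresh, and chroma step up.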
DSC is optional, and where supported is automatic, but only as required — if a system can send a given format uncompressed, it will. The caveat for interoperability is that all devices in line from Source to Sink need to support DSC in order for it to work.
The exception is any device that passes the bits through without decoding the HDMI signal; e.g., a fiber extender with direct bit mapping doesn't care what's inside, as it's agnostic to DSC, HDCP, and the rest. Raw data rate is all that matters. However, it's a different story for AVRs and the like, as they do decode the HDMI signal. But if such a repeater device doesn't support DSC, then the Source won't send it in the first place (the EDID will see to that).
The fact that DSC is standardized in the HDMI specification is the first big step towards interoperability. But integrators also need information to make educated decisions. This is where the manufacturers come in, and why disclosure of capabilities through adequate labeling matters so much: stating what a product can do, as well as what it can't.
The HDMI Forum has proposed a method for disclosing DSC capabilities: list each supported video format appended with an "A" for uncompressed only, "B" for compressed only, or "AB" to designate that both are supported. This is proposed for use on manufacturer spec sheets and marketing material to indicate format support, but it's up to manufacturers if and how they disclose it. I expect such a list would be indicative of a device's available data rate, so the info is there for interpretation, not as an absolute.
For example, a device with 40Gbps capability might state 4K/120AB and 8K/60AB (albeit 4:2:0), but only be able to support 8K/120B as there’s not enough bandwidth for uncompressed 8K/120. So, at 40Gbps, 8K/60 4:2:0 media will be sent uncompressed.
But here’s where it could get confusing — say the media is 8K/60 4:4:4, then it will need to be sent compressed. Stepping up to 4:4:4 from 4:2:0 is enough to push it beyond 40Gbps, so some knowledge and interpretation is needed. In another example, say the media is 4K/120 but there’s something (device or cable/extender, etc) between Source and Sink that restricts the bandwidth, and the Link Training protocol limits the link speed to 24Gbps. The Source would have to send 4K/120 compressed instead, so long as all devices support DSC (and 4K/120B).
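The decision the Source faces in these scenarios can be sketched as a simple rule: send uncompressed if the trained link can carry it, fall back to DSC if every device in the chain supports it, and otherwise fail. The function names, the payload formula (which ignores blanking and FRL overhead), and the 3.5:1 maximum ratio are my illustrative assumptions, not the actual negotiation logic in the spec:

```python
# Illustrative sketch of the uncompressed-vs-DSC decision.
# Assumptions: active payload only (no blanking/FRL overhead),
# and a maximum DSC ratio of ~3.5:1.
CHROMA_SAMPLES = {"4:4:4": 3.0, "4:2:2": 2.0, "4:2:0": 1.5}

def payload_gbps(w, h, hz, bits, chroma):
    return w * h * hz * bits * CHROMA_SAMPLES[chroma] / 1e9

def transmit_mode(fmt, link_gbps, all_devices_support_dsc, max_dsc_ratio=3.5):
    """How would the Source send this format over a trained link?"""
    rate = payload_gbps(*fmt)
    if rate <= link_gbps:
        return "uncompressed"   # fits as-is, so DSC is not engaged
    if all_devices_support_dsc and rate / max_dsc_ratio <= link_gbps:
        return "DSC"            # compressed to fit the link
    return "unsupported"        # no way to carry this format

fmt_8k60_420 = (7680, 4320, 60, 10, "4:2:0")   # ~29.9 Gbps payload
fmt_8k60_444 = (7680, 4320, 60, 10, "4:4:4")   # ~59.7 Gbps payload
fmt_4k120    = (3840, 2160, 120, 10, "4:4:4")  # ~29.9 Gbps payload

print(transmit_mode(fmt_8k60_420, 40, True))   # fits 40Gbps uncompressed
print(transmit_mode(fmt_8k60_444, 40, True))   # needs DSC at 40Gbps
print(transmit_mode(fmt_4k120, 24, True))      # link-limited: needs DSC
print(transmit_mode(fmt_4k120, 24, False))     # no DSC in chain: fails
```

Note how the same 8K/60 media flips from uncompressed to DSC purely on the change from 4:2:0 to 4:4:4, and how a link-limited 4K/120 path depends entirely on every device supporting DSC.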
Incomplete Labeling Leads to Confusion
This is just my opinion, but I think this labeling convention is incomplete: it focuses on resolution and refresh rate only, and doesn't differentiate between 4:2:0 and 4:4:4/RGB, or bit depth. What I'd really like to see is a clear distinction. For example, a 40Gbps-capable device with DSC could be stated as supporting 8K/60AB 10-bit 4:2:0 (uncompressed possible), or 8K/60B 10-bit 4:4:4 (compressed only).
As always, the availability of information, and education to enable interpretation of that information, are key to successful HDMI system implementations. I encourage a conversation around this, and to perhaps consider a standardized, industry-wide labeling convention for use of DSC in HDMI systems.
And remember to keep an eye out for the new CEDIA/CTA-RP28 (formerly referred to as CEB28) HDMI System Design and Verification Recommended Practice, an important, FREE industry tool which does include info about compression in HDMI transmission. Watch this space, and thanks for reading!