iwod - Wednesday, March 2, 2016 - link
Could someone enlighten me: why would you EVER want lossy compressed images on your display port? And it seems SuperMHL is better in every way.
jann5s - Wednesday, March 2, 2016 - link
The compression is “visually lossless”, at least that is what they say. Typically the compression efficiency (quality reduction / data reduction) is much higher for lossy compression algorithms. It gets nasty when “visually lossless” means you have to be practically blind not to see the artifacts.
nathanddrews - Wednesday, March 2, 2016 - link
You can read about the testing methods and the definition of "lossless" here: http://www.vesa.org/wp-content/uploads/2015/01/Cal...
Sivar - Sunday, March 6, 2016 - link
To support higher resolutions or bit depths than raw uncompressed streams can support, lossy compression *must* be used. Lossless compression cannot guarantee any reduction in bandwidth. Even if lossless compression worked 99.999% of the time, the stream cannot suddenly switch to a lower resolution or bit depth for the pathological cases, such as random noise.
I am extremely skeptical of the "visually lossless" claim. I suspect the standards committee picked a generic, objective benchmark like SSIM and concluded the results are "good enough for most people".
This is the same kind of test that concludes H.265 is "visually the same" as H.264 at half the bit rate when, other than at extremely low bit rates, that is very obviously not the case.
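Sivar's point about pathological inputs is easy to demonstrate with any general-purpose lossless codec. A minimal sketch, using Python's zlib purely as a stand-in (it is not DSC or anything a display link would actually use): random noise cannot be shrunk at all, while structured data collapses, which is exactly why a fixed-rate link cannot rely on lossless compression.

```python
import os
import zlib

noise = os.urandom(1_000_000)                          # pathological case: random bytes
gradient = bytes(i % 256 for i in range(1_000_000))    # highly structured data

# Compression ratio (compressed size / original size) for each input.
print(len(zlib.compress(noise, 9)) / len(noise))        # about 1.0: no reduction at all
print(len(zlib.compress(gradient, 9)) / len(gradient))  # far below 1.0: huge reduction
```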
BurntMyBacon - Monday, March 7, 2016 - link
@Sivar: "To support higher resolutions or bit depths than raw uncompressed streams can support, lossy compression *must* be used."Raw uncompressed streams do not have an inherent limitation on resolution or bit depth that dictate that lossy compression *must* be used. Raw uncompressed streams transferred across a fixed bandwidth medium have a maximum combination of resolution, bit depth, and frame rate. This could drive a requirement to use lossy compression. However, depending on the use case, lowering resolution, bit depth, frame rate, or a combination thereof may provide a better experience than using lossy compression. Of course, the last option is to just wait. By the time this compression really picks up, there is likely to be a higher bandwidth medium available that is suitable for the high resolution, HDR displays that this compression is designed to allow. Then again, with this compression already available, it could stay perpetually one step ahead (in a sense).
@Sivar: "Lossless compression cannot guarantee any reduction in bandwidth.
Even if lossless compression would work 99.999% of the time, the stream cannot suddenly change to a lower resolution or bit depth in the pathological cases such as random noise."
While lossy compression algorithms can guarantee a reduction in bandwidth, they suffer from the same pathological cases. Processing times for individual frames can vary more wildly than with lossless. This isn't much of an issue with movies, but gaming or any use with a feedback loop would rather not have another point of inconsistency in the rendering loop. When processing exceeds the allotted time, this can manifest as one of many different artifacts: a sudden stutter followed by a rather jarring (if temporary) change in resolution, blanking, or any anomaly you might see while trying to play a high-def video on Netflix with an inconsistent internet connection (though presumably to a much lesser extent). Lossless anomalies may show up as dropped frames (stutter), blanking, or possibly block artifacts (depending on the algorithm). I'd have to see the lossy compression to make a determination, but I'm fairly confident that I'd rather frame-skip on the 0.001% of frames (likely white noise) that lossless can't keep up with than have a general degradation in picture quality in the other 99.999% of frames (your percentages).
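As a rough illustration of that frame-skip preference, here is a minimal sketch of the policy in Python. Everything in it is an assumption for the example: zlib stands in for the lossless codec, the per-frame time budget and link capacity are made-up numbers, and send() is an empty placeholder. The point is only the fallback logic: drop the rare frame that won't fit rather than degrade every frame.

```python
import os
import time
import zlib

FRAME_BUDGET_S = 1.0 / 60          # time allowed per frame at 60 Hz (assumed)
LINK_BYTES_PER_FRAME = 300_000     # made-up per-frame link capacity

def encode_lossless(frame):        # stand-in for a real lossless codec
    return zlib.compress(frame, 6)

def send(packet):                  # stand-in for pushing bits down the link
    pass

last_good = b""

def transmit(frame):
    global last_good
    start = time.monotonic()
    packet = encode_lossless(frame)
    too_slow = (time.monotonic() - start) > FRAME_BUDGET_S
    too_big = len(packet) > LINK_BYTES_PER_FRAME
    if too_slow or too_big:
        send(last_good)            # repeat the previous frame: a one-frame stutter
    else:
        send(packet)
        last_good = packet

transmit(os.urandom(500_000))      # noise frame: blows the size budget, gets skipped
transmit(bytes(500_000))           # flat frame: compresses easily, goes through
```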
@Sivar: "I am extremely skeptical of the "visually lossless" claim. I suspect the standards committee picked a generic, objective benchmark like SSIM and concluded the results are "good enough for most people"."
As am I. Depending on how the compression is implemented, it will very likely negate some of the benefits it is supposed to enable. Many compression algorithms use chroma compression on the blue and, to a lesser extent, red channels to remove detail where it is not as noticeable. Luminance compression is more noticeable. This hurts the bit depth and HDR that the compression is supposed to enable (especially if they mess with the green channel). Using a nearest-neighbor averaging compression algorithm is akin to using a lower resolution and interpolating. Using motion-compensated frame-rate reduction is another obviously counterproductive method.
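The chroma-versus-luma point can be illustrated with a toy 4:2:0-style subsampling pass in NumPy. This is not how DSC itself works; it only shows what "remove detail from the colour-difference channels while leaving luma alone" means. The BT.601 conversion coefficients are standard; the 64x64 image size and 2x2 averaging are arbitrary choices for the example.

```python
import numpy as np

rgb = np.random.randint(0, 256, (64, 64, 3)).astype(np.float64)
R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]

Y  = 0.299 * R + 0.587 * G + 0.114 * B      # luma: kept at full resolution
Cb = 0.564 * (B - Y)                        # blue-difference chroma
Cr = 0.713 * (R - Y)                        # red-difference chroma

def subsample(channel):
    # Average each 2x2 block, then repeat it back to full size.
    small = channel.reshape(32, 2, 32, 2).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)

Cb_s, Cr_s = subsample(Cb), subsample(Cr)   # half the chroma samples in each direction
# Luma detail survives untouched; only colour detail has been averaged away.
print(np.abs(Cb - Cb_s).mean(), np.abs(Cr - Cr_s).mean())
```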
ddriver - Wednesday, March 2, 2016 - link
I know, such a shock, because it is not like 99.99% of the images, videos and even textures in games aren't using lossy compression ;)
I assume this compression's lossiness will not be distinguishable to the human eye, so it isn't really an issue.
HollyDOL - Wednesday, March 2, 2016 - link
Hard to tell until we can actually see and judge the results. If it is as "visually lossless" as mp3 music was claimed to be "audibly lossless" then it is thanks but no thanks. (Not sure whether audibly lossless is a correct term)
ddriver - Wednesday, March 2, 2016 - link
There have been plenty of tests where "pretentious audiophiles" were completely unable to tell WAV from MP3 encoded at a higher bitrate.
HollyDOL - Thursday, March 3, 2016 - link
Depends what you play it on. The better the amp/speakers/cords/audio card, the greater the difference. But if the video compression is like what you claim about MP3, I'll avoid it by a huge margin. With my setup (entry audiophile on a tight budget) you can hear the difference within the first few seconds (320kbps vs. audio CD, not even speaking about SACD or DVD-Audio). Even my father with hearing issues could tell the difference, so I am pretty sure everyone can. You just need to play it on a decent audio system.
LtGoonRush - Thursday, March 3, 2016 - link
Human beings listening to music cannot detect a difference between a properly encoded* 256kbps MP3 and the source CD, regardless of the equipment used. This has been proven with double-blind testing. "Transparent" (audibly lossless) compression of audio has been a solved problem for quite some time.
*LAME encoder, using a preset profile. Other formats can do it at lower bitrate, but encoders are not created equal. AAC is better than MP3, but most AAC encoders are not as good as the LAME MP3 encoder.
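The double-blind methodology behind claims like this is the ABX test, which is simple enough to sketch. In the hypothetical harness below, play() is a placeholder for actual audio playback (real tools such as foobar2000's ABX comparator handle that part); the point is the protocol itself: X is randomly A or B on each trial, the listener guesses, and a binomial p-value says whether the score beats chance.

```python
import random
from math import comb

def play(label, clip):
    print(f"(playing {label})")    # placeholder: imagine real audio playback here

def abx(clip_a, clip_b, n=16):
    """Run n ABX trials; clip_a/clip_b would be e.g. the WAV and the decoded MP3."""
    correct = 0
    for _ in range(n):
        x_is_a = random.random() < 0.5
        play("A", clip_a)
        play("B", clip_b)
        play("X", clip_a if x_is_a else clip_b)
        guess = input("Is X the same as A or B? ").strip().upper()
        correct += (guess == "A") == x_is_a
    # One-sided binomial p-value: the odds of scoring this well by pure guessing.
    p = sum(comb(n, k) for k in range(correct, n + 1)) / 2 ** n
    return correct, p
```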
HollyDOL - Thursday, March 3, 2016 - link
Sorry, but no. I hear what I hear. MP3's failure to reproduce well is especially noticeable with big orchestra recordings, organ, violin, wind instruments and, to an extent, piano. Use an ABX test and see for yourself. Even with good guitar recordings you can hear the artist touching the strings and pressing chords, which tends to vanish with MP3. Don't expect to hear any difference with general pop music and such. The better the quality of the original recording and the better the equipment used, the more significant the difference gets.
xenol - Thursday, March 3, 2016 - link
The thing is, with proper compression, you have to be actively trying to listen for the differences, some of which are produced in your head because you come in with bias. Have you ever listened to a singer and heard the lyrics one way, then, the moment you saw what the lyrics were supposed to be, heard them that way? Someone else I had a discussion about this with said that in one of his favorite songs (made some time in the 80s), he always knew there was a subtle grunting, but when people were actively listening to the same song in 24-bit 192 kHz audio, they suddenly started "hearing" that grunting, even though it was always there to begin with.
I also seriously doubt that most people in the world actively pay attention to the music they listen to when they're playing it.
iwod - Thursday, March 3, 2016 - link
It is exactly for this reason: we are already lossy-compressing everything, and then we get another lossy display connection on top of it.
ddriver - Thursday, March 3, 2016 - link
The lossy content would have already cut the detail that could potentially be lost. It won't get any worse.
skrewler2 - Thursday, March 10, 2016 - link
That'd be pretty sweet if it worked that way, huh?
BurntMyBacon - Monday, March 7, 2016 - link
@ddriver: "I know, such a shock, because it is not like 99.99% of the images, videos and even textures in games aren't using lossy compression ;)"Have you ever recompressed a lossy image in a different lossy format? It may not work out as well as you hope. Try encoding a file in MPEG2 and then reencoding it into H.264. You may note that it works better in some scenes than others. These formats aren't even very different.
mczak - Wednesday, March 2, 2016 - link
SuperMHL looks fairly similar to me. Same signaling, albeit using 6 6Gbps lanes instead of 4 8Gbps ones. Nearly the same color compression too (using DSC 1.1 instead of 1.2). Maybe the 6Gbps lanes are the reason they get away without using FEC. Though I'm not quite sure I buy the argument that FEC is needed just due to DSC. HDCP isn't error tolerant either.
mukiex - Wednesday, March 2, 2016 - link
One thing the MHL consortium didn't really mention is that SuperMHL is *not* actually lossless for the 8K/120Hz setups. The first SuperMHL chips on the market are "perceptually lossless" (i.e. lossy) at 4K: http://www.anandtech.com/show/9484/lattice-announc...
jann5s - Wednesday, March 2, 2016 - link
Just in time for Polaris and Pascal? Or just too late?
xthetenth - Wednesday, March 2, 2016 - link
Very doubtful; the hardware's already designed for those two.
Pork@III - Wednesday, March 2, 2016 - link
DP 1.4 is too late. Maybe we'll first see it in real devices in 2018.
Fallen Kell - Wednesday, March 2, 2016 - link
Why do you think it is too late? The article says the physical layer has not changed. The difference then is simply firmware and controller capability, which may just mean swapping one chip for a different one.
fazalmajid - Wednesday, March 2, 2016 - link
I would expect the compression and FEC logic to be significantly more complex than any other DP feature other than HDCP.
Pork@III - Wednesday, March 2, 2016 - link
Maybe you're a master of swapping chips in your display and your video card?
blahsaysblah - Saturday, March 5, 2016 - link
https://www.reddit.com/r/Amd/comments/48e8rl/radeo...
From the AMD AMA on Reddit the other day:
- Are the cards going to have any overclocking potential, or is this going to be another Fury situation? Will the cards come with DP1.4?
Answer:
We will discuss specific SKUs and overclocking capabilities at product launch in mid-year.
They will come with DP1.3. There is a 12-18 month lag time between the final ratification of a display spec and the design/manufacture/testing of silicon compliant with that spec. This is true at all levels of the display industry. For example: DP 1.3 was finished in September, 2014.
dreamcat4 - Wednesday, March 2, 2016 - link
Hi there. I understand the need for higher-resolution support. But aren't we missing the fact that such extra compression could also be useful for driving 4K gaming monitors at higher refresh rates? It seems like we really need that for newer premium gaming monitors, especially the ultra-wide ones. No mention of that in the article here.
nathanddrews - Wednesday, March 2, 2016 - link
DisplayPort 1.3 already supports 4K 8-bit @ 120Hz and 4K 10-bit @ 96Hz.
http://www.displayport.org/faq/#DisplayPort%201.3%...
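Those figures line up with a back-of-the-envelope check against DP 1.3's link budget (HBR3 runs 8.1 Gbps per lane across four lanes with 8b/10b coding, leaving roughly 25.92 Gbps for payload). The sketch below counts active-pixel bits only; real timings add a few percent of blanking overhead.

```python
def gbps(width, height, hz, bits_per_channel):
    """Active-pixel data rate for an RGB stream, in Gbps (no blanking)."""
    return width * height * hz * bits_per_channel * 3 / 1e9

DP13_PAYLOAD_GBPS = 8.1 * 4 * 8 / 10      # HBR3, 4 lanes, 8b/10b coding -> 25.92 Gbps

for hz, bpc in [(120, 8), (120, 10), (96, 10)]:
    need = gbps(3840, 2160, hz, bpc)
    print(f"4K {bpc}-bit @ {hz} Hz: {need:5.1f} Gbps needed, "
          f"fits in DP 1.3 = {need < DP13_PAYLOAD_GBPS}")
```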
TristanSDX - Wednesday, March 2, 2016 - link
What is the bandwidth limit for a wire like this (monitor cable)? Currently DP 1.3 has 8 Gbps per wire, which is good speed, the same level as PCI-E 3. Can it go further, to 15 or 20 Gbps, or are there limitations?
jasonelmore - Wednesday, March 2, 2016 - link
Seems like DisplayPort, HDMI, and the other display standards need to start looking into optical protocols/cables moving forward. The bandwidth needed for 8K @ 144Hz is probably not gonna be possible over copper, and it's definitely not ideal for a peripheral that could have varying lengths of cable.
boeush - Wednesday, March 2, 2016 - link
Yep. Never mind the likes of 8K at 240 Hz with 16-bit HDR, which would begin to approach the ideal specs for high-quality VR (at 4K per eye). Optical cables are also thinner, lighter, and more flexible. And with the potential for multi-wavelength multiplexing (at the cost of wider encoder/transmitter/receiver/decoder implementations), optical cable bandwidth can grow well past the terabit/sec range - with virtually no cable length limits (within reasonable constraints of a typical home's dimensions).
dreamcat4 - Wednesday, March 2, 2016 - link
> The bandwidth needed for 8K @ 144Hz is probably not gonna be possible over copper
Ahh. But what about HMDs with foveated rendering? If the standard included an appropriate mechanism to compress the outer regions of the display much more, while leaving the central area intact/uncompressed, that would result in maximum fidelity per the available bandwidth - and higher refresh rates too.
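To put rough numbers on that idea: an uncompressed 8K, 144 Hz, 8-bit-per-channel stream needs on the order of 115 Gbps, but if, purely as an assumed illustration, only 10% of the pixels around the gaze point are sent intact and the periphery is compressed 10:1, the effective rate drops to roughly a fifth of that.

```python
def full_rate_gbps(width, height, hz, bits_per_pixel):
    return width * height * hz * bits_per_pixel / 1e9

full = full_rate_gbps(7680, 4320, 144, 24)   # 8K @ 144 Hz, 8 bpc RGB, no blanking

fovea_fraction = 0.10    # assumed: share of pixels near the gaze point kept intact
periphery_ratio = 10     # assumed: lossy compression ratio for everything else

foveated = full * (fovea_fraction + (1 - fovea_fraction) / periphery_ratio)
print(f"uncompressed: {full:.0f} Gbps, foveated (assumed ratios): {foveated:.0f} Gbps")
```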
boeush - Thursday, March 3, 2016 - link
I doubt foveated rendering will ever be practical. It would require very accurate and very fast iris tracking, with hardware response (full round-trip) in single-digit milliseconds if not microseconds. And that's even before accounting for effects of accommodation (where the effective FOV projected onto the fovea can change dynamically just by flexing the lens of the eye.)
nathanddrews - Wednesday, March 2, 2016 - link
They are most certainly not more flexible than copper. Depending upon how many fibers the cable needs and accounting for bi-directionality (needed for DRM and EDID among other functions), to get the bandwidth needed you could very well be looking at a heavier and thicker cable, at least with modern, affordable technology. I can't even imagine how expensive the interconnects for that type of optical cable would be. I think we'll have a copper solution long before we get an optical one.
boeush - Thursday, March 3, 2016 - link
What? You can send signals bidirectionally through a single fiber without those signals interfering (light passes through light without interacting), unless you were deliberately designing the fiber and tuning the frequencies to create some kind of nonlinear-optics side effects. Similarly, if you need to send multiple data streams through a single fiber, you just send them concurrently on separate wavelengths. All it (and bidirectional transmission) requires is a beam-splitter and a prism at each end (receiver and transmitter).
As far as flexibility and weight, compare a typical optical S/PDIF cable vs. a typical DP/HDMI cable. The latter has to be thick and heavy due to multiple twisted pairs and shielding/insulation.
ikjadoon - Wednesday, March 9, 2016 - link
I have no idea about the rest of your discussion, but I was so surprised how thin and flexible my sound system's optical cable was. I was like...did they finish manufacturing the cable? Did they forget to fill in the wire in the middle? Felt like a thin empty plastic tube....that is, until you plug it in and see the red lights shining out of the cable, hahahaha.B3an - Thursday, March 3, 2016 - link
Any ETA on DP 1.3 hardware? It's taking way too long.
xenol - Thursday, March 3, 2016 - link
Variable refresh rates are still optional. *sigh*
I would love it if VESA would make them mandatory, thereby not giving monitor makers an excuse to slap "gaming" markups on monitors that support them.
zodiacfml - Monday, March 7, 2016 - link
The solution could be useful for industries that can't wait for next-generation speeds, but it is still not an optimal solution to the bandwidth problem. Though another interface is what we need right now, making a new display interface based on optics is the next step.