What I did was record a 6-second 1080P video of the sky one afternoon using my Samsung Note 5 - a few clouds, nothing fancy, but convenient :-). As you probably know, gradual shades of a relatively uniform color like the blues of the sky are prone to quantization "banding" at lower bit depths and inadequate bitrates. My hopes for this simple experiment were, first, to see the benefits of HEVC over AVC for general image quality; second, to compare whether 8-bit vs. 10-bit encoding made a difference to the perceived output; and third, to make sure file sizes were not significantly different wherever we do see significant qualitative variation.
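For those who haven't seen banding up close, here's a tiny Python sketch (purely illustrative, nothing to do with the actual encodes below) that quantizes a smooth gradient to progressively fewer levels:

```python
# Quantize a smooth "sky" ramp to various bit depths to make banding visible.
# Requires numpy and Pillow.
import numpy as np
from PIL import Image

h, w = 540, 960
ramp = np.linspace(0.35, 0.85, w)      # smooth horizontal brightness ramp
sky = np.tile(ramp, (h, 1))

for bits in (4, 6, 8):
    levels = 2 ** bits
    banded = np.round(sky * (levels - 1)) / (levels - 1)   # snap to grid
    Image.fromarray((banded * 255).astype(np.uint8), mode="L") \
         .save(f"gradient_{bits}bit.png")
# Bands are obvious at 4-6 bits; a 10-bit pipeline extends the same idea
# with 1024 levels per channel instead of 256.
```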
Procedure:
1. Capture a short sky video with gradual gradients - the original was captured at ~17Mbps AVC.
2. Encode with Handbrake Nightly (2016121501, 64-bit) plus the appropriate HEVC and AVC 10-bit .dll's as described last time. Also as previously mentioned, "Encoder Preset" was set to "Medium" speed and framerate to "Same as source" (30 fps), and the encoder was deliberately given a low quality target (so imperfections are more obvious): average bitrate of 500kbps, single-pass processing. (A rough command-line equivalent is sketched just after this list.)
3. Create a composite of the same frame from each video for comparison... hopefully this demonstrates the difference that AVC vs. HEVC and 8-bit vs. 10-bit encoding made. Playback software was the K-Lite Codec Pack (12.7.15 Standard, Dec 20, 2016) with the MPC-HC player.
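For command-line folks, here's roughly what those settings translate to. This is only a sketch - I used the Handbrake GUI, not this script - shelling out to ffmpeg with hypothetical filenames, and it assumes a build whose libx264/libx265 can output 10-bit:

```python
# Approximate ffmpeg equivalents of the Handbrake settings above (sketch only).
import subprocess

SRC = "sky_source.mp4"  # hypothetical name for the ~17Mbps phone capture

jobs = [
    ("libx264", "yuv420p",     "sky_avc_8bit.mp4"),
    ("libx264", "yuv420p10le", "sky_avc_10bit.mp4"),
    ("libx265", "yuv420p",     "sky_hevc_8bit.mp4"),
    ("libx265", "yuv420p10le", "sky_hevc_10bit.mp4"),
]

for codec, pix_fmt, out in jobs:
    subprocess.run([
        "ffmpeg", "-y", "-i", SRC,
        "-c:v", codec,
        "-preset", "medium",   # "Encoder Preset" = Medium
        "-b:v", "500k",        # average bitrate 500kbps, single pass
        "-pix_fmt", pix_fmt,   # 8-bit vs. 10-bit encoding
        "-an",                 # audio not needed for this test
        out,
    ], check=True)
```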
For those with large / hi-res 2160P monitors, here's the 3x2 array:
On the left side we see the original video frame captured at 1080P, 30fps, around 17Mbps bitrate. And on the right are the four transcoded images - AVC or HEVC through Handbrake, each with either 8-bit or 10-bit processing; all transcoding done at the very low bitrate of 0.5Mbps as noted! We're looking at the "gracefulness" of the deterioration and whether improvements can be seen between the 8 bits of color shades versus the larger 10-bit range available to the encoder**...
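By the way, if you want to build a similar composite yourself, here's a small Python/PIL sketch; the filenames, the 3-second grab point, and the exact grid layout are my assumptions:

```python
# Pull the same frame from each clip with ffmpeg and tile a 3x2 grid:
# original on the left, AVC middle, HEVC right; 8-bit top row, 10-bit bottom.
import subprocess
from PIL import Image

def grab_frame(clip, png, seconds=3):
    subprocess.run(["ffmpeg", "-y", "-ss", str(seconds), "-i", clip,
                    "-vframes", "1", png], check=True)
    return Image.open(png)

layout = ["sky_source.mp4", "sky_avc_8bit.mp4",  "sky_hevc_8bit.mp4",
          "sky_source.mp4", "sky_avc_10bit.mp4", "sky_hevc_10bit.mp4"]

frames = [grab_frame(clip, f"cell{i}.png") for i, clip in enumerate(layout)]
w, h = frames[0].size
grid = Image.new("RGB", (w * 3, h * 2))
for i, frame in enumerate(frames):
    grid.paste(frame, ((i % 3) * w, (i // 3) * h))
grid.save("composite_3x2.png")
```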
So, what do you think?
This is what I see, as per the 3 items listed at the start:
1. Image quality, AVC vs. HEVC - obviously a "win" for HEVC overall. Macroblocks are less obvious in the HEVC image, and details like the branches of the tree have been much better retained. Of course, fine details such as the subtleties of the clouds have been lost at this very low bitrate... but subjectively, I find the deterioration less objectionable with the HEVC encoder.
2. Comparing 8-bit to 10-bit, we see an improvement with the deeper bit depth. Although the original AVC recording was less than ideal in terms of the blue sky gradients, I think we can see that the AVC 10-bit encode had less "blockiness", especially in the upper mid portion of the deeper blue sky. As for the HEVC encode, again, the extra bit depth seems to provide smoother shading in the sky despite the general loss of resolution at such a low bitrate. There also appears to be better retention of detail in the cloud (like the right "tip" of that cloud and the shades of white/gray).
3. Of course, the improvements wouldn't count for much if the file sizes were significantly larger with the 10-bit encodes! Here they are:
Conclusion:
Okay, this is a very simple test and clearly the results cannot be overgeneralized. But at least in situations like this one where bitrate is purposely restricted, it's quite clear that HEVC is superior at retaining detail (as expected - remember, the usual estimate is that HEVC needs ~50% less bitrate for approximately the same quality as AVC).
What I find interesting is the difference in this example between 8-bit and 10-bit encoding despite the source being an 8-bit video. Although image quality loss is evident, it just seems that 10-bit encoding resulted in a less objectionable image for both AVC and HEVC. All without increasing file size / bitrate of the video file.
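A plausible (and admittedly simplified) intuition for why this happens: values computed inside the encoder - predictions, filtered and scaled pixels - rarely land exactly on the output grid, and a 10-bit grid rounds those intermediates about 4x more finely than an 8-bit one. A quick Python check of that arithmetic:

```python
# Toy check, not x264/x265 internals: mean rounding error when snapping
# arbitrary intermediate values onto an 8-bit vs. a 10-bit grid.
import random

random.seed(1)
vals = [random.uniform(0, 255) for _ in range(100_000)]  # fake intermediates

for bits in (8, 10):
    step = 255 / (2 ** bits - 1)    # grid spacing, in 8-bit units
    err = sum(abs(v - round(v / step) * step) for v in vals) / len(vals)
    print(f"{bits}-bit grid: mean rounding error ~{err:.3f} (8-bit units)")
# Prints roughly 0.250 for 8-bit vs. 0.062 for 10-bit.
```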
The question is whether this kind of difference between 8-bit and 10-bit encoding remains meaningful at more reasonable bitrates for the video resolution (for example, at least 2Mbps using HEVC for 1080P video). My suspicion is that the difference is "probably not meaningful" for the most part. However, if 10-bit HEVC decoding hardware becomes relatively ubiquitous and the difference in encoding time between an 8-bit and a 10-bit encoder is not too burdensome (xx), then maybe just giving the encoder that extra 2 bits of latitude is a reasonable way to extract a little more image quality in challenging situations...
** By the way, we have an analogy to this in the lossy audio world... Remember the accuracy afforded by the MAD MP3 decoder from the turn of the 21st Century, which outputs 24-bit audio from the MP3 data? I thought that was cool back then, and the decoder still sounds great today. Obviously we cannot "add" information that was never there to begin with; at best we can hope to make things a little more accurate, with less objectionable lossy artifacts.
(xx) ADDENDUM: For those wondering about processing speed, I did a simple test with my Ratatouille 1080P Blu-Ray rip, compressing the whole 1:50 movie with 8-bit HEVC and 10-bit HEVC. The average framerate on my Intel i7-3770K @3.5GHz with 8-bit is 13.2fps, and with 10-bit it's 10.7fps (remember that later-generation CPUs like Skylake are significantly faster). So 10-bit encoding runs at ~81% of the 8-bit speed... As I mentioned previously, going with a 10-bit encode we're looking at a 10-20% speed penalty, depending on the computer and the video of course.
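To put those framerates into wall-clock terms, a quick back-of-envelope calculation (assuming the film runs at the typical Blu-Ray ~24fps - my assumption):

```python
# Rough encode-time estimate for a 1h50m movie at the measured framerates.
runtime_s = (1 * 60 + 50) * 60          # 1:50 -> 6,600 seconds
total_frames = runtime_s * 24           # ~158,400 frames at ~24fps

for label, fps in (("8-bit HEVC", 13.2), ("10-bit HEVC", 10.7)):
    print(f"{label}: ~{total_frames / fps / 3600:.1f} hours to encode")

print(f"10-bit speed is {10.7 / 13.2:.0%} of 8-bit")    # ~81%
```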
Okay, with that, I'm back to the holiday festivities :-). Have a great time everyone...