Monday 26 December 2016

QUICK COMPARE: AVC vs. HEVC, 8-bit vs. 10-bit Video Encoding

As I mentioned in the last blog post on HEVC encoding, in response to "Unknown" in the comments, I do believe there are potential subtle benefits to using the 10-bit x265 encoder in Handbrake even with an 8-bit video source. I figured I'd run a very quick test to show what I've seen...

What I did was record a 6-second video with my Samsung Note 5 in 1080P of the sky one afternoon with a few clouds, since it was convenient :-). As you probably know, gradual shades of a relatively uniform color like the blues of the sky can be prone to quantization "banding" with lower bit-depths and inadequate bitrate. My hope in this simple experiment was, first of all, to see the benefits of HEVC over AVC for general image quality. Secondly, I wanted to compare whether using 8-bit vs. 10-bit encoding made a difference to the perceived output. Thirdly, if we do see a significant qualitative difference, let's just make sure the file sizes aren't significantly different.
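(If you're wondering what "banding" looks like in numbers, here's a purely illustrative little Python sketch - not part of the actual test procedure - showing how a narrow sky-blue gradient collapses into far fewer distinct steps at 8 bits than at 10 bits:)

import numpy as np

# Illustrative only: a narrow range of "sky blue" shades across one 1920-pixel row,
# quantized to 8-bit vs. 10-bit code values. Fewer distinct steps = coarser bands.
gradient = np.linspace(0.40, 0.55, 1920)

steps_8bit = np.unique(np.round(gradient * 255))    # out of 256 possible levels
steps_10bit = np.unique(np.round(gradient * 1023))  # out of 1024 possible levels

print("distinct 8-bit steps: ", len(steps_8bit))    # ~39
print("distinct 10-bit steps:", len(steps_10bit))   # ~155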

Procedure:

1. Capture a short sky video with gradual gradients - the original was captured at ~17Mbps AVC.

2. Encode with Handbrake Nightly (2016121501, 64-bit) and the appropriate HEVC and AVC 10-bit .dll's as described last time. Also as previously mentioned, "Encoder Preset" is at "Medium" speed, framerate is "Same as source" (30 fps), and we'll tell the encoder to use a low quality setting so imperfections are more obvious: average bitrate of 500kbps, single pass processing. (For a rough command-line equivalent of these settings, see the sketch after this list.)

3. Create a composite of the same frame from each video for comparison... hopefully this will demonstrate the difference the AVC vs. HEVC and 8-bit vs. 10-bit encoders made. Playback software was the K-Lite Codec Pack (12.7.15 Standard, Dec 20, 2016) with the MPC-HC player.
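I did all of this through the Handbrake GUI as described, but for those who prefer scripting, something roughly equivalent can be approximated with ffmpeg and libx265 along these lines. This is only a sketch: it assumes an ffmpeg build with both 8-bit and 10-bit x265 support, "sky_source.mp4" is just a placeholder name for the phone recording, and Handbrake's rate control and filters won't match exactly:

import subprocess

# Approximation of the settings above: medium preset, ~500kbps average bitrate,
# single pass, framerate left the same as the source. Audio dropped with -an.
common = ["ffmpeg", "-y", "-i", "sky_source.mp4", "-an",
          "-c:v", "libx265", "-preset", "medium", "-b:v", "500k"]

# 8-bit HEVC encode
subprocess.run(common + ["-pix_fmt", "yuv420p", "sky_hevc_8bit.mp4"], check=True)

# 10-bit HEVC encode - same 8-bit 4:2:0 source, but 10-bit internal precision
subprocess.run(common + ["-pix_fmt", "yuv420p10le", "sky_hevc_10bit.mp4"], check=True)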

Results:


You'll need to click on the pictures above to expand the images to original size (hopefully without any further lossy degradation from Blogger). They were captured as lossless 1:1 PNGs on an 8-bit-panel computer monitor.

For those with large / hi-res 2160P monitors, here's the 3x2 array:

On the left is the original video frame captured at 1080P, 30fps, around 17Mbps bitrate. On the right are the frames encoded with either AVC or HEVC in Handbrake, using either 8-bit or 10-bit processing; all transcoding done at the very low bitrate of 0.5Mbps as noted! We're looking at the "gracefulness" of the deterioration and whether improvements can be seen when the encoder has the larger 10-bit range of color shades to work with rather than 8 bits**...

So, what do you think?

This is what I see as per the 3 items I listed in the second paragraph above:

1. Image quality comparing AVC to HEVC - it is obviously a "win" for HEVC overall. Macroblocks are less obvious in the HEVC image and details like the branches of the tree have been retained much better. Obviously, fine details such as the subtleties of the clouds have been lost due to the very low bitrate... However, I think that subjectively the deterioration is less objectionable with the HEVC encoder.

2. Comparing 8-bits to 10-bits, we see an improvement with the deeper bit depth. Although the original AVC recording was less than ideal in terms of the blue sky gradients, I think we can see that the AVC 10-bit encoding had less "blockiness", especially in the upper mid portion of the deeper blue sky. As for the HEVC encodes, again, the extra bit depth seems to help provide smoother shading in the sky despite the loss in general resolution at such a low bitrate. There also appears to be better retention of details in the cloud (like the right "tip" of that cloud and the shades of white/gray).

3. Of course, the improvements would not be worth much if the file sizes were significantly larger with the 10-bit encodes! Here they are:
I had asked Handbrake to encode all files at around 0.5Mbps, so I obviously did not expect wildly differing sizes. But notice that in both instances, HEVC and AVC, the 10-bit files were actually smaller than the 8-bit encodes, yet as described above, I think the loss of quality is less distracting with the 10-bit result.
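(As a rough sanity check on those numbers, here's the simple math for what a ~0.5Mbps average bitrate should produce over a 6-second clip, ignoring container overhead and audio:)

# Back-of-the-envelope file size at the requested average bitrate.
bitrate_bps = 500000   # ~0.5 Mbps target
duration_s = 6         # length of the sky clip
size_bytes = bitrate_bps * duration_s / 8
print("expected size: ~%.0f KB" % (size_bytes / 1024))   # ~366 KB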

Conclusion:

Okay, this is a very simple test and clearly the results cannot be overgeneralized. At least in situations like this one where bitrate is purposely restricted, it's quite clear that HEVC is superior in retaining detail (as expected; remember, the estimate is about a 50% reduction in bitrate for approximately the same quality as AVC).

What I find interesting is the difference in this example between 8-bit and 10-bit encoding despite the source being an 8-bit video. Although image quality loss is evident, it just seems that 10-bit encoding resulted in a less objectionable image for both AVC and HEVC. All without increasing file size / bitrate of the video file.

The question is whether this kind of difference between 8-bit and 10-bit encoding can be meaningful at more reasonable bitrates appropriate for the video resolution (for example, at least 2Mbps using HEVC for 1080P video). My suspicion is that the difference is "probably not meaningful" for the most part. However, if 10-bit HEVC decoding hardware is relatively ubiquitous and the difference in encoding times not too burdensome (xx) between an 8-bit vs. 10-bit encoder, then maybe just letting the encoder have that extra 2 bits of latitude could be a reasonable way to extract a little more image quality in challenging situations...
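(One way to put "reasonable bitrate" in perspective is bits-per-pixel. Just for illustration - the figures below are my own rough numbers, not any official guideline:)

# Bits-per-pixel: a rough gauge of how "starved" an encode is for a given
# resolution and framerate.
def bits_per_pixel(bitrate_bps, width, height, fps):
    return bitrate_bps / (width * height * fps)

print(round(bits_per_pixel(500000, 1920, 1080, 30), 4))    # ~0.008 bpp - the deliberately starved test above
print(round(bits_per_pixel(2000000, 1920, 1080, 30), 4))   # ~0.032 bpp - the "at least 2Mbps" 1080P HEVC case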

** By the way, we have an analogy to this in the lossy audio world... Remember the accuracy afforded by the MAD MP3 decoder from the turn of the 21st century that outputs 24-bit audio from the MP3 data? I thought that was cool back then and the decoder still sounds great today. Obviously we cannot "add" more info to what was not there to begin with; at best we can hopefully make things a little more accurate and with less objectionable lossy artifacts.

(xx) ADDENDUM: For those wondering about processing speed, I did a simple test with my Ratatouille 1080P Blu-Ray rip, compressing the whole 1:50 movie with 8-bit HEVC and 10-bit HEVC. The average framerate using my Intel i7-3770K @3.5GHz with 8-bits is 13.2 fps, and with 10-bits it's 10.7fps (remember that later generation CPUs like Skylake are significantly faster). Therefore 10-bit encoding runs at ~81% the speed of 8-bit... As I mentioned previously, going with a 10-bit encode we're looking at a 10-20% time penalty, depending on the computer and video of course.
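(To translate those framerates into wall-clock time, here's a quick estimate - assuming the movie is a ~24 fps source with a 110-minute runtime:)

# Rough wall-clock estimate for the full-movie encode from the measured
# average encoder framerates (assumes a ~24 fps source, 110-minute runtime).
total_frames = 110 * 60 * 24                   # ~158,400 frames
for label, enc_fps in [("8-bit", 13.2), ("10-bit", 10.7)]:
    hours = total_frames / enc_fps / 3600
    print("%s: ~%.1f hours" % (label, hours))  # ~3.3 vs. ~4.1 hours
print("10-bit speed ratio: ~%.0f%%" % (100 * 10.7 / 13.2))  # ~81%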

---------------------

Okay, with that, I'm back to the holiday festivities :-). Have a great time everyone...

10 comments:

  1. I'm not really a video guy, more audio, but man that was really interesting.
    Great job Arch!
    Still waiting for that hdmi vs usb breakdown...maybe throw I2S in there as well!
    Best wishes, Happy Holidays

    Replies
    1. Hey there sk16!

      Can't say I'm much of a video guy either :-). Just having fun with the tech and getting familiar with what's out there and the potential for us consumers.

      So, any ideas what you'd like to see when it comes to HDMI vs. USB +/- I2S?

  2. Back in the days of analogue TV, broadcasting engineers in Europe made the lower half of their "colour bars" line-up screen an area of solid red. This was done because the human eye is most sensitive to small changes (such as those caused by quadruplex video tape artifacts) at the red end of the visible spectrum.

    It would be interesting to see how the results you obtained compare to a similar video taken of say, a red flag waving in a gentle breeze.

    Replies
    1. Interesting comment Roderick.
      I assume this is chroma subsampling you're referring to? The AVC 1080P file from the phone is compressed 4:2:0, so Cb and Cr are subsampled to half resolution both horizontally and vertically... This is maintained through the HEVC/AVC transcoding of course.

      I haven't personally tested the effect of this but I agree, it could be interesting to see what happens in the CODECs!
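
      (If it helps to visualize, here's a tiny illustrative numpy sketch of what 4:2:0 means for the chroma planes - just random data standing in for a real frame:)

      import numpy as np

      # Illustrative 4:2:0 subsampling: luma (Y) kept at full resolution,
      # chroma (here Cb) averaged down to half resolution in each direction.
      Y = np.random.rand(1080, 1920)     # full-resolution luma plane
      Cb = np.random.rand(1080, 1920)    # pretend full-res chroma before subsampling
      Cb_420 = Cb.reshape(540, 2, 960, 2).mean(axis=(1, 3))
      print(Y.shape, Cb_420.shape)       # (1080, 1920) (540, 960)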

    2. Actually, I was referring to the simple fact that the human eye is most sensitive to small changes in hue or saturation at the red end of the spectrum. Therefore if you are making a subjective comparison of the impact on video of different encoding/decoding methods you should ideally do it at the red end of the spectrum, not the blue end. That is all.

    3. Good point. So I think we're both on the same wavelength...

      Red (and blue) both tend to be sacrificed in resolution when it comes to chroma subsampling, but red can be more noticeable in the compression.

      I'll see if I can find a good scene with gradations of red to check out...

  3. Arch,
    Reference the hdmi/usb/i2s remark, I'm curious to know if there is a measurable difference in noise or otherwise, any quality, between these transmission methods.
    In other words, and all else equal, if an audiophile had a choice in transmitting an audio file from computer to dac, which should they choose?
    Thanks, Happy New Year.

  4. Would you mind making the files (or at least the AVC 10-bit file) available for download? I can't find any files in that format to test.


  5. It is actually a great and useful piece of information. I am glad that you shared this helpful information with us. Please keep us informed like this. Thank you for sharing.
    PCAP Touch Screen

  6. HEVC 10-Bit is smaller in normal video, but in anime videos HEVC 8-bit is smaller than the 10-bit hevc videos.
