Codecs are one of the last things I wrapped my head around when I got into video; it’s not exactly an engaging subject, and all the technical terms can be overwhelming. But depending on the camera, your codec can be a real limiting factor. Overall, I’m really happy with my a7S II. The full-frame sensor, lens system, dynamic range, and overall image quality are incredible. But no system’s perfect, and the weakest point of the a7S II is its codec. So let’s figure out how to work with it.
To put it simply, a codec is a way of encoding video into a file format. There’s a DVD codec, a Blu-Ray codec, a YouTube codec; all the media you watch has been encoded into a format that fits the playback system. Consumer codecs are fairly standardized, but when you’re on the professional side, there’s a huge range of options to choose from (hence the confusion). Luckily, despite the vast number of individual codecs out there, the principles are pretty simple. Frame rate (frames per second) and resolution (number of pixels) are easy to understand, and both are set by the camera before you press record. The main factors controlled by the codec are bit rate and bit depth.
Bit rate is a measure of how much total information (e.g. visual detail) can be captured. Think of it this way: there’s a huge amount of information passing from the lens into the camera, but a consumer camera can’t process all of it at once (mostly due to overheating). Because of this, it has to compromise and throw out some of that information. What the codec’s trying to do is figure out which parts are important, and which parts can be discarded. For example: in a shot of a skyscraper against a blue sky, the codec will try to preserve the fine details of the building while assigning less data to the solid backdrop of the sky.
Now, when you’re using high bit rates (say, 400 megabits per second and up), there’s a lot less need for compromise; the details can be preserved across the whole image. But when you’ve only got a limited amount of data to work with, you have to compromise more. And that’s where we run into trouble; the Sony a7S II has a maximum bit rate of 100 megabits per second, which is a far cry from current professional formats (DNxHR HQX is about 800 megabits per second). As such, it’s throwing away a lot of data, and it doesn’t always make the right call. For example, here’s a wide shot from a recent live recording (click here for full size):
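To get a feel for what those bit rates mean in practice, here’s a quick back-of-the-envelope calculation (plain Python, using the numbers from this article; it ignores audio and container overhead, so treat it as a rough estimate):

```python
def recording_size_gb(bit_rate_mbps: float, minutes: float) -> float:
    """Approximate file size in gigabytes for a given video bit rate.

    megabits/s * seconds -> megabits; / 8 -> megabytes; / 1000 -> gigabytes.
    """
    megabits = bit_rate_mbps * minutes * 60
    return megabits / 8 / 1000

# The a7S II's 100 Mbps internal codec vs. ~800 Mbps DNxHR HQX,
# for a 10-minute take:
print(recording_size_gb(100, 10))  # 7.5 GB
print(recording_size_gb(800, 10))  # 60.0 GB
```

Eight times the bit rate means eight times the storage, which is part of why cameras in this class compress so aggressively in the first place.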
As you can see, the codec is preserving the detail in the back curtain and the drum set, but it’s throwing away a lot of detail in the faces. Of course, that’s the opposite of what we want. A colleague of mine ran into an extreme example of this while shooting a concert with an LCD screen behind the artists; the codec put all the information into the background while turning the performers’ faces into a blurry mess. So yes, this is a real problem.
One workaround is to shoot with a shallow depth of field. When you keep the subject in sharp focus while blurring the background, you’re essentially forcing the codec to keep the detail where it belongs. As an example, here’s another shot from the same show (click here for full size):
An image like this is ideal content for a limited codec. But there is no in-camera fix if your shot just has a lot of detail in it. For that, we’ll have to turn to an external solution.
There’s a reason external recorders are so popular right now; they allow you to send your video signal to dedicated hardware that can use a much higher bit rate. Here’s another shot from the show, this one recorded to an Atomos recorder (click here for full size):
Despite the fact that there’s a lot more going on in the background, the detail in the face and hair is preserved; and that’s with a bit rate of only 200 megabits per second. (Note that this image was not denoised, so there’ll be some digital artifacts visible.) By increasing the bit rate, we vastly improve the detail and quality of the footage we can capture. However, bit rate is just one side of the equation.
Where bit rate is the amount of information in video, bit depth is the kind of information that can be recorded; more specifically, how finely color is recorded. 8-bit video is capable of storing about 16.8 million colors, where 10-bit video is capable of storing over 1 billion. Think of it this way: let’s say you have a gradient that’s going from one color to the next. With 10-bit color, you have four times as many “steps” in between on each channel, which works out to roughly 64 times as many colors overall. Here’s what that looks like in practice (courtesy of Dave Dugdale):
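Those color counts follow directly from the bit depth: each of the three color channels gets 2^bits levels. A quick sanity check in Python (a generic sketch, not specific to any camera):

```python
def color_stats(bits_per_channel: int) -> tuple[int, int]:
    """Levels per channel and total representable colors for RGB video."""
    levels = 2 ** bits_per_channel  # steps along one gradient axis
    total = levels ** 3             # independent R, G, B combinations
    return levels, total

levels8, total8 = color_stats(8)
levels10, total10 = color_stats(10)
print(levels8, total8)      # 256 16777216   (~16.8 million colors)
print(levels10, total10)    # 1024 1073741824 (~1.07 billion colors)
print(levels10 // levels8)  # 4  (4x the steps per channel)
print(total10 // total8)    # 64 (64x the colors overall)
```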
As you can see, the colors on the left transition sharply around the nose and mouth, creating visible banding (it looks like pink noise here). On the right, the skin tones blend much more smoothly. This becomes even more obvious when you’re grading footage, as pushing the exposure or changing the color exacerbates those errors.
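That banding effect is easy to reproduce in code: snap a smooth gradient to 8-bit and 10-bit precision and compare how many distinct shades survive. A minimal sketch (pure Python, not tied to any particular footage):

```python
def quantize(value: float, bits: int) -> float:
    """Snap a 0..1 value to the nearest representable level at this bit depth."""
    max_level = 2 ** bits - 1
    return round(value * max_level) / max_level

# Sample a smooth gradient densely and count the distinct output shades:
samples = [i / 9999 for i in range(10000)]
distinct_8 = len({quantize(v, 8) for v in samples})
distinct_10 = len({quantize(v, 10) for v in samples})
print(distinct_8)   # 256 shades -> visible bands across a wide gradient
print(distinct_10)  # 1024 shades -> a much smoother blend
```

The gradient itself is identical in both cases; only the precision of the recording changes, which is exactly what you see in the side-by-side skin-tone comparison.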
Problem is, the Sony a7S II doesn’t output 10-bit footage. And if it doesn’t output it, an external recorder can’t capture it. So there’s no way around this; we’re stuck with sharper transitions between colors and the artifacts that result.
While using an external recorder can get you a much more detailed image, unfortunately the a7S II will always be limited by the bit depth of its codec. So I’d always recommend setting your exposure and white balance as accurately as possible to cut down on color correction in post. I’d also recommend denoising & dithering your footage, which I write about here.
Check out Pete’s music on Facebook! // Pete Muller