This post is my attempt to organise my thoughts in advance of this week’s Chit-Chat Across The Pond segment on the NosillaCast. These are of course just my initial thoughts for starting a conversation, so this post will be less than half of what you’ll get if you listen to the actual show!

So, what’s all this about then? On higher-end digital cameras (including some point-and-shoot models), you can save your images either as JPEGs or as so-called RAW files. RAW is a sort of blanket term; it’s not a single standard, and it’s different for different makes of camera. Each RAW format also has its own file extension. On my Nikon, for example, RAW files are saved with a .NEF file extension. What these formats all have in common is that they save the raw data collected by the camera’s sensor, hence the name.

RAW files are bigger than JPEGs, so that’s the trade-off, but there are many advantages to be gained in exchange for that lost disk space. In this post I’m going to talk about the advantages RAW offers me, and why I choose to shoot RAW over JPEG these days. This is not an exhaustive list of the advantages of RAW, and others will have different reasons for shooting RAW.

Without a shadow of a doubt the very biggest reason I shoot RAW is White Balance. Or, more specifically, the ability to adjust my White Balance in post-processing without losing quality. To make sense of this I think it’s important to start by explaining what White Balance is, and then to describe at a very high level how a camera sensor really works.

In theory, white light is light with an equal amount of all colours, and white surfaces are surfaces which reflect all colours of light equally. A white surface only gives off white light when white light hits it. In reality, we never get pure white light, so we never see true white light coming off a white surface. However, we still see things as being white. Why? Because our brains compensate for the colour of the ambient light, and make us see things as white, even when in reality the light coming from these white objects has more of some colours than others.

We can tell that a piece of paper is white when we’re outside in the sun, when we’re in the shade, when we’re outside in the evening, when we’re inside under incandescent lights, when we’re inside under fluorescent lights, and so on and so forth. This is despite the fact that sunlight is yellow, evening light is orange, incandescent light-bulbs are VERY yellow, and fluorescent light-bulbs are blue. In all these cases, if we were to measure the light coming off the white sheet of paper, we would record the colour of the light source, not equal amounts of all colours.

What a digital camera does is measure two things: how much light is hitting each pixel, and what colour that light is. The colour of the light is measured by counting the amounts of red, green, and blue light that hit the sensor. Hence, at its lowest level, a digital photo is a collection of groups of three brightness counts. For each pixel in the image we know how much red light hit it, how much green light hit it, and how much blue light hit it. If we were to render images directly without processing them, our piece of white paper would never look white in a photo, and everyone photographed indoors under incandescent lights would look like an Oompa-Loompa! Clearly this is not what we want.
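To make that concrete, here’s a tiny Python sketch (the counts are made up for illustration) of what the raw data for a small patch of white paper shot under incandescent light might look like:

```python
import numpy as np

# A hypothetical 2x2 patch of raw sensor counts, one (R, G, B) triple
# per pixel, captured under warm incandescent light. The paper in the
# scene is white, but the counts record the orange colour of the light.
raw_patch = np.array([
    [[230, 160, 90], [228, 158, 88]],
    [[231, 161, 91], [229, 159, 89]],
])

# Rendered directly, with no white-balance step, this "white" paper
# comes out strongly orange: lots of red, very little blue.
mean_r, mean_g, mean_b = raw_patch.reshape(-1, 3).mean(axis=0)
print(f"R={mean_r:.0f}  G={mean_g:.0f}  B={mean_b:.0f}")  # R is far above B
```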

How do we get around this? We compensate for the colour of the light illuminating the scene by sticking a mathematical formula between the raw RGB values and the rendered RGB values. The details of this formula are not important, but if you care, it’s based on the black-body curve. All light sources emit light following approximately the same shape of curve, and the difference between different light sources is where the peak of the curve falls. Scientists measure this peak in terms of temperature, hence you hear photos with too much red described as “warm”, and photos with too much blue described as “cool”. To get a photo with too much red to look right you “cool” the white balance, and to get a photo with too much blue to look right you “warm” it. This is also why the white balance slider tends to have a colour scale drawn on it, and why the readout for the value of the slider is usually in kelvin. The actual physics is not important; what matters is that the raw RGB values get pushed through a mathematical formula to make them look right, and that formula has just one input, temperature, which we call “White Balance”. Different light sources have different White Balances, and if you use the wrong White Balance to generate the image, it will look wrong.
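In practice, a raw converter turns that temperature into a set of per-channel gains. A full kelvin-to-RGB conversion is beyond a blog post, but here’s a simplified sketch that captures the idea: scale each channel so something we know to be white comes out neutral. The function name and sample values are my own, purely for illustration:

```python
import numpy as np

def white_balance(image, white_sample):
    """Scale each channel so a pixel known to be white comes out neutral.

    image        -- float array of shape (H, W, 3) holding raw R/G/B counts
    white_sample -- an (r, g, b) triple measured off a white object in the scene
    """
    r, g, b = white_sample
    # Per-channel gains that map the measured white to equal R=G=B,
    # anchored on green (the convention most raw converters use).
    gains = np.array([g / r, 1.0, g / b])
    return np.clip(image * gains, 0, 255)

# The orange-looking paper from the previous snippet...
paper = np.array([[[230.0, 160.0, 90.0]]])
print(white_balance(paper, white_sample=(230, 160, 90)).round())
# -> [[[160. 160. 160.]]] -- equal channels, i.e. neutral white/grey
```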

So, regardless of how you store your image on your camera’s memory card, at the moment the photo is taken your sensor collects the raw RGB data and passes it to the computer inside your camera. This is where things diverge depending on whether you shoot RAW or JPEG. If you shoot JPEG, your camera immediately applies the white balance formula and saves ONLY the result of that calculation. The rest of the data is discarded. If you shoot RAW, ALL the data is kept. This is why RAW files are bigger: they simply contain more data. Assuming the JPEG was generated with the correct White Balance value, the data thrown away is worthless. However, that is often not a valid assumption. If you set your camera to Auto White Balance (as most people do), then the white balance is chosen by the computer in the camera making a guess at the right value. Computers are stupid, hence that guess is often wrong. You can try to set your own white balance manually, but that’s not easy either. It’s almost impossible to accurately judge the White Balance on a tiny LCD display that is being illuminated by the same colour light that you’re trying to compensate for!
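Here’s a small numerical sketch of why baking in a bad guess hurts (the sensor counts and gains are invented for the example). The camera applies the wrong gains, squashes the result into 8 bits for the JPEG, and throws the rest away; the clipped red channel can never be recovered, while re-developing from the raw counts gets it right:

```python
import numpy as np

raw = np.array([4000.0, 2600.0, 1500.0])       # hypothetical 12-bit sensor counts

wrong_gains   = np.array([1.20, 1.00, 1.80])   # the camera's bad auto guess
correct_gains = np.array([0.65, 1.00, 1.73])   # the gains we actually wanted

# JPEG path: wrong white balance baked in, then squashed to 8 bits.
jpeg = np.clip(raw * wrong_gains / 16, 0, 255).astype(np.uint8)

# "Fixing" the JPEG later: undo one set of gains, apply the other. The red
# channel clipped at 255, so the repair lands in the wrong place.
repaired = np.clip(jpeg * (correct_gains / wrong_gains), 0, 255).round()

# RAW path: the original counts were kept, so we just develop again.
from_raw = np.clip(raw * correct_gains / 16, 0, 255).round()

print(jpeg)      # [255 162 168]     <- red already clipped
print(repaired)  # [138. 162. 161.]  <- still wrong, the data is gone
print(from_raw)  # [162. 162. 162.]  <- neutral, as it should be
```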

The main reason I shoot RAW is that it’s really hard to accurately set your white balance out in the field. If you get the white balance wrong and you’ve shot JPEG, the information you need to correct that mistake has been discarded! On the other hand, if you shoot RAW, all the data is still there, so you can losslessly correct the White Balance afterwards.
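If you want to see this for yourself outside of Aperture, the open-source rawpy library (a Python wrapper around LibRaw) can develop the same .NEF twice with two different white balances; the filename and multiplier values below are just placeholders:

```python
# A minimal sketch using the third-party rawpy library (a LibRaw wrapper).
# Each call re-reads the original sensor counts, so no quality is lost no
# matter how often you change your mind about the white balance.
import rawpy

with rawpy.imread("DSC_0042.NEF") as raw:  # hypothetical filename
    as_metered = raw.postprocess(use_camera_wb=True)

with rawpy.imread("DSC_0042.NEF") as raw:
    # user_wb takes four channel multipliers: R, G1, B, G2.
    corrected = raw.postprocess(user_wb=[2.1, 1.0, 1.4, 1.0])
```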

The second big reason I shoot RAW is that camera sensors have a larger dynamic range than JPEG image files. The dynamic range is the difference between the brightest and the darkest thing in the image. A camera sensor captures quite a large range (typically 12 or 14 bits per channel, compared to the 8 bits per channel a JPEG can hold), and when your camera stores an image as a JPEG, all the data outside the limited range that a JPEG can store is discarded. Again, if you shoot RAW all that data is kept, and can be used in post-processing to bring out more detail in both the shadows and the highlights. If you only ever shoot in ideal conditions this doesn’t matter, but really, who only uses their camera in ideal conditions? Certainly not me! In fact, I really love shooting in more extreme conditions, because I think you can get really interesting photos when you push the envelope a bit.
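A quick sketch with invented numbers shows what that means for highlights. Four distinct tones in a bright sky all collapse to pure white once a JPEG-style conversion pushes them past 255, but pulling the raw counts down first keeps the gradient:

```python
import numpy as np

# Hypothetical 14-bit sensor counts across a bright sky gradient
# (the 14-bit maximum is 16383).
sky = np.array([14900, 15400, 15900, 16383])

# JPEG-style conversion with a slight exposure boost: everything near
# the top of the range slams into 255 and the gradient is gone.
jpeg = np.clip(sky * (255 / 16383) * 1.10, 0, 255).astype(np.uint8)
print(jpeg)  # [255 255 255 255] -- four tones become one

# Recovering highlights from RAW: pull the raw counts down *before*
# converting, and the four tones stay distinct.
recovered = (sky * 0.85 * (255 / 16383)).astype(np.uint8)
print(recovered)  # [197 203 210 216]
```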

So, in practical terms, what does this higher dynamic range give you? It gives you much more scope to use the exposure, recovery, shadows, and highlights sliders, as well as the levels tools, within Aperture or whatever editing software you’re using. You can use these adjustments on JPEGs too, but with RAW files you get a much bigger range of adjustment before the image quality starts to degrade.
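The degradation shows up as banding, or posterisation, when you push the shadows hard. Another sketch with invented numbers: the same shadow gradient holds over a thousand distinct levels in 14-bit raw data but only sixteen in an 8-bit JPEG, so a strong shadows push leaves far fewer tones to work with:

```python
import numpy as np

# The same deep-shadow gradient as the sensor recorded it (14-bit)
# and as an 8-bit JPEG stored it.
shadow14 = np.arange(0, 1024)  # 1024 distinct shadow levels
shadow8  = shadow14 // 64      # the same range squashed to 16 levels

# A strong "shadows" push: brighten by 8x, then quantise to 8 bits.
from_raw  = np.unique((shadow14 * 8) // 64)
from_jpeg = np.unique(shadow8 * 8)

print(len(from_raw), len(from_jpeg))  # 128 vs 16 distinct tones: banding!
```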

Finally, I love shooting RAW because I can tonemap single RAW files. Since RAW files contain more dynamic range than JPEGs, they are, by definition, HDR images! Granted, the dynamic range from a single RAW file is nowhere near as large as the ranges you can generate with multiple exposures. All the same, it’s still a higher range than normal, so you can still get something out of the image by tonemapping it. This is quite a controversial idea, but I have used it to great effect a number of times. For a start, although the range is smaller than you can get by combining multiple exposures, the fact that it’s a single exposure does have advantages. You can tonemap images of moving objects like trains, wind doesn’t ruin your shot by adding ghosting all over the place, and you don’t need a tripod.
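Tonemapping just means compressing a high dynamic range into the narrow range a screen can show, while preserving detail at both ends. Photomatix keeps its exact algorithms to itself, but the classic global Reinhard operator gives the flavour; here’s a minimal sketch (the luminance values are invented):

```python
import numpy as np

def reinhard(luminance, key=0.18):
    """Global Reinhard tonemapping: compress any range into 0..1."""
    # Anchor the mapping on the log-average luminance of the image.
    l_avg = np.exp(np.mean(np.log(luminance + 1e-6)))
    scaled = key * luminance / l_avg
    return scaled / (1.0 + scaled)

# Hypothetical linear luminances from a single RAW file: deep shadow
# detail and bright highlight detail in the same frame.
lum = np.array([0.002, 0.02, 0.2, 2.0, 8.0])
print(reinhard(lum).round(3))  # [0.002 0.021 0.178 0.684 0.896]
```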

An argument can be made that you can also get at the extra data stored in the RAW file by using Photoshop tools and adjustments rather than tonemapping. This is certainly true, but only if you have the time and the skills to do so, and only if you actually HAVE Photoshop! I don’t own Photoshop because it’s just too expensive, so I haven’t been able to develop the needed skills. What I do have is a good tonemapping tool (Photomatix Pro), and a lot of experience generating subtly tonemapped images. So, for me, in my situation, tonemapping single RAWs does make sense for images where Aperture isn’t giving me the results I want. At the end of the day, what matters is the end product, not how it was generated!

So, these are the three reasons I choose to shoot in RAW. You can summarise the lot by saying that RAW files give you a greater range of control in post-processing, because no data is thrown away by your camera.