Understanding Tonemapping

DISCLAIMER: This is a post where I write down the things I learn about tonemapping and colour, and it is constantly evolving. There will be technical terms here that are incorrect, but the post will improve as my understanding does.

When most people talk about colour, what they actually mean is chromaticity. Chromaticity is a two-dimensional property, as shown in the graph above comparing the sRGB colour space (the triangle) with the visible light spectrum.

The issue is that colour is a three-dimensional property that also involves luminance. HDR display brightness is measured in cd/m² (candelas per square metre), also known as nits.

On a 200-nit HDR monitor, the hex value #FFFFFF (pure white) will show as the same colour as #BBBBBB on a 400-nit monitor when measured with a colorimeter such as a SpectraCal. This is because luminance is fundamental to colour: the amount of light coming from a surface affects our perception of it.
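
As a rough sanity check, here is the arithmetic behind that claim as a minimal Python sketch, assuming a simple 2.2 gamma and that code value 255 maps to the display's peak brightness (real displays and sRGB's piecewise curve differ slightly):

```python
# Back-of-the-envelope check: convert an 8-bit hex level to emitted
# luminance, assuming a plain 2.2 gamma and that the monitor's peak
# brightness maps to code value 255.

def luminance_nits(hex_level: int, peak_nits: float, gamma: float = 2.2) -> float:
    """Approximate luminance in nits for an 8-bit channel level."""
    return peak_nits * (hex_level / 255) ** gamma

print(luminance_nits(0xFF, peak_nits=200))  # ~200 nits: #FFFFFF on a 200-nit display
print(luminance_nits(0xBB, peak_nits=400))  # ~202 nits: #BBBBBB on a 400-nit display
```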

When an image is viewed on a different monitor, you rely on the brightness of the user's monitor for colour accuracy and dynamic range, and on the quality of the monitor for coverage of the intended colour space. Unfortunately there isn't an absolute standard for colour space. Common colour spaces such as CIE Lab are relative to however bright your monitor goes and don't abide by an absolute brightness standard such as nits, so you can never be completely sure an image will display correctly on every device; the usual 1-100 brightness setting on a monitor bears no obvious relation to the nit value of the display. This means the image you produce won't look the same on, say, your client's monitor, or in print, or on a TV.

Emulating ACES Tonemapping

ACES is a colour space designed by the Academy of Motion Picture Arts and Sciences, the organisation behind the Oscars. It's used by major feature films and ensures correct colour representation through a pipeline, as well as a wide dynamic range in an image. It preserves as much information in an image as possible, so that blacks and whites don't 'clip' when lighting is at real-world values, and lets a LUT (look-up table) interpret that data into a more natural and realistic image.

By emulating ACES colour space settings (the IDT, or Input Device Transform) to tonemap the raw image in the frame buffer, we can then apply our LUT (the RRT, or Reference Rendering Transform) to create a 'look' for the image, either in Corona or in Lightroom.

The ODT (Output Device Transform) section of the ACES pipeline is where you encode your image into a format compatible with the final output device. For my clients an sRGB JPEG is usually fine, but images shouldn't be saved as such until the very last step. More often than not this can be done straight from the frame buffer, unless further post-processing is required. Simply saving the frame buffer as an EXR or other uncompressed format isn't enough: you need full coverage of the colour gamut until the very end, and this is what ACES offers. We apply ACES emulation first to ensure the LUT has as much data to work with as possible.
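
As a mental model of that ordering, here is a minimal, runnable sketch in Python. Every transform is a trivial stand-in (a Reinhard curve for the RRT, a plain gamma for the ODT), not real ACES maths; the point is simply that display encoding comes last:

```python
import numpy as np

# Conceptual sketch of the ordering described above. Every transform is
# a trivial stand-in, NOT the real ACES maths; the point is that the
# display encoding (ODT) happens last, after tonemapping and grading.

def input_transform(raw):                  # IDT: bring data into the working space
    return raw                             # placeholder: assume it's already linear

def rendering_transform(scene_linear):     # RRT: filmic tone curve (stand-in: Reinhard)
    return scene_linear / (1.0 + scene_linear)

def creative_grade(img):                   # the 'look' (LUT, Lightroom, ...) - a no-op here
    return img

def output_transform(img, gamma=2.2):      # ODT: encode for an sRGB-ish display, last step
    return np.clip(img, 0.0, 1.0) ** (1.0 / gamma)

hdr_pixels = np.array([0.05, 1.0, 8.0])    # scene-referred values, can exceed 1.0
display = output_transform(creative_grade(rendering_transform(input_transform(hdr_pixels))))
print(display)
```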

But why generate all that data if we are saving as sRGB anyway? It's actually more accurate to tonemap an image with a wider gamut down to sRGB than it is to render in sRGB without proper tonemapping.
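
A small example illustrates why. Clipping a bright, saturated colour channel-by-channel destroys the ratios between channels and shifts the hue, while a tone curve (a simple Reinhard here, standing in for ACES) compresses all channels smoothly:

```python
import numpy as np

# A bright orange, scene-referred: the red channel is far above 1.0.
pixel = np.array([4.0, 1.0, 0.2])

# Naive approach: clip each channel to the sRGB 0-1 range independently.
# The red/green ratio collapses from 4:1 to 1:1 and the hue shifts.
clipped = np.clip(pixel, 0.0, 1.0)

# Tone mapping (a simple Reinhard curve, standing in for ACES) compresses
# all channels smoothly, so the ratios - and the hue - survive.
tonemapped = pixel / (1.0 + pixel)

print(clipped)     # [1.  1.  0.2]       -> reads as yellow, hue destroyed
print(tonemapped)  # [0.8  0.5  0.167]   -> still orange, just compressed
```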

When a high-end camera takes a photo, the raw data looks very different from the final image. The image below is a raw image from a RED camera.

The camera takes this image and then applies its own colour transform to make it look more like what you'd expect to see. You might know this as a LUT, or look-up table.

If you use the standard ACES post-process settings in the VFB, you will end up with an image in the ACES colour space which might look similar to the first image. After applying a LUT made for ACES, you will have a dynamic and consistent image that looks like something a camera could produce and holds an incredibly large amount of data, covering almost the entire visual locus. The beauty of this is that if everyone uses a standardised LUT and a standard set of post-processing settings across a project, you can be sure you have an image with more natural colour data and lighting, as well as complete consistency in colour across every render, because it hasn't been scrambled by an sRGB colour space crushing the dynamic range of the image. You can see this below.

Why is this useful in CG?

ACES is intended to preserve lots of data so that in compositing, CG and other elements sit in the same colour space and behave the same way when you do your final colour grade. We can take advantage of this within Corona to get more natural lighting results. In the image below, a physically correct sun and sky in Corona doesn't seem to light the room very naturally. This is because it is trying to display an HDR image on an LDR monitor without proper tonemapping (default settings with -2 exposure and a LUT applied). It's too yellow, too bright in the hotspots, and just doesn't look correct.

The image is burned out and clipping due to the lack of tonemapping

You can also see how the image is clipping and losing data in the white areas of the false colour map. You can apply this false colour map as a LUT to check your own scenes.
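
If you want the same kind of check outside the frame buffer, a crude version is easy to script. This sketch (my own illustration, not the LUT mentioned above) paints any pixel with a channel above 1.0 pure white:

```python
import numpy as np

# A crude version of the false-colour check described above: flag any
# pixel whose channels exceed the displayable 0-1 range as pure white.
# 'render' is assumed to be a float HxWx3 array of linear values.

def clipping_false_colour(render: np.ndarray) -> np.ndarray:
    out = np.clip(render, 0.0, 1.0)
    clipped = (render > 1.0).any(axis=-1)   # True where data would be lost
    out[clipped] = [1.0, 1.0, 1.0]          # paint clipping pixels white
    return out

demo = np.array([[[0.2, 0.4, 0.6], [3.0, 1.2, 0.9]]])  # second pixel clips
print(clipping_false_colour(demo))
```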

White areas denote colours that fall outside the sRGB gamut and therefore can't be displayed by your monitor.

Why does this happen? When you render an image, the data rendered is actually a lot brighter than your screen can reproduce, because it is a high dynamic range image. Most monitors are low dynamic range and use the sRGB colour space. Tone mapping is the process of mapping the colours of the render (which has a high dynamic range) to something that looks good on a low dynamic range output such as a monitor or TV.
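
As a concrete illustration, here is one of the simplest tone mapping operators, the 'extended Reinhard' curve with a white point. It is just an example of the idea, not what Corona or ACES actually use:

```python
import numpy as np

# Minimal sketch of tone mapping: squeeze a high-dynamic-range signal
# into the 0-1 range a low-dynamic-range display can show, instead of
# letting everything above 1.0 clip. This is the 'extended Reinhard'
# curve with a white point, purely as an illustration.

def tonemap_extended_reinhard(hdr: np.ndarray, white: float) -> np.ndarray:
    # Values at 'white' map exactly to 1.0; everything brighter is
    # compressed rather than clipped.
    return hdr * (1.0 + hdr / white**2) / (1.0 + hdr)

hdr = np.array([0.01, 0.5, 1.0, 4.0, 16.0])   # rendered radiance, unbounded
ldr = tonemap_extended_reinhard(hdr, white=hdr.max())
print(ldr)   # everything now lands in 0..1, highlights rolled off smoothly
```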

sRGB

The sRGB colour space used by a lot of renderers has a much smaller colour palette than a camera sensor, because sRGB was designed for very old CRT screens. Using the ACES emulation tonemapping on the raw data in the VFB tonemaps the frame buffer into something akin to the real ACES colour space and gives you access to a much larger colour gamut during grading: as big as that of the human eye (see below), with up to 33 stops of exposure in an image. When a LUT gets applied it has as much data as possible to work with, without being clamped by sRGB's LDR colour space showing only a small section of the colour data. This is becoming a standard in the motion picture and CG industries because the final output can have the fidelity of the source material.
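
For a feel of the shape of such a curve, Krzysztof Narkowicz's widely circulated single-expression fit of the ACES filmic tonescale is shown below. It is only an approximation for illustration, not Corona's ACES emulation or the full RRT + ODT:

```python
import numpy as np

# Krzysztof Narkowicz's well-known single-curve approximation of the
# ACES filmic tonescale. This is only a fit of the curve's shape for
# illustration - not Corona's ACES emulation, nor the full RRT + ODT.

def aces_film_approx(x: np.ndarray) -> np.ndarray:
    a, b, c, d, e = 2.51, 0.03, 2.43, 0.59, 0.14
    return np.clip((x * (a * x + b)) / (x * (c * x + d) + e), 0.0, 1.0)

linear = np.array([0.0, 0.18, 1.0, 4.0, 16.0])   # scene-linear values
print(aces_film_approx(linear))  # filmic roll-off into the 0-1 display range
```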

In sRGB, each colour channel is measured on a scale from 0 to 255 (256 levels per channel), as it is an 8-bit system. For example, if a colour has zero blue, it will be a mixture of red and green. This means we can generate 256 × 256 × 256 = 16,777,216 different colours with this model. If a colour falls outside this range, you get clipping, which is very bad because that data is lost. The colours sRGB can reproduce are indicated by the area inside the triangle shown across.
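
The arithmetic, and the clipping problem, in a few lines of Python:

```python
import numpy as np

# 8 bits per channel gives 256 levels, so 256**3 possible colours:
print(256 ** 3)   # 16777216

# Quantising a float value to 8 bits: anything outside 0-1 clips,
# and that information is gone for good.
def to_8bit(value: float) -> int:
    return int(np.clip(value, 0.0, 1.0) * 255 + 0.5)

print(to_8bit(0.5))   # 128
print(to_8bit(1.7))   # 255 - clipped, indistinguishable from 1.0
```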

The charts below compare the coverage of some popular colour spaces. sRGB is the standard across most monitors. Rec.2020 is a 10-bit system that can display 1024 levels each of R, G, and B, giving you access to billions of colours; it has been adopted by the UHD standard and is now standard in HDR displays. Adobe RGB is used for photography and other creative pipelines, but the largest gamut of all is ACES, which covers the entire visual locus.
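
One crude way to compare these gamuts numerically is the area of each space's triangle of primaries on the CIE xy chromaticity diagram, using the published xy coordinates and the shoelace formula (triangle area is only a rough proxy for gamut size, but it shows the ordering):

```python
# Rough gamut-size comparison: area of each colour space's triangle of
# primaries on the CIE xy chromaticity diagram (shoelace formula).

def triangle_area(p):
    (x1, y1), (x2, y2), (x3, y3) = p
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

primaries = {
    "sRGB":      [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)],
    "Adobe RGB": [(0.640, 0.330), (0.210, 0.710), (0.150, 0.060)],
    "Rec.2020":  [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)],
    "ACES AP0":  [(0.7347, 0.2653), (0.0000, 1.0000), (0.0001, -0.0770)],
}

for name, p in primaries.items():
    print(f"{name:9s} {triangle_area(p):.4f}")   # sRGB smallest, ACES largest
```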

A Solution

Now we apply the ACES emulation tonemapping settings. Once applied, the image appears washed out (below), and we need to bring the colours back to something our displays can show. The 'look', or LUT, is applied after the tonemapping. This lets you keep as much raw data in the image as possible whilst bringing the colours back to life in a natural way. You must ensure, however, that the LUT you use is compatible with this workflow; unfortunately, not all of the ones that ship with Corona are.

So if we apply the ACES emulation settings to this image by entering the values in the tonemapping section of the VFB, the image looks like this (ACES settings, -2 exposure, no LUT applied):

The raw data is tonemapped so that all the data Corona is generating is shown without clipping.

You can see in the false colour render that no colour data is clipping or being lost.

There are no white areas denoting clipped colours.

We then apply the LUT. In this instance we have chosen Filmic Base Contrast.
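
For reference, if you ever need to apply a .cube LUT outside the VFB, the colour-science Python package can do it. The file name below is hypothetical; substitute whichever LUT you are actually using (it must expect the tonemapped data you feed it, as noted above):

```python
import numpy as np
import colour  # the 'colour-science' package

# Sketch of applying a .cube LUT outside the VFB using colour-science.
# The file name is hypothetical - substitute your own LUT.
lut = colour.read_LUT("Filmic_Base_Contrast.cube")
tonemapped = np.random.rand(4, 4, 3)   # stand-in for your tonemapped render
graded = lut.apply(tonemapped)
print(graded.shape)
```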

No ACES
With ACES

As you can see, the image with ACES looks much more natural than the standard sRGB render. We have retained more data in the image, and colours will be completely consistent across all images using this setup.

By storing ACES images as 16-bit half-float files, we can encode 33 stops of dynamic range as well as cover the entire visual locus, giving us a gamut larger than any display. This means that regardless of what the image is destined for, it can be used by any display technology past, present, or future, and will display accurately and, more importantly, consistently. We can be certain that all images across a project look the same for use further down the pipe in grading.
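
You can check the half-float range with numpy. Stops are doublings of light, so the usable range in stops is log2 of the largest value over the smallest:

```python
import numpy as np

# How much range does 16-bit half float hold? Stops are doublings of
# light, so the span in stops is log2(max / min).
finfo = np.finfo(np.float16)
normal_stops = np.log2(float(finfo.max) / float(finfo.tiny))
print(round(float(normal_stops)))   # ~30 stops between the smallest normal
                                    # value and the maximum (~65504)
# Counting denormals the span is wider still, which is where figures like
# the 33 stops quoted above come from.
```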

ACES Settings

If your visualisation goal is to emulate a photographic, filmic look, then ACES emulation works really well, mimicking the response curve of film and giving a more realistic result. This is done using the settings shown across (exposure can be any value required).

You will need to apply a LUT to see your processed image.