High Dynamic Range in HD View 3

This post is all about high dynamic range imaging (HDRI).  As I mentioned in the Beta 3 announcement, HD View is the best way we are aware of to interact with HDR images over the web.  We take advantage of three recent advances to make this happen: HD Photo compression technology, the PC’s graphics processing unit (GPU), and high-contrast displays.  I will describe each of these, but first, a quick intro to HDRI.

What is HDRI?

[Figure: the range of luminance values found in the real world, from starlight to direct sunlight. Image courtesy of Mark Fairchild.]

Light in the world has a very large range of intensities.  In the figure above, you can see a 100 million to 1 range (about 26 stops) in intensity from starlight to direct sunlight.  An HDR image is one that contains a measure of the intensity of light in a scene: at each pixel, it records one of the numbers on the scale in the figure above.  This means that an HDR image can represent a scene that has a very large intensity difference between the shadows and the highlights.  Cameras today can’t directly record scenes with such an extreme range, so to create these images photographers stack multiple exposures together, one correctly exposed for the shadows, one for the mid-tones, one for the highlights, and so on.  These separate exposures are then merged by software(1, 2) to produce an HDR image.

This is quite a different representation from a JPEG image.  In terms of range, a JPEG can represent only a small portion of the possible range of intensities in a scene, which makes JPEG a low dynamic range (LDR) format.  This is why, in a backlit situation, your camera can produce a good JPEG of the foreground object or one of the bright background, but not both at once.  It turns out that most computer displays are also LDR, so even if a JPEG could store a greater range, you wouldn’t be able to see it on a standard monitor.  JPEGs also differ from HDR in the way they store color information, but that is a topic for my next post.
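To make the merge step concrete, here is a minimal sketch, in Python with NumPy, of how bracketed exposures can be combined into a floating-point radiance map.  It assumes the inputs have already been linearized and uses a simple triangle weighting; it is an illustration only, not the algorithm used by Photoshop’s Merge to HDR or by HD View.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge bracketed, linearized LDR exposures into an HDR radiance map.

    images         : list of float arrays in [0, 1], camera response removed
    exposure_times : list of shutter times in seconds, one per image
    """
    numerator = np.zeros_like(images[0], dtype=np.float64)
    denominator = np.zeros_like(images[0], dtype=np.float64)

    for img, t in zip(images, exposure_times):
        # Trust mid-tone pixels most; clipped shadows and highlights carry
        # little reliable information, so weight them near zero.
        weight = 1.0 - np.abs(2.0 * img - 1.0)
        numerator += weight * (img / t)   # per-exposure radiance estimate
        denominator += weight

    return numerator / np.maximum(denominator, 1e-6)
```

The result is a floating-point image whose pixel values are proportional to scene radiance, i.e. the "measure of the intensity of light" described above.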

So why are there so many HDR images stored as JPEGs on flickr?  That is thanks to a technique known as tone-mapping.  In order to make an HDR image display on an LDR monitor, tone-mapping algorithms compress the wide range of intensities in the HDR image to fit in the narrow range of the monitor.  There have been terrific advances in tone-mapping, but it is a very hard problem to make tone-mapped images look natural.  In order to compress the range, dark parts of the scene need to be brightened to the point that they have similar intensity to bright parts.  This can lead to an unnatural look, because as a human observer of this image, you expect bright objects to look a certain way when compared to dark objects, but the tone-mapping process has changed the tonal relationship between these objects.  I’ve heard many people say that they "don’t like the HDR effect," but what I think they object to is the effect of the tone-mapping process.  It is important to separate these two things.  HDR images represent the intensities of light in a scene.  Tone-mappers make HDR images presentable on the computer monitors available today.
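To make the idea of tone-mapping concrete, here is a minimal sketch of a global tone-map operator in the spirit of Reinhard’s: scale the image so its log-average luminance lands at a mid-gray key, then compress bright values so they approach white asymptotically.  This is a toy illustration, not the algorithm behind any particular tool mentioned here.

```python
import numpy as np

def tone_map(hdr, key=0.18):
    """Map an HDR radiance image into displayable [0, 1] values
    using a simple global (Reinhard-style) operator."""
    # Luminance from linear RGB (Rec. 709 weights).
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]

    # Scale so the log-average luminance maps to a mid-gray 'key' value.
    log_avg = np.exp(np.mean(np.log(lum + 1e-6)))
    scaled = key / log_avg * lum

    # Compress: very bright values asymptotically approach 1.
    mapped = scaled / (1.0 + scaled)

    # Apply the per-pixel luminance ratio to each color channel.
    ratio = mapped / np.maximum(lum, 1e-6)
    return np.clip(hdr * ratio[..., None], 0.0, 1.0)
```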

HD Photo and JPEG XR

Now on to how we are supporting HDR in HD View.  On the High Dynamic Range page of the HD View web site we include the following diagram, which describes our HDR pipeline.

[Figure: diagram of the HD View HDR pipeline.]

The pipeline starts with a camera capturing one or more RAW images.  A single RAW image by itself contains a greater dynamic range than JPEG can store, and, as described above, multiple bracketed RAW images can be merged into an HDR image.  If we want to deliver this over the web today, that would involve the tone-mapping process described above followed by storing the result as a JPEG.  In our pipeline, we instead store the extended-dynamic-range image directly in a new image format called HD Photo.  There are other file formats available today that could also be used, such as .HDR, OpenEXR, and TIFF, but none of them offers the sophisticated compression algorithms that HD Photo does.  This compression allows us to transfer HDR images over the web effectively; the file sizes of those other HDR formats are simply too large for interactive web access.  As an example, the images on our HDR Survey page were downloaded from Mark Fairchild’s site in OpenEXR format and converted to the HD Photo format.  The OpenEXR files range in size from 35MB to 46MB; after conversion to HD Photo the images are about one tenth the size (3.6MB to 6MB).

HD Photo started as a Microsoft Research project.  It was then refined and further developed by the Microsoft Codec Group.  Microsoft subsequently presented HD Photo to the JPEG standards body, and that organization has decided to make HD Photo a standard that will be known as JPEG XR.  I am very much looking forward to the time when JPEG XR emerges from the standardization process and is in ubiquitous use.  For now, though, you can start experimenting with the new technology in the form of HD Photo.  The HD View team has provided a simple command-line tool and a Photoshop plugin that you can use to convert your RAW and HDR images to the HD Photo format that HD View requires for streaming over the web.  HD Photo has lots of other great features beyond HDR; please see Bill Crow’s blog for many more details.

GPUs

The graphics processing unit (GPU) of almost every modern PC now has HDR capabilities.  For example, a new $300 PC from Dell uses the Intel 3100 GPU.  This chip can load and process floating-point images, which is the same representation that HD Photo uses and is a great representation for HDRI(3).  Of course, higher-end GPUs from NVIDIA and ATI have these capabilities as well, but the point is that even entry-level PCs can now process HDR imagery in real time.  GPUs have this capability because, in order to generate realistic-looking 3D graphics, game designers need to simulate real-world lighting; today’s most realistic-looking games are in fact doing HDR.  GPUs can also run ‘shader’ programs, which can perform fairly sophisticated image processing algorithms at interactive rates.  We use these capabilities to perform real-time tone adjustment on HDR images.
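To give a feel for the per-pixel work a tone-adjustment shader does, here is a sketch in NumPy of an exposure-plus-gamma adjustment on a floating-point image.  HD View’s real shaders run on the GPU and are more involved; the function and parameter names below are purely illustrative.

```python
import numpy as np

def expose_and_encode(hdr, exposure_stops=0.0, gamma=2.2):
    """The kind of per-pixel work a tone-adjustment shader does:
    scale a floating-point HDR image by an exposure factor, then
    gamma-encode it for an 8-bit LDR display."""
    gain = 2.0 ** exposure_stops            # stops are powers of two
    linear = np.clip(hdr * gain, 0.0, 1.0)  # clip to the displayable range
    encoded = linear ** (1.0 / gamma)       # simple display gamma
    return (encoded * 255.0 + 0.5).astype(np.uint8)
```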

Displaying HDR

HDR displays are coming, but they aren’t available just yet.  When they become available, our HD View pipeline won’t need to change, and any content that you’ve created for HD View will look great on them.  But for the foreseeable future we will also have LDR displays.  So now that we’ve delivered an HDR image all the way to an LDR monitor, what do we do with it?  Remember that HD View is an interactive image viewer, so we can create the ideal tone adjustment given the end user’s display, the portion of the image they are currently viewing, and any intent we let them express.  This is quite different from the tone-mapped images you see today.  Those images are processed assuming that they will be viewed full-frame on an LDR monitor or on paper.  This can lead to the distorted relationship between tonal values I described above.  It can also lead to a flattening, or loss of contrast, in the image.  In HD View we instead do tone adjustment on the fly as the user views the image.  For example, when zoomed all the way out on an image we might see the shadows remain truly dark, but as we zoom into those dark areas HD View will auto-expose to reveal previously indiscernible details (the ability to retain those details in the shadows is thanks to HD Photo).  Because our pipeline retains all of the dynamic range of the original photographs and we have the ability to run shaders, we offer users a variety of ways to do tone adjustment on the fly.  These are accessed via the toolbar in the upper right of the window and are summarized in the list below.  For screen shots of these modes in effect, see our help page.

Metering 0: Perform no automatic tone adjustment.  This is still an interesting mode for extended-bit-depth images because the wide-gamut color support is still in effect.

Metering 1: Auto-adjust the exposure value depending on what is in view.  This mode is very similar to the auto gain in a camera (see the sketch after this list).

Metering 2: Automatically compress or expand the dynamic range using what is known as a global tone-map operator.  The adjustment curve is continuously updated depending on what is in view and is very similar to the one published by Erik Reinhard, the difference being that we transition to a contrast-enhancement curve for low-contrast regions.
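As a rough sketch of the idea behind the auto-exposure mode (Metering 1), the code below meters only the pixels currently in view and returns an exposure, in stops, that brings their log-average luminance to a mid-gray target.  The function name, the rectangle convention, and the mid-gray constant are assumptions for illustration, not the shipping HD View code.

```python
import numpy as np

def auto_exposure_stops(hdr_lum, view_rect, mid_gray=0.18):
    """Pick an exposure (in stops) for the region currently in view.

    hdr_lum   : 2-D array of scene luminance for the full image
    view_rect : (top, left, bottom, right) of the visible crop
    """
    top, left, bottom, right = view_rect
    visible = hdr_lum[top:bottom, left:right]

    # Log-average luminance of only what the user can currently see.
    log_avg = np.exp(np.mean(np.log(visible + 1e-6)))

    # Exposure that maps that average to mid-gray, expressed in stops.
    return np.log2(mid_gray / log_avg)
```

As the user zooms into a dark corner, the log-average of the visible pixels drops and the returned exposure rises, so shadows that were black when zoomed out brighten to reveal detail, which is the auto-expose behavior described above.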

Camera for the Web

To wrap up, I have described a pipeline that lets you maintain all of the light information that was captured from the original scene.  A common term for this is a ‘scene-referred’ workflow.  In the past, scene-referred workflows have been advocated for the image editing process but not for delivery over the web; for web delivery, conversion to JPEG is the standard.  Here we are allowing scene-referred imagery to travel all the way to the end user’s monitor.  One way that we like to think of this is that HD View is a camera for the web.  It performs a task very similar to that of camera hardware today, which reads scene-referred rays of light and generates the best JPEG image that it can.  We instead are enabling that scene-referred data to travel over the web.  The HD View "camera" can then generate the best possible photograph using additional information that is only available at the end of this pipeline.  This includes the monitor being used, the user’s current pan/zoom interaction, and any additional settings that we want to expose to the user.  Currently those settings are the three tone adjustments listed above, but we can see expanding this to any of the settings that you find in hardware cameras today.  Beyond that, we are running on powerful PC hardware, so our software camera can perform image processing that is just not feasible in your point-and-shoot camera.

Try it out; we always appreciate feedback.

-Matt Uyttendaele


(1) I used Photoshop’s "Merge to HDR" for these examples on our site. 

(2) Wide field-of-view panoramic images are also typically HDR.  The Capturing and Viewing Gigapixel Images paper describes a pipeline for producing HDR gigapixel images, and PTGui Pro and Autopano Pro both have the ability to create HDR panoramas.

(3) See Figure 3 in Greg Ward’s HDR encodings paper.


2 Responses to High Dynamic Range in HD View 3

  1. Unknown says:

    Great post. Your flowchart made me smile: photons have 32 bits? According to the particle or the wave interpretation? 😉

  2. John says:

    Great post. I, like many, had assumed that HD View had died with the release of Deep Zoom and Silverlight 2. I now understand how much more HD View is than just viewing large images. I look forward to making my first HDR image and using HD View to show it online.
    Thanks!
