Image space acquisition and modelling
To achieve realistic images, computer graphics employs detailed models which accurately describe the geometry and the materials of the scene to be rendered. The creation of those models comes at a high cost, though, especially when they need to be produced manually. This led to the development of acquisition techniques which construct these models automatically for real-world scenes.
Image space methods stand out in that they do not require explicit scene descriptions as collections of surfaces and their illumination response. Instead, they capture a set of images of the scene and render the desired result image directly from them. Thus, they are especially suited for complex scene geometries and can robustly handle a multitude of materials – albeit at the cost of requiring large collections of input images for exact results.
Global illumination effects, such as reflections, refractions and shadows, are handled intrinsically correctly for a multitude of materials by image space relighting. This picture shows a rendering of a test scene assembled from only 230 input pictures.
Image space relighting enables the rendering of images of scenes under drastically different illumination conditions. Even though no explicit scene model (geometry and local reflectance properties) is required, the reproduction of illumination effects is especially faithful to reality and expressive, as at the cut through the orange.
Image Space Relighting
One of the tasks that can be solved efficiently in image space is relighting: from a dataset of pictures, which were taken under controlled, or at least known and varying, illumination conditions, new pictures are rendered for novel illuminations which have no direct counterpart in the recorded data set.
In the general case, the image data set forms an 8D reflectance field [Debevec et al., 2000]. It enables not only arbitrary illumination conditions to be simulated, but also the camera position and its intrinsic parameters to be modified. Due to its high dimensionality, though, it is difficult to acquire and handle, which is why image space relighting makes simplifying assumptions: the camera is assumed to be fixed between acquisition and rendering, and the simulated illumination is modeled as distant, so that it does not vary within the scene volume. Thus, the reflectance field is reduced to only four dimensions. Since the approach is universal, all materials that satisfy the linear light transport assumption, together with their interactions through global illumination effects such as shadows, caustics, reflections and color bleeding, can be simulated accurately.
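The linear light transport assumption means that relighting reduces to a weighted sum: the image under any novel distant illumination is a linear combination of the basis images captured under the individual calibrated lights. A minimal sketch of this idea (function and variable names are illustrative, not from the cited work):

```python
import numpy as np

def relight(basis_images, light_weights):
    """Render a novel illumination as a weighted sum of basis images.

    basis_images : (N, H, W, 3) array, one image per calibrated light
                   direction (one "row" of the 4D reflectance field).
    light_weights: (N,) array, intensity of each acquisition light in the
                   novel environment (e.g. an environment map resampled
                   onto the N acquisition directions).
    """
    # Linear light transport: the result is the weighted sum
    # sum_i light_weights[i] * basis_images[i].
    return np.tensordot(light_weights, basis_images, axes=1)
```

Global illumination effects come for free because each basis image already contains the full (shadowed, refracted, interreflected) response of the scene to its light.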
Problems addressed in the past include:
- illumination with a Bayesian approach: how to transfer illumination information between scene parts without painstakingly calibrating the illumination conditions at setup time? [Fuchs et al. 2005, EGSR]
- adaptive sampling and extended light sources: how to acquire a 4D reflectance field so that only those illumination directions are sampled for which the reflectance field exhibits a high variation? [Fuchs et al. 2006, TOG]
- interpolation: how to use the acquired data most effectively, so that from a few hundred recorded pictures results can be achieved as if thousands of pictures had been available? [Fuchs et al. 2007, EG]
- passive reflectance field displays: how to construct an object with a programmed appearance that matches a given reflectance field? [Fuchs et al. 2008, SIGGRAPH]
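To make the interpolation problem concrete, the simplest baseline is to synthesize an unsampled light direction by blending its nearest sampled neighbours. The cited work [Fuchs et al. 2007, EG] is considerably more sophisticated (naive blending produces ghosted shadows and highlights); this sketch only illustrates the starting point:

```python
import numpy as np

def interpolate_direction(img_a, img_b, t):
    """Naive baseline: blend two basis images captured under adjacent
    light directions to approximate an in-between direction.

    img_a, img_b: images for the two nearest sampled directions.
    t           : blend factor in [0, 1] (0 -> img_a, 1 -> img_b).
    """
    # Sharp features (shadow edges, specular highlights) cross-fade
    # instead of moving, which is exactly the artifact that smarter
    # interpolation schemes aim to remove.
    return (1.0 - t) * img_a + t * img_b
```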
Future challenges may be found in fighting the “curse of dimensionality” – how to capture a full-fledged 8D reflectance field efficiently? – and in the ongoing simplification of the measurement process.
From left to right: the glass mouse has a geometry which is hard to understand even for a human observer if only a single view is available (the photo shows the original). FIRS makes it possible to virtually cut through the object and construct a data set which can be visualized as a volume, but also assembled into a surface model.
Volumetric Measurements of Transparent Scene Geometry with FIRS
Image-space methods not only provide solutions for relighting, they are also useful in other contexts. One example is the measurement of the geometry of a transparent, complex object, which is difficult to acquire with conventional methods because refractions and reflections dominate its appearance.
Fluorescent Immersion Range Scanning (FIRS) solves the problem by submerging the scene in a fluorescent liquid. Illuminating the working volume with a single laser line of appropriate wavelength creates a 2D slice through the volume in which the liquid lights up but the object remains dark. This slice can then be captured with a digital camera; as the line is moved, a densely sampled volume data set can be acquired, image by image.
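The slice-by-slice acquisition described above can be sketched as follows. This is a simplified illustration, not the actual FIRS pipeline: it assumes one grayscale frame per laser position and a single brightness threshold separating the glowing liquid from the dark object.

```python
import numpy as np

def assemble_volume(slice_images, threshold=0.5):
    """Stack per-slice camera frames into a binary occupancy volume.

    slice_images: sequence of (H, W) grayscale frames, one per laser
                  position along the sweep axis.
    threshold   : brightness below which a pixel is assumed to belong
                  to the (non-fluorescing) object.

    Returns a (D, H, W) boolean volume: True where the object is
    (dark pixels), False where the fluorescent liquid lights up.
    """
    volume = np.stack(slice_images, axis=0)  # sweep axis becomes depth
    return volume < threshold
```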
This approach is also suitable for non-transparent materials such as shiny metal surfaces. While the opacity of the object prevents capturing a full volume data set, image processing can still locate the surface closest to the laser emitter, and thus reconstruct the shape of an otherwise difficult-to-scan material.
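For opaque objects, the per-pixel depth of that first surface can be read off a thresholded sweep volume by finding the first dark sample along the sweep axis. A hedged sketch under the same simplified assumptions as above (names are illustrative):

```python
import numpy as np

def first_surface_depth(volume, threshold=0.5):
    """Per-pixel index of the first dark voxel along the sweep axis,
    i.e. the surface closest to the laser emitter.

    volume: (D, H, W) grayscale sweep data (bright = fluorescing liquid).
    Returns an (H, W) float array of slice indices, NaN where no
    surface was detected along the sweep.
    """
    dark = volume < threshold            # True where the object blocks light
    hit = dark.any(axis=0)               # pixels where a surface exists
    depth = dark.argmax(axis=0).astype(float)  # first True along depth
    depth[~hit] = np.nan                 # argmax is 0 even with no hit
    return depth
```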