by Matteo Dellepiane and Marco Callieri
It is now possible to display very detailed colour information on large 3D models. Research at the Visual Computing Lab, ISTI-CNR, Pisa, Italy, aims at creating a mostly automatic, easy-to-use way of mapping colour onto geometric data.
Advances in acquisition technology and in the visualization of huge 3D datasets mean that it is now possible to acquire and display highly detailed 3D models. However, to achieve a fully realistic result, high-quality colour information must be added to the geometric structure.
A coloured model can be created starting from three elements: a 3D dataset, a set of photos (such as those shown in Figure 1) and calibration data, ie, the values of the parameters of the camera that took the photos. These parameters can be divided into two groups: extrinsic parameters (a translation vector and a rotation matrix), which describe the position and orientation of the camera in space, and intrinsic parameters, the 'internal' settings of the camera (focal length, lens distortion). Unfortunately, in most cases calibration data are not available and must be estimated.
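As a minimal sketch of how the two groups of parameters act together, the standard pinhole camera model first applies the extrinsic parameters (rotation and translation) to bring a world point into camera coordinates, then applies the intrinsic parameters (here just focal length and principal point; lens distortion is omitted for brevity) to obtain a pixel position. The function name and argument layout below are illustrative, not taken from TexAlign:

```python
import numpy as np

def project_point(X, R, t, f, cx, cy):
    """Project a 3D world point X onto the image plane with a pinhole model.

    R, t  : extrinsic parameters (3x3 rotation matrix, translation vector)
    f     : focal length in pixels (intrinsic)
    cx, cy: principal point in pixels (intrinsic)
    Lens distortion is ignored in this sketch.
    """
    # Extrinsic step: world coordinates -> camera coordinates.
    Xc = R @ X + t
    # Intrinsic step: perspective division, then focal scaling and offset.
    u = f * Xc[0] / Xc[2] + cx
    v = f * Xc[1] / Xc[2] + cy
    return np.array([u, v])

# A point on the optical axis projects onto the principal point:
p = project_point(np.array([0.0, 0.0, 0.0]),        # world point at the origin
                  np.eye(3),                        # no rotation
                  np.array([0.0, 0.0, 5.0]),        # camera 5 units away
                  f=1000.0, cx=320.0, cy=240.0)
# p is (320, 240), the image centre
```

Estimating calibration data amounts to recovering R, t, f, cx and cy (plus distortion terms) from point correspondences, which is what the alignment step below provides.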
Camera parameters are calculated by aligning each photo to the model: since this can be done automatically only in particular cases (eg, 'shape from silhouette'), user intervention is necessary.
Algorithms that align a photo to a model need correspondences between the two: the user must indicate matching points on both objects.
Our software, called TexAlign, allows users to load both the model and all the photos, creating an Alignment Process whose data (correspondence coordinates, parameters of aligned images) are saved in an XML file. It is also possible to set correspondences not only between a photo and the model, but also between photos, using overlapping sections. The 'image to image' correspondence is used by the application to infer new correspondences with the model. A 'workload minimizer' analyses the graph of correspondences between the elements of the Alignment Process and helps the user complete all the alignments quickly.
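The inference of new model correspondences from image-to-image links can be sketched as a transitive step over the correspondence graph: if a feature in photo A is matched to a feature in photo B, and B's feature is already tied to a model point, then A inherits that model point. The data layout and function name below are hypothetical, not TexAlign's actual implementation:

```python
def infer_model_links(photo_to_model, photo_to_photo):
    """Infer new photo-to-model correspondences transitively.

    photo_to_model: {photo: {feature_id: model_point}}
    photo_to_photo: {(photo_a, photo_b): {feature_in_a: feature_in_b}}
    Returns only the newly inferred {photo: {feature_id: model_point}} links.
    """
    inferred = {}
    for (a, b), matches in photo_to_photo.items():
        for fa, fb in matches.items():
            # If photo B already ties this feature to a model point,
            # photo A inherits that model point through the overlap.
            mp = photo_to_model.get(b, {}).get(fb)
            if mp is not None and fa not in photo_to_model.get(a, {}):
                inferred.setdefault(a, {})[fa] = mp
    return inferred

# Photo B is aligned (its feature 'f2' maps to a model point);
# photo A only overlaps B, yet gains a model correspondence:
known = {"B": {"f2": (1.0, 2.0, 3.0)}}
overlaps = {("A", "B"): {"f1": "f2"}}
new_links = infer_model_links(known, overlaps)
# new_links == {"A": {"f1": (1.0, 2.0, 3.0)}}
```

Each inferred link spares the user one manual point selection, which is the kind of saving the workload minimizer exploits when it chooses which alignment to ask for next.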
Once calibration data have been estimated, we have to 'project' the colour information onto the model. In 3D graphics, there are two ways to display colour on a model: texture mapping, where a single texture image is 'mapped' onto the model, and 'per vertex' colour, where an RGB colour value is assigned to each vertex. The first technique gives very good results; its main limitation is that the photos have to be 'packed' into an image of maximum 4096x4096 resolution. Since we normally deal with tens of photos, we would need to subsample them in order to fit all of them into a single image. Hence, since we can provide very detailed 3D models, colour per vertex is the best choice to preserve both geometric and colour detail. We assign to each vertex of the model the colour of the images which project onto it. Since more than one image could project onto the same vertex, a major issue arises: how can we automatically 'weigh' the contribution of each photo in order to achieve a realistic result?
To solve this problem, we developed an application called TexTailor. Starting from the set of photos, the model and the calibration data, TexTailor automatically creates a mask for each photo, assigning a weight to each pixel. The weight is calculated by considering three main values: the angle between the normal of the vertex associated with the pixel and the direction of view, the distance between the point of view and the vertex (depth) and the distance of the pixel from a discontinuity in the 'depth map' (that is a map where each pixel has the value of the depth of the associated vertex). These values are combined to calculate a weight. The colour value of each vertex is a 'weighted sum' of the contributions of all images. A result achieved using this approach is shown in Figure 2.
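The weighting scheme described above can be sketched as follows: each of the three cues (viewing angle, depth, distance from a depth-map discontinuity) is mapped to a factor in [0, 1], the factors are combined into a per-pixel weight, and the final vertex colour is the weighted sum of all projecting images. The specific combination below (a simple product) and all names are illustrative assumptions, not TexTailor's exact formula:

```python
import numpy as np

def pixel_weight(normal, view_dir, depth, max_depth, edge_dist, max_edge_dist):
    """Combine the three cues into one weight; each factor is in [0, 1].

    normal, view_dir: unit vectors (vertex normal, direction toward camera)
    depth           : distance from the camera to the vertex
    edge_dist       : pixel distance from the nearest depth-map discontinuity
    """
    angle_term = max(float(np.dot(normal, view_dir)), 0.0)  # grazing views -> 0
    depth_term = max(1.0 - depth / max_depth, 0.0)          # distant views -> 0
    edge_term = min(edge_dist / max_edge_dist, 1.0)         # near depth jumps -> 0
    return angle_term * depth_term * edge_term

def blend_colour(colours, weights):
    """Per-vertex colour as the weighted sum of all projecting images."""
    w = np.asarray(weights, dtype=float)
    if w.sum() == 0.0:
        return np.zeros(3)  # no image sees this vertex
    return (np.asarray(colours, dtype=float) * w[:, None]).sum(axis=0) / w.sum()

# A head-on, nearby view far from any depth discontinuity gets half weight
# here only because it sits at half the maximum accepted depth:
w = pixel_weight(normal=np.array([0.0, 0.0, 1.0]),
                 view_dir=np.array([0.0, 0.0, 1.0]),
                 depth=1.0, max_depth=2.0,
                 edge_dist=10.0, max_edge_dist=5.0)
# w == 0.5; two equally weighted red and blue photos blend to purple:
c = blend_colour([[255, 0, 0], [0, 0, 255]], [1.0, 1.0])
# c == [127.5, 0.0, 127.5]
```

The product form means that any single bad cue (a grazing angle, a distant camera, a pixel right on a silhouette edge) drives the weight toward zero, which is the behaviour the masks are meant to capture.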
Even though the results so far are encouraging, more work is needed to improve the technique. The most promising area for future investigation is the estimation of the lighting environment. This can be done using known techniques (eg, mirrored spheres) during the photographic campaign, or by combining image processing and analysis of geometric information during the projection operation. A good estimation of lighting would prevent the projection of dark areas in photos and could also provide information about the reflectance properties of the material.
Visual Computing Lab at ISTI-CNR: http://vcg.isti.cnr.it/joomla/index.php
Matteo Dellepiane, Marco Callieri, ISTI-CNR, Italy
Tel: +39 0503152925