by Michal Haindl, Matěj Sedláček and Radomír Vávra
Museums and other cultural heritage custodians are interested in digitizing their collections, not only to preserve cultural heritage, but also to make its information content accessible and affordable to researchers and the general public. Once an object’s digital model has been created, it can be digitally reconstructed to its original uneroded or unbroken shape, or realistically visualized using different historical materials. Some artifacts are so fragile that they cannot leave the carefully controlled light, humidity, and temperature of their storage facilities; they are therefore already inaccessible to the public, and the viable alternative is to exhibit them in the form of an augmented reality scene. Researchers at the Institute of Information Theory and Automation (UTIA) of the Czech Academy of Sciences in Prague have developed a sophisticated measurement and processing setup that enables the construction of physically correct virtual models.
While precise shape measurement can be achieved with advanced laser scanners and other commercially available shape-measuring devices, capturing an object’s surface appearance is far more complicated. Virtual reality applications typically use oversimplified surface material and illumination models that only remotely approximate the appearance of real scenes, so human observers can easily tell real scenes from virtual or augmented reality ones. The visual appearance of a real surface material is a highly complex physical phenomenon that depends intricately on the incident and reflected spherical angles, time, the light spectrum and several other physical variables. Recent advances in computer hardware and virtual modelling finally allow the view and illumination dependence of natural surface materials to be taken into account in the form of bidirectional texture function (BTF) models [1], but this comes at the expense of an immense increase in the required number of material sample measurements and in visualization complexity.
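In the standard formulation (see [1]), a BTF is a seven-dimensional reflectance function measured over the material surface,

BTF(x, y, θi, φi, θv, φv, λ),

where (x, y) is the planar position on the material sample, (θi, φi) and (θv, φv) are the spherical angles of the incident illumination and of the viewing direction, and λ is the wavelength. A measured BTF tabulates this function over thousands of illumination/view direction pairs, which is what makes the raw data so voluminous.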
Within the Pattern Recognition department of UTIA, we have built a high-precision robotic gonioreflectometer [1,3]. The setup consists of independently controlled arms carrying a camera and a light source. Its parameters, such as angular precision (0.03 degrees), spatial resolution (1000 DPI), and selective spatial measurement, qualify this gonioreflectometer as a state-of-the-art device. The typical resolution of an area of interest is around 2000 x 2000 pixels, each represented by at least 16-bit floating-point values to achieve a reasonable representation of high-dynamic-range visual information. The memory requirements for storing a single material sample amount to 360 gigabytes per spectral channel, while more precise spectral measurements, even with only moderate sampling of the visible spectrum (400-700 nm), further increase the amount of data to five terabytes or more.
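The short Python sketch below illustrates where such numbers come from; the angular sampling counts are illustrative assumptions for a back-of-the-envelope estimate, not the exact (considerably denser) sampling used by the UTIA gonioreflectometer.

# Back-of-the-envelope BTF storage estimate (illustrative sampling, not the
# exact UTIA measurement protocol).
width, height = 2000, 2000        # pixels in the area of interest
bytes_per_value = 2               # 16-bit floating point, one spectral channel

# Assumed angular sampling: 81 illumination x 81 viewing directions is a
# commonly used BTF sampling; the UTIA setup measures many more
# configurations, which pushes the per-channel total toward the figure
# quoted above.
n_illum, n_view = 81, 81

images = n_illum * n_view                             # 6,561 images
bytes_per_image = width * height * bytes_per_value    # 8 MB per image
total_gb = images * bytes_per_image / 1e9
print(f"{images} images, about {total_gb:.0f} GB per spectral channel")

# Multiplying by tens of spectral bands over 400-700 nm moves the total
# into the terabyte range.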
Figure 1: Celtic druid head (300 BC, National Museum in Prague): precise plaener BTF model (left) and the reconstructed head using the same BTF model under different environmental lighting (right).
Figure 2: Reconstructed druid head using the linden wood BTF model.
We applied this technique, within the Czech Science Foundation project GAČR 14-10911S, to the best-known Celtic artifact from the European Iron Age period (450–50 BC) owned by the National Museum in Prague: the Celtic druid head. This plaener druid head (Figure 1, left) is so precious that it has been exhibited only three times since its discovery in a sandpit in Mšecké Žehrovice, Czechia, in 1943, and each time only for a few days under tight security. Its exact digital model, created from our laser scanner measurements with ±0.1 mm accuracy, not only allows a high-quality copy to be made for permanent exhibition, but also lets art historians study in detail the ancient artist’s chiseling style and the ritual smashing that took place when the Celts had to abandon their sanctuary, and lets researchers explore alternative materials of that era (Figure 2). Visual techniques are non-invasive and thus ideal for documenting and assessing cultural objects directly on researchers’ own computers. Unfortunately, some parts of this precious sculpture were never recovered (see the right-hand part of the head in the digital model in Figure 1, left). These missing parts, as well as the stone scars caused by the ritual smashing and by ploughing damage, can be reconstructed using prediction-based image processing methods. Figure 1 (right) and Figure 2 illustrate such shape reconstruction results in the original plaener material and in a possible alternative wooden material, respectively. Once accurate shape and material models are available, the model can be inserted into an augmented reality scene (Figure 1, right) in a way that respects physically correct illumination and viewing conditions derived from the real environment; for example, the druid head could be placed in a true Celtic sanctuary, should archaeologists ever find one.
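As a rough illustration of the prediction principle behind such reconstructions (and emphatically not the actual algorithm used for the druid head), the toy Python function below fills masked-out pixels of an image by repeatedly predicting them from their measured neighbours.

import numpy as np

def diffusion_fill(image, missing, iterations=500):
    """Toy prediction-based fill-in: pixels flagged in the boolean mask
    'missing' are repeatedly replaced by the average of their four
    neighbours, so values propagate inward from the intact surroundings.
    Real reconstructions rely on far more sophisticated shape and texture
    prediction models; this only conveys the basic idea."""
    filled = image.astype(float).copy()
    for _ in range(iterations):
        neighbours = (np.roll(filled, 1, axis=0) + np.roll(filled, -1, axis=0) +
                      np.roll(filled, 1, axis=1) + np.roll(filled, -1, axis=1)) / 4.0
        filled[missing] = neighbours[missing]
    return filled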
References:
[1] M. Haindl, J. Filip: “Visual Texture”, Springer-Verlag, London, 2013, ISBN 978-1-4471-4901-9.
[2] M. Haindl, J. Filip: “Advanced textural representation of materials appearance”, in Proc. SIGGRAPH Asia ’11 (Courses), pp. 1:1-1:84, ACM, 2011.
[3] M. Haindl, J. Filip, R. Vávra: “Digital Material Appearance: The Curse of Tera-Bytes”, ERCIM News, No. 90, pp. 49-50, ISSN 0926-4981, 2012.
Please contact:
Michal Haindl
CRCIM (UTIA), Czech Republic
Tel: +420 266052350
E-mail: