by Thomas Schiffer, Christoph Schinko, Torsten Ullrich and Dieter W. Fellner
The current methods of describing the shape of three-dimensional objects can be classified into two groups: composition of primitives and procedural description. As a 3D acquisition device simply returns an agglomeration of elementary objects (eg a laser scanner returns points), a real-world data set is always a – more or less noisy – composition of primitives. A generative model, on the other hand, describes an ideal object rather than a real one. Owing to this abstract view of an object, generative techniques are often used to describe objects semantically. Consequently, generative models, rather than being a replacement for established geometry descriptions (based on points, triangles, etc), offer a sensible semantic enrichment.
In combination with variance analysis techniques, generative descriptions can be used to enrich digitized artifacts. Detailed mesh comparisons can reveal the smallest changes and damage. These analysis and documentation tasks are valuable not only in the context of cultural heritage but also in engineering and manufacturing.
The Institut für ComputerGraphik und WissensVisualisierung (CGV) at Technische Universität Graz and Fraunhofer Austria tackled this problem and created a workflow that automatically combines generative descriptions with reconstructed artifacts and performs a nominal/actual value comparison. The bridge between the generative and the explicit geometry description is very important: it combines the accuracy and systematics of generative models with the realism and irregularities of real-world data. In this way, digitized artifacts can benefit from procedural modelling techniques: procedural models include expert knowledge within their object descriptions; eg classification schemes used in architecture, archaeology and civil engineering can be mapped to procedures and algorithms that realize a generative shape.
A generative description is like a shape template (eg to construct a cup) with some free input parameters (eg height and radius). For a specific object, only its type and its instantiation parameters have to be identified. Identification is required by digital library services for markup, indexing and retrieval. Such classification, and semantic metadata in general, are clearly important in the context of electronic product data management, product life cycle management, data exchange and storage.
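The idea of a shape template – geometry as a function of a few free parameters – can be sketched in a few lines of Python. The `cup` function and its parameters are purely illustrative assumptions; real generative models are considerably richer, but the principle is the same: the same procedure, called with different parameters, yields different instances of one shape family.

```python
import math

def cup(height, radius, segments=16, rings=8):
    """Hypothetical generative template: an open cylinder ("cup")
    sampled as a grid of surface points. Shape = function(parameters)."""
    points = []
    for i in range(rings + 1):
        z = height * i / rings                      # ring height along the axis
        for j in range(segments):
            a = 2.0 * math.pi * j / segments        # angle around the axis
            points.append((radius * math.cos(a), radius * math.sin(a), z))
    return points

# Two instances of the same template, differing only in their parameters:
small = cup(height=8.0, radius=3.0)
large = cup(height=12.0, radius=5.0)
```

Identifying an object then amounts to naming the template (`cup`) and its instantiation parameters (height, radius) – exactly the compact, semantic description that indexing and retrieval services can work with.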
The approach implemented by Fraunhofer Austria and CGV performs this semantic enrichment in three steps. The first step registers a generative model (including its free parameters) to the real-world data set. A generative model can be regarded as a function that generates geometry when called with some parameters. The registration step analyzes such a function, identifies whether it can describe the given artifact, and if so determines the best-fit parameter set; ie no other parameter set can describe the digital artifact any better (in combination with the generative function). In a second step, the difference between the best-fit generative model and the input scan is computed. The results of this variance analysis are stored in a simple texture. Finally, the obtained variance can be visualized using X3D technology. The last step generates an X3D file containing the best-fit generative model, the texture of distance values and shader code capable of applying the difference as displacements. The X3D solution offers an integrated, standards-compliant approach for visualization and documentation purposes. Furthermore, the standardized X3D format is self-contained and does not depend on an external viewer application.
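The registration step – finding the parameter set for which the generative function best matches the scan – can be illustrated with a deliberately simple sketch. Everything here is an assumption for illustration: the template is a circle with a single free parameter, the "scan" is synthetic noisy data, and the optimizer is a plain grid search rather than the more sophisticated analysis the actual workflow performs.

```python
import math
import random

def circle_template(radius, n=64):
    # generative function: parameter -> geometry (a ring of 2D points)
    return [(radius * math.cos(2 * math.pi * k / n),
             radius * math.sin(2 * math.pi * k / n)) for k in range(n)]

def fit_error(radius, scan):
    # mean distance of the scanned points to the generated circle;
    # for a circle this reduces to | |p| - radius | per point
    return sum(abs(math.hypot(x, y) - radius) for x, y in scan) / len(scan)

# synthetic "scan": a noisy circle with true radius 5
random.seed(0)
scan = [(5.0 * math.cos(a) + random.gauss(0, 0.05),
         5.0 * math.sin(a) + random.gauss(0, 0.05))
        for a in (2 * math.pi * k / 200 for k in range(200))]

# registration: search the parameter space for the best-fit radius
candidates = [r / 100 for r in range(300, 701)]   # radii 3.00 .. 7.00
best = min(candidates, key=lambda r: fit_error(r, scan))
```

After this step, the residual per-point distances to the best-fit model are exactly the values that the second step stores in the variance texture.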
The generated X3D file incorporates the following components:
- The geometry of the generative, best-fit reference mesh as an indexed face set
- A texture storing distance values to the digitized artifact
- Vertex, geometry and fragment shaders for displacement and lighting purposes.
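How distance values might be packed into such a texture can be sketched as follows. The encoding below is a hypothetical one chosen for illustration – signed distances quantized into 8-bit texel values, with the zero offset at mid-range – not the actual format used in the published workflow; the decode function plays the role the displacement shader would.

```python
def encode_distances(distances, max_abs):
    """Hypothetical packing of signed distances into 8-bit texels:
    -max_abs maps to 0, zero offset to 128, +max_abs to 255."""
    texels = []
    for d in distances:
        d = max(-max_abs, min(max_abs, d))        # clamp to the encodable range
        texels.append(round((d / max_abs) * 127.5 + 127.5))
    return texels

def decode_texel(t, max_abs):
    # inverse mapping, as a displacement shader would apply it:
    # the decoded value displaces a vertex along its surface normal
    return (t / 127.5 - 1.0) * max_abs
```

A lossless alternative would be a floating-point texture, at the cost of wider support requirements in the rendering pipeline; the 8-bit scheme trades a small quantization error for portability.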
Figure 1: Comparison between a reconstructed vase and the results of a generative enrichment with real-world geometry. The light red parts do not have a counterpart in the input data set, whereas the light brown parts correspond to the input data set.
To demonstrate our workflow we use an example data set from the Museum Eggenberg collection. The vase (see Figure 1 – left) has been reconstructed using the photogrammetric service ARC3D (http://www.arc3d.be), which takes a set of photographs and returns a reconstructed triangle mesh. Furthermore, an existing generative shape template of a vase with 13 degrees of freedom (ie 13 real-valued input parameters) has been passed to our workflow. The resulting X3D file describes the real-world geometry as well as the generative structure. Its included shader code is capable of applying the difference as displacements and allows a human viewer to switch selectively between the two model descriptions or to display both of them simultaneously (Figure 1 – right illustrates these results). The rendering shows the clean generative model (light red) for parts that do not have a counterpart in the input data set, and the geometric offset (light brown) for parts that do.
There are some drawbacks: when vertices are displaced on a curved surface, holes or overlaps can appear, and parts of the input data may have no correspondence in the generative representation. These issues will be the focus of future work.
TU Graz, Austria