3D Reconstruction and Auralisation of the “Painted Dolmen” of Antelas


Introduction

3D Reconstruction

Auralisation

Final Demonstration

Audio Demo

Acknowledgements

References


Introduction

Virtual Reality (VR) is a very active research topic. Applications of this technology can be found in areas as varied as the automotive industry, computer games, industrial training and prototyping, aeronautics, medicine, archaeology, architecture and tourism [1-3]. The European Network of Excellence INTUITION, devoted to this topic, brings together more than 58 partners.

Most of the effort in the design and development of VR systems has traditionally been directed at providing the user with a visually realistic experience. However, whilst vision is undoubtedly our predominant sense, the feeling of immersion in a Virtual Environment can be significantly improved by taking our other senses into account as well. Among them, hearing clearly stands out as the most important for enhancing VR experiences.

The focus of this work is precisely on the combination of visual and audio immersion; in other words, the reconstruction of a real-world environment through the development of a 3D model which allows users to see and hear as if they were really there. This requires not only recording the environment’s actual visual and acoustic properties and integrating them into the model, but also tracking the user’s movements and updating the audiovisual scene accordingly in real time.

The Anta Pintada (painted dolmen) of Antelas was deemed an excellent case study for this work. Among the numerous remains from the Neolithic period found in the Vouga valley region, this chamber tomb stands out for its extraordinary archaeological value, mainly due to the unique colour drawings found in its interior [4]. Extremely fragile (a considerable part was irremediably lost through exposure to light in early archaeological campaigns), they require strict conservation measures, including severe restrictions on visitor access. This problem – reconciling heritage conservation with the need to provide public access – is by no means exclusive to this particular site. In some cases, the solution has involved building replicas [5, 6]. A less radical, more affordable alternative is offered by the development of VR models. These can also be invaluable in the promotion (especially through the Internet) and museological presentation of a site. The heritage conservation authorities responsible for the Anta Pintada are very keen on investing in these areas.

Additional motivation for choosing the Anta Pintada to test a VR model integrating audio is provided by the emergence of Acoustic Archaeology [7]: there is growing scientific interest in studying the acoustics of ancient man-made structures. Intriguing acoustic properties have been found in many of them, and there is a suggestion that those properties might have been deliberately engineered. The suggestion is particularly strong for Neolithic passage-graves (i.e. tombs composed of a corridor and a burial chamber) such as this one [8].


3D Reconstruction

Acquisition

The 3D Laser Range Scanner prototype used in this work is based on a 2D scanner (a SICK LMS 200 laser unit) mounted on a tilt unit to allow rotation. The pan and tilt information is synchronised to produce a spherical representation of points [9]. In May 2006, the prototype was used to acquire 3D information from the “Anta Pintada de Antelas”, a Neolithic chamber tomb located in Oliveira de Frades and listed as a Portuguese national monument (see the figure below).

In-situ data acquisition in May 2006
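As an illustration of this geometry, the sketch below converts range readings plus pan and tilt angles into Cartesian points. It is a minimal example assuming pan acts as azimuth and tilt as elevation; the axis conventions of the actual prototype may differ [9].

```python
import numpy as np

def spherical_to_cartesian(rng, pan, tilt):
    """Map range readings and their pan/tilt angles (radians) to x, y, z.

    Illustrative axis conventions: pan is the azimuth around the
    vertical axis, tilt the elevation above the horizontal plane."""
    x = rng * np.cos(tilt) * np.cos(pan)
    y = rng * np.cos(tilt) * np.sin(pan)
    z = rng * np.sin(tilt)
    return np.stack([x, y, z], axis=-1)
```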

Registration

The first processing step consists of registering all the acquired data in the same coordinate system. The software developed for this purpose, based on VTK [10, 11], uses one of the most popular methods for performing this alignment: the Iterative Closest Point (ICP) algorithm [12].

Two views of the complete Anta model obtained by aligning a set of 9 point clouds.
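VTK ships an implementation of this algorithm (vtkIterativeClosestPointTransform); the sketch below shows how a pairwise rigid-body ICP alignment might look in Python, assuming the scans were exported as PLY files. File names are placeholders, and the in-house tool adds its own pre-processing on top of this.

```python
import vtk

def load_scan(path):
    """Read one scan (placeholder: assumes PLY exports)."""
    reader = vtk.vtkPLYReader()
    reader.SetFileName(path)
    reader.Update()
    return reader.GetOutput()

source = load_scan("scan_1.ply")   # cloud to be moved
target = load_scan("scan_0.ply")   # reference cloud

# Rigid-body ICP: alternately match closest points and solve for the
# transform that best aligns source to target.
icp = vtk.vtkIterativeClosestPointTransform()
icp.SetSource(source)
icp.SetTarget(target)
icp.GetLandmarkTransform().SetModeToRigidBody()
icp.SetMaximumNumberOfIterations(100)
icp.StartByMatchingCentroidsOn()
icp.Modified()
icp.Update()

# Move the scan into the common coordinate system.
warp = vtk.vtkTransformPolyDataFilter()
warp.SetInputData(source)
warp.SetTransform(icp)
warp.Update()
aligned = warp.GetOutput()
```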

Final 3D Model Creation

The reconstruction algorithm, outlined in the next figure, is applied to all the registered range images. It starts by performing a 2D Delaunay triangulation on each image; all grid nodes that fall within the viewing volume of a range image are then marked. In this way, the algorithm ‘illuminates’ the nodes with each range image until the whole volume inside the model is marked. A contouring operation on this grid then generates a 3D model of the chamber, as shown in the following figure.

Processing of the registered cloud of points to obtain a 3D model
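The following sketch illustrates this marking (‘illumination’) step for one spherical range image, under the simplifying assumption that the image is resampled on a regular pan/tilt grid and read with a simple binned lookup instead of the Delaunay triangulation used in practice. A node is marked when it lies closer to the sensor than the surface measured in its direction; once all scans are processed, an iso-surface extraction over the binary field (e.g. VTK’s vtkContourFilter) yields the final model.

```python
import numpy as np

def mark_viewing_volume(nodes, inside, origin, range_map, pan_axis, tilt_axis):
    """'Illuminate' every grid node inside the viewing volume of one
    spherical range image: a node is marked when it is closer to the
    sensor than the surface measured in the node's direction.

    nodes:     (N, 3) grid node coordinates
    inside:    (N,) boolean marks, updated in place across scans
    range_map: 2D array of ranges indexed by (tilt bin, pan bin)
    """
    v = nodes - origin
    dist = np.linalg.norm(v, axis=1)
    pan = np.arctan2(v[:, 1], v[:, 0])
    tilt = np.arcsin(v[:, 2] / dist)
    # Simple binned lookup (the Delaunay-based interpolation is omitted).
    pi = np.clip(np.searchsorted(pan_axis, pan), 0, len(pan_axis) - 1)
    ti = np.clip(np.searchsorted(tilt_axis, tilt), 0, len(tilt_axis) - 1)
    measured = range_map[ti, pi]
    inside |= np.isfinite(measured) & (dist < measured)
    return inside
```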


Auralisation

Acoustic characterisation of the 3D model

An application was developed to add this acoustic information to the model, which is indispensable for calculating both the early-reflection and late-reverberation parts of the Room Impulse Response (RIR). Based on VTK, the application allows models to be imported in VRML and OBJ formats. Among other functionalities, it offers the possibility of selecting single triangles (Figure a) or groups of triangles using a BoxWidget (Figure b) and assigning materials to them, chosen from an SQL database containing the acoustic properties of each material.

Examples of triangle and box widget selection on a synthetic model.
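The sketch below suggests what this coupling might look like: an SQLite table of per-octave-band absorption coefficients and a simple mapping from selected triangle indices to material ids. The schema, material names and coefficient values are illustrative, not those of the actual database.

```python
import sqlite3

# Illustrative schema: one absorption coefficient per octave band.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE materials (
    id INTEGER PRIMARY KEY, name TEXT,
    a125 REAL, a250 REAL, a500 REAL, a1k REAL, a2k REAL, a4k REAL)""")
db.execute("INSERT INTO materials VALUES (1, 'granite', "
           "0.01, 0.01, 0.01, 0.02, 0.02, 0.02)")

# Triangles picked one by one or through the box widget end up in the
# same structure: a mapping from triangle index to material id.
material_of = {}

def assign(triangle_ids, material_id):
    for t in triangle_ids:
        material_of[t] = material_id

assign(range(0, 128), 1)  # e.g. a box selection covering a granite wall
```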

Early reflections

The first step in calculating sound reflections in the 3D model is to work out the position of the virtual source associated with each triangle. The second step is to check its “visibility”, i.e. whether the line segment between the virtual source and the listener intersects the corresponding triangle [15]. The following figure shows the location of the visible virtual sources (represented by grey spheres) corresponding to first-order reflections in two different models; the source and the listener are represented by a sphere and a head, respectively.

Position of the virtual sources: two examples
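A sketch of both steps, assuming the classical image-source formulation [15]: the virtual source is the mirror image of the real source in the triangle’s plane, and the reflection is valid only if the segment from it to the listener crosses the reflecting triangle, tested here with a Möller-Trumbore intersection.

```python
import numpy as np

def image_source(src, tri):
    """Mirror the source across the plane of triangle tri (3x3 vertex rows)."""
    n = np.cross(tri[1] - tri[0], tri[2] - tri[0])
    n /= np.linalg.norm(n)
    return src - 2.0 * np.dot(src - tri[0], n) * n

def visible(vsrc, listener, tri, eps=1e-9):
    """True if the segment virtual source -> listener crosses the triangle."""
    d = listener - vsrc
    e1, e2 = tri[1] - tri[0], tri[2] - tri[0]
    p = np.cross(d, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:          # segment parallel to the triangle's plane
        return False
    s = vsrc - tri[0]
    u = np.dot(s, p) / det
    q = np.cross(s, e1)
    v = np.dot(d, q) / det
    t = np.dot(e2, q) / det     # intersection parameter along the segment
    return 0.0 <= u and 0.0 <= v and u + v <= 1.0 and 0.0 < t < 1.0

# A valid first-order reflection contributes a delayed, attenuated echo:
# delay = |listener - vsrc| / 343.0 (speed of sound in m/s).
```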

Head Related Transfer Functions (HRTF)

The acoustic stimuli at a listener’s eardrums are shaped by the complex interaction between the incoming sound waves and the listener’s torso and head. This interaction is strongly dependent upon the direction of arrival of the sound wave. For each angular position of the sound source relative to the centre of the head (usually specified by two angles: azimuth and elevation), it can be described by a pair of HRTF (Head-Related Transfer Functions [16]) – one for each ear. Usually, a discrete set of HRTF is defined for regularly distributed values of azimuth and elevation. The HRTF capture the main cues for source localisation, provided by the differences in arrival time and sound intensity between the two ears, known respectively as the Interaural Time Difference (ITD) and the Interaural Intensity Difference (IID).
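Given a measured set such as the KEMAR data [16], rendering a source direction reduces to picking the closest measured HRTF pair and convolving. The sketch below assumes the set is stored as a dictionary keyed by (azimuth, elevation) and, for brevity, ignores interpolation between neighbouring measurements and angle wrap-around.

```python
import numpy as np
from scipy.signal import fftconvolve

def binaural(mono, hrtf_set, azimuth, elevation):
    """Spatialise a mono signal with the nearest measured HRTF pair.

    hrtf_set maps (azimuth, elevation) grid points, in degrees, to
    (left impulse response, right impulse response) tuples."""
    az, el = min(hrtf_set,
                 key=lambda k: (k[0] - azimuth) ** 2 + (k[1] - elevation) ** 2)
    h_left, h_right = hrtf_set[(az, el)]
    # Convolution applies the direction-dependent ITD and IID cues.
    return np.stack([fftconvolve(mono, h_left),
                     fftconvolve(mono, h_right)])
```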


Final Demonstration


Demonstrations: user in the VR setup, with (left) a synthetic model and (right) the Anta model

Audio Demo

By clicking on the following images, you can experience some of the sounds generated by our software. The sounds (drums and a bone flute found at a 9,000-year-old Neolithic site in China) are played inside the model of the anta. The images show the positions of the user and of the sound source, and also indicate the movement made by the user while listening to the sound. The third sound corresponds to a sound source jumping between several different positions around the user. For a good perception of the spatial sound, headphones should be used. All the computed sounds use the direct sound and first-order reflections. The non-processed original sounds of the djembé and the flute are also available.

Audio links: non-processed djembé · non-processed flute


Acknowledgements

The authors wish to thank the City Council of Oliveira de Frades for granting them access to the Anta Pintada, and especially Filipe Soares (City Council / Municipal Museum) for his kind collaboration.


References

1.     Brooks, F.P., What's real about virtual reality? IEEE Computer Graphics and Applications, 1999. 19(6): p. 16-27.

2.     van Dam, A., D.H. Laidlaw, and R.S. Simpson, Experiments in Immersive Virtual Reality for Scientific Visualization. Computers & Graphics, 2002. 26: p. 535-555.

3.     Zajtchuk, R. and R.M. Satava, Medical applications of virtual reality. Communications of the ACM, 1997. 40(9): p. 63-64.

4.     IRHU Inventory. URL: http://www.monumentos.pt/Monumentos/forms/002_B1.aspx

5.     La Grotte de Lascaux. URL: http://www.culture.gouv.fr/culture/arcnat/lascaux/fr/

6.     Brú na Bóinne Visitor Centre - Newgrange and Knowth. URL: http://www.heritageireland.ie/en/MidlandsEastCoast/BrunaBoinneVisitorCentreNewgrangeandKnowth/

7.     Devereux, P., Stone Age Soundtracks: The Acoustic Archaeology of Ancient Sites. 2001, London: Vega.

8.     Jahn, R.G., P. Devereux, and M. Ibison, Acoustical resonances of assorted ancient structures. Journal of the Acoustical Society of America, 1996. 99(2): p. 649-658.

9.     Dias, P., M. Matos, and V. Santos, 3D reconstruction of real world scenes using a low-cost 3D range scanner. Computer-Aided Civil and Infrastructure Engineering, 2006. 21(7): p. 486-497.

10.   Schroeder, W., K. Martin, and B. Lorensen, The Visualization Toolkit: An Object-Oriented Approach to 3D Graphics. 4th ed.: Kitware, Inc.

11.   Schroeder, W.J., et al., The Visualization Toolkit User's Guide. 2001: Kitware, Inc.

12.   Besl, P.J. and N.D. McKay, A Method for Registration of 3-D Shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1992. 14(2): p. 239-256.

13.   Farina, A., Convolution of anechoic music with binaural impulse responses, in Proc. of PARMA-CM Users Meeting. 1993. Parma, Italy.

14.   Gardner, W.G., Reverberation Algorithms, in Applications of Digital Signal Processing to Audio and Acoustics, M. Kahrs and K. Brandenburg, Editors. 1998, Kluwer Academic Publishers: Boston.

15.   Savioja, L., et al., Creating interactive virtual acoustic environments. J. Audio Eng. Soc., 1999. 47(9): p. 675-705.

16.   Gardner, B. and K. Martin, HRTF Measurements of a KEMAR Dummy-Head Microphone. 1994, MIT Media Laboratory.

17.   Kendall, G.S., A 3-D sound primer: Directional hearing and stereo reproduction. Computer Music Journal, 1995. 19(4): p. 23-46.

18.   PortAudio, an Open-Source Cross-Platform Audio API. URL: http://www.portaudio.com

19.   Dahl, L. and J.-M. Jot, A reverberator based on absorbent all-pass filters, in COST-G6 Conference on Digital Audio Effects (DAFx-00). 2000. Verona, Italy.

20.   Begault, D., 3-D Sound for Virtual Reality and Multimedia. 1994: Academic Press.


Author: Paulo Dias at IEETA/Universidade de Aveiro, Portugal
