The evolution of electronic technology and the growing presence of computer science in music have greatly transformed the ways of 'making music', affecting every stage from creation to production and performance and giving rise to new artistic forms. Audio functionality is often closely intertwined with the worlds of graphics, video, performance, virtual reality and telecommunications, creating multimedia artistic and cultural products. An efficient data-processing system therefore plays an essential role: it ensures that all the operations planned during the conceptualization and design of a performance can be carried out rapidly and smoothly, allowing the performance to take place in real time with a high level of interactivity.

Researchers at the ISTI computerART Lab and the DSP Audio team [1,2,3] have focused on developing systems that extract features in real time from body actions during interactive artistic multimedia performances. Two relevant examples are 'Palm Driver' (in which the movement of a player's hands, sensed by an infrared interface, controls music synthesized in real time) and 'PAGe' (in which the movement of a painter's hands in front of a video camera produces a painting on a virtual canvas shown by a video projector). Other recent methods for extracting features from audio signals have been proposed in the framework of the MUSCLE-NoE EU project [4] (see E-Team7: Semantic from Audio and Genre Classification for Music).

Figure 1: Outline of models for controlling audio-video effects.

The Pandora system tracks the audio parameters of a live musical performance in order to control video effects, following a pre-designed storyboard for a movie, a 3D sequence or other video content. The project was proposed by the musician Enrico Cerretti [4], and the video effects have been developed jointly with Infobyte SPA (Rome) [5]. Pandora involves monitoring the performer (actor) and, optionally, a video operator (director) who can also modify the execution flow, thus setting up bidirectional feedback between the two, i.e. between music and video. The sequence of the system's main computations is shown in Figure 2.

Figure 2: Pandora principle of operations.

Music (or other sound) produced by the performer is acquired by microphones; the corresponding analogue signals are sent to an audio interface and processed on a standard Windows platform in order to compute the two parameters of interest dynamically: energy and fundamental frequency. The amplitude is measured with an envelope follower detector, from which the effective (RMS) energy of the signal is subsequently obtained. Detecting and tracking the fundamental frequency, our second parameter of interest, is a well-known and non-trivial problem, and many methods have been proposed to tackle it. The algorithm we use works in the time domain and implements the 'Average Magnitude Difference Function' (AMDF), also known as the 'fast autocorrelation function', which exploits sums and differences of signal samples rather than products.
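The article does not give the implementation details of these two detectors, but both techniques are standard. The sketch below, in Python with NumPy, shows one plausible form of each: a one-pole envelope follower with separate attack and release times, and an AMDF pitch estimator that picks the lag minimising the mean absolute sample difference. All parameter values (attack/release times, pitch search range, frame size) are illustrative assumptions, not values taken from the Pandora system.

```python
import numpy as np

def envelope_follower(x, sr, attack_ms=5.0, release_ms=50.0):
    # One-pole peak follower on the rectified signal: fast attack when
    # the input rises above the current level, slow release otherwise.
    att = np.exp(-1.0 / (sr * attack_ms * 1e-3))
    rel = np.exp(-1.0 / (sr * release_ms * 1e-3))
    env = np.zeros(len(x))
    level = 0.0
    for n, v in enumerate(np.abs(x)):
        g = att if v > level else rel
        level = g * level + (1.0 - g) * v
        env[n] = level
    return env

def amdf_pitch(frame, sr, f_lo=80.0, f_hi=1000.0):
    # AMDF: the lag k that minimises the mean of |x[n] - x[n+k]| is the
    # pitch period, so f0 = sr / k.  Only sums and differences of
    # samples are needed, no products (hence 'fast autocorrelation').
    k_min, k_max = int(sr / f_hi), int(sr / f_lo)
    n = len(frame) - k_max              # usable comparison length
    d = [np.mean(np.abs(frame[:n] - frame[k:k + n]))
         for k in range(k_min, k_max + 1)]
    return sr / (k_min + int(np.argmin(d)))
```

For example, on a 2048-sample frame of a 44.1 kHz recording, amdf_pitch(frame, 44100) searches periods corresponding to 80 Hz-1 kHz and returns the estimated fundamental in Hz; smoothing the envelope output over a short window then yields the energy value that the system maps to video effects.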

The association of sound parameters with the control of 3D/2D video sequences is normally defined during the planning phase of the performance using a dedicated Multimedia Editor. The extracted values are used either directly or after suitable mapping. Once the system had been implemented, various applications were developed to test its functionality and performance. The experiments confirmed correct tracking and extraction of the sound parameters produced by traditional instruments (clarinet, flute), in terms of both low latency and accuracy. Users can easily link these parameters to video effects in various ways (3D shape transformations, colours, shading, etc.).
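The internals of the Multimedia Editor are not described here; as a rough illustration of what 'suitable mapping' can mean, the hypothetical sketch below clamps an extracted parameter to an expected input range, normalises it, and rescales it (optionally through a power-law curve) into the range a video effect accepts. The bindings echo Figure 3 but are invented for illustration.

```python
import numpy as np

def map_param(value, in_lo, in_hi, out_lo, out_hi, curve=1.0):
    # Clamp to [in_lo, in_hi], normalise to [0, 1], bend with a
    # power-law curve, then rescale to the effect's output range.
    t = np.clip((value - in_lo) / (in_hi - in_lo), 0.0, 1.0)
    return out_lo + (out_hi - out_lo) * t ** curve

# Example values as they might arrive from the analysis stage.
rms, f0 = 0.12, 440.0

# Hypothetical bindings in the spirit of Figure 3: the RMS energy
# drives how far the caravel's sails inflate, while the fundamental
# frequency shifts a colour parameter of the scene.
sail_inflation = map_param(rms, 0.0, 0.3, 0.0, 1.0, curve=0.5)
sky_hue = map_param(f0, 80.0, 1000.0, 0.55, 0.75)
```

A square-root curve (curve=0.5) is a common choice for energy-driven motion because it keeps quiet passages visually alive without letting loud passages saturate the effect.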

Figure 3: Sequence of sails and caravel movements controlled by the RMS sound parameter.

The system is suitable not only for interactive artistic multimedia performances but also for non-artistic applications such as multimedia authoring, company presentations and music rehabilitation therapy. In the future, the system will be tested with a wider variety of instruments in order to tune the algorithm's settings for a broad range of applications.

Links:
[1] http://tarabella.isti.cnr.it/
[2] http://www.bad-sector.com
[3] http://www.isti.cnr.it
[4] http://www.myspace.com/enricocerretti
[5] http://www.infobyte.it

Please contact:
Graziano Bertini, ISTI-CNR
E-mail: graziano.bertini@isti.cnr.it

Leonello Tarabella, ISTI-CNR
E-mail: leonello.tarabella@isti.cnr.it
