
5 Synthesis Techniques

5.6 Granular synthesis

Granular synthesis (see [Truax 1988], [Roads 1996], [Childs 2002]) is a probabilistic sound generation method based on drawing many short packets of sound, called grains or granules, from source waveforms. A sound grain lasts a brief moment (typically 1 to 100 ms), which approaches the minimum perceivable event time for duration, frequency, and amplitude discrimination [Roads 1996].

An amplitude envelope shapes each grain. This envelope can vary in different implementations from a Gaussian bell-shaped curve to a simple three-stage line-segment attack/sustain/decay, each of which creates sonically different results. The grain duration can be constant, random, or it can vary in a frequency-dependent way. The waveform within the grain can be of two types: synthetic or sampled. Synthetic waveforms are typically sums of sinusoids scanned at a specified frequency. For sampled grains, one typically reads the waveform from a predetermined location in a stored sound file, with or without pitch-shifting. Several parameters can be varied on a grain-by-grain basis, including the duration, envelope, frequency, location in a sound file (for sampled grains), spatial location, and waveform (a wavetable for synthetic grains, or a file name or input channel for sampled grains). A simple granular synthesis process is shown in Figure 32. The resulting sound signal can be written as:

$$ s(t) = \sum_{k=1}^{N} a_k \, g_k(t - t_k) $$

where $g_k$ is the $k$-th enveloped grain waveform, $a_k$ its amplitude, $t_k$ its onset time, and $N$ the total number of grains.
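As a concrete illustration, the following Matlab sketch (not taken from the thesis; all parameter values are illustrative) generates a single synthetic grain, a sinusoid shaped by a Gaussian envelope, corresponding to one term $a_k \, g_k(t - t_k)$ of the sum above:

% Sketch of one synthetic grain: a Gaussian-enveloped sinusoid (illustrative values).
fs    = 44100;                    % sample rate (Hz)
dur   = 0.030;                    % grain duration: 30 ms
f0    = 440;                      % grain frequency (Hz)
amp   = 0.8;                      % grain amplitude a_k
t     = (0:round(dur*fs)-1)'/fs;  % time axis of the grain
env   = exp(-0.5*((t - dur/2)/(dur/6)).^2);  % Gaussian bell envelope
grain = amp * env .* sin(2*pi*f0*t);         % enveloped sinusoid
% The output signal is the sum of many such grains placed at their onset times t_k.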

Despite the simplicity of the instrument, generating even a plain, uncomplicated sound requires a massive amount of control data, since a set of parameters describes each grain: starting time, amplitude, and so on. The complexity of the sound generated by granular synthesis derives from the amount of control data fed to it. If n is the number of parameters for each grain, and d is the average grain density per second of sound, it takes d × n parameter values to specify one second. Since d typically varies between a few dozen and several thousand, it is clear that for the purposes of compositional control, a higher-level unit of organization for the grains is needed. The purpose of such a unit is to let composers specify large quantities of grains using just a few global parameters. Several such organization types exist, including the following:

• Time-granulated or sampled-sound stream, with overlapped, quasi-synchronous, or asynchronous playback.

As mentioned, asynchronous granular synthesis is the type implemented in this project and is therefore the only organization type that will be described.

Figure 32. Simple granular synthesis process. A much greater number of grains would be used in real output to obtain a smoother waveform. When a new grain is created, a section of the waveform is copied; the position of the section is determined by the temporal distribution across the waveform, and the section is then shaped by the grain envelope. All of the currently active grains are summed to produce the final output. This figure is taken from [Williamson and Murray-Smith 2004].

5.6.1 Asynchronous Granular Synthesis

Asynchronous granular synthesis (AGS) scatters grains in a statistical manner over a specified duration within regions inscribed on the frequency-time space. These regions are called clouds, the units with which the composer works. The composer specifies a cloud in terms of the following parameters (a sketch of such a cloud specification follows the list):

• Start time and duration of cloud

• Grain duration or grain duration range

• Density of grains per second or by time frame

• Bandwidth of the cloud (only for synthetic waveforms)

• Amplitude envelope of the cloud

• Waveform(s) within the grains (only for synthetic waveforms)

• Spatial dispersion of the grains in the cloud
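For illustration, such a cloud description could be collected in a single Matlab structure; the following sketch is hypothetical, and the field names and values are not taken from the thesis implementation.

% Hypothetical cloud specification (field names and values are illustrative).
cloud.start     = 0.0;           % start time of the cloud (s)
cloud.duration  = 5.0;           % duration of the cloud (s)
cloud.grainDur  = [0.010 0.050]; % grain duration range (s)
cloud.density   = 500;           % grain density (grains per second)
cloud.bandwidth = [200 800];     % frequency band of the cloud (Hz), synthetic waveforms only
cloud.envelope  = 'linear';      % amplitude envelope of the cloud
cloud.waveform  = 'sine';        % waveform within the grains, synthetic case
cloud.spatial   = 'random';      % spatial dispersion of the grains in the cloud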

Low grain densities create sparse, pointillistic textures, while high grain densities create more massive blocks of sound.

The spatial algorithm of a cloud can involve random scattering or panning effects over the duration of the cloud, which enhances the granular texture.

An analogy exists between the textures created by AGS and those created in the visual domain by particle synthesis. Particle synthesis has been used to create fire, water, clouds, fog, and grass-like textures, which are analogous to some of the audio effects possible with AGS (crackling fire, gurgling water, windy gusts, and explosions).

5.6.2 Using Granular Synthesis to Display Probability Densities

In [Williamson and Murray-Smith 2004] it was recently proposed that sonification via granular synthesis is a particularly suitable method for translating changing conditional and joint probabilities into sound. Consider each cloud in the AGS as representing the conditional probability density function of a specific class, where the probability of being in each class is given by a probabilistic model of the data. For example, if the grain density is held constant, then the number of grains to be drawn from each class is the probability of being in that class multiplied by the grain density.
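A minimal Matlab sketch of this mapping, assuming a fixed total grain density per time frame and an illustrative probability vector (neither value is taken from the thesis):

% Map class probabilities to grain counts at constant grain density (illustrative values).
n      = 200;              % total number of grains in the current time frame
Prob   = [0.6 0.3 0.1];    % example class probabilities P(C_k|x), summing to 1
draw_p = round(n * Prob);  % number of grains to draw from the cloud of each class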

A spatial distribution example was implemented together with the AGS during the project. Figure 33 illustrates how a mixture of three Gaussians can be used to map regions of a two-dimensional state-space to sound. Each Gaussian is associated with a specific sound. As the cursor moves through the space, the timbre and/or the pitch of the sound changes accordingly. Although the densities are here in a simple spatial configuration, the technique is general and applicable to higher dimensions. A QuickTime animation of a fixed path through the state-space was created, and it can be found on the attached CD-ROM in the folder "Granular Synthesis Using Probabilities". Figure 34 shows a flowchart of the Matlab implementation of the sonification of conditional probabilities (or joint probabilities) using AGS with sampled sounds. The sampled sounds used in the implementation were made with a software synthesizer from Native Instruments called Absynth. The parameters in the initialization, together with the sampled sounds, were tuned until the author found the resulting sound as pleasing as possible. The spatial dispersion of the grains was distributed evenly between the left and right channels, thus creating a stereo sound file.
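The mapping of Figure 33 can be sketched as follows, assuming three isotropic two-dimensional Gaussians with equal priors; the means, variance, and cursor position are illustrative and not the values used in the project.

% Conditional class probabilities P(C_k|x) for a cursor position x in a
% two-dimensional state space with three isotropic Gaussian classes (illustrative values).
mu    = [0 0; 3 0; 1.5 2.5];  % class means, one row per Gaussian
sigma = 1.0;                  % common isotropic standard deviation
x     = [1.0 0.5];            % current cursor position
lik   = zeros(1, 3);
for k = 1:3
    d      = x - mu(k, :);
    lik(k) = exp(-sum(d.^2) / (2*sigma^2));  % unnormalized Gaussian likelihood
end
Prob = lik / sum(lik);        % P(C_k|x) with equal priors; drives the grain counts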

Figure 33. Illustration of a path through a two-dimensional state space comprised of three Gaussians. As the cursor (black diamond) moves along the path, the sound at each point is given by the conditional probabilities of being in each of the classes given the present position of the cursor. Imagine this path as a tracking of some process through three distinct states, each represented by a sound.

Figure 34 (flowchart):

• Input: probabilities P(C_k|x) or P(x, C_k) (Prob); sample selection (GSmode).

• Initialization: grain density, i.e. the number of grains in each frame (n); grain duration range (MinGD, MaxGD); frames per second (fps); overlap of frames (Overlap); envelope type (Winmode).

• Calculate: the number of grains to be drawn for each class (draw_p = n · Prob) and the number of frames in the sonification (Slen = length(Prob)/fps).

• Loop over frames 1:Slen: get draw_p windows with random grain durations within [MinGD, MaxGD] for each class (GE); get draw_p randomly selected grains from the samples for each class (Grains); sum the uniformly distributed, windowed grains of the current time frame with those of previous time frames in GrainSum.

• Output: sonification of the probabilities using granular synthesis (GrainSum).

Figure 34 gives an overview, in flowchart form, of the sonification process in which probabilities control an AGS. The number of grains drawn from each cloud in each time frame and the length of the sonification are controlled via the input data.
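The frame loop of the flowchart can be sketched in Matlab as follows. This is a reconstruction under stated assumptions (one mono sample vector per class in a cell array samples, a probability matrix Prob with one row per time frame, and samples longer than the maximum grain duration), not the original implementation.

% Sketch of the AGS frame loop driven by class probabilities (assumptions as stated above).
fs  = 44100;  fps = 10;  n = 200;            % sample rate, frame rate, grains per frame
MinGD = 0.01;  MaxGD = 0.08;                 % grain duration range (s)
frameLen = round(fs/fps);                    % samples per time frame
nFrames  = size(Prob, 1);                    % one probability row per frame
GrainSum = zeros(nFrames*frameLen, 1);       % output buffer
for f = 1:nFrames
    base   = (f-1)*frameLen;
    draw_p = round(n * Prob(f, :));          % grains per class in this frame
    for k = 1:numel(draw_p)
        src = samples{k};                    % sampled sound of class k (column vector)
        for g = 1:draw_p(k)
            gLen  = round((MinGD + (MaxGD - MinGD)*rand)*fs);   % random grain duration
            pos   = randi(numel(src) - gLen);                   % random read position
            w     = 0.5*(1 - cos(2*pi*(0:gLen-1)'/(gLen-1)));   % Hann grain envelope
            grain = src(pos:pos+gLen-1) .* w;                   % windowed grain
            onset = base + randi(frameLen);                     % uniform onset within frame
            stop  = min(numel(GrainSum), onset + gLen - 1);
            GrainSum(onset:stop) = GrainSum(onset:stop) + grain(1:stop-onset+1);
        end
    end
end
GrainSum = GrainSum / max(abs(GrainSum));    % normalize before playback or writing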