
In document Audio Processing on a Multicore Platform (pages 101-106)


7.1.2 Architecture of the Implementation

The individual effect structures and functions explained in Sections 5.3 and 5.4 are the base for the multicore platform implementation: here, a data structure that is general for all effects, called struct AudioFX, is created on top of the previously mentioned structures; it is explained in Subsection 7.1.2.1. The same object-oriented style approach has been used for this implementation. Similarly, the effect setup and audio processing functions, called alloc_audio_vars and audio_process respectively, are also built on top of the functions of each individual effect. These two are overviewed in Subsections 7.1.2.2 and 7.1.2.3. The full C implementation of these structures and functions can be found in the audio.c and audio.h files of the libaudio library of Patmos¹.

7.1.2.1 The AudioFX Structure

The struct AudioFX uses an object-oriented style approach, which allows instantiating audio effects as objects. It stores generic parameters of the effect (ID, effect type, core number, buffer sizes, connections...), which need to be set following the rules explained in Section 6.3. The structure can be found in Listing 7.1.

// type of connection: first, last, to NoC, or to same core
typedef enum { FIRST, LAST, NOC, SAME } con_t;
// comparison of receive/send buffer sizes
typedef enum { XeY, XgY, XlY } pt_t;
// possible effects:
typedef enum { DRY, DRY_8S, DELAY, OVERDRIVE, WAHWAH,
               CHORUS, DISTORTION, HP, LP, BP, BR,
               VIBRATO, TREMOLO } fx_t;

struct AudioFX {
    // effect ID
    _SPM int *fx_id;
    // core number
    _SPM int *cpuid;
    // connection type
    _SPM con_t *in_con;
    _SPM con_t *out_con;
    // amount of send and receive channels (fork or join effects)
    _SPM unsigned int *send_am;
    _SPM unsigned int *recv_am;
    // pointers to SPM data
    _SPM unsigned int *x_pnt; // pointer to x location
    _SPM unsigned int *y_pnt; // pointer to y location
    // receive and send NoC channel pointers
    _SPM unsigned int *recvChanP;
    _SPM unsigned int *sendChanP;
    // processing type
    _SPM pt_t *pt;
    // parameters: S, Nr, Ns, Nf
    _SPM unsigned int *s;
    _SPM unsigned int *Nr;
    _SPM unsigned int *Ns;
    _SPM unsigned int *Nf;
    // in and out buffer size (both for NoC or same core, in samples)
    _SPM unsigned int *xb_size; // x buffer
    _SPM unsigned int *yb_size; // y buffer
    // audio data
    volatile _SPM short *x; // input audio x[2]
    volatile _SPM short *y; // output audio y[2]
    // audio effect implemented
    _SPM fx_t *fx;
    // pointer to effect struct
    _SPM unsigned int *fx_pnt;
    // boolean variable for last types: checks need to wait for output
    _SPM int *last_init;
    // latency counter (from input to output)
    _SPM unsigned int *last_count;
    _SPM unsigned int *latency;
};

Listing 7.1: Parameters of the AudioFX structure.

¹ https://github.com/t-crest/patmos/tree/master/c/libaudio

Some of the parameters in Listing 7.1 are self-explanatory. The ones that are not are explained here:

• The send_am and recv_am parameters store how many send or receive channels this effect is connected to (i.e. whether it is a fork or a join effect).

• The x_pnt and y_pnt parameters point to the location of the audio data in the receive and send buffers. This will be the buffer corresponding to a NoC channel if the effect is connected to the NoC; otherwise, it will be some location in the local SPM.

• The pt parameter defines the processing type of the effect: XeY, XgY or XlY (these terms were introduced in Subsection 6.3.2). The audio_process function needs it to know which steps to execute during processing (sending, firing, receiving...) and how often.

• The S, Nr, Ns and Nf parameters were also introduced in Subsection 6.3.2.

• The x and y locations hold the audio samples, but are only used if the effect is not connected to the NoC on its input or its output, respectively. When it is connected, the audio samples are stored in the send and receive buffers of the NoC channels, handled by the functions of the message passing libmp library, and accessed through the x_pnt and y_pnt pointers.

• The fx_pnt parameter is a pointer to the actual audio effect structure (delay, filter, distortion, and so on). It can point to any of the effects presented in Section 5.4. This is why, at the beginning of this section, it was stated that the struct AudioFX is implemented on top of the individual processing structures.

• Finally, the last_init, last_count and latency parameters are instantiated only when the effect is the last one of the chain, because that effect needs to take care of the audio signal latency, as explained. The latency parameter contains the latency value in iterations. The last_count is incremented at the beginning of each run; when it reaches the latency value, the last_init boolean is set to true, indicating that the output of audio data can begin.

7.1.2.2 The alloc_audio_vars Function

The alloc_audio_vars function can be found in the audio.c file of the libaudio library. It takes care of the audio effect allocation and initialization, and needs to be executed during setup time, before processing; it has no strict timing requirements. The main argument it takes is struct AudioFX *audioP, a pointer to the effect object. The rest are values of the effect's parameters.

The function stores each parameter in the local SPM, using the mp_alloc() function of the libmp library to keep track of the next available address. As explained before, it does not store all the parameters of the struct AudioFX, but only the ones relevant for the given effect (for instance, if the effect is not the last of the chain, it makes no sense to store the latency-related parameters). It also initializes some parameters.

The audio effect that is processed (delay, distortion...) is also given as an argument. This function calls the alloc_<FX>_vars function to allocate the effect <FX> in the SPM, where <FX> can be any of the effects listed in Section 5.4. An important addition is an effect called DRY_8SAMPLES, which is unique in the sense that it processes a block of 8 samples instead of just one, as the rest of the effects do. It does not perform any actual processing: it simply copies 8 input samples to its output buffer each time it fires. However, this effect has been created to show that the implemented multicore platform also supports effects that process more than a single sample, and that combinations of effects with different data rates are synchronized correctly.

Finally, there is a function related to this one, named free_audio_vars, which is called just before exiting the program to free the space that has been dynamically allocated in the external memory (such as audio buffers or modulation arrays of the effects).

7.1.2.3 The audio_process Function

The audio_process function is shown in the appendix, Section C.1. It is the main function used to process each effect, so it has strict real-time requirements.

Its only input argument is a pointer to the effect structure, struct AudioFX *audioP. As stated before, this function is the same for all the effects on all cores, but the effect object passed as an argument has its own parameters, so the function acts differently in each case. It calls a different processing function depending on the effect type (distortion, delay...). The function called is audio_<FX>, where again <FX> can be any of the effects listed in Section 5.4. The steps taken by this function are briefly described here.

First of all, input and output audio data pointers xP and yP are created, which point to the locations specified by x_pnt and y_pnt. Each location can be a NoC channel buffer or another SPM location. Then, the receiving, firing (processing) and sending steps are executed. For this, the function checks the processing type pt of the effect, and executes the steps in the correct order, as many times as needed, depending on the Nr, Ns, Nf and S values. These steps were defined in Subsection 6.3.2 for each processing type. If the effect is a join or a fork, the receiving and sending process is executed on every channel. If the effect is the first or last of the chain, the audioIn and audioOut functions are called, respectively, to exchange data with the audio interface I/O device. In each step, the xP and yP pointers need to be incremented correctly.

If the effect is connected to the NoC, it calls the mp_recv, mp_send and mp_ack functions (the last one as many times as mp_recv). The timeout argument is used to prevent the platform from getting stuck when a problem occurs. In the case of the mp_send function, the sending process through the NI and the Argo NoC is overlapped with the computation: the core can continue processing after sending, as long as it does not stall because no send buffers are available. If the next effect in the chain is located on the same core, data is simply placed in an SPM location, where it can be read by the next effect.
