

In document Audio Processing on a Multicore Platform (pages 126-137)

• We have acquired knowledge on the main DSP algorithms that can be applied to audio signal processing, and on how their parameters affect the sound.

• We have integrated such DSP algorithms into real-time audio processing, implementing them in an efficient way, balancing the computation requirements against the complexity of the algorithm, and finally optimizing the WCET by making use of local memories.

• We have implemented a set of audio effects in C that run on a Patmos processor, using the designed audio interface for audio input/output.

• We have designed a set of rules that allow using a multicore platform to process different audio effects connected to each other in chains, taking care to balance the overhead associated with data transfers against the latency of the signal, to ensure that real-time perception is preserved in all cases.

• We have implemented the multicore processing system on the T-CREST platform, which allows processing sequential and parallel chains of effects in real time.

• We have implemented audio mode changes, which allow a single application to hold more than one effect setup and to switch among them at run-time.

• We have implemented a software tool that performs the allocation of audio effect tasks, following the rules above to distribute the effects across the multicore platform while minimizing the usage of communication channels.

• Finally, we have verified the correct functionality of different aspects of the implementation, such as the communication and processing on the platform and the performance of the allocation algorithm. We have also discussed the high scalability of the design, which allows the integration of other IP cores into the system.

9.2 Future work

• The addition of more and more complex effects is proposed, such as spatial or frequency-domain effects.

• In relation to the previous point, the integration of hardware blocks that implement audio processing algorithms is proposed; this would allow more complex implementations of some effects, and would also reduce the WCET and improve its predictability, as explained.

• The usage of the instruction SPM is also recommended, as this would reduce the WCET considerably. In the current implementation, data cache misses are minimized using a local data SPM, but instruction cache misses appear to be an important limitation for real-time processing.

• Finally, a more complex static task allocation algorithm could make better use of the computational and communication resources of the platform, finding a correct balance between the two.


Appendix A

Audio Interface: Hardware Design & API

This appendix includes Chisel and C code listings, which implement some of the hardware and software parts of the audio interface designed for Patmos and the WM8731 audio CODEC of the Altera DE2-115 board.

The first two sections, A.1 and A.2, show the input and output buffers, respectively.

As explained in Chapter 4, these are not the only hardware components of the interface, but they are the main ones developed in this project. The rest can be found in the Patmos GitHub repository (https://github.com/t-crest/patmos/tree/master/hardware/src/io). Finally, Section A.3 shows the main C functions that form the software API for accessing the audio interface from Patmos.

A.1 ADC Buffer

// FIFO buffer for audio input from WM8731 Audio codec.

package io

import Chisel._

class AudioADCBuffer(AUDIOBITLENGTH: Int, MAXADCBUFFERPOWER: Int) extends Module {

  // IOs
  val io = new Bundle {
    // to/from AudioADC
    val audioLAdcI = UInt(INPUT, AUDIOBITLENGTH)
    val audioRAdcI = UInt(INPUT, AUDIOBITLENGTH)
    val enAdcO = UInt(OUTPUT, 1)
    val readEnAdcI = UInt(INPUT, 1) // used to sync reads
    // to/from PATMOS
    val enAdcI = UInt(INPUT, 1)
    val audioLPatmosO = UInt(OUTPUT, AUDIOBITLENGTH)
    val audioRPatmosO = UInt(OUTPUT, AUDIOBITLENGTH)
    val readPulseI = UInt(INPUT, 1)
    val emptyO = UInt(OUTPUT, 1) // empty buffer indicator
    val bufferSizeI = UInt(INPUT, MAXADCBUFFERPOWER + 1) // maximum bufferSizeI: (2^MAXADCBUFFERPOWER) + 1
  }

  val BUFFERLENGTH: Int = (Math.pow(2, MAXADCBUFFERPOWER)).asInstanceOf[Int]

  // Registers for output audio data (to PATMOS)
  val audioLReg = Reg(init = UInt(0, AUDIOBITLENGTH))
  val audioRReg = Reg(init = UInt(0, AUDIOBITLENGTH))
  io.audioLPatmosO := audioLReg
  io.audioRPatmosO := audioRReg

  // FIFO buffer registers
  val audioBufferL = Vec.fill(BUFFERLENGTH) { Reg(init = UInt(0, AUDIOBITLENGTH)) }
  val audioBufferR = Vec.fill(BUFFERLENGTH) { Reg(init = UInt(0, AUDIOBITLENGTH)) }
  val w_pnt = Reg(init = UInt(0, MAXADCBUFFERPOWER))
  val r_pnt = Reg(init = UInt(0, MAXADCBUFFERPOWER))
  val fullReg = Reg(init = UInt(0, 1))
  val emptyReg = Reg(init = UInt(1, 1)) // starts empty
  io.emptyO := emptyReg
  val w_inc = Reg(init = UInt(0, 1)) // write pointer increment
  val r_inc = Reg(init = UInt(0, 1)) // read pointer increment

  // input handshake state machine (from AudioADC)
  val sInIdle :: sInRead :: Nil = Enum(UInt(), 2)
  val stateIn = Reg(init = sInIdle)
  // counter for input handshake
  val readCntReg = Reg(init = UInt(0, 3))
  val READCNTLIMIT = UInt(3)

  // output handshake state machine (to Patmos)
  val sOutIdle :: sOutReading :: Nil = Enum(UInt(), 2)
  val stateOut = Reg(init = sOutIdle)

  // full and empty state machine
  val sFEIdle :: sFEAlmostFull :: sFEFull :: sFEAlmostEmpty :: sFEEmpty :: Nil = Enum(UInt(), 5)
  val stateFE = Reg(init = sFEEmpty)

  // register to keep track of buffer size
  val bufferSizeReg = Reg(init = UInt(0, MAXADCBUFFERPOWER + 1))
  // update buffer size register
  when(bufferSizeReg =/= io.bufferSizeI) {
    bufferSizeReg := io.bufferSizeI
    r_pnt := r_pnt & (io.bufferSizeI - UInt(1))
    w_pnt := w_pnt & (io.bufferSizeI - UInt(1))
  }

  // output enable: just wire from input enable
  io.enAdcO := io.enAdcI

  // audio input handshake: if enable
  when(io.enAdcI === UInt(1)) {
    // state machine
    switch(stateIn) {
      is(sInIdle) {
        // wait until posEdge readEnAdcI
        when(io.readEnAdcI === UInt(1)) {
          // wait READCNTLIMIT cycles until input data is written
          when(readCntReg === READCNTLIMIT) {
            // read input, increment write pointer
            audioBufferL(w_pnt) := io.audioLAdcI
            audioBufferR(w_pnt) := io.audioRAdcI
            w_pnt := (w_pnt + UInt(1)) & (io.bufferSizeI - UInt(1))
            w_inc := UInt(1)
            // if it is full, write, but increment read pointer too
            // to store new samples and dump older ones
            when(fullReg === UInt(1)) {
              r_pnt := (r_pnt + UInt(1)) & (io.bufferSizeI - UInt(1))
              r_inc := UInt(1)
            }
            // update state
            stateIn := sInRead
          }
          .otherwise {
            readCntReg := readCntReg + UInt(1)
          }
        }
      }
      is(sInRead) {
        readCntReg := UInt(0)
        // wait until negEdge readEnAdcI
        when(io.readEnAdcI === UInt(0)) {
          // update state
          stateIn := sInIdle
        }
      }
    }
  }
  .otherwise {
    readCntReg := UInt(0)
    stateIn := sInIdle
    w_inc := UInt(0)
  }

  // audio output state machine: if enable and not empty
  when((io.enAdcI === UInt(1)) && (emptyReg === UInt(0))) {
    // state machine
    switch(stateOut) {
      is(sOutIdle) {
        when(io.readPulseI === UInt(1)) {
          audioLReg := audioBufferL(r_pnt)
          audioRReg := audioBufferR(r_pnt)
          stateOut := sOutReading
        }
      }
      is(sOutReading) {
        when(io.readPulseI === UInt(0)) {
          r_pnt := (r_pnt + UInt(1)) & (io.bufferSizeI - UInt(1))
          r_inc := UInt(1)
          stateOut := sOutIdle
        }
      }
    }
  }
  .otherwise {
    stateOut := sOutIdle
  }

  // update full and empty states
  when((w_inc === UInt(1)) || (r_inc === UInt(1))) {
    // default: set back variables
    w_inc := UInt(0)
    r_inc := UInt(0)
    // state machine
    switch(stateFE) {
      is(sFEIdle) {
        fullReg := UInt(0)
        emptyReg := UInt(0)
        when((w_inc === UInt(1)) && (w_pnt === ((r_pnt - UInt(1)) & (io.bufferSizeI - UInt(1)))) && (r_inc === UInt(0))) {
          stateFE := sFEAlmostFull
        }
        .elsewhen((r_inc === UInt(1)) && (r_pnt === ((w_pnt - UInt(1)) & (io.bufferSizeI - UInt(1)))) && (w_inc === UInt(0))) {
          stateFE := sFEAlmostEmpty
        }
      }
      is(sFEAlmostFull) {
        fullReg := UInt(0)
        emptyReg := UInt(0)
        when((r_inc === UInt(1)) && (w_inc === UInt(0))) {
          stateFE := sFEIdle
        }
        .elsewhen((w_inc === UInt(1)) && (r_inc === UInt(0))) {
          stateFE := sFEFull
          fullReg := UInt(1)
        }
      }
      is(sFEFull) {
        fullReg := UInt(1)
        emptyReg := UInt(0)
        when((r_inc === UInt(1)) && (w_inc === UInt(0))) {
          stateFE := sFEAlmostFull
          fullReg := UInt(0)
        }
      }
      is(sFEAlmostEmpty) {
        fullReg := UInt(0)
        emptyReg := UInt(0)
        when((w_inc === UInt(1)) && (r_inc === UInt(0))) {
          stateFE := sFEIdle
        }
        .elsewhen((r_inc === UInt(1)) && (w_inc === UInt(0))) {
          stateFE := sFEEmpty
          emptyReg := UInt(1)
        }
      }
      is(sFEEmpty) {
        fullReg := UInt(0)
        emptyReg := UInt(1)
        when((w_inc === UInt(1)) && (r_inc === UInt(0))) {
          stateFE := sFEAlmostEmpty
          emptyReg := UInt(0)
        }
      }
    }
  }
}
