
6.4.4 GUI of the Sonification procedure

This section gives a brief overview of the GUI designed in this project, depicted in Figure 48.

Figure 48. GUI of the time course auditory browser.

The GUI was kept very simple, with only the essential parameters accessible to the user.

The main parameters are:

Choice of independent component, corresponding to the time course sonified and the scalp map displayed (top left-hand side).

Choice of sound, selected from several available sound types (middle left-hand side).

Perform sonification (bottom left-hand side).

Play sonification (bottom right-hand side).

For the auditory browser to work, an EEG struct from EEGLAB containing the ICA components of the EEG signal must be in memory. A typical session might proceed as follows: the user enters the component to be sonified, and the scalp map of the selected component appears; once a sound type has been selected, the sonification is created by pressing the Sonify button; when the process has finished, pressing the Play button plays the result. A further description of how to use the GUI, together with two small examples, can be found on the enclosed CD-ROM in the folder “GUI and Instructions”.
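As a rough illustration of these preconditions, the following MATLAB fragment is a minimal sketch (the component index k = 5 is an arbitrary example, and continuous data with ICA run on all channels is assumed): it checks that a decomposition is present, extracts the time course of the chosen component, and plots its scalp map using EEGLAB's standard fields and topoplot function.

% Sketch: verify that ICA has been run before launching the browser
% (assumes a continuous, 2-D EEG.data with ICA computed on all channels).
if isempty(EEG.icaweights) || isempty(EEG.icasphere)
    error('No ICA decomposition found; run e.g. EEG = pop_runica(EEG); first.');
end
k      = 5;                                  % component selected in the GUI
W      = EEG.icaweights * EEG.icasphere;     % unmixing matrix
icaact = W * EEG.data;                       % component time courses
comp   = icaact(k, :);                       % time course to be sonified
figure; topoplot(EEG.icawinv(:, k), EEG.chanlocs);  % corresponding scalp map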

6.5 Conclusion

Initially, a short introduction to EEG was given, followed by a section on the artifacts present in EEG signals. A brief section on applying ICA to EEG for the purpose of artifact removal was then presented, leading to a section on previous methods for EEG sonification. Finally, a suggestion of how to use sonification for auditory browsing of time courses, with the intent of decontaminating the EEG signal when using the ICA method, was presented. This included descriptions of the features and of the validation procedure of the classifiers, together with a section where the author attempted to formalize the heuristic procedure of mapping the outputs of the classifier to an asynchronous granular synthesis. In connection with this, a GUI for the browser was created to allow and encourage the reader to try the auditory browser.

Chapter 7

7 Discussion

The author believes that some statements repeated by many researchers paint a glorified picture of sonification as a solution to the “limited” dimensionality of the visual system, and thus as a proven tool for analyzing complex multivariate data. Is it clear how much information we can absorb and understand at one time? Does the auditory system allow comprehensible analysis of information of higher dimensionality than the visual system? The author is unable to answer these questions, yet feels they must be answered before statements such as the one above can be used as an incentive to use sonification.

Sonification does, however, allow data to be experienced in a new way, and can in some instances supply information that would not otherwise have been accessible. Can it then be specified in which instances sonification will give a deeper understanding of the data? Like every method, sonification will not be the best choice for all problems, though, as mentioned, it does offer a new way of viewing data. A hint at an answer is given by the examples of Voyager 2 and the Quantum Whistle, where the information being sought was embedded in time series with high noise levels and therefore could not be perceived via the visual system.

Using auditory displays leaves the eyes free for other tasks that require visual fixation, for example in many medical and critical monitoring situations. Here the concept of backgrounding is used: attention is drawn to specific information only when unexpected states in the sonification, i.e. in the data, are perceived, usually when large changes occur.

Sonification may be a good aid for rapid screening of data, since an auditory stream can be consumed with comparatively little effort. Usability tests establishing where and why sonifications are superior to their visual counterparts are an invaluable source of information, and should be encouraged and preferably conducted together with psychologists or psychophysicists. As mentioned earlier, the high temporal resolution of the auditory system means that a sonification conveys not only that blink-ish events occur, but also when in time they occur. This is an advantage over static visual displays such as pie charts or numerical presentations of how many blinks were detected.

Sound should not be used in situations where it can interfere with speech communication, or in open workspaces, where it can rapidly become annoying. Poor sound quality or tiring sounds likewise create fatigue and annoyance, and will ultimately lead to other solutions being found to convey the information. The limited usable range of loudness is a further drawback: presenting something too loud creates fatigue and annoyance, while presenting a sonification too softly makes it sensitive to masking by other sounds in the environment, resulting in strenuous listening. The lack of orthogonality of auditory dimensions makes sonifications increasingly unpredictable as more dimensions are used to convey information. When working in such high dimensionality, [Hermann 2002] recommends that the designer of a sonification experiment until a solution is found that subjectively allows the parameters of the sound to be distinguished.

Cultural bias and the user's ability to understand and interpret sounds play an important part in presenting data via sound. Musicians, for example, are better at discerning changes in sound than non-musicians. This leads to the learnability of auditory displays: inexperienced users might need to be trained before they can gain the full benefit of these types of displays. Furthermore, experienced users may require other sonifications and other interaction possibilities than inexperienced users [Flowers 2005]. One reason is that auditory displays are not a common way of conveying information, and, as with everything met for the first time, some degree of familiarization is needed.

The auditory browser presented in this thesis was intended to aid the search for isolated artifacts in time courses for researchers who use ICA to form “decontaminated” versions of EEG signals. Since the author spent a great deal of time studying the sonification literature, and was perhaps sidetracked into spending too large a part on investigating the interesting possibilities of augmenting data sets for extracting relevant features in binary classification problems, the remaining time was limited. For this reason the only artifact taken into account was eye blinks, which is the easiest artifact to detect. This could of course be extended to include a search for more artifacts, such as muscle activity and line noise. A further extension could have been a search through ERP time courses for stimulus-locked, response-locked and non-phase-locked activities, as performed manually in [Jung et al. 2001]. This could aid in finding “decontaminated” versions of ERPs and the components that constitute them. The auditory browser can be categorized as lying between parameter mapping and parameterized auditory icons on the analogic/symbolic continuum: it does not represent analogically what is happening in the data, though it does give a sense of the categorical time structure of the data.

Instead of creating a model of eye blinks, one could also have chosen to sonify the features of the eye blink directly. Sonifying the two features used in this thesis is not a difficult affair; one could, for example, map one feature to pitch and the other to a prominent dimension of timbre, such as brightness. However, if more features are needed to specify other artifacts, this kind of “feature mapping” becomes more difficult and less intuitive. A model that classifies the multidimensional feature space and outputs values corresponding to the state it is in is therefore a more general solution. One could argue that this removes one further from the actual data, since the model can only identify the learned states from the current features, making the approach less flexible if searches for non-learned states are wanted, i.e. exploratory data analysis. However, exploratory data analysis and this form of monitoring are two separate tasks, as suggested in [Kramer 1994, p. 15], and one could argue that new features can be extracted and the classifier updated, requiring only a new way of representing the new state, which makes this a very flexible approach. Extending the feature space could mean that other model types are needed, such as neural networks, relevance vector machines or hidden Markov models [Williamson and Murray-Smith 2003], though this does not influence the parameter mapping when the probability outputs of the classifier are used.
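As a rough sketch of this last point, the following MATLAB fragment maps a vector p of per-second classifier probabilities to the grain rate of an asynchronous granular stream. The grain duration, frequency, maximum rate and one-second frame length are illustrative assumptions rather than the exact mapping used in the thesis, and poissrnd requires the Statistics Toolbox.

% Sketch: probability-driven asynchronous granular synthesis.
% p: row vector of classifier outputs in [0,1], one value per second.
fs    = 44100;
out   = zeros(1, fs * length(p));             % output audio buffer
win   = 0.5 - 0.5 * cos(2*pi*(0:255)/255);    % Hann window, 256 samples
grain = win .* sin(2*pi*880*(0:255)/fs);      % one short sine grain (assumed)
maxRate = 200;                                % grains per second at p = 1
for t = 1:length(p)
    nGrains = poissrnd(maxRate * p(t));       % stochastic grain count
    for g = 1:nGrains
        onset = (t-1)*fs + randi(fs - 256);   % random onset within the frame
        out(onset:onset+255) = out(onset:onset+255) + grain;
    end
end
soundsc(out, fs);                             % audition the sonification

High probability of the blink state thus yields a dense grain cloud, while quiet stretches indicate uncontaminated data; swapping the classifier only changes p, not the mapping.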

A feature the author wanted to add to the GUI was a “zoom” function, inspired by zooming in the visual domain. This would have allowed the user to listen to a short presentation of the time course, as is now possible, and then zoom in to a more detailed representation of an interesting part of it, i.e. to listen to a small area of the time course prolonged in time. This would perhaps have been interesting for ERP analysis, as suggested in [Mayer-Kress 1994]. Another, possibly far-fetched, idea was to use the scalp maps, i.e. the columns of the mixing matrix, to control where in a three-dimensional space the sound originated from. If, for example, listening to the eye-blink time course, the sound generated would be perceived as coming from in front of the listener. This would perhaps eliminate the need to present the scalp map in the GUI, and for ERP analysis it could give an idea of where in the brain the processes stem from and how they evolve in time. The author imagined this could best be realized using HRTFs. Recently, in [Lokki 2005], HRTFs were used to create a three-dimensional sound space that presents, in slow motion, the spatial distribution of early reflections and the spectrum of each reflection, intended as a more intuitive investigation tool for room acoustic designers.
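A minimal sketch of the proposed zoom, reusing the probability vector p from the sketch above; the window bounds and stretch factor are arbitrary examples.

% Sketch: "zoom" by re-sonifying a selected window at a slower rate,
% so that one data second occupies zoomFactor seconds of audio.
t1 = 10; t2 = 12;                        % window of interest, in seconds
zoomFactor = 4;                          % audio seconds per data second
seg   = p(t1:t2);                        % classifier outputs in the window
pZoom = kron(seg, ones(1, zoomFactor));  % stretch each frame in time
% pZoom can then be fed through the same grain-mapping loop as above.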

As mentioned, the hypothesis that this form of browsing is more efficient and informative than the present form of time course analysis has to be tested, though this was beyond the scope of this thesis. Empirical research on learnability, reliability and usability is an important requirement for a more quantitative assessment of the performance of different sonification strategies, and should be encouraged until a clearer picture emerges of why sonifications work or do not work. This is important because most mappings from data to sound are done in a heuristic fashion, which makes quantitative assessment of the sonification necessary. It is suggested that the audience demographics first be studied [Walker and Kramer 1996], e.g. researchers in EEG, and that the obtained knowledge should then influence the choice of testing technique(s) and of the sonification, including the GUI.

For researchers who in the future are interested in the field of sonification, one

Finally, the author reaffirms that the frequently stated need for a flexible toolbox for sonification research and data exploration still stands.

Chapter 8

8 Conclusion

This project concerns auditory browsing based on monitoring the states (i.e. contaminated and non-contaminated) in EEG time courses. Features were extracted from the EEG time courses and a classification of these was performed. The granular synthesis technique was used to translate the probabilities of being in a given state at a given time into auditory information. To ensure that relevant changes in the time course data were perceived when listening to the sonification, the classification component was an important part of the translation, or mapping, process. As part of the classification study, the concept of augmented data sets for binary classification problems was investigated.

The concept of augmented data sets was presented and heuristically investigated in chapter 2. There it was seen that the discriminatory value of PCA increased to the level of linear discriminant functions. The results seemed to show that, when using APCA, the (d + 1)-th eigenvector, for small class-label values, in most cases gave the best results. The APCA procedure was tested on experimental data (chapter 2) in many dimensions and on real data (chapter 6) in two dimensions, and in both cases it gave consistent results.

Compared to other linear discriminant techniques, however, the APCA method is computationally very inefficient. Furthermore, a preliminary investigation of augmenting ICA (infomax) was given. For data of non-zero mean, AICA showed a general increase in discriminatory value compared to ICA. For zero-mean, super-Gaussian distributed data, the directions of the column vectors in the mixing matrix seemed to correlate better with the directions of the data set than those obtained when ICA was run on the same data. Both APCA and AICA are limited to binary classification problems, and it is clear that further investigation of these methods is necessary to give a lucid and precise explanation of what is going on.
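For concreteness, a minimal sketch of the APCA procedure is given below, assuming the augmentation consists of appending a scaled class-label row to the data; the exact augmentation and scaling used in chapter 2 may differ.

% Sketch: augmented PCA (APCA) for a binary classification problem.
% X: d x N data matrix, y: 1 x N labels in {0,1}, alpha: small label scale.
Xa = [X; alpha * y];                             % augmented (d+1) x N data
Xa = Xa - repmat(mean(Xa, 2), 1, size(Xa, 2));   % remove the mean
[V, D]     = eig(Xa * Xa' / size(Xa, 2));        % eigenvectors of covariance
[~, order] = sort(diag(D), 'descend');           % sort by eigenvalue
w     = V(1:d, order(d + 1));                    % (d+1)-th eigenvector, data part
score = w' * X;                                  % 1-D discriminant projection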

A brief introduction to auditory perception and sound synthesis was presented as part of the understanding of the sonification procedure. Auditory perception and the findings in this field are crucial to have in mind when designing sonifications, and for understanding the sound synthesis techniques utilized for an auditory browser. The chapter on sonification gave an overview of the field and presented a section on the issues in designing sonifications, where observations in auditory perception were related to sonification design.

In chapter 6, previous EEG sonification techniques were presented, together with the design procedure of the auditory browser implemented in the course of the project. Furthermore, a GUI was designed to combine the components in an easy-to-use interface.

References

Baier, G. and Hermann, T. The Sonification of Rhythms in Human Electroencephalogram. Proc. of the ICAD 2004, Sydney, Australia, July 2004.

Barras, S. and Kramer, G. Using Sonification. Multimedia Systems, Springer-Verlag, pp. 23-31, 1999.

Benford, S. and Greenhalgh, C. Introducing Third Party Objects into the Spatial Model of Interaction. Fifth European Conference on Computer Supported Cooperative Work, Lancaster, UK, 1997.

Bishop, C.M. Neural Networks for Pattern Recognition. Oxford University Press Inc., New York, USA, 1995.

Blattner, M. M., Sumikawa, D. A., and Greenberg, R. M. Earcons and icons: Their structure and common design principles. Human Computer Interaction, 4, 1, pp. 11-44, 1989.

Bonebright, T.L., Miner, N.E., Goldsmith, T.E. and Caudell, T.P. Data Collection and Analysis Techniques for Evaluating the Perceptual Qualities of Auditory Stimuli. Proc. of ICAD 1998, University of Glasgow, UK, Nov. 1998.

Bregman, A. Auditory Scene Analysis. Cambridge, MA: MIT Press, 1990.

Childs, E. Achorripsis: A sonification of probability distributions. Proc. of the ICAD 2002, 2002.

Ersbøll, B.K. and Conradsen, K. An introduction to statistics: volume 2. 6th edition, DTU-tryk, Kgs. Lyngby, 2003.

Fawcett, T. ROC Graphs: Notes and Practical Considerations for Data Mining Researchers. Intelligent Enterprise Technologies Laboratory, HP Laboratories Palo Alto, HPL-2003-4, California, Jan. 2003.

Fernström, J.M. and McNamara, C. After Direct Manipulation – Direct Sonification. Proc. of the ICAD 1998, Glasgow, Scotland, 1998.

Flowers, J.H. Thirteen Years of Reflection on Auditory Graphing: Promises, Pitfalls, and Potential New Directions. Proc. of the ICAD 2005, Limerick, Ireland, July 2005.

Gaver, W.W. Using and Creating Auditory Icons. In G. Kramer (Ed.), Auditory Display: Sonification, Audification, and Auditory Interfaces, Addison-Wesley, pp. 417-446, 1994.

Hartmann, W.M. Signals, Sound, and Sensation. Springer, New York, 1998.

Hermann, T. Sonification for Exploratory Data Analysis. Ph.D. dissertation, Bielefeld University, Germany, Aug. 2002.

Hermann, T. and Ritter, H. Listen to your Data: Model-Based Sonification for Data Analysis. Proc. of the ISIMADE '99, Baden-Baden, Germany, 1999.

Hermann, T., Meinicke, P., Bekel, H., Müller, H.M., Weiss, S. and Ritter, H. Sonifications for EEG Data Analysis. Proc. of the ICAD 2002, Kyoto, Japan, July 2002.

Herrmann, C.S., Lenz, D., Junge, S., Busch, N.A. and Maess, B. Memory-matches Evoke Human Gamma-responses. BMC Neuroscience 5:13, 2004.

Hinterberger, T. and Baier, G. Parametric Orchestral Sonification of EEG in Real Time. Proc. of the International Workshop on Interactive Sonification, Bielefeld, Germany, Jan. 2004.

Hinterberger, T., Baier, G., Mellinger, J., and Birbaumer, N. Auditory Feedback of Human EEG for Direct Brain-Computer Communication. Proc. of the ICAD 2004, Sydney, Australia, July 2004.

Hooper, G. EEG Sonification. Proc. of the ICAD 2004, Sydney, Australia, July 2004.

Jensen, K. The Timbre Model. Workshop on current research directions in computer music, Barcelona, Spain, 2001.

Jovanov, E., Starcevic, D., Marsh, D., Obrenovic, Z., Radivojevic, V. and Samardzic, A. Multimodal presentation in virtual telemedical environments. High-Performance Computing and Networking, Lecture Notes in Computer Science, 1593, pp. 964-972, 1999.

Jung, T., Makeig, S., Westerfield, M., Townsend, J., Courchesne, E. and Sejnowski, T.J. Removal of eye activity artifacts from visual event-related potentials in normal and clinical subjects. Clinical Neurophysiology 111, pp. 1745-1758, 2000.

Jung, T., Makeig, S., Westerfield, M., Townsend, J., Courchesne, E. and Sejnowski, T.J. Analysis and Visualization of Single-Trial Event-Related Potentials. Human Brain Mapping 14, pp. 166-185, Wiley-Liss Inc., 2001.

Kennedy, R.L., Lee, Y., Van Roy, B., Reed, C.D. and Lippmann, R.P. Solving Data Mining Problems through Pattern Recognition. Prentice Hall PTR, New Jersey, 1995.

Kolenda, T. Independent Component Analysis, Master Thesis, Department of Mathematical Modeling, Technical University of Denmark, 1998.

Kramer, G. Auditory Display - Sonification, Audification, and Auditory Interfaces, Proceedings Volume, Addison Wesley, 1994.

Kramer, G., Walker, B., Bonebright, T., Cook, P., Flowers, J., Miner, N. and Neuhoff, J. Sonification Report: Status of the Field and Research Agenda. Technical Report, ICAD, 1999.

Kronland-Martinet, R., Guillemain, P. and Ystad, S. The Timbre Model – Discrimination and Expression. MOSART Deliverable D22, Evaluation Report of Timbre Modeling, 2002.

Kwak, N. and Choi, C.H. Feature Extraction Based on ICA for Binary Classification Problems. IEEE Trans. on Knowledge and Data Engineering, 2003.

Lokki, T. Auralization of Simulated Impulse Responses in Slow Motion. AES 118th Convention Paper 6500, Barcelona, Spain, May 2005.

Makeig, S., Jung, T.P., Ghahremani, D., Bell, A.J. and Sejnowski, T.J. Blind separation of event-related brain responses into independent components. Proc. Natl. Acad. Sci. USA, 94, pp. 10979-10984, 1997.

Marentakis, G. and Jensen, K. Sinusoidal Synthesis Optimization. Proc. of the ICMC, Göteborg, Sweden, 2002.

Mayer-Kress, G. Sonification of Multiple Electrode Human Scalp Electroencephalogram. Proc. of the ICAD, 1994.

Meinicke, P., Hermann, T., Bekel, H., Müller, H.M., Weiss, S. and Ritter, H. Identification of Discriminative Features in the EEG. Intelligent Data Analysis 8, IOS Press, pp. 97-107, 2004.

Moore, B.C.J. An Introduction to the Psychology of Hearing. 4th edition, Academic Press, London, 1997.

Mørup, M. Analysis of Brain Data – Using Multi-Way Array Models on the EEG. Informatics and Mathematical Modelling, Technical University of Denmark, DTU, 2005.

Neuhoff, J.G., McBeath, M.K. and Wanzie, W.C. Dynamic Frequency Change Influences Loudness Perception: A Central, Analytical Process. Journal of Experimental Psychology: Human Perception and Performance, Vol. 25(4), pp. 1050-1059, 1999.

Patterson, R. Guidelines for Auditory Warning Systems on Civil Aircraft. Civil Aviation Authority Paper 82017, 1982.

Poulsen, T. Ear, Hearing and Speech: A short introduction. Version 1.2, Ørsted-DTU, Lyngby, 2001.

Roads, C. The Computer Music Tutorial. The MIT Press. Massachusetts, USA, 1996.

Truax, B. Real-Time Granular Synthesis with a Digital Signal Processor. Computer Music Journal, vol. 12, no. 2, pp. 14-26, 1988.

Walker, B. and Ehrenstein, A. Congruency Effects with Dynamic Auditory Stimuli: Design Implications. Proc. of the ICAD, Palo Alto, California, Nov. 1997.

Walker, B. and Kramer, G. Human Factors and the Acoustic Ecology: Considerations for Multimedia Audio Design. Proc. of the ICAD, Palo Alto, California, Nov. 1996.

Williamson, J. and Murray-Smith, R. Granular Synthesis for Display of Time-Varying Probability Densities. Proc. of the International Workshop on Interactive Sonification, Bielefeld, Germany, Jan. 2004.

Williamson, J. and Murray-Smith, R. Audio Feedback with Gesture Recognition. Technical Report TR-2002-127, Department of Computer Science, University of Glasgow, UK, Dec. 2002.
