
De Poli, Giovanni - Avanzini, Federico - Mion, Luca - D'Incà, Gianluca - Trestino, Cosmo - Pirrò, Davide - Luciani, Annie - Castagné, Nicholas (2005) Towards a multi-layer architecture for multi-modal rendering of expressive actions. In: Proceedings of ENACTIVE05, 2nd International Conference on Enactive Interfaces, 17-18 November 2005, Genova, Italy.

Full text available as:

PDF document (preview available), 308 Kb

Abstract (English)

Expressive content has multiple facets that can be conveyed by music, gestures, and actions. Different application scenarios can require different metaphors for expressiveness control. In order to meet the requirements for flexible representation, we propose a multi-layer architecture structured into three main levels of abstraction. At the top (user level) there is a semantic description, adapted to specific user requirements and conceptualizations. At the other end are low-level features that describe parameters strictly related to the rendering model. Between these two extremes, we propose an intermediate layer that provides a description shared by the various high-level representations on one side, and that can be instantiated to the various low-level rendering models on the other. To provide a common representation of different expressive semantics and different modalities, we propose a physically inspired description specifically suited to expressive actions.
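As a rough illustration of the layered architecture described in the abstract, the following Python sketch (not part of the paper; every class name, parameter name, and numeric mapping below is a hypothetical placeholder) shows how a user-level semantic descriptor could be mapped to a shared, physically-inspired intermediate description and then instantiated into parameters for two different low-level rendering models.

# Minimal illustrative sketch of a three-layer mapping:
# semantic user-level descriptors -> shared physically-inspired intermediate layer
# -> model-specific low-level rendering parameters.
# All names, values, and mappings are hypothetical and for illustration only.

from dataclasses import dataclass

@dataclass
class IntermediateAction:
    # Physically-inspired description shared across modalities (assumed parameters).
    mass: float        # perceived inertia of the action
    stiffness: float   # resistance / tension
    damping: float     # how quickly energy dissipates
    energy: float      # overall activation level

# User level: semantic expressive intentions mapped onto the intermediate layer.
SEMANTIC_MAP = {
    "light": IntermediateAction(mass=0.2, stiffness=0.3, damping=0.7, energy=0.4),
    "heavy": IntermediateAction(mass=0.9, stiffness=0.6, damping=0.3, energy=0.8),
}

def to_sound_model(action: IntermediateAction) -> dict:
    # Low level: instantiate the shared description for a hypothetical audio renderer.
    return {"attack_time": 0.05 + 0.3 * action.mass,
            "loudness": action.energy,
            "decay": 1.0 - action.damping}

def to_gesture_model(action: IntermediateAction) -> dict:
    # Low level: instantiate the same description for a hypothetical movement renderer.
    return {"velocity": action.energy / max(action.mass, 1e-3),
            "smoothness": action.damping}

if __name__ == "__main__":
    intent = SEMANTIC_MAP["light"]       # user-level choice
    print(to_sound_model(intent))        # rendering-model parameters (audio)
    print(to_gesture_model(intent))      # rendering-model parameters (gesture)

In such a design, the value of the intermediate layer is that both renderers consume the same physically-inspired description, so new semantic vocabularies or new rendering models can be added on either side without modifying the other.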


EPrint type: Conference contribution (talk)
Year of publication: November 2005
MIUR scientific-disciplinary sectors: Area 09 - Industrial and Information Engineering > ING-INF/05 Information Processing Systems
Reference structure: Departments > Department of Information Engineering
ID code: 100
Deposited on: 10 Jun 2007

