Michieletto, Stefano (2014) Robot Learning by observing human actions. [Doctoral thesis]

Full text available as:

PDF document (submitted version), 9 MB
Available under a Creative Commons Attribution Share Alike license.

Abstract

Nowadays, robotics is entering our everyday life: robots can be found in industries, offices, and even homes. The more robots come into contact with people, the more demand grows for new capabilities and features, so that robots can act in case of need, help humans, or serve as companions. A quick and easy way to teach robots new skills therefore becomes essential. That is the aim of Robot Learning from Demonstration, a paradigm that allows new tasks to be programmed into a robot directly through demonstrations.

This thesis proposes a novel approach to Robot Learning from Demonstration that learns new skills from natural demonstrations carried out by naive users. To this end, we introduce a new Robot Learning from Demonstration framework, contributing novel approaches in all of its functional sub-units: from data acquisition to motion processing, and from information modeling to robot control.

We describe a novel method to extract 3D motion flow information from both the RGB and the depth data acquired with recently introduced consumer RGB-D cameras. The motion data are computed over time to recognize and classify human actions.
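
As a concrete illustration of the idea, here is a minimal sketch of how per-pixel 3D motion flow can be computed from two consecutive RGB-D frames: a dense 2D optical flow establishes the pixel correspondences, and the depth maps back-project both endpoints into 3D so that their difference gives the motion vector. The function, the Farneback parameters, and the pinhole intrinsics are illustrative assumptions, not the implementation used in the thesis.

```python
# Minimal sketch (not the thesis implementation): per-pixel 3D motion flow
# from two RGB-D frames, assuming a pinhole camera with illustrative
# intrinsics and depth maps given in metres.
import cv2
import numpy as np

def motion_flow_3d(rgb_prev, rgb_curr, depth_prev, depth_curr,
                   fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Return an (H, W, 3) array of 3D displacement vectors."""
    gray_prev = cv2.cvtColor(rgb_prev, cv2.COLOR_BGR2GRAY)
    gray_curr = cv2.cvtColor(rgb_curr, cv2.COLOR_BGR2GRAY)
    # Dense 2D optical flow provides the image-plane correspondences.
    flow = cv2.calcOpticalFlowFarneback(gray_prev, gray_curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = gray_prev.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    u2 = np.clip(u + flow[..., 0], 0, w - 1).astype(int)
    v2 = np.clip(v + flow[..., 1], 0, h - 1).astype(int)

    def backproject(uu, vv, z):
        # Pinhole model: pixel (uu, vv) at depth z -> 3D point (x, y, z).
        x = (uu - cx) * z / fx
        y = (vv - cy) * z / fy
        return np.stack([x, y, z], axis=-1)

    p_prev = backproject(u, v, depth_prev)
    p_curr = backproject(u2, v2, depth_curr[v2, u2])
    # Handling of invalid (zero) depth readings is omitted for brevity.
    return p_curr - p_prev
```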

In this thesis, we also describe new techniques to remap human motion onto robot joints. Our methods allow people to interact naturally with robots by re-targeting whole-body movements in an intuitive way. We develop re-targeting algorithms for both humanoid robots and manipulators, and test them in different situations.
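
As an illustration of what joint-space re-targeting involves, the sketch below maps a tracked human elbow onto a single robot joint: the flexion angle is measured from the shoulder, elbow, and wrist positions returned by a skeleton tracker and clamped to the robot's joint limits. The function names and the limit values are hypothetical; a real system would read the bounds from the robot model.

```python
# Hypothetical sketch of joint-space re-targeting for a single joint.
import numpy as np

def angle_at(a, b, c):
    """Angle at joint b formed by the segments b->a and b->c, in radians."""
    v1 = np.asarray(a, float) - np.asarray(b, float)
    v2 = np.asarray(c, float) - np.asarray(b, float)
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.arccos(np.clip(cosang, -1.0, 1.0)))

def retarget_elbow(shoulder, elbow, wrist, limits=(0.0, 2.6)):
    """Map human elbow flexion onto a robot joint, respecting its limits.

    `limits` is an illustrative (min, max) range in radians; the actual
    bounds would come from the robot's model (e.g. its URDF).
    """
    flexion = np.pi - angle_at(shoulder, elbow, wrist)  # 0 = arm extended
    return float(np.clip(flexion, *limits))

# Example with made-up tracker positions (metres):
q = retarget_elbow([0.0, 0.0, 0.0], [0.3, 0.0, 0.0], [0.5, 0.2, 0.0])
```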

Finally, we improve the modeling techniques by adopting a probabilistic method: the Donut Mixture Model. This model can handle the several interpretations that different people produce when performing a task. The estimated model can also be updated directly from new attempts carried out by the robot, a feature that is very important for rapidly obtaining correct robot trajectories from only a few human demonstrations.
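
To convey the intuition, the toy one-dimensional sketch below builds a ring-shaped ("donut") density as the difference between a broad and a narrow Gaussian centred on an observed attempt, so the density dips where that attempt lies and peaks around it. This construction only illustrates the shape: the actual Donut Mixture Model follows Grollman and Billard's parameterization, and every constant here is made up.

```python
# Toy 1D illustration of a ring-shaped ("donut") density: the difference of
# a broad and a narrow Gaussian dips at the centre and peaks on both sides.
# This is NOT the thesis's Donut Mixture Model, only a shape illustration.
import numpy as np

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def donut_pdf(x, mu, sigma, ratio=2.0):
    """Difference of two Gaussians around `mu` (requires ratio > 1).

    The scaling keeps the difference non-negative, and dividing by
    (ratio - 1) makes it integrate to one.
    """
    d = ratio * gauss_pdf(x, mu, ratio * sigma) - gauss_pdf(x, mu, sigma)
    return np.maximum(d, 0.0) / (ratio - 1.0)

x = np.linspace(-4.0, 4.0, 401)
p = donut_pdf(x, mu=0.0, sigma=0.5)  # zero at x = 0, peaks on either side
```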

A further contribution of this thesis is a set of new virtual models of the different robots we used to test our algorithms. All the developed models are ROS-compliant, so the whole community of this widespread robotics framework can use them to foster research in the field. Moreover, we collected a new 3D dataset to compare different action recognition algorithms; it contains both the RGB-D information coming directly from the sensor and the skeleton data provided by a skeleton tracker.

EPrint type: Doctoral thesis
Supervisor: Menegatti, Emanuele
Ph.D. course (cycles and schools): Ciclo 26 > Scuole 26 > INGEGNERIA DELL'INFORMAZIONE > SCIENZA E TECNOLOGIA DELL'INFORMAZIONE
Thesis deposit date: 30 January 2014
Publication year: 2014
Keywords (Italian / English): Robot Learning from Demonstration, Machine Learning, Robotics, RGB-D data, Motion Re-targeting, Action Recognition, Natural Demonstrations, 3D Motion
MIUR scientific-disciplinary sector: Area 09 - Ingegneria industriale e dell'informazione > ING-INF/05 Sistemi di elaborazione delle informazioni
Reference structure: Dipartimenti > Dipartimento di Ingegneria dell'Informazione
ID code: 6774
Deposited on: 19 May 2015 16:27