Bogo, Federica (2015) From scans to models: Registration of 3D human shapes exploiting texture information. [Doctoral thesis]

Full text available as:

PDF document
Thesis not accessible until 30 January 2018 for reasons related to intellectual property.
Visible to: none

7 MB

Abstract (English)

New scanning technologies are increasing the importance of 3D mesh data, and of algorithms that can reliably register meshes obtained from multiple scans. Surface registration is important, for example, for building full 3D models from partial scans, identifying and tracking objects in a 3D scene, and creating statistical shape models.

Human body registration is particularly important for many applications, ranging from biomedicine and robotics to the production of movies and video games; but obtaining accurate and reliable registrations is challenging, given the articulated, non-rigidly deformable structure of the human body.

In this thesis, we tackle the problem of 3D human body registration. We start by analyzing the current state of the art, and find that: a) most registration techniques rely only on geometric information, which is ambiguous on flat surface areas; b) there is a lack of adequate datasets and benchmarks in the field. We address both issues.

Our contribution is threefold. First, we present a model-based registration technique for human meshes that combines geometry and surface texture information to provide highly accurate mesh-to-mesh correspondences. Our approach estimates scene lighting and surface albedo, and uses the albedo to construct a high-resolution textured 3D body model that is brought into registration with multi-camera image data using a robust matching term.
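To make the idea concrete, the following is a minimal NumPy sketch of the kind of objective such a model-based approach might minimize: a geometric distance term plus an albedo-versus-observed-color term, each passed through a bounded robust penalty (a Geman-McClure-style function is used here purely for illustration). The function names, weights, and data are stand-ins, and model-to-scan correspondences are assumed to be given; this is not the thesis implementation.

import numpy as np

def geman_mcclure(r, sigma=1.0):
    # Bounded robust penalty: r^2 / (r^2 + sigma^2), so outliers saturate.
    r2 = np.sum(r * r, axis=-1)
    return r2 / (r2 + sigma * sigma)

def registration_energy(model_pts, model_albedo, scan_pts, scan_colors,
                        w_geo=1.0, w_tex=1.0):
    # Toy mesh-to-scan energy: distance to the (assumed precomputed)
    # corresponding scan point, plus albedo vs. observed color difference.
    geo = geman_mcclure(model_pts - scan_pts, sigma=0.05).sum()       # meters
    tex = geman_mcclure(model_albedo - scan_colors, sigma=0.1).sum()  # RGB in [0, 1]
    return w_geo * geo + w_tex * tex

# Usage with random stand-in data.
rng = np.random.default_rng(0)
model_pts = rng.normal(size=(100, 3))
scan_pts = model_pts + 0.01 * rng.normal(size=(100, 3))
model_albedo = rng.uniform(size=(100, 3))
scan_colors = np.clip(model_albedo + 0.05 * rng.normal(size=(100, 3)), 0.0, 1.0)
print(registration_energy(model_pts, model_albedo, scan_pts, scan_colors))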

Second, by leveraging our technique, we present FAUST (Fine Alignment Using Scan Texture), a novel dataset collecting 300 high-resolution scans of 10 people in a wide range of poses. FAUST is the first dataset providing both real scans and automatically computed, reliable ground-truth correspondences between them.
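Because ground-truth correspondences are provided, registration accuracy on such a dataset can be scored directly as the distance between predicted and ground-truth corresponding points. The snippet below is a hedged sketch of that kind of evaluation, not the official FAUST benchmark code; array names and shapes are assumptions.

import numpy as np

def correspondence_error(pred_pts, gt_pts):
    # Mean and maximum Euclidean error (in the scan's units, e.g. meters)
    # between predicted and ground-truth corresponding 3D points.
    d = np.linalg.norm(pred_pts - gt_pts, axis=1)
    return d.mean(), d.max()

# Usage with stand-in (N, 3) arrays.
rng = np.random.default_rng(1)
gt = rng.normal(size=(1000, 3))
pred = gt + 0.002 * rng.normal(size=(1000, 3))
print(correspondence_error(pred, gt))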

Third, we explore possible uses of our approach in dermatology. By combining our registration technique with a melanocytic lesion segmentation algorithm, we propose a system that automatically detects new or evolving lesions over almost the entire body surface, thus helping dermatologists identify potential melanomas.
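A minimal sketch of how such a system could flag changes between two visits, assuming both scans have already been registered to a common body template so that their lesion segmentations live in a shared texture space; the function name and threshold below are hypothetical and do not reproduce the thesis pipeline.

import numpy as np

def flag_new_lesions(mask_before, mask_after, min_new_texels=5):
    # Both masks are boolean arrays defined in the template's texture space.
    # Returns the texels that are lesion now but were not before, and a flag
    # that fires when enough new texels appear to warrant inspection.
    new = mask_after & ~mask_before
    return new, int(new.sum()) >= min_new_texels

# Usage with toy 64x64 texture-space masks.
before = np.zeros((64, 64), dtype=bool)
after = before.copy()
after[10:14, 20:24] = True  # a lesion that appears only in the later scan
new, alert = flag_new_lesions(before, after)
print(new.sum(), alert)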

We conclude this thesis by investigating the benefits of using texture information to establish frame-to-frame correspondences in dynamic monocular sequences captured with consumer depth cameras. We outline a novel approach to reconstruct realistic body shape and appearance models from dynamic human performances, and show preliminary results on challenging sequences captured with a Kinect.
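To give a flavor of how texture can constrain frame-to-frame correspondence in such a monocular setting, the sketch below computes photometric residuals by projecting model vertices into an image under assumed pinhole intrinsics and comparing their albedo to the observed pixel colors. Camera parameters, names, and data are stand-ins rather than the formulation used in the thesis.

import numpy as np

def photometric_residuals(verts_cam, albedo, image, K):
    # verts_cam: (N, 3) points in camera coordinates; albedo: (N, 3) in [0, 1];
    # image: (H, W, 3) float RGB; K: 3x3 pinhole intrinsics. Returns per-vertex
    # color differences for the vertices that project inside the image.
    uv = (K @ verts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                  # perspective division
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    ok = (u >= 0) & (u < image.shape[1]) & (v >= 0) & (v < image.shape[0])
    return albedo[ok] - image[v[ok], u[ok]]

# Toy usage: a flat gray 480x640 image and two points two meters from the camera.
K = np.array([[525.0, 0.0, 320.0], [0.0, 525.0, 240.0], [0.0, 0.0, 1.0]])
img = np.full((480, 640, 3), 0.5)
pts = np.array([[0.0, 0.0, 2.0], [0.1, -0.1, 2.0]])
alb = np.array([[0.6, 0.5, 0.4], [0.5, 0.5, 0.5]])
print(photometric_residuals(pts, alb, img, K))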

Abstract (Italian)

The development of new scanning technologies is increasing the importance of three-dimensional (3D) data, and the need for adequate algorithms to register them. Accurately registering 3D surfaces is important for identifying and tracking objects, building complete models from partial scans, and creating statistical models.

Registering 3D scans of the human body is fundamental in many applications, from biomedicine to the production of movies and video games; obtaining accurate and reliable registrations is difficult, however, because the human body is articulated and deforms non-rigidly.

In this thesis we address the problem of registering 3D scans of the human body. We begin by analyzing the state of the art, observing that: a) most 3D registration techniques use only geometric information, which is ambiguous in regions where the surface is smooth; b) there is a lack of adequate datasets and benchmarks in the field. The goal of this thesis is to address these problems.

In particular, we make three contributions. First, we propose a new registration technique for 3D scans of the human body that integrates geometric information with surface color information. Our technique first estimates the scene lighting, so as to factor the observed surface color into lighting effects and pure albedo; the extracted albedo is then used to create a high-resolution 3D body model. This model is aligned to a set of 2D images, acquired simultaneously with the 3D scans, using a robust matching function.

Second, building on the registrations produced by our technique, we propose a new dataset for 3D registration algorithms, FAUST (Fine Alignment Using Scan Texture). FAUST collects 300 3D scans of 10 subjects in different poses. It is the first dataset providing both real scans and accurate, reliable ("ground truth") registrations for them.

Third, we explore possible uses of our approach in dermatology. By combining our registration technique with a segmentation algorithm for melanocytic lesions, we propose a screening system able to detect the appearance of new lesions, or changes in pre-existing ones, over almost the entire skin surface; such a system helps dermatologists identify potential melanomas.

We conclude this thesis by examining the importance of using color information to register 3D scans acquired in dynamic sequences. In particular, we propose a new approach for obtaining realistic, complete 3D models of the human body from sequences acquired with a single Kinect.

EPrint type: Doctoral thesis
Supervisor: Peserico, Enoch
PhD programme (courses and schools): Cycle 26 > Schools 26 > INGEGNERIA DELL'INFORMAZIONE > SCIENZA E TECNOLOGIA DELL'INFORMAZIONE
Thesis deposit date: 30 January 2015
Year of publication: 30 January 2015
Keywords (Italian / English): 3D registration, human body modeling, texture mapping, appearance modeling, 3D mesh
MIUR scientific-disciplinary sectors: Area 09 - Ingegneria industriale e dell'informazione > ING-INF/05 Sistemi di elaborazione delle informazioni
Reference structure: Departments > Dipartimento di Ingegneria dell'Informazione
ID code: 7833
Deposited on: 16 Nov 2015 12:06

