Example-Based Retargeting of Human Motion to Arbitrary Mesh Models


Creative Commons License

ÇELİKCAN U., YAZ I. O., Capin T.

COMPUTER GRAPHICS FORUM, vol.34, no.1, pp.216-227, 2015 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 34 Issue: 1
  • Publication Date: 2015
  • DOI: 10.1111/cgf.12507
  • Journal Name: COMPUTER GRAPHICS FORUM
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus
  • Page Numbers: pp.216-227
  • Hacettepe University Affiliated: Yes

Abstract

We present a novel method for retargeting human motion to arbitrary 3D mesh models with as little user interaction as possible. Traditional motion-retargeting systems try to preserve the original motion, while satisfying several motion constraints. Our method uses a few pose-to-pose examples provided by the user to extract the desired semantics behind the retargeting process while not limiting the transfer to being only literal. Thus, mesh models with different structures and/or motion semantics from humanoid skeletons become possible targets. Also considering the fact that most publicly available mesh models lack additional structure (e.g. skeleton), our method dispenses with the need for such a structure by means of a built-in surface-based deformation system. As deformation for animation purposes may require non-rigid behaviour, we augment existing rigid deformation approaches to provide volume-preserving and squash-and-stretch deformations. We demonstrate our approach on well-known mesh models along with several publicly available motion-capture sequences.
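The volume-preserving squash-and-stretch behaviour mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's deformation system; it only shows the standard volume-preservation idea, where stretching by a factor s along one axis is compensated by scaling the two perpendicular axes by 1/sqrt(s). The function name and axis convention are illustrative assumptions.

```python
import numpy as np

def squash_and_stretch(vertices: np.ndarray, s: float, axis: int = 1) -> np.ndarray:
    """Scale vertices by s along `axis` and by 1/sqrt(s) along the two
    perpendicular axes, so the enclosed volume is preserved:
    s * (1/sqrt(s))**2 == 1.
    `vertices` is an (N, 3) array; scaling is about the origin.
    """
    scale = np.full(3, 1.0 / np.sqrt(s))
    scale[axis] = s
    return vertices * scale

# Example: stretch a unit cube to twice its height.
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                dtype=float)
stretched = squash_and_stretch(cube, 2.0, axis=1)
extents = stretched.max(axis=0) - stretched.min(axis=0)
print(extents, np.prod(extents))  # bounding-box volume stays 1.0
```

In an actual deformation system such a scale would be applied per region and blended with the surface-based rigid deformation, rather than globally about the origin as in this toy example.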