Multimodal Motion Guidance: Techniques for Adaptive and Dynamic Feedback
Lab project in the area of Virtual and Augmented Reality.
About this Project
The ability to guide human motion through automatically generated feedback has significant potential for applications in areas such as motor learning, human-computer interaction, telepresence, and augmented reality.
This work focuses on the design and development of such systems from a human cognition and perception perspective. We analyze the dimensions of the design space for motion guidance systems, spanned by technologies and human information processing, and identify opportunities for new feedback techniques.
Project Partners
Kenichiro Fukushi, Alex Olwal, Ramesh Raskar (MIT Media Lab)
Additional Information
Results
We present a novel motion guidance system that was implemented based on these insights and enables feedback for position, direction, and continuous velocity. It uses motion capture to track the user in space and guides them with visual, vibro-tactile, and pneumatic actuation. The system also introduces motion re-targeting through time warping, motion dynamics, and prediction, allowing greater flexibility and adaptability to user performance.
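The re-targeting idea can be illustrated with a small sketch. The Python snippet below is a minimal, hypothetical illustration (all function names and parameters are our own, not the project's code) of how a target pose might be sampled by the user's progress rather than wall-clock time (time warping), and how the remaining positional error could drive a directional visual cue and a vibro-tactile intensity.

```python
# Hypothetical sketch, not the project's implementation: map tracked pose
# error against a time-warped target trajectory to simple feedback cues.
import numpy as np

def warp_target(trajectory, user_progress):
    """Re-target the reference motion in time: sample the target pose at the
    user's current progress (0..1) instead of at wall-clock time, so a slow
    or fast user is still guided toward the next meaningful pose."""
    idx = user_progress * (len(trajectory) - 1)
    lo, hi = int(np.floor(idx)), int(np.ceil(idx))
    t = idx - lo
    return (1 - t) * trajectory[lo] + t * trajectory[hi]  # linear interpolation

def guidance_cues(user_pos, target_pos, max_error=0.3):
    """Turn the positional error into a direction cue (e.g. for a visual arrow)
    and a normalized intensity (e.g. for vibro-tactile or pneumatic actuation)."""
    error = target_pos - user_pos
    distance = np.linalg.norm(error)
    direction = error / distance if distance > 1e-6 else np.zeros(3)
    intensity = min(distance / max_error, 1.0)  # saturate at max_error metres
    return direction, intensity

# Example: the user is 60% through the motion and slightly off the reference path.
reference = np.array([[0.0, 0.0, 0.0], [0.2, 0.1, 0.0], [0.4, 0.3, 0.1]])
target = warp_target(reference, user_progress=0.6)
direction, intensity = guidance_cues(np.array([0.25, 0.05, 0.0]), target)
print(direction, intensity)
```

In such a scheme, prediction could be added by sampling the target slightly ahead of the measured progress so that feedback compensates for human reaction time; the sketch omits this for brevity.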
Current Work
We are currently working on 3D interaction and multi-sensory feedback technologies in ambient mixed reality scenarios, using a mobile hardware setup.
Downloads
Multimodal Motion Guidance Video (MPEG-4 video, 92.6 MB)