

Creativity tools for digital media have been largely democratised, offering a range from beginner to expert tools. Yet computer animation, the art of instilling life into believable characters and fantastic worlds, is still a highly sophisticated process restricted to the spheres of expert users. The recent trend in human-computer interaction towards interfaces that are more direct, intuitive, and natural to use has so far hardly touched the animation world: decades of interaction research have scarcely been linked to research and development of animation techniques. This is largely due to the methods employed: in keyframe animation, dynamics are indirectly specified via abstract descriptions, while performance animation suffers from inflexibility due to a high technological overhead. The hypothesis of this work is that an interaction approach to computer animation can inform the design and development of novel animation techniques. Three goals are formulated to demonstrate the validity of this thesis. Computer animation methods and interfaces must be embedded in an interaction context. The insights this brings for designing next-generation animation tools must be examined and formalised. The practical consequences for the development of motion creation and editing tools must be demonstrated with prototypes that are more direct, efficient, easy to learn, and flexible to use. The foundation of this procedure is a conceptual framework in the form of a comprehensive discussion of the state of the art, a design space of interfaces for time-based visual media, and a taxonomy for mappings between user and medium space-time. Based on this, an interaction-centred analysis of computer animation culminates in the concept of direct animation interfaces and guidelines for their design. These guidelines are tested in two point designs for direct input devices. The design, implementation, and testing of a surface-based performance animation tool takes a systems approach, addressing interaction design issues as well as challenges in extending current software architectures to support novel forms of animation control.

Videos are a convenient platform to begin, maintain, or improve a fitness program or physical activity. Traditional video systems allow users to manipulate videos through specific user interface actions such as button clicks or mouse drags, but they have no model of what the user is doing and are unable to adapt in useful ways. We present adaptive video playback, which seamlessly synchronizes video playback with the user's movements, building upon the principle of direct manipulation video navigation. We implement adaptive video playback in Reactive Video, a vision-based system which supports users learning or practicing a physical skill. The use of pre-existing videos removes the need to create bespoke content or specially authored videos, and the system can provide real-time guidance and feedback to better support users when learning new movements. Adaptive video playback using a discrete Bayes filter and a particle filter is evaluated on a data set of participants performing tai chi and radio exercises. Results show that both approaches can accurately adapt to the user's movements; however, reversing playback can be problematic.
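To illustrate the discrete Bayes filter variant of adaptive playback, the sketch below tracks which video frame best matches the user's current pose. This is a minimal, hypothetical reconstruction, not the system's actual implementation: the function name, the three-way motion model (fall behind / stay / advance), and the `pose_similarity` likelihood vector are all assumptions introduced for illustration.

```python
import numpy as np

def bayes_filter_step(belief, pose_similarity, transition=(0.1, 0.8, 0.1)):
    """One discrete Bayes filter update over video frame indices.

    belief          -- prior probability over frames, shape (n_frames,)
    pose_similarity -- likelihood of the observed user pose given each
                       reference frame, shape (n_frames,)
    transition      -- assumed motion model: probability that the user
                       falls behind, stays on, or advances past the
                       current frame between updates.
    """
    back, stay, fwd = transition
    # Predict: spread each frame's probability mass to its neighbours.
    # np.roll wraps around at the clip boundaries, which is acceptable
    # for a sketch but would need clamping in a real player.
    predicted = (stay * belief
                 + fwd * np.roll(belief, 1)
                 + back * np.roll(belief, -1))
    # Update: weight the prediction by how well the user's pose
    # matches each reference frame, then renormalise.
    posterior = predicted * pose_similarity
    return posterior / posterior.sum()

# Usage: track the user's position in a 5-frame exercise clip.
belief = np.full(5, 0.2)                          # start fully uncertain
similarity = np.array([0.1, 0.1, 0.9, 0.2, 0.1])  # pose matches frame 2
belief = bayes_filter_step(belief, similarity)
print(belief.argmax())  # → 2, the most likely current frame
```

Playback speed then falls out of the filter for free: rendering whichever frame carries the most probability mass makes the video speed up, slow down, or reverse with the user, which also hints at why reversing can be fragile when the likelihoods are ambiguous.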
