Motion synthesis via adaptive guidance experts

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Existing methods for pose-centric motion synthesis often rely
on specialised architectural components to understand how motion changes
over time. In this paper, we propose a novel approach that incorporates
adaptive instance normalisation within a dedicated Adaptive Guidance
Expert, integrated into a Mixture-of-Experts (MoE) system. This design
enhances the model’s ability to capture temporal coherence in synthetic
motions. Our method achieves state-of-the-art performance in generating
realistic motion, as measured by Fréchet Inception Distance (FID),
while maintaining comparable diversity. We validate its effectiveness on
the CMU MoCap and HumanAct12 datasets, demonstrating its capability
to create plausible and high-quality motion sequences suitable for a
wide range of applications.
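For readers unfamiliar with the normalisation technique named in the abstract, the sketch below shows the standard adaptive instance normalisation (AdaIN) operation in plain NumPy. This is only a generic illustration of AdaIN, not the paper's Adaptive Guidance Expert or MoE integration; the array shape convention (features × time) is an assumption.

```python
import numpy as np

def adaptive_instance_norm(content, style, eps=1e-5):
    """Standard AdaIN: normalise `content` per instance, then rescale it
    with the first- and second-order statistics of `style`.

    Assumed shape convention (not from the paper): (features, time),
    with statistics taken over the temporal axis.
    """
    c_mean = content.mean(axis=-1, keepdims=True)
    c_std = content.std(axis=-1, keepdims=True) + eps
    s_mean = style.mean(axis=-1, keepdims=True)
    s_std = style.std(axis=-1, keepdims=True) + eps
    # Centre/scale the content, then shift/scale to the style statistics.
    return s_std * (content - c_mean) / c_std + s_mean
```

After the operation, each feature channel of the output carries the mean and standard deviation of the corresponding style channel, which is what lets a conditioning signal modulate normalised features.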
Original language: English
Title of host publication: Computer Vision, Imaging and Computer Graphics Theory and Applications
Publisher: Springer Nature
Publication status: Accepted/In press - 17 Sept 2025
Event: 20th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Porto, Portugal
Duration: 26 Feb 2025 - 28 Feb 2025

Publication series

Name: Communications in Computer and Information Science
ISSN (Print): 1865-0929
ISSN (Electronic): 1865-0937

Conference

Conference: 20th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications
Period: 26/02/25 - 28/02/25
