PolySLGen: Online Multimodal Speaking–Listening Reaction Generation in Polyadic Interaction
April 8, 2026
Zhi-Yi Lin
Thomas Markhorst
Jouh Yeong Chew
Xucong Zhang
Abstract
Human-like multimodal reaction generation is essential for natural group interactions between humans and intelligent embodied AI. However, existing approaches are often limited to single-modality or speaking-only responses in dyadic interactions, making them unsuitable for realistic social scenarios. Many also overlook nonverbal cues and the complex dynamics of polyadic interactions, both of which are critical for engagement and conversational coherence. In this work, we present PolySLGen, an online framework for Polyadic multimodal Speaking and Listening reaction Generation. Given past conversation and motion from all participants, PolySLGen generates a future speaking or listening reaction for a target participant, including speech, body motion, and a speaking-state score. To model group interactions effectively, we propose a pose fusion module and a social cue encoder that jointly aggregate motion and social signals from the group. Extensive quantitative and qualitative evaluations show that PolySLGen produces contextually appropriate and temporally coherent multimodal reactions, outperforming several adapted and state-of-the-art baselines in motion quality, motion-speech alignment, speaking-state prediction, and human-perceived realism. Source code will be made publicly available.
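To make the described interface concrete, below is a minimal, hypothetical PyTorch sketch of the setup the abstract outlines: given past motion from all participants and group-level social cues, produce a future reaction for a target participant as motion, speech features, and a speaking-state score. All module names, dimensions, and the attention-based fusion here are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of the interface described in the abstract.
# Shapes, module names, and fusion strategy are assumptions for illustration.
import torch
import torch.nn as nn


class PoseFusion(nn.Module):
    """Aggregate per-participant motion features into a target-centric representation."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, target_pose: torch.Tensor, group_pose: torch.Tensor) -> torch.Tensor:
        # target_pose: (B, T, D); group_pose: (B, P, T, D) for P participants.
        B, P, T, D = group_pose.shape
        context = group_pose.permute(0, 2, 1, 3).reshape(B * T, P, D)
        query = target_pose.reshape(B * T, 1, D)
        fused, _ = self.attn(query, context, context)  # attend over participants per frame
        return fused.reshape(B, T, D)


class SocialCueEncoder(nn.Module):
    """Encode group-level social signals (e.g., speaking activity) into a shared feature."""

    def __init__(self, cue_dim: int, dim: int):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(cue_dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, cues: torch.Tensor) -> torch.Tensor:
        # cues: (B, P, T, cue_dim) -> pooled over participants: (B, T, D)
        return self.proj(cues).mean(dim=1)


class PolySLGenSketch(nn.Module):
    """Joint heads for motion, speech features, and a speaking-state score."""

    def __init__(self, dim: int = 256, cue_dim: int = 8, motion_dim: int = 72, speech_dim: int = 80):
        super().__init__()
        self.pose_fusion = PoseFusion(dim)
        self.social_encoder = SocialCueEncoder(cue_dim, dim)
        self.temporal = nn.GRU(2 * dim, dim, batch_first=True)
        self.motion_head = nn.Linear(dim, motion_dim)   # e.g., joint rotations per frame
        self.speech_head = nn.Linear(dim, speech_dim)   # e.g., acoustic feature frames
        self.state_head = nn.Linear(dim, 1)             # speaking-state score in [0, 1]

    def forward(self, target_pose, group_pose, cues):
        fused = self.pose_fusion(target_pose, group_pose)
        social = self.social_encoder(cues)
        h, _ = self.temporal(torch.cat([fused, social], dim=-1))
        return self.motion_head(h), self.speech_head(h), torch.sigmoid(self.state_head(h))


# Example: 4 participants, 30 past frames of features for the target participant.
model = PolySLGenSketch()
motion, speech, state = model(
    torch.randn(1, 30, 256), torch.randn(1, 4, 30, 256), torch.randn(1, 4, 30, 8)
)
print(motion.shape, speech.shape, state.shape)  # (1, 30, 72) (1, 30, 80) (1, 30, 1)
```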