PolySLGen: Online Multimodal Speaking–Listening Reaction Generation in Polyadic Interaction
In this work, we present PolySLGen, an online framework for Polyadic multimodal Speaking and Listening reaction Generation.
zhi-yi-lin
We propose MuPPet, a novel multi-person 2D-to-3D pose lifting framework that explicitly models inter-person correlations.
In this paper, we jointly address image classification and image denoising, aiming to enhance human perception of noisy images captured by edge devices, such as low-light security …