Qianli Ma

Hi there! I am a research scientist at NVIDIA Research. I obtained my PhD at the Max Planck Institute for Intelligent Systems and ETH Zürich, co-advised by Michael Black and Siyu Tang. Prior to that, I received a Master's degree in Optics and Photonics from the Karlsruhe Institute of Technology and a Bachelor's degree in Physics from Peking University.

My research uses machine learning to solve computer vision and graphics problems, with a current focus on generative models, deformable 3D shape modeling, and their applications in creating realistic 3D virtual human models.

Email / Google Scholar / Twitter / GitHub

Publications
Dynamic Point Fields
Sergey Prokudin, Qianli Ma, Maxime Raafat, Julien Valentin, Siyu Tang
ICCV, 2023 (Oral)
Project Page / Code / Colab / arXiv / Video

Explicit point-based representation + implicit deformation field = dynamic surface models with instant inference and high-quality geometry. Robust single-scan animation of challenging clothing types, even under extreme poses.


Probabilistic Human Mesh Recovery in 3D Scenes from Egocentric Views
Siwei Zhang, Qianli Ma, Yan Zhang, Sadegh Aliakbarian, Darren Cosker, Siyu Tang
ICCV, 2023 (Oral)
Project Page / Code / arXiv / Video

Generative human mesh recovery for images with body occlusions and truncations: scene-conditioned diffusion model + collision-guided sampling = accurate pose estimation for observed body parts and plausible generation of unobserved parts.


Neural Point-based Shape Modeling of Humans in Challenging Clothing
Qianli Ma, Jinlong Yang, Michael J. Black, Siyu Tang
3DV, 2022
Project Page / Code / arXiv

The power of point-based digital human representations, further unleashed: SkiRT models the dynamic shapes of 3D clothed humans, including those wearing challenging outfits such as skirts and dresses.


EgoBody: Human Body Shape and Motion of Interacting People from Head-Mounted Devices
Siwei Zhang, Qianli Ma, Yan Zhang, Zhiyin Qian, Taein Kwon, Marc Pollefeys, Federica Bogo, Siyu Tang
ECCV, 2022
Project Page / Code / Dataset / arXiv / Video

A large-scale dataset of accurate 3D body shape, pose, and motion of humans interacting in 3D scenes, with multi-modal streams from third-person and egocentric views, captured by Azure Kinects and a HoloLens 2.


The Power of Points for Modeling Humans in Clothing
Qianli Ma, Jinlong Yang, Siyu Tang, Michael J. Black
ICCV, 2021
Project Page / Code / Dataset / arXiv / Video

PoP — a point-based, unified model for multiple subjects and outfits that can turn a single, static 3D scan into an animatable avatar with natural pose-dependent clothing deformations.


MetaAvatar: Learning Animatable Clothed Human Models from Few Depth Images
Shaofei Wang, Marko Mihajlovic, Qianli Ma, Andreas Geiger, Siyu Tang
NeurIPS, 2021
Project Page / Code / arXiv / Video

Creating avatars of unseen subjects from as few as eight monocular depth images, using a meta-learned, multi-subject, articulated neural signed distance field model for clothed humans.


SCALE: Modeling Clothed Humans with a Surface Codec of Articulated Local Elements
Qianli Ma, Shunsuke Saito, Jinlong Yang, Siyu Tang, Michael J. Black
CVPR, 2021
Project Page / Code / arXiv / Video

Modeling pose-dependent shapes of clothed humans explicitly with hundreds of articulated surface elements: the clothing deforms naturally even in the presence of topological change.


SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks
Shunsuke Saito, Jinlong Yang, Qianli Ma, Michael J. Black
CVPR, 2021 (Best Paper Candidate)
Project Page / Code / arXiv / Video

Cycle-consistent implicit skinning fields + locally pose-aware implicit function = a fully animatable avatar with an implicit surface, learned from raw scans without surface registration.


PLACE: Proximity Learning of Articulation and Contact in 3D Environments
Siwei Zhang, Yan Zhang, Qianli Ma, Michael J. Black, Siyu Tang
3DV, 2020
Project Page / Code / arXiv / Video

An explicit representation for 3D person-scene contact relations that enables automated synthesis of realistic humans posed naturally in a given scene.


Learning to Dress 3D People in Generative Clothing
Qianli Ma, Jinlong Yang, Anurag Ranjan, Sergi Pujades, Gerard Pons-Moll, Siyu Tang, Michael J. Black
CVPR, 2020
Project Page / Code / Dataset / arXiv / Full Video / 1-min Video

CAPE — a graph-CNN-based generative model and a large-scale dataset of 3D human meshes in clothing, spanning varied poses and garment types.