Qianli Ma

I am a PhD student at the Max Planck Institute for Intelligent Systems and ETH Zürich, co-supervised by Michael Black and Siyu Tang. I am also affiliated with the Max Planck ETH Center for Learning Systems. Prior to this, I received a Master's degree in Optics and Photonics from the Karlsruhe Institute of Technology and a Bachelor's degree in Physics from Peking University.

My research uses machine learning to solve computer vision and graphics problems, with a current focus on 3D representations and deformable 3D shape modeling.

Email  /  Google Scholar  /  Twitter  /  Github


The Power of Points for Modeling Humans in Clothing
Qianli Ma, Jinlong Yang, Siyu Tang, Michael J. Black
ICCV, 2021
Project Page / Code / arXiv / Video

POP — a point-based, unified model for multiple subjects and outfits that can turn a single, static 3D scan into an animatable avatar with natural pose-dependent clothing deformations.

MetaAvatar: Learning Animatable Clothed Human Models from Few Depth Images
Shaofei Wang, Marko Mihajlovic, Qianli Ma, Andreas Geiger, Siyu Tang
NeurIPS, 2021
Project Page / Code / arXiv / Video

A multi-subject, articulated, neural signed distance field model for clothed humans that can quickly create avatars of unseen subjects from as few as 8 monocular depth images.

SCALE: Modeling Clothed Humans with a Surface Codec of Articulated Local Elements
Qianli Ma, Shunsuke Saito, Jinlong Yang, Siyu Tang, Michael J. Black
CVPR, 2021
Project Page / Code / arXiv / Video

Modeling the pose-dependent shapes of clothed humans explicitly with hundreds of articulated surface elements: the clothing deforms naturally, even in the presence of topological changes.

SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks
Shunsuke Saito, Jinlong Yang, Qianli Ma, Michael J. Black
CVPR, 2021   (Oral, Best Paper nominee)
Project Page / Code / arXiv / Video

Creating avatars with pose-dependent clothing deformation from raw scans, without template surface registration.

PLACE: Proximity Learning of Articulation and Contact in 3D Environments
Siwei Zhang, Yan Zhang, Qianli Ma, Michael J. Black, Siyu Tang
3DV, 2020
Project Page / Code / arXiv / Video

An explicit representation for 3D person-scene contact relations that enables automated synthesis of realistic humans posed naturally in a given scene.

Learning to Dress 3D People in Generative Clothing
Qianli Ma, Jinlong Yang, Anurag Ranjan, Sergi Pujades, Gerard Pons-Moll, Siyu Tang, Michael J. Black
CVPR, 2020
Project Page / Code / arXiv / Dataset / Full Video / 1-min Video / Slides

CAPE — a graph-CNN-based generative model and a large-scale dataset of clothed 3D human meshes in varied poses and garment types.

Template adapted from this awesome page.