Qianli Ma

I am a PhD student at the Max Planck Institute for Intelligent Systems and ETH Zürich, co-supervised by Michael Black and Siyu Tang. I am also associated with the Max Planck ETH Center for Learning Systems. Prior to this, I received my Master's degree in Optics and Photonics from the Karlsruhe Institute of Technology and my Bachelor's degree in Physics from Peking University.

My research uses machine learning to solve computer vision and graphics problems, with a current focus on 3D representations and deformable 3D shape modeling.

Email  /  Google Scholar  /  Twitter  /  Github

Research

Neural Point-based Shape Modeling of Humans in Challenging Clothing
Qianli Ma, Jinlong Yang, Michael J. Black, Siyu Tang
3DV, 2022
Project Page / Code / arXiv

SkiRT further unleashes the power of point-based digital human representations: it models the dynamic shapes of 3D clothed humans, including those wearing challenging outfits such as skirts and dresses.

EgoBody: Human Body Shape and Motion of Interacting People from Head-Mounted Devices
Siwei Zhang, Qianli Ma, Yan Zhang, Zhiyin Qian, Taein Kwon, Marc Pollefeys, Federica Bogo, Siyu Tang
ECCV, 2022
Project Page / Dataset / Code / arXiv / Video / 🔥 ECCV Challenge

A large-scale dataset of accurate 3D body shape, pose, and motion of humans interacting in 3D scenes, with multi-modal streams from third-person and egocentric views, captured by Azure Kinects and a HoloLens 2.

The Power of Points for Modeling Humans in Clothing
Qianli Ma, Jinlong Yang, Siyu Tang, Michael J. Black
ICCV, 2021
Project Page / Code / Dataset / arXiv / Video

PoP — a point-based, unified model for multiple subjects and outfits that can turn a single, static 3D scan into an animatable avatar with natural pose-dependent clothing deformations.

MetaAvatar: Learning Animatable Clothed Human Models from Few Depth Images
Shaofei Wang, Marko Mihajlovic, Qianli Ma, Andreas Geiger, Siyu Tang
NeurIPS, 2021
Project Page / Code / arXiv / Video

Creating avatars of unseen subjects from as few as eight monocular depth images, using a meta-learned, multi-subject, articulated neural signed distance field model for clothed humans.

SCALE: Modeling Clothed Humans with a Surface Codec of Articulated Local Elements
Qianli Ma, Shunsuke Saito, Jinlong Yang, Siyu Tang, Michael J. Black
CVPR, 2021
Project Page / Code / arXiv / Video

Modeling pose-dependent shapes of clothed humans explicitly with hundreds of articulated surface elements: the clothing deforms naturally even in the presence of topological changes.

SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks
Shunsuke Saito, Jinlong Yang, Qianli Ma, Michael J. Black
CVPR, 2021   (Best Paper nominee)
Project Page / Code / arXiv / Video

Cycle-consistent implicit skinning fields + a locally pose-aware implicit function = a fully animatable avatar with an implicit surface, learned from raw scans without surface registration.

PLACE: Proximity Learning of Articulation and Contact in 3D Environments
Siwei Zhang, Yan Zhang, Qianli Ma, Michael J. Black, Siyu Tang
3DV, 2020
Project Page / Code / arXiv / Video

An explicit representation for 3D person-scene contact relations that enables automated synthesis of realistic humans posed naturally in a given scene.

Learning to Dress 3D People in Generative Clothing
Qianli Ma, Jinlong Yang, Anurag Ranjan, Sergi Pujades, Gerard Pons-Moll, Siyu Tang, Michael J. Black
CVPR, 2020
Project Page / Code / Dataset / arXiv / Full Video / 1-min Video / Slides

CAPE — a graph-CNN-based generative model and a large-scale dataset of 3D human meshes in clothing, covering varied poses and garment types.

Template adapted from this awesome page.