Mani-GS: Gaussian Splatting Manipulation with Triangular Mesh

arXiv 2024
Xiangjun Gao 1*,   Xiaoyu Li 2*,   Yiyu Zhuang 3,   Qi Zhang 2,   Wenbo Hu 2,
Chaopeng Zhang 2,   Yao Yao 3,   Ying Shan 2,   Long Quan 1

1 HKUST,   2 Tencent,   3 Nanjing University

Abstract

Neural 3D representations such as Neural Radiance Fields (NeRFs) excel at producing photo-realistic rendering results but lack the flexibility for manipulation and editing, which is crucial for content creation. Previous works have attempted to address this issue by deforming a NeRF in canonical space or manipulating the radiance field based on an explicit mesh. However, manipulating a NeRF is not highly controllable and requires long training and inference times. With the emergence of 3D Gaussian Splatting (3DGS), extremely high-fidelity novel view synthesis can be achieved using an explicit point-based 3D representation with much faster training and rendering. However, there is still a lack of effective means to manipulate 3DGS freely while maintaining rendering quality. In this work, we aim to tackle the challenge of achieving manipulable photo-realistic rendering. We propose to utilize a triangular mesh to manipulate 3DGS directly with self-adaptation, which reduces the need to design separate algorithms for different types of Gaussian manipulation. By utilizing a triangle shape-aware Gaussian binding and adapting method, we achieve 3DGS manipulation while preserving high-fidelity rendering after manipulation. Our approach is capable of handling large deformations, local manipulations, and even physics simulations while keeping high-quality rendering. Furthermore, we demonstrate that our method remains effective with inaccurate meshes extracted from 3DGS. Experiments conducted on NeRF synthetic datasets demonstrate the effectiveness of our method and its superiority over baseline approaches.

Method Overview

(1) First, we extract a triangular mesh from 3DGS or a neural surface field. (2) Next, we bind N Gaussians to each triangle in the local triangle space and optimize the local Gaussian attributes ({u, R, s, o, c}). The triangle attributes ({u, R, e}) are computed from the triangle vertices. (3) Finally, we manipulate the GS by transferring the mesh manipulation directly, thus achieving manipulable rendering.
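The binding step above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the triangle attributes {u, R, e} are the centroid, a local frame built from one edge and the face normal, and an area-derived scale, and that a bound Gaussian's world-space mean is obtained by mapping its learned local offset through that frame. `triangle_frame` and `local_to_world` are hypothetical helper names.

```python
import numpy as np

def triangle_frame(v0, v1, v2):
    """Per-triangle attributes {u, R, e} (assumed definitions):
    centroid u, local rotation R, and an area-based scale e."""
    u = (v0 + v1 + v2) / 3.0                 # centroid
    e1 = v1 - v0
    n = np.cross(e1, v2 - v0)                # (unnormalized) face normal
    x = e1 / np.linalg.norm(e1)              # local x: along one edge
    z = n / np.linalg.norm(n)                # local z: face normal
    y = np.cross(z, x)                       # local y: completes the frame
    R = np.stack([x, y, z], axis=1)          # columns are the local axes
    e = np.sqrt(np.linalg.norm(n) / 2.0)     # sqrt(triangle area) as scale
    return u, R, e

def local_to_world(mu_local, u, R, e):
    """Map a bound Gaussian's local mean into world space.
    When the mesh is deformed, re-deriving {u, R, e} from the moved
    vertices transfers the manipulation to the Gaussians."""
    return u + e * (R @ mu_local)
```

Because the local attributes are optimized once and the frame is recomputed from the deformed vertices at manipulation time, the Gaussians follow the mesh automatically, including stretching with the triangle via the scale term.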

Large Deformation

Soft Body Simulation

Local Manipulation

Manipulation Results on DTU

The left two columns show the geometry and rendered image before manipulation, while the right three columns show the geometry and rendered image after manipulation. To highlight the deformed area, we enclose it within a red rectangle. The mesh proxy is extracted using screened Poisson reconstruction and edited in Blender.