Open-Vocabulary 3D Semantic Segmentation with Text-to-Image Diffusion Models

CMU, Google DeepMind, Google, HKUST
ECCV 2024

Abstract

In this paper, we investigate the use of diffusion models pre-trained on large-scale image-caption pairs for open-vocabulary 3D semantic understanding. We propose a novel method, Diff2Scene, which leverages frozen representations from text-to-image generative models, along with salient-aware and geometric-aware masks, for open-vocabulary 3D semantic segmentation and visual grounding tasks. Diff2Scene requires no labeled 3D data and effectively identifies objects, appearances, materials, locations, and their compositions in 3D scenes. We show that it outperforms competitive baselines and achieves significant improvements over state-of-the-art methods. In particular, Diff2Scene improves over the state-of-the-art method on ScanNet200 by 12%.

Open-Vocabulary 3D Perception: Problem Definition

Illustration of open-vocabulary 3D semantic scene understanding. We propose Diff2Scene, a 3D model that performs open-vocabulary semantic segmentation and visual grounding given novel text prompts, without relying on any annotated 3D data. By leveraging discriminative and generative 2D foundation models, Diff2Scene can handle a wide variety of novel text queries covering both common and rare classes, such as “desk” and “soap dispenser”. It can also handle compositional queries, such as “find the white sneakers that are closer to the desk chair.”
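
To make the open-vocabulary formulation above concrete, below is a minimal sketch of the generic zero-shot recipe: text prompts and 3D features are compared in a shared CLIP-like embedding space, and each point takes the label of its nearest prompt. This is an illustration under assumed names and shapes (point_feats, text_embeds), not the authors' implementation.

import torch
import torch.nn.functional as F

num_points, dim = 10000, 512
prompts = ["desk", "soap dispenser", "chair", "sneakers"]

# Placeholder embeddings; in practice text_embeds would come from a frozen
# text encoder (e.g. CLIP's) and point_feats from the 3D model.
text_embeds = F.normalize(torch.randn(len(prompts), dim), dim=-1)
point_feats = F.normalize(torch.randn(num_points, dim), dim=-1)

# Cosine similarity between every 3D point and every text query, then an
# argmax assigns each point an open-vocabulary label.
similarity = point_feats @ text_embeds.T        # (num_points, num_prompts)
labels = similarity.argmax(dim=-1)              # per-point prompt index
print([prompts[i] for i in labels[:5].tolist()])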

Open-Vocabulary 3D Perception: Method Overview

Illustration of open-vocabulary 3D perception methods. (a) Directly minimizing the per-point feature distance between the CLIP-based model and the tuned 3D model. (b) Using a 3D mask proposal network trained on labeled 3D data to produce class-agnostic masks, and then pooling the corresponding representations from the CLIP feature map. (c) Our proposed mask-distillation approach, Diff2Scene, which uses Stable Diffusion and performs mask-based distillation. Diff2Scene leverages the semantically rich mask embeddings from 2D foundation models and the geometrically accurate masks from the tuned 3D model, and thus achieves superior performance compared to previous methods.
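
For illustration, here is a hedged sketch of the distillation targets in (a) and (c). Shapes and loss choices are assumptions: the cosine loss in (a) follows the per-point distillation described above, while the binary cross-entropy against projected 2D masks in (c) is one plausible reading of mask-based distillation, not the paper's exact objective.

import torch
import torch.nn.functional as F

P, D, M = 2048, 512, 32   # points, feature dim, number of masks

# (a) Per-point distillation: pull the tuned 3D model's features toward
# CLIP features back-projected onto the points (random placeholders here).
feats_3d = F.normalize(torch.randn(P, D, requires_grad=True), dim=-1)
feats_clip = F.normalize(torch.randn(P, D), dim=-1)
loss_pointwise = (1 - (feats_3d * feats_clip).sum(dim=-1)).mean()

# (c) Mask-based distillation: the 3D model predicts per-point mask logits,
# supervised by 2D probabilistic masks projected onto the visible points.
mask_logits_3d = torch.randn(P, M, requires_grad=True)
masks_2d = torch.rand(P, M)   # projected 2D mask probabilities
loss_mask = F.binary_cross_entropy_with_logits(mask_logits_3d, masks_2d)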

Method

Overview of our method. Diff2Scene is an open-vocabulary 3D semantic understanding model with two branches. The 2D branch is a diffusion-based 2D semantic segmentation model: it takes a 2D image as input and predicts a set of 2D probabilistic masks together with corresponding semantically rich mask embeddings. The 3D branch takes the point cloud and the 2D mask embeddings as input; the mask embeddings serve as “semantic queries” that generate the corresponding 3D probabilistic masks (see the sketch below). The model learns salient patterns from the RGB images and geometric information from the point clouds.
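
A minimal sketch of this two-branch decoding, assuming a Mask2Former-style design in which the 2D mask embeddings act as queries and each 3D probabilistic mask comes from a dot product between a query and per-point features. Module names (PointBackbone, Diff2SceneSketch) and dimensions are hypothetical stand-ins, not the released code.

import torch
import torch.nn as nn

class PointBackbone(nn.Module):
    """Stand-in for a sparse 3D backbone producing per-point features."""
    def __init__(self, dim=256):
        super().__init__()
        self.proj = nn.Linear(6, dim)       # xyz + rgb -> feature

    def forward(self, points):              # points: (P, 6)
        return self.proj(points)            # (P, dim)

class Diff2SceneSketch(nn.Module):
    def __init__(self, dim=256, embed_dim=512):
        super().__init__()
        self.backbone = PointBackbone(dim)
        self.query_proj = nn.Linear(embed_dim, dim)

    def forward(self, points, mask_embeds):
        # mask_embeds: (M, embed_dim), the semantically rich embeddings
        # produced by the frozen diffusion-based 2D branch.
        point_feats = self.backbone(points)      # (P, dim)
        queries = self.query_proj(mask_embeds)   # (M, dim) semantic queries
        # Each query yields one 3D probabilistic mask over the points.
        return (point_feats @ queries.T).sigmoid()   # (P, M)

model = Diff2SceneSketch()
masks_3d = model(torch.randn(4096, 6), torch.randn(32, 512))  # (4096, 32)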

Qualitative Results on Zero-Shot Semantic Segmentation

Qualitative results from our model and OpenScene on zero-shot semantic segmentation. Our model predicts more coherent masks with more accurate semantic labels than OpenScene, for both head and tail categories.

Qualitative Results on Zero-Shot Visual Grounding

Qualitative results from our model and OpenScene on zero-shot visual grounding. Our open-vocabulary semantic understanding model handles a variety of novel and compositional queries: novel object classes, as well as objects described by color, shape, appearance, location, and usage, are successfully retrieved. Note that the located points are highlighted in yellow.
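
As a sketch of how such grounding could work at inference time, under the same embedding-matching assumption as above: score each predicted mask embedding against the encoded query and highlight the points of the best-scoring 3D mask. All tensors below are random placeholders standing in for the frozen encoders' outputs.

import torch
import torch.nn.functional as F

P, M, D = 4096, 32, 512
mask_embeds = F.normalize(torch.randn(M, D), dim=-1)   # per-mask embeddings
masks_3d = torch.rand(P, M)                            # per-point mask probabilities
query = F.normalize(torch.randn(D), dim=-1)            # encoded text query

scores = mask_embeds @ query          # (M,) similarity of each mask to the query
best = scores.argmax()
located = masks_3d[:, best] > 0.5     # the points that would be shown in yellow
print(f"matched mask {best.item()} covering {int(located.sum())} points")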

BibTeX

@inproceedings{zhu2024open,
  author    = {Zhu, Xiaoyu and Zhou, Hao and Xing, Pengfei and Zhao, Long and Xu, Hao and Liang, Junwei and Hauptmann, Alexander and Liu, Ting and Gallagher, Andrew},
  title     = {Open-Vocabulary 3D Semantic Segmentation with Text-to-Image Diffusion Models},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year      = {2024}
}