Language-driven Scene Synthesis using Multi-conditional Diffusion Model

NeurIPS 2023

An Dinh Vuong1       Minh Nhat Vu2       Toan Nguyen1      
Baoru Huang3       Dzung Nguyen1       Thieu Vo4               Anh Nguyen5

1FPT Software AI Center   2ACIN - TU Wien   3Imperial College London  
4Ton Duc Thang University   5University of Liverpool

We introduce the Language-driven Scene Synthesis task, which leverages human-input text prompts to generate physically plausible and semantically reasonable objects.

Abstract

Scene synthesis is a challenging problem with several industrial applications. Recently, substantial efforts have been directed toward synthesizing scenes using human motions, room layouts, or spatial graphs as the input. However, few studies have addressed this problem from multiple modalities, especially by combining text prompts. In this paper, we propose language-driven scene synthesis, a new task that integrates text prompts, human motion, and existing objects for scene synthesis. Unlike single-condition synthesis tasks, our problem involves multiple conditions and requires a strategy for processing and encoding them into a unified space. To address this challenge, we present a multi-conditional diffusion model, which differs from the implicit unification approach of other diffusion literature by explicitly predicting guiding points for the original data distribution. We demonstrate that our approach is theoretically grounded. Extensive experimental results show that our method outperforms state-of-the-art benchmarks and enables natural scene editing applications.
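To make the multi-conditional setup concrete, the sketch below shows one way the three input modalities (text prompt, human motion, existing objects) could be projected into a shared embedding space before diffusion. This is a minimal PyTorch sketch under our own assumptions; the module names, layer sizes, and feature dimensions are illustrative, not the paper's implementation.

```python
import torch
import torch.nn as nn

# A minimal sketch (not the authors' code) of encoding the three
# conditions -- text prompt, human motion, and existing objects --
# into one unified embedding space. All dimensions are assumptions.
class MultiConditionEncoder(nn.Module):
    def __init__(self, d_model=256):
        super().__init__()
        self.text_proj = nn.Linear(512, d_model)    # e.g. tokens from a frozen text encoder
        self.motion_proj = nn.Linear(75, d_model)   # e.g. per-frame pose features
        self.object_proj = nn.Linear(3, d_model)    # e.g. per-point object coordinates
        self.fuse = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2,
        )

    def forward(self, text_emb, motion_seq, object_pts):
        # Project each modality into the same d_model space, then let
        # self-attention mix them into one unified condition sequence.
        tokens = torch.cat(
            [
                self.text_proj(text_emb),      # (B, T_text, d_model)
                self.motion_proj(motion_seq),  # (B, T_motion, d_model)
                self.object_proj(object_pts),  # (B, N_points, d_model)
            ],
            dim=1,
        )
        return self.fuse(tokens)
```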

LSDM Neural Architecture

LSDM Architecture.

Our main contribution is the Guiding Points Network, where we integrate all information from the given conditions to generate guiding points.
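As a rough illustration of this idea, the hedged sketch below shows how a small head could map the fused condition embedding to explicit 3D guiding points, and how a DDPM-style reverse step could then condition on those points rather than on an implicit unified embedding alone. `GuidingPointsNet`, `denoise_step`, and all shapes here are hypothetical stand-ins for exposition; the actual Guiding Points Network differs.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of guiding-point prediction and a conditioned
# denoising step; names and shapes are our assumptions.
class GuidingPointsNet(nn.Module):
    def __init__(self, d_model=256, num_points=8):
        super().__init__()
        self.num_points = num_points
        # Map the fused condition embedding to a small set of 3D points
        # anchoring where the synthesized object should appear.
        self.head = nn.Sequential(
            nn.Linear(d_model, d_model), nn.ReLU(),
            nn.Linear(d_model, num_points * 3),
        )

    def forward(self, cond_tokens):
        pooled = cond_tokens.mean(dim=1)                      # (B, d_model)
        return self.head(pooled).view(-1, self.num_points, 3)  # (B, P, 3)

def denoise_step(denoiser, x_t, t, guiding_points, alpha_bar):
    # One DDPM-style estimate of the clean sample, with the noise
    # predictor explicitly conditioned on the guiding points:
    # x0_hat = (x_t - sqrt(1 - alpha_bar_t) * eps) / sqrt(alpha_bar_t)
    eps = denoiser(x_t, t, guiding_points)
    x0_hat = (x_t - torch.sqrt(1 - alpha_bar[t]) * eps) / torch.sqrt(alpha_bar[t])
    return x0_hat
```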

Qualitative Results

Qualitative Result #1.
Qualitative Result #2.

From the qualitative results, we observe that LSDM generates objects that are semantically plausible and aligned with both the given scene layouts and the text prompts (user preferences). More qualitative results are shown in the accompanying video.

Editing Applications


Our language-driven scene synthesis task also enables natural scene editing. The editing examples are meaningful and show potential for animation, metaverse, and design applications.

Acknowledgements

We borrow this GitHub page template from HyperNeRF. Special thanks to them!