ShapeLLM-Omni: A Native Multimodal LLM for 3D Generation and Understanding

Junliang Ye, Zhengyi Wang, Ruowen Zhao, Shenghao Xie2, Jun Zhu1,3
1Tsinghua University, 2Peking University, 3ShengShu
(*Equal Contribution)

Abstract

Recently, the powerful text-to-image capabilities of GPT-4o have led to growing appreciation for native multimodal large language models. However, its multimodal capabilities remain confined to images and text. Beyond images, the ability to understand and generate 3D content is equally crucial. To address this gap, we propose ShapeLLM-Omni, a native 3D large language model capable of understanding and generating 3D assets and text in any sequence. First, we train a 3D vector-quantized variational autoencoder (VQVAE), which maps 3D objects into a discrete latent space to achieve efficient and accurate shape representation and reconstruction. Building on these 3D-aware discrete tokens, we construct a large-scale continuous training dataset named 3D-Alpaca, encompassing generation, comprehension, and editing, thus providing rich resources for future research and training. Finally, we perform instruction-based fine-tuning of the Qwen2.5-VL-7B-Instruct model on the 3D-Alpaca dataset. Our work is an effective attempt at extending multimodal models with basic 3D capabilities and contributes to future research on 3D-native AI.
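As a rough sketch of the discrete bottleneck idea (the codebook size, latent dimension, and the surrounding 3D encoder/decoder below are placeholder assumptions, not the paper's actual configuration), vector quantization of continuous 3D latents into token ids might look like this in PyTorch:

```python
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    """Minimal VQ bottleneck: snaps continuous latents to their nearest codebook entries."""

    def __init__(self, num_codes: int = 8192, dim: int = 256):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z: torch.Tensor):
        # z: (B, N, dim) continuous latents from a 3D (e.g. voxel) encoder
        flat = z.reshape(-1, z.shape[-1])                # (B*N, dim)
        dist = torch.cdist(flat, self.codebook.weight)   # (B*N, num_codes)
        ids = dist.argmin(dim=-1).reshape(z.shape[:-1])  # discrete 3D tokens, (B, N)
        z_q = self.codebook(ids)                         # quantized latents, (B, N, dim)
        z_q = z + (z_q - z).detach()                     # straight-through estimator
        return ids, z_q

vq = VectorQuantizer()
latents = torch.randn(2, 512, 256)    # placeholder encoder output
token_ids, quantized = vq(latents)    # token_ids go to the LLM, quantized to the 3D decoder
```

The discrete `token_ids` are what make 3D shapes consumable and producible by an autoregressive language model, while `quantized` is what the VQVAE decoder uses to reconstruct the shape.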

Method

ShapeLLM-Omni

Fig 1. The model architecture of ShapeLLM-Omni. ShapeLLM-Omni inherits Qwen2.5-VL's strong multimodal capabilities and additionally supports text-to-3D, image-to-3D, 3D captioning, and 3D editing via text instructions.
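To make the interleaving concrete: in this kind of architecture the LLM emits the VQVAE's discrete shape tokens inline with ordinary text, delimited by special tokens. The snippet below is a hypothetical illustration (the delimiter names and token layout are assumptions, not ShapeLLM-Omni's actual vocabulary) of how such a reply could be parsed before handing the ids to the 3D decoder:

```python
import re

# Hypothetical delimiters; the model's real special tokens may differ.
SHAPE_START, SHAPE_END = "<shape>", "</shape>"

def extract_shape_tokens(reply: str) -> list[int]:
    """Pull discrete 3D token ids out of a generated reply; they would then
    be decoded back into a 3D asset by the VQVAE decoder."""
    match = re.search(re.escape(SHAPE_START) + r"(.*?)" + re.escape(SHAPE_END), reply, re.S)
    return [int(t) for t in match.group(1).split()] if match else []

# e.g. a text-to-3D turn that mixes natural language with shape tokens
reply = "Sure, here is a wooden chair: <shape> 17 4092 233 881 </shape>"
print(extract_shape_tokens(reply))   # [17, 4092, 233, 881]
```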

3D-Alpaca dataset

Fig 2. Overview of our constructed 3D-Alpaca dataset. The proposed 3D-Alpaca dataset comprises 3D generation, 3D understanding, and 3D editing components, providing a comprehensive foundation for training and evaluating 3D large language models.
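For intuition, an instruction-tuning sample in such a dataset pairs a task prompt with a response that may carry shape tokens. The records below are purely illustrative (field names, token delimiters, and token counts are assumptions, not the released 3D-Alpaca schema):

```python
# Illustrative records for the three components of a 3D instruction-tuning set.
# Field names and special tokens are assumptions, not the actual 3D-Alpaca format.
samples = [
    {   # 3D generation (text-to-3D): text prompt in, shape tokens out
        "conversations": [
            {"role": "user", "content": "Generate a 3D model of a wooden chair."},
            {"role": "assistant", "content": "<shape> 17 4092 233 ... </shape>"},
        ]
    },
    {   # 3D understanding (captioning): shape tokens in, text out
        "conversations": [
            {"role": "user", "content": "Describe this object: <shape> 901 12 77 ... </shape>"},
            {"role": "assistant", "content": "A four-legged wooden chair with a slatted back."},
        ]
    },
    {   # 3D editing: shape tokens plus an instruction in, edited shape tokens out
        "conversations": [
            {"role": "user", "content": "Make the legs longer: <shape> 901 12 77 ... </shape>"},
            {"role": "assistant", "content": "<shape> 901 12 84 ... </shape>"},
        ]
    },
]
```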

Qualitative Results

Some text-to-3D examples.

Some image-to-3D examples.

Demo Example

Fig 7. An example of our demo. We offer a demo showcasing our image-to-3D, text-to-3D, and 3D understanding capabilities. Please feel free to try it!

BibTeX

@article{ye2025shapellm,
  title={ShapeLLM-Omni: A Native Multimodal LLM for 3D Generation and Understanding},
  author={Ye, Junliang and Wang, Zhengyi and Zhao, Ruowen and Xie, Shenghao and Zhu, Jun},
  journal={arXiv preprint arXiv:2506.01853},
  year={2025}
}