<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>World Model | Haoyi Zhu</title><link>https://www.haoyizhu.site/tag/world-model/</link><atom:link href="https://www.haoyizhu.site/tag/world-model/index.xml" rel="self" type="application/rss+xml"/><description>World Model</description><generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><lastBuildDate>Fri, 12 Sep 2025 00:00:00 +0000</lastBuildDate><image><url>https://www.haoyizhu.site/media/icon_huc172061f093bd0d126d52e646a7586de_22363_512x512_fill_lanczos_center_3.png</url><title>World Model</title><link>https://www.haoyizhu.site/tag/world-model/</link></image><item><title>OmniWorld: A Multi-Domain and Multi-Modal Dataset for 4D World Modeling</title><link>https://www.haoyizhu.site/project/omniworld-a-multi-domain-and-multi-modal-dataset-for-4d-world-modeling/</link><pubDate>Fri, 12 Sep 2025 00:00:00 +0000</pubDate><guid>https://www.haoyizhu.site/project/omniworld-a-multi-domain-and-multi-modal-dataset-for-4d-world-modeling/</guid><description>&lt;p>&lt;strong>Abstract:&lt;/strong>&lt;/p>
&lt;p>The field of 4D world modeling, which aims to jointly capture spatial geometry and temporal dynamics, has witnessed remarkable progress in recent years, driven by advances in large-scale generative models and multimodal learning. However, the development of truly general 4D world models remains fundamentally constrained by the availability of high-quality data. Existing datasets and benchmarks often lack the dynamic complexity, multi-domain diversity, and spatio-temporal annotations required to support key tasks such as 4D geometric reconstruction, future prediction, and camera-controlled video generation. To address this gap, we introduce OmniWorld, a large-scale, multi-domain, multi-modal dataset specifically designed for 4D world modeling. OmniWorld consists of a newly collected OmniWorld-Game dataset and several curated public datasets spanning diverse domains. Compared with existing synthetic datasets, OmniWorld-Game provides richer modality coverage, larger scale, and more realistic dynamic interactions. Based on this dataset, we establish a challenging benchmark that exposes the limitations of current state-of-the-art (SOTA) approaches in modeling complex 4D environments. Moreover, fine-tuning existing SOTA methods on OmniWorld leads to significant performance gains across 4D reconstruction and video generation tasks, strongly validating OmniWorld as a powerful resource for training and evaluation.
We envision OmniWorld as a catalyst for accelerating the development of general-purpose 4D world models, ultimately advancing machines&amp;rsquo; holistic understanding of the physical world.&lt;/p></description></item><item><title>DeepVerse: 4D Autoregressive Video Generation as a World Model</title><link>https://www.haoyizhu.site/project/deepverse-4d-autoregressive-video-generation-as-a-world-model/</link><pubDate>Sun, 01 Jun 2025 00:00:00 +0000</pubDate><guid>https://www.haoyizhu.site/project/deepverse-4d-autoregressive-video-generation-as-a-world-model/</guid><description>&lt;p>&lt;strong>Abstract:&lt;/strong>
World models serve as essential building blocks toward Artificial General Intelligence (AGI), enabling intelligent agents to predict future states and plan actions by simulating complex physical interactions. However, existing interactive models primarily predict visual observations, thereby neglecting crucial hidden states such as geometric structures and spatial coherence. This leads to rapid error accumulation and temporal inconsistency. To address these limitations, we introduce DeepVerse, a novel 4D interactive world model that explicitly incorporates geometric predictions from previous timesteps into current predictions conditioned on actions. Experiments demonstrate that by incorporating explicit geometric constraints, DeepVerse captures richer spatio-temporal relationships and underlying physical dynamics. This capability significantly reduces drift and enhances temporal consistency, enabling the model to reliably generate extended future sequences and achieve substantial improvements in prediction accuracy, visual realism, and scene rationality. Furthermore, our method provides an effective solution for geometry-aware memory retrieval, preserving long-term spatial consistency. We validate the effectiveness of DeepVerse across diverse scenarios, establishing its capacity for high-fidelity, long-horizon predictions grounded in geometry-aware dynamics.&lt;/p></description></item><item><title>Aether: Geometric-Aware Unified World Modeling</title><link>https://www.haoyizhu.site/project/aether-geometric-aware-unified-world-modeling/</link><pubDate>Tue, 25 Mar 2025 00:00:00 +0000</pubDate><guid>https://www.haoyizhu.site/project/aether-geometric-aware-unified-world-modeling/</guid><description>&lt;p>&lt;strong>Abstract:&lt;/strong>&lt;/p>
&lt;p>The integration of geometric reconstruction and generative modeling remains a critical challenge in developing AI systems capable of human-like spatial reasoning. This paper proposes Aether, a unified framework that enables geometry-aware reasoning in world models by jointly optimizing three core capabilities: (1) 4D dynamic reconstruction, (2) action-conditioned video prediction, and (3) goal-conditioned visual planning. Through task-interleaved feature learning, Aether achieves synergistic knowledge sharing across reconstruction, prediction, and planning objectives. Building upon video generation models, our framework demonstrates unprecedented synthetic-to-real generalization despite never observing real-world data during training. Furthermore, our approach achieves zero-shot generalization in both action following and reconstruction tasks, thanks to its intrinsic geometric modeling. Remarkably, even without real-world data, its reconstruction performance far exceeds that of domain-specific models. Additionally, Aether leverages a geometry-informed action space to seamlessly translate predictions into actions, enabling effective autonomous trajectory planning. We hope our work inspires the community to explore new frontiers in physically reasonable world modeling and its applications.&lt;/p>
&lt;video controls autoplay loop muted>
&lt;source src="teaser_480p.mp4" type="video/mp4">
&lt;/video></description></item></channel></rss>