Thinking-while-Generating:
Interleaving Textual Reasoning throughout Visual Generation

Ziyu Guo*1, Renrui Zhang†*2, Hongyu Li*3, Manyuan Zhang†3, Xinyan Chen2
Sifan Wang, Yan Feng3, Peng Pei3, Pheng-Ann Heng1
1CUHK IMIXR  &  2MMLab   3Meituan
*Equal Contribution   †Project Lead
Teaser Image
Interleaving Textual Reasoning throughout Visual Generation. We weave textual thoughts into the unfolding canvas, delivering on-the-fly guidance and reflection throughout synthesis.

Abstract

Recent advances in visual generation have increasingly explored the integration of reasoning capabilities. These methods incorporate textual reasoning, i.e., "thinking", either before the generation process (as pre-planning) or after it (as post-refinement), yet they lack on-the-fly multimodal interaction during the generation itself.

In this preliminary study, we introduce Thinking-while-Generating (TwiG), the first interleaved framework that enables co-evolving textual reasoning throughout the visual generation process. As visual content is progressively generated, textual reasoning is interleaved to both guide upcoming local regions and reflect on previously synthesized ones. This dynamic interplay produces more context-aware and semantically rich visual outputs.

To unveil the potential of this framework, we investigate three candidate strategies: zero-shot prompting, supervised fine-tuning (SFT) on our curated TwiG-50K dataset, and reinforcement learning (RL) via a customized TwiG-GRPO strategy.

Intro Comparison
Comparison of Where the Textual Reasoning is Applied:
(a) Think-before-Generation injects a pre-planning thought prior to synthesis, limiting fine-grained control.
(b) Think-after-Generation verifies and revises the image only after completion, lacking timely adjustment.
(c) Our Thinking-while-Generating interleaves thoughts and reflections throughout synthesis for on-the-fly guidance.

Framework

TwiG Pipeline
Overall Pipeline. TwiG decouples generation into scheduling, reasoning, and reflection.
Scheduling

When to Think

The model determines an interleaved reasoning schedule $\mathcal{S}$ to decouple the generation process into controllable sub-tasks.

$$ \mathcal{S} = \mathrm{ULM}_{u}(T) $$
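As a concrete illustration, the scheduling step can be viewed as the ULM decomposing the prompt $T$ into an ordered list of region-level sub-tasks. The `schedule` function below is a hypothetical stand-in for $\mathrm{ULM}_{u}(T)$, not the paper's implementation; a real ULM would infer the split from the prompt's compositional structure rather than from a fixed delimiter.

```python
# Hedged sketch of the scheduling call S = ULM_u(T).
# `schedule` is an illustrative stand-in, not the authors' method.

def schedule(prompt: str) -> list[str]:
    """Decompose a prompt into an ordered list of region-level sub-tasks.

    Here we naively split on " and " purely for illustration; the ULM in
    the paper decides the schedule from the prompt itself.
    """
    parts = [p.strip() for p in prompt.split(" and ")]
    return [p for p in parts if p]

# Each entry becomes one controllable sub-task in the interleaved loop.
subtasks = schedule("a red cube on a table and a blue ball beside it")
```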

Reasoning

What to Say

At each step $k$, a textual thought $\tau_k$ is generated to guide the local visual region $\mathcal{V}_k$, conditioned on all previous context.

$$ \tau_k = \mathrm{ULM}_{u}(T, \{\tau_j\}_{j < k}, \{\mathcal{V}_j\}_{j < k}) $$

Reflection

How to Refine

Before the next step, a critique $ c_k = (r_k, \hat{\tau}_{k}) $ is generated, where $r_k$ is a critic score for region $\mathcal{V}_k$ and $\hat{\tau}_{k}$ is a revised caption. If the score falls below a threshold, a local reflection is triggered, regenerating the region as $\hat{\mathcal{V}}_k$.

$$ c_k = \mathrm{ULM}_{u}(T, \{\tau_j\}_{j \leq k}, \{\mathcal{V}_j\}_{j \leq k}) $$
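Putting the three stages together, the interleaved loop alternates reasoning, local generation, and reflection at every step $k$. The sketch below is a minimal illustration under stated assumptions: `think`, `render_region`, and `critique` are hypothetical stubs standing in for $\mathrm{ULM}_{u}$ and the visual generator, and none of these names come from the paper.

```python
# Hedged sketch of the TwiG interleaved loop. All function names are
# illustrative stubs, not the authors' API.
from dataclasses import dataclass

@dataclass
class Critique:
    score: float   # r_k: critic score for region V_k
    revised: str   # tau_hat_k: revised caption

def think(prompt, thoughts, regions):
    # tau_k = ULM_u(T, {tau_j}_{j<k}, {V_j}_{j<k}); stubbed for illustration
    return f"draw part {len(regions)} of: {prompt}"

def render_region(thought):
    # Local visual synthesis conditioned on tau_k; stubbed as a string token
    return f"<region|{thought}>"

def critique(prompt, thoughts, regions):
    # c_k = (r_k, tau_hat_k); a fixed high score here, so no refinement fires
    return Critique(score=1.0, revised=thoughts[-1])

def twig_generate(prompt, subtasks, threshold=0.5):
    thoughts, regions = [], []
    for _ in subtasks:
        tau = think(prompt, thoughts, regions)   # Reasoning: what to say
        thoughts.append(tau)
        regions.append(render_region(tau))       # Local generation
        c = critique(prompt, thoughts, regions)  # Reflection: how to refine
        if c.score < threshold:                  # low score -> local redo
            regions[-1] = render_region(c.revised)
    return regions

out = twig_generate("a cat under a lamp", subtasks=[0, 1])
```

Because the critique conditions on everything produced so far ($j \leq k$), each refinement stays local to the current region while remaining consistent with the accumulated context.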

Visualizations

Qualitative Comparison
Comparison
Comparison of the baseline (Janus-Pro-7B), TwiG-ZS, TwiG-SFT, and TwiG-RL. Our method shows progressive improvements in compositional fidelity, object counting, and visual realism.
Reflection Capacity
Reflection
The reflection within our Thinking-while-Generating refines both semantic and visual consistency, e.g., improving spatial alignment, shadow coherence, and overall realism across diverse prompts.
The Thinking Process
Thinking Process
Process of TwiG-RL. Each example showcases how the model iteratively interleaves its textual reasoning and visual outputs, progressively improving compositional accuracy.

BibTeX

@article{guo2026thinking,
  title={Thinking-while-Generating: Interleaving Textual Reasoning throughout Visual Generation},
  author={Guo, Ziyu and Zhang, Renrui and Li, Hongyu and Zhang, Manyuan and Chen, Xinyan and Wang, Sifan and Feng, Yan and Pei, Peng and Heng, Pheng-Ann},
  journal={arXiv preprint arXiv:2511.16671},
  year={2025}
}