Feature hasn't been suggested before.
Describe the enhancement you want to request
Summary
When the assistant is generating a response, the TUI viewport auto-follows the bottom of the stream by jumping one full line at a time as each line arrives. On fast-streaming responses this looks visibly jerky / "ratcheting." Codex's TUI does the same job but interpolates the scroll offset across frames, which feels smooth and noticeably easier on the eyes during long responses.
This is a request for streaming auto-follow smoothness, distinct from user-driven scroll smoothness, which is already handled by `scroll_acceleration.enabled` / `scroll_speed` in `tui.json`.
Current behavior
- v1.x opentui renderer.
- During streaming, the viewport snaps to bottom whenever the rendered content grows past the bottom edge.
- The snap is per-line (or per-chunk reflow), with no interpolation between the previous and new scroll offset.
- Result: visible "step / step / step" motion, especially with fast models on a tall pane, which makes the stream hard to read in real time.
Expected behavior
- During streaming, when auto-follow is engaged, animate the scroll offset from the old value to the new one over a few frames (e.g. ~80–120 ms at 60 fps with an ease-out curve).
- If a new line arrives before the previous animation finishes, retarget to the new bottom rather than queueing.
- Disengage cleanly the moment the user scrolls up (same as today's auto-follow disengage rule).
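The retargeting behavior above can be sketched roughly as below. This is an illustrative sketch only, not code from the opencode or opentui codebase; the `FollowAnimator` name, the cubic ease-out, and the per-frame `tick` API are all assumptions about how a renderer-side implementation could look.

```typescript
type Easing = (t: number) => number;

// ease-out cubic: fast start, gentle landing at the target offset
const easeOutCubic: Easing = (t) => 1 - Math.pow(1 - t, 3);

class FollowAnimator {
  private from = 0;
  private to = 0;
  private elapsed = 0;
  public offset = 0;

  constructor(
    private durationMs = 120,
    private easing: Easing = easeOutCubic,
  ) {}

  // Called whenever a new line arrives: retarget to the new bottom,
  // restarting the easing from the current (possibly mid-animation) offset
  // instead of queueing a second animation behind the first.
  retarget(newBottom: number): void {
    this.from = this.offset;
    this.to = newBottom;
    this.elapsed = 0;
  }

  // Called once per rendered frame with the frame delta in milliseconds;
  // returns the scroll offset to draw this frame.
  tick(dtMs: number): number {
    this.elapsed = Math.min(this.elapsed + dtMs, this.durationMs);
    const t = this.durationMs === 0 ? 1 : this.elapsed / this.durationMs;
    this.offset = this.from + (this.to - this.from) * this.easing(t);
    return this.offset;
  }

  get done(): boolean {
    return this.elapsed >= this.durationMs;
  }
}
```

Disengaging on user scroll-up would simply stop calling `retarget`/`tick` and hand the offset back to the existing scroll handling.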
Reference
Codex's TUI achieves this with frame-paced redraws plus interpolated auto-follow offsets. Side-by-side, Opencode visibly steps while Codex glides.
Proposed config
A new opt-in key in `tui.json`, e.g.:

```json
{
  "stream_follow": {
    "smooth": true,
    "duration_ms": 120,
    "easing": "ease-out"
  }
}
```
Defaulting `smooth` to false would preserve current behavior; users can opt in. Alternatively, a single boolean `stream_follow.smooth` is fine — the duration/easing tuning can come later.
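A minimal sketch of how the renderer could resolve this block with opt-out defaults, assuming the proposed (not yet existing) key names; `resolveStreamFollow` and the config shape are hypothetical:

```typescript
interface StreamFollowConfig {
  smooth: boolean;
  durationMs: number;
  easing: "linear" | "ease-out";
}

const DEFAULTS: StreamFollowConfig = {
  smooth: false, // opt-in: the default preserves today's per-line snap
  durationMs: 120,
  easing: "ease-out",
};

// Merge a raw tui.json fragment over the defaults, tolerating a missing
// or partial "stream_follow" block.
function resolveStreamFollow(raw: unknown): StreamFollowConfig {
  const block = (raw as { stream_follow?: Record<string, unknown> })?.stream_follow ?? {};
  return {
    smooth: typeof block.smooth === "boolean" ? block.smooth : DEFAULTS.smooth,
    durationMs:
      typeof block.duration_ms === "number" ? block.duration_ms : DEFAULTS.durationMs,
    easing: block.easing === "linear" ? "linear" : DEFAULTS.easing,
  };
}
```

With this shape, an empty or absent `stream_follow` block resolves to the current (non-smooth) behavior, so existing configs are unaffected.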
Why this is not a duplicate
Feature hasn't been suggested before.