Deterministic UI in the Age of Streaming AI (And Why It Keeps Breaking)

Andreea

Traditional applications are beautifully simple: you send a request, get a complete JSON response, and render the UI.

AI applications destroy that model entirely.

Instead of a clean response, you receive a fragmented, real-time stream of Server-Sent Events. Each chunk can contain incomplete text, nested tool-call arguments mid-parse, evolving agent reasoning, or an error state. All of it is immediately visible to the user.
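On the wire, a single tool call can arrive split across several SSE frames, something like this (an illustrative sketch; the event shape and field names here are hypothetical, not AICoven's actual payloads):

```
data: {"type":"text","delta":"Let me check the docs. "}
data: {"type":"tool_call","delta":"{\"name\":\"search\",\"argum"}
data: {"type":"tool_call","delta":"ents\":{\"query\":\"swift"}
data: {"type":"tool_call","delta":" concurrency\"}}"}
```

Note that the tool-call JSON is invalid until the last frame lands, yet every frame before it still has to be reflected in the UI somehow.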


The Frontend Nightmare

When we built AICoven, standard Swift rendering loops were choking on the throughput. We were receiving layout invalidations faster than the screen could refresh — scroll position jumping, text flickering, the works.

We moved string concatenation and JSON fragment parsing off the Main Actor using Swift 6 Strict Concurrency. The goal was elegant, safe, concurrent stream ingestion.

The reality was far from it.

We spent weeks:

  • Fighting the Swift 6 compiler, trying to figure out why an @MainActor-isolated initialiser was complaining about a Sendable closure three layers deep.
  • Dropping Task.sleep into random queues, hoping it would suppress a race condition we couldn't trace.
  • Hacking together ingestion buffers that only mostly work.

The chat bubbles do scroll at a buttery 60 fps, even while the Neural Engine maxes out the hardware in the background. But the codebase underneath is held together with duct tape. Here's roughly what our stream ingestion looks like:

func ingest(_ chunk: SSEChunk) async {
    buffer.append(chunk.text)
    if let parsed = tryParseToolCall(buffer) {
        await MainActor.run { viewModel.appendToolCall(parsed) }
        buffer.removeAll()
    } else {
        // Partial — keep buffering, pray the next chunk closes the JSON
        let snapshot = buffer  // copy before hopping actors; the closure must be Sendable
        await MainActor.run { viewModel.updateStreamingText(snapshot) }
    }
}
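The "layout invalidations faster than the screen can refresh" problem mostly comes down to coalescing: the UI does not need every chunk, it needs at most one update per frame. A minimal sketch of that idea, with hypothetical names (this is not AICoven's actual ingestion code, and the injectable `now` parameter exists purely to make the logic testable):

```swift
import Foundation

// Accumulate streamed text and forward it to the UI at most once per
// frame interval, instead of on every chunk.
final class FrameCoalescer {
    private(set) var pending = ""
    private var lastFlush = Date.distantPast
    private let interval: TimeInterval
    private let flush: (String) -> Void

    init(interval: TimeInterval = 1.0 / 60.0, flush: @escaping (String) -> Void) {
        self.interval = interval
        self.flush = flush
    }

    /// Append a chunk; only push the accumulated text to the UI
    /// when a full frame interval has elapsed since the last flush.
    func append(_ text: String, now: Date = Date()) {
        pending += text
        if now.timeIntervalSince(lastFlush) >= interval {
            flush(pending)
            lastFlush = now
        }
    }
}
```

The flush hands over the whole accumulated string rather than a delta, matching how `updateStreamingText(buffer)` above replaces the streaming text wholesale; that keeps the UI update idempotent even if a flush is dropped.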

The Beautiful UI That Never Renders

Even if the concurrency were perfect, the UI would still fail to render properly much of the time.

We built a gorgeous, animated "Agent Scratchpad" panel that shows exactly what the AI is planning and executing. Strict regex parsers catch [ ] sub-tasks and turn them into native Swift checklists.
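The checklist extraction is the easy half. A sketch of that kind of strict parser, using Swift's regex literals (the type and function names are illustrative, not AICoven's actual parser):

```swift
import Foundation

struct SubTask: Equatable {
    let title: String
    let done: Bool
}

// Pull "[ ]" / "[x]" sub-tasks out of a scratchpad string,
// line by line; anything that doesn't match is ignored.
func parseSubTasks(_ scratchpad: String) -> [SubTask] {
    let pattern = /^[-*]?\s*\[(?<mark>[ xX])\]\s*(?<title>.+)$/
    return scratchpad
        .split(separator: "\n")
        .compactMap { line in
            guard let match = line.firstMatch(of: pattern) else { return nil }
            return SubTask(
                title: String(match.title).trimmingCharacters(in: .whitespaces),
                done: match.mark != " "
            )
        }
}
```

The strictness is the point and also the failure mode: any line the regex rejects silently drops out of the checklist, which is exactly what happens when the model drifts from the format.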

And yet, the model ignores its system prompt constantly:

  • It forgets to output the correct XML tags.
  • It streams the scratchpad as plain text instead of the structured format we asked for.
  • It decides, inexplicably, to format its tool arguments as a Markdown table rather than valid JSON.

When the parser breaks, the beautiful UI disappears. The app falls back to dumping the raw text onto the screen so the user can at least see something.
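That graceful degradation is just a guard at render time. Roughly, with hypothetical types (not AICoven's actual view code):

```swift
// Decide between the structured scratchpad and a raw-text dump:
// render natively when parsing succeeds, fall back otherwise.
enum ScratchpadContent {
    case structured([String])   // parsed sub-task titles
    case raw(String)            // unparseable model output
}

func classify(_ text: String, parse: (String) -> [String]?) -> ScratchpadContent {
    if let tasks = parse(text), !tasks.isEmpty {
        return .structured(tasks)
    }
    // Parser broke: show the raw text so the user at least sees something.
    return .raw(text)
}
```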


The Core Challenge

Building a deterministic UI on top of a non-deterministic brain is Sisyphean. Clean concurrency can handle the stream. But you're still relying on an LLM choosing to follow formatting instructions at any given moment.

We're constantly tweaking regexes and forcing structural outputs. But building with AI right now means accepting a hard truth:

Half your beautiful UI will render as plain-text markdown simply because the model decided to go rogue today.

If you've built streaming AI UIs and found patterns that actually hold up, we'd love to hear about them — @aicoven.

About the Author

I'm Andreea, the creator of AICoven. I build local-first tools for developers who care about architecture, privacy, and prompt economics.

See more of my work at papillonmakes.tech →