A glimpse into Chronicle's new AI interface and how it repositions the product for its users.
When someone dumps 40 pages of quarterly data on you and expects a "compelling narrative" by Friday, you're left staring at blank slides. Do you analyze the data first? Start with a structure? Pick a template and hope for the best?
That's what Chronicle solves. Storytelling for your data through presentations.
I sat down with Claire, Chronicle's founding designer, to get a glimpse into their new UX: a move from the current page-by-page linear flow to a chat-first experience.
In my opinion, a BIG step towards repositioning the product for its users.
Chronicle is moving from a tool that competes on speed and quality of output, to a collaborator that competes on quality of thinking and memory (something we're very bullish on in aiverse).
The conversation itself becomes part of the output, not limited to the final slides.
This gives Chronicle the flexibility to:
So how does chat-first actually enable this? Or does this just make Chronicle another chat-app competing with ChatGPT?
Let's get into it.
On the surface, it could easily collapse into one: a Skill for Claude Code, say. But a general chatbot can do anything, which means it's not optimized for anything. Nor is using a skill as intuitive and flexible as a separate app. Chronicle is specifically built for presentations.
So how does chat-first help them?
It becomes collaborative by nature. Iterative. Back and forth.
A major win, in my opinion, is that iterating on the thinking and understanding now becomes primary, and the final slides become secondary (unlike in the older version).
Since the system can now ask questions, it's only natural for Chronicle to add proactive follow-ups when input prompts are unclear or vague.
Which is what Chronicle has done so effectively with their chat-first flow:
What's missing?
One feature that I loved from the previous iteration, though, has now been removed: being able to select the output structure type I want for each slide. But I do see it being a technical feature, not something used by everyone; I am the anomaly.
From four independent steps to one open input where you provide a variety of context in one go. While an open input has its demerits, like the "blank canvas, don't know where to start" problem, it gives users a range of ways to provide context, combining different forms in a single place.
Further, you can use the Suggested Prompts AI pattern to solve the blank canvas problem.
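To make the pattern concrete, here's a rough sketch of what suggested prompts over a single open input could look like. This is my own illustration, not Chronicle's actual implementation; the component name, props, and prompt copy are all hypothetical.

```tsx
// Hypothetical sketch: suggested prompts that prefill an open chat input.
// Names and prompt copy are illustrative, not Chronicle's implementation.
import { useState } from "react";

const SUGGESTED_PROMPTS = [
  "Turn this quarterly data into a 10-slide narrative for execs",
  "Summarize the key trends and propose a storyline",
  "Draft an agenda slide and section headers from my notes",
];

export function ChatComposer({ onSubmit }: { onSubmit: (prompt: string) => void }) {
  const [draft, setDraft] = useState("");

  return (
    <div>
      {/* One click swaps the blank canvas for a starting point the user can edit */}
      {draft === "" && (
        <ul>
          {SUGGESTED_PROMPTS.map((prompt) => (
            <li key={prompt}>
              <button onClick={() => setDraft(prompt)}>{prompt}</button>
            </li>
          ))}
        </ul>
      )}
      <textarea
        value={draft}
        onChange={(e) => setDraft(e.target.value)}
        placeholder="Describe the story you want to tell…"
      />
      <button disabled={draft.trim() === ""} onClick={() => onSubmit(draft)}>
        Send
      </button>
    </div>
  );
}
```

The detail that matters is that the suggestions disappear the moment the user starts typing, so the open input stays open; they only exist to get past the blank canvas.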
If you've been wondering, "why didn't they do a sidebar chat or a pop-out chat?", then you're on the right track. I asked Claire the same thing, and it turns out they did. They prototyped, discussed, and realized that each of those solutions had its own constraints.
Iteration 1: Sidebar chat
It’s kind of like the Cursor / IDE layout, keeping the focus on the presentation while treating the chat as a secondary agent.
One technical constraint they faced: opening the chat shrank the slides and made them difficult to navigate.
Iteration 2: Command + K
Lovely interaction, but more useful when combined with other Chronicle features like search and adding widgets. Accessing AI alone wouldn't justify this interaction; it works as a good secondary (quick-access) feature, not a primary flow.
Iteration 3: Left sidebar
On paper, this sounds correct: presentation should lead, AI should assist. However, in practice, it weakens the conversational aspect. More emphasis is on the output than the planning (storytelling).
As a user, you start treating the chat like a help section instead of a thinking partner. That works well in the case of v0 or Lovable, because output is everything, but not for a story-focused application.
Iteration 4: Single column chat
This was a complete 180°: everything inline, everything conversational; artifacts, prompts, slides, system messages, all collapsed into a single vertical narrative. While it looks cohesive, under real usage it's just a very long conversation without a final destination.
As a user, it becomes difficult to find the final artifact, or to return to a specific slide or decision, without scrolling through a long conversation log. Everything feels like a suggestion or a temporary state.
One of the key flows for every AI interaction is showing what's behind the black box: keeping the user engaged, building their trust that the system is working in the right direction, and letting them know that the system isn't broken.
As a designer, defining these states now becomes the deliverable.
Did I ask for a little behind the scenes? Yes.
Fun fact: Chronicle calls the "thinking" "musings"; more fun, and a better encapsulation of raw thoughts.
As Claire put it, it also comes from how the team talks to each other; they often say “here’s a musing” or “here are my musings on how to approach this feature.”
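To show what "defining these states" can look like as a deliverable, here's a minimal sketch of the kind of state enumeration a designer might hand over, with musings streamed as raw thoughts. The state names and fields are my assumptions, not Chronicle's actual model.

```ts
// Hypothetical sketch of intermediate AI states a designer might specify.
// State names and fields are assumptions for illustration only.
type AssistantState =
  | { kind: "idle" }
  | { kind: "musing"; text: string }              // raw thoughts, streamed as they form
  | { kind: "clarifying"; question: string }      // proactive follow-up on a vague prompt
  | { kind: "drafting"; slide: number; total: number }
  | { kind: "done"; deckUrl: string }
  | { kind: "error"; message: string };

// Every state maps to something visible, so the system never looks broken or stalled.
function statusLabel(state: AssistantState): string {
  switch (state.kind) {
    case "idle":
      return "Ready when you are";
    case "musing":
      return `Musing: ${state.text}`;
    case "clarifying":
      return state.question;
    case "drafting":
      return `Drafting slide ${state.slide} of ${state.total}…`;
    case "done":
      return "Deck ready";
    case "error":
      return `Something went wrong: ${state.message}`;
  }
}
```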
Now that Chronicle is conversational, the design problems are no longer about UI. The product decisions shift from "how do we improve output quality?" to "how do we leverage memory for a better experience and output?"
What should AI remember? For how long? How should it be called into the chat?
The three-layer problem
Memory can exist at three distinct levels.
It would be interesting to know how they tackle this and what they prioritise first.
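Purely as a thought experiment, since the article doesn't cover how Chronicle actually models this, here's a sketch of how layered memory could map to those three questions: what to remember (scoped entries), for how long (retention), and how it's called back into the chat (a recall step before each prompt). The scope names, fields, and naive keyword matching are all hypothetical.

```ts
// Hypothetical sketch of layered memory; scopes, fields, and retention rules
// are my assumptions for illustration, not Chronicle's design.
type MemoryScope = "thread" | "project" | "workspace"; // three possible layers

interface MemoryEntry {
  scope: MemoryScope;    // which layer the memory lives at
  content: string;       // what the AI remembers, e.g. "prefers concise exec summaries"
  createdAt: number;     // when it was learned (ms since epoch)
  ttlMs: number | null;  // how long it should persist; null = until the user removes it
}

// How memory gets called into the chat: drop expired entries, then keep the ones
// that look relevant to the current prompt (naive keyword overlap stands in for
// whatever retrieval a real system would use).
function recallForPrompt(entries: MemoryEntry[], prompt: string, now: number): MemoryEntry[] {
  const promptWords = new Set(prompt.toLowerCase().split(/\s+/));
  return entries.filter((entry) => {
    const alive = entry.ttlMs === null || now - entry.createdAt < entry.ttlMs;
    const relevant = entry.content
      .toLowerCase()
      .split(/\s+/)
      .some((word) => promptWords.has(word));
    return alive && relevant;
  });
}
```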
Should the AI sound the same to everyone?
A very interesting concept that Claire shared was how they see the personality of their AI as a transforming character. There still needs to be a balance between the brand voice and the personalized voice.
Maybe you want it a little spicy with a tinge of diabolism?
Chronicle’s move to a chat-first UX is rather strategic. While many are still debating "Chat is a Command Line UI" and holding off from going conversation-first, the storytelling application is embracing the pattern and repositioning its product for its users.
Having said that, what comes next is what excites me: If conversation becomes the interface and slides are just the byproduct, does Chronicle stop being a presentation tool and start becoming a thinking partner?
It's still something you have to go to, rather than something that meets you where you are.
