The new AI era requires a new tool, a new prototyping layer for designers.

Every era of software has had its prototyping layer. We used to create mockups in Photoshop. Then in Sketch. And now in Figma, collaboratively on the web. Interactions in Origami or ProtoPie. Landing pages in Framer. Shaders and moving aesthetics in Paper. For every new wave, there's been a layer of abstraction that hid the complexity: tools that made it easy to ideate and play with the technology.
We're in one of those moments again.
Just to backtrack: all of 2024 was about bolting AI onto every interface behind a sparkle icon. But as we head into 2026, AI is not a feature anymore. It's a new material. And like every other material (glass, steel, wood), AI has properties of its own:
It's non-deterministic - the same input produces different outputs (see the sketch after this list).
It has its own timeline - sometimes responses are instant, and sometimes they take 4 seconds.
It fails - in fact, it fails with confidence. It hallucinates. It agrees "You're absolutely right!" even when it shouldn't.
It makes decisions - the model is constantly deciding: what to include, what to emphasize, what to assume about intent.
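Those first two properties are easy to demonstrate. Here's a minimal sketch using OpenAI's Node SDK; the model name and prompt are placeholders, and any hosted model will behave the same way:

```ts
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Send a prompt and time the round trip.
async function probe(prompt: string) {
  const started = Date.now();
  const res = await client.chat.completions.create({
    model: "gpt-4o-mini", // placeholder: any chat model shows the same behavior
    messages: [{ role: "user", content: prompt }],
    temperature: 1, // sampling on: the same input produces different outputs
  });
  return { text: res.choices[0].message.content, ms: Date.now() - started };
}

async function main() {
  const prompt = "Write a four-word tagline for a note-taking app.";
  console.log(await probe(prompt)); // one answer, one latency
  console.log(await probe(prompt)); // usually a different answer, a different latency
}

main();
```

Run it twice and the two calls rarely match, in either wording or wait time. That's the material.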
And yet we're still prototyping AI experiences the same way we prototype a settings page. A rectangle. Some placeholder text. A carefully chosen screenshot of the one time the model said something perfect. Ship it.
"If you're designing UI for something that you haven't played with, the risk is that you're designing UI for a perfect case that isn't representative of how it will work. "
— Barron Webster, Model Designer at Figma
Designing an AI feature is very different from traditional product design. It's almost like teaching an intern: to get a good outcome, there needs to be a LOT of communication.
How are you communicating and testing AI?
In some API playground that looks like it was built for engineers? In Notion? Do you then switch between multiple playgrounds to try out different LLMs? How do you compare the results?
A spatial workspace where you're also designing the rest of your application, and where your prompts aren't hidden behind a configuration panel.
More importantly, you can reference things. Your prompt can point at a file on the canvas, a piece of context, or another prompt's output, creating a chain.
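Under the hood, a chain like that is a small data structure. A hedged sketch of what it could look like; the PromptNode shape and runChain helper are illustrative, not the product's actual API:

```ts
// Hypothetical shape of a prompt node on the canvas. `dependsOn` lists the
// upstream nodes whose outputs this prompt references.
type PromptNode = {
  id: string;
  dependsOn: string[];
  template: (inputs: Record<string, string>) => string;
};

// Run nodes in dependency order, feeding each output into downstream prompts.
// `llm` is any function that turns a prompt into a completion.
async function runChain(
  nodes: PromptNode[], // assumed to be listed in dependency order
  llm: (prompt: string) => Promise<string>,
): Promise<Record<string, string>> {
  const outputs: Record<string, string> = {};
  for (const node of nodes) {
    const inputs: Record<string, string> = {};
    for (const id of node.dependsOn) inputs[id] = outputs[id];
    outputs[node.id] = await llm(node.template(inputs));
  }
  return outputs;
}
```

A "summarize this file" node feeding a "rewrite in brand voice" node is just two entries in that array.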
And to make it even better, link it to an actual coded component and make it dynamic. You can manipulate the prompts and watch the interface change. Try out different variations, none of which repeat. Get your hands dirty and play with electricity.
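Wiring that output into a live component is equally direct. A sketch in React, assuming an llm helper like the one above; the component and its props are made up for illustration:

```tsx
import { useEffect, useState } from "react";

// A coded component driven by a prompt: edit the prompt on the canvas and the
// card re-renders with a fresh generation.
function PromptDrivenCard({
  prompt,
  llm,
}: {
  prompt: string;
  llm: (p: string) => Promise<string>;
}) {
  const [copy, setCopy] = useState("Generating…");
  useEffect(() => {
    let cancelled = false;
    llm(prompt).then((text) => {
      if (!cancelled) setCopy(text);
    });
    return () => {
      cancelled = true; // ignore stale responses when the prompt changes
    };
  }, [prompt, llm]);
  return <div className="card">{copy}</div>;
}
```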
The first step to becoming a new-age AI designer is removing the wall between backend prompts and frontend UI: letting designers see and manipulate the thing that actually drives the experience, right next to the experience itself.
We're calling it 'Prototyping intelligence'.
The shift happened faster than anyone expected.
