1. What “generative UI” platforms actually do
Generative UI systems (Thesys C1, Vercel v0.dev, LangGraph’s React helpers, etc.) intercept or post-process an LLM’s textual reply, translate it into a structured JSON or JSX description of UI widgets, then let a renderer turn that description into live React / HTML views. The LLM is guided by a system prompt that explains the component palette, design tokens, and any guard-rails. The renderer handles:
Component mapping – choosing the right widget type for the semantic payload.
State & data binding – linking UI state back to subsequent prompts or API calls.
Styling – injecting design-system classes so the output feels native.
Streaming / partial updates – sending small patches as the conversation evolves.
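The pipeline above can be sketched end to end. This is a minimal illustration, not any vendor's actual wire format: the `UISpec` union and `renderSpec` function are hypothetical names, and a production renderer would emit React elements rather than HTML strings:

```typescript
// Hypothetical declarative spec the LLM is asked to emit as JSON.
type UISpec =
  | { type: "table"; columns: string[]; rows: string[][] }
  | { type: "chart"; kind: "bar" | "line" | "pie"; values: number[] }
  | { type: "card"; title: string; body: string };

// Component mapping: turn the declarative spec into markup.
function renderSpec(spec: UISpec): string {
  switch (spec.type) {
    case "table": {
      const head = spec.columns.map((c) => `<th>${c}</th>`).join("");
      const body = spec.rows
        .map((r) => `<tr>${r.map((c) => `<td>${c}</td>`).join("")}</tr>`)
        .join("");
      return `<table><thead><tr>${head}</tr></thead><tbody>${body}</tbody></table>`;
    }
    case "chart":
      return `<figure data-kind="${spec.kind}">${spec.values.join(",")}</figure>`;
    case "card":
      return `<article><h3>${spec.title}</h3><p>${spec.body}</p></article>`;
  }
}
```

Because the spec is data rather than code, the renderer can validate it against the component palette before anything reaches the DOM.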
2. Taxonomy of common components and when generative UI selects them
| Category | Component | Typical LLM hint phrases | Best-fit context | Why it is picked |
|---|---|---|---|---|
| Navigation | Tabs | “Compare A, B, C”, “three sections”, “different categories” | Parallel data sets that should stay visible after switching | Keeps one mental model while hiding unrelated info |
| Navigation | Sidebar / Drawer | “Options”, “filters”, “navigation links” | Large hierarchies or filter panels | Leaves main canvas uncluttered |
| Data display | Table / Data grid | “List the rows”, CSV-like answer, many numeric columns | Multi-row structured data needing sort or filter | Compact, scan-friendly |
| Data display | Card list | “Show profiles”, “product catalog”, small item count | Visual summaries with thumbnails | Allows media + text + action buttons |
| Data display | Description list / Key-value | “Details”, “specifications” | Entity summary with ≤10 fields | Quick scanning of properties |
| Visualization | Bar / Line / Pie chart | “Trend over time”, “percent distribution”, numeric array | Quantitative comparisons | Faster pattern recognition than a table |
| Visualization | KPI metric tile | “Overall score”, “total revenue” | Single scalar that matters | Draws focus; good for dashboards |
| Input & forms | Checkbox set | “Select all that apply” | Unordered multi-choice | Users can toggle more than one option |
| Input & forms | Radio buttons | “Choose one”, mutually exclusive list ≤5 | Single choice | No accidental multi-selection |
| Input & forms | Toggle switch | “Enable/disable”, boolean flag | Binary settings | Instant feedback |
| Input & forms | Slider / Range | “Pick a value between”, “set percentage” | Continuous values | Fine control without manual typing |
| Controls | Primary / secondary buttons | “Submit”, “Run again”, “Download” | Trigger actions | Clear affordance |
| Controls | Dropdown / Select | Long enumerations, dynamic options | Space-efficient pick list | Saves screen space |
| Controls | Pagination / Infinite scroll | “Next 10 results”, large data sets | Avoids flooding the UI | Performance friendly |
| Contextual helpers | Tooltip | “?” icon or on-hover hints | Dense data tables or unfamiliar terms | Keeps surface clean |
| Contextual helpers | Modal / Dialog | “Are you sure”, destructive action | Interruptive confirmation | Prevents errors |
| Feedback | Toast / Alert banner | Success, warning, error messages | Ephemeral status | Non-blocking feedback |
| Feedback | Skeleton loader | Slow API call anticipated | Placeholder during fetch | Perceived performance boost |
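A taxonomy like the one above is typically serialized into the system prompt so the model knows its palette. The following is a sketch under assumed names — `PaletteEntry` and the sample entries are illustrative, not any platform's actual schema:

```typescript
// Hypothetical palette entry: one row of the taxonomy, machine-readable.
interface PaletteEntry {
  component: string;
  hints: string[];  // phrases that should trigger this component
  bestFit: string;  // context description shown to the model
}

const palette: PaletteEntry[] = [
  { component: "Tabs", hints: ["compare", "sections", "categories"], bestFit: "parallel data sets" },
  { component: "Table", hints: ["list the rows", "numeric columns"], bestFit: "multi-row structured data" },
  { component: "BarChart", hints: ["trend over time", "distribution"], bestFit: "quantitative comparisons" },
];

// Rendered into the system prompt as one line per component.
const paletteText = palette
  .map((p) => `- ${p.component}: use for ${p.bestFit} (hints: ${p.hints.join(", ")})`)
  .join("\n");
```

Keeping the palette in data form means the same source can drive the system prompt, the renderer's validation, and the review matrix mentioned in the takeaways.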
3. How the LLM decides which component to suggest
Intent classification – A few-shot prompt shows examples: lists → tables, comparisons → charts, categories → tabs.
Schema inference – The model detects tabular patterns (commas, line breaks) or numeric arrays.
Constraint reasoning – The system prompt encodes brand rules (e.g., “never use modals except for destructive actions”) so the model filters choices.
Responsiveness rules – If viewport < 640 px, prefer accordion over multi-column table.
Safety filters – If output might execute code, wrap it in a sandboxed iframe rather than raw HTML.
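The decision rules above can be approximated in code. This sketch is illustrative only — in practice the LLM performs intent classification via few-shot prompting, with deterministic rules like these serving as a backstop; the function and threshold names are hypothetical:

```typescript
type Suggestion = "table" | "chart" | "tabs" | "accordion" | "text";

function suggestComponent(reply: string, viewportPx: number): Suggestion {
  const lines = reply.trim().split("\n");
  // Schema inference: several comma-delimited lines look tabular.
  const tabular = lines.filter((l) => l.split(",").length >= 3).length >= 2;
  if (tabular) {
    // Responsiveness rule: below 640 px prefer an accordion over a wide table.
    return viewportPx < 640 ? "accordion" : "table";
  }
  // Intent classification: trend / distribution wording maps to a chart.
  if (/\b(trend|over time|distribution|percent)\b/i.test(reply)) return "chart";
  // Comparison wording without numeric structure maps to tabs.
  if (/\bcompare\b/i.test(reply)) return "tabs";
  return "text";
}
```

Constraint reasoning and safety filters would wrap this: a brand-rules pass can veto a suggestion, and anything executable gets sandboxed regardless of what the classifier says.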
4. Component-level context cheat-sheet
Tabs vs. Accordion – Tabs suit peer views that users may switch between frequently; accordions are better for long-form pages where only one panel is expanded at a time.
Tables vs. Card lists – Tables shine with many rows and numeric sorting; card grids excel when imagery or ratings are key and row height would grow unevenly.
Charts vs. KPI tiles – Use chart when trend or distribution matters; use KPI when a single number is the headline.
Checkbox set vs. Tag filter chips – Checkboxes support static options; dynamic tag chips can be created on the fly by the user.
Modal vs. Drawer – Modal blocks workflow until dismissed; drawer lets the user peek without losing context.

Screenshot of Thesys website
5. Patterns illustrated by Thesys C1 prompt gallery
| Prompt example | Components rendered (observed in live demo) | Reasoning |
|---|---|---|
| “Cast of Harry Potter” | Card grid with actor photo, name, role | Visual content with equal-weight items |
| “Compare different diet plans” | Table + bullet list + bar chart | Quantitative macros fit a table; chart highlights differences |
| “Celestial objects” | Gallery carousel + info pop-over | High-resolution imagery plus factual overlay |
6. Design and accessibility notes
Keep to one design system; have generative outputs inject design tokens instead of raw styles so colours, radius, and typography stay consistent.
Provide ARIA roles in the renderer, because the LLM cannot guarantee semantic correctness.
Cap generated table rows to avoid overwhelming mobile layouts, and support virtual scrolling for big data sets.
Sanitize any user-supplied markdown or HTML before rendering.
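For the last two points, a minimal escaping-and-capping sketch. This is illustrative only: production renderers should rely on a vetted sanitizer such as DOMPurify rather than hand-rolled string replacement, and `MAX_ROWS` is an assumed threshold:

```typescript
// Escape the five HTML-significant characters; "&" must be replaced first.
function escapeHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

const MAX_ROWS = 50; // assumed cap for generated tables on mobile layouts

// Cap row count and escape every cell before the renderer sees them.
function safeRows(rows: string[][]): string[][] {
  return rows.slice(0, MAX_ROWS).map((r) => r.map(escapeHtml));
}
```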
7. Key takeaways for “decoding” a GenUI stack
Prompt engineering is your UI spec – the clearer the palette description, the more predictable the component choices.
Renderer is the source of truth – treat model output as a declarative suggestion, not executable code.
Map contexts to components – build a matrix like the taxonomy above so reviewers can predict what appears where.
Instrument everything – log which components are suggested and measure interaction rates to refine prompts.
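The instrumentation point can be sketched as a pair of counters; the function names here are hypothetical, and a real system would persist these events to an analytics pipeline rather than an in-memory map:

```typescript
type ComponentStats = { suggested: number; interacted: number };
const stats = new Map<string, ComponentStats>();

function record(component: string, field: keyof ComponentStats): void {
  const s = stats.get(component) ?? { suggested: 0, interacted: 0 };
  s[field] += 1;
  stats.set(component, s);
}

const logSuggestion = (c: string) => record(c, "suggested");
const logInteraction = (c: string) => record(c, "interacted");

// Interaction rate per component: the signal used to refine the prompt.
function interactionRate(component: string): number {
  const s = stats.get(component);
  return s && s.suggested > 0 ? s.interacted / s.suggested : 0;
}
```

A component that is suggested often but rarely interacted with is a hint that the palette description or the few-shot examples are steering the model wrong.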