The Complete Guide to Google Stitch, Vibe Design, and the AI Design Tool Landscape
On March 19, 2026, Google updated its AI design tool Stitch with an infinite canvas, voice interaction, and a concept it calls "vibe design." Figma's stock dropped 12% over the next two days, erasing roughly $2 billion in market value. The reaction was immediate and visceral, even though the tool itself is free, still in Google Labs, and cannot do real-time collaboration.
That market reaction tells you everything about where the design industry is heading. Not because Stitch replaces Figma today. It does not. But because it compresses the first phase of design, the leap from idea to high-fidelity mockup, from days into minutes. And that compression changes everything downstream.
This guide covers what Stitch actually is, how to use it, where it fits in the design ecosystem, how it compares to every major alternative, and what it means for designers, developers, and product teams in 2026.
This guide is written by Yuma Heymans (@yumahey), founder of o-mega.ai, the AI workforce platform where autonomous agents learn to use business tool stacks and execute workflows.
Contents
- What Is Google Stitch
- The Galileo AI Origin Story
- Features and Capabilities (March 2026)
- How to Use Stitch: A Practical Walkthrough
- DESIGN.md: The Agent-Readable Design System
- The MCP and SDK Integration
- Limitations and What Stitch Cannot Do
- The AI Design Tool Ecosystem
- Where Traditional Tools Still Win
- The Emerging Workflow: Stitch to Figma to Code
- What AI Means for the Design Profession
- What Comes Next
1. What Is Google Stitch
Stitch is an AI-native design tool from Google Labs, available for free at stitch.withgoogle.com. You describe what you want in natural language, upload a sketch or screenshot, or paste a URL, and Stitch generates high-fidelity UI screens. You can stitch multiple screens into interactive prototypes, export to Figma, download as HTML/CSS, Tailwind, or React/JSX, and share the results.
Google coined the term "vibe design" for what Stitch does. The concept is directly descended from "vibe coding," a term that Andrej Karpathy (former Director of AI at Tesla and a founding member of OpenAI) introduced in early 2025. Where vibe coding means describing software to an AI and accepting the output without reading every line, vibe design means describing a visual direction ("premium and minimalist, like Stripe's checkout") and letting the AI generate multiple design directions from that description.
The March 19, 2026 update transformed Stitch from a prompt-to-mockup experiment into what Google calls an "AI-native software design canvas." The update introduced an infinite canvas workspace, voice interaction, a Design Agent that tracks your project's evolution, an Agent Manager for exploring multiple directions in parallel, direct editing of generated designs, and developer integrations including an SDK and MCP server.
Current status: Free, in Google Labs, available in regions where Gemini is available. No paid tier has been announced. Standard Mode runs on Gemini 2.5 Flash (350 generations per month). Experimental Mode runs on Gemini 2.5 Pro with a Gemini 3 backend (50-200 generations per month).
2. The Galileo AI Origin Story
Stitch did not start inside Google. Its lineage traces directly to Galileo AI, a startup founded in 2022 by Arnaud Benard and Helen Zhou. Galileo was one of the first tools that could turn text prompts into polished UI designs, and it quickly built a following among product teams and solo founders.
On May 20, 2025, at Google I/O 2025, Google announced it had acquired Galileo AI. Arnaud Benard confirmed the acquisition on X: "Galileo AI has been acquired by Google. We launched today the next generation of our product, powered by Gemini: Stitch." The original Galileo team joined Google and continued leading the product. Galileo AI was immediately deprecated as a standalone product, with a 30-day migration window for existing users.
The acquisition explains why Stitch was capable from day one. The core generative engine had been developed and refined at Galileo for over two years before Google added Gemini's multimodal capabilities on top of it. This is not a product that Google built from scratch. It is a proven technology that Google scaled up with its own infrastructure.
Timeline
| Date | Milestone |
|---|---|
| 2022 | Galileo AI founded by Arnaud Benard and Helen Zhou |
| May 20, 2025 | Google acquires Galileo AI, launches Stitch at Google I/O |
| December 2025 | Gemini 3 update: better layouts, Prototypes feature |
| March 19, 2026 | Major redesign: infinite canvas, voice, DESIGN.md, SDK, MCP |
3. Features and Capabilities (March 2026)
Two Generation Modes
| Feature | Standard Mode | Experimental Mode |
|---|---|---|
| Model | Gemini 2.5 Flash | Gemini 2.5 Pro (Gemini 3 backend) |
| Speed | Fast (seconds) | Slower (10-15 seconds per screen) |
| Monthly limit | 350 generations | 50-200 generations |
| Text prompts | Yes | Yes |
| Image input | No | Yes (sketches, screenshots, competitor UI) |
| Figma export | Yes | No |
| HTML/CSS export | Yes | Yes |
| React/JSX export | Yes | Yes |
| Tailwind export | Yes | Yes |
The gap between Standard and Experimental modes is notable. If you want to upload a hand-drawn sketch or a competitor's screenshot and generate a UI from it, you need Experimental Mode. But Experimental Mode does not support Figma export. You cannot currently combine image input with Figma export in a single workflow.
Input Methods
Text prompts. Describe what you want: "A dashboard for a SaaS analytics platform with a sidebar navigation, a header showing key metrics, and a main chart area." Stitch generates a high-fidelity screen.
URL extraction. Paste any public URL and Stitch fetches the page, extracts design tokens (colors, fonts, spacing), and generates an initial screen in that visual style. This takes 15-45 seconds. Useful for capturing a competitor's aesthetic as a starting point.
Image upload (Experimental Mode). Upload a wireframe sketch from paper, a whiteboard photo, a screenshot of an existing app, or a competitor's UI. Stitch reads spatial relationships and generates a polished interface from it.
The Infinite Canvas
The March 2026 update replaced the previous chat-based interface with an infinite canvas workspace:
- Multiple visual assets displayed side by side for full project context.
- Add images, text, code snippets, competitor references, and screenshots to the canvas simultaneously.
- A Design Agent tracks the entire project history and reasons across all context on the canvas.
- The Agent Manager enables parallel exploration of multiple design directions at once.
Voice Canvas
New in March 2026. You can speak directly to the canvas:
- Ask for real-time design critiques by describing what you see.
- Design a new page through an interview: the agent asks clarifying questions about layout, content, and user goals before generating.
- Make live updates verbally: "give me three different menu options" or "show me this screen with a darker color palette."
- The agent listens and updates the canvas in real time.
Instant Prototyping
This is one of Stitch's most useful features for product teams:
- Connect multiple screens into an interactive flow by clicking "Stitch" (the feature that gives the tool its name).
- Click "Play" to preview the interactive app flow.
- Stitch automatically assesses the logical order of screens and adds navigation connections between them.
- Ask Stitch to generate the next logical screen based on where a user would click, mapping out entire user journeys without manually prompting each screen.
Direct Edits
Added in March 2026, addressing one of the most requested features since launch: you can now manually tweak text, swap images, and adjust details directly inside Stitch without going back to the prompt. Previously, every change required a new generation.
4. How to Use Stitch: A Practical Walkthrough
Getting Started
- Go to stitch.withgoogle.com.
- Sign in with a Google account (must be 18+, in a Gemini-available region).
- Choose Standard Mode (fast, text-only) or Experimental Mode (slower, image-capable).
- Type your first prompt in the chat box at the bottom left of the canvas.
Writing Effective Prompts
Google's official Stitch Prompt Guide (available on the Google AI Developers Forum) establishes key principles:
Be specific about platform. Stitch restricts itself to one platform per design thread. Specify "mobile app" or "desktop web app" at the start. Switching mid-thread requires starting a new thread.
Include visual references. "A pricing page like Stripe's, but with a dark background and three tiers" produces significantly better results than "a pricing page."
Describe the user. "An onboarding flow for a 25-year-old professional using a fitness app for the first time" gives Stitch context about information density, tone, and visual expectations.
Name the screens you need. Rather than generating one screen and iterating, describe multiple screens upfront: "I need a login screen, a home dashboard, a settings page, and a profile screen."
Multi-Screen Workflow
- Generate your first screen with a detailed prompt.
- Add subsequent screens by describing each one. Stitch maintains visual consistency across screens (though reviewers note this consistency is imperfect with complex designs).
- Connect screens by clicking the "Stitch" button. The tool auto-detects logical navigation flows.
- Preview by clicking "Play" to walk through the interactive prototype.
- Iterate by asking for changes verbally or via text: "make the header more compact" or "change the primary color to blue."
Export Workflow
To Figma (Standard Mode only): Use the Stitch to Figma plugin (available in the Figma Community). Designs export as editable layers with Auto Layouts, named layers, and editable text. Note: the export produces roughly 3x more layers than needed with a messy structure, so cleanup is required.
To code: Download as HTML/CSS, Tailwind CSS, or React/JSX. The React export produces reusable component structures that serve as a solid starting point but require refinement for production use.
Recommended downstream flow: Generate in Stitch → Refine in Figma → Implement with v0, Bolt, or Lovable.
5. DESIGN.md: The Agent-Readable Design System
One of the most significant additions in the March 2026 update is DESIGN.md, a markdown file that encodes a Stitch project's design system.
What It Contains
- Color palette with hex values and usage rules.
- Typography scale (font families, sizes, weights, line heights).
- Spacing scale (padding, margins, gaps).
- Component patterns (buttons, inputs, cards, navigation).
- Style rules and constraints specific to the project.
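Google has not published a formal schema for the file, so the exact layout varies by project, but a generated DESIGN.md might look roughly like the sketch below. Every value here is hypothetical and shown only to illustrate the kind of structure a coding agent would consume:

```markdown
## Colors
- primary: #1A73E8 (primary buttons, active links)
- surface: #FFFFFF (card and page backgrounds)
- text: #202124 (body copy)

## Typography
- heading: Inter, 600, 24px / 32px
- body: Inter, 400, 16px / 24px

## Spacing
- base unit: 8px; components use multiples (8, 16, 24, 32)

## Components
- Button: 8px corner radius, 16px horizontal padding, primary fill
```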
Why It Matters
DESIGN.md is described as "agent-friendly" or "agent-readable." It is structured so that LLM-based coding agents (Claude Code, Cursor, Gemini CLI) can parse it and generate code that follows the design system automatically.
The workflow:
- Design your UI in Stitch.
- Export DESIGN.md from your Stitch project.
- Save it to the root of your code repository.
- When your coding agent generates UI code, it references DESIGN.md to maintain visual consistency.
You can also extract a design system from any public URL. Paste a URL into Stitch, and it generates a DESIGN.md based on the site's colors, typography, and spacing. This is useful for reverse-engineering a competitor's visual language or maintaining consistency with an existing brand.
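Because the file is plain markdown, "agent-readable" mostly means "trivially parseable." The sketch below shows how a coding agent (or a build script) might pull color tokens out of a DESIGN.md. It assumes a simple `name: #HEX` bullet convention under a Colors heading; the real Stitch structure may differ:

```python
import re

# Hypothetical DESIGN.md fragment -- the real Stitch schema is not
# published, so this structure is an assumption for illustration.
DESIGN_MD = """\
## Colors
- primary: #1A73E8 (buttons, links)
- surface: #FFFFFF (backgrounds)
- text: #202124 (body copy)

## Typography
- font-family: Inter
"""

def extract_colors(markdown: str) -> dict[str, str]:
    """Pull `name: #HEX` pairs from the Colors section of a DESIGN.md."""
    colors = {}
    in_colors = False
    for line in markdown.splitlines():
        if line.startswith("## "):
            # Track whether we are inside the Colors section.
            in_colors = line.strip() == "## Colors"
            continue
        if in_colors:
            m = re.match(r"-\s*([\w-]+):\s*(#[0-9A-Fa-f]{6})", line)
            if m:
                colors[m.group(1)] = m.group(2)
    return colors

print(extract_colors(DESIGN_MD))
# {'primary': '#1A73E8', 'surface': '#FFFFFF', 'text': '#202124'}
```

An agent that reads tokens this way can inject them into generated CSS or component code without a human re-typing hex values.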
The Limitation
DESIGN.md is currently tightly coupled to Stitch's ecosystem. It is not an open standard with a published schema and community governance. While any tool can read a markdown file, the specific structure and conventions are Stitch-specific. Whether this becomes a de facto standard or remains proprietary depends on adoption.
6. The MCP and SDK Integration
Google released an official SDK and MCP server for Stitch in March 2026, creating a bridge between design and code.
The SDK
Package: @google/stitch-sdk
Repository: google-labs-code/stitch-sdk
The SDK provides programmatic access to Stitch projects. You can retrieve screen designs, extract code, and access design metadata from your own applications.
The MCP Server
MCP (Model Context Protocol) is the open standard for connecting AI models to external tools. Stitch's MCP server lets coding agents query Stitch projects directly.
Configuration:
```json
{
  "mcpServers": {
    "stitch": {
      "command": "npx",
      "args": ["@_davideast/stitch-mcp", "proxy"]
    }
  }
}
```
Compatible tools: VS Code, Cursor, Claude Code, Gemini CLI, Codex, OpenCode.
Available MCP tools:
- build_site: Builds a site from a project by mapping screens to routes.
- get_screen_code: Retrieves HTML code for a specific screen.
- get_screen_image: Retrieves a screenshot as base64.
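Since MCP is built on JSON-RPC 2.0, a coding agent invokes one of these tools with a `tools/call` request. The request shape below follows the MCP specification; the `screenId` argument name is an assumption about this particular server, not documented fact:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "get_screen_code",
    "arguments": { "screenId": "dashboard" }
  }
}
```

In practice you never write this by hand: the agent's MCP client issues it when you ask for a screen's code.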
The Practical Impact
With the MCP integration, the design-to-code handoff changes fundamentally. Instead of a designer exporting assets and a developer manually translating them, you can tell your coding agent: "implement the dashboard screen from our Stitch project." The agent pulls the design context through the MCP connection, reads the DESIGN.md for style rules, and generates code that matches the design without manual copying.
This also connects to Google AI Studio. Designs can be exported into AI Studio, where they can be connected to live Gemini logic and tested as functional prototypes.
7. Limitations and What Stitch Cannot Do
An honest assessment of Stitch's current limitations is essential for anyone considering it for real work.
Layout Variety
Stitch's outputs default to a small set of familiar layout structures, so many generations look alike with only minor variations. Bitovi (a product design consultancy) reviewed the tool and concluded: "The generated visuals are generic and off-base, unusable even as inspiration." The same review found that navigation varied from page to page even with detailed input.
This is the common weakness of all generative UI tools in 2026. The models have learned patterns from existing interfaces, and they tend to reproduce those patterns rather than innovate. Stitch produces competent, standard-looking UIs. It does not produce creative, boundary-pushing designs.
Accessibility
Generated designs frequently fail WCAG compliance. Color contrast ratios and touch target sizes often need manual correction. If accessibility is a requirement (and for most professional products it should be), Stitch's output is a starting point, not a finished product.
Responsiveness
Stitch generates static, single-viewport layouts. There is no automatic responsive adaptation. If you need a design that works across mobile, tablet, and desktop, you must generate separate screens for each viewport and adapt manually.
No Animation or Stateful Logic
Stitch is purely visual layout generation. No animations, transitions, hover states, loading states, or conditional logic. The prototyping feature connects screens with click-through navigation, but there is no way to demonstrate dynamic interactions.
No Real-Time Collaboration
Unlike Figma, which supports simultaneous multi-user editing, Stitch has no multiplayer capabilities. The Agent Manager enables parallel design exploration within a single user's session, but multiple people cannot work on the same canvas simultaneously. There are no comments, no permissions management, and no cross-project design system management.
Figma Export Quality
The Figma export produces roughly 3x more layers than actually needed with a messy structure. The layers are editable but require significant cleanup before they are production-ready. This is usable for concept iteration but frustrating for anyone expecting clean, organized Figma files.
Mode Incompatibility
Standard Mode supports Figma export but not image input. Experimental Mode supports image input but not Figma export. You cannot currently combine both capabilities.
Regional Access
Availability is tied to Gemini's regional rollout. Users in some countries report access issues even where Gemini is technically available. Google's developer forums have multiple active threads tracking these restrictions.
Monthly Generation Caps
350 Standard + 50-200 Experimental generations per month with no way to purchase more. For individual exploration this is generous. For a team doing daily iterations across multiple projects, it could become a constraint.
8. The AI Design Tool Ecosystem
Stitch exists within a rapidly expanding ecosystem of AI-powered design and development tools. Each occupies a different position in the idea-to-production pipeline.
Comparison Table
| Tool | What It Does | Best For | Pricing | Output |
|---|---|---|---|---|
| Google Stitch | Text/image to UI screens and prototypes | Early exploration, "0 to first draft" | Free (350 standard/mo) | UI screens, prototypes, HTML/CSS, React |
| v0 by Vercel | Text/Figma to production React components | Developers needing production-quality React | Free ($5 credits), Pro $20/mo | React/Next.js with shadcn/ui |
| Lovable | Text to full-stack web apps | Non-technical founders, MVP validation | Free (5/day), Starter $20/mo | Working apps (React + Supabase) |
| Bolt.new | Text to full-stack apps in browser IDE | Developers wanting maximum flexibility | Free (150K tokens/day), Pro $25/mo | Full-stack apps (any framework) |
| Framer | AI-powered animated marketing websites | Marketing teams, landing pages | Free (limited), paid plans | Published live websites |
| Canva Magic Studio | AI graphic design and content creation | Marketers, social media, small business | Free, Pro $12.99/mo | Graphics, presentations, content |
| Adobe Firefly | Generative images, video, vectors | Creative professionals, enterprise | Standard $9.99/mo, Pro $19.99/mo | Images, vectors, video effects |
| Figma | Professional collaborative UI design | Design teams, enterprises | Free (limited), Pro $12/editor/mo | Design files, prototypes, dev specs |
v0 by Vercel
v0 converts natural language and Figma designs into production-quality React and Next.js code using the shadcn/ui component library. The output is significantly more polished than Stitch's React export because v0 is optimized for code quality, not visual exploration.
Key differentiator: v0 includes a "Design Mode" for design-focused work and a Figma-to-code pipeline. It connects directly to Figma files and generates components that match your existing design system.
Pricing: Free tier with 200 credits/month (~$5 value). Premium $20/month (5,000 credits). Team $30/user/month.
When to use v0 over Stitch: When you need production-ready React components rather than visual exploration. v0 is the "refinement and implementation" tool; Stitch is the "exploration and ideation" tool.
Lovable
Lovable converts plain English into full-stack web applications with React frontends, Supabase backends, and one-click deployment. It supports real-time multi-user editing with up to 20 collaborators.
Key differentiator: Lovable produces working applications, not just designs or components. It includes database setup, authentication, and deployment. The "Chat Mode" lets you discuss problems without touching code.
Pricing: Free (5 daily credits). Starter $20/month. Launch $50/month. Scale $100/month.
When to use Lovable over Stitch: When you need a functional application, not just a visual mockup. Lovable is for shipping MVPs; Stitch is for exploring what the MVP should look like.
Bolt.new
Bolt runs a real development environment in the browser. It handles frontend, backend, APIs, and database integrations with the widest framework support of any tool in this category. It upgraded to Claude Opus 4.6 as its generation model.
Key differentiator: Maximum flexibility. You can choose your framework, customize the architecture, and work with the full development stack in a single browser tab.
Pricing: Free (150,000 tokens/day, 1 million/month). Pro $25/month (10 million tokens).
When to use Bolt over Stitch: When you are a developer who wants a complete environment with full control over the stack, not just a UI starting point.
Framer
Framer is an AI-powered website builder focused on published, animated marketing sites. It generates layouts, suggests fonts, creates color palettes, and translates entire sites into multiple languages with one click.
Key differentiator: Best-in-class animation capabilities among no-code tools. Outputs are published live websites, not design files.
When to use Framer over Stitch: When you need to ship a polished marketing website with animations, not design an application UI.
Canva Magic Studio
Canva's AI suite includes over 20 tools: Magic Layers (breaks flat images into editable layers), Magic Design (generates complete designs from prompts), Magic Edit (add/replace image elements from text), and Magic Write (AI copywriting with brand voice).
Key differentiator: Breadth. Canva covers graphic design, presentations, social media content, video, and print. It is a content creation platform, not a product design tool.
March 2026 addition: Magic Layers, which decomposes flat PNG/JPG images into separate editable layers in 8-15 seconds.
When to use Canva over Stitch: When you need marketing materials, social media graphics, or presentations. Not for application UI design.
Adobe Firefly
Adobe's generative AI for images, video, audio, and vector graphics. Integrates with Photoshop, Illustrator, and Adobe Express.
Key differentiator: Enterprise safety. Firefly is trained exclusively on licensed content and offers IP indemnification for enterprise customers. It is positioned as "the safest AI image generator for commercial use."
Pricing: Standard $9.99/month (2,000 credits). Pro $19.99/month (4,000 credits). Premium $199.99/month (50,000 credits).
When to use Firefly over Stitch: When you need generative images, vectors, or video effects for creative production. Not for UI layout design.
9. Where Traditional Tools Still Win
Figma
Figma holds over 80% of the UI design market. It went public on the NYSE in July 2025. Despite the 12% stock drop after Stitch's March 2026 update, Figma's position in professional workflows remains dominant for reasons that AI tools have not yet addressed:
Real-time multiplayer editing. Multiple designers working simultaneously on the same file. This is foundational for design teams and has no equivalent in any AI design tool.
Design systems at scale. Shared component libraries, design tokens, style guides that propagate across hundreds of files. Figma's architecture is built for maintaining consistency across large product surfaces.
2,000+ plugins. An ecosystem of specialized tools for everything from accessibility checking to animation to design handoff.
Version history. Full history of every change, with the ability to restore any previous state.
Dev mode. Developers can inspect designs, measure spacing, copy CSS properties, and extract assets directly.
Figma's own AI additions (2026):
- Figma Make: AI prompt-to-code tool, now generally available, includes Supabase backend integration.
- Figma MCP server: AI agents can write directly to Figma files using existing components and design tokens.
- AI Vectorize: Converts raster images to editable vectors.
- AI Administration: Admins can purchase AI credits with volume-based pricing.
Pricing: Free (3 files). Professional $12/editor/month (annual). Organization $45/editor/month. Enterprise $90/editor/month.
Sketch
Mac-native, performance-focused. Sketch rebuilt its Vector Engine to be significantly faster than Figma for large, asset-heavy documents.
Pricing: $129 flat (one-time, includes one year of updates).
Status in 2026: Declining market share but retains a loyal following among Mac-centric freelance designers who prioritize performance and prefer a one-time purchase over subscription pricing.
The Fundamental Distinction
Stitch and Figma are not competing for the same moment in the design process:
- Stitch: The "0 to 1" phase. Ideation, exploration, first draft. Getting from a blank page to a visual starting point.
- Figma: The "1 to 100" phase. Refinement, production-readiness, collaboration, design systems at scale, developer handoff.
The 12% stock drop reflected market anxiety about where AI design is heading, not a realistic assessment of Stitch replacing Figma today. Professional design teams need collaboration, version control, shared libraries, and structured handoff processes. Stitch provides none of these. What Stitch does is compress the exploration phase from days to minutes, which is genuinely valuable but serves a different function.
10. The Emerging Workflow: Stitch to Figma to Code
The most effective design workflow emerging in 2026 combines AI generation with traditional refinement:
Phase 1: Explore in Stitch (minutes)
Use Stitch to generate multiple design directions from a text description or visual reference. Generate 5-10 variations. Use voice to iterate. Build a basic prototype connecting screens. Export DESIGN.md to capture the design system.
Time: 20-60 minutes for a complete set of initial screens.
Phase 2: Refine in Figma (hours to days)
Export the best direction to Figma. Clean up the layer structure (remember: Stitch exports 3x more layers than needed). Apply your actual design system components. Ensure accessibility compliance. Build responsive variants. Add interaction details, edge cases, and error states. Collaborate with the team for feedback.
Time: Hours to days, depending on complexity.
Phase 3: Implement in Code (hours)
Use v0 for production-quality React components from refined Figma designs. Use Bolt or Lovable for full-stack implementation. Reference DESIGN.md in your coding agent's context for consistency.
Time: Hours for standard interfaces, days for complex applications.
Why This Workflow Works
Each tool handles what it does best. Stitch eliminates the blank-page problem and generates starting points faster than any human can sketch. Figma provides the professional infrastructure for refinement and collaboration. Code generation tools handle the implementation. No single tool covers all three phases well, but the combination is faster than any individual approach.
11. What AI Means for the Design Profession
The Nielsen Norman Group Assessment
The authoritative "State of UX 2026" report from Nielsen Norman Group found:
- The UX job market began stabilizing from late 2024 through 2025.
- Senior practitioners and generalist roles are recovering faster than entry-level positions.
- AI now automates up to 40% of entry-level UI tasks.
- Junior designers who master prompt engineering, UX research, and design systems see 2x more interview callbacks than those who do not.
- The shift is from output-driven work to insight-driven work.
What AI Automates
- Initial layout generation.
- Component composition and arrangement.
- Wireframe iteration.
- Basic visual design production.
- Style exploration and variation.
What AI Does Not Automate
- User research. Understanding real user behavior through interviews, observation, and testing.
- Strategic thinking. Deciding what to build and why. Aligning design with business goals.
- Contextual judgment. Knowing when a pattern is appropriate and when it is not, based on domain-specific understanding.
- Taste and curation. The ability to evaluate multiple outputs and choose the one that best serves the user and the brand.
- Accessibility expertise. Understanding WCAG requirements and their practical implications beyond automated contrast checks.
- System thinking. Designing component libraries and design systems that scale across a large product surface.
The Professional Consensus
The design community has largely converged on the view that AI replaces tasks, not the design role itself. The "AI Designer" is now a recognized formal title on design teams. Designers who focus on research, strategy, and human-centered thinking are thriving. Those who relied solely on UI composition skills, producing layouts and arranging components, are finding that work increasingly automated.
The practical advice for designers in 2026: learn to use AI tools (Stitch, v0, Figma AI) as force multipliers. Master prompt engineering for design generation. Double down on the skills AI cannot replicate: research, strategy, systems thinking, and curated taste.
12. What Comes Next
Near-Term (Google I/O 2026, Expected May 19-20)
Leaked materials and analyst expectations suggest Google will announce at I/O 2026:
- 3D workspace: An immersive environment for visualizing and manipulating designs with spatial awareness.
- Direct React code generation: Full React application generation from designs, not just component-level JSX export.
- Deeper agent integration: More voice options, an agent activity log, and richer MCP export capabilities.
The Stitch Bet
Google's strategic play with Stitch follows a familiar playbook: offer a free, capable tool that integrates deeply with the Google ecosystem (Gemini, AI Studio, Google Workspace), establish a new category ("vibe design"), and capture users who then adopt adjacent paid products.
The question is whether Stitch moves from Google Labs experiment to a fully supported Google product. Google has a history of shutting down Labs experiments (Google Wave, Google+, Stadia). But the Galileo AI acquisition and the SDK/MCP investment suggest a deeper commitment than a typical experiment.
The Broader Trajectory
The design tool market in 2026 is splitting along a clear axis:
AI-native tools (Stitch, v0, Lovable, Bolt) optimize for speed. Generate first, refine later. Best for exploration, prototyping, and getting to a starting point fast.
Professional tools (Figma, Sketch) optimize for quality and collaboration. Build design systems, maintain consistency at scale, collaborate with teams, hand off to developers with precision.
The convergence is already happening. Figma is adding AI generation (Figma Make). Stitch is adding professional features (DESIGN.md, Figma export). v0 is adding design mode alongside its code generation. Within two years, the distinction between "AI design tool" and "traditional design tool" will be less meaningful than it is today. The tools that survive will be the ones that handle both exploration and production well.
For now, the practical reality is clear: use Stitch to explore, Figma to refine, and coding tools to implement. The team that masters all three will ship faster than the team that relies on any one alone.