Jurij Tokarski

Nobody Finishes a 15-Minute AI Interview

How I decomposed a monolithic AI discovery interview into 8 standalone tools — each with its own deliverable, landing page, and search intent.

Last year I launched an AI-powered discovery tool for software founders. The idea was simple: instead of paying for a product consultant, sit through a 15-minute AI interview and get a comprehensive development roadmap. Business model, market sizing, personas, competitive analysis, PRD, tech stack, budget, action plan — all in one session, delivered as a PDF report.

The output was genuinely useful. Founders who completed it got something they could hand to a developer and start building from.

But most founders didn't complete it.

Where Sessions Died

I didn't need sophisticated analytics to see the pattern. Founders would start, get three or four exchanges in, and disappear. Not because the questions were wrong. Because they'd hit a question they couldn't answer yet.

"What's your monetization model?" at minute six, right after they'd just gotten excited describing the product idea. Or a market sizing question when they hadn't done that research. The session demanded answers in a fixed order. Real founder thinking doesn't work that way.

I spent weeks trying to fix the session — better prompts, shorter flows, smarter branching. None of it changed the completion rate. I was solving the wrong problem: "how do I get founders to finish a 15-minute interview" instead of "what does a founder actually need, when they need it."

The Insight Came From SEO

While researching keywords for content, I noticed something. "Competitive analysis template for startups" — thousands of monthly searches. "TAM SAM SOM calculator" — same. "PRD generator" — same. Each stage of the founder journey had its own search intent, its own moment of urgency.

I had been thinking about building a standalone tool around one of these keywords. Then it struck me: my discovery tool already does all of this and more. But a founder searching for "lean canvas generator" doesn't think of it as part of a 15-minute discovery interview. They want the canvas. Right now.

The monolithic tool was doing eight things well, packaged in a way that required commitment to all eight. The fix wasn't better prompting. It was decomposition.

Eight Tools, Eight Deliverables

The rebuild started with twelve steps, got trimmed to ten, and settled at eight. One per stage of the founder journey:

  1. Business Model Canvas — lean canvas with revenue streams, cost structure, key partners
  2. Competitive Analysis — positioning matrix, differentiation signals, competitor tech indicators
  3. Market Sizing — TAM/SAM/SOM with growth assumptions
  4. User Personas — typed persona objects with platform preferences and jobs-to-be-done
  5. Feature Prioritization — domain classification (core / supporting / generic)
  6. Tech Strategy — build-vs-buy decisions mapped to domain classification, specific stack recommendations
  7. Project Requirements — scoped feature list, acceptance criteria, out-of-scope boundary
  8. Build Cost & Plan — weekly estimate with a concrete action plan attached

The last two were originally four separate tools: Build vs Buy, Tech Stack Advisor, MVP Cost Estimator, and Action Plan. I merged them in pairs. "Should I build auth?" and "which auth provider?" aren't sequential questions — they're the same question. A cost estimate without an action plan is just a number that makes founders anxious. Eight made more sense than ten or twelve.

Each tool is fully self-contained: it works with no prior context and no prior steps. But it's designed to hand off cleanly if the founder continues.

The Architecture

Each tool gets its own SEO landing page — keyword-targeted hero, explanation copy, FAQ, and an input form, all server-rendered. The page doubles as the app: before generation it's a landing page Google can crawl, after the founder starts it becomes the chat interface. One URL, two render states.
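The routing idea is simple enough to sketch. This is illustrative only — the names (`PageRequest`, `resolveRenderState`) are mine, and the real render logic lives in whatever framework serves the pages:

```typescript
// One URL, two render states: the same route serves crawlable landing
// content until a founder starts a session, then becomes the chat app.

type RenderState = "landing" | "chat";

interface PageRequest {
  toolSlug: string;   // e.g. "lean-canvas-generator" (hypothetical slug)
  sessionId?: string; // present once the founder has started the interview
}

function resolveRenderState(req: PageRequest): RenderState {
  // No session yet: serve the server-rendered landing page
  // (hero, explanation copy, FAQ, input form) that Google can crawl.
  // Session present: hydrate the same URL as the chat interface.
  return req.sessionId ? "chat" : "landing";
}
```

The payoff of keeping both states on one URL is that the keyword that brought the founder in and the tool they use are literally the same page — no redirect, no separate app domain to rank.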

The chat itself is a streaming conversation with a constrained AI model. Each tool has its own system prompt scoped to the decisions that step owns — Feature Prioritization scores by business value only, no effort or cost questions (those belong to later steps). The AI drives the conversation, but the scope is narrow: ask the right questions for this deliverable, produce a typed artifact, stop.
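A per-tool config might look like the sketch below. The exact prompt wording and field names are my assumptions — the point is that exclusions ("do not ask about cost") are written into the scope, not left implicit:

```typescript
// Sketch of a scoped tool definition: each tool's system prompt owns a
// narrow set of decisions and explicitly excludes what later steps own.

interface ToolConfig {
  slug: string;
  systemPrompt: string;
}

const featurePrioritization: ToolConfig = {
  slug: "feature-prioritization",
  systemPrompt: [
    "You help a founder classify features as core, supporting, or generic.",
    "Score by business value only.",
    "Do NOT ask about effort, cost, or tech stack - later steps own those.",
    "When the classification map is complete, finalize the artifact and stop.",
  ].join("\n"),
};
```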

Three server-side tools do the heavy lifting:

  • update_artifact — incrementally builds the step's structured output as the conversation progresses
  • complete_step — finalizes the artifact, captures analysis and summary
  • send_report — collects all completed artifacts, generates a consolidated PDF, delivers via email
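Declared for a function-calling model, the three tools might look like this. Only the tool names come from the actual build; the parameter shapes here are my guesses at a minimal schema:

```typescript
// Hypothetical OpenAI-style tool declarations for the three server-side
// tools. Parameter schemas are illustrative, not the production shapes.

const tools = [
  {
    name: "update_artifact",
    description:
      "Incrementally merge new fields into this step's structured output.",
    parameters: {
      type: "object",
      properties: {
        patch: { type: "object", description: "Partial artifact to merge" },
      },
      required: ["patch"],
    },
  },
  {
    name: "complete_step",
    description: "Finalize the artifact; capture analysis and summary.",
    parameters: {
      type: "object",
      properties: { summary: { type: "string" } },
      required: ["summary"],
    },
  },
  {
    name: "send_report",
    description:
      "Compile all completed artifacts into a PDF and deliver via email.",
    parameters: {
      type: "object",
      properties: {
        email: { type: "string", description: "Captured at report time" },
      },
      required: ["email"],
    },
  },
];
```

Streaming `update_artifact` calls as the conversation progresses is what powers the live artifact panel — the model patches the structured output turn by turn instead of emitting it once at the end.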

The artifact panel shows the structured output updating in real time as the conversation progresses — the founder sees their canvas or competitive matrix forming, not just chat bubbles.

How Context Moves Between Tools

Each tool produces a typed artifact. The Business Model Canvas produces an object with `key_partners`, `revenue_streams`, and `cost_structure`. User Personas produces an array of persona objects. Feature Prioritization produces a classification map.

When a founder continues to the next tool, those artifacts get injected into the new tool's system prompt as structured JSON. Chat history doesn't cross tool boundaries — the back-and-forth of step one is noise inside step six. What crosses is the concluded output.
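In TypeScript terms, the handoff might look like this. The artifact field names mirror the ones mentioned above; the `buildSystemPrompt` helper is a sketch of the injection step, not the production code:

```typescript
// Concluded artifacts - not chat history - cross tool boundaries, injected
// into the next tool's system prompt as structured JSON.

interface BusinessModelCanvas {
  key_partners: string[];
  revenue_streams: string[];
  cost_structure: string[];
}

interface Persona {
  name: string;
  technical_proficiency: "low" | "medium" | "high"; // assumed enum values
  platform_preferences: string[];
}

type Artifacts = {
  business_model_canvas?: BusinessModelCanvas;
  user_personas?: Persona[];
};

function buildSystemPrompt(basePrompt: string, artifacts: Artifacts): string {
  const completed = Object.entries(artifacts).filter(([, v]) => v !== undefined);
  if (completed.length === 0) return basePrompt; // tool works standalone
  const context = completed
    .map(([step, artifact]) => `### ${step}\n${JSON.stringify(artifact, null, 2)}`)
    .join("\n\n");
  return `${basePrompt}\n\n## Prior Step Context\n${context}`;
}
```

Note the empty case: with no completed artifacts the tool gets its base prompt unchanged, which is what makes every entry point standalone.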

Each tool ends with two inline options rendered as suggestion pills on the last AI message: Continue to [next tool] or Send report via email. If the founder requests the report, all completed artifacts get compiled into a PDF and delivered to their inbox. If they continue, the next tool opens with context already loaded. Both outcomes are first-class. Stopping after step two means you have a competitive analysis report — that's a complete deliverable, not an abandoned session.

Email capture happens at the moment a founder requests their report — after they've gotten value, not before they've seen anything. That single change converted capture from a gate into an offer.

The Prompt Engineering That Wasn't

Early in the build, I added a line to each tool's system prompt: "Use prior context if available to inform your analysis." Seemed reasonable.

It didn't work. The model would occasionally reference something from an earlier step, but inconsistently and shallowly. Feature Prioritization wasn't connecting domain classifications to the Tech Strategy decisions that depended on them. I spent two hours trying different phrasings before accepting the problem wasn't the wording.

The fix was specificity. Not "use prior context" — enumerate every upstream artifact by name, every relevant field, and exactly how it should influence the current step:

## Prior Step Context

If the following steps are complete, use their outputs as described:

- **Feature Prioritization** — use `domain_classification` (core / supporting / generic)
  to anchor build-vs-buy decisions. Core = build custom. Generic = always buy.
- **User Personas** — use `technical_proficiency` and `platform_preferences`
  to shape deployment and integration decisions.
- **Market Sizing** — use TAM/SAM/SOM scale to calibrate infrastructure complexity.

The model follows explicit field references. It ignores vague instructions to "use context." The more precisely you enumerate the step name, the field name, and how to apply it — the more consistently the output reflects what prior steps actually found.

Build Around the Deliverable

The session format is an inherited assumption from chat UIs. It made sense for general-purpose assistants. It doesn't make sense for a process that unfolds across days or weeks, where each stage has its own mental context and its own moment of urgency.

Decomposing the monolithic tool changed everything downstream. Eight tools means eight landing pages means eight keywords. Each tool is a complete product for someone who needs just that one thing. The full journey still exists for founders who want it — they just don't have to commit to it upfront.

If your AI tool covers something that spans multiple sittings and mental states, the deliverable is the right unit to build around. Not the conversation.

Live: varstatt.com/discovery

