How the AI Works
Tap's intelligence comes from Claude AI (Anthropic's Claude 3.5 Haiku model), which powers every smart feature in the platform — from helping creators write unbiased questions, to guiding participants through adaptive conversations, to generating insights from collected feedback.
This page explains how the AI actually works behind the scenes. For a quick reference list of all 12 AI functions, see AI Integration.
The Big Picture
Tap's AI does three distinct jobs, each with its own approach:
- Mission Wizard — Helps campaign creators define a clear, unbiased opening question
- Conversation Engine — Generates real-time follow-up questions during participant chats
- Analysis Pipeline — Processes all responses to produce sentiment scores, themes, summaries, and Q&A
Summary: Every AI call follows the same pattern — the backend constructs a tailored prompt, sends it to Claude along with relevant context, and parses the structured JSON response. Different functions use different prompt strategies depending on whether they need precision (temperature 0) or creativity (temperature 0.7).
Prompt Architecture
Every AI function uses the same basic structure: a system prompt that defines Claude's role and rules, plus a user message containing the actual data to process. The system prompt is where most of the intelligence lives.
How Prompts Are Constructed
Each AI function builds its prompt dynamically by inserting campaign-specific context (mission text, audience, conversation history) into a carefully designed template. Here's the general pattern:
| Component | Purpose | Example |
|---|---|---|
| Role definition | Tells Claude what persona to adopt | "You are an empathetic interviewer conducting a feedback campaign" |
| Campaign context | Injects the specific campaign's mission and question | "Campaign Mission: Understand remote work satisfaction" |
| Behavioral rules | Numbered list of constraints and goals | "Keep questions open-ended", "Don't repeat topics already covered" |
| Response format | Exact JSON schema the output must follow | { "sentiment": "positive", "themes": [...] } |
| Examples | Sample inputs and outputs for complex functions | Shown in mission wizard prompts |
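The assembly pattern above can be sketched in code. This is a minimal illustration, not Tap's actual implementation — the function and field names here are assumptions:

```typescript
// Illustrative sketch of dynamic prompt assembly. The identifiers
// (CampaignContext, buildFollowupSystemPrompt) are hypothetical.
interface CampaignContext {
  mission: string;
  audience: string;
  history: string[]; // prior conversation turns, oldest first
}

function buildFollowupSystemPrompt(ctx: CampaignContext): string {
  return [
    // Role definition
    "You are an empathetic interviewer conducting a feedback campaign.",
    // Campaign context
    `Campaign Mission: ${ctx.mission}`,
    `Audience: ${ctx.audience}`,
    // Behavioral rules
    "Rules:",
    "1. Keep questions open-ended.",
    "2. Don't repeat topics already covered.",
    // Response format
    'Respond with JSON: { "question": "..." }',
    // Conversation so far, if any
    ...(ctx.history.length ? ["Conversation so far:", ...ctx.history] : []),
  ].join("\n");
}
```

The same skeleton serves every function; only the role, rules, and response schema change.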
Temperature Settings
Temperature controls how creative vs. deterministic the AI's responses are:
| Function Type | Temperature | Why |
|---|---|---|
| Mission clarification | 0 (strict) | Needs precise, consistent information extraction |
| Mission refinement | Default | Balances creativity with accuracy for question options |
| Mission summary | 0.7 (creative) | Should feel warm, inviting, and natural |
| Follow-up questions | Default | Needs to be conversational but relevant |
| All analysis functions | Default | Balances accuracy with readable output |
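One way to encode the table above is a per-function temperature map that is only passed to the API when a value is pinned, letting the model default apply otherwise. A small sketch (names and structure are assumptions):

```typescript
// Hypothetical temperature map mirroring the table above. Functions not
// listed fall through to the API's default temperature.
const TEMPERATURES: Record<string, number> = {
  clarifyMissionInput: 0,      // strict, consistent extraction
  generateMissionSummary: 0.7, // warm, inviting copy
};

function callOptions(fn: string): { temperature?: number } {
  // Only include `temperature` when the function pins one.
  return fn in TEMPERATURES ? { temperature: TEMPERATURES[fn] } : {};
}
```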
The Mission Wizard
The mission wizard is a multi-step AI flow that guides campaign creators from a vague idea to a polished, unbiased opening question. It uses up to three AI functions in sequence.
Summary: The creator describes what feedback they want. The AI extracts the topic, audience, and goal — asking clarifying questions only if needed. Then it generates three unbiased question options for the creator to choose from, and finally writes an engaging summary for the invitation email.
Step 1: Clarify the Input
The clarifyMissionInput function extracts three pieces of information from the creator's initial message:
- Topic — What they want feedback about ("new remote work policy")
- Audience — Who they're asking ("engineering team")
- Success Criteria — What they hope to learn ("assess satisfaction levels")
If all three are present, the AI marks the status as "complete" and moves on. If any are missing, it asks one focused clarifying question. The AI is specifically instructed to never ask how feedback will be collected (because Tap is always the method) — only what, who, and why.
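The completeness check described above reduces to a simple predicate over the three extracted fields. A sketch, assuming a shape like the following (the interface and status names are illustrative, not Tap's actual schema):

```typescript
// Hypothetical shape of the clarification result. The doc guarantees only
// that status becomes "complete" when all three fields are present.
interface MissionClarification {
  topic: string | null;
  audience: string | null;
  successCriteria: string | null;
}

function clarificationStatus(c: MissionClarification): "complete" | "incomplete" {
  // All three present -> move on; otherwise ask one clarifying question.
  return c.topic && c.audience && c.successCriteria ? "complete" : "incomplete";
}
```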
Step 2: Refine the Question
Once the AI has all three pieces, refineMissionQuestion generates three polished question options. The AI is instructed to:
- Detect bias and leading language in the original phrasing
- Ensure questions are open-ended (not yes/no)
- Provide a rationale for why each option works
- Keep questions neutral and specific
The creator picks their preferred option (A, B, or C), and the AI confirms the final question along with a formal mission statement.
Step 3: Generate the Summary
generateMissionSummary writes a 2-3 sentence description of the campaign's purpose, designed to appear in invitation emails. This uses a higher temperature (0.7) to produce warm, inviting language that makes participants want to respond.
The Unified Agent
There's also a unifiedMissionAgent function that combines Steps 1 and 2 into a single conversation. It targets 1-3 total exchanges before presenting question options, making the wizard flow faster for experienced users.
The Conversation Engine
This is Tap's core differentiator — the AI that conducts adaptive, real-time conversations with participants. Two functions work together during each participant interaction.
Follow-Up Decision Logic
When a participant sends a response, the generateFollowupQuestion function decides what happens next. Here's the decision flow:
Summary: The AI reads the full conversation history and decides whether to ask another follow-up or end the conversation. It considers the campaign's follow-up limit, whether the participant has shared enough depth, and whether there are unexplored topics.
The AI's system prompt instructs it to:
- Ask thoughtful questions that dig deeper into the participant's responses
- Keep questions open-ended and conversational (1-2 sentences)
- Focus on understanding perspective, feelings, and suggestions
- Never repeat questions or revisit topics already covered
- Gracefully conclude if the participant has shared everything meaningful
The prompt includes the current follow-up count (e.g., "2 of 3"), so the AI knows how many questions it has left and can pace the conversation accordingly.
Hard stop: Before the AI is even called, the backend checks if currentFollowupCount >= maxFollowups. If so, the conversation ends immediately without an AI call — this is a safeguard that prevents runaway conversations regardless of what the AI might decide.
Soft stop: The AI can end the conversation early by returning the exact string "CONVERSATION_COMPLETE". This happens when the participant has already provided comprehensive feedback and more questions would feel forced.
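Together, the two stops form a simple two-layer guard. A sketch of the decision flow (only CONVERSATION_COMPLETE and the count semantics come from the doc; the function shape is illustrative, and the model call is shown as a synchronous callback for brevity):

```typescript
// Sentinel string the model returns to end the conversation early.
const CONVERSATION_COMPLETE = "CONVERSATION_COMPLETE";

function nextStep(
  currentFollowupCount: number,
  maxFollowups: number,
  askModel: () => string, // wraps the Claude call (sync here for brevity)
): { done: boolean; question?: string } {
  // Hard stop: enforced before any AI call is made.
  if (currentFollowupCount >= maxFollowups) return { done: true };

  const reply = askModel();
  // Soft stop: the model signals completion with the exact sentinel string.
  if (reply.trim() === CONVERSATION_COMPLETE) return { done: true };

  return { done: false, question: reply };
}
```

Note the ordering: the hard stop costs nothing because it short-circuits before the API call, so the follow-up limit holds even if the model misbehaves.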
Real-Time Response Analysis
Each participant response is also passed to analyzeResponse, which extracts:
- Sentiment — Positive, neutral, or negative
- Themes — 3-5 keyword themes from the response
- Summary — A 1-2 sentence summary of the main points
This per-response analysis feeds into the campaign-level analysis later. It runs alongside the follow-up generation but doesn't affect what question is asked next.
Why Haiku?
All conversation functions use Claude 3.5 Haiku rather than a larger model. The reason is latency — participants are waiting in a chat interface for the AI's next question. Haiku responds in under a second, which keeps the conversation feeling natural. The max token limit for follow-ups is just 200 tokens (roughly 1-2 sentences), further reducing response time.
The Analysis Pipeline
When a campaign creator requests analysis, Tap runs four AI functions in parallel against all collected responses. Results are cached in the database so subsequent views load instantly.
Summary: The analysis pipeline processes all participant responses through four parallel AI functions — sentiment analysis, theme extraction, executive summary, and comprehensive analysis (conversation grouping). Results are stored in a campaign_analysis table and only regenerated when new responses come in.
Sentiment Analysis
analyzeSentiment classifies every participant response as positive, neutral, or negative, then calculates the overall distribution.
- Processes responses in batches of 50 to stay within token limits
- Returns counts and percentages for each sentiment category
- On error, defaults to all-neutral (doesn't crash the analysis)
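The batching step is ordinary array chunking. A minimal sketch (the batch size of 50 comes from the doc; the helper name is an assumption):

```typescript
// Split responses into fixed-size batches before sending to the model,
// keeping each request within token limits.
function batch<T>(items: T[], size = 50): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}
```

For example, 120 responses would become batches of 50, 50, and 20, each analyzed in its own API call before the counts are merged.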
Theme Extraction
extractThemes identifies up to 10 recurring themes across all responses, sorted by frequency.
Each theme includes:
- A concise name (e.g., "Communication Gaps")
- A brief description
- The number of responses that mention it
- The percentage of total responses
Executive Summary
generateExecutiveSummary produces a management-ready overview including:
- A 2-3 sentence overview paragraph
- 3-5 key findings as bullet points
- Overall sentiment assessment with reasoning
- Top 3 themes
- 2-4 actionable recommendations
- Participation stats (invited, responded, response rate)
Comprehensive Analysis (Conversation Grouping)
generateComprehensiveAnalysis takes a different approach — instead of analyzing individual responses, it groups entire conversations into thematic categories:
- Creates a one-sentence summary of all feedback
- Identifies 3 key findings
- Groups conversations into 3-6 thematic categories (e.g., "Price Concerns", "Feature Requests")
- Each conversation appears in exactly one group
This view helps creators quickly see clusters of similar feedback and drill into specific conversation transcripts.
Natural Language Q&A
Beyond the automated analysis, creators can ask free-form questions about their data using answerQuery. For example:
- "What are the main complaints?"
- "How do participants feel about the new policy?"
- "What suggestions came up most often?"
The AI answers based only on the actual responses, provides 2-3 supporting quotes, and rates its own confidence as high, medium, or low.
Caching Strategy
Analysis results are stored in a campaign_analysis table with one row per campaign. The cache is considered valid when:
- A cached row exists for the campaign
- The stored response_count matches the current number of responses
- The creator hasn't explicitly requested regeneration
If new responses have come in since the last analysis, the entire analysis is regenerated (all four functions run fresh) and the cached row is replaced via upsert. This keeps results current without redundant AI calls on repeated page views.
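The validity check reduces to three conditions. A sketch, assuming a cached row shaped like the campaign_analysis table described above (field and function names are illustrative):

```typescript
// Hypothetical cached row; the doc specifies a response_count column.
interface CachedAnalysis {
  response_count: number;
}

function cacheIsValid(
  cached: CachedAnalysis | null,
  currentResponseCount: number,
  forceRegenerate: boolean,
): boolean {
  return (
    cached !== null &&                                // a row exists
    cached.response_count === currentResponseCount && // no new responses
    !forceRegenerate                                  // no explicit refresh
  );
}
```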
Error Handling and JSON Extraction
Safe JSON Parsing
AI responses sometimes include markdown code blocks, explanatory text before or after the JSON, or other formatting. The extractJSONFromResponse utility handles this using a character-by-character parser that:
- Strips markdown code block delimiters (the ```json fences)
- Finds the first opening brace {
- Tracks nested braces and string boundaries (ignoring braces inside quotes)
- Returns the complete JSON substring
This parser was specifically designed to be ReDoS-safe — it avoids regex patterns like [\s\S]*? that could cause catastrophic backtracking on malformed input.
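A minimal parser in this spirit can be written as a single linear pass. This is a sketch of the approach, not Tap's actual extractJSONFromResponse:

```typescript
// ReDoS-safe JSON extraction: one linear scan, no backtracking regex.
// (The fence delimiter is built with repeat() so this example's own
// markdown code block is not broken by literal fence characters.)
function extractJSONFromResponse(raw: string): string | null {
  // Remove markdown code-fence delimiters such as the json-tagged fences.
  const fence = "`".repeat(3);
  const text = raw.split(fence + "json").join("").split(fence).join("");

  const start = text.indexOf("{");
  if (start === -1) return null;

  let depth = 0;
  let inString = false;
  let escaped = false;

  for (let i = start; i < text.length; i++) {
    const ch = text[i];
    if (escaped) { escaped = false; continue; }
    if (ch === "\\") { escaped = true; continue; }
    if (ch === '"') { inString = !inString; continue; }
    if (inString) continue; // ignore braces inside quoted strings
    if (ch === "{") depth++;
    else if (ch === "}" && --depth === 0) {
      // Braces balanced: return the complete top-level object.
      return text.slice(start, i + 1);
    }
  }
  return null; // unbalanced braces: no complete object found
}
```

Because every character is visited at most once, pathological inputs cost O(n) rather than triggering the exponential backtracking a lazy regex can.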
Graceful Fallbacks
Every AI function wraps its call in a try-catch block. When something goes wrong (API timeout, malformed response, parsing failure), the system returns safe defaults rather than crashing:
| Function | Fallback Behavior |
|---|---|
| analyzeResponse | Returns neutral sentiment, empty themes, truncated response as summary |
| analyzeSentiment | Returns all responses classified as neutral |
| extractThemes | Returns empty array |
| answerQuery | Returns a friendly error message with low confidence |
| Mission wizard functions | Throws error (displayed to creator in the UI) |
| Follow-up generation | Throws error (retried by the frontend) |
The analysis functions fail silently because partial analysis is better than no analysis. The mission wizard and conversation functions throw errors because recovering from them requires human interaction.
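The silent-failure branch of the table follows one generic shape. A sketch of the pattern (the helper name is an assumption, and Tap's real calls are async; this is simplified to sync for brevity):

```typescript
// Generic try-catch fallback: return a safe default instead of crashing.
function withFallback<T>(call: () => T, fallback: T): T {
  try {
    return call();
  } catch {
    // API timeout, malformed response, or parse failure lands here.
    return fallback;
  }
}
```

Under this pattern, a theme-extraction failure would, for example, yield an empty array rather than aborting the whole analysis run, matching the fallback table above.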