Lesson 10/20

Pointing AI at the wrong tasks rarely delivers results. Many companies announce "AI-powered" features, but how many actually improve key metrics for themselves and their clients? This lesson shares what worked at Carta—the back-office system for private capital.

Carta manages equity (cap tables) for thousands of VC-backed startups and runs a service-intensive fund administration business where rules exist in legal documents outside their app. Instead of creating generic features, Carta built internal AI agents that extract relevant context from documents and systems, diagnose problems, and recommend—or take—next steps.

The results speak for themselves: in cash reconciliation, an 11-minute process now takes seconds, saving 3,500+ hours monthly. Their goal wasn't superficial cost-cutting but delivering better service and faster resolution on judgment-heavy work.

Carta showed that AI succeeds when you: (1) target workflows where humans must decide but context slows them down, and (2) build the context pipeline so agents can act confidently. You'll learn this playbook here.

The dynamic-context AI playbook

Phase 1: Decide where to apply AI

Start by identifying workflows that pass all three filters below:

Filter 1: Judgment-heavy, but slow due to context

Look for tasks where people need to make decisions but waste time hunting for information. For example, support agents digging through manuals, old tickets, and product details before they can help a customer, or sales reps researching potential clients across LinkedIn, company websites, and CRM data.

Filter 2: Clear "source of truth" exists outside the core system

The knowledge needed for making decisions already exists—it's just hidden in documents, Slack messages, people's expertise, or spread across different tools. Your AI's job is to find and use these existing rules consistently. Examples include: written instructions (even if they're old or scattered), patterns in how decisions are made (even if not written down), and information that can be pulled from different sources (like APIs, databases, and documents).
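The pattern above can be sketched as a small context-aggregation step: pull snippets from each scattered source, tag them with their origin, and hand the agent one bundle. Everything here is illustrative — the source names, fetch calls, and snippet text are invented, not a real connector API.

```python
from dataclasses import dataclass, field

@dataclass
class ContextBundle:
    """One task's worth of context, gathered from scattered sources."""
    task_id: str
    snippets: list[str] = field(default_factory=list)

    def add(self, source: str, text: str) -> None:
        # Tag each snippet with its origin so the agent can cite it.
        self.snippets.append(f"[{source}] {text}")

def gather_context(task_id: str) -> ContextBundle:
    bundle = ContextBundle(task_id)
    # Each call below stands in for a real connector (doc store, CRM API,
    # chat search). The text is hard-coded purely for illustration.
    bundle.add("docs", "Refund policy v3: approvals over $500 need a manager.")
    bundle.add("crm", "Customer tier: enterprise; renewal in 45 days.")
    bundle.add("slack", "Ops thread: waive fees for enterprise renewals this quarter.")
    return bundle

bundle = gather_context("T-1042")
print(len(bundle.snippets))  # 3
```

The design choice that matters is the origin tag: grounded outputs are much easier to validate when every claim can be traced back to a named source.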

Filter 3: Happens often enough to matter

One-off tasks rarely generate sufficient ROI; focus on recurring workflows that deliver value every time they run. Tip: estimate the hours saved per month. Under 10 hours rarely justifies the investment; aim for at least 40 hours monthly before committing to significant AI implementation work.
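The frequency filter is simple arithmetic. A minimal sketch, using the 10- and 40-hour thresholds from the text; the run counts and per-run savings below are made-up inputs, not Carta's numbers.

```python
def monthly_hours_saved(runs_per_month: int, minutes_saved_per_run: float) -> float:
    """Convert per-run time savings into monthly hours."""
    return runs_per_month * minutes_saved_per_run / 60

def worth_building(hours: float) -> str:
    if hours < 10:
        return "skip"   # below the investment floor
    if hours < 40:
        return "maybe"  # borderline; look for a cheaper build
    return "build"      # clears the bar for significant work

# Hypothetical task: 300 runs a month, 11 minutes saved per run.
hours = monthly_hours_saved(runs_per_month=300, minutes_saved_per_run=11)
print(round(hours, 1), worth_building(hours))  # 55.0 build
```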

Phase 2: Do a tiny PoC first

Before building a full-featured solution, run a small pilot to answer these four key questions:

  1. Input: Can the agent access the right data?
  2. Output: What's the most effective way to present diagnosis and recommended next steps?
  3. Validation: Do early users confirm the outputs are accurate, grounded, and useful?
  4. Autonomy: What product changes would enable the agent to complete tasks end-to-end?
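The four questions above work well as a go/no-go checklist for the pilot. A minimal sketch; the field names mirror the list, and the answers are invented examples from a hypothetical reconciliation pilot.

```python
# Each key maps to one of the four PoC questions; an empty string
# would mean the question is still unanswered.
poc = {
    "input": "agent reads bank feed and ledger via read-only API",
    "output": "diagnosis plus top-3 next steps, shown inline",
    "validation": "4 of 5 pilot users rated outputs grounded and useful",
    "autonomy": "needs a 'post adjustment' action to close the loop",
}

ready = all(poc.values())  # True only when every question has an answer
print(ready)  # True
```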

Phase 3: Build the context pipeline

Your AI solution is only as good as the context you feed it.

Step 1: Map the workflow visually

Use any diagramming tool to map each step and decision point. Treat the diagram as living documentation—it's the source code for your system prompt.

Step 2: Convert to structured prompts (recommended)

Create an internal generator that converts diagrams into JSON, then automatically feeds this structured data to your agent.
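A minimal sketch of this step: a workflow exported from a diagram as JSON, then rendered mechanically into system-prompt text. The workflow name, step wording, and `on_fail` field are assumptions for illustration, not Carta's actual schema.

```python
# A diagram export might look like this: ordered steps, each with an
# instruction and an optional failure branch.
workflow = {
    "name": "cash_reconciliation",
    "steps": [
        {"id": 1, "do": "match bank line to expected ledger entry"},
        {"id": 2, "do": "if no match, check pending wires", "on_fail": "escalate"},
        {"id": 3, "do": "post matched entry and mark reconciled"},
    ],
}

def to_system_prompt(wf: dict) -> str:
    """Render the structured workflow into numbered prompt instructions."""
    lines = [f"You execute the '{wf['name']}' workflow. Follow the steps in order:"]
    for step in wf["steps"]:
        suffix = f" (on failure: {step['on_fail']})" if "on_fail" in step else ""
        lines.append(f"{step['id']}. {step['do']}{suffix}")
    return "\n".join(lines)

print(to_system_prompt(workflow))
```

Because the prompt is generated rather than hand-edited, updating the diagram and re-running the generator keeps the agent's instructions in sync with the documented workflow.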

Step 3: Apply "gold-standard" prompting

Include concrete examples, explicit output schemas, detailed task framing, and have domain experts review every version.
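The ingredients above can be seen together in one prompt: explicit task framing, an output schema, and a concrete worked example. The schema fields and the reconciliation example are invented for illustration.

```python
# A "gold-standard" prompt: framing + schema + one concrete example.
PROMPT = """Task: Diagnose why a bank transaction failed to reconcile.

Output JSON schema:
{"diagnosis": str, "confidence": "low"|"medium"|"high", "next_step": str}

Example:
Input: wire of $12,000 arrived; ledger expects $11,950.
Output: {"diagnosis": "intermediary bank fee of $50 deducted in transit",
         "confidence": "high",
         "next_step": "post a $50 fee adjustment and match the entries"}
"""

# A cheap automated check before expert review: the schema keys must
# appear in every prompt version.
required = ["diagnosis", "confidence", "next_step"]
print(all(key in PROMPT for key in required))  # True
```

Checks like this don't replace the expert review the text calls for; they just catch schema drift between versions automatically.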

Phase 4: Iterate fast

Success depends on three factors: model selection, instructions, and context. Separate this intelligence layer from your user interface and workflow infrastructure. Launch simple interfaces quickly, collect expert input, and continually enhance reasoning quality without completely rebuilding your product.

Three tactics to accelerate learning:

  1. When the AI makes a mistake, ask experts to explain how they would solve the problem. Then add their approach directly to your system's instructions.
  2. Allow users to fix mistakes without starting over. When users correct errors and continue working, you can learn exactly where the system fails.
  3. Keep your user interface clean and simple. Don't show technical tool options. Instead, focus on improving how the system works behind the scenes.
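Tactic 1 can be sketched as a tiny feedback loop: capture the expert's approach and append it to the system instructions as a worked rule. The in-memory list and the reconciliation rule below are illustrative stand-ins for a real prompt store.

```python
# Base instructions for a hypothetical reconciliation agent.
instructions: list[str] = [
    "Match bank lines to ledger entries by amount and date.",
]

def add_expert_rule(mistake: str, expert_fix: str) -> None:
    """Turn an expert's explanation of a failure into a standing rule."""
    instructions.append(f"If {mistake}: {expert_fix}")

# After the agent escalates a near-match that an expert resolved by hand:
add_expert_rule(
    mistake="amounts differ by a small round fee",
    expert_fix="check for intermediary bank fees before escalating",
)
print(len(instructions))  # 2
```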
