Senior Product Manager
Q1 2026 (V0) → Q3 2026 (V3)

Content Studio — Co-Pilot

Transform Content Studio from a manual, grid-driven UI into an intelligent conversational workspace — a "force multiplier" for tax researchers that reduces cognitive load, speeds discovery, prevents mistakes, and ultimately enables agentic, end-to-end content curation with human approval.

TL;DR

I led the product definition and phased rollout plan for Content Studio — Co-Pilot, a conversational assistant embedded into the Content Studio UI. Co-Pilot replaces complex grid filtering and documentation lookups with natural-language interactions (V0), evolves to pre-fill and draft creation (V1), expands into validation & impact analysis (V2), and culminates in agentic, autonomous workflows that draft and propose content changes requiring only human sign-off (V3). Success criteria include ~40% faster time-to-locate, measurable adoption targets, reduced form errors, and faster onboarding for new researchers.

The Problem

Researchers spend too much time navigating dense grids, hunting for jurisdictional records, and switching contexts between documentation, change tickets, and the editor.

High cognitive load

Complex filters, many columns (jurisdictions, tax regions, spatial IDs), and nested lists make retrieval slow.

Repetitive data entry

Frequent, error-prone form work (wrong country/state combinations, bulk rate edits).

Context switching

Finding definitions, ticket statuses and previous research requires leaving the current screen.

Quality gaps

Duplicate entries, missed impacts, and late detection of related rules create compliance risk.

Opportunity:

A conversational assistant that understands intent, performs complex UI actions, pre-fills forms, explains domain concepts, and surfaces impacts reduces friction and increases accuracy.

Success Metrics (Primary Outcomes)

~40%
Reduction in 'Time to Locate' specific jurisdiction data
30%
Daily active researchers using Co-Pilot 5+ times/day within 3 months
25%
Reduction in form-entry errors via AI pre-fill and validation
Faster
Time-to-proficiency for new researchers with in-context definitions

My Remit & Constraints

  • Define product vision and phased roadmap (V0 → V3).
  • Deliver V0 (retrieval/navigation/context) in Q1 2026 and execute the follow-on phases with measurable acceptance criteria.
  • Integrate Co-Pilot with content IM, authorization, telemetry, and governance.
  • Maintain security, auditability and role awareness suitable for regulatory content workflows.

Phase-by-Phase Plan & Delivered Capabilities

V0

The Navigator

Reducing clicks — Q1'26

Focus: Retrieval, Navigation, and Context

Goal: Replace complex grid filtering and documentation lookups with a chat interface.

Key Capabilities

  • Natural Language Filtering: 'Show me active cities in Colorado' → agent applies grid filters and returns results.
  • Deep Linking / Navigation: 'Go to Vandenburgh county details' → agent navigates to the record, opening the details pane.
  • Contextual Definitions: 'What is a STJ?' → agent pulls and displays the definition from internal docs/wiki.
  • Role Awareness: agent displays permission caveats ('I can show this, but you don't have edit rights.').

Acceptance Criteria

  • Chat accepts NL filter/navigation queries and applies grid/state changes correctly in 90%+ of user test cases.
  • Documentation lookup returns correct definition pages/snippets for core IM terms.
  • Agent respects role/permission model and surfaces clear messages for restricted actions.
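The V0 contract above is easiest to test when the agent emits a structured "filter intent" rather than free text. A minimal sketch, assuming a hypothetical `FilterIntent` schema and field names (not the shipped implementation):

```python
from dataclasses import dataclass, field

# Illustrative sketch: the agent translates a natural-language query
# into a structured filter action. Schema and field names are assumed.
@dataclass
class FilterIntent:
    entity: str                                  # e.g. "city"
    filters: dict = field(default_factory=dict)  # e.g. {"state": "CO"}

def apply_intent(intent: FilterIntent, records: list) -> list:
    """Apply a FilterIntent to an in-memory grid of records."""
    return [
        r for r in records
        if r.get("type") == intent.entity
        and all(r.get(k) == v for k, v in intent.filters.items())
    ]

# 'Show me active cities in Colorado' expressed as a structured action:
intent = FilterIntent(entity="city", filters={"state": "CO", "status": "active"})
grid = [
    {"name": "Denver", "type": "city", "state": "CO", "status": "active"},
    {"name": "Aurora", "type": "city", "state": "CO", "status": "inactive"},
    {"name": "Boulder", "type": "county", "state": "CO", "status": "active"},
]
matches = apply_intent(intent, grid)
```

A structured intent is also what makes the 90%+ acceptance criterion measurable: each NL query maps to a grid state that can be asserted in user test cases.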
V1

The Action Assistant

Creating content — Q1'26

Focus: Pre-filling, Draft Creation, and Ticket Intelligence

Goal: Reduce repetitive data entry and context switching.

Key Capabilities

  • Smart Pre-fill (Creation): 'Add a Special Jurisdiction for Denver' → opens the Add Jurisdiction modal with Country/State/Type pre-filled.
  • Drafting: 'Draft a note for the Maryland update' → agent creates a standardized research-note draft inside the Change Set.
  • Ticket Intelligence: 'What's the status of changeset #123?' → agent pulls live ticket status and a summary.

Acceptance Criteria

  • Pre-fill accuracy ≥ 85% across common entity types.
  • Created drafts follow standard templates and surface helpful metadata (source, suggested citations).
  • Real-time ticket lookups return correct status and link into the UI.
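The pre-fill behavior can be sketched as resolving a known place to form defaults while leaving every field editable. The lookup table and field names below are hypothetical; in practice the defaults would come from the content IM, not a hard-coded dict:

```python
# Hypothetical place lookup for illustration only.
KNOWN_PLACES = {"denver": {"country": "US", "state": "CO"}}

def prefill_jurisdiction_form(place: str, jurisdiction_type: str) -> dict:
    """Return pre-filled (but fully editable) fields for the Add
    Jurisdiction modal; nothing is saved without researcher review."""
    defaults = KNOWN_PLACES.get(place.lower(), {})
    return {
        "country": defaults.get("country", ""),
        "state": defaults.get("state", ""),
        "type": jurisdiction_type,
        "name": place,
    }

form = prefill_jurisdiction_form("Denver", "Special Jurisdiction")
```

Keeping the pre-fill a plain, editable dict (rather than a committed write) is what makes the ≥85% accuracy target safe to ship: a wrong guess costs one correction, not one bad record.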
V2

The Proactive Analyst

Catching mistakes — Q2'26

Focus: Validation, Anomaly Detection, and Impact Analysis

Goal: Improve data quality and prevent compliance gaps.

Key Capabilities

  • Impact Analysis: 'If I expire this jurisdiction, what happens?' → lists connected regions, affected rules, and downstream consumers.
  • Duplicate Detection: interrupts on duplicates and provides likely matches (e.g., 'Denver Local District looks like ID 2001 — review?').
  • Change Summaries: daily digest, e.g., '3 active jurisdictions were modified while you were offline.'

Acceptance Criteria

  • Impact analysis accuracy (complete set of downstream objects) validated in UAT scenarios.
  • Duplicate detection precision high enough to avoid nuisance interrupts (target > 90%).
  • Daily digest accuracy and usefulness validated by researcher feedback.
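The precision target can be illustrated with a minimal fuzzy-match sketch. The threshold and the stdlib scorer here are assumptions for illustration; a production matcher would be jurisdiction-aware:

```python
from difflib import SequenceMatcher

def likely_duplicates(candidate: str, existing: list, threshold: float = 0.85) -> list:
    """Return (id, name, score) for existing records similar to the
    candidate name. A high threshold trades recall for precision,
    so the agent interrupts only on strong matches."""
    hits = []
    for rec in existing:
        score = SequenceMatcher(None, candidate.lower(), rec["name"].lower()).ratio()
        if score >= threshold:
            hits.append((rec["id"], rec["name"], round(score, 2)))
    return sorted(hits, key=lambda h: -h[2])

existing = [
    {"id": 2001, "name": "Denver Local District"},
    {"id": 2002, "name": "Boulder Local District"},
]
hits = likely_duplicates("Denver Local Dist", existing)  # only ID 2001 clears the bar
```

Tuning the threshold against labeled near-duplicates is how the >90% precision target gets validated before the interrupt ships.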
V3

The Autonomous Agent

Human approval workflows — Q3'26

Focus: Agentic Workflows and External Triggers

Goal: The agent performs end-to-end workflows requiring only human sign-off.

Key Capabilities

  • Ingestion-to-Action: agent monitors RSS feeds, drafts necessary updates, and places a review-ready change set in front of researchers.
  • Self-Healing: agent detects missing effective dates or inconsistent mappings and proposes fixes.
  • Human-in-the-loop Approval: agent creates finished drafts; researchers review, adjust, and approve for publish.

Acceptance Criteria

  • Agent can autonomously create candidate change sets for a subset of low-risk update types with >80% correctness in initial pilot.
  • Governance workflows, audit trails, and human approval gates are in place and validated.
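The approval gate can be sketched as a simple state machine in which publishing is impossible without a prior, audited human approval. Statuses and names below are illustrative assumptions, not the shipped workflow:

```python
from dataclasses import dataclass, field

@dataclass
class ChangeSet:
    summary: str
    status: str = "draft"                         # draft -> approved -> published
    audit_trail: list = field(default_factory=list)

def approve(cs: ChangeSet, reviewer: str) -> None:
    """Record an explicit human sign-off in the audit trail."""
    cs.status = "approved"
    cs.audit_trail.append(f"approved by {reviewer}")

def publish(cs: ChangeSet) -> None:
    """Refuse to publish any change set that lacks human approval."""
    if cs.status != "approved":
        raise PermissionError("publish requires prior human approval")
    cs.status = "published"
    cs.audit_trail.append("published")

cs = ChangeSet("Expire STJ 2001 per state bulletin")
approve(cs, "researcher@example")
publish(cs)
```

Making the gate a hard precondition (an exception, not a warning) is what lets the agent draft autonomously while keeping sign-off non-bypassable.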

Design & Implementation Approach

Key Design Decisions

Incremental automation

Start with retrieval and pre-fill (low-risk), progressively add validation and agentic features.

Explainability

Agent displays the provenance of suggestions (which doc, which rule, which snippet).

Role awareness & governance

Agents annotate suggested edits with permission info and require explicit sign-off for write actions.

Fallback & undo

All agent-initiated actions are reversible and logged; pre-fills are editable before save.
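A minimal sketch of the reversibility principle, assuming (hypothetically) that every agent action registers its own inverse when it is logged:

```python
class ActionLog:
    """Audit log where each entry carries the function that undoes it."""

    def __init__(self):
        self._entries = []  # (description, undo_fn), newest last

    def record(self, description, undo_fn):
        self._entries.append((description, undo_fn))

    def undo_last(self):
        description, undo_fn = self._entries.pop()
        undo_fn()           # reverse the action
        return description  # surfaced back to the researcher

# Example: undoing an agent-applied grid filter.
state = {"filters": {"state": "CO"}}
log = ActionLog()
log.record("Applied filter state=CO", lambda: state["filters"].pop("state"))
undone = log.undo_last()
```

Pairing every write with its inverse at record time keeps undo trivial even as the set of agent actions grows.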

UX Patterns

Chat pane + 'live' context

Chat lives in a side pane and keeps contextual state (current record, active filters).

Action cards

Agent returns actionable cards (e.g., filter applied, draft created) with buttons to undo or open details.

Confidence & provenance

Suggestions show confidence and source (e.g., 'suggested from: Regulation X, 2021, page 3').
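Together, these UX patterns suggest a card payload that carries its own confidence and provenance so the UI can render source and an undo affordance alongside the result. The field names here are assumptions, not the shipped schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ActionCard:
    """Hypothetical shape of an action card returned by the agent."""
    action: str           # e.g. "filter_applied", "draft_created"
    summary: str          # human-readable result shown on the card
    confidence: float     # 0.0-1.0, surfaced to the researcher
    source: str           # provenance, e.g. the doc or regulation cited
    undoable: bool = True # undo button rendered by default

card = ActionCard(
    action="draft_created",
    summary="Research note drafted for the Maryland update",
    confidence=0.92,
    source="Internal template: Research Note v2",
)
```

Freezing the card makes it a faithful audit record: the UI can display and undo it, but not silently mutate what the agent claimed.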

Challenges & Trade-offs

Overtrust vs underutilization

Too much automation risks overtrust; too little hurts adoption. We mitigate with conservative defaults and clear provenance.

Context window & latency

Providing deep, correct context while keeping interactive latency low requires smart caching and prompt design.

Security & permissions

Agents must never bypass authorization; all actions must be audited.

Error handling & interruptions

Agents must gracefully handle ambiguous requests; design interruptible flows and easy undo.

Key Learnings

  • Start narrow & useful: Ship retrieval/useful pre-fill before heavy automation.
  • Balance confidence & control: Show confidence, but require human confirmation for writes.
  • Design for recoverability: Always make actions editable and reversible.
  • Make provenance visible: Researchers adopt faster when they know why an agent suggested something.
  • Measure adoption + task time: Success is behavioral — not just models or features.

"The ultimate vision: an autonomous Content Studio where the research team reviews and approves content, while the Agent handles everything else — from ingestion to validation to publishing."