# Auctor Launch Checklist

## Current State Snapshot (Live from Supabase)
| Area | Status |
|---|---|
| Site target | 1 configured |
| Competitor domains | 33 registered |
| Competitor pages | 805 crawled |
| Page snapshots | 250 captured |
| Documents | 346 indexed |
| Tracked keywords | 178 tracked |
| Keyword clusters | 0 (not yet clustered) |
| Keyword metrics | 0 (not yet enriched) |
| Domain rankings | 0 (not yet collected) |
| SERP snapshots | 0 |
| Content plan items | 1 |
| Content briefs | 1 |
| Content drafts | 0 |
| Published posts (consul) | 33 |
| Strategy directives | 1 active |
| Agent sessions | 4 |
| Activity log entries | 3 |
| Mastra workflow snapshots | 11 |
## Environment Variables
| Variable | Status |
|---|---|
| `NEXT_PUBLIC_SUPABASE_URL` | Set |
| `NEXT_PUBLIC_SUPABASE_ANON_KEY` | Set |
| `SUPABASE_SERVICE_ROLE_KEY` | Set |
| `DATABASE_URL` | Set |
| `LANGEXTRACT_API_KEY` | Set |
| `OPENAI_API_KEY` | Set |
| `ENCRYPTION_KEY` | Set |
| `ANTHROPIC_API_KEY` | MISSING — required for Mastra agents |
| `FIRECRAWL_API_KEY` | MISSING — required for competitor crawling |
| `DATAFORSEO_LOGIN/PASSWORD` | MISSING — required for keyword metrics |
| `GOOGLE_GENERATIVE_AI_API_KEY` | Missing (optional) |
| `GSC_CLIENT_EMAIL/PRIVATE_KEY` | Missing (optional) |
## Phase 1: Infrastructure — Get It Running

### 1.1 Add Missing API Keys

```shell
# Edit .env and add:
ANTHROPIC_API_KEY=sk-ant-...        # REQUIRED — powers all Mastra agents
FIRECRAWL_API_KEY=fc-...            # Needed for competitor page crawling
DATAFORSEO_LOGIN=your_login         # Needed for keyword metrics/SERP data
DATAFORSEO_PASSWORD=your_password
```

- Add `ANTHROPIC_API_KEY` to `.env`
- Add `FIRECRAWL_API_KEY` to `.env`
- Add `DATAFORSEO_LOGIN` + `DATAFORSEO_PASSWORD` to `.env`
- Push updated env to vault: `npx dotenv-vault@latest push`
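As a quick sanity check before starting services, the required keys can be verified with a short script. This is a minimal sketch, not part of the repo; only the variable names come from the checklist above.

```typescript
// check-env.ts — report any missing required keys before starting services.
const required = [
  "ANTHROPIC_API_KEY",
  "FIRECRAWL_API_KEY",
  "DATAFORSEO_LOGIN",
  "DATAFORSEO_PASSWORD",
];

// Return the subset of required keys that are unset or blank.
function missingKeys(env: Record<string, string | undefined>): string[] {
  return required.filter((key) => !env[key] || env[key]!.trim() === "");
}

const missing = missingKeys(process.env);
console.log(
  missing.length === 0
    ? "All required env vars are set."
    : `Missing required env vars: ${missing.join(", ")}`,
);
```

Run it with `npx tsx check-env.ts` (after `.env` is loaded) before `pnpm dev`.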
### 1.2 Verify Database Migrations

All Supabase migrations are already applied (78 migrations through `keyword_clusters_and_serp_features`). All schemas exist (auctor, consul, generative, mastra). No migration work needed.
- Confirmed: all 78 Supabase migrations applied
- Confirmed: `auctor` schema has all tables
- Confirmed: `consul.posts` exists (33 published posts)
- Confirmed: `mastra` schema exists (PostgresStore auto-created tables)
### 1.3 Install Dependencies & Build

```shell
pnpm install
cd backend && uv sync --extra dev && cd ..
pnpm build
```

- `pnpm install` — installs frontend + packages/auctor-mcp
- `cd backend && uv sync --extra dev` — Python backend deps
- `pnpm build` — verify Next.js builds (`ignoreBuildErrors: true` suppresses TS errors)
- Investigate any runtime errors that surface
### 1.4 Start Services

```shell
pnpm dev          # Next.js on port 3000
pnpm dev:backend  # FastAPI on port 8000 (optional, LangExtract workbench only)
```

- `pnpm dev` — frontend starts on http://localhost:3000
- Verify Supabase connectivity: `curl http://localhost:3000/api/content-engine/site`
- Confirm the site config is returned (1 site target already exists)
### 1.5 Branch & Code Hygiene

Current branch: `altonwells/fix-build-errors` — 31 uncommitted changes, 4 deleted files.
- Review uncommitted changes (`git diff`)
- Commit or stash all changes
- Merge to `main` or create a PR
- Resolve divergence (1 ahead, 4 behind main)
## Phase 2: Site Configuration — Verify Foundation

### 2.1 Verify Site Registration

The site target already exists (1 row in `auctor.site_targets`).
- Navigate to `/settings/site`
- Verify: site name, brand name, owned domain are correct
- Verify: `siteKey` is set (used as a foreign key across all data)
- API check: `GET /api/content-engine/site` returns populated config with stats
### 2.2 Verify Agent Configuration

11 agent runtime configs already exist in `auctor.agent_runtime_config`.
- Navigate to `/settings/agents`
- Review model assignments per agent role
- Confirm `ANTHROPIC_API_KEY` is set (agents won't work without it)
- Test: start a chat session via the agent harness — verify it responds
### 2.3 Verify Strategy Directive
1 strategy directive already exists.
- Review the active directive — does it reflect current content goals?
- Update if needed: content types, keyword tiers, posting cadence, priority rules
- This directive feeds the strategy planning workflow with guardrails
## Phase 3: Populate Competitors — Fill the Gaps

### 3.1 Review Existing Competitor Data
33 competitor domains registered, 805 pages crawled, 250 snapshots captured.
- Navigate to `/competitors`
- Review which domains are registered and their crawl status
- Identify gaps: domains with 0 pages crawled, stale crawl dates
- Verify competitor tiers (1-3) and classifications are set
### 3.2 Crawl Missing Pages

Requires `FIRECRAWL_API_KEY`.

- For domains with low page counts, trigger discovery: mode=`discover`
- For blog/content sections, trigger crawl: mode=`crawl_section` with path patterns
- Monitor via the `/activity` log or `/competitors/[id]/status`
- Target: all major competitor blog/content pages crawled with markdown content
### 3.3 Extract Signals from Crawled Pages

51 document extractions exist, but 346 documents are indexed — many pages lack extractions.
- Run the page extraction workflow on un-extracted documents
- Extracts: keyword signals (high/medium/low prominence), topics, entities, schema.org
- Syncs competitor pages → `auctor.documents` for unified access
- Captures SEO metadata: title tags, meta descriptions, heading structure
### 3.4 Collect Domain Rankings (Requires DataForSEO)

Currently 0 rows in `domain_rankings` — no ranking data collected yet.
- Ensure `DATAFORSEO_LOGIN`/`DATAFORSEO_PASSWORD` are set
- Fetch `ranked_keywords` per competitor domain → `domain_rankings` (~$0.05/call)
- Fetch `domain_intersection` for own domain vs competitors → `keyword_domain_overlaps` (~$0.02/call)
- Monthly budget: $100/month default, 80% warning, 95% hard stop
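The 80%/95% thresholds above reduce to a small guard. A sketch, assuming monthly spend is tracked as a running total; `budgetStatus` is an illustrative name, not actual service code:

```typescript
type BudgetStatus = "ok" | "warn" | "stop";

// Map a running monthly spend against the default $100 cap:
// >= 95% of cap: hard stop (refuse further paid DataForSEO calls)
// >= 80% of cap: warning
function budgetStatus(spentUsd: number, capUsd = 100): BudgetStatus {
  if (spentUsd >= capUsd * 0.95) return "stop";
  if (spentUsd >= capUsd * 0.8) return "warn";
  return "ok";
}
```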
## Phase 4: Populate Keywords & Mappings

### 4.1 Review Existing Keywords
178 tracked keywords exist, but 0 clusters and 0 keyword metrics.
- Navigate to `/keywords`
- Review the keyword list — check lifecycle stages, intents, priorities
- Identify keywords lacking metrics or cluster assignments
### 4.2 Derive Additional Keywords from Competitor Data

- Run keyword derivation: `deriveKeywordsFromCompetitors()`
- Sources: LangExtract keyword_signal extractions from competitor pages
- Scoring: frequency across sources, prominence weighting, intent detection
- Filters: 3-120 character length, deduplication
- Review new candidates, approve/reject
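The scoring and filtering rules above might look roughly like the following sketch. The weights and names are assumptions, not the actual `keyword-derivation.ts` implementation:

```typescript
type Prominence = "high" | "medium" | "low";

interface KeywordSignal {
  keyword: string;
  prominence: Prominence;
}

// Illustrative prominence weights; the real service may weight differently.
const weight: Record<Prominence, number> = { high: 3, medium: 2, low: 1 };

// Aggregate signals across sources into scored, deduplicated candidates.
function scoreCandidates(signals: KeywordSignal[]): Map<string, number> {
  const scores = new Map<string, number>();
  for (const s of signals) {
    const kw = s.keyword.trim().toLowerCase(); // dedupe via normalization
    if (kw.length < 3 || kw.length > 120) continue; // length filter from the checklist
    scores.set(kw, (scores.get(kw) ?? 0) + weight[s.prominence]);
  }
  return scores;
}
```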
### 4.3 Enrich Keywords with Metrics (Requires DataForSEO)

0 rows in `keyword_metrics` — no enrichment done yet.
- Refresh metrics for all 178+ tracked keywords (~$0.01/keyword)
- Data collected: search volume, competition, difficulty, CPC, monthly trends
- AI search data: `hasAiOverview`, `aiSearchVolume`
- Budget: ~$1.78 for current keyword count
### 4.4 Cluster Keywords

0 rows in `keyword_clusters` — clustering hasn't been run.
- Run keyword clustering algorithm (Union-Find on SERP URL overlap)
- Requires SERP data — may need to fetch SERPs first via `serp_live` (~$0.02/keyword)
- Output: clusters with a pillar keyword + satellites
- Each cluster gets: `totalVolume`, `avgDifficulty`, `opportunityScore`
- Review via the `/keywords` UI — adjust pillars and groupings
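The checklist names Union-Find on SERP URL overlap. A sketch of that technique follows; the `minOverlap` threshold and data shapes are illustrative, not the actual `keyword-clustering.ts` code:

```typescript
// Union-Find with path compression over keyword identifiers.
class UnionFind {
  private parent = new Map<string, string>();

  find(x: string): string {
    if (!this.parent.has(x)) this.parent.set(x, x);
    const p = this.parent.get(x)!;
    if (p === x) return x;
    const root = this.find(p);
    this.parent.set(x, root); // path compression
    return root;
  }

  union(a: string, b: string): void {
    this.parent.set(this.find(a), this.find(b));
  }
}

// Two keywords join a cluster when their top SERP URLs share
// at least `minOverlap` entries.
function clusterBySerpOverlap(
  serps: Record<string, string[]>, // keyword -> top SERP URLs
  minOverlap = 3,
): Map<string, string[]> {
  const uf = new UnionFind();
  const keywords = Object.keys(serps);
  for (let i = 0; i < keywords.length; i++) {
    for (let j = i + 1; j < keywords.length; j++) {
      const urlsA = new Set(serps[keywords[i]]);
      const shared = serps[keywords[j]].filter((u) => urlsA.has(u)).length;
      if (shared >= minOverlap) uf.union(keywords[i], keywords[j]);
    }
  }
  const clusters = new Map<string, string[]>();
  for (const kw of keywords) {
    const root = uf.find(kw);
    clusters.set(root, [...(clusters.get(root) ?? []), kw]);
  }
  return clusters;
}
```

Picking the cluster member with the highest volume as the pillar (and the rest as satellites) would then be a simple post-processing pass.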
### 4.5 Map Keywords → Content

- Keywords connect to content via `clusterKey` on plan items and documents
- Each plan item targets a `targetKeyword` and optionally links to a cluster
- Briefs receive a `keywordMapping` (primary + secondary → outline sections)
- Lifecycle auto-tracks: discovery → research → targeting → ranking → decline
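The auto-tracked lifecycle above is a linear progression. As a sketch (stage names from the checklist, transition logic illustrative):

```typescript
// Keyword lifecycle stages, in order; a keyword advances one stage at a time
// and stays at the final stage once reached.
const stages = ["discovery", "research", "targeting", "ranking", "decline"] as const;
type Stage = (typeof stages)[number];

function nextStage(current: Stage): Stage {
  const i = stages.indexOf(current);
  return stages[Math.min(i + 1, stages.length - 1)];
}
```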
### 4.6 Google Search Console (Optional — Free)

- If `GSC_CLIENT_EMAIL`/`GSC_PRIVATE_KEY` are available:
  - Pull query performance: clicks, impressions, CTR, position
  - Stored in `gsc_performance` — free, no budget impact
  - Enriches keyword intelligence with real ranking data from your own domain
## Phase 5: Verify MCP Tool Exposure

### 5.1 Start MCP Server

```shell
# MCP config in .mcp.json — auto-registered for Claude Code
# Server: npx tsx scripts/mcp-server.mts
# Auto-starts the Next.js dev server if not running
```

- Ensure `pnpm dev` is running (MCP proxies to the Next.js API)
- Claude Code should auto-discover all auctor tools via `.mcp.json`
### 5.2 Test Each MCP Tool
| # | Test Command | Expected Result |
|---|---|---|
| 1 | auctor_context(resource='strategy') | Returns 1 active strategy directive |
| 2 | auctor_context(resource='landscape') | Returns 33 competitor domains with threat tiers |
| 3 | auctor_context(resource='pipeline') | Returns content stage counts (1 plan, 1 brief, 0 drafts) |
| 4 | auctor_context(resource='schema') | Returns valid parameter schemas for all tools |
| 5 | auctor_intel(entity_type='competitor', entity_id=<pick one>) | Returns competitor deep-dive with pages |
| 6 | auctor_intel(entity_type='keyword', entity_id=<pick one>) | Returns keyword details (metrics empty until Phase 4) |
| 7 | auctor_list_content(stage='plan') | Returns 1 plan item |
| 8 | auctor_list_content(stage='published') | Returns published posts from consul.posts |
| 9 | auctor_manage_plan(action='create', ...) | Creates a new plan item |
| 10 | auctor_discover(type='web', query='test query') | Returns web search results |
| 11 | auctor_run_workflow(action='start', workflow='strategy') | Starts strategy workflow (requires ANTHROPIC_API_KEY) |
- All read tools return populated data
- Write tools (manage_plan, run_workflow) execute successfully
- Activity log captures MCP mutations with `source: 'mcp'`
### 5.3 MCP Authentication (Production)
- For local dev: endpoints are open (no auth needed)
- For deployed access: set the `AUCTOR_MCP_SECRET` bearer token
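The bearer-token check can be sketched as follows, assuming the secret arrives via the standard `Authorization` header. This is illustrative, not the actual middleware:

```typescript
// Validate an incoming Authorization header against AUCTOR_MCP_SECRET.
// If no secret is configured (local dev), endpoints stay open.
function isAuthorized(
  authHeader: string | null,
  secret: string | undefined,
): boolean {
  if (!secret) return true; // local dev: open endpoints
  if (!authHeader || !authHeader.startsWith("Bearer ")) return false;
  return authHeader.slice("Bearer ".length) === secret;
}
```

In production code, prefer a constant-time comparison (e.g. Node's `crypto.timingSafeEqual`) over `===` to avoid timing side channels.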
## Phase 6: Full Content Creation Lifecycle

### 6.1 Strategy → Plan Items
Three paths to create plan items:
Path A — AI Strategy Workflow:
```
auctor_run_workflow(action='start', workflow='strategy')
```

- Competitive intelligence → search landscape → content strategy
- Generates 3-6 prioritized plan items
- Suspends at the calendar review gate → resume with `approval='approve'`
Path B — Manual via MCP:
```
auctor_manage_plan(action='create', title='...', target_keyword='...', content_type='blog_post', priority=1)
```

Path C — Manual via UI:

- Navigate to `/strategy/calendar` → create and schedule items

- Create 2-3 plan items via any path
- Verify they appear in `/strategy/calendar`
- Verify `auctor_list_content(stage='plan')` returns them
### 6.2 Plan Item → Brief

```
auctor_run_workflow(action='start', workflow='brief', plan_item_id=<id>)
```

- Agent generates the brief: angle, outline, keyword mapping, competitor differentiation, voice
- Workflow suspends at the brief approval gate
- Resume: `auctor_run_workflow(action='resume', workflow='brief', run_id=<id>, approval='approve')`
- Verify: `content_briefs` row with `status='approved'`
### 6.3 Brief → Draft

```
auctor_run_workflow(action='start', workflow='draft', brief_id=<id>)
```

- Writer agent generates full markdown (title, meta, body, FAQ, schema, internal links)
- Editor agent reviews and refines (iterative cycles)
- Workflow suspends at the human review gate — deep-edit capability
- Final cleanup agent normalizes without changing the editorial voice
- SEO validation runs (20+ criteria, blocks if score < 10)
- Verify: `content_drafts` row created
### 6.4 Interactive Draft Editing

```
auctor_read_draft(draft_id=<id>)                           # Read section map
auctor_apply_draft_patch(draft_id=<id>, operations=[...])  # Edit sections
auctor_undo_draft_patch(draft_id=<id>)                     # Undo last edit
```

- Read the draft section map
- Apply a test patch (edit a section heading or content)
- Verify patch history is tracked
- Chat-based editing: bind the draft to a thread via `boundThreadId`
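The read, patch, undo flow can be sketched with snapshot-based history. The operation shape below is hypothetical and simpler than the real section-map operations:

```typescript
// Hypothetical patch operation: replace one section's content wholesale.
interface PatchOp {
  sectionId: string;
  content: string;
}

interface Draft {
  sections: Map<string, string>;   // sectionId -> content
  history: Map<string, string>[];  // snapshots for undo
}

// Snapshot the current sections, then apply every operation.
function applyPatch(draft: Draft, ops: PatchOp[]): void {
  draft.history.push(new Map(draft.sections));
  for (const op of ops) draft.sections.set(op.sectionId, op.content);
}

// Restore the most recent snapshot; returns false if there is nothing to undo.
function undoPatch(draft: Draft): boolean {
  const prev = draft.history.pop();
  if (!prev) return false;
  draft.sections = prev;
  return true;
}
```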
### 6.5 Publish

```
auctor_publish(draft_id=<id>, dry_run=true)   # Validate first
auctor_publish(draft_id=<id>, dry_run=false)  # Publish to consul.posts
```

- Dry run returns a validation report (SEO score, warnings, blockers)
- Publish writes to the `consul.posts` table
- Activity log records `content.draft_published`
- Indexing notification fires (currently a stub — wire the Google Indexing API later)
### 6.6 Monitor the Pipeline

- `/activity` — verify the unified activity log captured every step
- `/strategy/calendar` — verify the pipeline view shows items at each stage
- Keyword lifecycle: verify auto-updates (discovery → targeting → ranking)
## Phase 7: Operational Readiness

### 7.1 End-to-End Smoke Test

```
Register site (already done)
  → Add competitor → Crawl pages → Extract signals
  → Derive keywords → Enrich metrics → Cluster
  → Create plan item → Generate brief → Approve
  → Generate draft → Review → Publish
```

- Complete one full cycle from competitor crawl to published post
- Verify activity log has entries for every step
- Verify MCP tools return correct data at each stage
- Verify `/strategy/calendar` reflects the accurate pipeline state
### 7.2 Automations (Optional)

1 automation already exists in `auctor.automations`.
- Review existing automation — is it configured correctly?
- Consider adding: weekly competitor crawl, monthly keyword refresh, daily pipeline digest
- Set `humanInTheLoop: true` for critical automations (publishing, strategy changes)
### 7.3 Cost Guardrails
- DataForSEO: $100/month default cap (80% warning, 95% hard stop)
- Firecrawl: ~$0.01/page crawled — monitor total page count
- Anthropic: agent token costs tracked per workflow in activity log
- Monitor: `costUsd`, `tokenCount`, `durationMs` fields in the activity log
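Rolling up these fields from activity log entries is a simple reduction. A sketch, assuming entries carry the three fields named above:

```typescript
interface ActivityEntry {
  costUsd?: number;
  tokenCount?: number;
  durationMs?: number;
}

// Sum cost, tokens, and duration across a set of activity log entries,
// treating missing fields as zero.
function totals(entries: ActivityEntry[]) {
  return entries.reduce(
    (acc, e) => ({
      costUsd: acc.costUsd + (e.costUsd ?? 0),
      tokenCount: acc.tokenCount + (e.tokenCount ?? 0),
      durationMs: acc.durationMs + (e.durationMs ?? 0),
    }),
    { costUsd: 0, tokenCount: 0, durationMs: 0 },
  );
}
```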
### 7.4 Known Gaps (Non-Blocking)
| Gap | Impact | When to Address |
|---|---|---|
| `ignoreBuildErrors: true` in next.config.mjs | TS errors suppressed at build time | Fix before production deploy |
| Indexing provider is a stub | Published posts don't ping Google | Wire Google Indexing API when ready |
| Analytics integration adapter exists but not wired | No GA data flowing in | After core pipeline is validated |
| Trigger.dev scheduler hooks | Cron automations may not fire | Configure when automations are needed |
| SEO metadata extraction | 0 rows in page_seo_metadata | Run after competitor pages are crawled |
| SERP feature ownership | 0 rows in serp_feature_ownership | Populates via SERP snapshot processing |
## Quick Reference

### Key Files
| Area | File |
|---|---|
| Site config | frontend/src/content-engine/db/repositories.ts |
| Competitor crawl | frontend/src/content-engine/services/competitor-crawl.ts |
| Competitor page actions | frontend/src/content-engine/services/competitor-page-actions.ts |
| Keyword derivation | frontend/src/content-engine/services/keyword-derivation.ts |
| Keyword clustering | frontend/src/content-engine/services/keyword-clustering.ts |
| Content workspace | frontend/src/content-engine/services/content-workspace.ts |
| Draft patches | frontend/src/content-engine/services/draft-patches.ts |
| Strategy workspace | frontend/src/content-engine/services/strategy-workspace.ts |
| MCP tools | frontend/src/mastra/tools/content-engine-tools.ts |
| MCP contracts | packages/auctor-mcp/src/contracts.ts |
| MCP server | scripts/mcp-server.mts |
| Mastra agent | frontend/src/mastra/agents/auctor-agent.ts |
| Workflows | frontend/src/mastra/workflows/ |
| Activity log | frontend/src/content-engine/db/activity-log.ts |
| DB connections | frontend/src/content-engine/db/supabase.ts |
### Commands

```shell
pnpm dev            # Frontend (port 3000)
pnpm dev:backend    # Backend (port 8000)
pnpm dev:all        # Both
pnpm build          # Production build
pnpm lint           # ESLint
pnpm test:frontend  # Vitest
```

### MCP Tools Summary
| Tool | Reads/Writes | Cost |
|---|---|---|
| `auctor_context` | Read | Free |
| `auctor_intel` | Read | Free |
| `auctor_discover` | Read (external API) | $0.01-0.05/call |
| `auctor_list_content` | Read | Free |
| `auctor_get_content` | Read | Free |
| `auctor_manage_plan` | Write | Free |
| `auctor_run_workflow` | Write (triggers agents) | Agent token costs |
| `auctor_publish` | Write | Free |
| `auctor_read_draft` | Read | Free |
| `auctor_apply_draft_patch` | Write | Free |
| `auctor_undo_draft_patch` | Write | Free |