MDX Limo
ContentEngine — Technical Specification v3

Author: Alton Wells
Date: March 2026
Status: Final Architecture Specification


Executive Summary

ContentEngine is an autonomous AI content production system built on Mastra (TypeScript agent framework), LangExtract (structured document extraction), and Firecrawl (web crawling/sitemap intelligence). It replaces the conventional RAG/vector embedding approach with a three-layer memory architecture: structured extraction via LangExtract, hierarchical document summaries, and an explicit content relationship graph — all stored in PostgreSQL.

The system operates across three workflow layers — Strategy, Content, and Production — with human control points at strategic decision-making (content calendar approval), editorial review (draft quality gate), and final publication (image placement, last-mile polish). AI handles everything between those gates: competitive intelligence gathering, search landscape analysis, content planning, brief generation, writing, editing, SEO optimization, and publishing.

Core architectural principles:

  • Humans set strategy and approve output. AI executes everything in between.
  • No vectors. No embeddings. Structured extractions + hierarchical summaries + graph relationships replace RAG. Stanford's 2025 research shows embedding precision collapses 87% beyond 50K documents. Our approach scales without dimensional decay.
  • Programmatic SEO validation is non-negotiable. Every published piece must pass a 10/10 deterministic SEO check. No exceptions.
  • Context is navigated, not stuffed. Agents traverse a hierarchy (Domain → Cluster → Page → Entity) to load only what they need. Total context per planning session: ~14,000 tokens instead of millions.
  • Everything is traceable. Every extraction maps to its source location. Every graph edge has provenance. Every agent decision can be audited.

Table of Contents

  1. Frameworks, Libraries & Integrations
  2. Data Architecture
  3. Master Workflow
  4. Strategy Layer — Detailed Specification
  5. Content Layer — Detailed Specification
  6. Production Layer — Detailed Specification
  7. Content Calendar & Application Interface
  8. Programmatic SEO Validation Engine
  9. Cost Model
  10. Risk Matrix
  11. Future: Filesystem-as-Context Architecture

1. Frameworks, Libraries & Integrations

Core Framework

| Component | Technology | Role |
| --- | --- | --- |
| Mastra | @mastra/core (TypeScript) | Agent definitions, workflow orchestration, tool system, suspend/resume, Hono server generation, Mastra Studio debugging |
| Vercel AI SDK | Foundation layer under Mastra | Unified model routing (anthropic/claude-sonnet-4-20250514), streaming, structured output, tool calling protocol |
| Zod | Schema validation throughout | Input/output schemas for every agent, tool, and workflow step. Compile-time type safety. |

Extraction & Intelligence

| Component | Technology | Role |
| --- | --- | --- |
| LangExtract | Python library (Google, Apache 2.0) | Structured extraction from unstructured text. Maps every entity to exact source location. Multi-pass extraction for high recall on long documents. Runs as FastAPI sidecar. |
| Firecrawl | Web crawling API/SDK | Competitor sitemap discovery, page crawling, content extraction. Handles JavaScript-rendered pages, rate limiting, and anti-bot bypassing. Replaces custom sitemap crawlers. |
| Gemini 2.5 Flash | LLM (via LangExtract) | Extraction model. Fast, cheap ($0.15/1M tokens), high quality for structured extraction tasks. |

LLM Providers

| Model | Use Case | Why |
| --- | --- | --- |
| Claude Sonnet 4 (anthropic/claude-sonnet-4-20250514) | All Mastra agents (strategy, writing, editing, briefs) | Best quality-to-cost ratio for complex reasoning, long-form writing, and multi-step planning |
| Gemini 2.5 Flash | LangExtract extraction pipelines, hierarchical summary generation | Fast + cheap for structured extraction and summarization. LangExtract's recommended default |

Data & Storage

| Component | Technology | Role |
| --- | --- | --- |
| PostgreSQL | Primary database (no pgvector) | All structured data, extraction entities, graph adjacency table, summaries, content plans. JSONB for flexible extraction attributes. |
| Drizzle ORM | TypeScript ORM | Type-safe database access from Mastra tools and API routes |

Application & Deployment

| Component | Technology | Role |
| --- | --- | --- |
| Next.js 15+ | Web framework | App UI (calendar, editor, dashboards), API routes, SSR |
| Trigger.dev | Durable job scheduling | Scheduled crawls, extraction jobs, summary regeneration, post-publish monitoring. Retry on failure. |
| Slack API + Email | Notifications | Human review alerts, competitor change digests, ranking alerts |
| Vercel | App hosting | Next.js app, serverless API routes |
| Railway | Worker hosting | Mastra agent workers, LangExtract FastAPI service, Trigger.dev jobs |

External APIs

| API | Purpose | Integration Method |
| --- | --- | --- |
| Semrush or Ahrefs | Keyword data, search volume, difficulty, SERP features, competitor rankings | REST API via Mastra tool |
| Google Search Console | Our impressions, clicks, CTR, average position per query | OAuth2 via Mastra tool |
| Firecrawl | Competitor sitemap crawling, page content extraction | SDK/API via Mastra tool + scheduled jobs |
| Google Indexing API / IndexNow | Fast crawl requests for newly published content | REST API via Publishing Agent |
| CMS (WordPress REST / Sanity / Contentful) | Content publication endpoint | Adapter pattern — Mastra tool per CMS |

Integration Architecture

┌─────────────────────────────────────────────────────────┐
│                  Next.js Application                    │
│           (Calendar, Editor, Dashboards)                │
└────────────────────────┬────────────────────────────────┘
                         │ API Routes
┌─────────────────────────────────────────────────────────┐
│                 Mastra Agent Server                     │
│        (Hono HTTP, auto-generated endpoints)            │
│                                                         │
│   Agents ←→ Tools ←→ PostgreSQL                         │
│                  ←→ LangExtract Service (HTTP)          │
│                  ←→ Firecrawl API                       │
│                  ←→ Semrush/Ahrefs API                  │
│                  ←→ Google Search Console API           │
│                  ←→ CMS API                             │
└────────────────────────┬────────────────────────────────┘
                         │
          ┌──────────────┼──────────────┐
          ▼              ▼              ▼
┌──────────────┐ ┌──────────────┐ ┌──────────────┐
│  PostgreSQL  │ │  LangExtract │ │  Trigger.dev │
│  (all data)  │ │   (FastAPI)  │ │  (scheduled  │
│              │ │  Python 3.11 │ │    jobs)     │
└──────────────┘ └──────────────┘ └──────────────┘

2. Data Architecture

2.1 Memory Model: Three Layers Replacing RAG

Layer 1 — LangExtract Structured Extraction: Every document entering the system (competitor pages, our content, SERPs, brand voice samples) is processed through LangExtract extraction pipelines. Raw text becomes structured, source-grounded entities in typed Postgres tables. Agents query structured data, not fuzzy similarity scores.

Layer 2 — Hierarchical Document Summaries: A four-level summary tree where agents navigate from broad (domain-level) to specific (entity-level) context. Each level is an LLM-generated summary of the level below it. Agents start at Level 0 and drill down only into relevant branches.

Level 0: Domain Summary (~500 tokens)
  └── Level 1: Cluster Summaries (~300 tokens each, one per topic pillar)
        └── Level 2: Page Summaries (~150 tokens each, one per page)
              └── Level 3: LangExtract Entities (structured rows per page)
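Navigation over this tree can be sketched as a filter-and-descend walk. The sketch below is illustrative only: the node shape and the `loadContext` helper are assumptions, not the spec's actual tool implementations.

```typescript
interface SummaryNode {
  level: 0 | 1 | 2;
  label: string;          // e.g. cluster or page title
  text: string;           // the LLM-generated summary at this level
  tokens: number;         // rough token count of `text`
  children: SummaryNode[];
}

// Walk the tree top-down, descending only into branches whose label matches
// the topic of interest. Everything outside those branches is represented
// by nothing more than its parent's summary.
function loadContext(root: SummaryNode, topic: string): { text: string; tokens: number } {
  const parts: SummaryNode[] = [root];
  let frontier = root.children;
  while (frontier.length > 0) {
    const relevant = frontier.filter((n) => n.label.includes(topic));
    parts.push(...relevant);
    frontier = relevant.flatMap((n) => n.children);
  }
  return {
    text: parts.map((p) => p.text).join("\n"),
    tokens: parts.reduce((sum, p) => sum + p.tokens, 0),
  };
}
```

A domain summary plus one cluster and one page summary then costs on the order of a thousand tokens, regardless of how many other clusters exist.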

Layer 3 — Content Relationship Graph: An adjacency table in Postgres with typed edges connecting content entities. Replaces vector similarity for all "find related content" operations.

CREATE TABLE content_relationships (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  source_type VARCHAR(50) NOT NULL,       -- 'our_page', 'competitor_page', 'keyword', 'topic'
  source_id UUID NOT NULL,
  relationship_type VARCHAR(50) NOT NULL,
  target_type VARCHAR(50) NOT NULL,
  target_id UUID NOT NULL,
  metadata JSONB DEFAULT '{}',
  confidence FLOAT DEFAULT 1.0,
  created_by VARCHAR(50) NOT NULL,        -- 'system', 'agent:competitive-intel', 'human'
  created_at TIMESTAMP DEFAULT now(),
  last_validated TIMESTAMP DEFAULT now()
);

CREATE INDEX idx_rel_source ON content_relationships(source_type, source_id);
CREATE INDEX idx_rel_target ON content_relationships(target_type, target_id);
CREATE INDEX idx_rel_type ON content_relationships(relationship_type);

Relationship types:

| Edge Type | Source → Target | What It Means |
| --- | --- | --- |
| covers_topic | page → topic | This page covers this topic (with depth: shallow/deep) |
| targets_keyword | page → keyword | This page targets this keyword (with current_rank) |
| competes_with | our_page → competitor_page | These pages compete for the same keyword/topic |
| same_topic_as | competitor_page → (null or our_page) | Competitor covers a topic we may or may not have |
| links_to | our_page → our_page | Actual internal link exists |
| should_link_to | our_page → our_page | Agent-recommended linking opportunity |
| cannibalizes | our_page → our_page | Both target the same primary keyword |
| outperforms | competitor_page → our_page | Competitor ranks higher for a shared keyword |
| gap | topic → (null) | Topic with competitor coverage but none of ours |
| child_of | topic → topic_cluster | Hierarchical topic relationship |
| refreshes | our_page → our_page | Newer version should replace the older |
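Traversal over these edges reduces to a bounded breadth-first walk that follows only the requested edge types. A minimal in-memory sketch, assuming rows shaped like the adjacency table (the production tool would issue SQL against content_relationships instead; the names here are illustrative):

```typescript
interface Edge {
  sourceType: string; sourceId: string;
  relationshipType: string;
  targetType: string; targetId: string;
}

// Follow only the requested edge types outward from a start node,
// up to maxDepth hops, returning every matching edge encountered.
function traverseGraph(
  edges: Edge[],
  start: { type: string; id: string },
  edgeTypes: string[],
  maxDepth: number,
): Edge[] {
  const found: Edge[] = [];
  const visited = new Set<string>([`${start.type}:${start.id}`]);
  let frontier = [start];
  for (let depth = 0; depth < maxDepth && frontier.length > 0; depth++) {
    const next: { type: string; id: string }[] = [];
    for (const node of frontier) {
      for (const e of edges) {
        if (e.sourceType !== node.type || e.sourceId !== node.id) continue;
        if (!edgeTypes.includes(e.relationshipType)) continue;
        found.push(e);
        const key = `${e.targetType}:${e.targetId}`;
        if (!visited.has(key)) {
          visited.add(key);
          next.push({ type: e.targetType, id: e.targetId });
        }
      }
    }
    frontier = next;
  }
  return found;
}
```

Because edge types are explicit, "find related content" becomes a precise query (e.g. cannibalizes plus outperforms) rather than a similarity threshold.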

2.2 Database Schema

Competitor Intelligence:

| Table | Key Fields |
| --- | --- |
| competitors | id, name, domain, industry_vertical, notes, created_at |
| competitor_pages | id, competitor_id, url, title, meta_description, h1, word_count, published_at, last_modified, content_hash, raw_text, last_crawled_at |
| competitor_extractions | id, page_id, extraction_class, extraction_text, attributes (JSONB), source_location, extraction_run_id, created_at |
| competitor_changes | id, page_id, change_type (new/updated/removed), detected_at, diff_summary |
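Change detection against competitor_pages can hinge on the stored content_hash. A hedged sketch, assuming SHA-256 hashes of raw_text and hypothetical map inputs (the real job would read and write Postgres rows via the scheduled crawler):

```typescript
import { createHash } from "node:crypto";

// Classify competitor page changes the way competitor_changes records them
// (new / updated / removed), keyed on a content hash of the raw text.
type ChangeType = "new" | "updated" | "removed";

const hash = (text: string) => createHash("sha256").update(text).digest("hex");

function diffCrawl(
  stored: Map<string, string>,   // url -> content_hash from the last crawl
  crawled: Map<string, string>,  // url -> raw_text from this crawl
): { url: string; change: ChangeType }[] {
  const changes: { url: string; change: ChangeType }[] = [];
  for (const [url, text] of crawled) {
    const prev = stored.get(url);
    if (prev === undefined) changes.push({ url, change: "new" });
    else if (prev !== hash(text)) changes.push({ url, change: "updated" });
  }
  for (const url of stored.keys()) {
    if (!crawled.has(url)) changes.push({ url, change: "removed" });
  }
  return changes;
}
```

Unchanged pages produce no row, so downstream extraction and summary jobs run only on pages that actually moved.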

Search & Keywords:

| Table | Key Fields |
| --- | --- |
| keywords | id, keyword, search_volume, difficulty, cpc, intent, cluster_id, last_refreshed |
| keyword_clusters | id, name, primary_keyword_id, topic, priority |
| serp_snapshots | id, keyword_id, snapshot_date, raw_data |
| serp_extractions | id, snapshot_id, extraction_class, extraction_text, attributes (JSONB) |
| ai_overview_tracking | id, keyword_id, detected_at, our_site_cited, cited_sources (JSONB), summary_text |

Our Content:

| Table | Key Fields |
| --- | --- |
| our_pages | id, url, title, slug, content_type, status, published_at, last_updated, word_count, content_body, raw_text |
| our_page_extractions | id, page_id, extraction_class, extraction_text, attributes (JSONB), source_location |
| our_page_seo | id, page_id, meta_title, meta_description, h1, h2s (JSONB), canonical_url, schema_markup, internal_links_out (JSONB), internal_links_in (JSONB), seo_score |
| our_page_performance | id, page_id, date, impressions, clicks, ctr, avg_position, sessions, bounce_rate |

Content Calendar & Strategy:

| Table | Key Fields |
| --- | --- |
| content_strategy | id, name, description, target_audience, brand_voice_guidelines, content_pillars (JSONB), priorities (JSONB), active |
| content_plan_items | id, strategy_id, title, target_keyword_id, content_type (enum: blog_post, guide, landing_page, comparison, case_study, product_page), status (enum: planned, brief_pending, brief_approved, writing, editing, review, revision, published), scheduled_date, priority (1-3), notes, source (enum: ai_generated, human_added), created_at, updated_at |
| content_briefs | id, plan_item_id, outline (JSONB), target_word_count, target_keywords (JSONB), competitor_references (JSONB), internal_link_targets (JSONB), research_notes, approved, approved_by, approved_at |
| content_drafts | id, plan_item_id, version, content_body, seo_score, editor_notes, status (enum: draft, edited, review, approved, rejected, published) |
| brand_voice_extractions | id, strategy_id, extraction_class, extraction_text, attributes (JSONB) |

Hierarchical Summaries:

| Table | Key Fields |
| --- | --- |
| domain_summaries | id, scope (ours/competitor/combined), summary_text, metrics_snapshot (JSONB), generated_at |
| cluster_summaries | id, cluster_id, scope, summary_text, page_count, performance (JSONB), gap_analysis, competitor_comparison, generated_at |
| page_summaries | id, page_id, page_type (ours/competitor), summary_text, topics (JSONB), keywords (JSONB), quality_score, generated_at |

3. Master Workflow

Human touchpoints (red nodes):

| Gate | What Human Does | Est. Time |
| --- | --- | --- |
| Calendar Review | Review AI-generated plan, approve/reject/edit items, add own items, set schedule | 15–30 min per cycle |
| Brief Approval | Review outline, confirm direction, adjust scope | 5–10 min per brief |
| Draft Review | Deep edit, add personal experience/insights, place images, final voice check | 15–30 min per piece |

4. Strategy Layer

4.1 Competitive Intelligence Agent

Purpose: Continuously analyzes the competitor database (structured LangExtract data, not raw HTML) and produces actionable competitive insights.

Agent Definition:

const competitiveIntelAgent = new Agent({
  id: "competitive-intelligence",
  model: "anthropic/claude-sonnet-4-20250514",
  instructions: `You are a competitive intelligence analyst for a content marketing team.
You analyze structured competitor data to identify strategic threats, opportunities,
and content gaps. Be specific — cite competitor names, URLs, and extracted data points.
Prioritize findings by business impact.`,
  tools: {
    readDomainSummary,
    readClusterSummaries,
    queryCompetitorChanges,
    queryCompetitorExtractions,
    traverseCompetitionGraph,
    webSearch,
  },
  maxSteps: 12,
});

Tools:

| Tool | Input Schema | What It Does |
| --- | --- | --- |
| readDomainSummary | `{ scope: "ours" \| "competitor" \| "combined" }` | Returns the Level 0 domain summary (~500 tokens of high-level competitive landscape) |
| readClusterSummaries | `{ clusterId?: string, scope?: string }` | Returns Level 1 cluster summaries, optionally filtered. Each ~300 tokens with competitor comparison data |
| queryCompetitorChanges | `{ competitorId?: string, since: Date, changeType?: string }` | Queries competitor_changes table for recent content moves. Filterable by competitor and change type |
| queryCompetitorExtractions | `{ extractionClass: string, keyword?: string, competitorId?: string, since?: Date }` | Searches competitor_extractions table by entity class, keyword match, competitor, and date range |
| traverseCompetitionGraph | `{ startNodeType: string, startNodeId: string, edgeTypes: string[], maxDepth: number }` | Walks the content_relationships graph following specified edge types. Returns connected nodes with relationship metadata |
| webSearch | `{ query: string }` | Live web search for validation and fresh intelligence |

Output Schema:

const CompetitiveIntelOutput = z.object({
  competitorMoves: z.array(z.object({
    competitor: z.string(),
    action: z.enum(["new_content", "content_update", "new_topic", "new_feature"]),
    details: z.string(),
    relevanceToUs: z.enum(["high", "medium", "low"]),
    suggestedResponse: z.string(),
    sourceExtractionIds: z.array(z.string()),
  })),
  contentGaps: z.array(z.object({
    topic: z.string(),
    competitorsCovering: z.array(z.string()),
    ourCoverage: z.enum(["none", "weak", "adequate"]),
    opportunity: z.string(),
    estimatedImpact: z.enum(["high", "medium", "low"]),
    graphEdgeId: z.string(),
  })),
  positioningInsights: z.string(),
  generatedAt: z.date(),
});

Schedule: Weekly full analysis. Daily change digest (lightweight — only queryCompetitorChanges + readDomainSummary).


4.2 Search Landscape Agent

Purpose: Monitors keyword performance, SERP composition, AI Overview appearances, and search trends using structured SERP data (LangExtract-processed, not raw HTML parsing).

Agent Definition:

const searchLandscapeAgent = new Agent({
  id: "search-landscape",
  model: "anthropic/claude-sonnet-4-20250514",
  instructions: `You are a search landscape analyst. You monitor keyword rankings,
SERP features, AI Overview appearances, and search trends for our content.
Identify ranking wins, losses, emerging opportunities, and threats from AI search.
Always include specific keywords, positions, and URLs in your analysis.`,
  tools: {
    queryKeywordPerformance,
    querySerpExtractions,
    queryAiOverviewTracking,
    semrushKeywordResearch,
    gscPerformanceQuery,
  },
  maxSteps: 10,
});

Tools:

| Tool | Input Schema | What It Does |
| --- | --- | --- |
| queryKeywordPerformance | `{ keywordId?: string, minPositionChange?: number, dateRange: DateRange }` | Our ranking data from our_page_keywords with change deltas |
| querySerpExtractions | `{ keywordId: string, extractionClass: string }` | Structured SERP features from serp_extractions (featured snippets, PAA, AI Overviews, etc.) |
| queryAiOverviewTracking | `{ keywordId?: string, ourSiteCited?: boolean, since?: Date }` | AI Overview presence and citation status from ai_overview_tracking |
| semrushKeywordResearch | `{ keywords: string[], market: string }` | Fresh keyword data from Semrush API (volume, difficulty, CPC, trends) |
| gscPerformanceQuery | `{ urls?: string[], queries?: string[], dateRange: DateRange }` | Real-time data from Google Search Console |

Output Schema:

const SearchLandscapeOutput = z.object({
  rankingChanges: z.array(z.object({
    keyword: z.string(),
    url: z.string(),
    previousPosition: z.number(),
    currentPosition: z.number(),
    trend: z.enum(["rising", "falling", "stable"]),
  })),
  aiOverviewAlerts: z.array(z.object({
    keyword: z.string(),
    ourSiteCited: z.boolean(),
    topCitedSources: z.array(z.string()),
    recommendation: z.string(),
  })),
  emergingKeywords: z.array(z.object({
    keyword: z.string(),
    volume: z.number(),
    difficulty: z.number(),
    relevance: z.string(),
    opportunity: z.string(),
  })),
  decliningContent: z.array(z.object({
    url: z.string(),
    keyword: z.string(),
    positionDrop: z.number(),
    suggestedAction: z.string(),
  })),
  serpFeatureOpportunities: z.array(z.object({
    keyword: z.string(),
    feature: z.string(),
    currentHolder: z.string(),
    ourEligibility: z.string(),
  })),
  generatedAt: z.date(),
});

Schedule: Daily for ranking changes and AI Overview monitoring. Weekly for full landscape analysis.


4.3 Content Strategy Agent (The Planner)

Purpose: The brain. Synthesizes Competitive Intelligence output, Search Landscape output, our content inventory (via hierarchical summaries), our strategy directives (human-set), and graph relationships to produce a prioritized, scheduled content plan.

Agent Definition:

const contentStrategyAgent = new Agent({
  id: "content-strategy",
  model: "anthropic/claude-sonnet-4-20250514",
  instructions: async ({ threadId }) => {
    const strategy = await db.getActiveStrategy();
    return `You are the content strategy director. Your job is to create a prioritized
content plan that maximizes organic search impact.

CURRENT STRATEGY:
Pillars: ${strategy.contentPillars.join(", ")}
Priorities: ${strategy.priorities}
Target Audience: ${strategy.targetAudience}

RULES:
- Every plan item MUST have a content_type (blog_post, guide, comparison, etc.)
- Every plan item MUST target a specific keyword with volume + difficulty data
- Score candidates by: strategic_alignment × search_opportunity × competitive_urgency × gap_severity
- Check graph for cannibalization before recommending new content
- Suggest scheduling dates based on priority and current calendar capacity
- Human-added calendar items are FIXED CONSTRAINTS — plan around them`;
  },
  tools: {
    readContentStrategy,
    readDomainSummary,
    readClusterSummaries,
    readPageSummaries,
    queryExtractions,
    queryContentPlan,
    traverseContentGraph,
    webSearch,
    addContentPlanItem,
    updateContentPlanItem,
  },
  maxSteps: 20,
});

How the agent navigates context (the hierarchy in action):

Step 1: readContentStrategy()
  → Human-set pillars, priorities, brand voice (~500 tokens)

Step 2: readDomainSummary({ scope: "combined" })
  → "We have 847 pages, competitors have X. Strongest/weakest areas." (~500 tokens)

Step 3: [Ingest Competitive Intelligence Agent output]
  → Competitor moves, content gaps, positioning (~2,000 tokens)

Step 4: [Ingest Search Landscape Agent output]
  → Ranking changes, AI Overview alerts, emerging keywords (~2,000 tokens)

Step 5: readClusterSummaries({ scope: "ours" })
  → Per-pillar coverage depth, performance, gaps (~3,000 tokens for 10 clusters)

Step 6: queryContentPlan({ status: ["planned", "in_progress"] })
  → What's already scheduled (avoid duplication) (~1,000 tokens)

Step 7: traverseContentGraph({ edgeTypes: ["gap", "cannibalizes", "outperforms"] })
  → Structural opportunities and conflicts (~1,500 tokens)

Step 8: For top candidates → readPageSummaries() for specific clusters
  → Drill into relevant pages only (~2,000 tokens)

Step 9: For specific competitive comparisons → queryExtractions()
  → Entity-level detail only when needed (~1,500 tokens)

TOTAL: ~14,000 tokens of precisely relevant context
(vs. impossible: stuffing 847 full pages into context)

Tools:

| Tool | Input Schema | What It Does |
| --- | --- | --- |
| readContentStrategy | `{}` | Returns active strategy directives |
| readDomainSummary | `{ scope }` | Level 0 summary |
| readClusterSummaries | `{ clusterId?, scope? }` | Level 1 summaries |
| readPageSummaries | `{ clusterId?, pageType?, limit? }` | Level 2 summaries, filtered |
| queryExtractions | `{ extractionClass, keyword?, pageType?, since? }` | Level 3 entity queries against any extraction table |
| queryContentPlan | `{ status?, contentType?, dateRange? }` | Current calendar items |
| traverseContentGraph | `{ startNodeType, startNodeId?, edgeTypes, maxDepth }` | Graph traversal |
| webSearch | `{ query }` | Topic viability research |
| addContentPlanItem | ContentPlanItem schema | Writes new item to calendar |
| updateContentPlanItem | `{ id, updates }` | Modifies existing item |

Output Schema:

const ContentPlanOutput = z.object({
  planItems: z.array(z.object({
    title: z.string(),
    targetKeyword: z.string(),
    contentType: z.enum([
      "blog_post", "guide", "landing_page",
      "comparison", "case_study", "product_page",
    ]),
    rationale: z.string(),
    competitiveContext: z.string(),
    suggestedScheduleDate: z.date(),
    priority: z.enum(["1", "2", "3"]),
    estimatedImpact: z.string(),
    researchLinks: z.array(z.string()),
    internalLinkTargets: z.array(z.string()),
    graphEvidence: z.array(z.string()),
  })),
  strategyNotes: z.string(),
  calendarSummary: z.string(),
});

→ Workflow SUSPENDS here. Plan items are written to content_plan_items with source: "ai_generated". Human reviews in the Calendar UI, approves/rejects/edits items, adds their own items. On approval, workflow resumes and approved items are queued for brief generation.
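This approval gate is one transition in the content_plan_items status machine. The enum values below come from the schema; the allowed-transition map is an assumed reading of the workflow order, since the spec does not enumerate transitions explicitly:

```typescript
// Status values from content_plan_items. The transition map is an
// ASSUMPTION sketched from the workflow narrative, not a spec'd table.
type Status =
  | "planned" | "brief_pending" | "brief_approved" | "writing"
  | "editing" | "review" | "revision" | "published";

const transitions: Record<Status, Status[]> = {
  planned: ["brief_pending"],             // calendar approval resumes the workflow
  brief_pending: ["brief_approved", "planned"],
  brief_approved: ["writing"],
  writing: ["editing"],
  editing: ["review", "writing"],         // editor can bounce a draft back
  review: ["revision", "published"],      // human gate: approve or send back
  revision: ["editing"],
  published: [],
};

// Reject out-of-order jumps (e.g. planned -> published) at write time.
function advance(from: Status, to: Status): Status {
  if (!transitions[from].includes(to)) {
    throw new Error(`illegal transition ${from} -> ${to}`);
  }
  return to;
}
```

Enforcing the map at the database write layer keeps agents from skipping a human gate even if a prompt goes wrong.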


4.4 Content Brief Agent

Purpose: For each approved content_plan_item, generates a detailed content brief that the Writer Agent executes against.

Agent Definition:

const contentBriefAgent = new Agent({
  id: "content-brief",
  model: "anthropic/claude-sonnet-4-20250514",
  instructions: `You generate detailed content briefs for approved content plan items.
Each brief must include a complete outline with H2/H3 structure, keyword mapping
per section, competitor differentiation strategy, internal linking targets (from graph),
external resource recommendations, and specific SEO requirements.

Use competitor extraction data to identify what top-ranking pages cover and where
they fall short. Your brief should give the Writer a clear path to creating content
that outperforms the current top results.`,
  tools: {
    queryCompetitorExtractions,
    traverseCompetitionGraph,
    readPageSummaries,
    queryOurExtractions,
    queryBrandVoiceExtractions,
    webSearch,
  },
  maxSteps: 15,
});

Tools:

| Tool | Input Schema | What It Does |
| --- | --- | --- |
| queryCompetitorExtractions | `{ keyword, extractionClass, limit }` | Gets structured entities from top-ranking competitor pages for the target keyword |
| traverseCompetitionGraph | `{ startNodeType: "keyword", startNodeId, edgeTypes: ["competes_with", "same_topic_as"] }` | Finds direct competitor pages and coverage gaps |
| readPageSummaries | `{ pageType: "competitor", keyword }` | Level 2 summaries of relevant competitor pages |
| queryOurExtractions | `{ extractionClass: "topic", keyword }` | What we already cover (avoid repetition) |
| queryBrandVoiceExtractions | `{ strategyId }` | Extracted tone markers, vocabulary preferences, sentence patterns |
| webSearch | `{ query }` | Find resources, data sources, expert references |

Output Schema:

const ContentBriefOutput = z.object({
  title: z.string(),
  targetKeyword: z.string(),
  secondaryKeywords: z.array(z.string()),
  searchIntent: z.enum(["informational", "navigational", "transactional", "commercial"]),
  targetWordCount: z.number(),
  contentFormat: z.string(),
  outline: z.array(z.object({
    heading: z.string(),
    level: z.enum(["h2", "h3"]),
    keyPoints: z.array(z.string()),
    targetKeywords: z.array(z.string()),
    suggestedWordCount: z.number(),
    competitorGap: z.string(),
  })),
  competitorAnalysis: z.object({
    topPages: z.array(z.object({
      url: z.string(),
      strengths: z.array(z.string()),
      weaknesses: z.array(z.string()),
    })),
    differentiators: z.array(z.string()),
  }),
  internalLinkTargets: z.array(z.object({
    url: z.string(),
    anchorTextSuggestion: z.string(),
    contextNote: z.string(),
  })),
  externalResources: z.array(z.object({
    url: z.string(),
    description: z.string(),
    useCase: z.enum(["cite_as_source", "link_for_reader", "reference_for_accuracy"]),
  })),
  toneAndStyle: z.string(),
  seoRequirements: z.object({
    metaTitleGuideline: z.string(),
    metaDescriptionGuideline: z.string(),
    schemaType: z.string(),
    featuredSnippetTarget: z.boolean(),
  }),
});

→ Workflow SUSPENDS here. Brief written to content_briefs with approved: false. Human reviews in the app, approves or requests changes. On approval, workflow resumes and brief is passed to the Content Layer.


5. Content Layer

5.1 Writer Agent

Purpose: Receives an approved content brief and produces a complete first draft that follows the outline, hits word count targets, incorporates keywords naturally, includes internal and external links, and matches brand voice.

Agent Definition:

const writerAgent = new Agent({
  id: "writer",
  model: "anthropic/claude-sonnet-4-20250514",
  instructions: async ({ briefId }) => {
    const brief = await db.getBrief(briefId);
    const brandVoice = await db.getBrandVoiceExtractions(brief.strategyId);

    return `You are an expert content writer. Produce a complete, publication-ready
draft following the brief below.

WRITING RULES:
- Follow the outline exactly. Hit the word count targets per section (±10%).
- Integrate target keywords naturally — never stuff.
- Primary keyword MUST appear in: first paragraph, at least one H2, and naturally throughout.
- Include all specified internal links with contextual, varied anchor text.
- Include external resource links where specified in the brief.
- Mark image placement opportunities as [IMAGE: description of what should go here]
  — a human will place actual images later.
- Write in markdown with proper heading hierarchy (H1 → H2 → H3, no skips).
- Short paragraphs (2-4 sentences). Mix sentence lengths.

BRAND VOICE:
Tone markers: ${brandVoice.toneMarkers.join(", ")}
Vocabulary preferences: ${brandVoice.vocabularyPreferences.join(", ")}
Avoid: ${brandVoice.wordsToAvoid.join(", ")}

CONTENT BRIEF:
${JSON.stringify(brief, null, 2)}`;
  },
  tools: {
    webSearch,
    queryOurExtractions,
    traverseInternalLinks,
  },
  maxSteps: 8,
});

Tools:

| Tool | What It Does |
| --- | --- |
| webSearch | Real-time fact verification during writing |
| queryOurExtractions | Check consistency with existing content (structured queries) |
| traverseInternalLinks | Find additional linking opportunities via graph |

Output: Full markdown content body with frontmatter, internal links, external links, and [IMAGE: ...] placement markers for human image insertion.
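Because the markers follow a fixed grammar, the review UI can surface them deterministically. A small sketch, assuming markers of the form `[IMAGE: description]` (the helper name is illustrative):

```typescript
// Pull the Writer's [IMAGE: ...] placement markers out of a draft so the
// review interface can list each slot for the human to fill in.
function imageMarkers(markdown: string): { index: number; description: string }[] {
  const out: { index: number; description: string }[] = [];
  const re = /\[IMAGE:\s*([^\]]+)\]/g;
  let m: RegExpExecArray | null;
  while ((m = re.exec(markdown)) !== null) {
    out.push({ index: m.index, description: m[1].trim() });
  }
  return out;
}
```

The same scan can gate publication: a draft with unresolved markers should never reach the SEO validation step.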


5.2 Editor Agent

Purpose: Reviews the Writer's draft for language correctness, verbal consistency, brand voice adherence, factual grounding, structural quality, link integrity, and keyword optimization. Produces specific edits and a revised draft.

Agent Definition:

const editorAgent = new Agent({
  id: "editor",
  model: "anthropic/claude-sonnet-4-20250514",
  instructions: async ({ briefId }) => {
    const brandVoice = await db.getBrandVoiceExtractions(briefId);

    return `You are a senior content editor. Review the draft against the content brief
and brand voice standards. Your job is precision, not rewriting.

CHECK EACH OF THESE:
1. LANGUAGE: Grammar, spelling, punctuation, sentence structure errors
2. VERBAL CONSISTENCY: Same term used throughout (don't switch "users"/"customers" randomly),
   consistent formatting, consistent active/passive voice
3. BRAND VOICE: Compare against these extracted patterns:
   Tone: ${brandVoice.toneMarkers.join(", ")}
   Vocabulary: ${brandVoice.vocabularyPreferences.join(", ")}
   Flag any sections that drift from established voice.
4. FACTUAL GROUNDING: Flag any claims, statistics, or attributions that aren't
   supported by the brief's source material or web-verifiable
5. STRUCTURE: Heading hierarchy compliance, section length balance, transition quality
6. LINKS: All internal links point to real pages? Anchor text natural and diversified?
7. KEYWORDS: Primary keyword in title/H1/first paragraph/H2s? Density 0.5-2.5%?

For each issue: specify location, type, severity (critical/suggested), and fix.
If overall assessment is "needs_revision" with critical issues, provide revised content.`;
  },
  tools: {
    queryOurPages,
    queryBrandVoiceExtractions,
    webSearch,
  },
  maxSteps: 8,
});

Output Schema:

const EditorOutput = z.object({
  overallAssessment: z.enum(["pass", "needs_revision"]),
  revisionCount: z.number(),
  edits: z.array(z.object({
    location: z.string(),
    type: z.enum(["grammar", "voice", "factual", "structural", "keyword", "link"]),
    severity: z.enum(["critical", "suggested"]),
    original: z.string(),
    suggested: z.string(),
    rationale: z.string(),
  })),
  voiceConsistencyScore: z.number().min(0).max(100),
  readabilityScore: z.number(),
  revisedContent: z.string().optional(),
});
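The 0.5–2.5% keyword density check in the Editor's checklist is deterministic and need not run through the LLM at all. A sketch using naive whitespace tokenization (an assumption; production tokenization may differ):

```typescript
// Fraction of words occupied by occurrences of the keyword phrase.
// Tokenization is naive whitespace splitting, assumed for illustration.
function keywordDensity(text: string, keyword: string): number {
  const words = text.toLowerCase().split(/\s+/).filter(Boolean);
  const kw = keyword.toLowerCase().split(/\s+/);
  if (words.length === 0) return 0;
  let hits = 0;
  for (let i = 0; i + kw.length <= words.length; i++) {
    if (kw.every((w, j) => words[i + j] === w)) hits++;
  }
  return (hits * kw.length) / words.length;
}

// The checklist's acceptance band: 0.5% to 2.5%.
const inRange = (d: number) => d >= 0.005 && d <= 0.025;
```

Running this as a plain function lets the Editor's output cite an exact number instead of an impression of "stuffed" or "thin".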

5.3 Content Layer Workflow

const contentLayerWorkflow = createWorkflow({
  id: "content-layer",
  inputSchema: z.object({ briefId: z.string() }),
  outputSchema: z.object({ draftId: z.string() }),
})
  .then(loadApprovedBriefStep)    // Load brief from DB
  .then(writerAgentStep)          // Writer produces draft
  .then(editorAgentStep)          // Editor reviews
  .branch({
    condition: ({ editorOutput }) =>
      editorOutput.overallAssessment === "needs_revision"
      && editorOutput.revisionCount < 2,
    trueStep: writerRevisionStep,   // Back to writer with edit context
    falseStep: finalizeDraftStep,
  })
  .then(saveDraftToDbStep)        // Persist to content_drafts
  .commit();

Revision loop: If the Editor returns needs_revision with critical edits, the draft goes back to the Writer with the edit list as additional context. Maximum 2 revision cycles. After 2 cycles, the draft proceeds to human review regardless (humans catch what agents miss).


6. Production Layer

6.1 Human Review & Editing

This is the most important step in the entire system. The workflow suspends via .waitForEvent("human-review-complete") and the human reviewer performs all of the following in the app's editor interface:

What the human does:

| Action | Detail |
| --- | --- |
| Read & assess | Full draft review against the brief (shown side-by-side) |
| Add personal experience | Original insights, firsthand accounts, expert commentary — the irreplaceable 20% |
| Place images | Select/create images, position them in content, write or refine alt text. Images are human-curated, not AI-generated. |
| Edit for voice | Adjust tone, phrasing, personality to match brand |
| Fact-check | Verify statistics, claims, attributions against source material |
| Approve or reject | Approve sends to SEO validation. Reject sends back to Content Layer with notes. |

Why image placement is manual: Image selection requires brand aesthetic judgment, rights verification, and contextual sensitivity that current AI image generation doesn't handle reliably at production quality. The [IMAGE: ...] markers from the Writer Agent serve as placement suggestions — the human decides what actually goes there.


6.2 Final Cleanup Agent

Purpose: A lightweight pass after human edits to ensure formatting consistency, link integrity, and proper markdown structure. Not a creative agent — strictly a technical cleanup.

const finalCleanupAgent = new Agent({
  id: "final-cleanup",
  model: "anthropic/claude-sonnet-4-20250514",
  instructions: `You are a technical proofreader. The content has been human-edited.
Check ONLY for:
- Markdown formatting validity (no broken syntax)
- Image tags have alt text and dimensions
- All internal links still resolve (you'll verify via tool)
- No orphaned heading hierarchy (H3 without parent H2)
- Consistent list formatting

Do NOT change tone, wording, or content. Only fix technical issues.`,
  tools: { verifyInternalLinks },
  maxSteps: 4,
});

6.3 SEO Validation → Publishing Flow

After cleanup, the content enters the programmatic SEO validation engine (defined in detail in Section 8). If it scores 10/10, it proceeds to the Publishing Agent. If it fails any check, the Final Cleanup Agent attempts auto-fixes for the specific failures, then revalidates. Maximum 3 fix-revalidate cycles. If still failing, escalate to human with a specific failure report.
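The fix-revalidate loop can be sketched as plain control flow. This is a hypothetical sketch: `runChecks` and `autoFix` stand in for the validation engine and the Final Cleanup Agent's fix pass.

```typescript
// Hypothetical sketch of the fix-revalidate loop; runChecks and autoFix are
// stand-ins for the real validation engine and Final Cleanup Agent.
type CheckResult = { passed: boolean; autoFixable: boolean };

async function validateWithRetries(
  runChecks: () => Promise<CheckResult[]>,
  autoFix: (failures: CheckResult[]) => Promise<void>,
  maxFixCycles = 3,
): Promise<"publish" | "escalate"> {
  for (let cycle = 0; ; cycle++) {
    const failures = (await runChecks()).filter((r) => !r.passed);
    if (failures.length === 0) return "publish"; // 10/10 → Publishing Agent
    // Stop after the allowed cycles, or when a failure can't be auto-fixed.
    if (cycle >= maxFixCycles || failures.some((f) => !f.autoFixable)) {
      return "escalate"; // human gets the specific failure report
    }
    await autoFix(failures);
  }
}
```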

Publishing Agent (tool-driven, minimal LLM reasoning):

```typescript
const publishingAgent = new Agent({
  id: "publishing",
  model: "anthropic/claude-sonnet-4-20250514",
  instructions: `Execute the publication pipeline. Each tool must succeed before proceeding
to the next. Log every action to the audit trail.`,
  tools: {
    formatForCms,
    uploadToCms,
    setMetadata,
    updateXmlSitemap,
    pingIndexingApi,
    updateOurPagesDb,
    triggerLangExtractPipeline,
    triggerSummaryRegeneration,
    triggerGraphRelationshipBuilder,
    triggerBidirectionalLinking,
    schedulePostPublishMonitoring,
    logToAuditTrail,
  },
  maxSteps: 15,
});
```

Post-publish pipeline (critical — this closes the data loop):

```text
Content published to CMS

LangExtract processes the new page → our_page_extractions

Page summary generated → page_summaries (Level 2)

Cluster summary regenerated → cluster_summaries (Level 1)

Domain summary regenerated → domain_summaries (Level 0)

Graph edges built:
  - covers_topic edges (from extracted topics)
  - targets_keyword edges (from keyword data)
  - links_to edges (from actual links in content)
  - should_link_to analysis: find existing pages that should link TO the new page
  - Execute bidirectional linking: update existing pages with new internal links

Post-publish monitoring scheduled:
  - 24h: Verify page is indexed (GSC)
  - 7d: Initial rankings and impressions
  - 30d: Full performance review against projected targets
  - Auto-flag underperformers for refresh queue
```

7. Content Calendar & Application Interface

7.1 Content Calendar View

The calendar is the primary human control surface. It shows planned, in-progress, and published content on a timeline.

Calendar item data model:

```typescript
interface ContentPlanItem {
  id: string;
  title: string;
  targetKeyword: string;
  contentType: "blog_post" | "guide" | "landing_page" | "comparison" | "case_study" | "product_page";
  status: "planned" | "brief_pending" | "brief_approved" | "writing" | "editing" | "review" | "revision" | "published";
  scheduledDate: Date;
  priority: 1 | 2 | 3;
  notes: string;
  source: "ai_generated" | "human_added";
  assignedWorkflowRunId?: string;
  createdAt: Date;
  updatedAt: Date;
}
```
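The `status` field implies a pipeline. The spec names the statuses but not the legal moves between them, so the transition map below is one plausible reading, not a normative part of the design:

```typescript
// Hypothetical transition map for ContentPlanItem.status; the spec names the
// statuses but not the allowed moves, so this is one plausible reading.
type Status =
  | "planned" | "brief_pending" | "brief_approved" | "writing"
  | "editing" | "review" | "revision" | "published";

const allowedTransitions: Record<Status, Status[]> = {
  planned: ["brief_pending"],
  brief_pending: ["brief_approved"],
  brief_approved: ["writing"],
  writing: ["editing"],
  editing: ["review"],
  review: ["revision", "published"], // human approves or requests changes
  revision: ["writing"],             // rejected drafts go back to the Writer
  published: [],                     // terminal
};

function canTransition(from: Status, to: Status): boolean {
  return allowedTransitions[from].includes(to);
}
```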

Calendar capabilities:

| Feature | Detail |
| --- | --- |
| Monthly/weekly/list views | Standard calendar views with color-coded content types and status indicators |
| Drag-and-drop rescheduling | Move items between dates. Constraint: unpublished items can't be scheduled on past dates |
| Add item manually | Human creates a new content_plan_item with source: "human_added". These are treated as fixed constraints by the Strategy Agent |
| AI-generated vs. human-added | Visually distinguished (e.g., AI items have a subtle indicator). Both are equal in the system once approved |
| Content type badges | Each item shows its type (Blog, Guide, Comparison, etc.) as a color-coded badge |
| Status pipeline | Visual indicator showing where each item is in the pipeline (planned → brief → writing → editing → review → published) |
| Click-through | Click any item to see its brief, current draft, SEO score, and workflow status |
| "Generate Plan" button | Triggers the Strategy Agent to analyze current data and propose new items |
| Bulk approve/reject | Multi-select AI-generated items for batch approval |
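The rescheduling constraint above can be expressed as a small guard. A minimal sketch, assuming day-level granularity; `canReschedule` and `startOfDay` are illustrative names:

```typescript
// Minimal sketch of the drag-and-drop constraint: unpublished items can't be
// moved onto a past date. Helper names are illustrative.
function startOfDay(d: Date): Date {
  const x = new Date(d);
  x.setHours(0, 0, 0, 0);
  return x;
}

function canReschedule(
  item: { status: string },
  newDate: Date,
  today: Date = new Date(),
): boolean {
  if (item.status === "published") return true; // published items keep their history
  return startOfDay(newDate).getTime() >= startOfDay(today).getTime();
}
```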

7.2 Content Editor / Review Interface

| Feature | Detail |
| --- | --- |
| Side-by-side view | Brief on left, draft on right |
| Inline editing | Full rich text editor with change tracking |
| Image placement | Drag-and-drop image upload at [IMAGE: ...] marker positions |
| SEO score panel | Live-updating 10-point SEO check as human edits |
| Comment/annotation | Leave notes for future reference or AI revision context |
| Approve / Request Changes / Reject | Action buttons that resume or restart the workflow |

7.3 Additional Views

| View | Purpose |
| --- | --- |
| Dashboard | Pipeline status, today's publications, competitor alerts, ranking movers, AI Overview tracking |
| Competitor Monitor | Competitor list, new/changed page feed, per-competitor content analysis, side-by-side comparison |
| Keyword & Search | Keyword tracker with ranking history, SERP feature tracking, AI Overview monitoring, GSC integration |
| SEO Audit | 10-point check results per piece, historical scores, site-wide health, internal link map |
| Strategy Settings | Brand voice config, content pillars, competitor list management, target keywords, workflow config |

8. Programmatic SEO Validation Engine

This is deterministic code, not an LLM. Every check has binary pass/fail logic. All 10 must pass for publication.

Check 1: Meta Title

```typescript
{
  name: "Meta Title",
  validate: async (content) => { // async: the uniqueness check hits the database
    const title = content.metaTitle;
    const checks = [
      { pass: title.length >= 50 && title.length <= 60, reason: `Length ${title.length}, need 50-60` },
      { pass: containsKeyword(title, content.primaryKeyword), reason: "Missing primary keyword" },
      { pass: await isUnique("meta_title", title), reason: "Duplicate meta title exists" },
      { pass: !willTruncate(title), reason: "Will truncate in SERPs" },
    ];
    return { passed: checks.every((c) => c.pass), failures: checks.filter((c) => !c.pass) };
  },
}
```

Check 2: Meta Description

  • Length: 150–160 characters
  • Contains primary keyword
  • Includes call-to-action or value proposition
  • Unique across site

Check 3: Heading Hierarchy

  • Exactly one H1 containing primary keyword
  • H2s use secondary keywords
  • No skipped levels (H1 → H3 without H2)
  • Logical nesting throughout
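The one-H1 and no-skipped-levels rules reduce to a single pass over the document's heading levels. A minimal sketch (keyword checks omitted):

```typescript
// Minimal sketch of the structural hierarchy rules: first heading is the H1,
// exactly one H1 exists, and no level is ever skipped on the way down.
function hasValidHeadingHierarchy(levels: number[]): boolean {
  if (levels.length === 0 || levels[0] !== 1) return false;
  if (levels.filter((l) => l === 1).length !== 1) return false; // exactly one H1
  for (let i = 1; i < levels.length; i++) {
    if (levels[i] > levels[i - 1] + 1) return false; // e.g. H1 → H3 without H2
  }
  return true;
}
```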

Check 4: Keyword Optimization

  • Primary keyword in: title, H1, first 100 words, at least one H2, meta description
  • Keyword density: 0.5%–2.5%
  • Secondary keywords present naturally
  • No keyword stuffing patterns (3+ identical phrases in sequence)
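Density can be computed deterministically. A minimal sketch, assuming density means the share of words that belong to a keyword occurrence (other definitions exist):

```typescript
// Minimal sketch of the density calculation; density here is the fraction of
// words that belong to a keyword occurrence, which is one common definition.
function keywordDensity(text: string, keyword: string): number {
  const words = text.toLowerCase().split(/\s+/).filter(Boolean);
  const kw = keyword.toLowerCase().split(/\s+/).filter(Boolean);
  if (words.length === 0 || kw.length === 0) return 0;
  let hits = 0;
  for (let i = 0; i + kw.length <= words.length; i++) {
    if (kw.every((w, j) => words[i + j] === w)) hits++; // phrase match at position i
  }
  return (hits * kw.length) / words.length;
}
```

A piece passes when the result falls in the 0.005–0.025 band.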

Check 5: Internal Linking

  • Minimum 3 internal links
  • All resolve to real published pages (verified against our_pages table)
  • Anchor text is descriptive (no "click here", "read more")
  • Anchor text is diversified (not all exact-match keyword)
  • Links are contextually placed (not dumped in a footer list)
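The anchor-text rules can be sketched as two small predicates; the banned-anchor list is illustrative, not exhaustive:

```typescript
// Minimal sketch of the descriptive-anchor and diversification rules.
// The banned list is illustrative, not exhaustive.
const GENERIC_ANCHORS = new Set(["click here", "read more", "learn more", "here", "this"]);

function isDescriptiveAnchor(anchor: string): boolean {
  return !GENERIC_ANCHORS.has(anchor.trim().toLowerCase());
}

// Diversification: fail only when every anchor is the exact-match keyword.
function anchorsDiversified(anchors: string[], keyword: string): boolean {
  const exact = anchors.filter((a) => a.trim().toLowerCase() === keyword.toLowerCase());
  return anchors.length === 0 || exact.length < anchors.length;
}
```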

Check 6: External Linking

  • At least 1 external link to authoritative source
  • No links to competitor domains (checked against competitors table blocklist)
  • External links are contextually relevant
  • Proper rel attributes on new-tab links

Check 7: Content Quality Metrics

  • Word count within ±10% of brief target
  • Readability score within configured range
  • No duplicate content (checked against our_page_extractions for same primary keyword via cannibalizes graph edge — not vector similarity)
  • No paragraph exceeds 300 words
  • Sentence length variety present
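The paragraph-length cap reduces to a max over blank-line-separated blocks. A minimal sketch:

```typescript
// Minimal sketch of the 300-word paragraph cap; paragraphs are taken to be
// blocks separated by blank lines.
function longestParagraphWordCount(markdown: string): number {
  const counts = markdown
    .split(/\n\s*\n/)
    .map((p) => p.split(/\s+/).filter(Boolean).length);
  return counts.length ? Math.max(...counts) : 0;
}
```

The check passes when the result is at most 300.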

Check 8: Technical SEO

  • Valid JSON-LD schema markup present and parseable
  • Canonical URL set correctly
  • Open Graph tags: og:title, og:description, og:image
  • Twitter Card tags present
  • All images have alt text
  • All images have width/height dimensions
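The "present and parseable" part of the JSON-LD check can be sketched as follows; validating the schema's completeness would need a dedicated validator:

```typescript
// Minimal sketch: JSON-LD must parse and carry a @type. Full schema.org
// validation is out of scope for this sketch.
function isParseableJsonLd(raw: string): boolean {
  try {
    const parsed = JSON.parse(raw);
    return typeof parsed === "object" && parsed !== null && "@type" in parsed;
  } catch {
    return false; // unparseable markup fails the check outright
  }
}
```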

Check 9: URL & Slug

  • URL-friendly (lowercase, hyphens, no special characters)
  • Contains primary keyword or close variant
  • Under 60 characters
  • No duplicate slug in our_pages table
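The first three slug rules compile to a single pattern plus a length bound. A minimal sketch (keyword-variant matching and the duplicate-slug lookup omitted):

```typescript
// Minimal sketch of the slug format rules: lowercase alphanumerics separated
// by single hyphens, under 60 characters. Keyword and uniqueness checks
// run separately against the brief and the our_pages table.
function isValidSlugFormat(slug: string): boolean {
  return slug.length > 0 && slug.length < 60 && /^[a-z0-9]+(?:-[a-z0-9]+)*$/.test(slug);
}
```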

Check 10: Mobile & Performance

  • All images have explicit width/height (prevents Cumulative Layout Shift)
  • Images use loading="lazy" (except above-the-fold hero)
  • No inline styles that break mobile viewport
  • Tables have responsive handling
  • No excessively large embedded content

Validation output:

```typescript
const SeoValidationResult = z.object({
  score: z.string(), // "10/10", "8/10", etc.
  passed: z.boolean(),
  checks: z.array(z.object({
    id: z.number(),
    name: z.string(),
    passed: z.boolean(),
    details: z.string(),
    failureReason: z.string().optional(),
    autoFixable: z.boolean(),
  })),
});
```
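The score string and pass flag derive mechanically from the per-check results. A minimal sketch:

```typescript
// Minimal sketch: the "N/10" score and overall pass flag are computed from
// the individual check results, never judged by an LLM.
function summarize(checks: { passed: boolean }[]) {
  const passedCount = checks.filter((c) => c.passed).length;
  return {
    score: `${passedCount}/${checks.length}`,
    passed: passedCount === checks.length, // all checks must pass to publish
  };
}
```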

9. Cost Model

| Category | Monthly Estimate (50 pieces) | Notes |
| --- | --- | --- |
| Claude Sonnet 4 (all agents) | $200–400 | ~$4–8 per piece across strategy, writing, editing, briefs, cleanup |
| Gemini 2.5 Flash (LangExtract + summaries) | $50–100 | Continuous extraction of competitor + our pages + SERPs + summary generation |
| Firecrawl | $40–80 | Competitor sitemap crawling + page scraping (depends on competitor count) |
| Semrush API | $119–229 | Business plan for keyword/SERP API access |
| Image generation | $0 | Human-placed — no API cost |
| Hosting (Vercel + Railway) | $50–100 | App + workers + LangExtract service |
| PostgreSQL (managed) | $25–50 | Neon, Supabase, or Railway |
| GSC API | Free | |
| Total | $485–960/month | |

10. Risk Matrix

| Risk | Likelihood | Impact | Mitigation |
| --- | --- | --- | --- |
| LangExtract extraction quality inconsistent | Medium | Medium | Multi-pass extraction (3 passes), high-quality few-shot examples, validation checks, prompt iteration |
| Hierarchical summaries drift from source data | Medium | Medium | Summaries regenerated daily from fresh extractions; timestamped and versioned |
| Graph relationship staleness | Medium | Low | Weekly re-validation; confidence scores decay over time; stale edges flagged in agent context |
| LangExtract Python ↔ Mastra TypeScript bridge failure | Low | High | Health check endpoint, auto-restart, fallback to direct Mastra LLM extraction |
| Firecrawl rate limiting / anti-bot blocks | Medium | Medium | Respectful crawl scheduling, Firecrawl's built-in evasion, fallback to cached content |
| LLM output quality variance | High | Medium | Multi-agent review pipeline + human gate + programmatic SEO checks |
| Google algorithm targeting AI content | Medium | High | 80/20 human-AI method ensures genuine Experience + Expertise in every piece |
| Hallucination in published content | Medium | High | Fact-check via extracted claims + human review + LangExtract source grounding |
| Content cannibalization at scale | Medium | Medium | Graph cannibalizes edges + Strategy Agent explicitly checks before planning |

11. Future: Filesystem-as-Context Architecture

The Problem This Solves

Even with the hierarchical summary approach, there's an architectural ceiling: summaries are pre-generated snapshots. As the content library grows to thousands of pages and the competitive landscape shifts daily, keeping summaries fresh and relevant becomes a continuous compute cost. More fundamentally, pre-computing what context an agent might need is inherently wasteful — you're guessing ahead of time which summaries will matter for which tasks.

The filesystem-as-context pattern, articulated by Andrej Karpathy's "context engineering" framework and demonstrated by Anthropic's Skills system, offers a potentially superior approach: don't pre-load context. Let agents navigate to it on demand.

The Core Idea

Instead of generating hierarchical summaries that agents read passively, you structure all system knowledge as a navigable filesystem. Agents use ls, grep, glob, and file reading to pull exactly the context they need for the current task, building their context window incrementally with only signal, never noise.

```text
/contentengine/
├── strategy/
│   ├── STRATEGY.md              ← Current pillars, priorities, audience
│   ├── brand-voice/
│   │   ├── VOICE_GUIDE.md       ← Extracted tone markers, vocabulary rules
│   │   └── samples/
│   │       ├── best-blog-post.md
│   │       └── best-guide.md
│   └── calendar/
│       ├── 2026-03.md           ← March calendar in structured markdown
│       └── 2026-04.md
│
├── competitors/
│   ├── INDEX.md                 ← Competitor list with domains, last crawled
│   ├── competitor-a/
│   │   ├── OVERVIEW.md          ← LangExtract summary of their content strategy
│   │   ├── recent-changes.md    ← Last 30 days of content changes
│   │   └── pages/
│   │       ├── their-fine-tuning-guide.md  ← Extracted entities as structured MD
│   │       └── their-deployment-guide.md
│   └── competitor-b/
│       └── ...
│
├── our-content/
│   ├── INDEX.md                 ← Page inventory with URLs, types, performance
│   ├── by-cluster/
│   │   ├── ai-ml/
│   │   │   ├── CLUSTER_OVERVIEW.md  ← Performance, gaps, competitor comparison
│   │   │   ├── fine-tuning-guide.md ← Extracted entities + performance data
│   │   │   └── lora-explained.md
│   │   └── devops/
│   │       └── ...
│   └── by-status/
│       ├── needs-refresh/       ← Pages flagged for updating
│       └── underperforming/     ← Pages below performance threshold
│
├── keywords/
│   ├── INDEX.md                 ← Keyword clusters with priority
│   ├── cluster-ai-ml.md         ← Keywords, volumes, our ranks, competitor ranks
│   └── cluster-devops.md
│
├── search/
│   ├── ai-overviews.md          ← AI Overview tracking for priority keywords
│   ├── serp-features.md         ← Featured snippet, PAA tracking
│   └── trends.md                ← Emerging/declining search trends
│
└── graph/
    ├── gaps.md                  ← Topics competitors cover that we don't
    ├── cannibalization.md       ← Pages targeting same keywords
    ├── linking-opportunities.md ← should_link_to edges as structured list
    └── competitive-overlaps.md  ← competes_with edges with rank comparison
```

How an Agent Would Navigate

When the Content Strategy Agent needs to plan next month's content:

```text
1. Agent reads /strategy/STRATEGY.md (~500 tokens)
   → Understands pillars, priorities, audience

2. Agent runs: ls /competitors/ (~100 tokens)
   → Sees competitor directories

3. Agent reads /competitors/INDEX.md (~300 tokens)
   → Gets competitor overview and recent activity summary

4. Agent reads /graph/gaps.md (~800 tokens)
   → Sees all content gaps as structured list

5. Agent reads /keywords/cluster-ai-ml.md (~600 tokens)
   → Sees keyword opportunities in the priority cluster

6. Agent runs: grep -l "edge deployment" /competitors/*/pages/ (~50 tokens)
   → Finds which competitors have edge deployment content

7. Agent reads /competitors/competitor-a/pages/edge-deploy.md (~400 tokens)
   → Gets structured extraction of their specific page

8. Agent reads /strategy/calendar/2026-04.md (~300 tokens)
   → Sees what's already scheduled for April

9. Agent reads /our-content/by-status/needs-refresh/ (~400 tokens)
   → Sees which existing content needs updating

Total context built: ~3,500 tokens of precisely relevant data
```

Compare this to the hierarchical summary approach (~14,000 tokens, some of which may be irrelevant to this specific planning task). The filesystem approach lets the agent decide what to load based on the actual task, not pre-generated summaries that try to anticipate what might be needed.

What It Would Take to Implement

This is a significant architectural change but builds cleanly on top of the LangExtract + Graph foundation already specified in this document. The core work:

  1. Filesystem generation pipeline. A scheduled job that reads from PostgreSQL (extractions, summaries, graph edges, performance data) and writes structured markdown files to a mounted filesystem. Each file follows a consistent schema: frontmatter with metadata, then structured content. This is the bridge between the database and the agent's navigable context. Estimated effort: 2–3 weeks for the generation logic, templates, and scheduling.

  2. Sandbox environment per agent session. Each agent invocation gets a read-only mounted view of the filesystem. Mastra's tool system exposes ls, cat, grep, and glob as tools the agent can call. The agent navigates the filesystem using bash-like commands it already knows from training data. This is simpler than building custom SQL-backed query tools — the filesystem IS the query interface. Estimated effort: 1–2 weeks for the sandbox tooling and Mastra integration.

  3. Filesystem-aware agent prompts. Agent instructions are updated to describe the filesystem structure and navigation patterns. Instead of "use the readClusterSummaries tool," the prompt says "the competitor data is in /competitors/. Start by reading the INDEX.md, then drill into specific competitor directories as needed." This leverages the model's existing training on filesystem navigation. Estimated effort: 1 week of prompt engineering and testing.

  4. Hybrid SQL + filesystem approach. Not everything moves to the filesystem. High-frequency queries (keyword rankings, performance metrics, real-time GSC data) stay in PostgreSQL with dedicated tools. The filesystem handles the slower-changing strategic context: content inventories, competitive analysis, brand voice, editorial plans. The agent has both filesystem tools and database tools available and chooses the right one for the task. Estimated effort: included in items 1–3 above.

  5. Write-back pattern. When agents need to create outputs (content plan items, briefs), they write to specific directories (e.g., /strategy/calendar/drafts/) which a sync job picks up and persists to PostgreSQL. This keeps the database as the authoritative source while giving agents a natural write interface. Estimated effort: 1 week.
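As a rough model of the navigation tools in item 2, a grep-style lookup can be sketched over an in-memory path-to-content map; the production version would read the mounted read-only filesystem instead:

```typescript
// Hypothetical model of the grep tool from item 2, shown over an in-memory
// path → content map for clarity. The real tool would search the mounted
// read-only filesystem and be exposed to the agent via Mastra's tool system.
function grepFiles(files: Record<string, string>, pattern: string): string[] {
  const re = new RegExp(pattern, "i");
  return Object.keys(files)
    .filter((path) => re.test(files[path])) // keep files whose body matches
    .sort();
}
```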

Total estimated implementation effort: 5–7 weeks on top of the base system specified in this document.

The tradeoff is clear: The filesystem approach produces tighter, more relevant context windows (3,500 tokens vs. 14,000) because agents load only what they actually need for the current task. The cost is an additional generation pipeline and the operational complexity of keeping the filesystem in sync with the database. For a system operating at scale (500+ pages, 10+ competitors, 50+ pieces per month), the context efficiency gains likely justify the investment. For smaller operations, the hierarchical summary approach specified in the main architecture is sufficient.

The recommended path: build the base system using the hierarchical summary + graph approach first (Sections 2–8 of this spec), validate it works at your current scale, then implement the filesystem layer as a context optimization when agent context quality becomes a bottleneck.


This is a living specification. Update as architectural decisions are made during implementation.