cpg-agents-2.0
Drop in a raw Nielsen, SPINS, or Circana export. Get a competitive analysis deck themed to your brand. Install in five minutes.
Problem
CPG brand teams pay six figures a year for syndicated retail data and still spend weeks turning a single data export into a presentation. The work is mechanical, repeated every quarter, for every brand. AI tools that promise to help mostly produce generic analysis that ignores the brand’s actual competitive context.
Solution
A portable analytics package that turns a raw retail data export into a brand-configured competitive analysis with a generated slide deck. The system loads the data, runs an intake interview, builds a semantic metrics layer, runs a seven-dimension competitive analysis, mines ranked talking points, and assembles a MARP deck. No servers. No frameworks. No API keys beyond Claude.
Impact
The same architecture I built into Daasity’s production platform — pre-approved templates, semantic layer, knowledge-first generation — reduced to its essential form. Where Daasity was a 50-engineer codebase, this is a portable skill pack a single brand analyst can install on their laptop.
How it works
Three-stage pipeline:
DATA PIPELINE  → Detect source, map columns, load DuckDB,
                 validate quality, [HUMAN GATE]
SEMANTIC LAYER → Build 12 DuckDB views with 80+ metrics
                 (distribution, velocity, share, pricing)
APPLICATION    → Intake interview → competitive grid →
                 story mining → chart generation → MARP deck

The intake pattern — the differentiator
The non-obvious skill in this pack is intake. It runs at the start of every brand engagement and turns a cold dataset into a configured analysis target through a structured interview.
Step 1 — Show the top 20 brands by dollar sales; the user picks the focus brand, validated against actual database rows.
Step 2 — Show the top brands in the category; the user picks 2-4 competitors.
Step 3 — Pick priority markets.
Step 4 — Capture distribution goals.
Step 5 — Save the config.
Step 6 — Human gate.
This is the pattern most LLM analytics tools get wrong. They either skip the intake and produce generic analysis, or do free-form chat and produce a config that drifts from what the database actually contains. The intake skill enforces a contract: every input is validated against the database, every config field maps to a downstream skill, and the user signs off before any analysis runs.
The semantic layer — the lesson from Daasity
The metrics-engine skill builds 12 DuckDB views on top of the raw fact table — 80+ named, reusable metrics covering distribution, velocity, volume, share, promotion, comparison, decomposition, and pricing. Skills query these views; they never embed SQL calculations directly.
The semantic layer is not optional, even at small scale. Every team that skips this step ends up with three different definitions of the same metric scattered across their analysis, and no one can tell which is right.
Story mining
story-miner produces ranked talking points for sales conversations. Three scenarios — new distribution, expand existing, defend at-risk — each with scenario-specific queries against the semantic layer.
Every finding is scored on a four-factor rubric out of 40: magnitude, direction, contrast, and clarity. Top 5-7 findings are surfaced; the rest are available on request.
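The rubric can be sketched in plain Python. The four factor names and the /40 total come from the pack; the per-factor 0-10 scale, cutoff logic, and sample findings below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    text: str
    magnitude: int  # how big the effect is (assumed 0-10)
    direction: int  # whether it favors the focus brand
    contrast: int   # how sharply it separates brand vs. competitor
    clarity: int    # how easily it lands in a sales conversation

    @property
    def score(self) -> int:
        # Four factors, summed to a score out of 40.
        return self.magnitude + self.direction + self.contrast + self.clarity

def rank_findings(findings, top_n=7):
    """Surface the top-N findings by rubric score; hold the rest for on-request."""
    ranked = sorted(findings, key=lambda f: f.score, reverse=True)
    return ranked[:top_n], ranked[top_n:]

findings = [
    Finding("Velocity beats category leader in priority markets", 9, 8, 9, 8),
    Finding("Share growing while distribution stays flat", 7, 7, 6, 8),
    Finding("Minor price gap vs. private label", 3, 4, 3, 5),
]
surfaced, held_back = rank_findings(findings, top_n=2)
print([f.score for f in surfaced])  # [34, 28]
```

Scoring every finding on the same rubric is what makes the "top 5-7" cut reproducible rather than a judgment call the model makes differently each run.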
cpg-agents-2.0.zip
Full skill pack: 10 skills, 3 workflows, the semantic-layer SQL, MARP templates, anonymized sample data, and an evals framework.
Requires: Claude Code, Python 3.11+ with duckdb/pandas, MARP CLI