Master Data Expert Agent | ZMDM
ZMDM on ZFlow · Agent 5 of 5

Master Data Expert Agent

An assessment and recommendation engine that scores your master data the way your business consumes it — Sales, Supply Chain, Manufacturing, Marketing, Finance — at the single-object scope and at the population scope. The other four agents build and run the platform; the Expert Agent tells you whether the data inside it is ready for the work that depends on it.

Most "data quality" reports talk past the business

The standard quality output reads like a database report: true, useful to a steward, and entirely opaque to the business owner whose work depends on that data.

TaxID is null on 14% of suppliers. UPC is malformed on 2.3% of SKUs. Country code fails ISO validation on 380 records.

The conversation a business owner actually wants is shaped differently:

What the business is actually asking

  • "Is this SKU ready to launch on Amazon next week?"
  • "How much of our finished-goods catalog is ready for the Q3 sales push?"
  • "Which suppliers can I source from for the regulated chemicals program, today?"
  • "What single fix would unblock the most SKUs for digital commerce?"

What the agent produces, on real data

Two example assessments. The single-object one evaluates one SKU. The aggregate one rolls the same evaluation across ten thousand SKUs. Both linked in full — this is what the agent's output looks like, not a mockup.

Single Object · SKU: PainAway Extra Strength Acetaminophen
Target: 85% · Eight weighted dimensions · 1,200–1,500 word write-up
Overall score: 78%
View full assessment →
One product, evaluated against eight functional dimensions: Sales Readiness (15%), Category Management (40%), Finance Readiness (20%), Supply Chain (18%), Manufacturing & Production (15%), Logistics (12%), Digital Commerce (12%), Search Marketing (8%). Each dimension has weighted sub-metrics with percentage scores, specific data values, and gap callouts.

Excerpt — the agent doesn't just emit a number, it puts the values on the table:

Supply Chain Readiness · weight 18% · dimension score 72%

  • Lead Time Management: 35% (below target)
    Procurement: 45d (Estimated) · Manufacturing: 7d · Last Updated: 6 mo ago
  • Supplier Coverage: 60% (below target)
    Primary: ChemCorp · Secondary: Missing · Emergency: Not defined
  • Inventory Policy: 95% (above target)
    Safety stock: configured · ROP: configured · ABC class: A
Aggregate · Finished-Goods Master Data: 10,000 SKUs, Population Assessment
Target: 90% · Seven dimensions · 65 components
Overall score: 84%
View full assessment →

The same dimensional framework, rolled up across the whole finished-goods population. Adds two things you can't get from a single object: systemic gaps (which sub-components are dragging which dimensions down) and priority actions ranked by impact: how many SKUs each one unblocks and how many dimension points it adds.

  1. Refresh procurement lead times: stale by more than 6 months across the affected SKUs.
     3,200 SKUs unblocked · Supply Chain +14pp
  2. Backfill SEO keywords: missing on a large portion of the digital catalog.
     4,100 SKUs unblocked · Search Marketing +22pp
  3. Resolve secondary-supplier coverage: single-sourced SKUs are at risk if the primary supplier is disrupted.
     1,800 SKUs unblocked · Supply Chain +9pp
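The ranking mechanics can be sketched in a few lines. The composite key below (records unblocked, scaled by the dimension's weight and point lift) is an assumption chosen because it reproduces the ordering above; the agent's actual ranking function isn't shown on this page, and the field names are illustrative.

```python
# Illustrative only: action list and ranking key are modeled on the
# aggregate example above, not taken from the agent's real schema.
actions = [
    {"fix": "Backfill SEO keywords",
     "skus": 4100, "dim_weight": 0.08, "lift_pp": 22},
    {"fix": "Refresh procurement lead times",
     "skus": 3200, "dim_weight": 0.18, "lift_pp": 14},
    {"fix": "Resolve secondary-supplier coverage",
     "skus": 1800, "dim_weight": 0.18, "lift_pp": 9},
]

def impact(a: dict) -> float:
    # Records unblocked, weighted by how much overall score the fix restores.
    return a["skus"] * a["dim_weight"] * a["lift_pp"]

ranked = sorted(actions, key=impact, reverse=True)
for i, a in enumerate(ranked, 1):
    print(f"{i}. {a['fix']}: {a['skus']:,} SKUs unblocked")
```

With these invented weights, "Refresh procurement lead times" outranks "Backfill SEO keywords" despite unblocking fewer SKUs, because Supply Chain carries more than twice Search Marketing's weight.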

Three jobs, one agent

Job 1

Assess a single object

One SKU, one supplier, one customer, one material. The agent evaluates it against the consumer-domain framework, produces a weighted score, drills into each dimension's components, calls out specific gaps, and recommends fixes ranked by which consumer the fix unblocks. The single-SKU example above is the artifact.

Used by: stewards reviewing high-value records, business owners auditing new launches, suppliers checking their own readiness before submission.
Job 2

Assess a population

All SKUs in a category, all suppliers in a region, all customers above a credit threshold. The same framework rolls up. Systemic gaps surface. Priority actions are ranked by impact — how many records they unblock and how many dimension points they add. The aggregate example above is the artifact.

Used by: business owners deciding what to fund, MDM leads sequencing remediation work, executives asking "are we ready for Q3?"
Job 3

SME on call

Both kinds of assessment ground a chat surface where business and MDM teams ask questions against the live data, the lineage, and the assessments. The questions look like this:

You: Why is PainAway Extra Strength at 78% and the category average at 82%?

Expert Agent: The gap is concentrated in Supply Chain Readiness (72% vs. 84% category avg) and Search Marketing (54% vs. 71%). Three component-level issues:
  1. Procurement lead time is marked "Estimated" and was last updated 6 months ago; the rest of the category has confirmed values updated within 90 days.
  2. No secondary supplier is configured; 78% of the category has at least one.
  3. The SEO keyword set is at 4 entries; the category median is 12.
Fixing #1 and #3 would put this SKU at 84%, matching the category average. #2 is a sourcing decision, not a data fix. Do you want me to flag it to the Steward Agent for routing to Procurement?

You: Show me the ten SKUs in the analgesics category most at risk of failing the Q3 launch.

Expert Agent: Filtering 412 analgesics SKUs by Sales Readiness < 60% and Search Marketing < 50%, the two consumer dimensions the Q3 launch depends on. Top 10 attached, with each SKU's dominant gap and the single fix that would lift it above the launch threshold.

The dimensional framework

The framework is metadata, not code — same as everything else in ZMDM. Each domain ships with a default framework that you tune to your business. The finished-goods framework that produced the two example assessments looks like this.

  • Category Management · weight 40% · 7 components
    Primary category, sub-category, product family, hierarchy depth, classification consistency, replacement / substitute links.
  • Finance Readiness · weight 20% · 6 components
    Standard cost, BOM cost, transfer price, regional pricing, tax classification, GL account assignment.
  • Supply Chain Readiness · weight 18% · 9 components
    Lead times (procurement / manufacturing / total), supplier coverage (primary / secondary / emergency), inventory policy, MOQ, ROP, safety stock.
  • Sales Readiness · weight 15% · 8 components
    List price, MSRP, channel availability, regional restrictions, launch date, lifecycle status, description quality.
  • Manufacturing & Production · weight 15% · 7 components
    BOM completeness, routing, work centers, capacity, batch / lot policy, regulatory certifications.
  • Logistics Readiness · weight 12% · 6 components
    Dimensions, weight, pallet config, hazmat class, storage requirements, handling instructions.
  • Digital Commerce · weight 12% · 8 components
    Images, descriptions, attribute coverage for marketplace feeds, content localization, channel-specific overrides.
  • Search Marketing · weight 8% · 5 components
    SEO keywords, search synonyms, A+ content, structured data, category-relevance scores.
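Because the framework is metadata, it can be pictured as plain structured data. The shape below is an illustrative sketch, not ZMDM's actual framework format; component names are abbreviated from the lists above, and only two of the eight dimensions are spelled out.

```python
# Hypothetical framework-as-data sketch — not the real ZMDM format.
finished_goods_framework = {
    "name": "Finished-Goods Master Data",
    "target": 0.85,
    "dimensions": [
        {"name": "Category Management", "weight": 0.40,
         "components": ["primary_category", "sub_category", "product_family",
                        "hierarchy_depth", "classification_consistency",
                        "substitute_links"]},
        {"name": "Search Marketing", "weight": 0.08,
         "components": ["seo_keywords", "search_synonyms", "a_plus_content",
                        "structured_data", "category_relevance"]},
        # ...the remaining dimensions follow the same shape
    ],
}

# Tuning the framework to your business is a data edit, not a code change:
finished_goods_framework["dimensions"][1]["weight"] = 0.12
```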

How the agent gets to a score

The assessment is not a black-box LLM hallucination. It is a deterministic rollup with an LLM commentary layer on top.

  1. Record-level signals

    Field-level completeness, validation pass/fail, freshness, distribution-fit, duplicate confidence, lineage events — produced by the Quality Agent and the workflow engine.

  2. Component scores

    Each component (e.g. "Lead Time Management") is a deterministic rule combining a handful of record-level signals: required fields present, values within plausible ranges, last-updated recency, source-of-record confidence.

  3. Dimension scores

    Weighted average of component scores, with dimension-specific bonuses for cross-component consistency.

  4. Overall score

    Weighted average of dimension scores.

  5. Narrative

    An LLM step grounded on the scored output writes the bulleted "key insights", the recommended actions, and the conversational answers. It does not compute scores — the numbers are deterministic and citable.

Numbers from rules. Words from the LLM.

The split is deliberate. The number you act on is never the one the LLM made up.
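The rollup in steps 2–4 is a few lines of arithmetic. The sketch below is a minimal illustration, assuming invented component weights (so the result will not match the 72% in the excerpt) and omitting the dimension-specific consistency bonuses; only the component scores are taken from the Supply Chain excerpt above.

```python
# Minimal sketch of the deterministic rollup — names and component
# weights are illustrative, not the actual ZMDM framework definition.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    weight: float   # weight within its dimension
    score: float    # 0–100, from deterministic rules over record-level signals

@dataclass
class Dimension:
    name: str
    weight: float   # weight within the overall score
    components: list[Component]

    def score(self) -> float:
        # Step 3: weighted average of component scores
        # (consistency bonuses omitted for brevity).
        total = sum(c.weight for c in self.components)
        return sum(c.score * c.weight for c in self.components) / total

def overall_score(dimensions: list[Dimension]) -> float:
    # Step 4: weighted average of dimension scores.
    total = sum(d.weight for d in dimensions)
    return sum(d.score() * d.weight for d in dimensions) / total

supply_chain = Dimension("Supply Chain Readiness", 0.18, [
    Component("Lead Time Management", 0.4, 35.0),   # scores from the excerpt
    Component("Supplier Coverage", 0.3, 60.0),
    Component("Inventory Policy", 0.3, 95.0),
])
print(f"Supply Chain Readiness: {supply_chain.score():.1f}")
```

The LLM narrative layer would consume the scored output of `overall_score` and the per-component values; it never touches the arithmetic.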

30 minutes, end to end

From the top-10 SKUs, through a category aggregate, to a routed integration fix and a steward task — all inside the length of a standard meeting.

Minute 1
You open the Expert Agent. The catalog browser shows the frameworks for every domain you have stood up. You pick the finished-goods framework, then narrow to your top-10 grossing SKUs.
Minutes 2 – 8
Single-object pass. The agent renders the SKU-level scorecard for each. Three score in the high 80s, four in the 70s, three below 60. You scan the priority actions for the bottom three; one of them is "no images uploaded since 2023" — you knew that was a problem and now you have it quantified.
Minutes 9 – 14
Aggregate pass on the full analgesics category. 412 SKUs, overall 71% vs. target 85%. The Priority Actions panel surfaces "Refresh procurement lead times" as the top-ranked action — 3,200 SKUs affected, +14pp to Supply Chain dimension if resolved.
Minutes 15 – 22
You ask the chat panel: "Why is procurement lead time so stale across analgesics specifically?" The agent answers from the lineage: a 2023 supplier consolidation moved the upstream procurement system; the integration that refreshes lead times still points at the old source. That's the Architect Agent's problem, not a steward problem.
Minutes 23 – 30
You flag the integration issue to the Architect Agent and the missing-images problem to the Steward Agent for routing. Both pickups appear in their respective work queues. You move on to the supplier domain.

What the Expert Agent doesn't do (yet)

Honest scope. Some of these are deliberate scope decisions; the rest are short roadmap items.

  • Historical trending: the two example assessments are point-in-time. Time-series views of "Supply Chain readiness over the last six quarters" are on the roadmap, gated on the lineage store landing field-level history.
  • Forecasting: the agent scores what is, not what will be. It does not predict that lead times are about to drift; it tells you they already have.
  • Dollar-impact ranking: priority actions are ranked by SKU count and dimension-point impact. Dollar-weighted ranking (revenue at risk, COGS impact) requires consuming Sales and Finance data the agent doesn't yet read.
  • Cross-domain assessments: a SKU's readiness depends on its supplier's readiness, which depends on that supplier's compliance data. Cross-domain rollups are designed but not yet shipping.

How it fits with the rest of ZMDM

The Expert Agent is the fifth of five — the one that tells the business whether the whole thing is ready for the work that depends on it.

  1. Design: canonical model + lifecycle workflows
  2. Architect: workflow templates + integration wiring
  3. Quality: continuous perception + scored findings
  4. Steward: activity execution alongside human stewards
  5. Expert: consumer-weighted assessment + recommendation

  • It consumes record-level signals from the Quality Agent — completeness, validation, duplicates, freshness, lineage events.
  • It reads the canonical model from the Design Agent — knows which attributes are mandatory, which valuesets exist, which fields are display-required vs. operational-required.
  • It reads workflow history from the Architect Agent's output — lifecycle events, approval timestamps, who-did-what.
  • It feeds the Steward Agent — priority actions become workflow assignments. High-confidence fixes get agent-completed; nuanced ones escalate.
  • It speaks MCP — any compliant agent (yours, a vendor's, a partner's) can ask the Expert Agent for the readiness of a SKU or a population and get back a cited, scored answer. Master data becomes the AI context layer the rest of your AI surface grounds itself in.
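As a sketch of what that MCP exchange might look like: a compliant client issues a standard JSON-RPC `tools/call` request. The tool name `assess_object`, its argument names, and the SKU identifier below are all assumptions for illustration, not the agent's published interface.

```python
# Hypothetical MCP tools/call request — "assess_object" and its
# arguments are invented for illustration; only the JSON-RPC envelope
# follows the MCP wire format.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "assess_object",                      # assumed tool name
        "arguments": {
            "domain": "finished_goods",               # assumed argument schema
            "object_id": "SKU-EXAMPLE-001",           # placeholder identifier
        },
    },
}
print(json.dumps(request, indent=2))
```

The response would carry the scored, cited assessment back to the calling agent in the same envelope.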
The Design Agent shapes the data. The Architect Agent shapes how it moves. The Quality Agent watches what flows through. The Steward Agent acts on what's been raised. The Expert Agent tells the business whether the whole thing is ready for the work that depends on it. That is the question every MDM program was asked to answer in the first place.

Start Your Success Story

Join the growing list of manufacturers who have transformed their master data management with ZMDM.