From Fixing Bad Master Data to Preventing It: The New MDM Playbook

Master data teams and solutions are so focused on fixing bad data that they've lost sight of the one thing that would actually solve the problem: preventing bad data from being created in the first place.

And it’s failing everyone: the master data teams drowning in remediation work, the business users fighting bad data every day, and the organizations paying dearly when operational processes break down due to bad master data.


Master Data Stewardship Can Look like a Repair Lot

For decades, car manufacturers ran what the industry called a “repair lot” — a section at the end of the assembly line where defective vehicles were parked, fixed, and reworked before they could be shipped. It was accepted as a cost of doing business.

We’ve built the data equivalent of a repair lot and called it Master Data Management.

Deduplication engines. Exception queues. Stewardship teams correcting records that entered the system wrong. Mass remediation projects that run every 12–18 months because the data degraded again. These aren’t solutions. They’re workarounds that perpetuate a flawed process.


The Costs Are Real — and Most Organizations Only See Half of Them

The direct costs are visible: MDM platforms, implementation projects, and stewardship teams whose primary job is fixing what should have been right the first time. Gartner estimates poor data quality costs the average organization $12.9 million per year.

But indirect costs are where the real damage happens.

A bad material master flows into procurement, manufacturing, finance, and logistics. A wrong unit of measure causes a mis-shipment. A missing classification blocks a product launch. A duplicate vendor generates a duplicate payment. These failures don’t show up in the MDM budget. They show up as operational rework, expediting costs, compliance failures, and lost revenue — often five to ten times the direct cost.

If the cars keep arriving at the repair lot faster than you can fix them, the answer isn’t more mechanics. It’s fixing what’s wrong on the line.


Manufacturing Figured This Out Decades Ago

Something is fundamentally wrong when fixing bad master data becomes a permanent and primary objective of master data management.

The Toyota Production System isn’t built around better defect correction. It’s built around defect prevention. Jidoka. Poka-yoke. The Andon cord. The philosophy is simple: stopping a defect at the source is always cheaper than correcting it downstream.

Toyota didn’t build a repair lot. They engineered quality into the process itself — and eliminated the need for one.

Master data has the opposite architecture. We put the repair lot at the end and called it governance.

Instead of being a quality management system, MDM has become a quality control department.


The New Playbook: Move Master Data Governance Upstream

The companies actually winning at master data aren’t better at cleaning it up. They’ve moved governance upstream, to the point of creation. Validated workflows. Business-owned approval gates. Intelligently generated master data. Duplicate detection before a record is saved, not after. Integration validation before bad data enters the system, not after it already has.

The goal isn’t a bigger repair lot. It’s a production line that doesn’t produce defects in the first place.

Here’s what that looks like in practice:

1. Error-Proof at the Point of Entry

This is non-negotiable. The more you can error-proof at the point of data entry, the fewer downstream steps are needed. Field-level validation, mandatory attributes, and context-aware rules catch problems the moment they’re introduced — not weeks later when they’ve already propagated.
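As a minimal sketch of what field-level, context-aware validation can look like (the field names, unit list, and rules here are illustrative assumptions, not from any specific MDM product):

```python
# Sketch of point-of-entry validation for a material master record.
# Field names and rules are hypothetical, for illustration only.

MANDATORY_FIELDS = {"material_number", "description", "base_unit"}
VALID_UNITS = {"EA", "KG", "L", "M"}

def validate_entry(record: dict) -> list[str]:
    """Return a list of errors; an empty list means the record may proceed."""
    errors = []
    # Mandatory-attribute check: every required field must be present.
    for field in sorted(MANDATORY_FIELDS - record.keys()):
        errors.append(f"Missing mandatory field: {field}")
    # Field-level check: the unit of measure must come from a known list.
    if "base_unit" in record and record["base_unit"] not in VALID_UNITS:
        errors.append(f"Unknown unit of measure: {record['base_unit']}")
    # Context-aware rule: a batch-managed material needs a shelf life.
    if record.get("batch_managed") and not record.get("shelf_life_days"):
        errors.append("Batch-managed materials require shelf_life_days")
    return errors
```

The point isn’t the specific rules; it’s that every check runs at the moment of entry, so a bad record is rejected before it exists anywhere downstream.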

2. Build Governance into Existing Business Workflows

Master data doesn’t live in a vacuum. New product introductions, supplier onboarding, customer setup — these are business processes, and master data governance should be embedded in them, not bolted on after. Many master data fields are context-dependent: certain values only make sense given prior selections. Intelligent, dynamic workflow logic reduces the chance of errors and makes it easier for stakeholders to get it right the first time.
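Context-dependent fields can be expressed as a rule table that narrows later choices based on earlier ones. A small sketch, with a hypothetical material-type-to-procurement mapping as the example:

```python
# Sketch of context-dependent field rules: the values allowed for one
# field depend on an earlier selection. The mapping below is illustrative.
PROCUREMENT_BY_TYPE = {
    "FERT": {"in_house"},              # finished goods: produced in-house
    "ROH":  {"external"},              # raw materials: purchased
    "HALB": {"in_house", "external"},  # semi-finished: both allowed
}

def allowed_procurement_types(material_type: str) -> set[str]:
    """Narrow the options presented to the user based on prior input."""
    return PROCUREMENT_BY_TYPE.get(material_type, set())

def check_procurement(material_type: str, procurement_type: str) -> bool:
    """Reject combinations that make no sense given the earlier selection."""
    return procurement_type in allowed_procurement_types(material_type)
```

A dynamic workflow would use the same table twice: first to shrink the dropdown the user sees, then to reject any combination that slips through anyway.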

3. Validate Before You Create

This is the last line of defense before data enters ERP, CRM, or planning systems. Validation rules, from simple to sophisticated, can catch inconsistencies (missing classifications, conflicting values, failed cross-system checks) before a record is committed. Not after it’s already live in five downstream systems.
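One pre-commit check worth singling out is duplicate detection. A sketch using standard-library fuzzy matching; the similarity threshold and field names are assumptions, and a production system would use stronger matching:

```python
# Sketch of a pre-commit gate: a consistency check plus duplicate
# detection against existing records, via stdlib fuzzy string matching.
# The 0.85 threshold and field names are illustrative assumptions.
from difflib import SequenceMatcher

def find_likely_duplicates(name: str, existing_names: list[str],
                           threshold: float = 0.85) -> list[str]:
    """Return existing names whose similarity to `name` exceeds threshold."""
    return [n for n in existing_names
            if SequenceMatcher(None, name.lower(), n.lower()).ratio() >= threshold]

def precommit_check(record: dict, existing_names: list[str]) -> list[str]:
    """Run final checks before the record is committed to target systems."""
    issues = []
    if not record.get("classification"):
        issues.append("Missing classification")
    dupes = find_likely_duplicates(record.get("name", ""), existing_names)
    if dupes:
        issues.append(f"Possible duplicates: {dupes}")
    return issues
```

Running this before the commit is the whole point: a near-match like “Acme Industrial Supply” versus “ACME Industrial Supplies” gets flagged while it’s still one record in one place, not a duplicate vendor in five systems.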

4. Support End-to-End Processes, Not Just the Primary Record

There’s a common misconception that master data only encompasses the primary domain object — the material, the customer, the vendor. But for master data to be usable by the business, the complete scope includes many related elements: units of measure, plant data, purchasing info, sales conditions, classification hierarchies. Supporting the end-to-end process to create all these elements properly — in the right order, with the right dependencies — is essential for master data that’s actually ready to use.
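The “right order, with the right dependencies” requirement is essentially a dependency graph. A sketch using a topological sort; the element names and their dependencies below are an illustrative model, not any real system’s data structure:

```python
# Sketch of creating related master data elements in dependency order.
# The elements and dependencies are hypothetical, for illustration.
from graphlib import TopologicalSorter  # Python 3.9+

# Each element maps to the set of elements it depends on.
DEPENDENCIES = {
    "basic_data": set(),
    "units_of_measure": {"basic_data"},
    "classification": {"basic_data"},
    "plant_data": {"basic_data", "units_of_measure"},
    "purchasing_info": {"plant_data"},
    "sales_conditions": {"plant_data", "classification"},
}

def creation_order() -> list[str]:
    """Return an order in which every element's dependencies come first."""
    return list(TopologicalSorter(DEPENDENCIES).static_order())
```

Encoding the dependencies explicitly means the workflow can always derive a valid creation sequence, and a missing prerequisite becomes a detectable cycle or gap rather than a half-created record.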

5. Correct Errors Continuously, Without Disruption

In an ideal world, you wouldn’t need this step at all. But if your existing systems already carry incorrect data, you need a way to correct it continuously and surgically — without mass data loads, system outages, or six-month remediation projects. Targeted correction workflows, not batch cleanses, are the path forward.


From Reactive to Proactive

Most MDM programs were designed to manage the aftermath of bad data. The shift to prevention requires rethinking where governance happens — not at the end of the process, but at the beginning of it.

That means treating master data as a business process problem, not a data management problem. It means giving business users the tools to get data right the first time, rather than flagging errors for IT teams to fix later. And it means measuring success not by how fast you can clean data up, but by how rarely bad data gets in.

The repair lot had its era. The companies building durable master data quality today aren’t trying to fix defects faster; they’re preventing them.


At Cambrian Lab, we built ZMDM around the philosophy of preventing errors at the source, inspired by the discipline and effectiveness of the Toyota Production System. If you’re ready to move governance upstream, we’d like to show you how it works.