
Your product data is not ready for the Digital Product Passport — and here is why that matters now

April 2026
Overview

ISO and IEC are advancing a Joint Technical Committee on Digital Product Passports — and at its core, this is not a sustainability initiative but a recognition that most manufacturers' product data is structurally unfit for cross-ecosystem exchange. This article unpacks what DPP readiness actually requires, why the ISO 8000 family already defined the answer, and where K:spec fits into closing the gap before regulatory and commercial pressure arrives in force.

What the DPP proposal actually requires

ISO/TC 154 has commenced work on the first draft of ISO 25534-1 Digital product passport — Part 1: Overview and fundamental principles. The work programme identifies six categories of core component that a conformant DPP system must address, and the list will be immediately recognizable to anyone with a master data background, because it describes what a well-governed master data architecture should already provide. Four of those categories stand out:

Unique identifiers

Globally resolvable IDs that link physical products to their digital representation.

Data carriers & lookup

QR codes, RFID, and resolution mechanisms for retrieving authoritative product data.

Access rights management

Granular control over who can read or update which parts of a passport.

Exchange protocols & formats

Standardised, machine-readable schemas that travel across systems and jurisdictions.
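To make the component list concrete, here is a minimal sketch of how those pieces might hang together in code. Every name and field below is an illustrative assumption, not the schema ISO 25534-1 will define; the identifier and URL are invented.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the four core components listed above.
# Field names and values are assumptions, not the ISO 25534-1 schema.
@dataclass
class DigitalProductPassport:
    product_id: str                      # globally resolvable identifier
    data_carrier: str                    # e.g. "QR" or "RFID"
    resolver_url: str                    # lookup endpoint for the identifier
    access_rights: dict                  # role -> list of readable sections
    payload_format: str                  # machine-readable exchange format
    attributes: dict = field(default_factory=dict)

passport = DigitalProductPassport(
    product_id="0194:ACME:PUMP-7741",    # made-up identifier for illustration
    data_carrier="QR",
    resolver_url="https://resolver.example/acme/pump-7741",
    access_rights={"recycler": ["material_composition"]},
    payload_format="application/json",
)
```

The point of the sketch is the separation of concerns: the identifier, the carrier that holds it, the resolver that dereferences it, and the access model are distinct components, each of which a conformant system has to provide.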

ISO 25534-1 is concerned with horizontal interoperability: ensuring that a DPP created in one system, sector or jurisdiction can be understood and trusted by actors operating in a completely different one. This is precisely where the gap between current “product data” and what a DPP requires becomes most visible.

The fundamental problem: data that was never designed to travel

Most manufacturers’ product data were created to serve internal purposes. The data live in ERP systems configured for procurement, in PDM systems configured for engineering, in PIM systems configured for e-commerce, and in spreadsheets that sit somewhere in between. The data use internal part numbers, proprietary attribute names, free-text descriptions written by individuals who no longer work for the organization, and classification codes that made sense at the time of initial implementation but have never been systematically maintained.

These data can serve their immediate operational purpose tolerably well — up to the point where they need to cross a boundary. When a manufacturer’s product data must be understood by a customer’s procurement system, a regulator’s surveillance platform, a recycler’s material recovery process, or — as the DPP framework envisions — all of these simultaneously, the structural weaknesses become impossible to manage around.

The problems are predictable and consistent across industries. Attribute names are ambiguous: “length” without a unit and without a reference to what is being measured is not a machine-readable data point. Classification codes are inconsistently applied: the same physical product may be classified differently across different markets or customer relationships. Properties are unmeasured or unmeasurable: values recorded as ranges, approximations or qualitative descriptions cannot be used in automated interoperability scenarios. And identifiers are not unique in any globally resolvable sense: an internal part number is meaningful only within the system that issued it.
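The contrast between those failure modes and data designed to travel can be shown in a few lines. Both records below are invented for illustration; the controlled property identifier is a hypothetical placeholder, not a real dictionary entry.

```python
# A typical internal record: syntactically fine, semantically opaque.
internal_record = {
    "length": 120,           # no unit, no reference to what is measured
    "class": "VALVE-STD",    # classification code applied inconsistently
    "part_no": "A-77-4412",  # meaningful only inside the issuing ERP
}

# The same fact expressed so a receiving system can interpret it alone.
# The property identifier and scheme name are invented examples.
portable_record = {
    "property_id": "0194-EX#LEN-0001",   # hypothetical controlled identifier
    "property_name": "overall length",
    "value": 120,
    "unit": "mm",
    "classification": {"scheme": "example-dictionary", "code": "EX-0402"},
}
```

A machine reading `internal_record` cannot tell whether "length" is millimetres or inches, or whether it measures the body, the stroke or the packaging; `portable_record` carries that context with it.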

The DPP framework requires machine-readable, semantically unambiguous, globally resolvable product data. Most manufacturers are not close to that.

What ISO 8000 understood before the DPP was invented

The foundational standard for data quality in industrial contexts, ISO 8000, made a critical distinction that much of the industry has still not fully absorbed. It separates the question of what data say from the question of what data mean. A product attribute can be syntactically correct — it has a value, a unit, a name — while being semantically meaningless, because the concept to which it refers has not been defined in a way that any system other than the one that created it can interpret without human intervention.

ISO 8000-110 and its associated family of standards established the principle that master data conformity requires not just syntactic correctness but semantic provenance: data shall be traceable to a defined concept, expressed using a controlled property identifier, and accompanied by sufficient metadata to allow any receiving system to interpret them without ambiguity. This is not an abstract ideal. It is a practical requirement for any system that must exchange data across organizational or sectoral boundaries.
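The semantic-provenance requirement can be expressed as a simple predicate. The set of required keys below is an illustrative assumption standing in for the richer metadata ISO 8000-110 actually specifies.

```python
def has_semantic_provenance(record):
    """Sketch of the ISO 8000-110 idea: a value is exchangeable only when
    it is traceable to a defined concept and carries its own interpretive
    metadata. The required keys are assumptions for illustration."""
    required = {"concept_id", "value", "unit", "source_authority"}
    return required <= record.keys()

# A concept-anchored value passes; the identifier and authority are invented.
anchored = {
    "concept_id": "0194-EX#TS-0001",
    "value": 42.0,
    "unit": "MPa",
    "source_authority": "example concept dictionary",
}
bare = {"tensile_strength": 42.0}   # a column header, not a data point
```

Here `has_semantic_provenance(anchored)` is true and `has_semantic_provenance(bare)` is false: the bare record says something, but carries no way for a receiving system to establish what it means.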

The DPP framework is, in structural terms, an extension of this principle to the entire supply and value chain. Its workstream on “data interoperability” — covering reliability, quality, verification and certification of data — sits firmly in ISO 8000 territory; the workstream on semantic dictionaries and terminology standards is ISO 29002 territory. The related work programmes in the various standards bodies are, to a significant degree, a recognition that the intellectual infrastructure of industrial data standards already exists. What is missing is its adoption.

Where KOIOS K:spec fits into this picture

The KOIOS K:spec platform was designed, from first principles, around the ISO 8000 architecture. That is not a marketing claim — it is a description of how the system works at its foundation. Every attribute recorded in K:spec is traceable to a defined concept in an ISO 29002-conformant concept dictionary. Every property identifier is expressed as an IRDI — an International Registration Data Identifier — that is globally unique and resolvable. Every data exchange is governed by a structured schema that makes the semantic provenance of the data explicit, not implicit.

This architecture is directly relevant to DPP readiness, and the relevance is not coincidental.

The DPP interoperability problem — how a manufacturer’s product data become interpretable to any actor anywhere in the DPP Ecosystem — has exactly one durable solution: the data must be anchored to concepts that are defined independently of any single organization’s internal terminology. When a product attribute in K:spec records that a component has a particular tensile strength, the attribute is not labelled “tensile strength” in the way a column header in a spreadsheet is labelled. It is linked to a concept definition that specifies precisely what tensile strength means in that material context, what measurement method is referenced, what unit of measure applies, and what source of authority defines the concept. That definition exists independently of the KOIOS platform and independently of the manufacturer who populated the data. It is, in the language of the DPP framework, semantically interoperable by construction.
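The tensile-strength example can be sketched as data structures. This is not the K:spec data model — the class names, the IRDI and the dictionary name are illustrative assumptions; the measurement method cited (ISO 6892-1) is the standard tensile test for metallic materials.

```python
from dataclasses import dataclass

# Illustrative sketch of concept-anchored attributes; names and the
# IRDI below are invented, not the K:spec schema.
@dataclass(frozen=True)
class ConceptDefinition:
    irdi: str                 # globally unique registration data identifier
    definition: str
    measurement_method: str
    unit: str
    authority: str            # who defines and maintains the concept

@dataclass
class Attribute:
    concept: ConceptDefinition  # the meaning lives here, outside the value
    value: float

tensile_strength = ConceptDefinition(
    irdi="0194-EX#TS-0001",     # made-up IRDI for illustration
    definition="maximum stress a material withstands before fracture",
    measurement_method="ISO 6892-1 tensile test",
    unit="MPa",
    authority="example concept dictionary",
)
attr = Attribute(concept=tensile_strength, value=510.0)
```

The design choice worth noting: the attribute stores a reference to a concept, not a label. Two systems that share the concept dictionary can exchange `attr` without any prior bilateral agreement about what "tensile strength" means.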

The identifier question

One of the most underestimated DPP readiness challenges for manufacturers is the identifier problem. The proposed JTC’s core component list places unique identifiers at the top, and for good reason. A DPP that cannot be reliably linked to the physical product or pre-product it describes is not a DPP — it is a document. The linkage between a physical object and its digital representation, via a resolvable identifier, is the technical foundation on which the entire ecosystem depends.

Most manufacturers do not currently issue product identifiers that are globally unique and resolvable. They issue part numbers, SKUs, catalogue numbers and EAN barcodes — none of which, in isolation, satisfies the requirement for a resolvable identifier that any actor in the DPP Ecosystem can use to retrieve authoritative product data without prior knowledge of the manufacturer’s internal systems.
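The difference between a part number and a resolvable identifier is mechanical, and a toy resolver makes it visible. The scheme code, organization and URL below are invented for illustration.

```python
# Sketch: an internal part number only resolves inside the issuing system,
# whereas a globally scoped identifier names its issuing scheme explicitly.
# Scheme "0194", org "ACME" and the URL are illustrative assumptions.

def resolve(identifier, registries):
    """Resolve a 'scheme:org:part' identifier against known registries.
    A bare part number carries no scheme, so no actor without prior
    knowledge of the issuer can dereference it."""
    if ":" not in identifier:
        return None                       # bare SKU/part number: not resolvable
    scheme, org, part = identifier.split(":", 2)
    return registries.get(scheme, {}).get(f"{org}:{part}")

registries = {"0194": {"ACME:PUMP-7741": "https://resolver.example/acme/pump-7741"}}

resolve("PUMP-7741", registries)            # None: internal part number
resolve("0194:ACME:PUMP-7741", registries)  # returns the resolver endpoint
```

The point is not the lookup itself but the addressing: once the scheme is part of the identifier, any actor can route the request without knowing anything about the manufacturer's internal systems.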

K:spec’s use of IRDI-based concept identifiers, and KOIOS’s registered status as an ICD 0194 issuing organization under ISO/IEC 6523, means that the identifier architecture in K:spec is already aligned with the globally unique, resolvable identifier requirement that the DPP framework will mandate. Manufacturers onboarding their product data into K:spec are, in the process of doing so, resolving the identifier problem that will otherwise need to be addressed separately and retrospectively when DPP conformity becomes a commercial or regulatory requirement.

The data onboarding imperative

There is a common misunderstanding about what it means to prepare data for a DPP. Many organizations frame it as a mapping exercise: take the data that are already held, map them to the DPP schema, and the job is done. This misunderstands the nature of the problem. A mapping exercise assumes that the source data are semantically coherent. For most manufacturers, they are not.

The work of making product data DPP-ready begins with a structured onboarding process that interrogates source data — from ERP extracts, supplier data sheets, engineering specifications and catalogue content — and resolves them against a defined concept framework. This means identifying which properties are actually being described, confirming that the values recorded are accurate and traceable, establishing the units and measurement conditions, and expressing the result in a form that carries its own semantic provenance.
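The onboarding step described above — resolving source attributes against a defined concept framework — might look like the following in miniature. The concept dictionary, alias table and IRDI are invented; real onboarding also involves human review, which this sketch signals by raising rather than guessing.

```python
# Minimal sketch of concept resolution during onboarding; the dictionary
# and alias table are illustrative assumptions.
CONCEPTS = {
    "overall-length": {"irdi": "0194-EX#LEN-0001", "unit": "mm"},
}
ALIASES = {"length": "overall-length", "len": "overall-length"}

def onboard(name, value, unit):
    """Resolve a free-text source attribute against the concept dictionary,
    refusing to emit anything that lacks semantic provenance."""
    concept = CONCEPTS.get(ALIASES.get(name, name))
    if concept is None:
        raise ValueError(f"unresolved property {name!r}: needs human review")
    if unit is None:
        raise ValueError(f"{name!r} has no unit: value is not machine-readable")
    if unit != concept["unit"]:
        raise ValueError(f"unit mismatch: {unit} vs {concept['unit']}")
    return {"concept_id": concept["irdi"], "value": value, "unit": unit}

onboard("length", 120.0, "mm")   # resolves to the controlled concept
```

The deliberate choice here is that the function fails loudly instead of passing ambiguous data through: a mapping exercise would silently copy `"length": 120` into the target schema; onboarding stops and asks what the value actually means.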

This is not a trivial exercise, and it cannot be fully automated. But it is the exercise. The alternative — attempting to create a DPP from data that have not been through this process — produces a DPP that is syntactically conformant and semantically unreliable. It will pass schema validation and fail interoperability tests, which is arguably worse than failing both, because it creates a false impression of readiness.

K:spec was designed to support this onboarding process at scale, including for complex multi-supplier product portfolios. The platform’s ISO 8000-110 conformity means that data accepted into K:spec have, by definition, passed the semantic quality threshold that DPP interoperability requires.

The competitive case for acting now

The DPP framework makes clear that DPP is not a single-sector phenomenon. The listed stakeholder categories span large industry, SMEs, government, consumers, research bodies and NGOs. The relevant committees include organizations working on batteries, construction, lifecycle assessment, circular economy, blockchain and the Internet of Things. The DPP is being designed to be horizontal — applicable across products, sectors and jurisdictions — precisely because the supply chains it must serve are horizontal.

This means that the competitive advantage of being DPP-ready early will accrue not just in sectors where DPP regulation arrives first, but across every supply chain relationship where a customer, certifier or regulator needs to trust product data. That includes cross-border trade: many customs authorities are watching the DPP programme and recognizing the advantages of accurate, machine-readable data for tracking goods entering their jurisdiction. Manufacturers who have their master data in order will be able to respond to DPP data requests rapidly and reliably. Those who have not will face either a costly and disruptive remediation project or the reputational and commercial consequences of being unable to demonstrate product data quality at the point where it is demanded.

The window to address this proactively — before the regulatory and commercial pressure arrives — is not indefinitely open. The standards machinery is in motion. The work programme is defined. Standards will be published, and conformity requirements will follow.

A practical starting point

For manufacturers who recognize the challenge and want to understand where they stand, the starting point is an assessment of their current product data against the properties that DPP interoperability requires. Are the data semantically unambiguous? Are they traceable to defined concepts? Are the identifiers globally resolvable? Are the data expressed in a form that any receiving system can interpret without human mediation?
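Those four questions can be turned into a first-pass self-check. The checks below are deliberately crude stand-ins — real assessment is not a three-key inspection — and every field name is an assumption, not a formal DPP conformance test.

```python
# Sketch of a readiness self-check mirroring the questions above.
# Field names and checks are illustrative assumptions only.
QUESTIONS = {
    "semantically unambiguous": lambda r: "unit" in r and "value" in r,
    "traceable to a defined concept": lambda r: "concept_id" in r,
    "globally resolvable identifier": lambda r: ":" in r.get("product_id", ""),
}

def readiness_gaps(record):
    """Return the readiness questions this record would answer 'no' to."""
    return [question for question, check in QUESTIONS.items() if not check(record)]

readiness_gaps({"length": 120})   # a typical internal record fails every check
```

Running this over an ERP extract will not certify anything, but it makes the gap measurable, which is the prerequisite for closing it systematically.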

For most organizations, the answer to at least some of those questions will be no. That reflects the reality that industrial product data were not designed with cross-ecosystem interoperability in mind. The point is to understand the gap and to begin closing it in a systematic and standards-aligned way, so that when the DPP requirement arrives, the data foundation is already in place.

That is what K:spec is for.


Close the DPP readiness gap

K:spec provides the ISO 8000-conformant data foundation that Digital Product Passport interoperability will require — IRDI-based identifiers, ISO 29002 concept dictionaries, and structured onboarding at scale.