Why you should launch a data management program – A (data) message to the C-suite

How do you create quality information from data silos?

If you are an executive in a rapidly growing organization whose stated mission is “to be the most successful business in the world”, you will be aware that rapid growth brings a lot of excitement, and that this very growth changes the way the company operates.

Instinctive understanding of the business becomes more challenging as more operating sites and businesses are added to the group. Executives can no longer rely solely on their knowledge of individual sites and, as the business grows, they rely more on reports compiled from the different businesses to keep a grip on business metrics.

Any significant increase in new data brings new dynamics. Data silos multiply, making it more difficult to aggregate data across departments, sites, regions, and national borders. Recently acquired business units will inevitably lack a common language, and some common phrases may even have a different meaning in different parts of the organization. This lack of a common understanding makes it more likely that business opportunities will be missed.

Good quality master data is the foundation for making sound business decisions, and also for identifying potential risks to the business. Why is master data important? Master data is the data that identifies and describes individuals, organizations, locations, goods, services, processes, rules, and regulations, so master data runs right through the business.

All departments in the business need structured master data. What is often misunderstood is that the customer relationship management (CRM) system, the enterprise resource planning (ERP) system, the payroll system, and the finance and accounting systems are not going to fix poor master data. The performance of that software always suffers because of poor quality data, but the software itself will not cure the problem.

The key is to recognise what master data is not: master data is not an information technology (IT) function; it is a business function. Improving master data is not a project. Managing master data is a program and a function that should sit at the heart of the business process.

Foundational master data that is well structured and of good quality is a necessity if your business is going to process commercial data, transaction reporting, and business activity efficiently and effectively. The reality is that well-structured, good quality master data is also the most effective way to connect multiple systems and processes, both internally and externally.

Good quality data is the pathway to good quality information. With insight from good quality information, business leaders can identify – and no longer need to accept – the basic business inefficiencies that they know exist but cannot pin down. For a manufacturing company making 5% profit on sales, every $50,000 in operational savings is equivalent to $1,000,000 in sales. It follows, then, that if better quality information lets you identify and implement $50,000 of efficiency savings, you have solved a million-dollar problem.
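To make the arithmetic explicit, here is a minimal sketch of the calculation above; the figures are the ones from the example, not real data:

```python
profit_margin = 0.05          # 5% profit on sales
operational_saving = 50_000   # identified efficiency saving, in dollars

# A dollar saved flows straight to profit, so the sales needed to
# generate the same profit is the saving divided by the margin.
equivalent_sales = operational_saving / profit_margin

print(f"${operational_saving:,} saved is worth ${equivalent_sales:,.0f} in sales")
# -> $50,000 saved is worth $1,000,000 in sales
```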

If you, as an executive, are unsure of the integrity and accuracy of the very data that is the foundation for the reports you rely on in your organization, launching a master data management program is the best course of action you can take.

Contact us

Give us a call and find out how we can help you.

+44 (0)23 9387 7599

info@koiosmasterdata.com

About the author

Peter Eales is a subject matter expert on MRO (maintenance, repair, and operations) material management and industrial data quality. Peter is an experienced consultant, trainer, writer, and speaker on these subjects. Peter is recognised by BSI and ISO as an expert in the subject of industrial data. Peter is a member of ISO/TC 184/SC 4/WG 13, the ISO standards development committee that develops standards for industrial data and industrial interfaces: ISO 8000, ISO 29002, and ISO 22745. Peter is the project leader for edition 2 of ISO 29002, due to be published in late 2020. Peter is also a committee member of ISO/TC 184/WG 6, which published the standard for asset-intensive industry interoperability, ISO 18101.

Peter has previously held positions as the global technical authority for materials management at a global EPC, and as the global subject matter expert for master data at a major oil and gas owner/operator. Peter is currently chief executive of MRO Insyte, and chairman of KOIOS Master Data.

KOIOS Master Data is a world-leading cloud MDM solution enabling ISO 8000-compliant data exchange.

Data quality: How do you quantify yours?

Being able to measure the quality of your data is vital to the success of any data management programme. Here, Peter Eales, Chairman of KOIOS Master Data, explores how you can define what data quality means to your organization, and how you can quantify the quality of your dataset.

In the business world today, it is important to provide evidence of what we do, so, let me pose this question to you: how do you currently quantify the quality of your data?

If you have recently undertaken an outsourced data cleansing project, it is quite likely that you underestimated the internal resource it takes to check that data when preparing to onboard it. Whether the data is presented to you in the form of a load file, or viewed in the data cleansing software the outsourced party used, you are faced with thousands of records whose quality must be checked. How did you do that? Did you start by using statistical sampling? Did you randomly check some records in each category? Either way, what were you checking for? Were you just scanning to see if it looked right?

The answer to these questions lies in understanding what, in your organization, constitutes good quality data, and then understanding what that means in ways that can be measured efficiently and effectively.
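If statistical sampling is part of that answer, it need not be onerous. Below is a minimal sketch, in Python, of drawing a repeatable random sample from each category of a dataset so that every group gets checked; the field name and sample size are illustrative assumptions, not a prescribed method:

```python
import random

def sample_by_category(records, category_field="category", sample_size=50, seed=42):
    """Draw up to sample_size random records from each category,
    so the quality check covers every group in the dataset."""
    random.seed(seed)  # fixed seed makes the check repeatable and auditable
    by_category = {}
    for record in records:
        by_category.setdefault(record[category_field], []).append(record)
    return {
        category: random.sample(group, min(sample_size, len(group)))
        for category, group in by_category.items()
    }

# Hypothetical usage: inspect 50 records per category rather than all of them.
# samples = sample_by_category(cleansed_records)
```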

The Greek philosophers Plato and Aristotle captured and shaped many of the ideas we have adopted today for managing data quality. Plato’s Theory of Forms tells us that although we have never seen a perfectly straight line, we know what one would look like, whilst Aristotle’s Categories showed us the value of categorising the world around us. In the modern world of data quality management, we know what good data should look like, and we categorise our data in order to help us break down larger datasets into manageable groups.

In order to quantify the quality of your data, you need to understand, then define, the properties (attributes or characteristics) of the data you plan to measure. Data quality properties are frequently termed “dimensions”. Many organizations have set out what they regard as the key data quality dimensions, and there are plenty of scholarly and business articles on the subject. Two of the most commonly cited sources for lists of dimensions are DAMA International, and ISO, in the international standard ISO/IEC 25012.

There are a number of published books on the subject of data quality. In her seminal work Executing Data Quality Projects: Ten Steps to Quality Data and Trusted Information™ (Morgan Kaufmann, 2008), Danette McGilvray emphasises the importance of understanding what these dimensions are and how to use them in the context of executing data quality projects. A key call-out in the book emphasises this concept:

“A data quality dimension is a characteristic, aspect, or feature of data. Data quality dimensions provide a way to classify information and data quality needs. Dimensions are used to define, measure, improve, and manage the quality of data and information.

The data quality dimensions in The Ten Steps methodology are categorized roughly by the techniques or approach used to assess each dimension. This helps to better scope and plan a project by providing input when estimating the time, money, tools, and human resources needed to do the data quality work.

Differentiating the data quality dimensions in this way helps to:
1) match dimensions to business needs and data quality issues;
2) prioritize which dimensions to assess and in which order;
3) understand what you will (and will not) learn from assessing each data quality dimension; and
4) better define and manage the sequence of activities in your project plan within time and resource constraints”.

Laura Sebastian-Coleman, in her work Measuring Data Quality for Ongoing Improvement (2013), sums up the use of dimensions as follows:

“if a quality is a distinctive attribute or characteristic possessed by someone or something, then a data quality dimension is a general, measurable category for a distinctive characteristic (quality) possessed by data.

Data quality dimensions function in the way that length, width, and height function to express the size of a physical object. They allow us to understand quality in relation to a scale or different scales whose relation is defined. A set of data quality dimensions can be used to define expectations (the standard against which to measure) for the quality of a desired dataset, as well as to measure the condition of an existing dataset”.

Tim King and Julian Schwarzenbach, in their work Managing Data Quality – A practical guide (2020), include a short section on data characteristics that reminds readers that any set of dimensions depends on the perspective of the user; back to Plato and his Theory of Forms, from which the phrase “beauty lies in the eye of the beholder” is said to derive. According to King and Schwarzenbach, quoting DAMA UK (2013), the six most common dimensions to consider are:

  • Accuracy
  • Completeness
  • Consistency
  • Validity
  • Timeliness
  • Uniqueness

The book also offers a timely reminder that the international standard ISO 8000-8 is an important standard to reference when looking at how to measure data quality. ISO 8000-8 describes fundamental concepts of information and data quality, and how these concepts apply to quality management processes and quality management systems. The standard specifies prerequisites for measuring information and data quality and identifies three types of data quality: syntactic, semantic, and pragmatic. Measuring syntactic and semantic quality is performed through a verification process, while measuring pragmatic quality is performed through a validation process.
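To make this concrete, here is a minimal sketch, in Python, of how three of the dimensions listed above (completeness, uniqueness, and validity) could be turned into measurable scores. The sample records, field names, and validity rule are illustrative assumptions, not something prescribed by DAMA or ISO 8000-8:

```python
def completeness(records, field):
    """Share of records where the field is present and non-empty."""
    filled = sum(1 for r in records if r.get(field) not in (None, ""))
    return filled / len(records)

def uniqueness(records, field):
    """Share of values in the field that are distinct."""
    values = [r.get(field) for r in records]
    return len(set(values)) / len(values)

def validity(records, field, rule):
    """Share of records whose field satisfies a business rule."""
    return sum(1 for r in records if rule(r.get(field))) / len(records)

records = [
    {"part_no": "P-001", "description": "O-ring, FKM, 15.5 x 1.5"},
    {"part_no": "P-002", "description": ""},
    {"part_no": "P-002", "description": "Bearing, deep groove"},
]

print(f"{completeness(records, 'description'):.0%}")  # 67% complete
print(f"{uniqueness(records, 'part_no'):.0%}")        # 67% unique (P-002 duplicated)
print(f"{validity(records, 'part_no', lambda v: bool(v) and v.startswith('P-')):.0%}")  # 100% valid
```

Scores like these give you a repeatable baseline, so a cleansing project can be judged on measured improvement rather than on whether a sample “looked right”.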

In summary, there are plenty of resources out there that can help you understand how to measure the quality of your data, and at KOIOS Master Data, we are experts in this field.

Contact us

Give us a call and find out how we can help you.

+44 (0)23 9387 7599

info@koiosmasterdata.com

About the author

Peter Eales is a subject matter expert on MRO (maintenance, repair, and operations) material management and industrial data quality. Peter is an experienced consultant, trainer, writer, and speaker on these subjects. Peter is recognised by BSI and ISO as an expert in the subject of industrial data. Peter is a member of ISO/TC 184/SC 4/WG 13, the ISO standards development committee that develops standards for industrial data and industrial interfaces: ISO 8000, ISO 29002, and ISO 22745. Peter is the project leader for edition 2 of ISO 29002, due to be published in late 2020. Peter is also a committee member of ISO/TC 184/WG 6, which published the standard for asset-intensive industry interoperability, ISO 18101.

Peter has previously held positions as the global technical authority for materials management at a global EPC, and as the global subject matter expert for master data at a major oil and gas owner/operator. Peter is currently chief executive of MRO Insyte, and chairman of KOIOS Master Data.

KOIOS Master Data is a world-leading cloud MDM solution enabling ISO 8000-compliant data exchange.

What is K:spir and how can it revolutionise the SPIR process?

The SPIR process urgently needs to enter the 21st century

At KOIOS Master Data we have a unique understanding of the difficulties caused by the current SPIR (Spare Parts Interchangeability Record) process. Through our team’s years of MRO consultancy work, we have first-hand experience of how damaging the poor-quality data supplied in SPIRs can be to oil and gas projects. It can have a profound effect on cost, time and resource – cost, time and resource that could be spent innovating and developing a competitive advantage. Not to mention the unnecessary wastage it can lead to, in an industry that can hardly accommodate it in the current climate. In this age of Industry 4.0, digital transformation and international data standards such as ISO 8000, the question begs to be asked: why is data quality consistently letting the side down? When we struggled to find an effective SPIR solution, we set out to create one, and KOIOS Master Data was born.

K:spir is the only SPIR software designed this century using ISO 8000 standard data. It creates machine-readable data that retains quality throughout the chain, enabling accurate decision making and resulting in reduced cost, time and resource.

Here, we look at the importance of master data management, the challenges created by the SPIR process, and how K:spir is uniquely positioned to resolve those challenges.

Why is data management so important to the SPIR process?

In this age of ‘data explosion’, most businesses are aware of how poorly managed data can put them on the back foot. Experian’s 2019 Global Data Management Research found that 95% of organizations surveyed see a negative impact from poor data quality.

Similarly, the Aberdeen Group’s Big Data Survey in 2017 found that the biggest challenges for executives arise from data disparity, including inaccessible data, poor quality data informing decisions, and the growing need for faster analysis.

The overall effect is a lack of trust in data, to the great detriment of strategic decision making. And when you can’t trust your data to inform business decisions, then cost, time and resource will inevitably suffer.

In the context of the SPIR process, accurate decision making is everything. The SPIR exists as a tool for forecasting spares requirements for the life of a project, its sole purpose being to assist the Owner Operator (O/O) in making accurate decisions. Yet, as many will attest, the data supplied is often inaccurate, hard to access and sometimes supplied by the Engineering Procurement Contractor (EPC) at handover, by which time it is often too late to inform anything at all.

Experts have raised the question: if you can’t trust SPIRs to make accurate procurement decisions, then are they worth the paper they’re written on? The process is clearly out of date, yet it continues to blight the efficiency of many oil and gas upstream projects.

SPIRs dissected 

The shortcomings of the antiquated SPIR process can be summarised into three key areas:

1. DATA IS INACCURATE AND OVER-SIMPLIFIED

SPIRs are generated from paper forms and are transcribed many times, so part descriptions become distorted. Often, parts have multiple descriptions.

Solution: K:spir locks in data quality right at the start of the process, using ISO 8000 standard data. Part descriptions are consistent and safe from misinterpretation, providing confidence in forecasting and reordering. 

SPIRs are usually completed by an Original Equipment Manufacturer (OEM), which is not necessarily aware of the O/O’s operating and maintenance procedures. Therefore, they do not take into account equipment criticality or maintenance capability.

Solution: K:spir uses the maintenance and repair strategy to determine the spares requirement, reducing wastage and taking cost off the bottom line.

2. DATA IS INACCESSIBLE AND DIFFICULT TO ANALYZE

SPIRs often provide information in spreadsheets or PDFs, which are impossible to extract data from quickly, if at all. Extracting anything meaningful is very cost- and time-intensive, and relies on support from IT specialists.

Solution: K:spir provides instant reporting on the completeness and cost of spares, allowing for accurate decision making. The information is fully configurable to the requirements of the O/O. It can also create a Maintenance Bill of Materials (BoM) and is interoperable with maintenance systems.

Information is not portable and has to be re-entered for different systems.

Solution: K:spir generates portable (machine-readable) data, saving significant time spent re-keying information and avoiding unnecessary data handling costs.

Data exists on many platforms and is not available to all stakeholders, all of the time.

Solution: K:spir is cloud-based, providing simultaneous access to all stakeholders in the chain. This allows for more transparency and accountability at all stages of the project lifecycle.

3. DATA IS SUPPLIED TOO LATE

Sometimes even as late as handover, by which time it’s too late for the O/O to minimize the operating risk. There is no opportunity to make informed decisions, such as ordering spares with long lead times, or calculating warehouse space. This can lead to unnecessary wastage and operational difficulties along the line.

Solution: K:spir provides transparency right from the beginning of the project, allowing for critical decisions to be made early on. 

With its unique set of features and benefits, it’s clear that K:spir can relieve the symptoms of the current SPIR process with immediate effect, saving valuable cost, time and resource.

A SPIR – this is not what efficiency looks like!

SPIRs and effective MDM – who is responsible for getting it right?

As confident as we are in the KOIOS software suite to advance the world of Master Data Management (MDM), there are clearly other factors that need to be addressed, most notably, ownership. It is a thorny area, and one that is being more keenly contested as digital transformation rattles on apace. As the Aberdeen Group puts it, there is a “growing urgency for better data management”, as businesses see the shortfalls of their inability to harness data.

Experian’s report shows that in 84% of cases, data is still managed primarily by IT departments. Revealingly, 75% of their sample thought that ownership should lie within the business, with support from IT. They conclude that organizations should develop their MDM strategy to fulfill the needs of a much larger group of stakeholders, who wish to harness the power of their data to improve decision making and efficiency.

In the context of SPIRs and oil and gas projects, we believe that O/Os should become more demanding over the quality of data supplied to them by manufacturers. It is unrealistic for their IT experts to have sight of the broader operational requirements, with their own priorities being diverse and demanding. It is the Executives who suffer the consequences of the risk taken by ignoring poor data, and the operations and maintenance departments that will experience the pain. Clearly, they need to make their voices heard much earlier in the process. That said, manufacturers and EPCs also need a better understanding of the challenges faced by O/Os, and in our view should share the responsibility for getting the data right from the start.

It is, as previously stated, a tough subject, but we are constantly encouraged by the conversations we have with manufacturers and O/Os alike. More and more key stakeholders are waking up to the power that effective MDM can have in driving business forwards, by freeing up cost, time and resource and supporting strategic decision making. Not just to their own ends, but for industry as a whole to fully realize its digital transformation goals.

Join us in our vision to revolutionize the SPIR process

A radical change to the SPIR process and MDM as a whole is on the horizon. While there may be no silver bullet, we firmly believe that the right software is an essential move forward. The KOIOS software suite is geared towards this larger shift in MDM, but in the case of K:spir, the results can be felt immediately.

Our hope is that O/Os and manufacturers alike will unite in becoming more discerning and demanding about data quality, working as one to create harmony along the chain. At KOIOS Master Data, we are committed to leading the conversation and driving better data quality.

Contact us

If you wish to become part of the change and join us in our vision to revolutionize the SPIR process, we would love to discuss it further with you. 

+44 (0)23 9387 7599

info@koiosmasterdata.com

SPIRS: Are they worth the paper they’re written on?

I have carried out numerous studies of MRO inventory for companies around the globe, and I question the existence of SPIRs in spreadsheet form in the new era of digital data.

Peter Eales

Chairman - KOIOS Master Data

Context: Oil and gas upstream projects typically extract material requirements from Spare Parts Interchange Lists (SPILs), sometimes referred to as Spare Parts Lists and Interchangeability Records (SPIRs) or Recommended Spare Parts Lists (RSPLs). These lists are supplied to the owner/operator (O/O) at handover by the Engineering Procurement Contractor (EPC), having been supplied to the EPC by the manufacturer or the vendor of the equipment.

What challenges do SPIRs on spreadsheets create?

There are a number of issues for plant operators that arise from the use of SPIR documents in oil and gas projects. The release of these documents by the EPC is often left until the very end of the project, or does not happen at all, despite financial penalty clauses being inserted in the contracts. This is a real challenge for an operator who wants to reduce operating risk by purchasing long-lead items early enough, or who needs to calculate the size of warehouse required in a greenfield project.

The format of the SPIR is frequently inconsistent, effectively being a paper form that has been recreated in a spreadsheet and edited many times. In the end it resembles nothing much more than an optimistic vendor order form. Certainly, it is an incredibly difficult document to extract data from: no two forms are constructed in the same way, and they often have merged cells. Extracting a complete project’s worth of data is a costly exercise in terms of both manpower and time.

So, what is the solution? 

Case study: ever decreasing O-Rings…

If we look at these documents, it soon becomes clear that they have many drawbacks: the data is not extractable; they contain only a brief description of the product, often just a noun; they take no account of the equipment criticality; and they take no account of the O/O’s maintenance capability or their spares and repair strategy, such as repair or replace. Data quality is extremely poor in these types of documents. In this example, from a single 62-line SPIR document, an O-ring is described in four different ways. The shore hardness is also missing from the details, making it impossible to safely order the part from another supplier. For consumable items, it is common for the original part manufacturer’s name to be omitted from the document.

SPIR document example:

| Description | OD | ID | t | Material |
| --- | --- | --- | --- | --- |
| “O” RING | 18.5 | 15.5 | 1.5 | FKM |
| O RING | 16.6 | 11.8 | 2.4 | NBR |
| ORING | 66.5 | 62.5 | 2 | NBR |
| O-RING | 6.5 | 3.5 | 1.5 | NBR |
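A simple automated check can surface exactly this kind of inconsistency. The sketch below, in Python, collapses the four description variants from the table into one canonical term and flags the missing shore hardness; the canonical form and the list of required attributes are our own illustrative assumptions:

```python
import re

# The four description variants found in the 62-line SPIR, as tabulated above.
descriptions = ["“O” RING", "O RING", "ORING", "O-RING"]

def normalise(description):
    """Collapse quote, space, and hyphen variants into one canonical term."""
    collapsed = re.sub(r'["“”\s\-]', "", description.upper())
    return "O-RING" if collapsed == "ORING" else description

print({d: normalise(d) for d in descriptions})
# All four variants collapse to the single canonical term 'O-RING'.

# Flag records missing an attribute needed to safely reorder the part.
required = {"OD", "ID", "t", "material", "shore_hardness"}
record = {"OD": 18.5, "ID": 15.5, "t": 1.5, "material": "FKM"}
print("missing:", required - record.keys())  # -> missing: {'shore_hardness'}
```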

I would also challenge the spares actually listed on the form. Interpretations of what constitutes two years’ operating spares vary from manufacturer to manufacturer. Some list only basic consumables, as in their opinion that is all that is required in the first two years; some list a full production BoM (Bill of Materials) that includes such items as pump casings. Neither approach is helpful to the analyst trying to decide which spares to stock in the plant, or which to organize an on-demand local supply for.

The companies that design and manufacture equipment rarely operate it, and EPCs do not always have experience of operating and maintaining plants – so why would they know what spares you need? Asking a vendor how they calculated the failure rates in your application gets an interesting range of answers, although when you ask whether they will take back all the recommended spares that you have not used in five years’ time, you usually get the same answer! To be fair, some major manufacturers do track component failure rates in the field, but they are few and far between.

I would strongly challenge the decision to list the “two years recommended spares” on the SPIR. How many plants are designed for a two-year life? As a materials manager working with the maintenance team, I simply want to know all the maintainable items required for the life of the equipment. My task is to determine the spares required to keep the revenue producing assets running for the life of the plant.

When is a spare no longer a spare? When it becomes waste

Commissioning spares are another column frequently found on SPIR documents. Before you buy the commissioning spares, check with the EPC: they will probably be responsible for these spares during the commissioning period, will be leaving you a mountain of unused spares, and will usually be asking a hefty price for them at handover. As an owner operator, you will be in danger of overloading your warehouse with spares you might never use or have already purchased.

I do not want to spend subsequent years repeating the exercise to find out what spares I do not have, or which spares I will never use; you know, those spares that were purchased “just in case”.

When reviewing SPIR documents in order to determine the spares required for the operation, the criticality of the equipment, the maintenance capability, and an understanding of the planned consumption also need to be taken into account. Furthermore, a number of organisations have strategies to run certain non-critical equipment to failure and then replace the complete unit rather than repair the item using the recommended spares. This information is, understandably, not on the SPIR form, but it is vital in the decision-making process.

It never ceases to amaze me seeing a room full of people analysing SPIR forms and ordering the spares listed – using the column added by the vendor – without taking these factors into account.

I have carried out many studies of MRO inventory for companies around the globe, and the two most frequent causes of non-moving stock are spares purchased for equipment that no longer exists and, more commonly, spares purchased for equipment where the plant maintenance capability to repair the item does not exist; motor and pump spares are the favourites. It should go without saying that there is no point in keeping a bearing for a motor in a zoned area for your repair shop to use if the repair shop is not approved to complete work to that standard, or does not have a facility to test the repaired equipment. Pump and motor spares are most frequently purchased and remain unused, as most often the maintenance strategy is to send these units out for refurbishment when they fail. So why were they purchased in the first place? Probably because they were listed on the SPIR and the buyer took the appealing route of taking the manufacturer’s word regarding the required spares, or purchased the spares as part of a package.

Why are you still using spreadsheets? 

In the age of international digital data exchange standards such as ISO 8000, it is frankly mystifying that people are still using these outdated methods of creating and distributing the vast amounts of data required for large projects. 

I firmly believe the answer lies in a simple data exchange service.

There is a paradigm shift under way, driven by new technology and international standards such as ISO 8000. The antiquated process of buyer-led templates is replaced with supplier-led specifications delivered in a computer-interpretable format. The success of this new method lies in the provenance of data, eliminating any ambiguity and making data easy to extract.

It can be achieved by a simple clause in the contract with the EPC:

“The supplier shall supply technical data for the products or services they supply. Each item shall contain an ISO 8000-115 compliant identifier that is resolvable to an ISO 8000-110 compliant record with free decoding of unambiguous, internationally recognized identifiers.”

The result? Guaranteed data quality leading to a reduction in costs and increased efficiency.
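To illustrate what a computer-interpretable specification might look like, here is a purely hypothetical sketch of a part record whose identifier resolves to a fully attributed record. The structure, field names, and identifier format are our own illustration, not the normative syntax of ISO 8000-110 or ISO 8000-115:

```python
# Hypothetical machine-readable part record: every attribute is an explicit,
# typed field rather than free text buried in a spreadsheet cell.
part_record = {
    # Illustrative identifier only; not the normative ISO 8000-115 syntax.
    "identifier": "example-prefix#0000-0001",
    "supplier": "Example Seals Ltd",
    "class": "O-ring",
    "properties": {
        "inside_diameter": {"value": 15.5, "unit": "mm"},
        "cross_section": {"value": 1.5, "unit": "mm"},
        "material": "FKM",
        "shore_hardness": "75A",
    },
}

# Because the structure is explicit, extraction is a lookup,
# not a manual transcription exercise:
bore = part_record["properties"]["inside_diameter"]["value"]
```

A record like this can be validated mechanically on receipt, rather than transcribed and interpreted by hand.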

To find out more about the tools you need to unlock the power of ISO standard digital data, visit KOIOS Master Data.

About the author

Peter Eales is a subject matter expert on MRO (maintenance, repair, and operations) material management and industrial data quality. Peter is an experienced consultant, trainer, writer, and speaker on these subjects. Peter is recognised by BSI and ISO as an expert in the subject of industrial data. Peter is a member of ISO/TC 184/SC 4/WG 13, the ISO standards development committee that develops standards for industrial data and industrial interfaces: ISO 8000, ISO 29002, and ISO 22745. Peter is the project leader for edition 2 of ISO 29002, due to be published in late 2020. Peter is also a committee member of ISO/TC 184/WG 6, which published the standard for asset-intensive industry interoperability, ISO 18101.

Peter has previously held positions as the global technical authority for materials management at a global EPC, and as the global subject matter expert for master data at a major oil and gas owner/operator. Peter is currently chief executive of MRO Insyte, and chairman of KOIOS Master Data.

KOIOS Master Data is a world-leading cloud MDM solution enabling ISO 8000-compliant data exchange.

MRO Insyte is an MRO consultancy advising organizations in all aspects of materials management.

Contact us

KOIOS Master Data are experts in this field. Give us a call and find out how we can help you.

+44 (0)23 9387 7599

info@koiosmasterdata.com

Blockchain: Potential Uses – Incorporating International Standards – Part 1

Blockchain

Potential uses incorporating international standards (Part 1)

Presentation to the Industry Blockchain Expedition
Linz, Austria
26th November 2018

Introduction

This is the online version of a speech that I was asked to deliver at the ‘Industry Blockchain Expedition’ hosted in Linz, Austria. We have included the visuals and videos used to illustrate some of the concepts of blockchain and its use. This is rather a lengthy blog, so I have included a shortcut menu on the right so you can navigate to an area of interest or resume reading.

I would like to thank the organisers of this event for inviting me to speak with you today. When I received the invitation, initiated, I believe, by Paul Dietl, a contact I have been working with at SKF in Steyr, I was obliged to explain to the team that I am not a blockchain expert, although I am working on a potential use case for blockchain.

However, as I explained, one interesting feature of this use case, different from any others we had come across, was that we were incorporating international standards into the solution.

“Perfect”, said the organisers.

So here I am!

I have three aims to achieve in my talk today:

  1. To demystify blockchain;
  2. To show the role international standards will play in the growth of blockchain;
  3. To demonstrate how small businesses can find practical applications for blockchain technology and benefit from this technology.

Firstly, to give you some context: I am not an academic, I am not employed by the UK government, nor am I employed by a global corporation, although I have worked for global corporations in the past.

I am a small business owner with two businesses; the longer-established is my consultancy business, through which I help companies with their materials management issues.

A large part of my time in that business is helping organisations resolve their materials management issues, the root cause of which is frequently poor data quality.

My efforts to find appropriate tools that incorporated international data quality standards, to help solve the data quality issues my clients were facing, were frustrating, so a couple of years ago I decided to start my own software company to create the software that I felt the market needed.

I am recognised by the British Standards Institution and the International Organization for Standardization as an industrial data expert; I give up a lot of my time developing standards in that area, and I sit on two international working groups.

Following this meeting I am off to Houston for a week’s work on the oil and gas interoperability standard ISO 18101, which will be published next year.

This talk is presented from the perspective of my software company, KOIOS Master Data Limited.

Before you hear me speak on the subject, I would like to play you a short advertisement created by IBM to explain blockchain.

IBM Blockchain: The Blockchain built for smarter business

Building a Blockchain

The first block

  • The first block contains initial information;
  • This information could take the form of transactional data or master data;
  • This block represents the start of a blockchain

Blockchain was created to securely exchange transaction data, to record tangible and intangible assets, and to create an alternative to central bank-controlled currencies.

One compelling feature of blockchain is that these records are immutable: they are unchanged over time and unable to be altered.

As you saw in the video, implementations of blockchain have moved beyond alternative currencies, and are being used to record master data as well as transactional data.

A block is essentially a data record, just like an individual record in a traditional ledger.

The second block

  • The user creates the second block and it links to the first block

When a second block of data is added it is linked to the first block, creating a chain.

Blocks record the time and the sequence of transactions. Each block contains a hash key, which is a unique digital identifier.

The third block

  • The third block in the chain is created and links to the second block;
  • New blocks are always added to the latest block;
  • Blockchains store transactional data;
  • A hyperledger can contain both master data and transactional data

When a third block of data is added to the chain, it links to the second block, not the first block. All blocks are linked sequentially.
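For readers who like to see the mechanics, here is a minimal sketch, in Python, of the chaining just described: each block stores its data, the hash of the previous block, and its own hash. It is an illustration of the concept with made-up example data, not production blockchain code:

```python
import hashlib
import json
import time

def make_block(data, previous_hash):
    """Create a block: its data, a timestamp, the hash of the
    previous block, and finally its own hash over all of the above."""
    block = {
        "timestamp": time.time(),
        "data": data,
        "previous_hash": previous_hash,
    }
    # Hash the block contents; any later change to them changes this value.
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

# The first block has no predecessor, so a placeholder hash is used.
first = make_block({"item": "pump bearing", "qty": 4}, previous_hash="0" * 64)
second = make_block({"item": "mechanical seal", "qty": 2}, previous_hash=first["hash"])
third = make_block({"item": "O-ring, FKM", "qty": 10}, previous_hash=second["hash"])

chain = [first, second, third]  # new blocks always extend the latest block
```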

As I mentioned earlier, there are a number of ways to implement blockchain, and in this presentation we will be discussing examples where Hyperledger may be the appropriate technology.

Hyperledger is hosted by the Linux Foundation, an open source community, whose vision is to be the facilitator for mainstream commercial applications.

Blockchain is decentralized

  • A blockchain can be thought of in terms of transaction data storage in the same way as a database;
  • The key difference from traditional databases is that blockchain is decentralized

Another key feature of blockchain is its decentralised architecture. This decentralisation means that there is no single point of failure that would bring the network down. This is a key differentiator from the traditional single-database model, which is increasingly vulnerable in today’s world.

What is a Network Node?

  • Network nodes enable blockchain to be decentralized;
  • The role of a node is to support the network by maintaining a copy of a blockchain;
  • All participants in a private, permissioned system can be part of the network

Decentralisation is achieved by the creation of network nodes. A network node is another term for a computer that maintains a copy of the database.

What happens when a ‘Node’ is corrupted?

  • If a third party alters a part of the chain, the network may determine that the blockchain on that node is no longer the longest chain and is potentially corrupt.

I explained earlier that each block contains its own unique hash key as well as the hash key of the previous block. This architecture is designed to ensure it is impossible to insert a new block between two existing blocks, or to alter the contents of a block, without detection.

Should there be a conflict, then protocols such as Practical Byzantine Fault Tolerance (PBFT) are used as a method of conflict resolution.
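The corruption check described above can be sketched in a few lines, assuming blocks shaped like the dictionaries in the earlier sketch (each carrying a "hash" field plus the "previous_hash" of the preceding block):

```python
import hashlib
import json

def recompute_hash(block):
    # Recompute the hash over every field except the stored hash itself.
    body = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def first_corrupt_block(chain):
    """Return the index of the first corrupt block, or None if the chain is intact."""
    for i, block in enumerate(chain):
        if block["hash"] != recompute_hash(block):
            return i  # contents were altered after the hash was computed
        if i > 0 and block["previous_hash"] != chain[i - 1]["hash"]:
            return i  # the link back to the previous block is broken
    return None
```

Tampering with any block’s data changes its recomputed hash, and because each successor stores the predecessor’s original hash, the break is detectable at, or immediately after, the altered block.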

Terminology

One difficulty in understanding the topic is the bewildering array of terminology. One particular term that can cause confusion is ‘distributed’, which can lead to the misconception that because something is distributed there is therefore no overall controlling authority or owner.

This may or may not be the case — it depends on the design of the ledger. In practice, there is a broad spectrum of distributed ledger models, with different degrees of centralization and different types of access control, to suit different business needs.

These may be ‘unpermissioned’ ledgers that are open to everyone to contribute data to the ledger and cannot be owned; or ‘permissioned’ ledgers that may have one or many owners and only they can add records and verify the contents of the ledger.

In my efforts to demystify blockchain, I have already introduced a number of terms that may be unfamiliar to people new to the subject. In my standards work, terms and definitions are a vital element of the documents we produce.

In this slide pack, which will be distributed after this event, I have added an annex with explanations of some of the terms for you to study at a later date. I will also add a copy of this speech and the slides to the KOIOS website.

When blockchain is discussed, one of the areas of confusion is the term ‘distributed’, as in distributed ledger. The word distributed may imply to some people that there is no overall control or authority.

That may or may not be the case.

All distributed ledger applications are designed for a specific use case, and that use case will determine the degree of central control and other parties’ access control.
