Dodgy data dulls your senses and leads to poor decision-making. Why? The purpose of your information system is to help you do two things: make sense of what is happening in your environment and support effective decision-making.
Every transaction and every bit of data gathered from sensors gives a unique record, at any given moment, of the state of play of your enterprise. Sometimes what happens next is automated; sometimes it waits for a person to act, based on a process, a business rule, analysis or intuition.
For illustration, let’s look at a simple slice through an organisation, from CEO to the shop floor:
– The CEO presses an approval button for the establishment of a contract for the expansion project, and the workflow opens the floodgates to the raising of purchase orders.
– Engineering produces bills of material from the computer model of the project, collates them according to when they are due and how long their delivery will take, and then further batches them by vendor.
– Supply, under the umbrella of the contract, takes care of procurement for all the products and services the engineering project team has itemised. Finance keeps score of how much has been spent against what has been budgeted and approved.
– An RFID tag passes across a reader and inventory is recorded as received into a warehouse. The finance system raises an asset on the balance sheet and a matching liability against a vendor.
– The contractor engaged in helping out in the warehouse swipes in and immediately starts counting up the dollars earned for hours worked.
– The project coordinator requisitions a part from the warehouse as the schedule indicates it’s going to be needed the following day.
– The fitter calls over the rigger assigned by the system, and both of them go to fit the item according to the drawings they have from the engineering model. Once done, they scan the barcode on the item which updates the model, along with the finance, maintenance and work management systems.
– Operations turns some dials and flicks some switches to start the process, then monitors an array of simulated dials displayed on a monitor.
– A temperature sensor detects that the safe set-point has been exceeded and a signal goes to open a valve to relieve pressure until the system can once again operate within limits.
– The system doesn’t return to a safe operating level, so the operator shuts it down. At the same time, a manager tasks a planner with expediting delivery of a spare, as well as assembling an emergency crew to get it fitted.
– All the while, a sensor monitors the production volume and sends the results to the reporting system. Everyone knows the CEO has a particular interest in those production numbers, using them to determine whether or not the business is going to meet its goal.
Now, that’s a pretty simple scenario outlined above. It’s just a thin slice through a hypothetical day in a typical industrial environment. But what’s going on there besides my assertion in the first paragraph? Isn’t it all about sense-making and decision-making?
When all those systems were designed, built, tested and deployed, there was likely at least a business analyst, a developer and a business owner, amongst others, scoping out requirements based on business needs. Someone from the team would have been in charge of the data, determining what fields went in which tables. They’d be thinking about how the master data would be used with the transaction data to make sense of the processes the information system was managing. In the team’s scope were, for example, the process flows, the business rules, security settings, permissions as to who could create, read, update and delete records. You get the picture.
The system goes live, and a catalogue of errors cascades through the organisation:
– The wrong account is attached to the CEO’s press of the button, and all the resulting financial data goes walkabout.
– The engineers were in a rush to get the bills of material out and thought it unimportant to complete the delivery fields diligently, with the result that everything had a lead time of a week.
– The supply team was so busy negotiating the master contracts and raising purchase orders with their vendors that, even though they spotted the error on the lead times, they did what was expedient: increase them universally by a week.
– The materials managers were given a spreadsheet by one of the vendors which had the wrong mapping of the RFID fields. How were they to know they were capturing a code for a pump when the description was for a motor?
– The warehouse contractor swipes in, but the learning management system has him registered as a coded welder. For every hour he works, the system books it out at the stored rate for a coded welder, which is twice what it should be. And although the guy works at the top of the warehouse racking, 10m above ground, the system fails to note that his ‘work from heights’ certification needs renewal.
– The schedule has not been updated in a week, and the part requisitioned by the coordinator should actually have been there two days earlier. An expeditor is sent to the stores in the hope of getting it back in time for the work to be done during the day shift.
– The fitter and rigger wait in the crib until they’re notified that the part has been delivered to the work location. When they get there, they can’t quite believe that a pump has been provided instead of a motor.
– Finally, the right piece of equipment is fitted, and the ramp-up sequence can start. But the commissioning team used the wrong specs to calibrate the IoT sensors, so the system had to be shut down; only later do they learn that the instrumentation was giving a false negative.
– The false negative doesn’t prevent the manager from tasking the planner with expediting the delivery of a spare and assembling an emergency crew to get it fitted. They don’t find out about the perfectly good, but supposedly faulty, piece of equipment until it’s too late.
The CEO, meanwhile, sitting way above this comedy of errors, cannot have any real sense of what is going on in the business. Therefore any decision they make would be about as effective as the throw of a die at answering a multiple-choice exam question.
Processes and data seem to have a particular propensity for decay—it’s as if the gods maliciously choose to configure their quotient of entropy such that they cause the most harm.
There are two remedies for the problem.
The first involves leadership around the reason why it is essential to maintain true data. When you consider the enormous amount of effort people spend doing their work, wouldn’t it be sensible to ensure that as little of that effort as possible is wasted on correcting the effects of bad data? Can you as a leader clearly articulate the opportunity cost of the wasted time, not only on business outcomes, but also on engagement, motivation and a sense of accomplishment?
The second, which can only be achieved once the first is done, is to commit to such a high level of operating discipline that these effects of false data become increasingly rare and less consequential.
In one of Eli Goldratt’s lesser-read books, The Haystack Syndrome, the fascinating introductory chapter discusses the difference between information and data: specifically, how to find the needle of information in the haystack of data. Since its publication in 1990, the book’s subtitle, ‘sifting information out of the ocean of data’, has, besides mixing metaphors, become even more relevant in our age of big data and machine learning.
‘Information,’ wrote Goldratt, ‘is the answer to the question asked of the data.’ You have to ask good questions. Crucially, too, the data has to be reliable. From an operational perspective, I’ve found one need only ask about six measures:
• How much throughput is the system generating?
• What operating expense are we consuming to generate that throughput?
• What investment have we made in generating the profit arising from the difference between the throughput and the operating expense?
• How long is what the system’s doing going to take—that is, what’s the lead time or turn-around time?
• How reliable is system performance—that is, what is delivery to promise?
• What quality does the system produce—that is, how much product or service is either rejected or reworked?
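The first three measures combine into Goldratt’s throughput-accounting relations: net profit is throughput minus operating expense, and return on investment is that profit divided by the investment that generated it. Here is a minimal sketch of the arithmetic; all figures are hypothetical, chosen purely for illustration.

```python
def net_profit(throughput, operating_expense):
    """Net profit = throughput minus operating expense."""
    return throughput - operating_expense

def return_on_investment(throughput, operating_expense, investment):
    """ROI = net profit divided by the investment that generated it."""
    return net_profit(throughput, operating_expense) / investment

# Hypothetical annual figures for a single system:
throughput = 1_200_000        # revenue minus truly variable costs
operating_expense = 900_000   # money spent turning investment into throughput
investment = 2_000_000        # money tied up in the system (inventory, plant)

print(net_profit(throughput, operating_expense))                         # 300000
print(return_on_investment(throughput, operating_expense, investment))   # 0.15
```

Lead time, delivery to promise and quality then tell you whether that throughput is being generated predictably, or propped up by expediting and rework.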
Maintaining true data means having a reliable system of record that can provide timely answers to these questions. After all, the purpose of information is to assist in sensemaking within and across the enterprise. Having a reliable system of record ensures that we use the best possible sensemaking information to support effective decision-making.
Understanding and insight come from knowledge. We build knowledge from information. Information is the answer to the question you ask of the data.
If you want better understanding and insight, maintain true data.
This article is part of our series: Five commandments for high-performance execution
Part 1: Maintain True Data
Part 2: Work Fully Kitted
Part 3: Control Work Release
Part 4: Resolve Issues Rapidly
Part 5: Act by Priority
The change from standard thinking to Theory of Constraints (TOC) is both profound and exhilarating. To make it both fun and memorable, we use a business simulation we call The Right Stuff Workshop.
We’d love to run it with you. To learn more: