Missing the Forest for the Trees

Recently, a manager for a global mining operation explained to me that the organization was paralyzed by the drain on time and resources associated with over 30 concurrent performance improvement initiatives. Apparently, the continuing pressure on commodity prices and margins had led to a panic, generating a cacophony of competing improvement initiatives. The manager referred to it as an “improvement initiative overload.”

There was no time to do or think about anything else. Managers were running from pillar to post trying to serve a dozen competing change program schedules. These internal resources were matched by dozens of senior external contractors and consultants, each of whom required personal liaison and data. The activity was frenetic and the cumulative workload largely uncoordinated across functions.

Several initiatives were highly capital intensive. These required almost $200M in invested capital. Other initiatives were largely expense driven and smaller in scope. The cumulative effect was to add tens of millions of dollars to expenses each year.

Each initiative claimed success, based on a quantitative evaluation of one local performance measure or another. However, enterprise revenue, expenses, costs, and profit had yet to register tangible improvement. Initiatives targeted everything from fragmentation and dilution to recoveries, capacity utilization, and labor productivity.

There was a palpable tension developing between functional and support managers regarding which initiatives were worthy of participation and resources.

How could this dislocation occur?

Local vs. Enterprise Optimization

Local Optimization

When we look at individual operating assets in a mining operation, it is easy to see them as a collection of ‘stand-alone’ cost and production centers, removed from the context of the overall enterprise process. Functional managers tend to take a local perspective. That is a legacy of an organizational structure focused on functional asset management and related asset performance.

For example, the manager of a truck and shovel operation is likely to focus on the local assets, expenses, and costs under his or her control.

All else held constant, one way of reducing costs in a truck and shovel operation is to lift productivity by various means, thereby improving production throughput. This improves the recovery of ‘sunk’ fixed expenses and reduces both variable and fixed unit costs. Alternatively, if there were an opportunity to release assets from the balance sheet by rationalizing excess capacity entirely, fixed unit costs would fall further still.
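To make that local logic concrete, here is a minimal sketch in Python; every figure in it is an invented assumption, chosen only to illustrate the arithmetic:

```python
# Invented figures for illustration: spreading the same fixed expenses
# over more tonnes lowers the unit cost.
fixed_cost = 10_000_000        # annual fixed expenses, $ (assumed)
variable_cost_per_tonne = 4.0  # fuel, parts, consumables, $/t (assumed)

def unit_cost(tonnes_moved: float) -> float:
    """Total cost per tonne at a given annual throughput."""
    return fixed_cost / tonnes_moved + variable_cost_per_tonne

print(f"at 5.0 Mt/yr: ${unit_cost(5_000_000):.2f}/t")  # $6.00/t
print(f"at 6.0 Mt/yr: ${unit_cost(6_000_000):.2f}/t")  # $5.67/t after a 20% lift
```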

Keep in mind, these outcomes would reduce costs only within the truck and shovel operation.

This same logic could apply to any mining or processing function. The notion is simply that increased productivity results in the ability to reduce unit costs and lift profit by either increasing throughput or rationalizing capacity.

Enterprise Optimization

Moving from the local view of the truck and shovel operation, let’s say we learn that the primary crusher is the overall constraint (internal) of the enterprise production process. It has less capacity than the truck and shovel operation and less capacity than any other process step in the overall process.

Looking back on the local optimization initiatives within the truck and shovel operation, we recall there were two possible outcomes arising from improving productivity locally. The productivity initiatives would result in:

  • the ability to increase throughput using the same assets; or

  • the ability to remove excess capacity.

One or the other option should result in an ability to reduce expenses…shouldn’t it?

Let’s look at each option in the context of the truck and shovel operation being merely one step in a dozen steps from drilling and blasting to refining and smelting.

First, what use can be made of newly found capacity in the truck and shovel operation to increase throughput in the overall enterprise process? The potential additional throughput cannot be processed at the constraint, which we now understand to be the primary crusher. Nothing about the productivity improvement in the truck and shovel operation has changed the throughput limit at the constraint.

Therefore, improved productivity would not result in any additional throughput for the mining operation as a whole, or any related financial benefit. It may only result in a growing ore stockpile, that is, increased work-in-process (WIP) inventory. No new throughput means no resulting cost reduction.
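A minimal sketch, using hypothetical capacities, shows why extra capacity at a non-constraint cannot lift enterprise throughput:

```python
# Hypothetical capacities (t/h) for a simplified enterprise process chain.
capacities = {
    "drill_and_blast":  1_400,
    "truck_and_shovel": 1_200,
    "primary_crusher":  1_000,   # the enterprise constraint
    "mill":             1_150,
    "refinery":         1_300,
}

def enterprise_throughput(caps: dict) -> int:
    """A dependent chain can move no more than its slowest step."""
    return min(caps.values())

print(enterprise_throughput(capacities))   # 1000 t/h

# A 25% productivity lift at the non-constraint truck and shovel step...
capacities["truck_and_shovel"] = int(1_200 * 1.25)
print(enterprise_throughput(capacities))   # still 1000 t/h: the crusher is untouched
```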

So the imagined reduction of local fixed and variable unit costs based on increasing local throughput is an illusion because it does nothing to lift throughput at the process constraint.

If we examine the remaining option to remove excess capacity to cut costs, we understand we can ‘park up’ or sell the excess trucks and shovels. Surely, that would realize a reduction in direct labor and possibly direct salaried costs? We would see lower variable unit cost due to a reduction in fuel, maintenance, parts consumption, etc. Better still, we could sell these excess assets and produce some income.

Maybe not.

Protecting the Enterprise Constraint and Protective Capacity

Many productivity initiatives begin with the notion that resource requirements at each stage of a process may be calculated by simply matching the theoretical production capacities of non-constraints to the capacity of the process constraint. Then, they compensate for historical performance losses along the process by adding a buffer or ‘fat factor.’

This is usually the way that traditional Industrial Engineering (IE) approaches and resource planners ‘balance’ a production process. Continuous improvement programs then set about lifting the average performance level at every step to mitigate the need for buffering and reduce costs.
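A short sketch, with invented numbers, of the sizing rule this approach implies:

```python
# A sketch of traditional 'balanced line' sizing with invented numbers:
# match each step to the constraint, then add a flat 'fat factor'.
constraint_capacity = 1_000   # t/h at the primary crusher (assumed)
fat_factor = 0.10             # 10% buffer for average historical losses (assumed)

planned_step_capacity = constraint_capacity * (1 + fat_factor)
print(f"planned non-constraint capacity: {planned_step_capacity:.0f} t/h")
# Note: this sizes each step for its *average* loss. It says nothing about
# how often the step's worst deviations will still starve the crusher.
```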

Unfortunately, the world of local optimization and average performances comes to a messy end when we consider that any sufficiently large deviation in the performance of the truck and shovel operation (an unusually low availability for key equipment, for example) will act to starve the primary crusher of throughput. A measure of average performance at any process step or asset does not tell us much about the size of performance deviations.

In fact, it is not the size of performance deviations relative to the mean performance of the local asset or process step that matters. It is the deviation relative to the throughput capacity of the subsequent process step and, ultimately, the process constraint.

Managers in the truck and shovel operation may have calculated historical OEE fluctuations to determine the capacity buffer required to guarantee the average output of the truck and shovel fleet. However, these ‘average performances’ obscure the dependency between sequential process steps: maximum performance deviations at one step will act periodically to starve the subsequent step of throughput.

High performance potential at the crusher is periodically choked by a lack of input from the previous process step (this can all happen at an interval of minutes), while low performance at the crusher deals out the same result to process steps down the line. In other words, the maximum deviations along the production process establish new variables in subsequent steps which, in turn, establish new maximum performance deviations.
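A small Monte Carlo sketch makes the effect visible. The capacities, delivery distributions, and the assumption of no smoothing stockpile are all invented for illustration:

```python
import random
import statistics

random.seed(7)

CRUSHER_CAPACITY = 1_000   # t/h at the enterprise constraint (invented)
HOURS = 10_000             # simulated operating hours

def average_crushed(mean_delivery: float, sigma: float) -> float:
    """Average t/h actually crushed when hourly deliveries follow
    Normal(mean_delivery, sigma), with no stockpile to smooth the feed."""
    crushed = []
    for _ in range(HOURS):
        delivered = max(0.0, random.gauss(mean_delivery, sigma))
        # The crusher can never process more than its capacity.
        crushed.append(min(delivered, CRUSHER_CAPACITY))
    return statistics.mean(crushed)

# Two fleets with the SAME average delivery but different deviations:
print(f"steady fleet  (sigma=50):  {average_crushed(1_100, 50):7.1f} t/h")
print(f"erratic fleet (sigma=400): {average_crushed(1_100, 400):7.1f} t/h")
# Tonnes lost in the erratic fleet's bad hours are never recovered in its
# good hours, even though both fleets average 1,100 t/h delivered.
```

Both fleets have identical average performance; only the size of their deviations differs, and that alone is enough to choke the constraint.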

Controlling these relationships is global, or enterprise, optimization.

The Cost of Lost Throughput

Lost throughput at the constraint cannot be recovered. Worse, every unit of throughput lost at the constraint will result in the failure to recover a unit share of fixed cost along the entire length of the enterprise production process.

If we add the total fixed costs for the entire enterprise production process and divide it by the total units passing through the constraint, we can calculate the amount of unrecovered cost every time one unit of throughput is lost at the constraint.
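Using invented figures, that calculation looks like this:

```python
# Invented figures: the fixed cost left unrecovered by one tonne
# lost at the constraint.
total_enterprise_fixed_cost = 400_000_000  # $/yr across the whole chain (assumed)
annual_constraint_throughput = 8_000_000   # t/yr through the primary crusher (assumed)

unrecovered_per_tonne = total_enterprise_fixed_cost / annual_constraint_throughput
print(f"${unrecovered_per_tonne:.2f} of fixed cost is unrecovered per tonne lost")
# -> $50.00/t, carried across the entire length of the production process.
```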

These losses dwarf the financial gains secured by reducing costs locally. So why risk squeezing throughput at the enterprise constraint to grab limited local cost savings, especially when we have not calculated the amount of protective capacity required to offset the starving effect of local maximum deviations?

Most functional managers do not consider how their maximum local performance deviations may damage financial performance of the enterprise.

They generally are not seeking to determine local capacity with the goal of protecting throughput at the constraint by offsetting or buffering such maximum deviations. Typically, none of their performance measures address the problem because that would require an enterprise perspective.

Instead, their well-intentioned local initiatives are aimed at banking local cost reductions. Inadvertently, their initiatives may blindly reduce the local capacity currently acting to buffer the effect of local maximum deviations. These buffers are often interpreted as excess capacity.

The Agenda and Priorities for Continuous Improvement

We do not need to optimize average local performance to overcome this problem.

We do need to selectively buffer throughput losses caused by the worst of the maximum performance variations. We have established that this does not necessarily mean we need to improve average OEE outcomes at all process steps.

The worst causes of lost throughput are often located at process steps which have better average performance outcomes (higher OEE, for example). It is the size of performance deviations in one process step relative to the capacity of dependent process steps that matters, as the sketch below illustrates. All continuous improvement initiatives should be prioritized and funded relative to this criterion. All optimization initiatives should serve the goal of optimizing throughput at the constraint. As offending causes of performance loss are repaired, the constraint of the process certainly may move.
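As a purely illustrative sketch, with an invented ‘starvation exposure’ metric and made-up data, this is what ranking initiatives on deviations relative to downstream capacity, rather than on average OEE, might look like:

```python
# Purely illustrative: rank process steps by how far their WORST observed
# output dips below the capacity of the next step, not by average OEE.
# The 'starvation exposure' metric and all data are invented for this sketch.
steps = [
    # (name, mean output t/h, worst hourly output t/h, next-step capacity t/h)
    ("drill_and_blast",  1_350,   900, 1_250),
    ("truck_and_shovel", 1_200,   700, 1_000),   # the crusher is next
    ("mill_feed",        1_100, 1_050, 1_150),
]

def starvation_exposure(worst_output: int, downstream_capacity: int) -> int:
    """Tonnes/hour of downstream capacity left idle at this step's worst."""
    return max(0, downstream_capacity - worst_output)

for name, mean, worst, cap in sorted(
        steps, key=lambda s: starvation_exposure(s[2], s[3]), reverse=True):
    print(f"{name}: mean {mean} t/h, exposes {starvation_exposure(worst, cap)} t/h")
# Note that drill_and_blast has the BEST mean output yet the WORST exposure.
```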

Basing decision making for improvement initiatives and capacity levels on lifting average OEE outcomes at each local process step is a clear symptom of the local optimization mentality. Often this is accompanied by references to the standard deviation of performance outcomes at a process step. That is the local optimization disease in play.

The magnitude of the local performance deviation relative to the capacity of subsequent process steps and the enterprise constraint is the performance that needs to be managed. All initiatives should be subordinated to the goal of lifting and protecting the enterprise constraint.

Therefore, as I explained to the manager I referred to in the opening paragraph:

“The problem isn’t that you are struggling under the burden of an ‘improvement initiative overload.’ Rather, it is that most, if not all, of your improvement initiatives are poorly conceived exercises in local optimization. Many are doomed not only to fail, but to reduce throughput, increase costs, and generate unnecessary inventory and delays.

You are designing improvement initiatives which target local optimization to solve problems you unwittingly created by targeting local optimization.”

Acknowledgement: Some ideas discussed here were originally expressed in various books written by Eliyahu Goldratt et al., notably ‘The Goal’ (first published in 1984). Goldratt introduced the Theory of Constraints. We seek to build upon some of the principles introduced by Goldratt in combination with our own approaches, and through practical applications in successful implementation programs.

 

Productivity Step Change (PSC) is a global Management Consulting Group based in the US dedicated to maximizing Capacity Utilization, improving Return on Invested Capital (ROIC), and increasing productivity for its clients across industry. Our enterprise-level, data-led, cross-functional business analysis typically requires 4-6 calendar weeks and is designed to provide evidence of your opportunities for overall growth. Please contact us if you would like to schedule time for a discussion and presentation.
