This may sound like incoherent theoretical rambling. For that is exactly what it is. Read at your own peril.

There are a few levers for improving the efficiency of any operational setup.

Scale effects and learning effects are two of these. Scale effects are fairly well understood: as more volume is pushed through a process, the unit cost of delivering it falls, largely because the resources delivering the process are utilized more fully.

Learning effects are a little more complex.

One reason for learning effects is a reduction in information search times and costs. The first time you had to fix that broken chair, finding the carpenter, tools and supplies took the longest. The next time, you already knew the carpenter and the tools required.

Another reason for learning effects is a reduction in decision times. In general, a decision making framework involves a set of inputs, deciding actor(s), and the process or rules for making decisions. When an operating process is first set up, there is often limited clarity on the inputs required, multiple deciding actors playing interdependent roles, and even ambiguity around the process of decision making itself. Over time, inputs get standardized, the deciding actors get unified, and decision making rules get clearly determined. All of this reduces the time taken to make decisions.

In addition, there are different types of decisions that often need to be made:
a. Decisions of choice between multiple paths, or even simply between a go and a no-go.
b. Decisions of timing of effort, investment and so on.
c. Decisions of allocation of scarce resources between multiple conflicting priorities.
and so on.

Decisions take particularly long when they have to be made in the face of incomplete information, a common reality. In the initial stages of such a situation, decision makers have to rely on a certain ‘leap of faith’ in making the decision. As they receive feedback on their decisions over time, they tend to become faster and better at making them.

It is interesting to see how the availability of information plays such a key role in learning effects.

So how do you design operational processes so that learning effects can be accelerated?

For one, design to minimize search. Provide as much targeted, relevant information as possible to people, at the moment they need it. Create a knowledge repository that people can contribute their early learnings to. This is why search, collaboration and knowledge management tools are such a big hit these days.

Second, design to standardize the inputs for each decision early. Even simple checklists work well here. Simplifying the input factors themselves is a big contributor too.
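A checklist of this kind is easy to make executable. Here is a minimal sketch, where the required input fields (cost estimate, expected benefit, owner, deadline) are hypothetical examples, not fields from any real process:

```python
# A sketch of standardizing decision inputs with a checklist.
# The field names below are hypothetical, for illustration only.

REQUIRED_INPUTS = ["cost_estimate", "expected_benefit", "owner", "deadline"]

def missing_inputs(decision_request: dict) -> list:
    """Return the checklist items absent from a decision request."""
    return [field for field in REQUIRED_INPUTS if field not in decision_request]

# A request arrives with only some of the standardized inputs filled in:
request = {"cost_estimate": 12000, "owner": "ops-team"}
print(missing_inputs(request))  # → ['expected_benefit', 'deadline']
```

The point is not the code but the discipline: the decision cannot start until the checklist comes back empty, which keeps each decision from rediscovering its inputs from scratch.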

Third, reduce the number of decision making actors. Today’s governmental processes are a great illustration of how things can go wrong here. Requiring five people to make a simple decision is a sure sign of bureaucracy.

The best approach is to codify roles and automate them to the extent possible. Rules, decision engines and straight-through processing are a growing reality in the operational improvement field.
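To make the idea concrete, here is a minimal sketch of a decision codified as rules, so routine cases run straight through and only the rest escalate to a human. The approval thresholds and field names are invented for illustration:

```python
# A sketch of a tiny rule engine for straight-through processing.
# Thresholds and fields are hypothetical, for illustration only.

RULES = [
    # (condition, outcome) pairs, checked in order
    (lambda r: r["amount"] <= 1000, "auto-approve"),
    (lambda r: r["amount"] <= 10000 and r["vendor_approved"], "auto-approve"),
]

def decide(request: dict) -> str:
    for condition, outcome in RULES:
        if condition(request):
            return outcome
    return "escalate"  # fall back to a human decision maker

print(decide({"amount": 500, "vendor_approved": False}))   # → auto-approve
print(decide({"amount": 50000, "vendor_approved": True}))  # → escalate
```

Once the rules are explicit like this, the "deciding actor" for routine cases is the rule set itself, and humans only see the exceptions.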

Finally, one needs to create a feedback loop for decision makers to learn from their leaps of faith. This is a commonly missed factor in most operations, yet feedback loops are essential to learning and to improving the speed and quality of decision making. Creating a good feedback loop requires a few elements: a metric for measuring the quality of process outputs and decisions, a system for communicating it back to decision makers, and aligned incentives. I am yet to see a standardized way of doing this well, though.
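The first two elements, a quality metric and a channel back to the decision maker, can be sketched very simply. The choice of rolling accuracy over a fixed window is one hypothetical metric among many, not a recommendation:

```python
# A sketch of a feedback loop: score each decision outcome and feed a
# running quality metric back to the decision maker. The metric here
# (rolling share of correct decisions) is one hypothetical choice.

from collections import deque

class DecisionFeedback:
    def __init__(self, window: int = 50):
        self.outcomes = deque(maxlen=window)  # most recent outcomes only

    def record(self, was_correct: bool) -> None:
        self.outcomes.append(was_correct)

    def quality(self) -> float:
        """The metric communicated back to the decision maker."""
        if not self.outcomes:
            return 0.0
        return sum(self.outcomes) / len(self.outcomes)

fb = DecisionFeedback(window=3)
for outcome in (True, False, True):
    fb.record(outcome)
print(round(fb.quality(), 2))  # → 0.67
```

The third element, aligned incentives, is organizational rather than technical: the metric only drives learning if decision makers actually gain from improving it.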

Adieu.
