What’s Wrong with Best Practices?

That is a question I often get from people who know that I keep away from prescriptive approaches. I've been giving some quick responses, but it would be better if next time I could point to a more elaborate answer. And here it is.

While I have a lot of sympathy for those who object to 'Best Practice' as a name, and even more for those who object to it as a claim, my uneasiness is somewhat different. That is what I'll focus on here.

Some best practices are useful. In fact, most mature and well-applied best practices for carrying out a technical task, from taking a blood test, painting a wall or repairing an engine, to building a factory or a ship, are indeed valuable (as long as they don’t suffocate innovation).

The real problem is when best practices are applied to people and social systems. I call this a ‘problem’, but it is in fact a huge opportunity for many. Most contemporary non-fiction books, especially management and self-help texts, seize this opportunity extremely well. It’s not easy to find a best seller in this category or a popular article that doesn’t provide some sort of prescription and advice, often numbered, on how to achieve or avoid something. Maybe it is a best practice for best sellers. Let’s give it a try then:

 

Four Reasons Why You Should Be Cautious When Applying Best Practices:

 

1. Correlations.

How do Best Practices come about? Some individuals or organisations become known (or are later made known by the actions of the best practice discoverers and proponents) as successful according to some norms. Let's call these individuals or organisations 'best practice pioneers'. Then one or more observers, the 'best practice discoverers', study the pioneers to find out what made them successful. The discoverer first takes certain effects and then selects, by identifying commonalities, what she or he believes were the causes. That is followed by a generalisation of the commonalities, from which point they begin their life as prescriptions, regardless of whether they are called 'best practices', 'methodologies', 'techniques', 'recipes', 'templates', or something else. Later, they are tested and, based on feedback, reappear in more mature forms and variations. In some cases, they are even supplied with scalability criteria and conditions for successful application. The successes and failures of those who apply them give birth to Best Practices on applying Best Practices.

The problem is that the discoverers select common patterns among the observed successful pioneers and infer a causal relation between these commonalities and the very criteria by which that set of pioneers was selected in the first place. Such a correlation then guides what the discoverer pays attention to: every pattern that supports the hypothesis built on the selected commonalities is preferred over those that don't.
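Here is a minimal sketch of that selection effect. The firms, practices and success scores are entirely made up, and the "success" is pure noise; the point is only that, with enough candidate commonalities and a small enough set of pioneers, some practices will always look like causes.

```python
import random

random.seed(42)

N_FIRMS, N_PRACTICES, N_PIONEERS = 200, 50, 10

# Each hypothetical firm adopts each "practice" with probability 0.5;
# its success score is pure noise, unrelated to any practice.
firms = [
    {
        "practices": [random.random() < 0.5 for _ in range(N_PRACTICES)],
        "success": random.gauss(0, 1),
    }
    for _ in range(N_FIRMS)
]

# The "discoverer" looks only at the most successful firms ...
pioneers = sorted(firms, key=lambda f: f["success"], reverse=True)[:N_PIONEERS]

# ... and reports the practices most of them have in common.
shares = [
    (sum(f["practices"][p] for f in pioneers) / N_PIONEERS, p)
    for p in range(N_PRACTICES)
]
for share, p in sorted(shares, reverse=True)[:3]:
    # Typically 80-90% of the pioneers share these practices -- by pure chance.
    print(f"practice #{p}: adopted by {share:.0%} of the pioneers")
```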

2. The risk of over-simplification.

Best practices help us deal with external stimuli, when they are too many to handle, by prompting which ones to pay attention to and how to react to them. A situation can only be dealt with when the variety of available responses is at least as great as the variety of the stimuli, relative to a given set of goals (Ashby's law of requisite variety). In other words, best practices are tools for reducing external variety. But not only that: they also provide means for amplifying internal variety in a particular way, coupled to those stimuli that you are advised to pay attention to. So if the assumption about which stimuli matter is wrong, and it often is, the external variety is not reduced, nor is the internal variety amplified.
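A toy illustration of the variety argument, with invented numbers: a regulator can only hold the outcome steady for disturbances it has a matching response to, so a practice that covers too narrow or the wrong set of stimuli leaves most situations unregulated.

```python
import random

random.seed(1)

def regulated_fraction(n_disturbances: int, n_responses: int, trials: int = 10_000) -> float:
    """Toy model of Ashby's law: each kind of disturbance needs its own
    matching response, and the regulator only has responses for the
    stimuli the practice told it to pay attention to (the first
    n_responses kinds)."""
    covered = set(range(n_responses))
    hits = sum(random.randrange(n_disturbances) in covered for _ in range(trials))
    return hits / trials

# 20 kinds of disturbance, a practice covering responses to only 8 of them:
print(regulated_fraction(20, 8))    # ~0.40 of situations handled
print(regulated_fraction(20, 20))   # ~1.00 when variety matches variety
```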

3. Assumptions about the application context.

I used to be a practitioner of PRINCE2. What I still appreciate are a few smart techniques and, in fact, the name itself: PRojects IN Controlled Environments. The name is the important disclaimer I'm missing in most methodologies and other types of best practices: they only work in controlled environments. They work when most of the conditions at design time, if you allow me the IT jargon, are unchanged at run time. This is rarely the case and is increasingly less so. This brings another interesting phenomenon: the same conditions that make the world less predictable also help quickly productise and spread best practices. They come with better marketing and with more authority, in a world in which less of what has happened could prepare you for what will.

4. The habits created by Best Practices.

The worst is when people hide behind the authority of best practices or their proponents. If not that, best practices create the habit of first looking for a best practice instead of thinking. And then there is the opportunity cost: the more time people spend on learning best practices, the less time they have for developing their senses for the detection of weak signals and their capabilities for new responses.

In summary, if you are sure that a certain best practice is useful, that it is not based on wrong inference, that it does not lead you to dismiss important factors, that the situation you are in is not complex, and that it doesn't weaken your resilience, then go ahead and use it.

 

Requisite Inefficiency

In his latest article, Ancient Wisdom teaches Business Processes, Keith Swenson reflects on an interesting story told by Jared Diamond. In short, the potato farmers in Peru used to scatter their strips of land. They kept them that way instead of amalgamating them, which would seem like the most reasonable thing to do. This turned out to be a smart risk-mitigating strategy. Because the strips are scattered, the risk from various hazards is spread, and the probability of getting something from the owned land every year is higher.

I see that story as yet another manifestation of Ashby's law of requisite variety. The environment is very complex and, to deal with it somehow, we either find a way to reduce that variety in view of a particular objective, or try to increase ours. In a farming setting, an example of variety reduction would be building a greenhouse. The story of the Peruvian farmers is a good example of the opposite strategy: increasing the variety of the farmers' own system. The story shows another interesting thing: it is an example of a way to deal with oscillation. The farmers limited the damage of the lows by giving up the potential benefits of the highs.
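A back-of-the-envelope simulation of the scattering strategy, with entirely made-up failure rates: the expected yield is the same either way, but the chance of a year with almost nothing drops sharply when the harvest is spread over independent strips.

```python
import random

random.seed(7)

def yearly_yield(n_strips: int) -> float:
    """Total yield for one year with the land split into n independent strips.
    Each strip fails completely (frost, pests, flood ...) with probability 0.3,
    otherwise it delivers its full share."""
    share = 1.0 / n_strips
    return sum(share for _ in range(n_strips) if random.random() > 0.3)

def famine_risk(n_strips: int, years: int = 100_000) -> float:
    """Probability that a whole year yields less than 20% of a full harvest."""
    return sum(yearly_yield(n_strips) < 0.2 for _ in range(years)) / years

print(famine_risk(1))    # one consolidated field: ~0.3
print(famine_risk(10))   # ten scattered strips:  ~0.0001
```

In both cases the expected yield is 0.7 of a full harvest; scattering trades the highs of the good years for protection against the catastrophic lows.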

Back to Keith Swenson's post: after bringing this lesson into the area of business processes, he concludes:

Efficiency is not uniformity.  Instead, don’t worry about enforcing a best practice, but instead attempt only to identify and eliminate “worst practices”

I fully agree about best practices. The enforcement of best practices is what one can find in three of every four books on management and in nearly every organisation today. This may indeed increase the success rate in predictable circumstances, but it decreases resilience and it just doesn't work when the uncertainty of the environment is high.

I’m not quite sure about the other advice: “but instead attempt only to identify and eliminate “worst practices”. Here’s why I’m uncomfortable with this statement:

1. To identify and eliminate “worst practice” is a best practice itself.

2. To spot an anti-pattern, label it as a "worst practice" and eliminate it might seem the reasonable thing to do today. But what about tomorrow? Will this "worst practice" still be an anti-pattern in the new circumstances of tomorrow? Or will it be something we need in order to deal with the change?

Is a certain amount of bad practice necessarily unhealthy?

It seems quite the opposite. Some bad practice is not just nice to have, it is essential for viability. I’ll not be able to put it better than Stafford Beer:

Error, controlled to a reasonable level, is not the absolute enemy we have been taught to think it. On the contrary, it is a precondition for survival. […] The flirtation with error keeps the algedonic feedbacks toned up and ready to recognise the need for change.

Stafford Beer, Brain of the Firm (1972)

I prefer to call this "reasonable level" of error requisite inefficiency. Where can we see it? In most, if not all, complex adaptive systems. A handy example is the way the immune system works in humans and other animals that have the so-called adaptive immune system (AIS).

The main agents of the AIS are the T and B lymphocytes. They are produced by stem cells in the bone marrow, and they account for 20-40% of the white blood cells, which makes about 2 trillion cells. The way the AIS works is fascinating, but for the topic of requisite inefficiency what is interesting is the reproduction of the B-cells.

The B-cells recognise the pathogen molecules, the "antigens", depending on how well the shape of their receptor molecules matches that of the antigens. The better the match, the better the chance of the pathogen being recognised as an antigen. And when that is the case, the antigens are "marked" for destruction. Then follows a process in which the T-cells play an important role.

Since we keep talking about the complexity and uncertainty of the environment, pathogens seem a very good model of it.

The best material model of a cat is another, or preferably the same, cat.

N. Wiener, A. Rosenblueth, Philosophy of Science (1945)

What is the main problem of the immune system? It cannot predict which pathogens will invade the body and prepare accordingly. How does it solve this? By generating enormous diversity. Yes, Ashby's law again. The way this variety is generated is interesting in itself, for the capability of the cells' DNA to carry out random algorithms. But let's not digress.

The great diversity may increase the chance of absorbing that of the pathogens, but a match in numbers is also needed in order to have requisite variety. (This is why I find variety, in cybernetic terms, such a good measure. It is relative. And it can account for both the number of types and the quantities of each type.) If the number of matches between B-cell receptors and antigens is enough to register an "attack", the B-cells get activated by the T-cells and start to release antibodies. These successful B-cells then go to a lymph node where they start to reproduce rapidly. This is a reinforcing loop in which the mutations that match the antigens well go to kill invaders and then return to the lymph nodes to reproduce. Those mutations that don't match antigens die.

That is really efficient and effective. But at the same time, the random generation of new lymphocytes with diverse shapes continues. Which is quite inefficient when you think of it. Most of them are never used. Just wasted. Until some happen to have receptors that are a good match for a new invader. And this is how such an "inefficiency" is a precondition for survival. It should not just exist but be sufficient. The body does not work with what's probable. It's ready for what's possible.
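Here is a crude toy simulation of that trade-off. The shapes, rates and population sizes are invented and have nothing immunological about them; the point is only that a repertoire relying solely on cloning its past matchers collapses when new pathogen shapes appear, while one that keeps "wastefully" generating random detectors stays covered.

```python
import random

random.seed(3)

SHAPE_SPACE = 1000   # possible antigen "shapes"
MATCH_RADIUS = 5     # a receptor within this distance recognises the antigen

def run(waves: int, random_births_per_wave: int) -> int:
    """Return how many of the pathogen waves were cleared.

    Toy dynamics: detectors die at random, detectors that matched the
    invader are cloned, and random_births_per_wave brand-new detectors
    with random shapes are added each wave -- the 'requisite inefficiency'.
    """
    detectors = [random.randrange(SHAPE_SPACE) for _ in range(300)]
    cleared = 0
    for _ in range(waves):
        antigen = random.randrange(SHAPE_SPACE)  # an unpredictable invader
        matchers = [d for d in detectors if abs(d - antigen) <= MATCH_RADIUS]
        if matchers:
            cleared += 1
            detectors += matchers * 5            # clonal expansion of what worked
        detectors = [d for d in detectors if random.random() > 0.2]   # attrition
        detectors += [random.randrange(SHAPE_SPACE) for _ in range(random_births_per_wave)]
        detectors = detectors[-2000:]            # keep the population bounded
    return cleared

print(run(200, random_births_per_wave=0))     # repertoire narrows; most new invaders slip through
print(run(200, random_births_per_wave=100))   # "wasted" diversity keeps the coverage broad
```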

(Note: This is the mainstream explanation of how the immune system works. There are other theories, and some of them – this one for example – I find way more convincing, especially when it comes to the self/non-self problem. However, in all explanations the phenomenon of requisite inefficiency is equally prominent.)

The immune system is not the only complex system exhibiting requisite inefficiency. The brain, swarms and networks are just as good examples. Given the current level of study, the easiest systems to see it in are ant colonies.

When an ant finds food, it starts to leave a trail of pheromones. When another ant encounters the trail, it follows it. If it reaches the food, the second ant returns to the nest, leaving a trail as well. The same reinforcing loop we saw with the B-cells can be seen with ants: the more trails there are, the more likely it is that a greater number of ants will step on them, follow them, leave more pheromones, attract more ants, and so on. And again, at the same time there is always a sufficient number of ants moving randomly, which can encounter new locations with food.

Requisite inefficiency is equally important for social systems. Dave Snowden gives a nice example, coincidentally again with farmers, but in this case ones experiencing frequent floods. Their strategy was to build their houses not in a way that prevents the water from coming in, but in a way that lets the water quickly get out. He calls that "architecting for resilience":

You build your system on the assumption you prevent what can fail but you also build your system so you can recover very very quickly when failure happens. And that means you can’t afford an approach based on efficiency. Because efficiency takes away all superfluous capacity so you only have what you need to have for the circumstances you anticipate. […] You need a degree of inefficiency in order to be effective.

It seems we have a lot to learn from B-cells, ants and farmers about how to make our social systems work better and recover quicker. And, contrary to our intuition, there is a need for some inefficiency. The interesting question is how to regulate it, or how to create the conditions for self-regulation. For a given system, how much inefficiency is insufficient, how much is just enough, and when is it too much? Maybe for immune systems and ant colonies these regulatory mechanisms are already known. The challenge is to find them for organisations, societies and economies. How much can we use of what we already know about other complex adaptive systems? Well, we also have to be careful with analogies. Otherwise, we might fall into the "best practice" trap.

(See also More on Requisite Inefficiency)