Essential Balances in Projects

These are some of the frames from the projects flavour of the “Essential Balances” theme, delivered in a workshop format at a training event yesterday in Athens.

Posted on July 8, 2014.





Redrawing the Viable System Model diagram

I’ve been arguing repeatedly that trying to get the Viable System Model from overviews, introductions and writings based on or about it can put the curious mind in a state of confusion or simply lead to wrong interpretations. The absolute minimum is reading each of the three books explaining the model at least once. Better yet, twice. Why? There are at least two good reasons. The obvious one is to better understand some points and to pay attention to others that were probably missed during the first run. But there is also another reason. Books are linear by nature, and when they tackle non-linear subjects, a second reading gives the chance to better interpret each part of the text while holding in memory, more or less, all the other parts that relate to it.

Still, one of the things expected to be most helpful is in fact what brings about confusion, aversion or misuse: the VSM diagrams. They clearly favour expected ease of understanding over rigour, and yet on some important points they fail at both. Here is my short list of issues, followed by a description of each:

  • Representation of the channels
  • Confusion about operations and their direct management
  • Notation and labelling of systems
  • They show something in between a generic and an example model
  • Hierarchical implication

Representation of the channels

Stafford Beer admitted several times in his books the “diagrammatic limitations” of the VSM representations. Some of the choices had to do with the limitations of 2D representation, and others, I guess, aimed to avoid clutter. Figure 26 of “The Heart of Enterprise” is a good example of both. It shows eleven loops but implies twenty-one: 3 between environment, operations and management, multiplied by 3 for the choice of showing three operations, then another 9 = 3×3 for the loops between same-type elements, and finally 3 more between the operation management and the meta-system – 9 + 9 + 3 = 21 in total.

Confusion about operations and their direct management

Depending on the context, System One refers either to operations or to their direct management. In some diagrams S1 is the label of the circles, and in others – of the squares linked to them. Referring to one or the other in the text, depending on which channels are described, only adds to the confusion. That is related to the general problem of

Notation and labelling of systems

All diagrams representing the VSM in the original writings, and all interpretations I’ve seen so far, suggest that the circles represent System One, and that the triangles pointing up and down represent System Two and Three* respectively. Additionally, most VSM overviews state exactly that in the textual description. My assertion is almost the opposite:

Neither what is labelled as S1 nor what is shown as circles represents S1.

That might come as a shock to many and yet, now citing Beer, System One is not the “circles” but:

The collection of operating elements (that is, including their horizontal and vertical connexions)

The Heart of Enterprise, page 132

Well, strictly speaking, a system is a system because it shows emergent properties, so it is more than the collection of its parts. But even referring to it as a collection reveals how serious the misinterpretation is when only one of its parts is taken to represent the whole system.

They show something in between a generic and an example model

Communicating such matters to managers trained in business schools wasn’t an easy task. And it is even more challenging nowadays. There is a lot to learn and even more to unlearn. It is not surprising then that even in the generic models typically three operations are illustrated (same for System 2). Yet, I was always missing a true generic representation, or what many would prefer to call a “meta-model”.

Hierarchical implication

It can’t be repeated enough that the VSM is not a hierarchical model, and yet it is often perceived and used as such, or not used precisely because of that perception. It seems that recursivity is a challenging concept, while anything slightly resembling hierarchy is quickly taken to represent one. And sadly, the VSM diagram only amplifies that perception, although the orthogonality of the channels serves an entirely different purpose, something Stafford Beer rarely missed an opportunity to point out. Nevertheless, whatever is positioned higher implies seniority, and the examples of mapping to actual roles and functions only help to confirm this misinterpretation.

 

There are other issues as well, but my point was to outline the motivation for trying alternative approaches to modelling the VSM, without altering the essence or the governing principles. Here is one humble attempt to propose a different representation (there is a less humble one which I’m working on, but it’s still too early to talk about it). The following diagram favours a circular instead of an orthogonal representation, which I hope at least succeeds in destroying the hierarchical perception. Yet, from a network point of view, the higher positioning of S3 is chosen on purpose, as the network clearly shows that this node is a hub.

Generic circular view of the Viable System Model

System One is represented by red colouring, keeping the conventional notation for the operations (S1.o) and their direct management (S1.m). As mentioned above, apart from solving the labelling issue, the intention was to have it as a generic model. If that poses a problem for those used to the hybrid representation, here’s how it would look if two S1s are shown:

 

Circular network view of the Viable System Model with two operations

I hope this proposal solves, fully or partially, the five issues explained earlier and brings a new perspective that can be insightful on its own. In any case, the aim is to be useful in some way. If not as it is now, then by triggering feedback that might bring it to a better state. Or it can be useful just by provoking other, more successful attempts.

 

Posted on April 14, 2014.





More on Requisite Inefficiency

The “slides” supporting my talk on Requisite Inefficiency a couple of months ago have been on Slideshare since then, but I haven’t had the time to share them here. Which I do now.

The various manifestations of Requisite Inefficiency in both organisms and organisations can be understood by observing the maintenance of balances between homeostasis and heterostasis (as in the adaptive immune system), between exploration and exploitation (the foraging of ants, or curiosity-driven vs market-driven research), as well as various types of redundancy or shift of function. The latter can be elastic, as it is in degeneracy, or plastic, as it is in exaptation.
Having an underutilised structure or function that is capable of providing the deficit of variety to the utilised structures of a system in order to match the complexity of an external stimulus, or that can be adapted in sufficiently short time to do so, is a prerequisite for survival.
Posted on March 30, 2014.





Variety – part 2

Can you deal with it? It is amazing how language evolved to adapt to the reductionist mindset. “Deal”, which originated from “divide” and initially meant only to distribute and then to trade, is now used as a synonym for cope, manage and control. We manage things by dividing them. We eat elephants piece by piece, we start journeys of a thousand miles with a single step, and we divide to conquer.

(This is the second part of a sequence devoted to the concept of “variety”, used as a measure of complexity. It’s a good idea to read the previous part before this one, but reading it after, or not at all, is also fine.)

And that indeed proved to be a good way of managing things, or at least some things, in some situations. But often it’s not enough. To deal with things – and here I use “deal” to mean manage, understand, control – we need requisite variety, and when we don’t have it initially, we can get it in three ways: by attenuating the variety of what has to be dealt with, by amplifying our variety, or by doing a bit of both when the difference is too big.

And how do we do that? (I use we/them instead of regulator/regulated, systemA/systemB or organisation/environment, because I find it easier to imply the perspective and the purpose this way.) Let’s start with an attempt to put some very common activities in each of these groups. We attenuate external variety by grouping, categorising, splitting, standardising, setting objectives, filtering, reporting, coordinating, and consolidating. We amplify our variety by learning, trial-and-error, practising, networking, advertising, buffering, doing contingency planning, and innovating. And we can add a lot more to both lists, of course. We use activities from both lists, but when doing these activities we need requisite variety as well. That’s why we have to apply them at a lower level of recursion. We learn to split and we split to learn, for example.

Attenuate and amplify variety

It would be easy to fill the third group with pairs from each list, but now the task is to classify single types. Here are two suggestions that seem to fit: planning and pretending.

With planning we get higher variety by being prepared for at least one scenario, especially in the parts we can control, in contrast to those not prepared even for that. But then we reduce different possibilities to one and try to absorb part of the deflected variety with risk management activities. Planning is important in operations and projects, but somehow in the business setting we can get away with poor planning, at least for long enough to lose the opportunity to adapt. And that is the case in many systems with delayed feedback. That’s why I like the test of quick-feedback and skin-in-the-game situations, like sailing. In sailing, not having a good plan can be equally disastrous as sticking to a plan for too long and not adapting or replacing it quickly based on evidence against the assumptions of the initial one. And that’s valid at every level, no matter if the plan is for a week, a day or an hour.

Pretending is even more interesting in its dual role. It can be so successful as to reinforce its application to the extreme. Pretending is so important for stick insects, for example, that they apply it 24/7. That proved to be really successful for their survival, and they’ve been getting better at it for the last fifty million years. It turned out to be so satisfactory that they can live without sex for a million years. Well, that’s for a different reason, but nevertheless their adaptability is impressive. The evolutionary pressure to better resemble sticks made them sacrifice their organ symmetry so that they can afford thinner bodies. Isn’t it amazing: you give up one of your kidneys just to be able to lie better? Now, why do I argue that deception in general, and pretending in particular, has a dual role in the variety game? Stick insects amplify their morphological variety and through this they attenuate the perception variety of their predators. A predator sees a stick as a stick and a stick insect as a stick: two states attenuated into one.

Obviously snakes are more agile than stick insects, but for some species that agility goes beyond the capabilities of their bodies. Those snakes don’t pretend 24/7, but just when attacked: they pretend to be dead. And one of those species, the hognose snake, goes so far in its act as to stick its tongue out, vomit blood and sometimes even defecate. Now, that should be not just convincing but quite off-putting even for the hungriest of predators.

If pretending can be such a variety amplifier (and attenuator), pretending to pretend can achieve even more remarkable results. A way to imagine the variety proliferation of such a structure is to use an analogy with the example of three connected black boxes that Stafford Beer gave in “The Heart of Enterprise”. If the first box has three inputs and one output, each of them with two possible states, then the input variety is 8 and the output variety is 256. Going from 8 to 256 with only one output is impressive, but when that becomes the input of the next black box, having only one output as well, its variety reaches the cosmic number of 1.157×10^77.
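The arithmetic is easy to check. Here is a tiny Python sketch of the counting (my illustration of the numbers, not Beer’s notation):

```python
# Variety proliferation across connected black boxes: three binary inputs,
# one binary output per box, counting input states and possible mappings.

inputs = 2 ** 3          # three inputs, two states each: 8 input states
stage1 = 2 ** inputs     # 2^8 = 256 ways to map 8 input states to one binary output
stage2 = 2 ** stage1     # 2^256 -- the "cosmic number"

print(inputs, stage1)    # 8 256
print(f"{stage2:.4e}")   # 1.1579e+77
```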

Pretending to pretend seems to be one of the formulas of the writer Kazuo Ishiguro. As Margaret Atwood put it, “an Ishiguro novel is never about what it pretends to pretend to be about”. No wonder “Never Let Me Go” is so good. And the author, having much more variety than the stick insects, didn’t have to give up his organs to be successful. He just made up characters to give theirs.

Posted on September 14, 2013.





Variety – part 1

The cybernetic concept of variety is enjoying some increase in usage, both in frequency and in the number of different contexts. Even typing “Ross Ashby” in Google Trends supports that impression.

Ross Ashby as seen by Google Trends

In the last two years the interest seems stable, while in the previous six it was non-existent, save for the lonely peak in May 2010. That’s not a source of data to draw conclusions from, but it supports the impression coming from tweets, blogs, articles, and books. On one side, that’s good news. I still find the usage insignificant compared to what I believe it should be, given its potential and the tremendously increased supply of problems of a nature that it can help, if not in solving, then at least in understanding. Nevertheless, some stable attention is better than none at all. On the other side, it attracts a variety of interpretations, and some of them might not be healthy for the application of the concept. That’s why I hope it’s worth exchanging more ideas about variety, so that those ideas having more variety themselves would either enjoy wider adoption, or bring more benefits to those using them, or both.

The concept of “variety” as a measure of complexity was preceded and inspired by the “information entropy” of Claude Shannon, also known as the “amount of surprise” in a message. That, although stimulated by the development of communication technologies in the first half of the twentieth century, had its roots in statistical mechanics and Boltzmann’s definition of entropy. Boltzmann, unlike classical mechanics and thermodynamics, defined entropy in terms of the number of possible microstates corresponding to the macro-state of a system. (These four words – “possible”, “microstates”, “macro-state” and “system” – deserve a lot of attention. Anyway, they’ll not get it in this post.)
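As a reference point for this lineage – my addition, but standard formulas – Boltzmann’s entropy and Shannon’s information entropy are:

$$S = k_B \ln W \qquad\qquad H = -\sum_i p_i \log_2 p_i$$

where $W$ is the number of possible microstates compatible with the macro-state, and $p_i$ is the probability of message $i$. In the simplest case of $N$ equiprobable states, the Shannon measure reduces to $\log_2 N$, which is exactly variety expressed in bits, as used later in this post.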

Variety is usually defined as the number of possible states in a system. It is also applied to a set of elements, as the number of different members determines the variety of a set, and to the members themselves, which can be in different states, so that the set of possible transitions has a certain variety. This is the first important property of “variety”: it’s recursive. I’ll come back to this later. Now, to clarify what is meant by “state”:

By a state of a system is meant any well-defined condition or property that can be recognised if it occurs again.

Ross Ashby

Variety can sometimes be easy to count. For example, after opening the game in chess with the pawn to d4, the queen has a variety of three – not moving, or moving to one of the two available squares. If only the temporary variety gain is counted, then choosing d2 as the next move would give a variety of 9, and d3 would give 16. That’s not enough to tell if the move is good or bad, especially having in mind that some of that gained variety is not effective. However, in case of uncertainty, in games and elsewhere, moving to a place which both increases our future options and decreases those of the opponent seems good advice.

Variety can be expressed as a number, as was done in the chess example, but in many cases it’s more convenient to use the logarithm of that number. The common practice, maybe because of the first areas of application, is to use binary logarithms, in which case variety is expressed in bits. It is indeed more convenient to say that the variety of a four-letter code using the English alphabet is 18.8 bits instead of 456 976. And then, when the logarithmic expression is used, combining the varieties of elements is done just by adding instead of multiplying. This has additional benefits when plotting, etc.

Variety is sometimes referred to and counted as permutations. That might be fine in some cases, but as a rule it is not. To use the example with the four-letter code: it has 358 800 permutations (26 factorial divided by 22 factorial), while the variety is 456 976 (26 to the power of 4).
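Both counts, and the convenience of the logarithmic form mentioned above, are easy to verify. A minimal sketch, using the numbers from the example:

```python
import math

variety = 26 ** 4             # repetition allowed: 456 976 states
perms = math.perm(26, 4)      # no repetition: 26!/22! = 358 800

print(variety, perms)                      # 456976 358800
print(f"{math.log2(variety):.1f} bits")    # 18.8 bits

# In logarithmic form, varieties combine by adding instead of multiplying:
assert math.isclose(math.log2(variety), 4 * math.log2(26))
```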

Variety is relative. It is dependent on the observer. That’s obvious even from the word “recognised” in the definition of “state”. If, for example, there is a clock with two hands that are exactly the same, or at least so similar that an observer can’t tell the difference, then, from the point of view of that observer, the clock will have a much lower variety than a regular one. The observer will not be able to distinguish, for example, 12:30 from 6:03, as they will be seen as the same state of the clock.

Clock with indistinguishable hands

This can be seen as another dependency: that of the capacity of the channel, or the variety of the transducer. For example, it is estimated that regular humans can distinguish up to 10 million colours, while tetrachromats can distinguish at least ten times more. The variety of the transducer and the capacity of the channel should always be taken into account.

When working with variety, it is important to study the relevant constraints. If we throw a stone from the surface of the Earth, certain constraints, including those we call “gravity” and the “resistance of the air”, would allow a much smaller range of possible states than if those constraints were not present. Ross Ashby drew the following inference from this: “every law of nature is a constraint”, and “science looks for laws; it is therefore much concerned with looking for constraints”. (Which is interesting in view of the recent claim of Stuart Kauffman that “the very concept of a natural law is inadequate for much of the reality” and that we live in a lawless universe and should fundamentally rethink the role of science…)

There is this popular way of defining a system as something which is more than the sum of its parts. Let’s see this statement through the lens of varieties and constraints. Say we have two elements, A and B, each of which can be in two possible states on its own, but when they are linked, A can bring B to another, third state, and B can bring A to another state as well. In this case the system AB certainly has more variety than A and B combined but unbound: each element now has three reachable states instead of two, so up to 3×3 = 9 joint states against 2×2 = 4. But if, when linked, A and B inhibit each other, allowing one state instead of two, then it is clearly the opposite. That motivates rephrasing the popular statement to “a system might have different variety than the combined variety of its parts”.

If that example with A and B is too abstract, imagine a canoe sprint kayak with two paddlers working in sync, and then compare it with the same setting but with one of the paddlers paddling while the other holds her paddle in the water.

And now about the law of requisite variety. It’s stated as “only variety can destroy variety” by Ashby, as “variety absorbs variety” by Beer, and it has other formulations such as “the larger the variety of actions available to a control system, the larger the variety of perturbations it is able to compensate”. Basically, when the variety of the regulator is lower than the variety of the disturbance, the outcome has high variety. A regulator can only achieve the desired outcome variety if its own variety is the same as, or higher than, that of the disturbance. The recursive nature mentioned earlier can now be easily seen if we look at the regulator as a channel between the disturbance and the outcome, or if we account for the variety of the channels at the level of recursion with which we started.
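The multiplicative reading of the law – outcome variety cannot fall below disturbance variety divided by the regulator’s variety – can be checked by brute force. Here is a toy sketch (my numbers and outcome table, not Ashby’s): nine disturbances, a regulator with only three responses, and no strategy can squeeze the outcomes below 9/3 = 3.

```python
from itertools import product

D = range(9)                   # disturbance variety: 9
R = (0, 3, 6)                  # regulator variety: 3

def outcome(d, r):
    return (d + r) % 9         # the table linking disturbance and response

best = min(
    len({outcome(d, r) for d, r in zip(D, strategy)})
    for strategy in product(R, repeat=len(D))   # every possible strategy
)
print(best)                    # 3 -- outcome variety >= V(D) / V(R)
```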

To really understand the profound significance of this law, it should be seen how it exerts itself in various situations, which we wouldn’t normally describe with words such as “regulator”, “perturbations” and “variety”.

In the chess example, the power of each piece is a function of its variety, which is the one given by the rules, reduced by the constraints at every move. Was there a need to know about requisite variety to design this game? Or any other game for that matter? Or was it necessary in order to know how to wage war? Certainly not. And yet, it’s all there:

It is the rule in war, if our forces are ten to the enemy’s one, to surround him; if five to one, to attack him; if twice as numerous, to divide our army into two.

Sun Tzu, The Art of War

But let’s leave the games for a moment and remember the relative nature of variety. The light signals of ships should comply with the International Regulations for Preventing Collisions at Sea (IRPCS). Yes, here we even have the word “regulation”. Having the purpose in mind – to prevent collision – the signals have a reduced variety to communicate the states of the ships, but enough to ensure the required control. For example, if an observer sees one green light, she knows that another ship is passing from left to right, and from right to left if she sees a red light. There are lots of states – different angles of the course of the other ship – that are reduced into these two, but that serves the purpose well enough. Now, if she sees both red and green, that means the ship is coming straight towards her, which is an especially dangerous situation. That’s why the reduction of variety in this case has to be very low.

The relativity of variety is not only related to the observer’s “powers of discrimination”, or to those of the purpose of regulation. It can also depend on the situation, on the context. An example that first comes to mind is Aesop’s fable “The Fox and the Stork”.

Fables, and stories in general, are an interesting phenomenon. Their capability to influence people and survive many centuries is amazing. But why is that? Why do you need a story, instead of getting the moral of the story directly? Yes, it’s more interesting, there is this uncertainty element and all that. But there is something more. Stories are ambiguous, interpretable. They leave many things to be completed by the readers and listeners. And yes, to put it in different words, they have much higher variety than morals and values.

That’s it for this part. Stay tuned.

Posted on September 11, 2013.





The change of the change

Let’s have a variable V representing at this moment an aspect of interest I from the behaviour of a system S. This variable W is changed through transduction of a certain characteristic C of S using a transducer T. 

Now, which variable are we talking about, V or W?  Probably Z? It would be V only if I, S, C and T and the meaning of ‘variable’ remained the same at the moment of writing the second sentence and even that would require ‘only if’ to work as assumed by logic.

Yes, it can be that fast.

Posted on June 23, 2013.





Reasoning with Taskless BPMN

Was it Lisbon that attracted me so much, or the word Cybernetics in the sub-title, or the promise of Alberto Manuel that it would be a different BPM conference? Maybe all three, and more. As it happened, the conference was very well organised and indeed different in a nice way. The charm of Lisbon was amplified by the nice weather, much appreciated after the long winter. As to Cybernetics, it remained mainly in the sub-title, but that’s good enough if it makes more people go beyond the Wikipedia articles and other easily digestible summaries.

My presentation was about using task-free BPMN which, I believe – and the results so far confirm – can have serious benefits for modelling both pre-defined processes and those with some level of uncertainty. In addition, there is a nice way to execute such processes using reasoners, and to achieve transparency in Enterprise Architecture descriptions, which are usually isolated from the operational data: neither is the former linked with what actually happens, nor does the latter get timely updates from the strategy. More on this in another post. Here’s the slidedeck:

Posted on April 24, 2013.





Requisite Inefficiency

In his latest article, Ancient Wisdom teaches Business Processes, Keith Swenson reflects on an interesting story told by Jared Diamond. In short, the potato farmers in Peru used to scatter their strips of land. They kept them that way instead of amalgamating them, which would seem like the most reasonable thing to do. This turned out to be a smart risk-mitigating strategy. As the strips are scattered, the risk of various hazards is spread, and the probability of getting something from the owned land every year is higher.

I see that story as yet another manifestation of Ashby’s law of requisite variety. The environment is very complex and to deal with it somehow, we either find a way to reduce that variety in view of a particular objective, or try to increase ours. In a farming setting an example of variety reduction would be building a greenhouse. The story of the Peruvian farmers is a good example of the opposite strategy – increase of the variety of the farmers’ system. The story shows another interesting thing. It is an example of a way to deal with oscillation. The farmers controlled the damage of the lows by giving up the potential benefits of the highs.

Back to the post of Keith Swenson: after bringing this lesson to the area of business processes, he concludes:

Efficiency is not uniformity.  Instead, don’t worry about enforcing a best practice, but instead attempt only to identify and eliminate “worst practices”

I fully agree about best practices. The enforcement of best practices is what one can find in three of every four books on management and in nearly every organisation today. This may indeed increase the success rate in predictable circumstances, but it decreases resilience, and it just doesn’t work when the uncertainty of the environment is high.

I’m not quite sure about the other advice: “but instead attempt only to identify and eliminate ‘worst practices’”. Here’s why I’m uncomfortable with this statement:

1. To identify and eliminate “worst practice” is a best practice itself.

2. To spot an anti-pattern, label it as “worst practice” and eliminate it might seem the reasonable thing to do today. But what about tomorrow? Will this “worst practice” still be an anti-pattern in the new circumstances of tomorrow? Or something that we might need in order to deal with the change?

Is a certain amount of bad practice necessarily unhealthy?

It seems quite the opposite. Some bad practice is not just nice to have, it is essential for viability. I’ll not be able to put it better than Stafford Beer:

Error, controlled to a reasonable level, is not the absolute enemy we have been taught to think it. On the contrary, it is a precondition for survival. [...] The flirtation with error keeps the algedonic feedbacks toned up and ready to recognise the need for change.

Stafford Beer, Brain of the firm (1972)

I prefer to call this “reasonable level” of error requisite inefficiency. Where can we see it? In most – if not all – complex adaptive systems. A handy example is the way the immune system works in humans and other animals that have the so-called adaptive immune system (AIS).

The main agents of the AIS are the T and B lymphocytes. They are produced by stem cells in the bone marrow. They account for 20–40% of the white blood cells, which makes about 2 trillion of them. The way the AIS works is fascinating, but for the topic of requisite inefficiency, what is interesting is the reproduction of the B-cells.

The B-cells recognise the pathogen molecules, the “antigens”, depending on how well the shape of their receptor molecules matches that of the antigens. The better the match, the better the chance for the molecule to be recognised as an antigen. And when that is the case, the antigens are “marked” for destruction. Then follows a process in which the T-cells play an important role.

As we keep talking of the complexity and uncertainty of the environment, the pathogens seem a very good model of it.

The best material model of a cat is another, or preferably the same, cat.

N. Wiener, A. Rosenblueth, Philosophy of Science (1945)

What is the main problem of the immune system? It cannot predict what pathogens will invade the body and prepare accordingly. How does it solve that? By generating enormous diversity. Yes, Ashby’s law again. The way this variety is generated is interesting in itself, given the capability of the cells’ DNA to carry out random algorithms. But let’s not digress.

The big diversity may increase the chance to absorb that of the pathogens, but a match in numbers is also needed to have requisite variety. (This is why I really find variety, in cybernetic terms, such a good measure. It is relative. And it can account for both the number of types and the quantities of the same type.) If the number of matches between B-cell receptors and antigens is enough to register an “attack”, the B-cells get activated by the T-cells and start to release antibodies. Then these successful B-cells go to a lymph node where they start to reproduce rapidly. This is a reinforcing loop in which the mutations that match the antigens well go to kill invaders and then back to the lymph nodes to reproduce. Those mutations that don’t match antigens die.

That is really efficient and effective. But at the same time, the random generation of new lymphocytes with diverse shapes continues. Which is quite inefficient, when you think of it. Most of them are never used. Just wasted. Until some happen to have receptors that are a good match for a new invader. And this is how such an “inefficiency” is a precondition for survival. It should not just exist but be sufficient. The body does not work with what’s probable. It’s ready for what’s possible.
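The dynamics are easy to caricature in a few lines of code. The sketch below is deliberately crude – receptor and antigen “shapes” as integers is my simplification, not immunology – but it shows the two processes side by side: the reinforcing loop of the matching cells, and the never-ending, mostly “wasted” random generation.

```python
import random

random.seed(42)
SHAPE_SPACE = 100   # receptor/antigen shapes as numbers 0..99
MATCH = 3           # a receptor matches an antigen if within this distance

def random_cells(n):
    """The 'inefficient' part: shapes generated blindly, most never used."""
    return [random.randrange(SHAPE_SPACE) for _ in range(n)]

cells = random_cells(60)

for day in range(8):
    antigen = random.randrange(SHAPE_SPACE)       # an unpredictable invader
    matched = [c for c in cells if abs(c - antigen) <= MATCH]
    # the reinforcing loop: matching cells clone rapidly, with small mutations
    clones = [min(SHAPE_SPACE - 1, max(0, c + random.choice((-1, 0, 1))))
              for c in matched for _ in range(4)]
    cells = cells[-150:] + clones + random_cells(15)   # blind generation never stops
    print(f"day {day}: antigen={antigen:2d} matched={len(matched):2d} pool={len(cells)}")
```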

The immune system is not the only complex system exhibiting requisite inefficiency. The brain, swarms and networks are just as good examples. Given the current level of study, the easiest systems to see it in are ant colonies.

When an ant finds food, it starts to leave a trail of pheromones. When another ant encounters the trail, it follows it. If it reaches the food, the second ant returns to the nest leaving a trail as well. The same reinforcing loop we saw with the B-cells can be seen with ants. The more trails, the more likely it is that a bigger number of ants will step on them, follow them, leave more pheromones, attract more ants and so on. And again, at the same time there is always a sufficient number of ants moving randomly which can encounter a new location with food.
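The same structure can be sketched numerically (again my toy model with made-up coefficients, not a real colony simulation): the trail reinforces itself through its followers and decays through evaporation, while a fraction of the ants keeps wandering regardless.

```python
import random

random.seed(7)
ANTS = 100
WANDER = 0.1         # the 'inefficient' fraction that always explores
pheromone = 1.0      # strength of the one trail that leads to food

for step in range(10):
    p_follow = min(1 - WANDER, pheromone / (pheromone + 5))
    followers = sum(1 for _ in range(ANTS) if random.random() < p_follow)
    pheromone = pheromone * 0.8 + followers * 0.05   # evaporation + new deposits
    print(f"step {step}: followers={followers:3d} pheromone={pheromone:.2f}")
```

Under these coefficients the loop climbs towards roughly eighty followers, leaving some twenty ants free to stumble upon the next food source.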

Requisite inefficiency is equally important for social systems. Dave Snowden gave a nice example, coincidentally again with farmers, but in that case ones experiencing a high frequency of floods. Their strategy was to build their houses not in a way that prevents the water from coming in, but in a way that allows the water to quickly come out. He calls that “architecting for resilience”:

You build your system on the assumption you prevent what can fail but you also build your system so you can recover very very quickly when failure happens. And that means you can’t afford an approach based on efficiency. Because efficiency takes away all superfluous capacity so you only have what you need to have for the circumstances you anticipate. [...] You need a degree of inefficiency in order to be effective.

It seems we have a lot to learn from B-cells, ants and farmers about how to make our social systems work better and recover quicker. And contrary to our intuition, there is a need for some inefficiency. The interesting question is how to regulate it, or how to create the conditions for self-regulation. For a given system, how much inefficiency is insufficient, how much is just enough, and when does it become too much? Maybe for immune systems and ant colonies these regulatory mechanisms are already known. The challenge is to find them for organisations, societies and economies. How much can we reuse from what we already know about other complex adaptive systems? Well, we also have to be careful with analogies. Otherwise we might fall into the “best practice” trap.

Posted on March 10, 2013.





Frameworks and rigour

This is in response to the recent article by Richard Veryard, “Arguing with Mendeleev”. There he comments on Zachman’s comparison of his framework with Mendeleev’s periodic table. And indeed there are cells in both tables, with labelled columns (called “groups” in Mendeleev’s) and rows (“periods”, respectively). Another similarity is that both deal with elements and not compounds. In the same way that studying the properties of oxygen and hydrogen will tell you nothing about the properties of water, the study of any two artefacts from the Zachman Framework will tell you nothing about how the real things they stand for work together. In fact, you may not even get much about the separate properties of what the artefacts represent. Anyway, if there are any similarities, this is where they end.

I’ll not spend much time on the differences. They are too many. But let me just mention two. The periodic table is a classification based on the properties of the elements. The order is determined by atomic numbers and electron configuration. Both groups and periods have commonalities which make them an appropriate classification scheme. Once the rules are established, the place of each element can be confirmed by independent experiments and observations. That’s not the case with Zachman’s framework.

Richard comments on the statement that Zachman’s scheme is not negotiable:

What makes chemistry a science is precisely the fact that the periodic table is open to this kind of revision in the light of experimental discovery and improved theory. If the same isn’t true for the Zachman Framework, then it can hardly claim to be a proper science.

I haven’t heard the claim that Zachman’s framework is a “proper science” before. In my opinion, Zachman’s main contribution is not his framework as such but the fact that it created a new discipline and a new profession. The scheme itself is arbitrary. The columns, as we know, are based on six of the interrogatives: what, how, where, who, when, and why. Whether is missing, and so is how much. In old English there is also whither, which is similar to where but with an important specific – it relates to direction (whereto). But I’m not questioning the number of columns. I have doubts about their usefulness in general.

Let’s just take three of the interrogatives – what, how and why – and test some questions:
1. What do you do now? Answer: B
2. Why do you do B?  Answer: because of A
3. How do you do B? Answer: by means of C

And now with a simple example:

B = buy food

A = I’m hungry

C = going to the supermarket

Now let’s focus on the answers and ask questions to learn more. First on C:
I’m going to the supermarket.
Why? Answer: to buy food
Why do you need to buy food? Answer: because I’m hungry

Now let’s focus on A:
I’m hungry. Well, this is a problem. So we can ask:
How can I solve A? Answer: by doing B
How can I do B? Answer: by doing C

So if the relativity of the focus is ignored, then what is equal to why is equal to how. (Speaking of focus, or perspective, this is where the rows of the framework come into play. That is a nice game in itself, which we’ll play another time.)

In this example the answer to what is related to activities and not to their object (food), which by itself questions how appropriate it is to label the data column “what”.
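The relativity can even be mocked up in a few lines (a toy formalisation of the game above – mine, not Zachman’s): a means-end chain in which “why” looks one step up, “how” one step down, and “what” is whichever element happens to be in focus.

```python
chain = ["I'm hungry", "buy food", "go to the supermarket"]   # A -> B -> C

def why(x):
    return chain[chain.index(x) - 1]    # the end that x serves

def how(x):
    return chain[chain.index(x) + 1]    # the means by which x is done

print(why("buy food"))    # I'm hungry
print(how("buy food"))    # go to the supermarket
# Shift the focus one step along the chain and the same element answers a
# different interrogative -- the sense in which what = why = how.
```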

But of course rigour is not imperative. And neither is logic. After all, it shifted its function from a tool to win arguments into a tool to seek truth. And then, logic is quite materialistic, while EA seems a spiritual practice. Which also reminds me of the mind-over-matter one-liner:

If you don’t mind, it doesn’t matter.

Posted on March 3, 2013.





drEAmtime

In the Australian Aboriginal culture, Dreamtime “is a complex network of knowledge, faith, and practices”. Both the word and the cited definition invite vivid associations: the latter with what is commonly referred to as Enterprise Architecture (EA), the former with its current state.

Note: with this I’d like to depart from further analogies, as I respect the culture of the Aboriginal people in general and the part related to Dreamtime in particular. I’ll refer to drEAmtime in this article solely as to what I currently see as the state of play of EA.

The drEAm of the common language

It has been believed for a long time now that there is a widespread misalignment between ‘Business’ and ‘IT’. IT in this context refers to the employees that carry out activities closely related to the development, procurement and maintenance of information systems, and ‘Business’ to those who don’t.

The division of labour, which is the predominant way of structuring organisations, naturally invites different specialists with different backgrounds (neatly matching the fragmentation of the educational system), and it is to be expected that they would use different terms, or the same terms with different meanings implied. It is interesting to note that even if this is the case between, say, accounting and production, the miscommunication between IT and the rest attracts the highest attention. Maybe it is due to the increasing dependency on IT combined with stories of spectacular failure, or because IT, as an area with an incredible rate of innovation and thus a sense of leading, has to follow orders from those belonging to somewhat lagging areas. In any case, there is general agreement about the problem of miscommunication and the associated high cost.

Here comes Enterprise Architecture to the rescue. Doing what? Introducing a third language. What’s even worse, a language ostensibly similar to both ‘Business’ and ‘IT’. But just check some EA terms to see how people from ‘Business’ and ‘IT’ interpret them. Take for example the most common ones, like service, capability, function, model, viewpoint and artefact. The resulting situation looks something like this:

How EA is helping Business and IT understand each other better

 

It is quite absurd, but let’s imagine for a moment that it’s not. At best we would see the following vicious circle. The strong IT smell of the EA language decreases the chance of the Business believing that they would benefit from learning it, which explains the low motivation and buy-in. And even though the cognitive load for IT people is much lower, seeing how willing the Business is to ‘improve’ its literacy, they find better things to keep busy with. And another funny point. When there is some buy-in from the Business, EA is put under IT, not between Business and IT. Then EA people complain they can’t do much from there. But maybe they are put there just because they can’t do much.

Closely related to the dream of the common language is

The drEAm of bridging the silos

Some of this dream is part of the previous one. But here is another aspect. The EA people, instead of building a bridge between the two silos, create a new silo. And at a certain point they find themselves in a position where neither Business nor IT regards them as a bridge any more. The Business people trust EA people even less than IT people, because they see them as cross-dressed IT. IT people lose trust in EA as well, because they are not sure EA people understand the Business, or whether they still remember what IT was. Further, IT managers need support which EA does not have the authority to ensure.

And there is an additional effect. EA is often in a position to attract some serious budgets, for reasons we’ll see in another dream, and this way the new island becomes a safe territory for people that have either failed at or lost interest in pure IT. This in turn further decreases the credibility of EA, which slowly, in some organisations, gets the image of a place for people that are not good enough for IT and prefer to hide under EA labels, where things are vague enough and much more difficult to measure. The lost credibility either undermines the work of the really good EA practitioners, or pushes them out of the organisation, or both.

But what part of the work of EA is quantifiable? One tangible result of successful EA seems to be the rationalisation of the application landscape. And as this brings efficiency, it is easier to measure. This I call

The drEAm of IT cost reduction

Big organisations in all sectors, especially in the service industries, tend to gather a huge number of applications until they find themselves in a situation where there are far too many to manage. A good number of them are not used at all. Others are underutilised. Most of the critical applications have high maintenance costs or high replacement costs or both. Inevitably there are many which automate different parts of the same process but don’t talk to each other. And this justifies new spending on building interfaces, or on buying application integration packages first and then replacing them with a BPMS and then probably with something better than a BPMS. As a result: more spending and more applications to manage.

In addition to the application integration problems there are the redundancy ones. A number of existing functionalities – and I’ve seen this being over fifty per cent in a few organisations – are duplicated in one or more applications, to some extent or completely.

Yet another common situation is that of patchwork applications. Those are applications that have a certain utility but don’t meet the needs quite well, or the needs change with usage or as a result of some change in the business. In any case, it is found better to add the missing functionality instead of replacing the whole application. And then again and again, layers of patches of additions and fixes, until we have a real monster and the roles of master and servant are swapped.

One day all these silo applications, functional redundancies, patchwork systems and suchlike create a serious enough mess, and shocking enough numbers in the financial statements, to convince the top management that something has to be done, and rather soon.

But just when the situation seems really critical, the door opens with a kick and the EA cowboys enter. They pull out frameworks and architecture tools from their holsters and in slow motion (a very slow motion) they shoot inefficiency after inefficiency, until all of them lie dead on the floor. Then they walk out and go to shoot inefficiencies in some other town, and when new inefficiencies appear in this town, they come back again to kill them off.

Here is what happens in fact. Some attempts to achieve IT rationalisation fail spectacularly. I’m not going to list the reasons for that. It is maybe sad that such failures discredit EA as a management discipline as a whole. But sometimes Enterprise Architects are really able to find ways to discover what’s not needed and how to remove it, or what is underutilised and how to achieve a better ROI for it. After all, most of them are smart people using good tools. And indeed they shoot inefficiencies and get all the glory and the money to shoot more. But as they rarely get to the cause of the inefficiencies, and are rarely in a position to influence the bigger system that produces them, the overall result is an oscillation or even an increase in overall IT spending. The increase comes about because the success of EA justifies a bigger EA budget, which is almost without exception part of the IT budget.

The drEAm of dealing with complexity

This dream has specifics of its own, but it can also be used to explain the whole drEAmtime.

If you are an Enterprise Architect, a popular way to deal with complexity is to arm yourself with a framework. With a good framework, it is believed, you can do two things. First, reduce the variety of the enterprise to just a few things that share the same properties, according to some classification theory, and where things don’t fit, add more layers of abstraction. And second, reduce the things you can possibly do to just a few but well-defined ones in a specific order, with well-prescribed inputs and outputs, because that was common for so many organisations that did well that it became a best practice, and the chances are, if you follow this way, it will do you well as well. Now, because of the shared understanding of the beneficial role of the abstract layers, and the boundaryless imagination unconstrained by reality, there is a serious number of frame-works and, on top of them, other-works on how to adapt and adopt them.

Then of course modelling itself is believed to help in dealing with complexity. But what kind of modelling? A very complicated architecture diagram does not show complexity. It just shows a lot of effort spent in denial of it.

The part related to complexity certainly deserves a separate post, and maybe more than one. As this one got pretty long already – for my standards, that is – let me just finish with the following: dealing with complexity is not reduced to finding ways to reduce it. It requires a much different understanding of what happens when interactions are not linear. When there are dynamics, adaptation, self-organisation, irrational behaviour, politics and power.

 

In summary, more often than not, when contemporary mainstream EA is trying to introduce a common language, it creates confusion and additional work to deal with it. When trying to bridge the silos, it creates new silos instead. When trying to reduce IT spending, it in fact changes nothing or increases it. When trying to deal with complexity, it’s just pathetic.

Posted on January 28, 2013.





Copyright © 2011-2014 Strategic Structures