Cohesion Forces and Tools

This article is part of the series on Autonomy and Cohesion. It is the second part of the basic overview of the balance. If you haven’t read the previous part, I’d recommend doing so before reading further.

Cohesion forces

Liquids and solids are in those states because cohesion forces bond their molecules together. The main cohesion forces currently studied in physics are the van der Waals forces, dipole-dipole interactions, hydrogen bonding, and ionic bonding. In socio-technical systems, there are cohesion forces too. Those forces are far more complex and less studied. Cohesion in socio-technical systems is not only due to natural forces; tools, technologies, and artifacts can contribute significantly too, bringing both direct and systemic effects. We’ll get to the cohesion tools and technologies after we briefly review some cohesion forces and factors. Cohesion forces and factors are difficult or impossible to influence, which is what makes them different from cohesion tools and technologies.

There are personal cohesion factors like the need for safety, the need to belong to a social group, the need to reduce uncertainty, and the need to increase self-esteem. Such needs make us form clubs, tribes, communities, organizations, and networks.

Shared values and beliefs are strong cohesion forces, and those can include the shared value of autonomy.

There are also social identity cohesion forces. We tend to identify, sometimes strongly, with sports clubs, ethnic groups, communities, professions, organizations, or religions. In some situations, compassion, loyalty, and empathy play a bigger role. In others, completely different forces. For example, typical personal cohesion forces in social networks are the need for self-expression, validation, and recognition, as well as the fear of missing out.

In every socio-technical system, there are internal and external cohesion factors and forces. The personal cohesion forces work both within organizations and networks, although they have different subsets and strengths. Typical internal organizational cohesion forces are organizational identity, internal operational dependencies, shared resources, synergy, and efficiency. In networks, cohesion forces and factors are proximity, transitivity, and preferential attachment, and in social networks, there are many additional ones, such as shared interest and shared aversion.

Which brings us to the external cohesion factors.

Essential Balances, Huizen, 2019

Metaphorum was established in 2003 as an NGO to develop Stafford Beer’s legacy and has organized a series of management cybernetics conferences and workshops over the years.

I have attended three Metaphorum conferences so far.

In Hull, 2015, I shared ideas on how some of the findings of the enactive school of cognitive science can enrich the Viable System Model (VSM). Only about 8 minutes of the talk were recorded, but the full Prezi and a PDF export of the frames are available.

The next Metaphorum event I attended was in Düsseldorf, 2018. The conference theme was Re-designing Freedom, which was also embodied in the conference design, emphasizing different forms of self-organization1, and held at the premises of a company proud of its agile work hacks.

At both conferences, I tried to point to a promising new area of research. In Hull, it was towards enactivism as a rich source to draw from and fill some of the gaps in the VSM. In Düsseldorf, my invitation was to travel to another uncharted territory. There I tried to pose and start answering the question “What can Social Systems Theory bring to the VSM?” You can check out the slides here.

The last day of the conference was a short, hacked version of team syntegrity that worked pretty well. My topic was “Burst the VSM bubble”, and in the following year, I demonstrated one way to do that.

The 2019 conference took place in Huizen, a beautiful village in The Netherlands, not far from Amsterdam. I decided, instead of delivering a typical management cybernetics talk, to present the Essential Balances.

Management cybernetics has its models and language. They are valuable when discussing with peers and for the advancement of the discipline. Yet, they limit the accessibility of these ideas to a wider audience. What’s more, they limit the spread of the mindset and skills needed for understanding and working with organizational complexity. There is no need to put people off with transducers, amplifiers, attenuators, and algedonic alerts. Even using the word “system” is unnecessary. But the first essential balance, the one influenced by the VSM, does not differ only in avoiding the cybernetic jargon. It offers an observer-centric, non-mechanistic way of dealing with organizations, and – importantly – does so without models and prescriptions.

The next Metaphorum conference is now open for registration. It’s planned for June this year.

  • 1
    For example, this was the first Metaphorum conference to use BarCamp.

Problem as Cylinder

Recently a friend of mine told me “I can’t get my head around the law of requisite variety”. I’ve heard that before. I have also heard the opposite and sometimes found it wasn’t the case. That’s why I wrote Variety – part 1 and part 2 back in 2013. Part 3 wasn’t that lucky to get published. But whatever was there and much more is now written and will be published as part of chapter 4, “Stimuli and Responses,” in the forthcoming book Essential Balances. Until then, here’s an elaboration of my response to “I can’t get my head around the law of requisite variety”.

First, as a reminder of the law, I’ll just reuse a paragraph from Variety – part 1 :

It’s stated as “variety can destroy variety” by Ashby and as “only variety can absorb variety” by Beer, and has other formulations such as “The larger the variety of actions available to a control system, the larger the variety of perturbations it is able to compensate”. Basically, when the variety of the regulator is lower than the variety of the disturbance, the outcome has high variety. A regulator can only achieve the desired outcome variety if its own variety is the same as or higher than that of the disturbance.

That sounds way too technical, so we need an example. As this medium is text, it will be easier to count the variety of words. The variety of “cuckoo” is four, as there are four different letters. If your goal is to count the variety of a word, then as long as you can distinguish these letters, you have enough variety to achieve your goal. But what about the variety of “melon” and “lemon”? It’s five for both of them. If these were two hands of five playing cards, they would be exactly equal in strength. But that doesn’t mean there won’t be a winner. The cards will be played in a certain order. And it is also (and only!) the order of letters that helps us distinguish “melon” from “lemon”. The type of letters, their size, and their order may all participate in measuring the variety of a word. If the font or the colour of the letters is different, that would be another criterion for distinction, depending on how it matters for the goal that you have. And then, there was the assumption that the elements of a word are the letters, which is a common and fair assumption. But if your goal is to print them clearly, then you would be more interested in the number of pixels. (You won’t have this problem with the sketch below, as it’s in vectors: you can scale it up as much as you want without loss of quality.)
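The letter-counting part of the example can be sketched in a few lines of Python. The function name and the optional criterion parameter are mine, added to show that variety depends on the distinctions the observer chooses to make:

```python
def variety(word, criterion=None):
    """Count the distinct elements of a word.

    By default the elements are the letters themselves; passing a
    different criterion changes what counts as a distinction.
    """
    elements = word if criterion is None else map(criterion, word)
    return len(set(elements))

print(variety("cuckoo"))                   # 4 distinct letters: c, u, k, o
print(variety("melon"), variety("lemon"))  # 5 and 5: equal variety
# Telling melon from lemon also needs the order of the letters,
# a distinction this bag-of-letters observer cannot make.
```

Note that `variety("AaBb", criterion=str.lower)` is 2, not 4: change the criterion of distinction and the measured variety changes with it.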

Like gravity, the Law of Requisite Variety is omnipresent. Whenever there are purpose and interaction, it’s there. To demonstrate this, I can use just the complaint that triggered this post: “I can’t get my head around the law of requisite variety”. It contains the popular idiom “I can’t get my head around”, which can be seen as a conceptual metaphor (see Metaphors We Live By and Philosophy in the Flesh by Lakoff and Johnson, as well as “QUTE” and “Human Resources?” for some examples), where a certain problem or theory is seen as something physical, and the ability to wrap it is the ability to understand it.

What can Social Systems Theory bring to the VSM?

In 2015, when the Metaphorum was in Hull, I tried to kick off a discussion about potential contributions from cognitive science, and particularly from the Enactive school. I shared some insights and hinted at other possibilities. This year the Metaphorum conference was in Germany for the first time. It was organised by Mark Lambertz and hosted by Sipgate in Düsseldorf. I saw the fact that the Metaphorum was in Germany as a good opportunity to suggest another combination, this time with the Social Systems Theory of Niklas Luhmann.

These are the slides from my talk and here you can also watch them with all animations.


To be healthy, organisations – like human beings – have to operate in balance. Going temporarily out of balance is OK, but if this goes on for too long, it’s dangerous. Just like riding a bike, the balance is the minimum organisations need to be able to move forward.

What kinds of things need to be balanced? There are three essential balances. The first one is between autonomy and cohesion, the second is about maintaining both stability and diversity, and the third is balancing between exploration and exploitation. The important thing to recognise here is that the nature of each balance will differ between organisations. And what needs to be done to restore balance will change over time. So we can’t be prescriptive or learn “best practice” from others. We can only give people the glasses to see what is going on and the knowledge that will help them maintain the balances in their organisations.

I’ve been doing the Essential Balances workshop for four years now. During the workshop, all three balances seem relatively easy to get, yet a bit more difficult to work with and make a habit of. Based on the feedback I received from people using these glasses in practice for organisational diagnosis and design, the first and the third balance, Autonomy-Cohesion and Exploitation-Exploration, come more naturally (with certain difficulties in the fractal dimension), while the second one, Stability-Diversity, creates problems. All three of them, and a few more, will be explained in detail in the forthcoming book Essential Balances, but until then, I’ll make some clarifications here. I hope they will also be of use for people who are not familiar with this practice.

Stability and Diversity. At first glance, it might be difficult to see it as a balance. In fact, it covers three dynamics. So, it might be easier to see it as three different balances. Different, yet somehow the same. And the key to it is exactly in these two words: different and same.

SASSY Architecture

SASSY Architecture is a practice of combining two seemingly incompatible worldviews. The first one is based on non-contradiction and supports the vision for an ACE enterprise (Agile, Coherent, Efficient), through 3E enterprise descriptions (Expressive, Extensible, Executable), achieving “3 for the price of 1”: Enterprise Architecture, Governance, and Data Integration.

The second is based on self-reference and is a way of seeing enterprises as topologies of paradoxical decisions. Such a way of thinking helps deconstruct constraints to unleash innovation, reveal hidden dependencies in the decisions network, and avoid patterns of decisions limiting future options.

As a short overview, here are the slides from my talk at the Enterprise Architecture Summer School in Copenhagen last week.


The Mind Of Enterprise

I should have shared this presentation in November 2015 but anyway, better late than never. Here it is as static slides…

… and if your browser allows, you can play the original:

There is also a video, but due to a technical problem, only the first few minutes were recorded.

From Distinction to Value and Back

I tried to explain earlier how distinction brings forth meaning and then value. Starting from distinction may seem arbitrary. Well, it is. And while it is, it is not. That wouldn’t be very difficult to show, but let’s first take a closer look at distinction. As the bigger part of that work has already been done by George Spencer-Brown, I’ll first recall the basics of his calculus of indications, adding my interpretations here and there. Then I’ll quickly review some of the resonances. Last, I’ll come back to the idea of re-entry and apply it to the emergence of values.

Recalling

If I have to summarise the calculus of indications, it comes down to these three statements:

To re-call is to call.

To re-cross is not to cross.

To re-enter is to oscillate.

In the original text, the “Laws of Form”, only the first two are treated as basic, and the third comes as their consequence. Later on, I’ll try to show that the third one depends on the first two, just as the first two depend on the third one.

The calculus of indications starts with the first distinction, as “we cannot make an indication without drawing a distinction”. George Spencer-Brown introduces a very elegant sign to indicate the distinction:

mark

It can be seen as a shorthand of a rectangle, separating inside from outside. The sign is called “mark”, as it marks the distinction. The inside is called the “unmarked state”, and the outside is the “marked state”. The mark is also the name of the marked state.

This tiny symbol has the power to indicate several things at once:

  1. The inside (emptiness, void, nothing, the unmarked state)
  2. The outside (something, the marked state)
  3. The distinction as a sign (indication)
  4. The operation of making a distinction
  5. The invitation to cross from one side to the other
  6. The observer, the one that makes the distinction

That’s not even the full list, as we’ll see later.

Armed with this notation, we can express the three statements:

LoF

They can be written in a more common way using brackets:

()() = ()

(()) =   .

a  =  (a)

The sign “=” may seem like something we should take as a given. It’s not. It means “can be confused with”, or in other words: “there is no distinction between the value on the left side of the equation and the value on the right side”. Again, a form made out of distinction.

The first statement is called the law of calling. Here’s how it is originally formulated in the “Laws of Form”:

The value of a call made again is the value of the call.

If we see the left sign as indicating a distinction, and the right sign as the name of the distinction, then the right sign indicates something which is already indicated by the left sign.

That is a bit abstract, so maybe an example would help. If you are trying to highlight a word in a sentence and you do that by underlining it, then no matter how many times you draw a line below that word, at the end, the word will be as distinguished as it was after the first line. Or if you first underline it, then make a circle around it, then highlight it with a yellow marker, and so on, as long as each of these doesn’t carry a special meaning, all these ways of distinguishing the word together do just what each of them does separately – draw attention to it.

Or when somebody tells you, “I like ice cream”, and then tells you that again in 10 minutes, it won’t make any difference unless you’ve forgotten it in the meantime. In other words, making the same announcement to the receiver will not change the already changed state of awareness. That has important implications for understanding information.

The second law is originally stated as follows:

The value of a crossing made again is not the value of the crossing.

One more way to interpret the mark is as an invitation to cross from the inside to the outside. As such, it serves as an operator and operand at the same time. The outer mark operates on the inner mark and turns it into the void.

If the inner mark turns its inside, which is empty, nothing, into outside, which is something, then the outer mark turns its inside, which is something, due to the operation of the inner mark, into nothing.

Picture a house with a fence that fully surrounds it. You jump over the fence and then continue walking straight until you reach the fence again and jump over to the other side. As far as your state of being inside or outside is concerned, crossing twice is equal to not crossing at all.

The whole arithmetic and algebra of George Spencer-Brown are based on these two equations. Here is a summary of the primary algebra.
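The two laws can even be run as a tiny string-rewriting sketch. This is my illustration, not part of the calculus itself: each law becomes a rewrite rule in the bracket notation above, applied until a fixed point, and it handles only variable-free forms.

```python
def reduce_form(form: str) -> str:
    """Reduce a variable-free Laws of Form expression written with brackets.

    Law of calling:  ()() = ()   (a call made again is the call)
    Law of crossing: (()) =      (a crossing made again is not a crossing)
    """
    while True:
        simplified = form.replace("(())", "").replace("()()", "()")
        if simplified == form:   # fixed point reached: nothing left to reduce
            return form
        form = simplified

print(reduce_form("(())"))    # -> "" (the unmarked state, the void)
print(reduce_form("()()()")) # -> "()" (calling again and again is one call)
print(reduce_form("((()))")) # -> "()" (crossing twice inside cancels out)
```

Every variable-free form reduces this way to either the mark or the void, which is exactly the arithmetic part of the calculus.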

The third equation has a variable in it.

a = (a)

It has two possible values, mark or void. We can test what happens by trying out the two possible values on the right side of the equation.

Let a be void, then:

a = ( )

Thus, if a is void, then it is a mark.

Now, let a be mark, then:

a = (()) =   .

If a is a mark, then substituting a with a mark on the right side will bring a mark inside another, which, according to the law of crossing, will give the unmarked state, the void.

This way we have an expression of self-reference. It can be seen in numerical algebra in equations such as x = −1/x, which has no real solution, only the imaginary ones x = ±i. It can be traced in logic and philosophy with the Liar paradox, statements such as “This statement is false”, or Russell’s set of all sets that are not members of themselves.

However, in the calculus of indications, this form lives naturally. The way a distinction creates space, a re-entry creates time.

There is no magic about it. In fact, software programs don’t just contain self-referential expressions; they can’t do without them. They iterate using expressions such as n = n + 1.
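The oscillation of re-entry can be made concrete with a sketch (my illustration, not Spencer-Brown’s): if we read the mark as negation and feed the output of a = (a) back in as input, the value flips on every step. There is no stable value; the distinction unfolds in time rather than in space.

```python
def reenter(a: bool, steps: int):
    """Iterate a = (a), reading the mark as negation.

    No fixed value satisfies the equation; iterating it produces
    an oscillation, i.e. a waveform in time.
    """
    trace = []
    for _ in range(steps):
        a = not a          # the re-entry: a receives (a)
        trace.append(a)
    return trace

print(reenter(True, 6))   # [False, True, False, True, False, True]
```

This is the same behaviour as a flip-flop circuit built from an inverter feeding back into itself.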

Recrossing

The Laws of Form resonate in religions, philosophies and science.

Chuang Tzu:

The knowledge of the ancients was perfect. How so? At first, they did not yet know there were things. That is the most perfect knowledge; nothing can be added. Next, they knew that there were things, but they did not yet make distinctions between them. Next, they made distinctions, but they did not yet pass judgements on them. But when the judgements were passed, the Whole was destroyed. With the destruction of the Whole, individual bias arose.

The Tanakh (aka the Old Testament) starts with:

In the beginning when God created the heavens and the earth, the earth was a formless void… Then God said, ‘Let there be light’; and there was light. …God separated the light from the darkness. God called the light Day, and the darkness he called Night.

That’s how God made the first distinction:

(void) light

And then, in Tao Te Ching:

 The nameless is the beginning of heaven and earth…

Analogies can be found in Hinduism, Buddhism, and Islamic philosophy. For example, the latter distinguishes essence (Dhat) from attribute (Sifat), which are neither identical nor separate. Speaking of Islam, the occultation prompts another association. According to Shia Islam, the Twelfth Imam has been living in a temporary occultation and is about to reappear one day. Occultation is also one of the identities in the primary algebra:

c4-occultation-LoF

In it, the variable b disappears from left to right and appears from right to left. This can be pictured by changing the position of an observer to the right until b is fully hidden behind a; then, when moving back to the left, b reappears:

Occultation
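Under the usual reading of the primary algebra into two-valued logic (mark = true, void = false, juxtaposition = disjunction, (x) = negation), the occultation identity ((a)b)a = a can be checked by brute force. The encoding is the standard one from the literature; the code itself is my sketch.

```python
from itertools import product

def cross(x):
    """(x): crossing the mark negates."""
    return not x

def juxtapose(x, y):
    """xy: writing forms side by side is disjunction."""
    return x or y

# Occultation: ((a)b)a = a, for every value of a and b
for a, b in product([False, True], repeat=2):
    left = juxtapose(cross(juxtapose(cross(a), b)), a)
    assert left == a
print("occultation holds for all values of a and b")
```

The check mirrors the picture above: once a is present on the outside, whatever b does inside is hidden behind it.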

Another association, which I find particularly fascinating, is with the ancient Buddhist logical system catuskoti, the four corners. Unlike Aristotelian logic, with its principles of non-contradiction and the excluded middle, catuskoti allows four possible values:

not being

being

both being and not being

neither being nor not being

The first three correspond quite well with the void, distinction, and re-entry, respectively. That is in line with Varela’s view that apart from the unmarked and the marked state, there should be a third one, which he calls the autonomous state.

The fourth value would represent anything which is unknown. If we set being as “true”, and not-being as “false”, then every statement about the future is neither true nor false at the moment of uttering. And we make a lot of statements about the future, so it is common to have things in the fourth corner.

The fourth value also reminds me of the Open World Assumption, which I find very useful in many cases, as I mentioned here, here, and here. It also tempts me to add a fourth statement to the initial three:

To not know is not to know there is not.

Catuskoti fits naturally into the Buddhist worldview, while being at odds with the Western one. At least until recently, when some multi-valued logics appeared.
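One such multi-valued logic is Belnap’s four-valued logic, a close modern relative of the four corners. Here is a sketch of its negation; the names, and the mapping to catuskoti, are my loose analogy, not a claim about Buddhist logic.

```python
from enum import Enum

class V(Enum):
    NOT_BEING = "false"
    BEING = "true"
    BOTH = "both"       # both being and not being
    NEITHER = "neither" # the fourth corner: the unknown

def negate(v: V) -> V:
    """Negation in a Belnap-style four-valued logic: it swaps the two
    classical values and leaves the two extra corners fixed."""
    return {V.BEING: V.NOT_BEING, V.NOT_BEING: V.BEING,
            V.BOTH: V.BOTH, V.NEITHER: V.NEITHER}[v]

# A statement about the future sits in the fourth corner until settled:
forecast = V.NEITHER
print(negate(forecast))   # negating the unknown settles nothing
```

Note how the fourth corner is closed under negation: denying something unknown leaves it just as unknown, which is the Open World Assumption in miniature.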

George Spencer-Brown, Louis Kauffman, and William Bricken demonstrated that many of the other mathematical and logical theories can be generated using the calculus of indications. For example, in elementary logic and set theory, negation, disjunction, conjunction, and entailment can be represented respectively with (A), AB, ((A)(B)), and (A)B, so that the classical syllogism ((A entails B) and (B entails C)) entails (A entails C) can be shown with the following form:

SyllogismInLoF

If that’s of interest, you can find explanations and many more examples in this paper by Louis Kauffman.
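With the same encoding (negation as (A), juxtaposition as disjunction, so entailment (A)B becomes “not A, or B”), the syllogism can be verified over all valuations. A sketch under that standard mapping:

```python
from itertools import product

def entails(x, y):
    """(X)Y in the bracket notation: material implication."""
    return (not x) or y

# ((A entails B) and (B entails C)) entails (A entails C)
for a, b, c in product([False, True], repeat=3):
    premises = entails(a, b) and entails(b, c)
    assert entails(premises, entails(a, c))
print("the syllogism holds in all eight cases")
```

In the calculus, the same fact shows up as the corresponding form reducing to the mark, i.e. to “true”, regardless of the values of A, B, and C.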

“Laws of Form” inspired extensions and applications in mathematics, second-order cybernetics, biology, cognitive and social sciences. It influenced prominent thinkers like Heinz von Foerster, Humberto Maturana, Francisco Varela, Louis Kauffman, William Bricken, Niklas Luhmann, and Dirk Baecker.

Reentering

Self-reference is awkward: one may find the axioms in the explanation, the brain writing its own theory, a cell computing its own computer, the observer in the observed, the snake eating its own tail in a ceaseless generative process.

F. Varela, A Calculus for Self-reference

Is re-entry fundamental or a construct? According to George Spencer-Brown, it’s a construct. Varela, on the other hand, finds it not just fundamental but actually the third value, the autonomous state. He brings up some quite convincing arguments. For Kauffman, re-entry is based on distinction, just as distinction is based on re-entry:

the emergence of the mark itself requires self-reference, for there can be no mark without a distinction and there can be no distinction without indication (Spencer-Brown says there can be no indication without a distinction. This argument says it the other way around.). Indication is itself a distinction, and one sees that the act of distinction is necessarily circular.

That was the reason I presented three statements, and not only the first two, as a summary of the calculus of indications.

A similar kind of reasoning can be applied to sense-making. It can be seen as an interplay between autonomy and adaptivity. Autonomy makes the distinctions possible, and the other way around. Making distinctions on distinctions is, in fact, sense-making, but it also changes the way distinctions are made, due to adaptivity. At this new level, distinctions become normative. They have value in the sense that the autonomous system has an attitude. It has a re-action determined by (and determining) that value. The simplest attitudes are those of attraction, aversion, and neutrality.

This narrative may imply that values are of a higher order. First, distinctions are made, then sense, and then values, in a sort of linear chain. But it is not linear at all.

As George Spencer-Brown points out, a distinction can only be made by an observer, and the observer has a motive to make certain distinctions and not others:

If a content is of value, a name can be taken to indicate this value.

Thus the calling of a name can be identified with the value of the content

Thus values enable distinctions and close the circle. Another re-entry. We can experience values due to the significance that our interaction with the world brings forth. This significance is based on making distinctions, and we can make distinctions because they have value for us.

But what is value and is it valuable at all? And if value is of any value, what is it that makes it such?

Ezequiel Di Paolo defines value as:

the extent to which a situation affects the viability of a self-sustaining and precarious network of processes that generates an identity

And then he adds that the “most intensely analysed such process is autopoiesis”.

In fact, the search for calculus for autopoiesis was what attracted Varela to the mathematics of Laws of Form in the first place. It was a pursuit to explain life and cognition. Autopoiesis was also the main reason for Luhmann and Baecker’s interest but in their case for studying social systems.

Operationally closed networks of processes in general, and autopoiesis in particular, show both re-entry and a distinction that is enabled by this re-entry and sustains it. For an operationally closed system, all its processes enable and are enabled by other processes within the system. The autopoietic system is the stronger case, where the components of the processes are not just enabled but actually produced by them.

Both are cases of generating identity, which is making a distinction between the autonomous system and the environment. The environment is not everything surrounding it, but only the niche which makes sense to it. This sense-making is not passive and static. It is a process enacted by the system which brings about its niche.

Identity generation makes a distinction which is also what it is not, a unity. That is how living systems become more independent from the environment, which supplies the fuel for their independence and absorbs the exhaust of practising independence. And more independence means more fuel, hence a bigger dependence. The phenomenon of life is a “needful freedom”, as Hans Jonas pointed out.

Zooming out, we come back to the observation of George Spencer-Brown:

the world we know is constructed in order to see itself. […] but in any attempt to see itself, […] it must act so as to make itself distinct from, and therefore false to, itself.

Closing the circle from distinctions through sense-making and value-making to (new) distinctions solves the previous implication of linearity, but it may now be misunderstood to imply causality. First, my intention was to point these out as enabling conditions, leaving aside, for now, the question of whether they are necessary and sufficient. Second, the circle is enabled by and enables many others, the operationally closed self-generation of identity being of central interest so far. And third, singling out these three operations is a matter of distinctions made by me, as an act of sense-making, and on the basis of certain values.

Language and meta-language for Enterprise Architecture

That was the topic of a talk I gave in October 2014 at an Enterprise Architecture event in London.

Most of the slides are available as PDF slidedeck on Slideshare.

They probably don’t tell the story by themselves, and I’m not going to help them here unless this post provokes a discussion. What I’ll do instead is clarify the title. “Language” refers to the means of describing organisations. They could be different. Given the current state of maturity, I have found those based on description logic to be very useful. What I mean by the “current state of maturity” is that the method’s theoretical development, its application, the technologies supporting it, and the experience with their application justify investing in utilising it and helping in its further development. Although I find such a language clearly superior to the alternatives in use, that doesn’t mean there are no issues, nor that no other approaches are showing convincing solutions to those issues. However, the practice with the latter, or with the available tools, doesn’t give me enough reason to stand behind them. The situation with the “meta-language” is similar, but let’s first clarify why I call it that.

Metalanguage is commonly defined as language about language. If that was the meaning I intended, these notes here could have been referred to as a mixture of another meta- and a meta-meta-language. That’s not the case. But to clarify the intended meaning of “meta,” I need to first clarify “language.”

I have found that there is a need to describe properly the “objects” that people in organisations are concerned with and how they relate to each other. It could be some way of representing physical things such as buildings, documents and servers or abstract concepts such as services, processes and capabilities. And although it relates also to abstract things, I sometimes call it “language for the substance”.

Organisations are autonomous and adaptive systems, continuously maintained by their interaction with their niche, the latter being brought forth from the background by that very interaction. While a language such as the one proposed can be useful for understanding the components of an organisation, it doesn’t help much in understanding the dynamics and viability. The language for the substance cannot be used to talk about the form. That’s why there is a need, maybe temporary until we find a better solution and probably a single language, for another language, and that other language is what I called the meta-language in the presentation.

As this is a language for the form, I keep looking for ways to utilise some proposals. One nominee is George Spencer-Brown’s Laws of Form (this post includes a brief introduction). Papers like this one by Dirk Baecker give me hope that it is possible. Until then, for the purposes of Enterprise Architecture, I find the Viable System Model, with the whole body of knowledge and practice associated with it, to be the most pragmatic meta-language.

 


I’ve been arguing repeatedly that trying to get the Viable System Model from overviews, introductions, and writings based on or about it can put the curious mind in a state of confusion or simply lead to wrong interpretations. The absolute minimum is reading each of the three books explaining the model at least once. But better twice. Why? There are at least two good reasons. The obvious one is to better understand some points and pay attention to others that were probably missed during the first run. But there is also another reason. Books are linear in nature, and when tackling non-linear subjects, a second reading gives the chance to better interpret each part of the text while having in memory the other parts which relate to it.

Still, one of the things that are expected to be most helpful is, in fact, what brings about either confusion, aversion, or misuse: the VSM diagrams. They clearly favour expected ease of understanding over rigour, and yet they often fail at both. Here is my short list of issues, followed by a description of each:

  • Representation of the channels
  • Confusion about operations and their direct management
  • Notation and labelling of systems
  • They show something between generic and example model
  • Hierarchical implication

Representation of the channels

Stafford Beer admitted several times in his books the “diagrammatic limitations” of the VSM representations. Some of the choices had to do with the limitations of 2D representation, and others, I guess, aimed to avoid clutter. Figure 26 of “The Heart of Enterprise” is a good example of both. It shows eleven loops but implies twenty-one: 9 = 3×3 between environment, operations, and management (three loops, multiplied by three for the choice of showing three operations), then another 9 = 3×3 for loops between same-type elements, and finally 3 more between operation management and the meta-system.
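The counting can be restated explicitly (this is just the arithmetic above, with the three operations shown in the figure as a parameter):

```python
n_ops = 3  # operations shown in Figure 26

env_ops_mgmt = 3 * n_ops    # environment-operation-management loops
same_type = n_ops * n_ops   # loops between same-type elements
mgmt_meta = n_ops           # operation management to the meta-system

print(env_ops_mgmt + same_type + mgmt_meta)  # -> 21
```

A truly generic diagram would have to show these counts growing with the number of operations, which is exactly what a flat 2D drawing struggles to convey.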

Confusion about operations and their direct management

Depending on the context, System One refers either to the operations or to their direct management. In some diagrams, S1 labels the circles, and in others the squares linked to them. Referring to one or the other in the text, depending on which channels are described, only adds to the confusion. That is related to the next, more general, problem.

Notation and labelling of systems

All diagrams representing the VSM in the original writings and all interpretations I’ve seen so far suggest that circles represent System One, and triangles pointing up and down, represent System Two and Three* respectively. Additionally, most VSM overviews state exactly that in the textual description. My assertion is almost the opposite:

What is labelled as S1 and what is shown as circles are both not representing S1.

That might come as a shock to many, and yet, now citing Beer, System One is not the “circles” but:

The collection of operating elements (that is, including their horizontal and vertical connexions)

The Heart of Enterprise, page 132

Strictly speaking, a system is a system because it shows emergent properties, so it is more than the collection of its parts1. But even referring to it as a collection reveals the serious misinterpretation of taking only one of its parts to represent the whole system.

They show something between generic and example model

Communicating such matters to managers trained in business schools wasn’t an easy task. And it is even more challenging nowadays. There is a lot to learn and even more to unlearn. It is not surprising, then, that even in the generic models three operations are typically illustrated (the same goes for System Two). Yet I have always missed a truly generic representation, or what many would prefer to call a “meta-model”.

Hierarchical implication

It can’t be repeated enough that the VSM is not a hierarchical model, and yet it is often perceived and used as such, or avoided precisely because of that perception. It seems that recursivity is a challenging concept, while anything slightly resembling a hierarchy is quickly taken to represent one. Sadly, the VSM diagram only amplifies that perception, although the orthogonality of the channels serves an entirely different purpose; Stafford Beer rarely missed an opportunity to remind us of that. Nevertheless, whatever is positioned higher implies seniority, and the examples of mapping to actual roles and functions only help to confirm this misinterpretation.

 

There are other issues as well, but my point was to outline the motivation for trying alternative approaches to modelling the VSM, without altering the essence of the governing principles. Here is one humble attempt to propose a different representation (there is a less humble one which I’m working on, but it’s still too early to talk about it). The following diagram favours a circular instead of an orthogonal representation, which I hope at least dispels the hierarchical perception. Yet, from a network point of view, the higher positioning of S3 is chosen on purpose, as the network clearly shows that this node is a hub.

[Figure: Generic circular view of the Viable System Model]

System One is represented by red colouring, keeping the conventional notation for the operations (S1.o) and their direct management (S1.m). As mentioned above, apart from solving the labelling issue, the intention is to have it as a generic model. If that poses a problem for those used to the hybrid representation, here’s how it would look if two S1s are shown:

 

[Figure: Circular network view of the Viable System Model with two operations]

I hope this proposal solves, fully or partially, the five issues explained earlier and brings a new perspective that can be insightful on its own. In any case, the aim is to be useful in some way. If not as it is now, then by triggering feedback that might bring it to a better state. Or it can be useful simply by provoking other, more successful attempts.

 

  • 1
    that is by itself a popular but problematic statement.

More on Requisite Inefficiency

The “slides” supporting my talk on Requisite Inefficiency a couple of months ago have been on Slideshare since then, but I haven’t had the time to share them here. Which I do now.

The various manifestations of Requisite Inefficiency in both organisms and organisations can be understood by observing the maintenance of balances between homeostasis and heterostasis (as in adaptive immune systems), and between exploration and exploitation (the foraging of ants, or curiosity-driven versus market-driven research), as well as various types of redundancy or shift of function. The latter can be elastic, as it is in degeneracy, or plastic, as it is in exaptation.
Having an underutilised structure/function that is capable of providing the deficit of variety to the utilised structures of a system in order to match the complexity of an external stimulus, or that can be adapted in a sufficiently short time to do so, is a prerequisite for survival.

Variety, Part 2

Can you deal with it?

Deal originates from divide. It initially meant only to distribute. Now it also means to cope, manage and control. We manage things by dividing them. We eat an elephant piece by piece, we start a journey of a thousand miles with a single step, and we divide to conquer.

(This is the second part of a series on the concept variety used as a measure of complexity. You may want to read the previous part before this one, but even doing it after or not at all is fine.)

That proved to be a good way to manage things, or at least some things, and in some situations. But often it’s not enough. To deal with things, and here I use deal to mean manage, understand and control, we need requisite variety. When we don’t have enough variety, we can get it in three ways: by attenuating the variety of what has to be dealt with, by amplifying our variety, or by doing a bit of both when the difference is too big1.

And how do we do that? Let’s start by putting some common activities in each of these groups. We attenuate external variety by grouping, categorising, splitting, standardising, setting objectives, filtering, reporting, coordinating, and consolidating. We amplify our variety by learning, trial and error, practising, networking, advertising, buffering, contingency planning, and innovating. And we can add a lot more to both lists. We use such activities, but when doing them we need requisite variety as well. That’s why we have to apply them at a different scale2. We learn to split and we split to learn, for example.

Attenuate and amplify variety

What about the third group? What kind of activities can both amplify our variety and attenuate the variety of what we need to deal with? It would be easy to fill that third group with pairs from each list, but aren’t there single activities that do both? There are. Here are two suggestions: planning and pretending.

With planning, we gain variety by being prepared for at least one scenario, especially in the parts we can control, in contrast to those not prepared even for that. But then we reduce the different possibilities to one, and try to absorb part of the deflected variety with risk-management activities.

Planning is important in both operations and projects, and yet, in a business setting, we can get away with poor planning long enough to lose the opportunity to adapt. That is the case in systems with delayed feedback. That’s also why I like the test of quick-feedback, skin-in-the-game situations, like sailing. In sailing, you are doomed if you sail off without a plan, or if you stick to the plan in the face of unforeseen events. And that’s valid at every planning level, be it a week, a day or an hour.

The second example of an activity that both amplifies and attenuates variety is pretending. It can be so successful as to reinforce its application to the extreme. Pretending is so important for stick insects, for example, that they apply it 24/7. That proved to be really successful for their survival, and they’ve been getting better at it for the last fifty million years. It also turned out to be so satisfactory that they can live without sex for a million years. Well, that’s for a different reason, but nevertheless, their adaptability is impressive. The evolutionary pressure to better resemble sticks made them sacrifice their organ symmetry so that they could afford thinner bodies. Isn’t it amazing: you give up one of your kidneys just to be able to lie better? Now, why do I argue that deception in general, and pretending in particular, has a dual role in the variety game? Stick insects amplify their morphological variety and, through this, attenuate the perception variety of their predators. A predator sees a stick as a stick and a stick insect as a stick: two states attenuated into one.

Obviously, snakes are more agile than stick insects, but for some species that agility goes beyond the capabilities of their bodies. Those snakes don’t pretend 24/7, just when attacked: they pretend to be dead. One of those species, the hognose snake, goes so far in its act as to stick its tongue out, vomit blood and sometimes even defecate. That should be not just convincing but quite off-putting even for the hungriest of predators.

If pretending can be such a variety amplifier (and attenuator), pretending to pretend can achieve even more remarkable results. A way to imagine the variety proliferation of such a structure is to use an analogy with the example of three connected black boxes that Stafford Beer gave in “The Heart of Enterprise”. If the first box has three inputs and one output, each of them with two possible states, then the input variety is 8 and the output variety is 256. Going from 8 to 256 with only one output is impressive, but when that becomes the input of the next black box, also with a single output, its output variety reaches the cosmic number of 1.157×10^77.
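
A quick sketch of that proliferation, under the usual reading of Beer’s example that each box has one binary output and can realise any mapping from its input states to that output:

```python
# Beer's black-box chain from "The Heart of Enterprise": a box with a
# single binary output can realise one of 2**(number of input states)
# distinct transformations, so varieties explode when boxes are chained.
def output_variety(input_states: int) -> int:
    return 2 ** input_states

v_in = 2 ** 3                  # three binary inputs -> 8 input states
v_mid = output_variety(v_in)   # 2**8 = 256
v_out = output_variety(v_mid)  # 2**256, about 1.158e+77

print(v_mid)           # 256
print(f"{v_out:.3e}")  # 1.158e+77
```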

That seems to be one of the formulas of the writer Kazuo Ishiguro. As Margaret Atwood put it, “an Ishiguro novel is never about what it pretends to pretend to be about”. No wonder “Never Let Me Go” is so good. And the author, having much more variety than the stick insects, didn’t have to give his organs to be successful. He just made up characters that gave theirs.

  • 1
    There is yet another way: to change our goal.
  • 2
    Some may prefer to put it more technically as “different level of recursion”.

Variety, Part 1

The cybernetic concept of variety is enjoying some increase in usage, both in frequency and in the number of different contexts. Even typing “Ross Ashby” in Google Trends confirms that impression. In the last two years, the interest seems stable, while in the previous six it was non-existent, save for a lonely peak in May 2010. Google Trends is not a source of data to draw serious conclusions from, yet it confirms the impression coming from tweets, blogs, articles, and books. On the one hand, that’s good news. I still find the usage insignificant compared to what I believe it should be. Nevertheless, little attention is better than none. On the other hand, it attracts some interpretations leading to a misapprehension of the concept. That’s why I hope it’s worth exchanging more ideas about variety, so that either the ideas with more variety would enjoy wider adoption, or those using them would gain more benefits, or both.

The concept of variety as a measure of complexity was preceded and inspired by the information entropy of Claude Shannon, also known as the “amount of surprise” in a message. That work, although stimulated by the development of communication technologies in the first half of the twentieth century, had its roots in statistical mechanics and Boltzmann’s definition of entropy. Boltzmann, departing from classical mechanics and thermodynamics, defined entropy in terms of the number of possible microstates corresponding to the macro-state of a system.

Variety is defined as the number of possible states of a system. It can also be applied to a set of elements: the number of different members determines the variety of the set. And it can be applied to the members themselves, which can be in different states, so that the set of possible transitions has a certain variety of its own. This is the first important property of variety: it’s recursive. I’ll come back to this later. Now, to clarify what is meant by “state”:

By a state of a system is meant any well-defined condition or property that can be recognised if it occurs again.

Ross Ashby

Variety can sometimes be easy to count. For example, after opening the game in chess with a pawn to D4, the queen has a variety of three: not to move, or to move to one of the two available squares. If only the immediate variety gain is counted, then choosing D2 as the next move would give a variety of 9, while D3 would give 16. That’s not enough to tell whether a move is good or bad, especially keeping in mind that some of that gained variety is not effective. However, in situations of uncertainty, in games and elsewhere, moving to a place that both increases our future options and decreases those of the opponent seems good advice.

Variety can be expressed as a number, as in the chess example, but in many cases it’s more convenient to use the logarithm of that number (if logarithms sound like a distant memory from school years, nowadays there are easy ways to refresh them in minutes). The common practice, perhaps because of the first areas of application, is to use binary logarithms, in which case variety can be expressed in bits. It is indeed more convenient to say that the variety of a four-letter code using the English alphabet is 18.8 bits rather than 456,976. There is an extra bonus: when the logarithmic expression is used, varieties of elements are combined by adding instead of multiplying.

Variety is sometimes referred to, and counted, as permutations. That might be fine in certain cases, but as a rule it is not. To use the example of the 4-letter code: it has 358,800 permutations (26 factorial divided by 22 factorial), while the variety is 456,976 (26 to the power of 4).
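
Both numbers are easy to verify with the standard library (`math.perm` requires Python 3.8+):

```python
import math

# The 4-letter code over the 26-letter English alphabet.
variety = 26 ** 4          # 456_976 possible codes
bits = math.log2(variety)  # 4 * log2(26), about 18.8 bits

# Permutations (no repeated letters) undercount the variety.
perms = math.perm(26, 4)   # 26!/22! = 358_800

print(variety, round(bits, 1), perms)  # 456976 18.8 358800
```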

Variety is relative. It depends on the observer. That’s obvious even from the word “recognised” in the definition of state. If, for example, a clock has two hands that are exactly the same, or at least so similar that an observer can’t tell the difference, then, from the point of view of that observer, the clock has much lower variety than a regular one. The observer will not be able to distinguish, for example, 12:30 from 6:03, as they will appear to be the same state of the clock.

[Figure: Clock with indistinguishable hands]

This can be seen as another dependency: that on the capacity of the channel or the variety of the transducer. For example, it is estimated that most humans can distinguish up to 10 million colours, while tetrachromats can distinguish at least ten times more. The variety of the transducer and the capacity of the channel should always be taken into account.

When working with variety, it is useful to study the relevant constraints. If we throw a stone from the surface of the Earth, certain constraints, including those we call “gravity” and the “resistance of the air”, allow a much smaller range of possible states than if those constraints were not present. Ross Ashby made the following observations: “every law of nature is a constraint”, and “science looks for laws; it is therefore much concerned with looking for constraints”.

There is a popular way of defining a system as something that is more than the sum of its parts. Let’s see this statement through the lens of varieties and constraints. Suppose we have two elements, A and B, each of which can be in two possible states on its own, but when they are linked, A can bring B into a third state, and B can do the same to A. In this case, the system AB certainly has more variety than A and B unbound. But if, when linked, A and B inhibit each other, each allowing one state instead of two, then it is clearly the opposite. That motivates rephrasing the popular statement to “a system might have different variety than the combined variety of its parts”.

If that example with A and B is too abstract, imagine a sprint kayak with two paddlers working in sync, and then compare it with a similar setting in which one of the paddlers paddles while the other holds her paddle in the water.
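
The A-and-B example can be sketched in a few lines, assuming for illustration that every joint state stays reachable when coupling adds a state, and that inhibition leaves a single joint state:

```python
from itertools import product

# Toy version of the A-and-B example. Alone, each element has two states.
a_states = {"a1", "a2"}
b_states = {"b1", "b2"}
unbound_variety = len(a_states) * len(b_states)  # 4 joint states

# Coupling that adds a state: A can push B into "b3", B can push A into "a3".
coupled_more = set(product(a_states | {"a3"}, b_states | {"b3"}))

# Coupling that inhibits: each element is locked into a single state.
coupled_less = {("a1", "b1")}

print(unbound_variety, len(coupled_more), len(coupled_less))  # 4 9 1
```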

Yet, “is more than the sum of” can be retained but then another modification is needed. Here’s one suggested by Heinz von Foerster:

The measure of the sum of the parts is greater than the sum of the measures of the parts. One is the measure of the sum; the other is the sum of the measures. Take, for example, the measurement function “to square,” which makes this immediately apparent. I have two parts, one is a, the other b. Now I have the measure of the sum of the parts. What does that look like? a + b as the sum of the parts squared, (a + b)² gives us a² + 2ab + b². Now I need the sum of the measures of the parts, and with this I have the measure of a (= a²) and the measure of b (= b²): a² + b². Now I claim that the measure of the sum of the parts is greater than the sum of the measures of the parts and state that: a² + b² + 2ab is greater than a² + b². So the measure of the sum is greater than the sum of the measures. Why? a and b squared already have a relation together

Heinz von Foerster. The Beginning of Heaven and Earth Has No Name (Meaning Systems) (p. 18)

And now about the law of requisite variety. Ashby stated it as “variety can destroy variety”, Beer as “only variety can absorb variety”, and it has other formulations, such as “the larger the variety of actions available to a control system, the larger the variety of perturbations it is able to compensate”. Basically, when the variety of the regulator is lower than the variety of the disturbance, the variety of the outcome is high. A regulator can only achieve the desired outcome variety if its own variety is equal to or higher than that of the disturbance. The recursive nature mentioned earlier can now easily be seen if we look at the regulator as a channel between the disturbance and the outcome, or if we account for the variety of the channels at the level of recursion with which we started.
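
In its simplest counting form, the law says the best achievable outcome variety is the disturbance variety divided by the regulator variety, rounded up, since the regulator can at most map groups of disturbances to the same outcome. A minimal sketch:

```python
import math

# Counting form of the law of requisite variety: a regulator with V_R
# moves can at best map the V_D disturbances into V_R groups sharing an
# outcome, so the outcome variety cannot drop below ceil(V_D / V_R).
def min_outcome_variety(v_disturbance: int, v_regulator: int) -> int:
    return math.ceil(v_disturbance / v_regulator)

print(min_outcome_variety(8, 8))  # 1 -- full regulation is possible
print(min_outcome_variety(8, 4))  # 2 -- some variety leaks into the outcome
print(min_outcome_variety(8, 1))  # 8 -- no regulation at all
```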

To really understand the significance of this law, it should be seen how it exerts itself in various situations, which we wouldn’t normally describe with words such as “regulator”, “perturbations” and “variety”.

In the chess example, the power of each piece is a function of its variety: the variety given by the rules, reduced by the constraints at every move. Was there a need to know about requisite variety to design this game? Or any other game, for that matter? Or was it necessary in order to know how to wage war? Certainly not. And yet, it’s all there:

It is the rule in war, if our forces are ten to the enemy’s one, to surround him; if five to one, to attack him; if twice as numerous, to divide our army into two.

Sun Tzu, The Art of War

Let’s leave the games now and come back to the relative nature of variety. Ships’ light signals must comply with the International Regulations for Preventing Collisions at Sea (IRPCS). The agreed signals have a reduced variety to communicate the states of the ships, but enough to ensure the required control. For example, if an observer sees one green light, she knows that another ship is passing from left to right; if she sees one red light, it is passing from right to left. There are lots of states – different angles of the course of the other ship – that are reduced to these two, but that serves the purpose well enough. Now, if she sees both red and green, the ship is coming straight towards her. That’s a dangerous situation, and the reduction of variety for that state has to be very low.

The relativity of variety is not related only to the observer’s “powers of discrimination”, or to the purpose of the regulation. It can also depend on the context. Aesop’s fable “The Fox and the Stork” comes to mind.

Fables, and stories in general, influence people and survive centuries. But why do we need a story instead of getting the moral of the story directly? Yes, a story is more interesting; there is the element of uncertainty and all that. But there is something else. Stories are ambiguous and open to interpretation. They leave many things to be completed by the readers and listeners. To put it differently, they have much higher variety than morals and values.

That’s it for this part.

And here is the next.

Reasoning with Taskless BPMN

Was it Lisbon that attracted me so much, or the word Cybernetics in the subtitle, or the promise of Alberto Manuel that it would be a different BPM conference? Maybe all three, and more. As it happened, the conference was very well organised and indeed different. The charm of Lisbon was amplified by the nice weather, much appreciated after the long winter. As for Cybernetics, it remained mainly in the subtitle, but even that is good enough if it makes more people go beyond the Wikipedia articles and other easily digestible summaries.

My presentation was about using task-free BPMN, which I believe, and the results so far confirm, can bring serious benefits to the modelling of both pre-defined processes and those with some level of uncertainty. In addition, there is an elegant data-centric way to execute such processes using reasoners. Enterprise Architecture (EA) descriptions can also be improved if done with explicit semantics. Currently, EA descriptions are isolated from the operational data: the former are not linked with what actually happens, and the latter does not get timely updates from the strategy. More on this in another post. Here’s the slide deck (you can watch the slides with animations on YouTube, with no voice-over, as well as a 7-minute compilation of the talk):