The Role of Meaning and the Meaning of Roles

Let’s start with roles. ‘Role’ comes from ‘roll’: the actor’s part used to be written on a roll of paper. A role is something prescribed and then performed. But the notion evolved from roles performed as prescribed, through roles performed otherwise, to roles that had not been prescribed at all.

Roles are about relations. In fact, in Description Logic roles play the same role as relations, associations, properties and predicates do in other formal languages. If John has a son George and is 30 years old, then George is in the role of a son for John, and even 30, although not an actor, plays the role of an age for John. Roles are inherently relational: a relation to itself or to something else. There is never just a role, always a ‘role in’ or a ‘role for’.
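To make the Description Logic reading concrete, the example could be written as two role (property) assertions, with made-up role names:

hasSon(John, George)
hasAge(John, 30)

George fills the role hasSon for John, and the value 30 fills the role hasAge; in OWL terms the first would be an object property assertion and the second a data property assertion.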

Roles are not just relational; they are often determined by the dynamics of interactions. Here’s a handy example. The role of this text, with you in the role of a reader, will be, as assigned by me in the role of a writer, to transfer my thoughts on the meaning of roles. But it will more likely trigger the construction of your own similar and complementing thoughts, and by doing so it will play a different role, one which, while evoked by me, is determined by you.

As there is now something about the meaning of roles, it’s time to introduce the role of meaning. If the role is always a role-in, my interest here is in the role of meaning in living and social systems. These systems have some things in common. One such thing is that they have internally maintained autonomy. Such autonomous systems bring forth and co-evolve with their niches. And that happens through the creation of meanings which motivate attitudes and actions. But how does meaning come about?

Before meaning, there is the primary cognitive act of making a distinction: bringing something up out of its background, distinguishing a thing from that which it is not. That act determines and is determined by the dynamics of the interaction between the system and its niche (by ‘system’ here I will only refer to systems with internally maintained autonomy). This dynamics is circular and recursive. Roughly speaking, it goes like this:

The act of distinction, the very making of difference, changes the making of difference and brings forth “the difference that makes a difference”, in Bateson’s words, or the “surplus of significance”, in Varela’s words. It is also co-dependent: the sense made changes the sense-making, which changes the sense that is made, and it then brings forth the behaviour-changing relation between the sense-maker and the meaning of the distinguished element. This results in an attitude of attraction, aversion or neutrality (or something more sophisticated, like staying put, paralysed by equal amounts of temptation and fear). In other words, the sense-making transcends into value-making. The value-making evolves itself and, by downward causality, influences the evolution of the distinguishing and sense-making capability, which is in fact again a distinguishing capability, only now distinguished as a second-order one.

Before going further, I need to clarify the use of ‘sense-making’ and ‘value-making’. ‘Sense-making’ is giving meaning to what is experienced. Here I use it with an emphasis on the act of making, of the creation, or more precisely the co-creation, of sense. It is not that the sense is out there and all we need to do is disclose it. No, we (or whatever the system in focus is) are the ones that actually make it; the origin is within us, or rather within the dynamics between us and what we interact with.

‘Value-making’ should not be confused with value-adding. The system makes a distinction of a higher order, in the sense of directing its behaviour based on this distinction, hence the choice of ‘value’. The value is not specified by the distinguished element; it is determined by the internal structure of the system and by the dynamics between the system and the environment it is structurally coupled with. This, and the fact that value-making is sense-making of a higher order, is where the preference for ‘making’ comes from.

Only a small part of the environment is interacted with, constantly changes its content, and actually matters: the niche. The niche is not a single niche but a network of dynamically changing niches that influence one another. A family is one niche, but that is not the family described by somebody knowing all the facts, nor a family which would be invariant for each of the family members. The family as a niche is a subjective construction of interactions, memories, emotions, attention and imagination, unique for every member of the family, as long as they consider themselves a member. This could easily exclude actual members and include those that have a lasting experience of being such. And the same applies to the work circle and the friends circle, to all occasional and recurring encounters, and to virtual communities. But then, apart from interactions with other humans, we also have a niche made of the air we breathe, the grass we walk on, the stairs we climb and descend. And all that, changed by us, is changing us and changing the way we change it. But those changes, as long as a system is viable, serve to protect the identity from changing. We take things from our niche to make them into more of ourselves, and we become better at doing so.

Unlike the air, the grass and the stairs, a family or an organisation is an autonomous system which maintains its identity. They have their own niches. Moreover, they are social systems. There are different ways to look at them. One way would be as autopoietic systems of communications, having humans as their niche (Luhmann); another would be to have humans as both sub-systems and niche, depending on whether the processes they participate in are part of the closed network of processes creating the identity of the system in focus. And that can be talked about in terms of the roles humans play.

Now we are ready to see what the role of meaning has to do with the meaning of roles. A living bacterium is at a stage of development where the sense-making has transcended into value-making. It does not only distinguish an environment with low sucrose concentration from one with high; it prefers and moves towards the latter. As Evan Thompson put it, while sucrose is a “present condition of the environment, the status of the sucrose as a nutrient is not”, “it is a relational feature, linked to the bacterium’s metabolism”. And this is how the creation of meaning and roles are co-dependent. The dynamics between the bacterium and its environment, by making the sucrose relevant for the viability of the bacterium, realises the role of sucrose as a nutrient.

But that is not only relevant for cells and living organisms. It applies to social systems as well. A company acts so as to turn some part of the environment into employees, other parts into clients, partners and so on. And for the same reason as the bacterium – to maintain its viability.

Once the formal role of an employee is assigned by a contract, less formal roles are enabled by different mechanisms. It is now common to refer to such mechanisms as ‘Governance’, or as an essential part of it. That part is a meta-role: it is assigned to some people so that they can determine the roles of others. One and the same person often plays several roles.

Roles can be determined by formal assignment, by methodology, or by a Governance body, but they can also be invented and self-assigned by the actors themselves, as is typical for the organisations researched by Frederic Laloux. In that case, the evolutionary nature and the granularity of the roles both deal with the pathologies of assigned roles (serving as status currency, being perpetuated even when obsolete, being determined by politics and so on) and make organisations more responsive to change. Again the system, by what it does, takes from its environment what it needs to make more of itself, recursively: it turns non-employees into employees and vice versa, and employees take up and leave roles, as determined by the dynamics of their interactions.

And so it seems that the meaning of roles is continuously realised by the role of meaning as a way in which a system generates its niche by asserting itself and maintaining its viability.

Posted on January 11, 2015.





Language and meta-language for EA

That was the topic of a talk I gave in October last year at an Enterprise Architecture event in London. These are the slides, or most of them, anyway.

They probably don’t tell the story by themselves and I’m not going to help them here unless this post provokes a discussion. What I’ll do instead is clarify the title. “Language” refers to the means of describing organisations. They could be different. Given the current state of maturity, I have found those based on description logic to be very useful. By the “current state of maturity” I mean that the theoretical development of a method, its application, the technologies supporting it and the experience with applying them justify investing in their use and in their further development. Although I find such a language clearly superior to the alternatives in use, that doesn’t mean there are no issues, nor that there are no approaches showing convincing solutions to those issues. However, the practice with the latter, and with the tools available for them, doesn’t give me enough reason to stand behind them. The situation with the “meta-language” is similar, but let’s first clarify why I call it that.

Metalanguage is commonly defined as a language about language. If that were the meaning I intended, it would have that type of relation to the language, and then what I’m writing here could probably be referred to as a mixture of another meta- and a meta-meta-language. But that wasn’t the meaning I intended. I have found that there is a need to describe properly the “objects” that people in organisations are concerned about and how they relate to each other. It could be some way to represent physical things such as buildings, documents and servers, or abstract concepts such as services, processes and capabilities. And although it also covers abstract things, I sometimes call it a “language for the substance”.

Organisations are autonomous and adaptive systems, continuously maintained by their interaction with their niche, the latter being brought forth from the background by that very interaction. While a language such as the one proposed can be useful for understanding the components of an organisation, it doesn’t help much in understanding the dynamics and the viability. The language for the substance cannot be used to talk about the form. That’s why there is a need, maybe temporarily until we find a better solution and probably a single language, for another language, and that other language is what I called the meta-language in the presentation.

As this is a language for the form, I keep looking for ways to utilise some proposals, one of the most fascinating being George Spencer-Brown’s Laws of Form. Papers like this one by Dirk Baecker give me hope that it is possible. Until then, for the purposes of Enterprise Architecture, I find the Viable System Model, with the whole body of knowledge and practice associated with it, to be the most pragmatic meta-language.

Posted on January 3, 2015.





Essential Balances in Projects

These are part of the frames from the projects-flavour of the “Essential Balances” theme, delivered in a workshop format at a training event yesterday in Athens.

Note: new versions of the workshop slidedeck will appear in the frame above as soon as I update the file in SlideShare. That should explain why instead of Athens and July, the first slide refers to a different event.

Posted on July 8, 2014.





Redrawing the Viable System Model diagram

I’ve been arguing repeatedly that trying to get the Viable System Model from overviews, introductions and writings based on or about it can put the curious mind in a state of confusion or simply lead to wrong interpretations. The absolute minimum is reading each of the three books explaining the model at least once. But better twice. Why? There are at least two good reasons. The obvious one is to better understand some points and pay attention to others that have probably been missed during the first run. But there is also another reason. Books are linear by nature, and when they tackle non-linear subjects, a second reading gives the chance to better interpret each part of the text while having in memory more or less all the other parts which relate to it.

Still, one of the things expected to be most helpful is in fact what brings about confusion, aversion or misuse: the VSM diagrams. They clearly favour expected ease of understanding over rigour, and yet on some important points they fail at both. Here is my short list of issues, followed by a description of each:

  • Representation of the channels
  • Confusion about operations and their direct management
  • Notation and labelling of systems
  • Something in between a generic and an example model
  • Hierarchical implication

Representation of the channels

Stafford Beer admitted several times in his books the “diagrammatic limitations” of the VSM representations. Some of the choices had to do with the limitations of a 2D representation, and others, I guess, aimed to avoid clutter. Figure 26 of “The Heart of Enterprise” is a good example of both. It shows eleven loops but implies twenty-one: 3 between environment, operations and management, multiplied by 3 for the choice of showing three operations, then another 9 = 3×3 for the loops between same-type elements, and finally 3 more between the operations management and the meta-system.
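Put as a quick count, following the figure’s own grouping: 3×3 + 3×3 + 3 = 9 + 9 + 3 = 21 implied loops, of which only eleven are drawn.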

Confusion about operations and their direct management

Depending on the context, System One refers either to operations or to their direct management. In some diagrams S1 is the label of the circles, and in others – of the squares linked to them. Referring to one or the other in the text, depending on which channels are described, only adds to the confusion. That is related to the general problem of

Notation and labelling of systems

All diagrams representing the VSM in the original writings, and all interpretations I’ve seen so far, suggest that circles represent System One, and that triangles pointing up and down represent System Two and Three* respectively. Additionally, most VSM overviews state exactly that in the textual description. My assertion is almost the opposite:

What is labelled as S1 and what is shown as circles are both not representing S1.

That might come as a shock to many and yet, now citing Beer, System One is not the “circles” but:

The collection of operating elements (that is, including their horizontal and vertical connexions)

The Heart of Enterprise, page 132

Well, strictly speaking, a system is a system because it shows emergent properties, so it is more than the collection of its parts; but even referring to it as a collection reveals how serious the misinterpretation is when only one of its parts is taken to represent the whole system.

Something in between a generic and an example model

Communicating such matters to managers trained in business schools wasn’t an easy task. And it is even more challenging nowadays. There is a lot to learn and even more to unlearn. It is not surprising then that even in the generic models typically three operations are illustrated (same for System 2). Yet, I was always missing a truly generic representation, or what many would prefer to call a “meta-model”.

Hierarchical implication

It can’t be repeated enough that the VSM is not a hierarchical model, and yet it is often perceived and used as such, or not used precisely because of that perception. It seems that recursivity is a challenging concept, while anything slightly resembling hierarchy is quickly taken to represent one. And sadly, the VSM diagram only amplifies that perception, although the orthogonality of the channels serves an entirely different purpose. Stafford Beer rarely missed an opportunity to remind us of that. Nevertheless, whatever is positioned higher implies seniority, and the examples of mapping to actual roles and functions only help to confirm this misinterpretation.

 

There are other issues as well, but my point was to outline the motivation for trying alternative approaches to modelling the VSM, without altering the essence or the governing principles. Here is one humble attempt to propose a different representation (there is a less humble one which I’m working on, but it’s still too early to talk about it). The following diagram favours a circular instead of an orthogonal representation, which I hope at least succeeds in dispelling the hierarchical perception. Yet, from a network point of view, the higher positioning of S3 is chosen on purpose, as the network clearly shows that this node is a hub.

[Figure: Generic circular view of the Viable System Model]

System One is represented by red colouring, keeping the conventional notation for the operations (S1.o) and their direct management (S1.m). As mentioned above, apart from solving this, the intention was to have it as a generic model. If that poses a problem for those used to the hybrid representation, here’s how it would look if two S1s are shown:

 

[Figure: Circular network view of the Viable System Model with two operations]

I hope this proposal solves fully or partially the five issues explained earlier and brings a new perspective that can be insightful on its own. In any case the aim is to be useful in some way. If not as it is now, then by triggering feedback that might bring it to a better state. Or, it can be useful by just provoking other, more successful attempts.

 

Posted on April 14, 2014.





More on Requisite Inefficiency

The “slides” supporting my talk on Requisite Inefficiency a couple of months ago have been on Slideshare since then but I haven’t had the time to share them here. Which I do now.

The various manifestations of Requisite Inefficiency in both organisms and organisations can be understood by observing how balances are maintained between homeostasis and heterostasis (as in the adaptive immune system), between exploration and exploitation (the foraging of ants, or curiosity-driven vs market-driven research), as well as in various types of redundancy or shift of function. The latter can be elastic, as it is in degeneracy, or plastic, as it is in exaptation.
Having an underutilised structure or function that is capable of providing the deficit of variety to the utilised structures of a system in order to match the complexity of an external stimulus, or that can be adapted in a sufficiently short time to do so, is a prerequisite for survival.
Posted on March 30, 2014.





Variety – part 2

Can you deal with it? It is amazing how language evolved to adapt to the reductionist mindset. “Deal”, which originated from “divide” and initially meant only to distribute and then to trade, is now used as a synonym for cope, manage and control. We manage things by dividing them. We eat elephants piece by piece, we start journeys of a thousand miles with a single step, and we divide to conquer.

(This is the second part of a sequence devoted to the concept of “variety” used as a measure of complexity. It’s a good idea to read the previous part before this one, but reading it after, or not at all, is fine.)

And that indeed proved to be a good way of managing things, or at least some things, and in some situations. But often it’s not enough. To deal with things, and here I use “deal” to mean manage, understand, control, we need requisite variety, and when we don’t have it initially, we can get it in three ways: by attenuating the variety of what has to be dealt with, by amplifying our variety, or by doing a bit of both when the difference is too big.

And how do we do that? (I use we/them instead of regulator/regulated, systemA/systemB or organisation/environment, because I find it easier to imply the perspective and the purpose this way.) Let’s start with an attempt to put some very common activities in each of these groups. We attenuate external variety by grouping, categorising, splitting, standardising, setting objectives, filtering, reporting, coordinating, and consolidating. We amplify our variety by learning, trial-and-error, practising, networking, advertising, buffering, doing contingency planning, and innovating. And we can add a lot more to both lists, of course. We use activities from both lists, but when doing these activities we need requisite variety as well. That’s why we have to apply them at the lower level of recursion. We learn to split and we split to learn, for example.

[Figure: Attenuate and amplify variety]

It would be easy to put pairs from each list into the third group, but the task now is to classify single activities. Here are two suggestions that seem to fit: planning and pretending.

With planning we get higher variety by being prepared for at least one scenario, especially in the parts we can control, in contrast to those not prepared even for that. But then we reduce different possibilities to one and try to absorb part of the deflected variety with risk management activities. Planning is important in operations and projects, but somehow in a business setting we can get away with poor planning, at least for long enough to lose the opportunity to adapt. And that is the case in many systems with delayed feedback. That’s why I like the test of quick-feedback and skin-in-the-game situations, like sailing. In sailing, not having a good plan can be equally disastrous as sticking to a plan for too long and not adapting or replacing it quickly based on evidence against the assumptions of the initial one. And that’s valid at every level, no matter if the plan is for a week, a day or an hour.

Pretending is even more interesting in its dual role. It can be so successful as to reinforce its application to the extreme. Pretending is so important for stick insects, for example, that they apply it 24/7. That proved to be really successful for their survival and they’ve been getting better at it for the last fifty million years. It turned out to be so satisfactory that they can live without sex for a million years. Well, that’s for a different reason, but nevertheless their adaptability is impressive. The evolutionary pressure to better resemble sticks made them sacrifice their organ symmetry so that they can afford thinner bodies. Isn’t it amazing: you give up one of your kidneys just to be able to lie better? Now, why do I argue that deception in general, and pretending in particular, has a dual role in the variety game? Stick insects amplify their morphological variety and through this they attenuate the perception variety of their predators. A predator sees a stick as a stick and a stick insect as a stick: two states attenuated into one.

Obviously snakes are more agile than stick insects, but for some types that agility goes beyond the capabilities of their bodies. Those snakes don’t pretend 24/7 but just when attacked. They pretend to be dead. And one of those types, the hognose snake, goes so far in its act as to stick its tongue out, vomit blood and sometimes even defecate. Now, that should be not just convincing but quite off-putting even for the hungriest of predators.

If pretending can be such a variety amplifier (and attenuator), pretending to pretend can achieve even more remarkable results. A way to imagine the variety proliferation of such a structure is to use an analogy with the example of three connected black boxes that Stafford Beer gave in “The Heart of Enterprise”. If the first box has three inputs and one output, each of them with two possible states, then the input variety is 8 and the output variety is 256. Going from 8 to 256 with only one output is impressive, but when that is the input of a third black box, having only one output as well, then its variety reaches the cosmic number of 1.157×10^77.
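The arithmetic behind those numbers, assuming each box has a single binary output whose behaviour can depend arbitrarily on its input states: 2^3 = 8 possible input states, 2^8 = 256 possible ways to map them to an output, and 2^256 ≈ 1.157×10^77 at the next step.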

That seems to be one of the formulas of the writer Kazuo Ishiguro. As Margaret Atwood put it, “an Ishiguro novel is never about what it pretends to pretend to be about”. No wonder “Never Let Me Go” is so good. And the author, having much more variety than the stick insects, didn’t have to give his organs to be successful. He just made up characters to give theirs.

Posted on September 14, 2013.





Variety – part 1

The cybernetic concept of variety is enjoying some increase in usage, both in frequency and in the number of different contexts. Even typing “Ross Ashby” in Google Trends supports that impression.

[Figure: Google Trends results for “Ross Ashby”]

In the last two years the interest seems stable, while in the previous six it was non-existent, save for a lonely peak in May 2010. That’s not a source of data to draw conclusions from, but it supports the impression coming from tweets, blogs, articles and books. On one side, that’s good news. I still find the usage insignificant compared to what I believe it should be, given the concept’s potential and the tremendously increased supply of problems which, if it cannot help solve, it can at least help understand. Nevertheless, some stable attention is better than none at all. On the other side, it attracts a variety of interpretations, and some of them might not be healthy for the application of the concept. That’s why I hope it’s worth exchanging more ideas about variety, so that those ideas which have more variety themselves would either enjoy wider adoption, or bring more benefits to those using them, or both.

The concept of “variety” as a measure of complexity was preceded and inspired by the “information entropy” of Claude Shannon, also known as the “amount of surprise” in a message. That, although stimulated by the development of communication technologies in the first half of the twentieth century, had its roots in statistical mechanics and Boltzmann’s definition of entropy. Boltzmann, unlike classical mechanics and thermodynamics, defined entropy through the number of possible microstates corresponding to the macro-state of a system. (These four words, “possible”, “microstates”, “macro-state” and “system”, deserve a lot of attention. Anyway, they’ll not get it in this post.)
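For reference, the standard formulations (a summary, not quotes from Shannon or Boltzmann): Shannon’s entropy of a source with message probabilities p_i is H = −Σ p_i·log2(p_i) bits, and Boltzmann’s entropy of a macro-state compatible with W microstates is S = k·ln(W). When all N states are equally probable, H reduces to log2(N), which is exactly variety expressed in bits.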

Variety is usually defined as the number of possible states of a system. It is also applied to a set of elements: the number of different members determines the variety of the set, and the members themselves can be in different states, so the set of possible transitions has a certain variety of its own. This is the first important property of variety: it is recursive. I’ll come back to this later. Now, to clarify what is meant by “state”:

By a state of a system is meant any well-defined condition or property that can be recognised if it occurs again.

Ross Ashby

Variety can sometimes be easy to count. For example, after opening the game in chess with a pawn to D4, the queen has a variety of three: not to move, or to move to one of the two available squares. If only the temporary variety gain is counted, then choosing D2 as the next move would give a variety of 9, and D3 would give 16. That’s not enough to tell if the move is good or bad, especially having in mind that some of that gained variety is not effective. However, in a case of uncertainty, in games and elsewhere, moving to a place which both increases our future options and decreases those of the opponent seems good advice.

Variety can be expressed as a number, as was done in the chess example, but in many cases it’s more convenient to use the logarithm of that number. The common practice, maybe because of the first areas of application, is to use binary logarithms, in which case variety can be expressed in bits. It is indeed more convenient to say that the variety of a four-letter code using the English alphabet is 18.8 bits instead of 456 976. And then, when the logarithmic expression is used, combining the varieties of elements is done by adding instead of multiplying. This has additional benefits when plotting, etc.
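In numbers, for the four-letter code: V = 26^4 = 456 976, and log2(456 976) = 4·log2(26) ≈ 18.8 bits. For two independent parts, log2(V1·V2) = log2(V1) + log2(V2), which is why varieties in bits add up.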

Variety is sometimes referred to and counted as permutations. That might be fine in some cases but as a rule it is not. To use the example with the 4-letter code, it has 358 800 permutations (26 factorial divided by 22 factorial), while the variety is 456 976 (26 to the power of 4).
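To check the figures: the permutations count only the codes with no repeated letters, 26!/22! = 26·25·24·23 = 358 800, while the variety 26^4 = 456 976 includes the codes with repetitions as well.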

Variety is relative. It is dependent on the observer. That’s obvious even from the word “recognised” in the definition of “state”. If, for example, a clock has two hands that are exactly the same, or at least similar enough that an observer can’t tell the difference, then, from the point of view of that observer, the clock will have much lower variety than a regular one. The observer will not be able to distinguish, for example, 12:30 from 6:03, as they will be seen as the same state of the clock.

[Figure: Clock with indistinguishable hands]

This can be seen as another dependency: that on the capacity of the channel or the variety of the transducer. For example, it is estimated that most humans can distinguish up to 10 million colours, while tetrachromats can distinguish at least ten times more. The variety of the transducer and the capacity of the channel should always be taken into account.

When working with variety, it is important to study the relevant constraints. If we throw a stone from the surface of Earth, certain constraints, including those we call “gravity” and the “resistance of the air”, would allow a much smaller range of possible states than if those constraints were not present. Ross Ashby made the following inference out of this: “every law of nature is a constraint”, “science looks for laws; it is therefore much concerned with looking for constraints”. (Which is interesting in view of the recent claim of Stuart Kauffman that “the very concept of a natural law is inadequate for much of the reality” and that we live in a lawless universe and should fundamentally rethink the role of science…)

There is this popular way of defining a system as something which is more than the sum of its parts. Let’s see this statement through the lenses of varieties and constraints. Suppose we have two elements, A and B, each of which can be in two possible states on its own, but when linked to each other A can bring B into another, third state, and B can bring A into another state as well. In this case, the system AB certainly has more variety than A and B unbound. But if, when linked, A and B inhibit each other, allowing one state instead of two, then it is clearly the opposite. That motivates rephrasing the popular statement as “a system might have different variety than the combined variety of its parts”.
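One way to count it, using joint states: unbound, A and B give 2×2 = 4 combinations; if linking makes a third state reachable for each, the system AB can reach up to 3×3 = 9; if linking inhibits each element to a single state, it drops to 1.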

If that example with A and B is too abstract, imagine a canoe sprint kayak with two paddlers working in sync, and then compare it with a similar setting in which one of the paddlers paddles while the other holds her paddle in the water.

And now about the law of requisite variety. It is stated as “only variety can destroy variety” by Ashby and as “variety absorbs variety” by Beer, and it has other formulations such as “the larger the variety of actions available to a control system, the larger the variety of perturbations it is able to compensate”. Basically, when the variety of the regulator is lower than the variety of the disturbance, the result is a high variety of the outcome. A regulator can only achieve the desired outcome variety if its own variety is the same as or higher than that of the disturbance. The recursive nature mentioned earlier can now be easily seen, if we look at the regulator as a channel between the disturbance and the outcome, or if we account for the variety of the channels at the level of recursion with which we started.
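In symbols, a common way to state the law: if V_D is the variety of the disturbances, V_R the variety of the regulator and V_O the variety of the outcomes, then V_O ≥ V_D / V_R, or in logarithmic terms H(O) ≥ H(D) − H(R). Only by raising V_R (or reducing V_D) can the outcome variety be brought down to the desired level.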

To really understand the profound significance of this law, it should be seen how it exerts itself in various situations, which we wouldn’t normally describe with words such as “regulator”, “perturbations” and “variety”.

In the chess example, the power of each piece is a function of its variety, which is the one given by the rules reduced by the constraints at every move. Was there a need to know about requisite variety to design this game? Or any other game for that matter? Or was it necessary in order to know how to wage war? Certainly not. And yet, it’s all there:

It is the rule in war, if our forces are ten to the enemy’s one, to surround him; if five to one, to attack him; if twice as numerous, to divide our army into two.

Sun Tzu, The Art of War

But let’s leave the games for a moment and remember the relative nature of variety. The light signals of ships should comply with the International Regulations for Preventing Collisions at Sea (IRPCS). Yes, here we even have the word “regulation”. Having the purpose in mind, to prevent collision, the signals have a reduced variety to communicate the states of the ships, but enough to ensure the required control. For example, if an observer sees one green light, she knows that another ship is passing from left to right, and from right to left if she sees a red light. There are lots of states – different angles of the course of the other ship – that are reduced into these two, but that serves the purpose well enough. Now, if she sees both red and green, that means the ship is coming exactly towards her, which is an especially dangerous situation. That’s why the reduction of variety in this case has to be very low.

The relativity of variety is not only related to the observer’s “powers of discrimination”, or to those of the purpose of regulation. It can also depend on the situation, on the context. An example that first comes to mind is Aesop’s fable “The Fox and the Stork”.

Fables, and stories in general, are an interesting phenomenon. Their capability to influence people and survive many centuries is amazing. But why is that? Why do you need a story, instead of getting the moral of the story directly? Yes, it’s more interesting, there is this uncertainty element and all that. But there is something more. Stories are ambiguous, interpretable. They leave many things to be completed by the readers and listeners. And yes, to put it in different words, they have much higher variety than morals and values.

That’s it for this part. Stay tuned.

Posted on September 11, 2013.





The Change of the Change

Let’s have a variable V representing at this moment an aspect of interest I from the behaviour of a system S. This variable W is changed through transduction of a certain characteristic C of S using a transducer T.

Now, which variable are we talking about, V or W?  Probably Z? It would be V only if I, S, C and T and the meaning of ‘variable’ remained the same at the moment of writing the second sentence and even that would require ‘only if’ to work as assumed by logic.

Yes, it can be that fast.

Posted on June 23, 2013.





Reasoning with Taskless BPMN

Was it Lisbon that attracted me so much, or the word Cybernetics in the sub-title, or the promise of Alberto Manuel that it would be a different BPM conference? Maybe all three, and more. As it happened, the conference was very well organised and indeed different in a nice way. The charm of Lisbon was amplified by the nice weather, much appreciated after the long winter. As to Cybernetics, it remained mainly in the sub-title, but that’s good enough if it makes more people go beyond the Wikipedia articles and other easily digestible summaries.

My presentation was about using task-free BPMN which, I believe, and the results so far confirm it, can have serious benefits for modelling both pre-defined processes and those with some level of uncertainty. In addition, there is a nice way to execute such processes using reasoners and to bring transparency to Enterprise Architecture descriptions, which are usually isolated from the operational data: neither are the former linked with what actually happens, nor does the latter get timely updates from the strategy. More on this in another post. Here’s the slidedeck:

Posted on April 24, 2013.





Requisite Inefficiency

In his latest article, Ancient Wisdom teaches Business Processes, Keith Swenson reflects on an interesting story told by Jared Diamond. In short, the potato farmers in Peru used to scatter their strips of land. They kept them that way instead of amalgamating them, which would seem like the most reasonable thing to do. This turned out to be a smart risk-mitigating strategy. As the strips are scattered, the risk of various hazards is spread, and the probability of getting something from the owned land every year is higher.

I see that story as yet another manifestation of Ashby’s law of requisite variety. The environment is very complex and to deal with it somehow, we either find a way to reduce that variety in view of a particular objective, or try to increase ours. In a farming setting an example of variety reduction would be building a greenhouse. The story of the Peruvian farmers is a good example of the opposite strategy – increase of the variety of the farmers’ system. The story shows another interesting thing. It is an example of a way to deal with oscillation. The farmers controlled the damage of the lows by giving up the potential benefits of the highs.

Back to the post of Keith Swenson: after bringing this lesson to the area of business processes, he concludes:

Efficiency is not uniformity.  Instead, don’t worry about enforcing a best practice, but instead attempt only to identify and eliminate “worst practices”

I fully agree about best practices. The enforcement of best practices is what one can find in three of every four books on management and in nearly every organisation today. This may indeed increase the success rate in predictable circumstances, but it decreases resilience, and it just doesn’t work when the uncertainty of the environment is high.

I’m not quite sure about the other advice: “but instead attempt only to identify and eliminate “worst practices”. Here’s why I’m uncomfortable with this statement:

1. To identify and eliminate “worst practice” is a best practice itself.

2. To spot an anti-pattern, label it as a “worst practice” and eliminate it might seem the reasonable thing to do today. But what about tomorrow? Will this “worst practice” still be an anti-pattern in the new circumstances of tomorrow? Or will it be something that we might need in order to deal with the change?

Is a certain amount of bad practice necessarily unhealthy?

It seems quite the opposite. Some bad practice is not just nice to have, it is essential for viability. I’ll not be able to put it better than Stafford Beer:

Error, controlled to a reasonable level, is not the absolute enemy we have been taught to think of. On the contrary, it is a precondition for survival. […] The flirtation with error keeps the algedonic feedbacks toned up and ready to recognise the need for change.

Stafford Beer, Brain of the firm (1972)

I prefer to call this “reasonable level” of error requisite inefficiency. Where can we see it? In most – if not all – complex adaptive systems. A handy example is the way the immune system works in humans and other animals that have the so-called adaptive immune system (AIS).

The main agents of the AIS are the T and B lymphocytes. They are produced by stem cells in the bone marrow. They account for 20-40% of the white blood cells, which makes about 2 trillion of them. The way the AIS works is fascinating, but for the topic of requisite inefficiency, what is interesting is the reproduction of the B-cells.

The B-cells recognise the pathogen molecules, the “antigens”, depending on how well the shape of their receptor molecules matches that of the antigens. The better the match, the better the chance for the molecule to be recognised as an antigen. And when that is the case, the antigens are “marked” for destruction. Then follows a process in which the T-cells play an important role.

As we keep talking of the complexity and uncertainty of the environment, the pathogens seem a very good model of it.

The best material model of a cat is another, or preferably the same, cat.

N. Wiener, A. Rosenblueth, Philosophy of Science (1945)

What is the main problem of the immune system? It cannot predict what pathogens will invade the body and prepare accordingly. How does it solve it? By generating enormous diversity. Yes, Ashby’s law again. The way this variety is generated is interesting in itself, given the capability of the cells’ DNA to carry out random algorithms. But let’s not digress.

The big diversity may increase the chance to absorb that of the pathogens, but what is also needed is a match in numbers, to have requisite variety. (This is why I really find variety, in cybernetic terms, such a good measure. It is relative. And it can account for both the number of types and the quantities of the same type.) If the number of matches between B-cell receptors and antigens is enough to register an “attack”, the B-cells get activated by the T-cells and start to release antibodies. Then these successful B-cells go to a lymph node where they start to reproduce rapidly. This is a reinforcing loop in which the mutations that are a good match with the antigens go to kill invaders and then back to the lymph nodes to reproduce. Those mutations that don’t match antigens die.

That is really efficient and effective. But at the same time, the random generation of new lymphocytes with diverse shapes continues. Which is quite inefficient, when you think of it. Most of them are not used. Just wasted. Until some happen to have receptors that are a good match for a new invader. And this is how such an “inefficiency” is a precondition for survival. It should not just exist but be sufficient. The body does not work with what’s probable. It’s ready for what’s possible.

The immune system is not the only complex system with requisite inefficiency. The brain, swarms and networks are just as good examples. Given the current level of study, the easiest systems to see it in are ant colonies.

When an ant finds food, it starts to leave a trail of pheromones. When another ant encounters the trail, it follows it. If it reaches the food, the second ant returns to the nest leaving a trail as well. The same reinforcing loop we saw with the B-cells can be seen with the ants. The more trails, the more likely it is that a bigger number of ants will step on them, follow them, leave more pheromones, attract more ants and so on. And again, at the same time there is always a sufficient number of ants moving randomly which can encounter a new location with food.
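Here is a toy sketch of that balance (my own illustration in Python, not a model of real immunology or of real ant colonies): a population of bit-string “detectors” keeps being selected and cloned for matching a familiar “pathogen”, while a fixed share of the population is regenerated at random every step. The names and numbers (GENE_LEN, POP, RANDOM_SHARE) are arbitrary. Setting RANDOM_SHARE to zero makes the population very efficient against the familiar pathogen and leaves it worse prepared for a novel one.

import random

GENE_LEN = 16        # receptors and pathogens as 16-bit strings
POP = 200            # number of detectors kept at any time
RANDOM_SHARE = 0.2   # share regenerated at random each step: the "inefficiency"

def bits():
    return tuple(random.randint(0, 1) for _ in range(GENE_LEN))

def affinity(detector, pathogen):
    # crude stand-in for receptor/antigen fit: number of matching positions
    return sum(d == p for d, p in zip(detector, pathogen))

def step(detectors, pathogen):
    # exploitation: keep and clone (with small mutations) the best matchers
    ranked = sorted(detectors, key=lambda d: affinity(d, pathogen), reverse=True)
    survivors = ranked[: POP // 2]
    clones = [tuple(b if random.random() > 0.05 else 1 - b for b in d) for d in survivors]
    pool = survivors + clones
    # exploration: keep generating detectors at random, whatever the current matches
    n_random = int(POP * RANDOM_SHARE)
    return pool[: POP - n_random] + [bits() for _ in range(n_random)]

random.seed(1)
detectors = [bits() for _ in range(POP)]
familiar, novel = bits(), bits()
for _ in range(30):
    detectors = step(detectors, familiar)
print("best fit to the familiar pathogen:", max(affinity(d, familiar) for d in detectors))
print("best fit to a novel pathogen:     ", max(affinity(d, novel) for d in detectors))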

Requisite inefficiency is equally important for social systems. Dave Snowden gave a nice example, coincidentally again with farmers, but in that case ones experiencing a high frequency of floods. Their strategy was to build their houses not in a way that prevents the water from coming in, but in a way that lets the water quickly go out. He calls that “architecting for resilience”:

You build your system on the assumption you prevent what can fail but you also build your system so you can recover very very quickly when failure happens. And that means you can’t afford an approach based on efficiency. Because efficiency takes away all superfluous capacity so you only have what you need to have for the circumstances you anticipate. […] You need a degree of inefficiency in order to be effective.

It seems we have a lot to learn from B-cells, ants and farmers about how to make our social systems work better and recover quicker. And contrary to our intuition, there is a need for some inefficiency. The interesting question is how to regulate it, or how to create conditions for self-regulation. For a given system, how much inefficiency is insufficient, how much is just enough, and when is it too much? Maybe for immune systems and ant colonies these regulatory mechanisms are already known. The challenge is to find them for organisations, societies and economies. How much can we use of what we already know about other complex adaptive systems? Well, we also have to be careful with analogies. Else, we might fall into the “best practice” trap.

Posted on March 10, 2013.




