The Scissors of Science

Three centuries ago the average life expectancy in Europe was between 33 and 40 years. Interestingly, 33 was also the average life expectancy in the Palaeolithic era, which began 2.6 million years ago. What is it that we’ve got in the last three centuries that we didn’t have in all the time before? Well, science!

Science has worked a lot of miracles. But like all things that work miracles, it quickly turned into a religion. A religion that most people in the Western world believe in today. And like all believers, when science fails, we just think it has not yet advanced in that area; we don’t think that there is anything wrong with science itself. Or when the data doesn’t match, we think it’s because those scientists are not very good at statistics. Or, if not that, then the problem is simply failed control over scientific publications, as was concluded three years ago, when Begley and Ellis published a shocking study reporting that they were able to reproduce only 11 per cent of the original cancer research findings.

Well, I believe that the problem with science is more fundamental than that.

The word science goes back to the root skei, which means “to cut, to divide”. The same root is behind scissors, schizophrenia and shit. “Dividing”, when applied to science, comes in handy for explaining some fundamental problems with it. It has at least the following six manifestations.

The first one is that the primary method of scientific study is based on “dividing”. Things are analysed, which means they are split into smaller parts. In some areas of science it is even believed that if everything is known about the smallest parts, that will help explain why the bigger ones do what they do. However, when taking things apart, what is not preserved is the relations that bind them together.

The second is the split between “true” and “false”. It is inherited from Aristotelian logic, together with the principles of non-contradiction and the excluded middle. Such a division might be useful for studying matter, but is it useful for studying life, where the paradox is the orthodox?

The third manifestation of dividing in science is the actual disciplines, treated as if they were something that exists independently out there: physics, biology, chemistry and so on. We can know something only as far as a certain framework allows us to. When someone steps into another discipline, he or she is not regarded well by their fellows, as it is not proper biology, for example. Nor are they received well by the gurus of the ventured discipline, who are presumed to know much better by virtue of having been there longer. All that is additionally reinforced by citation counts and other crazy metrics.

The fourth is sharply splitting science from non-science. That split can be experienced as a specific distinction such as science/philosophy or science/art, or at a more general level. And this is not just about the area of concern or the method. This distinction is social and economic. Here’s the simplest example, which you can check for yourself. Almost all scientific papers can be downloaded from the internet at a price between 25 and 35 EUR. If you work for an academic institution, you have free access; if not – you have to pay. So you can only give feedback to scientists if you are a scientist yourself, a member of some institution, some scientific fiefdom. If science is scientific, why doesn’t it use the fact that we live in a connected world where publishing costs nothing? Is it afraid of the feedback? Well, it should be. See what happened to all traditional media when Twitter appeared.

The fifth one is that scientists do not regard themselves as part of what they do but as something with a privileged role: somebody isolated, godlike, who can produce neutral evidence by applying rigour and “objectivity”, confirmed by others who can reproduce the results. This is the principle on which science works. There is the idea of truth based on evidence, and the whole notion of discovering objective reality as something out there, independent of those who “discover” it.

The sixth one is closely related to the fifth: science is based on some dichotomies that are rarely questioned. Here’s the top list: subject/object, mind/body, information/matter, emotion/cognition, form/substance, nature/culture. It is not my intention here to trace how this situation came to be, nor why it is problematic and when it is especially problematic. But if we take seriously the warning of Heisenberg,

what we observe is not nature herself, but nature exposed to our method of questioning,

let me just say that our method of questioning nowadays includes a refusal to seriously question these dichotomies.

The six manifestations of dividing are not independent of each other. And they do not pretend to diagnose all the diseases of science. They are just a small attempt to once again ask the question: could it be that the limitations of science are due to the same qualities that brought its success?

What can be done about it?

First, admit there are serious problems not just with how science is done but with science itself.

Second, pay more attention to attempts to solve these problems. Some good insights can be found, for example, in certain approaches in cognitive science that balance between first- and third-person perspectives when trying to understand life, mind, consciousness, emotions, language and interactions. Others come from the attempts of the complexity sciences to promote trans-disciplinary studies. And there is already quite some work on second-order science and endo-science, trying to deal with reflexivity together with a few of the other manifestations of dividing that I tried to briefly outline in this article.

Third, accept that the solution will probably not look very scientific within the contemporary definition. Otherwise it might suffer from the same diseases it is trying to cure.


Don’t buy ideas. Rent them.

Some ideas are so good. An idea can be good at first glance or after years of digging and testing, or both. When it looks instantly convincing, it simply resonates with your experience. There are so many things that such an idea can help you see in a new light, or so many old explanatory gaps it can help you fill. Or it can sound absurd when you first hear it, and only later get under your skin. Either way, you buy it. You buy it on impulse, or after years of testing. You invest time, emotions, sometimes reputation. And once you buy it, you start paying the maintenance costs. It’s quite like buying a piece of land. You build a house on it, then furnish it, and then you start to repair it. Somewhere along this process you find yourself emotionally attached. And that’s where the similarities end. A house you can sell and buy a new one. With ideas, you can only buy new ones. But sometimes the previous investment doesn’t leave you with sufficient resources. And then you can’t just buy any new idea. It needs to fit the rest of your narrative.

Once you buy an idea, it can open a lot of new paths and opportunities. But it can also obscure your vision and preclude other opportunities. One thing you learn can sometimes be the biggest obstacle to learning something else, potentially more valuable.

Instead of buying ideas, wouldn’t it be better to just rent them? But not like renting a house, more like renting a car. With that car you can go somewhere, stay there, or go further, or elsewhere, or – if it is not good for a particular road or destination – come back and rent another car.

Do you like this idea of renting ideas instead of buying them? If yes, don’t buy it.

If you are not tired of metaphors by now, here’s another one. I often present my ideas as glasses. Not lenses, glasses. First, they should be comfortable. They should fit our head, nose and ears. Not too tight, so that we can easily take them off. Not too loose, so that we don’t drop them when shaken. Second, when we put on new glasses, we don’t just put on new lenses, but also new frames. Being aware of that is being aware of limitations, and of the fact that there are hidden choices. It would also help us realise when it’s time to try on a new pair of glasses.

From Distinction to Value and Back

I tried to explain earlier how distinction brings forth meaning, and then value. Starting from distinction may seem arbitrary. Well, it is. And while it is, it is not. That wouldn’t be very difficult to show, but until then, let’s first take a closer look at distinction. As the bigger part of that work has already been done by George Spencer-Brown, I’ll first recall the basics of his calculus of indications, most likely augmented here and there. Then I’ll quickly review some of the resonances. Last, I’ll come back to the idea of re-entry and apply it to the emergence of values.


If I have to summarise the calculus of indications, it would come down to these three statements:

To re-call is to call.

To re-cross is not to cross.

To re-enter is to oscillate.

In the original text, the “Laws of Form”, only the first two are treated as basic, and the third one comes as their consequence. Later on, I’ll try to show that the third one depends on the first two, just as the first two depend on the third one.

The calculus of indications starts with the first distinction, as “we cannot make an indication without drawing a distinction”. George Spencer-Brown introduces a very elegant sign to indicate the distinction:


It can be seen as a shorthand for a rectangle, separating inside from outside. The sign is called “mark”, as it marks the distinction. The inside is called the “unmarked state” and the outside the “marked state”. The mark is also the name of the marked state.

This tiny symbol has the power to indicate several things at once:

  1. The inside (emptiness, void, nothing, the unmarked state)
  2. The outside (something, the marked state)
  3. The distinction as a sign (indication)
  4. The distinction as an operation of making a distinction
  5. The observer, the one that makes the distinction

That’s not even the full list as we’ll see later on.

Armed with this notation, we can express the three statements:


They can be written in a more common way using brackets:

()() = ()

(()) = .

a = (a)

The sign “=” may seem like something we should take as given, but it’s not. It means “can be confused with”, or in other words: “there is no distinction between the value on the left side of the equation and the value on the right side”. Again, a form made out of distinction.

The first statement is called the law of calling. Here’s how it is originally formulated in the “Laws of Form”:

The value of a call made again is the value of the call.

If we see the left sign as indicating a distinction, and the right sign as the name of the distinction, then the right sign indicates something which is already indicated by the left sign.

Or let’s try a less precise but easier way to imagine it. If you are trying to highlight a word in a sentence and you do that by underlining it, no matter how many times you draw a line below that word, in the end the word will be as distinguished as it was after the first line. Or if you first underline it, then draw a circle around it, then highlight it with a yellow marker and so on, then, as long as none of them carries a special meaning, all these ways to distinguish it do together just what each of them does separately: draw attention to the word.

The second law is originally stated as:

The value of a crossing made again is not the value of the crossing.

One more way to interpret the mark is as an invitation to cross from the inside to the outside. As such, it serves as an operator and an operand at the same time. The outer mark operates on the inner mark and turns it into void.

If the inner mark turns its inside, which is empty, nothing, into outside, which is something, then the outer mark turns its inside, which is something due to the operation of the inner mark, into nothing.

Picture a house with a fence that fully surrounds it. You jump over the fence, continue walking straight until you reach the fence on the other side, and jump over it again. As far as being inside or outside is concerned, crossing twice is equal to not crossing at all.
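
Both laws are small enough to check mechanically. Here is a tiny sketch of my own (the tuple representation and the function name are my assumptions, not Spencer-Brown’s notation): a form is a tuple of the marks it contains, each mark being, in turn, the form of its inside.

    def value(form):
        # A space is marked if at least one mark in it has an unmarked inside,
        # because crossing from an unmarked inside yields the marked state.
        return any(not value(inside) for inside in form)

    VOID = ()          # the unmarked state: an empty space
    MARK = (VOID,)     # ( ): a space containing one empty mark

    assert value((VOID, VOID)) == value(MARK)   # law of calling:  ()() = ()
    assert value((MARK,)) == value(VOID)        # law of crossing: (()) = .

Running the two assertions confirms that condensing a repeated call and cancelling a double crossing leave the value of the form unchanged.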

The whole arithmetic and algebra of George Spencer-Brown is based on these two equations. Here is a summary of the primary algebra.

The third equation has a variable in it.

a = (a)

It has two possible values, mark or void. We can test what happens by trying the two possible values on the right side of the equation.

Let a be void, then:

a = ( )

Thus, if a is void, then it is a mark.

Now, let a be mark, then:

a = (()) =   .

If a is a mark, then substituting a with a mark on the right side will bring a mark inside another, which, according to the law of crossing, gives the unmarked state, the void.

This way we have an expression of self-reference. It can be seen in numerical algebra in equations such as x = −1/x, which doesn’t have a real solution, so it can only have an imaginary one. It can be traced in logic and philosophy with the Liar paradox, statements such as “This statement is false”, or Russell’s set of all sets that are not members of themselves.

However, in the calculus of indications, this form lives naturally. Just as the distinction creates space, the self-reference, the re-entry, creates time.

There is no magic about it. In fact, software programs do not just contain self-referential expressions, they can’t do without them. They iterate using expressions such as n = n + 1.
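
To make the time image concrete, here is a throwaway sketch (mine, not from the book): reading the mark as inversion, the re-entry a = (a) has no resting value, so iterating it produces an oscillation.

    a = False                 # start in the unmarked state
    for step in range(6):
        a = not a             # a = (a): each re-entry crosses, i.e. inverts
        print(step, "marked" if a else "unmarked")
    # prints marked, unmarked, marked, ... : the form oscillates, step by step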


The Laws of Form resonate in religions, philosophies and science.

Chuang Tzu:

The knowledge of the ancients was perfect. How so? At first, they did not yet know there were things. That is the most perfect knowledge; nothing can be added. Next, they knew that there were things, but they did not yet make distinctions between them. Next they made distinctions, but they did not yet pass judgements on them. But when the judgements were passed, the Whole was destroyed. With the destruction of the Whole, individual bias arose.

The Tanakh (aka the Old Testament) starts with:

In the beginning when God created the heavens and the earth, the earth was a formless void… Then God said, ‘Let there be light’; and there was light. …God separated the light from the darkness. God called the light Day, and the darkness he called Night.

That’s how God made the first distinction:

(void) light

And then, in Tao Te Ching:

 The nameless is the beginning of heaven and earth…

Analogies can be found in Hinduism, Buddhism and Islamic philosophy. For example, the latter distinguishes essence (Dhat) from attribute (Sifat), which are neither identical nor separate. Speaking of Islam, the occultation prompts another association. According to Shia Islam, the Twelfth Imam has been living in a temporary occultation and is about to reappear one day. Occultation is also the name of one of the identities in the primary algebra:

((a)b)a = a
In it, the variable b disappears from left to right, and appears from right to left. This can be pictured by changing the position of an observer to the right, until b is fully hidden behind a, and then, when moving back to the left, b reappears:


Another association, which I find particularly fascinating, is with the ancient Buddhist logical system catuskoti, the four corners. Unlike Aristotelian logic, with its principles of non-contradiction and the excluded middle, catuskoti admits four possible values:

not being

being

both being and not being

neither being nor not being

I find the first three corresponding to the void, the distinction, and the re-entry, respectively, which is in line with Varela’s view that apart from the unmarked and the marked state, there should be a third one, which he calls the autonomous state.

The fourth value would represent anything which is unknown. If we set being as “true”, and not-being as “false”, then every statement about the future is neither true nor false at the moment of uttering. And we make a lot of statements about the future, so it is common to have things in the fourth corner.

The fourth value reminds me of, and is reminded by, the Open World Assumption, which I find very useful in many cases, as I mentioned here, here, and here. It also tempts me to add a fourth statement to the initial three:

To not know is not to know there is not.

Catuskoti fits naturally into the Buddhist world-view, while being at odds with the Western one. At least it was, until recently, when some multi-valued logics appeared.
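
For those who like to tinker, here is a toy four-valued truth type (my own illustration; the names and the negation table are assumptions in the spirit of multi-valued logics such as first degree entailment, not a standard library): negation swaps being and not-being and leaves the other two corners fixed, so the unknown stays unknown.

    from enum import Enum

    class Koti(Enum):
        BEING = "being"
        NOT_BEING = "not being"
        BOTH = "both being and not being"        # cf. the re-entry
        NEITHER = "neither being nor not being"  # the unknown

    NEGATION = {
        Koti.BEING: Koti.NOT_BEING,
        Koti.NOT_BEING: Koti.BEING,
        Koti.BOTH: Koti.BOTH,          # a contradiction negated stays a contradiction
        Koti.NEITHER: Koti.NEITHER,    # negating the unknown yields the unknown
    }

    print(NEGATION[Koti.NEITHER].value)   # "neither being nor not being"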

George Spencer-Brown, Louis Kauffman and William Bricken demonstrated that many other mathematical and logical theories can be generated using the calculus of indications. For example, in elementary logic and set theory, negation, disjunction, conjunction, and entailment can be represented respectively as (A), AB, ((A)(B)), and (A)B, so that the classical syllogism ((A entails B) and (B entails C)) entails (A entails C) can be shown with the following form:


If that’s of interest, you can find explanations and many more examples in this paper by Louis Kauffman.
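
As a quick sanity check of those encodings, here is a small sketch of mine, under the convention that the marked state plays the role of “true” (the helper names are my own):

    from itertools import product

    def cross(x):        # (x): crossing inverts the value of the inside
        return not x

    def juxt(*xs):       # juxtaposition: the space is marked if any part is
        return any(xs)

    for A, B, C in product((False, True), repeat=3):
        ab = juxt(cross(A), B)                      # (A)B : A entails B
        bc = juxt(cross(B), C)                      # (B)C : B entails C
        both = cross(juxt(cross(ab), cross(bc)))    # ((ab)(bc)) : conjunction
        ac = juxt(cross(A), C)                      # (A)C : A entails C
        assert juxt(cross(both), ac)                # (both)ac is always marked

The assertion holds for all eight combinations: the syllogism reduces to the marked state whatever the values of A, B and C.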

“Laws of Form” inspired extensions and applications in mathematics, second-order cybernetics, biology, and the cognitive and social sciences. It influenced prominent thinkers like Heinz von Foerster, Humberto Maturana, Francisco Varela, Louis Kauffman, William Bricken, Niklas Luhmann, and Dirk Baecker.


Self-reference is awkward: one may find the axioms in the explanation, the brain writing its own theory, a cell computing its own computer, the observer in the observed, the snake eating its own tail in a ceaseless generative process.

F. Varela, A Calculus for Self-reference

Is re-entry fundamental or a construct? According to George Spencer-Brown, it’s a construct. Varela, on the other hand, finds it not just fundamental but actually the third value, the autonomous state. He brings some quite convincing arguments. For Kauffman, re-entry is based on distinction, just as distinction is based on re-entry:

the emergence of the mark itself requires self-reference, for there can be no mark without a distinction and there can be no distinction without indication (Spencer-Brown says there can be no indication without a distinction. This argument says it the other way around.). Indication is itself a distinction, and one sees that the act of distinction is necessarily circular.

That was the reason I presented three statements, and not only the first two, as a summary of the calculus of indications.

A similar kind of reasoning can be applied to sense-making. It can be seen as an interplay between autonomy and adaptivity. Autonomy makes the distinctions possible, and the other way round. Making distinctions on distinctions is in fact sense-making, but it also changes the way distinctions are made, due to adaptivity. At this new level, distinctions become normative. They have value in the sense that the autonomous system has an attitude, a re-action determined by (and determining) that value. The simplest and most clearly distinguished attitudes are those of attraction, aversion and neutrality.

This narrative may imply that values are of a higher order: first distinctions are made, then sense, and then values, in a sort of linear chain. But it is not linear at all.

As George Spencer-Brown points out, a distinction can only be made by an observer, and the observer has a motive to make certain distinctions and not others:

If a content is of value, a name can be taken to indicate this value.

Thus the calling of a name can be identified with the value of the content

Thus values enable distinctions, and the circle closes. Another re-entry. We can experience values due to the significance that our interaction with the world brings forth. This significance is based on making distinctions, and we can make distinctions because they have value for us.

But what is value and is it valuable at all? And if value is of any value, what is it that makes it such?

Ezequiel Di Paolo defines value as:

the extent to which a situation affects the viability of a self-sustaining and precarious network of processes that generates an identity

And then he adds that the “most intensely analysed such process is autopoiesis”.

In fact, the search for a calculus for autopoiesis was what attracted Varela to the mathematics of the Laws of Form in the first place. It was a pursuit to explain life and cognition. Autopoiesis was also the main reason for Luhmann’s and Baecker’s interest, but in their case for studying social systems.

Operationally closed networks of processes in general, and autopoiesis in particular, show both re-entry, and distinction enabled by this re-entry and sustaining it. In an operationally closed system, all processes enable and are enabled by other processes within the system. The autopoietic system is the stronger case, where the components of the processes are not just enabled but actually produced by them.

Both are cases of generating identity, which is making a distinction between the autonomous system and its environment. The environment is not everything surrounding the system, but only the niche which makes sense to it. This sense-making is not passive and static; it is a process enacted by the system, which brings about its niche.

Identity generation makes a distinction which is at the same time a unity, distinguished from what it is not. That is how living systems get more independent from the environment, which supplies the fuel for their independence and absorbs the exhaust of practising independence. And more independence means more fuel, hence bigger dependence. The phenomenon of life is a “needful freedom”, as Hans Jonas pointed out.

Zooming out, we come back to the observation of George Spencer-Brown:

the world we know is constructed in order to see itself. […] but in any attempt to see itself, […] it must act so as to make itself distinct from, and therefore false to, itself.


Closing the circle from distinctions through sense-making and value-making to (new) distinctions solves the previous implication of linearity, but it may now be misunderstood to imply causality. First, my intention was to point them out as enabling conditions, leaving for now the question of whether they are necessary and sufficient. Second, the circle is enabled by and enables many others, the operationally closed self-generation of identity being of central interest so far. And third, singling out these three operations is a matter of distinctions, made by me, as an act of sense-making, and on the basis of certain values.




Same as Magic

When I started my journey in the world of Semantic Web Technologies and Linked Data, I couldn’t quite get what all the fuss about the property owl:sameAs was. Later I was able to better understand the idea and appreciate it when actively using Linked Data. But it wasn’t until I personally created graphs from heterogeneous data stores and then applied different strategies for merging them that I realised the “magical” power of owl:sameAs.

The idea behind “same as” is simple. It serves to say that although the two identifiers linked by it are distinct, what they represent is not.

Let’s say you want to bring together different things recorded for and by the same person, Sam E. There is information in a personnel database, and there are his profile and activities in Yammer, LinkedIn, Twitter and Facebook. Sam E. also does research, so he has publications in different online libraries. He also makes highlights in Kindle and check-ins in Foursquare.

Let’s imagine that at least one of the email addresses recorded as Sam E.’s personal email is used in all these data sets. Sam E. is also somehow uniquely identified in each of these systems, and it doesn’t matter whether the identifiers use his email or not. When creating RDF graphs from each of the sources, a URI for Sam E. should be generated in each graph, if one doesn’t exist already. The only other thing needed is to declare Sam E.’s personal email as the object of foaf:mbox, where the subject is the respective URI for Sam E. from each of the data sets.

The interesting thing about foaf:mbox is that it is “inverse functional”. When a property is asserted to be an owl:InverseFunctionalProperty, the object uniquely identifies the subject of the statement. To see what that means, let’s first look at the meaning of a functional property in OWL. If Sam E. “has birth mother” Jane, and Sam E. “has birth mother” Mary, and “has birth mother” is declared a functional property, a DL reasoner will infer that Jane and Mary are the same person. The “inverse functional” works the same way in the opposite direction. So if Sam.E.Yammer and Sam.E.Twitter have the same foaf:mbox, then Sam.E.Yammer refers to the same person as Sam.E.Twitter. That is because a new triple Sam.E.Yammer–owl:sameAs–Sam.E.Twitter is inferred as a consequence of foaf:mbox being an owl:InverseFunctionalProperty. And that single change brings a massive effect: all facts about Sam E. from Yammer are inferred for Sam E. from Twitter and vice versa. And the same applies to LinkedIn, Facebook, the online libraries, Foursquare and so on.
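
Here is a minimal sketch of that inference in Python with rdflib and owlrl (the URIs and the email address are made up for illustration; owlrl’s OWL 2 RL closure includes the inverse-functional rule):

    from rdflib import Graph, URIRef
    from rdflib.namespace import OWL
    from owlrl import DeductiveClosure, OWLRL_Semantics

    g = Graph()
    g.parse(data="""
        @prefix owl:  <http://www.w3.org/2002/07/owl#> .
        @prefix foaf: <http://xmlns.com/foaf/0.1/> .
        @prefix ex:   <http://example.org/> .

        foaf:mbox a owl:InverseFunctionalProperty .
        ex:SamE-yammer  foaf:mbox <mailto:sam.e@example.org> .
        ex:SamE-twitter foaf:mbox <mailto:sam.e@example.org> .
    """, format="turtle")

    DeductiveClosure(OWLRL_Semantics).expand(g)   # materialise the inferences

    print((URIRef("http://example.org/SamE-yammer"), OWL.sameAs,
           URIRef("http://example.org/SamE-twitter")) in g)    # True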

Now imagine you don’t do that just for Sam E. but for your whole Twitter network. Then you’ll get a graph that is able to answer questions such as “Of those in my network who tweeted about topic X, give me the names and emails of all people who work within 300 km from here”, or “Am I in the same discussion group as somebody who liked book Y?”. But wait, you don’t need to imagine it; you can easily do it. Here is, for example, one way to turn Twitter data into an RDF graph.
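
To give the flavour in code, here is a hedged sketch of such a question as a SPARQL query over the merged graph (ex:tweetedAbout and ex:topicX are hypothetical terms, standing in for whatever vocabulary the conversion produced):

    results = g.query("""
        PREFIX foaf: <http://xmlns.com/foaf/0.1/>
        PREFIX ex:   <http://example.org/>
        SELECT DISTINCT ?name ?mbox WHERE {
            ?person ex:tweetedAbout ex:topicX ;
                    foaf:name ?name ;
                    foaf:mbox ?mbox .
        }
    """)
    for name, mbox in results:
        print(name, mbox)

Thanks to the inferred owl:sameAs triples, it doesn’t matter whether the name came from LinkedIn and the tweet from Twitter; the query sees one merged description per person.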

Of course, apart from persons, similar approaches can be applied to any other thing represented on the web: organisations, locations, artefacts, chemical elements, species and so on.

To better understand what’s going on, it’s worth recalling that there is no unique name assumption in OWL. The fact that two identifiers X and Y are different does not mean that they represent different things. If we know, or if it can be deduced, that they represent different things, this can be asserted or respectively inferred as a new triple X–owl:differentFrom–Y. In a similar way, a triple saying just the opposite, X–owl:sameAs–Y, can be asserted or inferred. Basically, as far as sameness is concerned, we can have three states: same, different, neither same nor different. Unfortunately, a fourth state, both same and different, is not allowed; why that would be of value will be discussed in another post. Now, let’s get back to the merging of graphs.

Bringing together the RDF graphs about Sam E. created from the different systems would link them into one graph just by using foaf:mbox. Most triple stores, Virtuoso for example, would do this kind of basic inferencing at run time. If you want to merge the graphs in an ontology editor, you have to use a reasoner such as Pellet if you are using Protégé, or run inferencing with SPIN if you are using TopBraid Composer. Linking knowledge representations from different systems in a way that is independent of their underlying schemas can bring a lot of value, from utilising chains of relations to learning things not known before the linking.

The power of “same as” has been used a lot for data integration, both in controlled environments and in the wild. But let’s not forget that in the latter, the web, “Anyone can say anything about anything”. This was in fact one of the leading design principles for RDF and OWL. And then, even with the best intentions in mind, people can create a lot of almost “same as” relations that get mixed with the reliable “same as” relations. And they did, and they do.

The problems with “same as” have received a lot of attention. In one of the most cited papers on the subject, Harry Halpin et al. outline four categories of problems for owl:sameAs: “Same Thing As But Different Context”, “Same Thing As But Referentially Opaque”, “Represents”, and “Very Similar To”. Others warn about problems with provenance. Still, almost all agree that the benefits of owl:sameAs for Linked Data far outweigh the risks, and that the latter can be mitigated by various means.

Whatever the risks of owl:sameAs on the web, they are insignificant or non-existent in corporate environments. And yet most of the data stays in silos and gets integrated only partially, based on some concrete requirements. These requirements represent local historical needs and bring expensive solutions to local historical problems. Those solutions typically range from point-to-point interfaces with some ETL to realisation of REST services. They can get quite impressive in their means for access and synchronisation, and yet they are all dependent on the local schemas in each application silo and helpless for enterprise-wide usage or any unforeseen need. What those solutions bring is a more complicated application landscape, additional IT investments with any change of requirements, and usually a lot of spending on MDM software, data warehouses and suchlike. All that can be avoided if the data from heterogeneous corporate and open data sources is brought together into an enterprise knowledge graph, with distributed linked ontologies and vocabularies to give sense to it, and elegant querying technologies that can bring answers instead of just search results. The Semantic Web stack is full of capabilities such as owl:sameAs that make this easy, and beautiful. Give it a try.


The Infatuation with "Building Blocks"

Why are the majority of Enterprise Architects in love with the concept of “building block”? Perhaps because blocks don’t answer back and spoil the design, as Aiden Ward suggested.

In TOGAF 9.1 “building block” is used 349 times, not counting the separate use of ABB and SBB acronyms. There is a whole chapter devoted to building blocks. In there, we find the following:

A building block is a package of functionality defined to meet the business needs across an organization.

A building block has a type that corresponds to the TOGAF content metamodel (such as actor, business service, application, or data entity)

A building block has a defined boundary and is generally recognizable as ‘‘a thing’’ by domain experts.

Now, if a “building block” is a “package of functionality”, then taking that together with the second statement, “actor” and “data entity” are – according to TOGAF – types of “package of functionality”. That doesn’t bother me so much, as logic and consistency have never been among the strong sides of TOGAF. I’m more concerned with the last statement. There are indeed some entities from the model that are recognised by domain experts as “things” with a defined boundary, but I haven’t heard a domain expert using “building block” to discuss their business. Even if there are some that do, having surrendered to the charms of eloquent Enterprise Architects, I don’t think they use that language among themselves or that they need to.

Enterprise Architecture is not unique in using technology metaphors for understanding organisations. After the brain had circulated for some time as a metaphor for the computer, the computer returned the favour and has been used for decades as a metaphor for the brain. More than that, it tuned the thinking of some cognitive scientists in such a way that many of them managed to “prove” that the brain does information processing. That school of thought, the so-called computationalists, still has a lot of followers today. The same school regards the mind as “running” on the body just like software runs on hardware. That approach has done much more harm to an array of disciplines than the usage of building blocks has to Enterprise Architecture. But qualitatively, it is a similar phenomenon.

It’s now time to choose some structure for building up the point about my puzzlement (and then hopefully resolving it), and I select the following titles for my text blocks: name, history, and purpose.


The primary source of irritation – I have to admit – is the name itself. But it is also a minor one, so we can quickly deal with it.

Where is the name coming from?

EA is often perceived by non-EA people as something coming from IT which is not even proper IT. IT parlance in general has borrowed a lot of terms from other engineering practices. Software development in particular has shown a preference for construction engineering over what would have seemed a more natural choice, electronic engineering. And that’s why we are “building” software. Having in mind how iterative that is, if that “building” were brought back and applied to actual buildings, there wouldn’t be many happy investors or inhabitants.

Anyway, “to build” applications is well established and doing well in software development. But what is the utility of “building block” in Enterprise Architecture? Indeed, representing some abstractions as boxes and placing them one over another resembles playing with Lego. The metaphor brings a more than welcome simplicity to architecture discussions. But apart from smoothing communication between Enterprise Architects, is it useful for anything else? And how does the name reflect the actual use of the concept?

My quick answer is: there is nothing that can be built using architecture building blocks. Something can be drawn and then reconfigured in another drawing. Isn’t it then more appropriate to talk about “description” blocks or “design” blocks?

Some Enterprise Architects are well aware of that. In fact, it seems that the choice of “building block” over “component” is exactly to remind users that the former is not the latter but just an abstract representation of it. This has been debated for about a decade, and various solutions have been proposed, including “reusable” blocks, which would at least make the purpose explicit.

But this awareness is neither widespread nor persistent enough to prevent the stabilisation of some thinking habits, leading to decisions that look good only within the protected virtual reality of architecture descriptions.


There are two lines of history we can follow. One is the evolution of “building block” in the Enterprise Architecture discipline; the other is how a building block comes about in a concrete EA practice. If this were about species, a biologist would call the first line phylogenesis, and the second one ontogenesis.

Following the first one would probably reveal interesting things but would require an effort that the concept does not justify for me. That’s why I’ll limit tracing that history to the hypothesis about the origin I discussed in the Name section.

A building block in a concrete enterprise architecture practice typically comes as a specialisation of a generic element. That generic element, called concept, stereotype or construct, is based on assumptions, rooted in the history and knowledge of different concerns at the moment of the meta-design, about which are the important patterns that need to be represented. These are, if we take ArchiMate as a popular example, things like Role, Event, Process, Service, Interface, and Device. Their application, however, is limited by three restrictions:

  • first, when an entity is represented it has to be as one of these types;
  • second, the criteria for making the choice come from interpreting the definitions of these concepts and not from what the represented real-world object has or does (there are generalised relations, but they are not used for that);
  • third, something cannot be represented as more than one concept.

So a building block, being a specialisation of a concept asserted in this way, inherits both this set of restrictions and everything else that comes from the object-oriented paradigm, which the use of “specialise” and “inherits” already implied.


The use of building blocks in Enterprise Architecture is either directly supporting, or historically linked with, reusability.

A building block represents a (potentially re-usable) component of business, IT, or
architectural capability that can be combined with other building blocks to deliver
architectures and solutions. (TOGAF 9.1, p.11)

In order to support reusability, building blocks should have the right level of granularity. Basically, they need to balance flexibility and efficiency. In order to be reused, which means recombined, they shouldn’t be too coarse-grained, otherwise there won’t be enough flexibility. On the other hand, if they are too fine-grained, it won’t be efficient to maintain reference architectures.

Now, this seems key to understanding the popularity of “building block”. Enterprise Architects have to justify their reason-to-be. As both reusability and flexibility make a good business case, the stabilisation of that concept could be due simply to survival pressures. And then Enterprise Architecture, in order to exist as such, should not only be allowed and fed but should have ways to assert itself as a social system. And “building blocks” provide one of the conventions through which the network of communications within the EA practice sustains its identity.

Fine then, problem solved: Enterprise Architecture – being predominantly practised the way it is now – needs concepts such as “building blocks” in order to survive as an economic entity and as a social system.

And yet Enterprise Architecture can be practised, and can bring more benefits, without building blocks, layers, tiers and suchlike. It can be done by not reducing what is learned in the process to what was decided back when much less was known, at the moment the meta-model was fixed. The objects that need to be described and designed can get their type from how their descriptions satisfy the conditions for classification. And last but not least, it could be done entirely in the language that domain experts use, and “building block” is not used by them, at least not in the service industry. When it is used by actual builders, I doubt they mean “package of functionality”.

What’s Wrong with Best Practices?

That is what several people have asked me in the last few weeks. I even made one quick attempt to answer, but then I realised that the topic deserves more clarification.

While I have a lot of sympathy for those that object to ‘Best Practice’ as a name – and even more for those that object to it as a claim – my uneasiness is somewhat different. I’ll try to briefly explain what I know about it.

Some best practices are very useful. In fact, most mature and adequately applied best practices for achieving some technical task, from painting a wall and repairing an engine to building a factory or a ship, are indeed valuable, as long as they don’t suffocate innovation.

The real problem is when best practices are applied to people and social systems. I call this a ‘problem’, but it is in fact a huge opportunity for many. Most contemporary non-fiction books, especially management and self-help texts, have been seizing this opportunity extremely well. It’s not easy to find a best seller in this category, or a popular article, that doesn’t provide some sort of prescriptions and advice, often numbered, on how to achieve or avoid something. Maybe it is a best practice for best sellers. Let’s give it a try then:


Four Reasons Why You Should Be Cautious When Applying Best Practices:


1. Correlations.

How do Best Practices come about? Some individuals or organisations become known, or are later made known by the actions of best practice discoverers and proponents, as successful according to some norms. Let’s call these individuals or organisations ‘best practice pioneers’. Then one or more observers, the ‘best practice discoverers’, study the pioneers to find out what made them successful. The discoverer first takes certain effects, and then selects, by identifying commonalities, what she or he believes were the causes. That is followed by a generalisation of the commonalities, from which point they begin their life as prescriptions, regardless of whether they are called ‘best practices’, ‘methodologies’, ‘techniques’, ‘recipes’, ‘templates’, or something else. Later they are tested and, based on feedback, reappear in more mature forms and variations, in some cases supplied with scalability criteria and conditions for successful application. The successes and failures of those that apply them give birth to Best Practices on applying Best Practices.

The problem is that the discoverers select common patterns among the observed successful pioneers, and infer a causal relation between these commonalities and whatever served as the criteria for selecting that set of pioneers in the first place. Such correlation then leads the discoverer to select what to pay attention to, and every pattern that supports the hypothesis based on the selected commonalities is preferred over those that don’t.

2. The risk of over-simplification.

Best practices help us deal with external stimuli, when they are too many to handle, by prompting which ones to pay attention to and how to react to them. A situation can only be dealt with when the variety of responses is at least as high as the variety of the stimuli, within a given goal set (Ashby’s law of requisite variety). Or in other words, best practices are tools for reducing external variety. But not only that; they also provide means for amplifying internal variety in a special way – coupled with those stimuli that you are advised to pay attention to. So if that assumption was wrong, and it often is, neither is the external variety reduced, nor the internal amplified.
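
In the usual logarithmic statement of the law (my shorthand, not part of the original text):

    V(outcomes) ≥ V(disturbances) − V(responses)

the variety of outcomes cannot drop below the variety of disturbances minus the variety of the regulator’s responses; keeping outcomes within the goal set means the responses have to supply the difference.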

3. Assumptions about the application context.

I used to be a practitioner of PRINCE2. What I still appreciate are a few smart techniques and, in fact, the name itself. The name contains the important disclaimer I miss in most methodologies and other types of best practices: they only work in controlled environments. They work when most of the conditions of design-time, if you allow me the IT jargon, remain unchanged at run-time. This is rarely the case, and increasingly less so. Which brings up another interesting phenomenon: the same conditions that make the world less predictable also help quickly productise and spread best practices. They come with better marketing and with more authority, in a world in which less and less of what has happened can prepare you for what will.

4. The habits created by Best Practices.

The worst is when people hide behind the authority of best practices or their proponents. Short of that, best practices create the habit of first looking for best practices, instead of thinking. And then there is the opportunity cost: the more time people spend on learning best practices, the less time they have for developing their senses for detecting weak signals and their capabilities for new responses.

In summary, if you are sure that a certain best practice is useful, that it’s not based on wrong inference, that it does not lead you to dismiss important factors, that the situation you are in is not complex, and that it doesn’t weaken your resilience, then go ahead, use it.


The Pathologies of Silo-fighting

The division of labour has been the main principle for structuring organisations in the last two centuries. It is still the dominant approach for allocating resources, information and power in companies and public institutions. The new dynamics of a connected world have revealed a rich spectrum of problems related to these structures, ranging from ineffective coordination to turf wars. Hence came the reaction, bringing the conventional stigmatising label ‘silos’ and the birth of a whole industry of silo-fighters armed with silo-bridging or silo-breaking services, methods and technologies.

There are various organisational pathologies brought about by these silo-fighters. Here I will point out only the two I see most often. The choice to classify silo-fighters into silo-bridging and silo-breaking – an obvious oversimplification – is made to support the illustration of these two pathologies.

‘Bridging the silos’ is usually not a strategy based on appreciating their role and thus opting for bridging over breaking. It’s mainly due to the silo-fighters’ insufficient resources and power. If they manage to successfully sell the story of the bad silos, which comes with a rich repertoire of metaphors such as walls, chasms, stove-pipes, islands and suchlike, then they get permission to build – as such a narrative would logically suggest – bridges between the silos.

Now, the problem with bridges is that they are either brittle and quickly break, or strong enough to defend their reason to be. They break easily when they fail to channel resources for longer than the patience over their initial failures would allow. Otherwise, identity formation switches into viability mode, and they grow out of the network of decisions supporting their mission, now turned into an ongoing function. If the reason the bridges exist is the silos, and the bridges want to keep on bridging, then the silos have to be kept healthy and strong, as they are what the bridges hang on to.

The bridges reinforce and perpetuate themselves up to a point at which they are recognised as silos, and then the problem is very often solved by building new bridges between them. This is how a cancerous fractal of bridges starts to grow. As attractive as this hyperbole is, I have repeatedly witnessed only two levels of recursion, but isn’t that bad enough?

In contrast, the silo-breaking strategies want nothing less than the destruction of the silos. There, the silos are seen only in their role as a problem. Nobody asks what kind of problems this problem was a solution to. Instead, this type of silo-fighter starts waging exhausting wars. The wars can end in several ways. A common one is resource depletion. Another is with the silos withstanding, or with the silo-fighters being chased away or transformed. And then, of course, there could be victory for the silo-fighters. And this is when disaster strikes. With the silos down, the silo-fighters face all the problems that were being continuously solved by the silos during their life-span. Usually they have no preparation to deal with those problems, nor do they have the time to come up with and build alternative structures.

When discussing these two pathologies, it is very attractive to search for their root cause and then, once found, fix it. But that would be exactly the fuel these two types of silo-fighters run on. It takes a deeper understanding of the circularity of and in organisations to avoid this trap. By ‘understanding’, I mean the continuous process, not the stage after which the new state of ‘understood’ is achieved. And it takes, among other things, the ability to be much more in, and conscious of it, and at the same time much more out, but only as a better point for observation, not as an attempt to exclude the observer.


The Role of Meaning and the Meaning of Roles

Let’s start with roles. ‘Role’ comes from ‘roll’, as it was on a paper roll that the actor’s part was written. It is about something prescribed and then performed. But it evolved from roles performed as prescribed, through those that were not, to the performing of roles that had never been prescribed at all.

Roles are about relations. In fact, in Description Logic, roles play the same role that relation, association, property and predicate play in other formal languages. If John has a son George and is 30 years old, then George is in the role of a son for John, and even 30, although not an actor, plays the role of an age for John. Roles are inherently relational. A relation to itself or to something other. There is never just a role; there is always a ‘role in’ or a ‘role for’.

Roles are not just relational but are often determined by the dynamics of interactions. Here’s a handy example. The role of this text, with you in the role of a reader, will be, as assigned by me in the role of a writer, to transfer my thoughts on the meaning of roles; but it will rather trigger the construction of both similar and complementing thoughts of yours, and by doing so it will play a different role, one which, while evoked by me, is determined by you.

Now that there is something about the meaning of roles, it’s time to introduce the role of meaning. If the role is always a role-in, my interest here is in the role of meaning in living and social systems. These systems have some things in common. One such thing is that they have internally maintained autonomy. Such autonomous systems bring forth and co-evolve with their niches. And that happens through the creation of meanings, which motivate attitudes and actions. But how does meaning come about?

Before meaning, there is the primary cognitive act of making a distinction, bringing up something out of its background, distinguishing a thing from that which it is not. That act determines and is determined by the dynamics of the interaction between the system and its niche (by ‘system’ here I will only refer to systems with internally maintained autonomy). These dynamics are circular and recursive. Roughly speaking, it goes like this:

The act of distinction, the very making of difference, changes the making of difference and brings forth “the difference that makes a difference”, in Bateson’s words, or the “surplus of significance”, in Varela’s words, which is also co-dependent: the sense made changes the sense-making, which changes the sense that is made, and then brings forth the behaviour-changing relation between the sense-maker and the meaning of the distinguished element. This results in an attitude of attraction, aversion or neutrality (or something more sophisticated, like staying put, paralysed by equal amounts of temptation and fear). Or, in other words, the sense-making transcends into value-making. The value-making evolves itself and, by downward causality, influences the evolution of the distinguishing and sense-making capability, which is in fact again a distinguishing capability, but now distinguished as a second-order one.

Before going further, I need to make some clarifications on the use of ‘sense-making’ and ‘value-making’. ‘Sense-making’ is giving meaning to what is experienced. Here I use it with an emphasis on the act of making, of creation, or more precisely co-creation, of sense. It is not that the sense is out there and all we need to do is disclose it. No, we (or whatever the system in focus is) are the ones that actually make it; the origin is within us, or rather within the dynamics between us and what we interact with.

‘Value-making’ should not be confused with value-adding. The system makes a distinction of a higher order, in the sense of directing its behaviour based on this distinction, hence the choice of ‘value’. It is not specified by the distinguished element; it is determined by the internal structure of the system and the dynamics between the system and the environment it’s structurally coupled with. This, and the fact that value-making is sense-making of a higher order, is where the preference for ‘making’ comes from.

Only a small part of the environment, a niche, is constantly changing its content, is being interacted with, and actually matters. The niche is not one niche but a network of dynamically changing niches that influence each other. A family is one niche, but that’s not the family described by somebody who knows all the facts, nor the family which would be invariant for each of the family members. The family as a niche is the subjective construction of interactions, memories, emotions, attention and imagination, unique for every member of the family, as long as they consider themselves as such. This could easily exclude actual members and include those that have a lasting experience as such. The same applies to the work circle and the friends circle, to all occasional and recurring encounters, and to virtual communities. But then, apart from interactions with other humans, we also have a niche made out of the air we breathe, the grass we walk on, the stairs we climb and descend. And all that, changed by us, is changing us and changing the way we change it. But those changes, as long as a system is viable, serve to protect the identity from changing. We take things from our niche to make them into more of ourselves, and we become better at doing so.

Unlike the air, the grass and the stairs, a family or an organisation is an autonomous system which maintains its identity. They have their own niches. Moreover, they are social systems. There are different ways to look at them. One way would be as autopoietic systems of communications, having humans as their niche (Luhmann); another would be to have humans as both sub-systems and niche, depending on whether the processes they participate in are part of the closed network of processes creating the identity of the system in focus. And that can be talked about in terms of the roles humans play.

Now we are ready to see what the role of meaning has to do with the meaning of roles. A living bacterium is at a stage of development where sense-making has transcended into value-making. It does not only distinguish an environment with low sucrose concentration from one with high; it prefers and moves towards the latter. As Evan Thompson put it, while sucrose is a “present condition of the environment, the status of the sucrose as a nutrient is not”; “it is a relational feature, linked to the bacterium’s metabolism”. And this is how the creation of meaning and roles are co-dependent. The dynamics between the bacterium and its environment, by making sucrose relevant for the viability of the bacterium, realise the role of sucrose as a nutrient.

But that’s not relevant only for cells and living organisms. It applies to social systems as well. A company acts so as to turn some part of the environment into employees, another into clients, partners and so on. And for the same reason as the bacterium: to maintain its viability.

Once the formal role of an employee is assigned by a contract, less formal roles are enabled by different mechanisms. It is now common to refer to such mechanisms as ‘Governance’, or as an essential part of it. That part is a meta-role assigned to some people to determine the roles of others. One and the same person often plays several roles.

Roles can be determined by formal assignment, by methodology, or by a Governance body, but they can also be invented and self-assigned by the actors themselves, as is typical for the organisations researched by Frederic Laloux. In that case, the evolutionary nature and the granularity of the roles both deal with the pathologies of assigned roles (being status currency, perpetuated even when obsolete, determined by politics and so on) and make organisations more responsive to change. Again the system, by what it does, takes from its environment what it needs to make more of itself, recursively: it turns non-employees into employees and vice versa, and employees take up and leave roles, as determined by the dynamics of their interactions.

And so it seems that the meaning of roles is continuously realised by the role of meaning as a way in which a system generates its niche by asserting itself and maintaining its viability.

Language and meta-language for EA

That was the topic of a talk I gave in October last year at an Enterprise Architecture event in London. These are the slides, or most of them, anyway.

They probably don’t tell the story by themselves, and I’m not going to help them here unless this post provokes a discussion. What I’ll do instead is clarify the title. “Language” refers to the means of describing organisations. These could be different. Given the current state of maturity, I have found those based on description logic to be very useful. What I mean by the “current state of maturity” is that a method – in its theoretical development, its application, the technologies supporting it and the experience with their application – justifies investments in utilising it and helping its further development. Although I find such a language clearly superior to the alternatives in use, that doesn’t mean there are no issues, nor that there are no approaches showing convincing solutions to those issues. However, the practice with the latter, or with the available tools, doesn’t give me enough reason to stand behind them. The situation with the “meta-language” is similar, but let’s first clarify why I call it that.

Metalanguage is commonly defined as language about language. If that were the meaning I intended, it would have that type of relation to the language, and then probably what I’m writing here could have been referred to as a mixture of another meta- and a meta-meta-language. It wasn’t the meaning I intended. But to clarify the intended meaning of “meta”, I need to first clarify “language”.

I have found that there is a need to describe properly the “objects” that people in organisations are concerned about, and how those objects relate to each other. That could be some way of representing physical things such as buildings, documents and servers, or abstract concepts such as services, processes and capabilities. And although it covers abstract things as well, I sometimes call it a “language for the substance”.
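To give a flavour of what such descriptions amount to, here is a minimal sketch. The entity names and relations are invented for the example, and a real description-logic-based language would add classification and reasoning on top of such facts:

```python
# Organisational "substance" as subject-predicate-object triples.
facts = {
    ("InvoicingService", "is_a", "Service"),
    ("InvoicingService", "realised_by", "BillingProcess"),
    ("BillingProcess", "is_a", "Process"),
    ("BillingProcess", "runs_on", "Server42"),
    ("Server42", "is_a", "PhysicalThing"),
    ("Server42", "located_in", "BuildingA"),
}

def related(subject: str, predicate: str) -> set:
    """All objects linked to `subject` via `predicate`."""
    return {o for s, p, o in facts if s == subject and p == predicate}

print(related("InvoicingService", "realised_by"))  # {'BillingProcess'}
```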

Organisations are autonomous and adaptive systems, continuously maintained by their interaction with their niche, the latter being brought forth from the background by that very interaction. While a language such as the one proposed can be useful for understanding the components of an organisation, it doesn’t help much in understanding the dynamics and the viability. The language for the substance cannot be used to talk about the form. That’s why there is a need, maybe temporary until we find a better solution and probably a single language, for another language, and that other language is what I called the meta-language in the presentation.

As this is a language for the form, I keep looking for ways to utilise some proposals, one of the most fascinating being George Spencer-Brown’s Laws of Form (this post includes a brief introduction). Papers like this one by Dirk Baecker give me hope that it is possible. Until then, for the purposes of Enterprise Architecture, I find the Viable System Model, with the whole body of knowledge and practice associated with it, to be the most pragmatic meta-language.

Essential Balances in Projects

These are some of the frames from the projects flavour of the “Essential Balances” theme, delivered in a workshop format at a training event yesterday in Athens.

Note: new versions of the workshop slide deck will appear in the frame above as soon as I update the file on SlideShare. That should explain why the first slide may refer to a different event instead of Athens and July.

Redrawing the Viable System Model diagram

I’ve been arguing repeatedly that trying to get the Viable System Model from overviews, introductions and writings based on it or about it can put the curious mind in a state of confusion or simply lead to wrong interpretations. The absolute minimum is reading each of the three books explaining the model at least once. Better twice. Why? There are at least two good reasons. The obvious one is to better understand some points and pay attention to others that were probably missed during the first run. But there is also another reason. Books are linear in nature, and when they tackle non-linear subjects, a second reading gives the chance to better interpret each part of the text while having in memory more or less all the other parts that relate to it.

Still, one of the things expected to be most helpful is in fact what brings about confusion, aversion or misuse: the VSM diagrams. They clearly favour expected ease of understanding over rigour, and yet on some important points they fail at both. Here is my short list of issues, followed by a description of each:

  • Representation of the channels
  • Confusion about operations and their direct management
  • Notation and labelling of systems
  • They show something in between a generic and an example model
  • Hierarchical implication

Representation of the channels

Stafford Beer admitted several times in his books the “diagrammatic limitations” of the VSM representations. Some of the choices had to do with the limitations of 2D representation and others, I guess, aimed to avoid clutter. Figure 26 of “The Heart of Enterprise” is a good example of both. It shows eleven loops but implies twenty-one: 3 between environment, operations and management, multiplied by 3 for the choice of showing three operations, then another 9 = 3×3 for loops between same-type elements, and finally 3 more between operation management and the meta-system.
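Spelled out as arithmetic, following the counting above for the three operational elements shown in the figure:

```python
n = 3                     # operational elements shown in Figure 26
env_op_mgmt = 3 * n       # environment-operations-management loops,
                          # 3 per operational element
same_type = 3 * 3         # loops between same-type elements
to_metasystem = n         # operation management to the meta-system
print(env_op_mgmt + same_type + to_metasystem)  # 21, though only 11 are drawn
```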

Confusion about operations and their direct management

Depending on the context, System One refers either to the operations or to their direct management. In some diagrams S1 is the label of the circles, in others of the squares linked to them. Referring to one or the other in the text, depending on which channels are being described, only adds to the confusion. That is related to the general problem of

Notation and labelling of systems

All diagrams representing the VSM in the original writings, and all interpretations I’ve seen so far, suggest that circles represent System One, and that triangles pointing up and down represent System Two and System Three* respectively. Additionally, most VSM overviews state exactly that in the textual description. My assertion is almost the opposite:

Neither what is labelled as S1 nor what is shown as circles actually represents S1.

That might come as a shock to many, and yet, citing Beer, System One is not the “circles” but:

The collection of operating elements (that is, including their horizontal and vertical connexions)

The Heart of Enterprise, page 132

Well, strictly speaking, a system is a system because it shows emergent properties, so it is more than a collection of its parts. But even referring to it as a collection reveals the serious misinterpretation of taking only one of its parts to represent the whole system.

They show something in between a generic and an example model

Communicating such matters to managers trained in business schools wasn’t an easy task. It is even more challenging nowadays. There is a lot to learn and even more to unlearn. It is not surprising, then, that even in the generic models typically three operations are illustrated (the same goes for System 2). Yet I was always missing a truly generic representation, or what many would prefer to call a “meta-model”.

Hierarchical implication

It can’t be repeated enough that the VSM is not a hierarchical model, and yet it is often perceived and used as such, or not used precisely because of that perception. It seems that recursivity is a challenging concept, while anything slightly resembling a hierarchy is quickly taken to represent one. And sadly, the VSM diagram only amplifies that perception, although the orthogonality of the channels serves an entirely different purpose, something Stafford Beer rarely misses an opportunity to point out. Nevertheless, whatever is positioned higher implies seniority, and the examples of mapping to actual roles and functions only help to confirm this misinterpretation.


There are other issues as well, but my point was to outline the motivation for trying alternative approaches to modelling the VSM, without altering the essence or the governing principles. Here is one humble attempt to propose a different representation (there is a less humble one which I’m working on, but it’s still too early to talk about it). The following diagram favours a circular instead of an orthogonal representation, which I hope at least succeeds in destroying the hierarchical perception. Yet, from a network point of view, the higher positioning of S3 is chosen on purpose, as the network clearly shows that this node is a hub.


System One is represented by red colouring, keeping the conventional notation for the operations (S1.o) and their direct management (S1.m). As mentioned above, apart from solving this, the intention was to have it as a generic model. If that poses a problem for those used to the hybrid representation, here’s how it would look if two S1s are shown:


Circular Network View of the Viable System Model, with two operational elements shown
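To make the network reading concrete, here is a small sketch, assuming the networkx library is available. The edge list is my simplified interpretation of the channels (the environmental loops are left out), not a complete account of the model, and the node names follow the notation above. It shows that S3 indeed comes out as the hub:

```python
import networkx as nx

N_OPERATIONS = 2                      # generic: change to model more S1s

G = nx.Graph()
for i in range(1, N_OPERATIONS + 1):
    G.add_edge(f"S1.o{i}", f"S1.m{i}")   # operations and their direct management
    G.add_edge(f"S1.m{i}", "S2")         # coordination channel
    G.add_edge(f"S1.m{i}", "S3")         # central command axis
    G.add_edge(f"S1.o{i}", "S3*")        # audit channel
G.add_edge("S3*", "S3")
G.add_edge("S2", "S3")
G.add_edge("S3", "S4")
G.add_edge("S4", "S5")

# The node with the highest degree is the hub of the network.
print(max(G.degree, key=lambda node_deg: node_deg[1]))  # ('S3', 5)
```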

I hope this proposal solves, fully or partially, the five issues explained earlier, and brings a new perspective that can be insightful on its own. In any case, the aim is to be useful in some way. If not as it is now, then by triggering feedback that might bring it to a better state. Or it can be useful just by provoking other, more successful attempts.