SASSY Architecture

SASSY Architecture is a practice of combining two seemingly incompatible worldviews. The first one is based on non-contradiction and supports the vision for an ACE enterprise (Agile, Coherent, Efficient), through 3E enterprise descriptions (Expressive, Extensible, Executable), achieving “3 for the price of 1”: Enterprise Architecture, Governance, and Data Integration.

The second is based on self-reference and is a way of seeing enterprises as topologies of paradoxical decisions. Such a way of thinking helps deconstruct constraints to unleash innovation, reveal hidden dependencies in the decisions network, and avoid patterns of decisions limiting future options.

As a short overview, here are the slides from my talk at the Enterprise Architecture Summer School in Copenhagen last week.

You can download the slides from SlideShare but you can’t see the animations there and they can help in getting more of the story. To see the animations, play the presentation on YouTube.

Related posts, talks, and workshops:

Language and meta-language for EA

Wikipedia “Knows” more than it “Tells”

Reasoning with Taskless BPMN

Productive Paradoxes in Projects

Essential Balances in Organisations

QUTE: Enterprise Space and Time


QUTE: Enterprise Space and Time

Here’s another pair of glasses with which to look at organisations. It can be used either together with the Essential Balances or with the Productive Paradoxes, or on its own. For those new to my “glasses” metaphor, here’s a quick intro.

The Glasses Metaphor

As I’m sceptical about the usefulness of methodologies, frameworks and best practices when it comes to social species, my preference is to work with habits and, instead of using models, to use organisations directly as the best model of themselves.

The best material model of a cat is another, or preferably the same, cat.

N. Wiener, A. Rosenblueth, Philosophy of Science (1945)

What I find important in working with organisations is breaking free from some old habits by replacing them with new ones. And most of all, cultivating the habit of being conscious of the dual nature of habits: that they are both enabling and constraining; that while you create them, they influence the way you create them. Along with recipes and best practices, I’m also sceptical about KPIs, evidence-based policies, and all methods claiming objectivity.

Objectivity is a subject’s delusion that observing can be done without him. Involving objectivity is abrogating responsibility – hence its popularity.

Heinz von Foerster

Instead of “this is how things are”, my claim is that “it’s potentially useful to create certain observational habits”. Or – and here comes the metaphor – the habit of observing through different pairs of glasses. “Different” implies two things. One is that you are always wearing some pair of glasses, whether you realise it or not. The other is that offering a new pair is less important than creating the habit of changing glasses from time to time.

I prefer the “glasses” metaphor to the “lens” metaphor, and here’s why. Glasses do have lenses, and lenses are meant to improve the vision or, at any rate, change it. Quite often, the glasses I offer bring surprises. Where you trust your intuition, you might see things that are counter-intuitive, and where you’d rather use logic, they might appear illogical. That’s not intentional; it just often happens to be the case. The first reason I prefer the glasses metaphor to just a lens is that glasses have frames. That should be a constant reminder that every perspective has limitations, creates a bias, and leaves a blind spot. Using the same glasses might be problematic in some situations, or in all situations if you wear them for too long. The second reason is that glasses are made to fit; they are designed for our bodies. They wouldn’t fit a mouse, for example, or even another person. This has far-reaching implications, which I’ll not go into now.


QUTE stands for “Quantum Theory of Enterprise”.

Quantum, as it is inspired by quantum mechanics, with a focus on the method of questioning rather than the answers, and with an understanding of multiple potentialities, context dependence, and the impact of measurement. Yet it is inspired by, rather than drawn from, quantum mechanics: there will be no attempts to prove analogies, nor will the arguments be guided by an analogy hypothesis.

Theory should be understood in Maturanian terms:

We human beings propose theories as systems of explanation of what we distinguish as happening in what we observe or do in the realization of our living. Theories are systems of logical deductions that we propose in order to follow the consequences that would arise in a particular situation if we transformed everything in it around the conservation of some set of basic premises that we choose to adopt – either because we accept their validity according to some logical argument or, a priori, because we like them.

Humberto Maturana

QUTE uses Enterprise instead of organisation for three main reasons. First, it is not only about things that are organisations but also about those that have an organisation. Second, as organisations can be defined in terms of their distinction from and coupling with the environment, the horizon is beyond organisations. And third, from the definition of Enterprise in The Law Dictionary, I like “purpose” and “incorporated or not”, but also the implication that it is transient. Being transient is somehow close to Weick’s concept of “impermanent organization”:

To depict impermanent organizing is to presume that people have agency, that there is an ongoing dialectic between continuity and discontinuity from which events emerge, that humans shape their circumstances, and that minds and selves emerge from action.

Space and Time

We can talk about Space and Time only through language, and most of our thinking is restricted by language too. The affordances of language for talking about Space and Time are shaped by our structure, life and history. It is paradoxical that, on the one hand, we have difficulty imagining Time as just another dimension, not so different in nature from Space, while at the same time we can only talk about Time as if it were Space. We draw time as distance, we travel in time, time is long and short, and we fear that the end is near. Lakoff and Johnson provided plenty of strong arguments that we are missing a concept of time in itself. Our understanding of time is relative to other concepts such as space and motion. Not only that, but the perception of time has a fixed spatial orientation in most cultures. We think and talk about the past as being behind us and about the future as being ahead of us. This fixation reinforces, and is reinforced by, other habits, such as forecasting by extending the past to infer the future. There is not much we can do about talking of Time in terms of Space apart from being conscious of it. But we can mentally reverse the arrow of time and start thinking about time as flowing in the opposite direction, from future to present to past. This is what the Quantum Approach to Time and organizational Change (QATC) applies. This theory invites us to look at the future as potentialities collapsing into the present through the particular selection of the context. The authors of QATC not only provide an elegant application of quantum mechanics in social sciences but also a powerful thinking tool that overcomes many of the limitations of conventional organisational thinking.

Before doing the exercise of mentally reversing the arrow of time, let’s start with an easier one. As we are now used to doing a lot of zooming and panning on our computers and mobile devices, this will help us start using the QUTE Space-Time glasses right away.

A possible movement in enterprise space would be to go mentally from organisation to organisation, or from one function to another in the same organisation. We can call this enterprise space panning. But we can also follow the fractal dimension. We can call this enterprise space zooming. That doesn’t necessarily mean following the hierarchical structure, as there are organisations now that are quite flat but still show fractal properties. We can imagine this zoom-in movement as stepping to the next embedded system that has an internally maintained way to distinguish itself from its environment. Conventionally, that would be a movement from division to department to unit to team to individual. Between team and individual there can be other social identities, some transient, some longer-lasting.

We can move in enterprise time in a similar way. For example, looking at all changes in a billing process would mean looking at the history of changing the way billing was done in that organisation. Depending on the preferred sensitivity, we can distinguish a different number of changes. Let’s say we see 7 big changes. Then, if we zoom in to see only the last change, we may count the number of instances of the billing process since the current structure was implemented. Let’s say 3 thousand. And then we can zoom in again to review a single instance: its start and finish, overall duration, the durations of each process step, accumulated waiting time and whatever else is of interest. This is an example of enterprise time zooming. But just as with space, there is also enterprise time panning. We already do that by comparing different future scenarios.
However, these scenarios are often an extension of the path from past to present with some variables changed. Then we are often surprised when we see totally unexpected, discontinuous change. In other words, it’s easy to imagine different trajectories calculated by the same kind of function, but difficult to imagine one drawn from an entirely different function. And even if we do, what will it be a function of? And if we eventually find out, what will we do with these arguments without any historical data? This is where we need to take out some constraints to imagine possible futures, to understand the interaction of processes and constraints that enact the present, and to look at the past not only for what happened but also for what might have happened.

Once we know these four basic ways to use the QUTE Space-Time glasses, we can start using them.

The Space-Time glasses can be used together with the Autonomy-Cohesion glasses. Working in organisations can be looked at as a triple loss of freedom: where you want to be, what you want to do and how you want to do it. An easy way to understand pathologies of insufficient autonomy is to look for a tight coupling of what and how obscuring the way. The distinction between why, what and how is a crude one, and to be useful, it’s important to understand its relativity. Whether something is the why, the what or the how depends on how we look at it. Here’s where the Space-Time glasses come in handy. In Enterprise Space, taking a traditional hierarchical structure, we can easily see that the what for a certain level in focus is the how for the level above, and the why for the level below. Respectively, the what for the level below is the how for the level in focus. In the time dimension, we can see the why, the what and the how sequentially replacing each other in the network of decisions. And this is the case regardless of whether we look at the organisation as a whole or at the work of an individual person. At the time of taking a decision, it’s a matter of choosing what, enabled by the choice made previously, which is the why from the current perspective, and was the what back then. Once what is decided, it needs and enables how, which will be the next what, and so on.

Speaking of decisions, the distortion of Autonomy-Cohesion balance is related to what type of decisions are taken, where and by whom.  In space, the autonomy of a team or a department from the decisions of the structure they are part of, is just as important as that at the lower or higher level. And then, it’s equally applicable in the time dimension, for example to rules, as they are based on decisions made earlier, possibly by people that those affected by the decisions don’t even know.

Decisions are even more interesting to look at if we put the Productive Paradoxes glasses on. Then we look at organisations as social systems. According to Luhmann’s social systems theory, organisations are made of, and constantly reproduced by, a network of decisions. Decisions are paradoxical in their nature, and in three ways. First, decisions have an inherent contingency. They are neither impossible nor necessary. Every decision could have been decided otherwise. Probably the most concise formulation of this paradox was made by Heinz von Foerster: “Only those questions which are in principle undecidable, we can decide”. The others are simply calculations. The second paradox is that once a decision is made, it is only a potential decision until it is chosen as a decision premise by a subsequent decision. At the same time, that subsequent decision can only be made if the potential decision happened as a communication event. In that sense, if at a moment T1 a communication event represents a potential decision A, it enables decision B, which, when taken at moment T2, produces the previous decision A, taken at moment T1, by selecting it as its decision premise. If A is not used as a decision premise by some subsequent decision, it is simply regarded as “organisational noise”. Decision A enables and is produced by decision B, and by the same pattern, A produces its premises backwards and B enables subsequent decisions. The third paradox is that what is regarded as a decision is also a decision. The deparadoxization of decisions is the driving force of organisations, and it happens both in space, through functional differentiation, and in time, by postponing one of the conflicting elements.

Decisions are the main tool to handle complexity and absorb uncertainty in organisations. Looking at this process with the Stimuli-Responses glasses on, we can easily see other tools for variety attenuation. But there are some that are not so easy to see unless we also put the Space-Time glasses on. One such tool is what Brunsson calls “organisational hypocrisy”.  Organisations need to meet the demands of various stakeholders and these demands are often in conflict. Meeting conflicting demands might be impossible in space but is somehow made possible in time. Let’s imagine three groups of stakeholders, which are happy with X, Y, and Z respectively. If the organisation talks X, it makes the first group happy, but then by deciding Y, it makes the second group happy, and later on, when doing Z it will make the third group happy.

Hypocrisy is a way of handling conflicts by reflecting them in inconsistencies among talk, decisions, and actions.

Nils Brunsson

And last, following QATC, we can overcome some of the limitations of the prevailing thinking on the conceptualisation of time, the nature of change, the reliance on existing practices and the usefulness of performance measurement. QATC suggests using probability waves (aka wave packets) as a metaphor for organisational processes. There are many possible outcomes, since the future has many potentialities. The analogue from quantum mechanics is the superpotentiality state, “in which many possibilities are in an indefinite state but have the potential to occur when influenced by a specific context”:

when conjoined with a particular context, this superpotentiality state will collapse because of experienced constraints to create a specific experienced reality, much like an electron appearing in a defined position in space upon measurement

Lord, Dinh, Hoffman

Using the QUTE Space-Time glasses, we need to see time flowing from future to present, so that the particular state (on one of the many paths, see enterprise time panning above) is selected, or rather – enacted – by the particular configuration of organisational unit, context and technology (understood broadly). Most of the potentialities are not directly experienced and can be seen, through retrospection, as something that might have happened. QATC proposes using counterfactual thinking to examine what could have been different. This is very similar to Luhmann’s use of contingency to indicate “neither necessary nor impossible”. According to QATC, counterfactual thinking can help “to understand how a present is selected from many different potential alternatives that once existed in the future, rather than how the past leads to one future state”. The particular interaction of constraints and processes in an organisation makes certain potentialities develop more than others. Those that develop more form attractors that channel processes in definite and consistent ways. This is also what we see using the Stability-Diversity glasses, and it is highly relevant for maintaining the Exploration-Exploitation balance (see Essential Balances in Organisations).

I hope this is enough, for now, to start observing organisations with this pair of Space-Time glasses. Give them a try and share your findings. There is more to elaborate on regarding their application to decisions, change “management”, trust, detecting weak signals and dealing with uncertainty. It would be interesting, drawing now more from relativity theory, to check if and how space-time curvature increases with the size of organisations. And then beyond (yet always within) Space-Time, QUTE can bring a useful perspective on emotions, communication, measurements, best practices, and power.

Wikipedia “Knows” more than it “Tells”

When pointing out the benefits of Linked Data, I’m usually talking about integrating data from heterogeneous sources in a way that’s quite independent of the local schemas and not fixed to past integration requirements. But even if we take a single data source, and a very popular one, Wikipedia, it’s easy to demonstrate what the web of data can bring that the web of documents can’t.

In fact, you can do it yourself in less than two minutes. Go to the page of Ludwig Wittgenstein. At the bottom of the infobox on the right of the page, you’ll find the sections “Influences” and “Influenced”. The first one contains the list (of links to the Wikipedia pages) of people that influenced Wittgenstein, and the second – those that he influenced. Expand the sections and count the people. Depending on when you are doing this, you might get a different number, but if you are reading this text by the end of 2017, you are likely to find out that, according to Wikipedia, Wittgenstein was influenced by 18 and influenced 32 people, respectively.

Now, if you look at the same data source, Wikipedia, but viewed as Linked Data, you’ll get a different result. Try it yourself by clicking here or use this link:

The influencers are 19 and the influenced are 95 at the moment of writing this post, or these numbers if you click now.

Note that the query takes the data from the actual Wikipedia, not from the dump used in the regular DBpedia. Using the same data source as the web of documents and as the Semantic Web gives us different results. It turns out that Wikipedia “knows” more than it “tells”, if asked properly.
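To make the difference concrete, here is a minimal Python sketch of a graph view. The names, the predicate, and the triples are made up for illustration (they are not actual DBpedia terms): the point is that a link asserted on anyone’s page is queryable from both ends, while an infobox only “tells” what is written on that one page.

```python
# Hypothetical miniature "graph view" of Wikipedia. The predicate name and
# the people listed are placeholders for illustration, not real DBpedia data.
triples = [
    # asserted on Wittgenstein's own page (what his infobox "tells")
    ("Wittgenstein", "influencedBy", "Frege"),
    ("Wittgenstein", "influencedBy", "Russell"),
    # asserted on other people's pages (what only the graph "knows")
    ("Kripke", "influencedBy", "Wittgenstein"),
    ("Anscombe", "influencedBy", "Wittgenstein"),
]

def influencers(person):
    """People the graph says influenced `person`."""
    return {o for s, p, o in triples if s == person and p == "influencedBy"}

def influenced(person):
    """People `person` influenced, including links asserted only on the
    influenced person's own page."""
    return {s for s, p, o in triples if p == "influencedBy" and o == person}

print(sorted(influencers("Wittgenstein")))  # ['Frege', 'Russell']
print(sorted(influenced("Wittgenstein")))   # ['Anscombe', 'Kripke']
```

A page-by-page reading would miss the two “influenced” results entirely, because they are only written down on the other people’s pages; the graph query collects them regardless of where they were asserted.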

Of course, Wikipedia could improve the application logic that updates the pages, but that would be a local patch. The logic is already in the data, and there is no need to add specific rules that would serve one use case but would not be able to envisage many others.

And the result you got from DBpedia-live could be just a starting point for exploring the knowledge graph. Click on one of the results and choose how you want to browse. If you prefer visual exploration, LodLive would bring a nice experience, but for faster browsing use one of the other options.

Or, using relFinder, you can check what goes on between two nodes from the two columns, for example:

Or, you might want to rank all influencers of Wittgenstein by their influence, counting also the influence of those they influenced, and so on, down to the last person known to Wikipedia. We might call this the “reach” of the influencers of Wittgenstein. And for the top ten, we get this. This is another thing that Wikipedia knows, but wouldn’t tell.
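The “reach” computation is simply a transitive traversal of the influence graph. A minimal Python sketch, with a made-up toy graph (the names and edges are mine, for illustration only; in practice this would be a query against DBpedia):

```python
from collections import deque

# Made-up influence edges: influencer -> people they directly influenced
influence = {
    "Frege": {"Wittgenstein", "Carnap"},
    "Russell": {"Wittgenstein"},
    "Wittgenstein": {"Kripke", "Anscombe"},
    "Carnap": {"Quine"},
}

def reach(person):
    """Count everyone reachable through chains of influence links."""
    seen, queue = set(), deque([person])
    while queue:
        for nxt in influence.get(queue.popleft(), ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return len(seen)

# Rank influencers by reach, the way the "top ten" above is ranked
ranking = sorted(influence, key=reach, reverse=True)
print(ranking[0], reach(ranking[0]))  # Frege 5
```

In this toy graph, Frege outranks Russell not because he directly influenced more people, but because his influence propagates further down the chains.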

Now, if you are concerned with a corporate application landscape, imagine how many things your applications know but wouldn’t tell due to the limitations of their application logic, built to meet certain historical requirements, and how many of them know only part of the answer. Getting a complete and accurate one takes investment in interfaces, data warehouses, MDM systems, data lakes and various new and fancy, but in most cases proprietary, methods of data integration and governance.


Productive Paradoxes in Projects

In 2011, when I started this blog, I wanted it to be a place for reading, and as such the initial theme was just a bit busier than this one. I didn’t go that far, but you still don’t see categories, tag clouds, my Twitter feed and so on. It was only recently that I added sharing buttons and started putting in more images. And because I keep it minimal, you might have been reading this blog for some time without knowing about its tagline, as it is simply not visible on the blog. But it’s been there, and when the blog appears in search results, you can see it.

The theme of paradoxes has appeared only a few times, for example in From Distinction to Value and Back and previously in Language and Meta-Language for EA. I haven’t focused on it in a post so far. It was even more difficult to start talking about it to an audience of project managers. First, claiming that projects are produced by and full of paradoxes might appear a bit radical. Second, project managers are solution-oriented people, while in paradoxes there is nothing to solve. There is a problem there, but its solution is a problem itself, the solution of which is the initial problem. And third, talking about paradoxes is one thing, but convincing people that understanding them is useful is another. But I was lucky with my audience at the PMI Congress – very bright and open-minded people. Many of them recognised some of the phenomena I described in their own practice. I don’t know if the slides tell the story by themselves, but here they are:

You can download the slides from SlideShare but you can’t see the animations there and they can help in getting more of the story. To see the animations, play the presentation on YouTube:

In any case, this is just a start. There is a lot to explain, elaborate and further develop.


How I use Evernote, Part 3 – Classification and Wishlist

This is the third and final instalment about Evernote. You may want to check out the previous ones first:

How I use Evernote, Part 1 – Note Creation,

How I use Evernote, Part 2 – Kanban Boards

What is left for this post, is to go over the way I look at and use tags and notebooks and to share the top seven features I miss in Evernote.


Currently I have over six thousand notes in Evernote. To manage them, I classify them; that is, I apply certain criteria to make a note a member of a set of notes. The capabilities of Evernote supporting this are tags, notebooks and search. There are other ways to think about them, not just as different means for classification, but I find this perspective particularly useful.

The nice thing about tags is that they can be combined. I see a note tagged #A as belonging to set {A}, and a note tagged #B as belonging to set {B}. I can find both the intersection, {A} AND {B}, and the union, {A} OR {B}, by selecting “All” or “Any” as the search principle. Most of my notes have between two and four tags, and some have more.
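In set terms, this is a small Python sketch away (toy notes and tags, invented for illustration): “All” is the intersection, “Any” is the union.

```python
# Made-up notes, each with its set of tags
notes = {
    "note1": {"BPM", "EA"},
    "note2": {"BPM"},
    "note3": {"EA", "LinkedData"},
}

def search(tags, principle="All"):
    """Find notes matching the tags: 'All' = intersection, 'Any' = union."""
    wanted = set(tags)
    if principle == "All":
        return {name for name, t in notes.items() if wanted <= t}
    return {name for name, t in notes.items() if wanted & t}

print(sorted(search({"BPM", "EA"}, "All")))  # ['note1']
print(sorted(search({"BPM", "EA"}, "Any")))  # ['note1', 'note2', 'note3']
```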

It is nice that tags can have a hierarchy, but it would have been better if that relation had the same freedom as tags do, or if it could trigger automatic tagging. By having the same freedom, I mean that if there is, for example, a tag #BPMN which is a sub-tag of #BPM, it should be possible to make it a sub-tag of #Notations as well. But that is not nearly as important as the second one. If I tag a note #BPMN, as long as that tag is under #BPM in the tag hierarchy, the note should automatically be tagged #BPM as well, or it should at least appear in the results when searching for #BPM.

This is not in the top seven things I miss in Evernote, so you won’t find it in the wishlist below. Not because I don’t miss it, but because I realise that if such a capability were developed, it would not be used by many. I’m mentioning it here to explain that this limitation is the reason I use the tag hierarchy only for grouping. In fact, this is probably the intended use, similar to notebook stacks. After all, super-tags are referred to as tag categories. Yet, unlike stacks, they can be applied as actual tags, which is what creates my expectation of inferring sub-set relations. Anyway, for the reasons I explained, I use my super-tags just as collection labels for ease of navigation and not as actual tags. I’ll give an example a bit later. The actual tags and super-tags appear flat on the left panel. When I tag a note, if the tag has a super-tag, I add the super-tag as well, unless the note is in a notebook having the same label as the super-tag. This way, by asserting sub-set relations, I maintain a tag taxonomy, which is of course error-prone and is not mapped anywhere (the tag panel would be the natural place). Yet it has served me well so far.
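The automatic tagging I wish for could be sketched like this (hypothetical Python; the tag names are mine, and the single-parent dictionary reproduces exactly the limitation described above, since #BPMN cannot also sit under #Notations):

```python
# Hypothetical tag hierarchy: sub-tag -> super-tag (None = top level)
parent = {"BPMN": "BPM", "BPM": None, "Notations": None}

def expand(tags):
    """Add every ancestor super-tag, so a #BPMN note also matches #BPM."""
    out = set(tags)
    for tag in tags:
        p = parent.get(tag)
        while p is not None:
            out.add(p)
            p = parent.get(p)
    return out

print(sorted(expand({"BPMN"})))  # ['BPM', 'BPMN']
```

With something like this behind the search, tagging a note #BPMN would be enough for it to show up in every search for #BPM, and the manual assertion of sub-set relations would not be needed.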

I’m probably using tags in a similar way as other users, to indicate topic/category, theme, provenance, type of note and suchlike. There is only one application of tags that’s worth reporting, as some people found it exotic. I make author tags and I find them particularly useful for both navigation and searching. As mentioned in Part 1, I keep a “Books” notebook. It contains notes with highlights and annotations or other book references, clips of web pages of books I plan to check out, and in some cases actual electronic books. At first I started using author tags only for notes related to a whole book, but then I went on tagging also some quotes and those article-notes which I use often.

For this author-tagging practice, I find it particularly useful that tags are in alphabetical order and when collected in one category it’s very easy to locate the author, see the number of notes and click on the tag to apply it as filter. This is also an example of the only way I use tag hierarchy: all author tags are under #Authors, but I use #Authors only to keep them in one place, not as an actual tag.

The other classification method is notebooks. Unlike tags, notebooks are disjoint: one note cannot be in more than one notebook. I don’t know what the design decision behind that is. It is indeed the case with physical notebooks, but transferring this limitation to an electronic one seems strange to me, especially when overcoming it does not represent an implementation challenge. Anyway, I have found some cases where it’s slightly better to use notebooks instead of tags, but what really determines this choice for me is whether I use a notebook as a Kanban board, as explained in Part 2. Another reason to use stacks is to group projects per organisation. I keep a notebook per project for smaller projects, and those with the same organisation go in a stack with the name of the organisation. I also keep a “Library” stack with notebooks “Books” and “Articles”, as well as thematic sections such as “Music Library” and “Linked Data Library”. I also have a stack “Learning” to group notebooks for the things I learn over a longer period, for example languages, technologies, instruments etc. Each language or technology gets a separate notebook. There is also a stack “Talks and Research” and a few stacks for the bigger projects.

The third classification method, ad-hoc and saved searches, I think I apply in the way all others do, so nothing worth reporting here, save for the fact that, again, looking at the search results as a set of notes whose members satisfy the search criteria creates some good thinking habits and increases productivity.

It is now time to look at those things I miss in Evernote, which I believe would bring benefits to lots of other users as well.


There are many features that have been added and improved since I started using Evernote. Some of them I find useful. But there are also others which I really hope somebody uses regularly. These include chats, presentation mode, and flexible editing of tables. I personally use them rarely, if at all, and wouldn’t miss them if they were gone. I wish a small part of the effort put into those features had been spent on developing some others which, while basic in much simpler and older tools, are still missing in Evernote. These and a couple of others comprise my feature wishlist:

  1. New note from a selected text. By this I mean it should be possible, when selecting a text in a note, to create a new note with a hyperlink to it added to the original text. The implementation doesn’t matter. It could be wiki-style or an option in a menu.
  2. Back-links. Currently navigation is possible in one direction, from a note to another note, linked to a text in the first one. But there is no way to see all the notes that link to the current one and go to them if needed. In Zim and other notebook applications, this is basic functionality. There is a solution to this problem on GitHub, but apart from the inconvenience of running external Python code, it has two other major drawbacks for me. The first is that if you use links a lot, you need the back-link available right after you create the link, not triggered manually or once a day. The second is that I find it important that back-links are not part of the note body (see the screenshot above as an example of such an implementation).
  3. Distraction-free mode. The only reason I avoid using Evernote for writing is the missing focus view. The interface is now less busy, and yet a mode where you just see the text and almost nothing else, as in Dark Room, ZenPen, and Write!, would be great. Now even MS Word has a distraction-free mode where, apart from the text, you only see this:
  4. Touch-screen scrolling. Currently it’s not possible to use touch-screen scrolling in the Evernote desktop application, which makes reading bigger notes very frustrating.
  5. Manual note re-arrangement. When using a tile view, I miss the possibility to manually rearrange the notes.
  6. Reminder in Web Clipper before saving. At least a third of my web clippings end up as reminders. The current implementation allows adding a reminder only after the note is saved and synchronised. I need the reminder at the initial stage, when I choose the notebook and tags.
  7. Reminder in Android sharing. When sharing to Evernote in Android, there is this nice elephant button that allows you to choose a notebook and tags. But just as in the Web Clipper, I miss the option to set the note as a reminder. In Android, it’s not even available at a second stage.

Well, that’s all for now. If you are an Evernote user, please share your thoughts: what kind of problems you face, how you solve them, what you think about my way, and whether you support any part of my wishlist. If you are not an Evernote user, any ideas on note-taking would be just as interesting to learn about.

How I use Evernote, Part 2 – Kanban Boards

This is the second part of the series on my way of using Evernote. The first part was about the creation of notes, and the third will be about notebooks and tags and my overall approach to organising content inside Evernote. In this part, I’ll describe how I use Evernote for task management.

My tool of choice for task management had been Trello. It still is for collaborative work on projects and for strategic flows, but I “migrated” my personal task management entirely to Evernote.

How? I simply use the way reminders appear on top of all notes, with the ability to rearrange them by dragging, as a Kanban board. For most projects and themes, I do it within a single notebook for the whole board. For some projects, I use a stack of notebooks. Let’s call them “small Kanban” and “big Kanban” respectively.

When I decide to use a small Kanban in a notebook, I create a few reminders to clearly separate the lists in which work items are sorted depending on their state. They are typically titled “TO DO”, “DOING” and “DONE”, like this:

Sometimes I add “ISSUES” or “BLOCKED”, as well as “REFERENCE”. In the latter I put some frequently used notes. This list serves as a sort of local shortcuts, while for the global ones I use the regular shortcut feature of Evernote.

When a new task comes to the project notebook, for example from a forwarded email, it always appears on top. This way I treat the zone above “TO DO” as “BACKLOG”. Then I do the sorting by dragging what I plan next under “TO DO”. The rest of the items on top – if they are tasks – just stay there waiting to be planned. If they are issues or some reference, they go to their list, where, as with tasks, their order indicates priority or importance.

To be properly called “Kanban”, such a board should help not only in visualising the workflow but also in keeping the work-in-progress within predefined limits. I’m not strict about that, yet I do it in a way. The control is this: when I put “TO DO” at the top of my screen, I should see all tasks down to the last one in “DOING” without scrolling.

When I don’t need to see an item in “DONE” anymore, I use Evernote’s “Mark reminder as Done” to make it disappear from the board.

And that’s all I do in small Kanban boards.

Big Kanban boards in my Evernote are within stacks of notebooks. Each notebook in the stack contains a different set of notes, and for those with reminders, the membership is based on workflow state. The reminders of the main project notebook are treated as “NEW” and “TO DO”. As it is important that this panel appears on top, I sometimes need to add a prefix to the notebook name. Yet, I try to keep it short and simple to minimise the effort of indicating the destination in an email subject. In this sense, I wouldn’t use a name such as the one in the example below, but I hope it does the job of demonstrating how the stack of notebook reminders serves as a Kanban board.

Unlike in small boards, I keep the backlog – which is again a separate notebook – clearly indicated as such and at the bottom. That might look strange, as the rest of the flow goes top down. However, for bigger projects, the backlog is also bigger, and it is used less often than the “TO DO” and “DOING” lists. There is another reason as well. In most cases, not one but many tasks move from “BACKLOG” to “TO DO”. Note-by-note dragging, which otherwise is the main feature making this practice possible, is not efficient here. What I do instead is select the notes and click “Move to notebook”.

Then I select the destination notebook (…Project “A”), which serves as the “TO DO” list. To clarify for those used to pure Kanban boards, keeping a backlog comes from the Scrum practice, or at least this is where I know it from. When it is decided to do something, it goes to the backlog. When it is decided to do it in the next iteration, it goes to “TO DO”. Even without timeboxing, I find the difference between [to do sometime] and [to do next] very useful. That’s why the first state is indicated by the work item belonging to the BACKLOG (the area above “TO DO” in a small Kanban and the “BACKLOG” notebook in a big one), and the second by it belonging to TO DO (the “TO DO” section in a small Kanban and the main project notebook in a big one).

And that’s it.

I’ve been doing this for four years now and I find it very useful. It’s low-maintenance and it improves productivity. There are additional benefits from the fact that tasks live in the same place as all other notes. There are a few simple capabilities which would make Evernote even better for task management in general, and for Kanban boards in particular. They will be one of the topics of the final part.


BPMN Board Game Prototype

I have recently found a BPMN2 board game prototype that I made many years ago with the intention of including it in my BPM courses. For some reason, I didn’t finish it and completely forgot about it. Now that I’ve found it and shared a screenshot on LinkedIn, I was surprised by the enthusiastic response.

So I decided to share the actual model.

Here’s the unfinished list of rules:

  • Board with a BPMN process, one die, one pack of playing cards, tokens in four different colours, 5 pieces per colour; 10 black bits (tokens of similar size, which could be the same or a different shape)
  • Each player plays with different colour tokens.
  • Each node except a gateway (?) is a valid step (when traversing according to what the rolled die shows).
  • The pack of cards stays together, face down. A card is drawn when so instructed upon reaching (not necessarily stopping at) a gateway.
  • Each player starts and finishes with one token. He/she can use more than one only after a parallel gateway split, but continues with one after all tokens have arrived at the join. The player chooses which token to advance on each turn when moving between two parallel nodes.
  • When a token stops where another token is standing, the latter goes to START.
  • When a token stops on compensation event all tokens that are on the respective compensation activities go to…
  • Bits can be used to mark processed tasks in ad-hoc sub-processes

If somebody decides to finish the work I started, that would be great. And if needed, I’d be happy to collaborate.

You can download the source file from here. It was made with free modelling software.

How I use Evernote, Part 1 – Note Creation

I’ve been using Evernote a lot in the past few years. My membership started in January 2010. I had been using Zim for note-taking before that and I kept using both in parallel for a while. It was the synchronisation capability that made me move entirely to Evernote.

After using Evernote for a few years, I decided to “migrate” more content and workflows to it. One trigger was the trouble of managing information on several platforms and failed attempts to link them. Another was the inspiration from Luhmann’s Zettelkasten. I had no intention to imitate the latter and no illusion that I could solve the former, and yet there was a considerable improvement in my personal information management. Now I can quickly find what I’m looking for and what it is related to. More importantly, I can be surprised by new relations and discoveries – the important difference between using and working with.

I haven’t read any of the Evernote books, nor do I participate in the forums. Yet, it seems that the way I use the tool is worth sharing; at least this was the feedback from people who had a glimpse of my note-taking practice.

I was using Trello for managing tasks; various note-taking apps – mainly Zim and Google Keep; Adobe Cloud for PDFs; briefly also Mendeley for research; and Pocket for reading articles. For website bookmarking and highlights I used Diigo, and Dropbox and Drive for storing and syncing documents. I had two main requirements: to quickly retrieve information from all these places and to link resources within and across them. That included the need to consolidate and link my notes and highlights from Kindle books and PDFs.

Evernote completely replaced Zim, Keep, Adobe Cloud, Mendeley and Diigo and partially – Trello, Dropbox and Google Drive. I miss many of the capabilities of these tools, but I don’t regret leaving them. When consulting enterprises, I always support diversity and demonstrate ways to experience a landscape of heterogeneous applications as a single system. However, a corporate approach to information management is not fully applicable for a personal one.

Most of what I do is probably common for regular Evernote users, or likely to be only related to my specific needs, but I guess there are a few tweaks that some might find useful, either to apply or as insights for a better solution.

This part will be mainly about note creation. In the next two, I’ll share how I use Evernote as a personal Kanban and some notes on classification and missing features.

Note creation

The common events that start my workflows supported by Evernote are related to receiving information, reading something on the internet or in a book, hearing information that’s worth noting, or a thought begging for assurance that it will be remembered.

A typical situation is when I get a document by email which needs some further action or is important for future reference. That would be an invoice to pay, a travel document (flight or train ticket, hotel booking etc.) or a document to review. In the case of an invoice, I just forward it to my Evernote email address, adding only “!” so that it appears as a reminder.

There are two important points to make here. The first is that most attempts at better organisation lead to engaging in ancillary routines, and that I find inefficient. In Lean terms, it’s just waste. For me it has always been about finding the minimum effort that ensures sufficiently good utilisation in the future. The second point is that when the note-triggering event is frequent, the classification overhead slows down the process, sometimes to the point of breaking the habit. Adding to this the awareness of the life-cycle of the note would explain why I only add “!” to emails containing payment documents (invoices, bills, social security contributions and suchlike). They arrive often, and each would probably be used only twice once in Evernote.

My default notebook is called “.LOG”. Yes, you’ve guessed right. I want it to appear on top, so that the urgent tasks get on top as well in the reminder area when the view mode is “All Notes”. Apart from payment documents, anything that needs to be done within two to three weeks goes there. I treat Evernote reminders as tasks or top references within a notebook. For overall top references, I use Evernote shortcuts.

When a reminder is treated as a task, there are two cases. The first one is simple: it goes through the two states [to do] and [done] available for reminders. This is the case when I plan to contact somebody or when I need to pay something. The second case is a bit more complicated: this is when I use Evernote as a personal Kanban board. It will be explained in Part 2.

The other common situation for creating a note from email is when the reply contains some important information like additional detail or a commitment. In this case, I put the Evernote email in CC or BCC.

While general emails and emails for payments go to the default notebook by only appending “!” to the subject, for others, like travel and projects, I add the respective notebook, tags and sometimes a date in the subject.

Apart from email, the common way of creating notes is through Evernote Web Clipper in Chrome.

I find the options Article, Simplified Article and Full Page very useful. There is a fourth option, to clip selected text, available in the right-button menu. Then, apart from the classification options, where the right notebook is often well “guessed”, there is the option to add a remark and mark the clipping as a reminder, which I use a lot.

I keep the options for PDF and Related Notes activated. The PDF one I use many times a day, and its convenience is probably the main reason I moved away from Adobe Cloud and Dropbox for storing PDF documents. Yet the Clipper does not allow editing the title, and many notes end up titled “PDF –”, for example. I can only modify these inside Evernote.

I was sceptical about the Related results, but a few times they were really closely related, and on other occasions the usefulness came more out of serendipity.

As Pocket is my main tool for reading web articles, to avoid using both extensions, I share the resource via a send-to-email browser extension, and I have a group in my contacts whose members include my Evernote email address.

The Evernote Web Clipper is also what I use for collecting Kindle highlights and tweets. Kindle highlights used to be available at one address, and from July 2017 they will be available at another. Since I started collecting my highlights in Evernote, I’m not only able to quickly find important references, but also to increase their utility by adding notes inside a note with highlights and adding links to other Evernote notes and web resources. Here’s how a virgin note with Kindle highlights looks:

And this is how it looks with the new service:

I had trouble collecting my notes and highlights from PDFs until I discovered this method, and from Word until I started using DocTools. At some point, I was more ambitious and wanted to create a graph of all notes, highlights, tweets and bookmarks, independent of the source. I used to extract the Kindle ones with DaleyKlippings. Then I would turn them into RDF – using URI Burner for the tweets and RDF Refine for the rest – and link them in TopBraid Composer. This practice had lots of benefits, but it required heavy maintenance and I abandoned it.

Now I get my tweets by simply clipping All My Tweets, my book highlights from the Kindle Highlights page, and the PDF ones with the above-referred script.

Handwritten notes I scan with the Evernote document camera. And I keep being impressed by the character recognition capabilities of Evernote.

I have recently discovered another method, Whitelines Link. The quality is not as good, but the camera is quicker than Evernote’s. The Whitelines app sends directly to Evernote. The advantage is that just by crossing an icon, it can also send the same note to an email address and Dropbox at once. That’s not possible when scanning with Evernote. Yet, Whitelines Link allows only one tag per note, so when I need more than one, I still have to add it within Evernote.

I intend to keep using it until I fill up my current Whitelines notebook (I’m using Leuchtturm1917’s – excellent quality), and then I’ll decide how to continue.

Apart from mailing, web clipping and scanning there is, of course, the regular note writing inside Evernote, the audio notes, and screen clips.

Once captured, a note starts its lifecycle either as a regular note or as a reminder. If it’s a reminder, it could mean [to read] if in “Articles” or “Books” notebooks, [to do], if in “.LOG”, and [backlog] when in some of the project notebooks. The latter case deserves special attention and the next part will be devoted to it.


Do We Still Worship The Knowledge Pyramid?

There are not many models that have enjoyed such a long life and wide acceptance as the Knowledge Pyramid. Also known as the DIKW pyramid, it features data as the basis and shows how information is built on it, then knowledge, and finally wisdom. Each layer refers to the lower one, using it but adding something more. It tells the story of how data is transformed into information and information into knowledge. And being a pyramid, it implies that the higher you go the better things get – more value but less quantity. There are variations: in some it is not actually shown as a pyramid; in others wisdom is skipped; and in at least one popular version enlightenment is put on top of wisdom or added in another way.

The model goes together with a set of conventions about the meaning of each concept and their relations. There is quite some variation in these definitions, but the logical sequence is rarely questioned. What I’ve found to be the most popular narrative in business is the following: Data are individual facts that need to be processed to generate information. When data are categorised, or interpreted, or put in context – or better, all of that – they turn into information. There is greater divergence about what knowledge is, but most sources seem to suggest that if what is done to data is done once again to information, you’ll get knowledge. It all sounds like a recipe for a delicious cake. What’s not to like?

Well, just about everything.


There are authors that refer to data as symbols or signals, but these interpretations have much less currency compared to data as facts.

In the context of the model, the definitions of data either explicitly assert or imply that data are devoid of meaning, that they exist out there, and that it’s too early to speak of information at this stage – in fact, that this would be a fundamental error. And knowledge? Knowledge is a long way up.

Let’s check them one by one.

1. Data exist out there.

If I see a footstep in the sand – whatever that is – data or information, it is what it is for me, and won’t be the same for an insect climbing a “hill” made by the footstep.

2. Data are devoid of meaning

It’s worth distinguishing the “data as facts” and “data as symbols” definitions. The first one is especially problematic. A fact is a statement about something. And statements usually have meaning. “Data as facts” often goes along with data being defined as “basic individual items of numeric or other information”. This is another contradiction, as statements need a subject, a predicate, and an object, not just one of them.

“Data as symbols” does not imply that data are devoid of meaning, as it does not go along with the belief that “data exist out there”, independent of the observer. The same datum may have meaning for some but not for others. And the meaning will be different for different observers. This view is quite fine, but I would suggest that whatever makes something a symbol for somebody may symbolise different things depending on the interaction, and as such it does not have meaning by itself. The meaning is made by the observer in the process of interaction, which, by the way, does not include only perception. But the important point here is that the same physical thing for the same observer might be seen as two different symbols in two different interactions.

Some go a step further to define data not as facts but as “objective facts”, thus explicitly linking (1) and (2). But isn’t that just another contradiction? Facts are statements, and as such, they are stated by somebody. And even if we indulge in the etymology quest – hopefully not literally seeking the “true sense” of the words but just insights – then fact, coming from the Latin verb facere, can be interpreted not only as “things done”, but also as “things made (up)”.

3. Knowledge comes way after data.

Let’s go back and imagine that what I see is not a footstep but just a dent, and I don’t know what it is exactly. But I can tell there is a dent. I know the difference between a smooth surface and a surface with a dent. Which means there is knowledge already. There is knowledge by virtue of my being able to tell the difference between there being a dent or not.


Information is said to be data in context: whenever structure and context are added, data is transformed into information.

This notion of information “inherits” the problems of data, thus excluding the informed and their capability of being such. But there is another problem here – accepting the addition of structure and context as sufficient. Once they are added to data, there is information. But is that the case?

If I don’t know your birth date and you tell it to me and I understand what you tell me, this is information. If you tell me the same thing five minutes later, even if it has the same structure and context, as I already know it, it’s no longer information for me. If, later on, I forget and you tell me the same thing, then it will be information for me once again. As Luhmann put it, “a piece of information that is repeated is no longer information. It retains its meaning in the repetition but loses its value as information”.

To inform means to let somebody know, and there are two things to notice here. First, the transformation of the verb to inform into the noun information changes the nature of informing as the act of bringing knowledge: it demands a new kind of distinction between information and knowledge, and this is how we end up with the DIKW layering. And second, the act of informing changes the state of awareness of the one who is being informed. Looking at information as informing would be a useful reminder that it is about an event. Making the same announcement to the receiver again will not change the already changed state of awareness. A more formal expression of this can be found in the second law of form – “The value of a call made again is the value of the call” – as discussed in another blog post.
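The event-like nature of informing can be sketched in a few lines of code. This is my own toy illustration, not anything from the original post: being informed is modelled as a change in the receiver’s state, so a repeated announcement carries no information.

```python
# A toy model of "informing as an event". The receiver's state of awareness
# is a set of known statements; informing counts as information only when
# it actually changes that state.

def inform(receiver: set, statement: str) -> bool:
    """Return True only if the statement changed the receiver's state."""
    before = len(receiver)
    receiver.add(statement)
    return len(receiver) != before

known = set()
print(inform(known, "my birthday is 1 May"))  # first telling: True
print(inform(known, "my birthday is 1 May"))  # repetition: False
known.discard("my birthday is 1 May")         # ...unless I forget,
print(inform(known, "my birthday is 1 May"))  # then it informs again: True
```

The “value of a call made again is the value of the call” appears here as idempotence: repeating `inform` with the same statement leaves the state, and hence the information, unchanged.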

The idea of information as bringing new knowledge will probably evoke associations with the popular interpretation of Claude Shannon’s notion of information as the “average amount of surprise”. However, the “amount of surprise”, or the “information entropy”, refers to a message. If Shannon’s information theory is considered a good source for a definition of information anyway, one should carefully check its relevance outside the domain of telecommunications.
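For contrast, here is what Shannon’s measure actually computes: a property of a message’s symbol distribution, not of what any receiver comes to know. A minimal sketch; the function name and the toy messages are mine.

```python
import math
from collections import Counter

def entropy(message: str) -> float:
    """Shannon entropy in bits per symbol, from observed symbol frequencies."""
    counts = Counter(message)
    n = len(message)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

print(entropy("abab"))  # two equally likely symbols: 1.0 bit per symbol
print(entropy("abcd"))  # four equally likely symbols: 2.0 bits per symbol
```

Note that the same message yields the same entropy no matter how often it is repeated to the same person, which is exactly why this measure says nothing about informing as an event.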

Seeing information as the act of bringing knowledge is very much in line with one of the original meanings, “the act of communicating knowledge to another person”, the other being “the action of giving a form to something material”. In fact, the history of the concept of information is very interesting, so if you are curious, check this article by Rafael Capurro.

Usually, when we discuss such issues, sooner or later I’m asked to suggest a better definition, since I’m not happy with the popular ones. I don’t value the efforts spent on definitions. And it is not about being right or wrong. But then again, some definitions are more useful than others, and some are particularly harmful. The latter is the main motivation for writing this article. I’ll try to explain that later.

Now, for those who won’t accept “no” as an answer: if I have to suggest a useful definition of information, I would either pick that of Bateson or – if a longer one is allowed – use the following suggestion of Evan Thompson, built on Bateson and Susan Oyama:

“Information, dynamically conceived, is the making of a difference that makes a difference for some-body somewhere” (italics in the original).

Each part of this definition – “dynamically conceived”, “making of”, “difference that makes a difference”, “some-body” and “somewhere” – deserves elaboration, but this would go beyond the objectives of this text.

I know there is little chance of such a definition being accepted by business, and later I’ll explain why. Now let’s quickly review knowledge and move on. I won’t bother discussing wisdom, as long as I’m keeping the text as prose.


The definitions of knowledge vary greatly, which makes it difficult to address their common properties. A prominent and positive one is that people finally enter the picture. On the other hand, almost without exception, knowledge is defined in reference to information. The availability of information is understood as the necessary but insufficient condition for knowledge. It’s rare that knowledge is seen dynamically as knowing, or even rarer as sense-making. This would have helped in realising that the place of knowledge in the pyramid is misleading, that knowledge cannot be embedded in documents, and that the popular split of knowledge into “implicit” and “explicit” neither makes sense – as knowledge can never be the one or the other – nor has any utility beyond providing conceptual support to consulting and software products and services. The split implicit/explicit is frequently used in Knowledge Management (KM) narratives. KM itself is a misnomer. Knowledge cannot be managed, although historically there have been attempts by interrogators to manage the retrieval process with varying degrees of success. If Knowledge Management is preaching the split, then it should keep either “implicit” or “management”, but not both.

Where then to look for a good definition of knowledge?

One natural choice would be traditional epistemology, according to which there are three kinds: practical knowledge, knowledge by acquaintance, and propositional knowledge (for more details see Bernecker here, and also here). Practical knowledge refers to skills. That’s not too far from what in business is known as know-how. Knowledge by acquaintance is direct recognition of external physical objects, organisms, or phenomena. Propositional knowledge usually comes in the form of knowing-that. It has entered some organisations through ICT, in cases where reasoning algorithms are applied. However, that would also be at odds with the DIKW model, as non-inferential knowledge would be indistinguishable from information.

Apart from epistemology, if philosophy is regarded as an eligible source, another approach would be phenomenology, or something in-between, which would be my preference.

Then, of course, there shouldn’t be a better place to look for a definition of knowledge than cognitive science. But this is not likely to be a fruitful quest. First, there is no common understanding in cognitive science of what knowledge is. And second, whatever communication is part of cognitive science is there only by virtue of referring and being referred to within science. If it can provoke some communication in business, such communication will only be part of the business if it refers to and is referred to by other business communication. It will inevitably be a misunderstanding and, only if lucky, a productive one.


In summary, the DIKW pyramid is problematic in showing data-information-knowledge as a logical sequence, especially when it comes to knowledge; in ignoring people at the “level” of data and information; and in defining data as facts and information as facts in context.

But the DIKW pyramid is just a model. And, as zillions of presentations keep reminding us, all models are wrong but some are useful. I have tried to explain why this model is particularly wrong. Now it’s time to say a few words about why it is not useful and often quite harmful.

One of the problems of many organisations is information management. It is understood as a complex problem. And for dealing with complex problems, every convincing method of categorisation, or other way of simplification, is more than welcome. Enter the DIKW pyramid, normally as part of a bigger package. Now things look a bit more manageable, as there are smaller chunks to deal with. First, after some exercises to reach a common understanding of the problem, comes the split of responsibilities. The most extreme version applies the DIKW pyramid literally: one department gets the mandate to deal with data, another with information, and a third one with knowledge. The results are disastrous and can live for a long time without appearing as such.

Then each “discipline” works out its own understanding, its own problems and solutions. For data, there are data-related problems and data-related solutions. Very often the solution is to buy some kind of software. If the main data-related problem is with Master Data Management (MDM), then the solution is MDM software. For information-related problems, the solution could be some Business Intelligence package, while most knowledge management problems are believed to be solved by implementing good collaboration software.

But the model is not always harmful or without utility.

Some organisations have the DIKW in well-respected reports, but they actually ignore it. It stays as an elegant theory, while the actual decisions are made without using this frame. That’s one case when it is not harmful.

Some interpretations of the data-information-knowledge distinctions and sequence, in some contexts, can even be useful. For example, in the area of Linked Data, there is some utility in seeing URIs and literals as data, triples as information, and then, when a SPARQL query gets a good answer, that answer can be said to bring new knowledge.
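To make that Linked Data reading concrete, here is a minimal hand-rolled triple store. It is only an illustration of the distinction, with made-up URIs and values; a real setup would use an RDF library and actual SPARQL.

```python
# "Data": bare identifiers and literals, asserting nothing on their own
ALICE = "http://example.org/Alice"
BOB = "http://example.org/Bob"
BORN = "http://example.org/bornIn"

# "Information": triples that put those items into statements (context)
triples = {
    (ALICE, BORN, "1980"),
    (BOB, BORN, "1975"),
}

def query(pattern):
    """Match a (subject, predicate, object) pattern; None is a wildcard."""
    return [t for t in triples
            if all(p is None or p == v for p, v in zip(pattern, t))]

# "Knowledge", in the loose reading above: a new answer drawn from the store
print(query((None, BORN, "1980")))
# → [('http://example.org/Alice', 'http://example.org/bornIn', '1980')]
```

Even here the layering is shaky, of course: whether the query answer counts as knowledge still depends on somebody being informed by it.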

And yet, the small utility in some domains does not justify the much bigger risk of using the model to support business decisions. So, before I let you go, in case your organisation is not worshipping the DIKW pyramid or has started to doubt it, I would suggest:

  1. If possible don’t use any layers and keep everything in one “discipline”. How about “Information management”?
  2. If that’s not possible, then distinguishing data and information can have some utility, but separate data management would always bring worse information management, and the latter is what the business cares about.
  3. Allow definitions to change depending on the context, as this is how the language works anyway.

The Scissors of Science

Three centuries ago the average life expectancy in Europe was between 33 and 40 years. Interestingly, 33 was also the average life expectancy in the Palaeolithic era, which began 2.6 million years ago. What is it that we’ve got in the last three centuries that we hadn’t got in all the time before? Well, science!

Science has performed a lot of miracles. But like all things that perform miracles, it quickly turned into a religion – a religion that most people in the Western world believe in today. And like all believers, when science fails, we just think it has not advanced yet in that area; we don’t think that there is anything wrong with science itself. Or when the data doesn’t match, we think it’s because those scientists are not very good at statistics. Or, if not that, then the problem is simply with failed control over scientific publications, as was concluded three years ago, when Begley and Ellis published a shocking study reporting that they were able to reproduce only 11 per cent of the original cancer research findings.

Well, I believe that the problem with science is more fundamental than that.

The word science comes from the root skei, which means “to cut, to divide”. The same root gave us scissors, schizophrenia and shit. “Dividing”, when applied to science, comes in handy to explain some fundamental problems with it. It has at least the following six manifestations.

The first one is that the primary method of scientific study is based on “dividing”. Things are analysed, which means they are split into smaller parts. In some areas of science it is even believed that if everything is known about the smallest parts, that will explain why the bigger ones do what they do. However, when taking things apart, what is not preserved are the relations that bind them together.

The second is the split between “true” and “false”. It is inherited from Aristotelian logic, together with the principles of non-contradiction and the excluded middle. Such a division might be useful for studying matter, but is it useful for studying life, where the paradox is orthodox?

The third manifestation of dividing in science is the actual disciplines, as if they are something that exists independently out there: physics, biology, chemistry and so on. We can know something only as long as a certain framework allows us to. When someone steps into another discipline, he or she is not regarded well by their fellows, as it is not proper biology, for example. Nor are they received well by the gurus in the ventured discipline, who are presumed to know much better by virtue of having been there longer. All that is additionally supported by citation counts and other crazy metrics.

The fourth is splitting science sharply from non-science. That split can be experienced as a specific distinction, such as science/philosophy or science/art, or at a more general level. And this is not just about the area of concern or the method. This distinction is social and economic. Here’s the simplest example, which you can check out for yourself. Almost all scientific papers can be downloaded from the internet at a price between 25 and 35 EUR. If you work for an academic institution, you have free access; if not, you have to pay. So you can only give feedback to scientists if you are a scientist yourself, a member of some institution, some scientific fiefdom. If science is scientific, why doesn’t it use the fact that we are in a connected world and publishing costs nothing? Is it afraid of the feedback? Well, it should be. See what happened to all traditional media when Twitter appeared.

The fifth one is that scientists do not regard themselves as part of what they do, but as something with a privileged role: somebody isolated, godlike, who can bring neutral evidence by applying rigour and “objectivity”, confirmed by others who can reproduce the results. This is the principle on which science works. There is the idea of truth, based on evidence, and the whole notion of discovering objective reality as something out there, independent from those who “discover” it.

The sixth one is closely related to the fifth: science is based on some dichotomies that are rarely questioned. Here’s the top list: subject/object, mind/body, information/matter, emotion/cognition, form/substance, nature/culture. It is not my intention here to trace how this situation came to be, nor why it is problematic and when it is especially so. But if we take seriously Heisenberg’s warning,

what we observe is not nature herself, but nature exposed to our method of questioning,

let me just say that our method of questioning nowadays includes a refusal to seriously question these dichotomies.

These six manifestations of dividing are not independent of each other. And they do not pretend to diagnose all the diseases of science. They are just a small attempt to once again ask the question: could it be that the limitations of science are due to the same qualities that brought its success?

What can be done about it?

First, admit that there are serious problems not just with how science is done but with science itself.

Second, pay more attention to attempts to solve these problems. Good insights can be found, for example, in certain approaches in cognitive science that balance between 1st- and 3rd-person perspectives when trying to understand life, mind, consciousness, emotions, language and interactions. Others come from the attempts of the complexity sciences to promote trans-disciplinary studies. And there is already quite some work on second-order science and endo-science, trying to deal with reflexivity together with a few of the other manifestations of dividing that I tried to briefly outline in this article.

Third, accept that the solution will probably not look very scientific within the contemporary definition. Otherwise, it might suffer from the same diseases it is trying to cure.


The Mind Of Enterprise

I should have shared this presentation in November 2015 but anyway, better late than never. Here it is as static slides…

… and if your browser allows, you can play the original:

There is also a video, but due to a technical problem, only the first few minutes were recorded.

Don’t buy ideas. Rent them.

Some ideas are so good. An idea can be good at first glance or after years of digging and testing, or both. When it looks instantly convincing, it just resonates with your experience. There are so many things that such an idea can help you see in a new light, or help you fill some old explanatory gaps. Or, it can sound absurd when you first hear it, and then later it gets under your skin. Either way you buy it. You buy it on impulse, or after years of testing. You invest time, emotions, sometimes reputation. And once you buy it, you start paying the maintenance costs. It’s quite like buying a piece of land. You build a house on it, then furnish it, and then you start to repair it. Somewhere along this process you find yourself emotionally attached. And that’s where similarities end. The house you can sell and buy a new one. With ideas, you can only buy new. But sometimes the previous investment doesn’t leave you with sufficient resources. And then you can’t just buy any new idea. It needs to fit the rest of your narrative.

Once you buy an idea, it can open a lot of new paths and opportunities. But it can also obscure your vision and preclude other opportunities. One thing you learn can sometimes be the biggest obstacle to learning something else, potentially more valuable.

Instead of buying ideas, wouldn’t it be better to just rent them? But not like renting a house, more like renting a car. With that car you can go somewhere, stay there, or go further, or elsewhere, or – if it is not good for a particular road or destination – come back and rent another car.

Do you like this idea of renting ideas instead of buying them? If yes, don’t buy it.

If you are not tired of metaphors by now, here’s another one. I often present my ideas as glasses. Not lenses but glasses. First, they should be comfortable. They should fit our head, nose, and ears. Not too tight, so that we can easily take them off. Not too loose, so that we don’t drop them when shaken. Second, when we put on new glasses, we don’t just get new lenses, but also new frames. Being aware of that is being aware of limitations, and of the fact that there are hidden choices. It would also help us realise when it’s time to try on a new pair of glasses.

From Distinction to Value and Back

I tried to explain earlier how distinction brings forth meaning, and then value. Starting from distinction may seem arbitrary. Well, it is. And while it is, it is not. That wouldn’t be very difficult to show, but until then, let’s first take a closer look at distinction. As the biggest part of that work has already been done by George Spencer-Brown, I’ll first recall the basics of his calculus of indications, augmented here and there. Then I’ll quickly review some of the resonances. Last, I’ll come back to the idea of re-entry and apply it to the emergence of values.


If I have to summarise the calculus of indications, it would come down to these three statements:

To re-call is to call.

To re-cross is not to cross.

To re-enter is to oscillate.

In the original text, the “Laws of Form”, only the first two are treated as basic, and the third follows as their consequence. Later on, I’ll try to show that the third one depends on the first two, just as the first two depend on the third.

The calculus of indications starts with the first distinction, as “we cannot make an indication without drawing a distinction”. George Spencer-Brown introduces a very elegant sign to indicate the distinction:


It can be seen as a shorthand for a rectangle, separating inside from outside. The sign is called “mark”, as it marks the distinction. The inside is called the “unmarked state” and the outside the “marked state”. The mark is also the name of the marked state.

This tiny symbol has the power to indicate several things at once:

  1. The inside (emptiness, void, nothing, the unmarked state)
  2. The outside (something, the marked state)
  3. The distinction as a sign (indication)
  4. The distinction as an operation of making a distinction
  5. The invitation to cross from one side to the other
  6. The observer, the one that makes the distinction

That’s not even the full list as we’ll see later on.

Armed with this notation, we can express the three statements:


They can be written in a more common way using brackets:

()() = ()

(()) =   .

a  =  (a)

The sign “=” may seem like something we should take as given, but it’s not. It means “can be confused with”, or in other words: “there is no distinction between the value on the left side of the equation and the value on the right side”. Again, a form made out of distinction.

The first statement is called the law of calling. Here’s how it is originally formulated in the “Laws of Form”:

The value of a call made again is the value of the call.

If we see the left sign as indicating a distinction, and the right sign as the name of the distinction, then the right sign indicates something which is already indicated by the left sign.

Or let’s try a less precise but easier way to imagine it. If you highlight a word in a sentence by underlining it, then no matter how many times you draw a line below that word, in the end it will be as distinguished as it was after the first line. Or if you first underline it, then circle it, then highlight it with a yellow marker and so on, then as long as each of these doesn’t carry a special meaning, all these ways of distinguishing the word together do just what each of them does separately: draw attention to it.

Or when somebody tells you “I like ice cream” and then tells you that again in 10 minutes, it won’t make any difference, unless you’ve forgotten it in the meantime. In other words, repeating the same announcement to the receiver will not change the already changed state of awareness. That has important implications for understanding information.

The second law is originally stated as:

The value of a crossing made again is not the value of the crossing.

One more way to interpret the mark is as an invitation to cross from the inside to the outside. As such, it serves as an operator and an operand at the same time. The outer mark operates on the inner mark and turns it into void.

If the inner mark turns its inside, which is empty, nothing, into outside, which is something, then the outer mark turns its inside, which is something, due to the operation of the inner mark, into nothing.

Picture a house with a fence that fully surrounds it. You jump over the fence, walk straight until you reach the fence on the other side, and jump over it again. As far as being inside or outside is concerned, crossing twice is equal to not crossing at all.

The whole arithmetic and algebra of George Spencer-Brown is based on these two equations. Here is a summary of the primary algebra.
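Since the two laws are purely syntactic, they can be tried out as string rewriting. Here is a minimal sketch of my own (an illustration, not part of the Laws of Form) that reduces constant forms written in the bracket notation, by repeatedly condensing ()() into () and cancelling (()):

```python
def reduce_form(form: str) -> str:
    """Reduce a constant bracket form using the two laws until it is stable."""
    while True:
        # Law of crossing: a mark within a mark vanishes, (()) = void.
        # Law of calling: two adjacent marks condense, ()() = ().
        reduced = form.replace("(())", "").replace("()()", "()")
        if reduced == form:
            return form  # fixed point reached: "()" (mark) or "" (void)
        form = reduced

# Every constant form reduces to the mark "()" or to the void "":
print(reduce_form("((()))"))  # prints "()"
print(reduce_form("(()())"))  # prints "" (the void)
```

The loop terminates because each rewriting strictly shortens the string; forms with variables would need the full primary algebra rather than this arithmetic-only sketch.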

The third equation has a variable in it.

a = (a)

It has two possible values, mark or void. We can test what happens by trying the two possible values on the right side of the equation.

Let a be void, then:

a = ( )

Thus, if a is void, then it is a mark.

Now, let a be mark, then:

a = (()) =   .

If a is a mark, then substituting a with a mark on the right side will bring a mark inside another, which according to the law of crossing will give the unmarked state, the void.

This way we have an expression of self-reference. It can be seen in numerical algebra in equations such as x = -1/x, which has no real solution, so it can only have an imaginary one. It can be traced in logic and philosophy with the Liar paradox, statements such as “This statement is false”, or Russell’s set of all sets that are not members of themselves.

However, in the calculus of indications, this form lives naturally. In a similar way as the distinction creates space, the self-reference, the re-entry, creates time.

There is no magic about it. In fact, software programs not only contain self-referential expressions, they can’t do without them. They iterate using statements such as n = n + 1.
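The oscillation of a = (a) can be made visible in the same spirit. In this sketch of mine, the mark is treated as an operator that flips the state; feeding the form its own output produces a sequence, which is to say, time:

```python
def cross(a: bool) -> bool:
    """The mark as an operator: to cross is to flip the state."""
    return not a

# Re-entry: the output of the form becomes its next input, a = (a).
a = False                 # start in the unmarked state
history = []
for _ in range(6):
    a = cross(a)          # re-enter the form
    history.append(a)

# The value never settles; it oscillates between marked and unmarked.
print(history)            # [True, False, True, False, True, False]
```

The equation has no static solution, but as a process it has a perfectly regular one: the oscillation itself.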


The Laws of Form resonate in religions, philosophies and science.

Chuang Tzu:

The knowledge of the ancients was perfect. How so? At first, they  did not yet know there were things. That is the most perfect knowledge; nothing can be added. Next, they knew that there were things, but they did not yet make distinctions between them. Next they made distinctions, but they did not yet pass judgements on  them. But when the judgements were passed, the Whole was destroyed. With the destruction of the Whole, individual bias arose.

The Tanakh (aka the Old Testament) starts with:

In the beginning when God created the heavens and the earth, the earth was a formless void… Then God said, ‘Let there be light’; and there was light. …God separated the light from the darkness. God called the light Day, and the darkness he called Night.

That’s how God made the first distinction:

(void) light

And then, in Tao Te Ching:

 The nameless is the beginning of heaven and earth…

Analogies can be found in Hinduism, Buddhism and Islamic philosophy. For example, the latter distinguishes essence (Dhat) from attribute (Sifat), which are neither identical nor separate. Speaking of Islam, the occultation prompts another association. According to Shia Islam, the Twelfth Imam has been living in temporary occultation and will reappear one day. Occultation is also the name of one of the identities in the primary algebra:


In it, the variable b disappears from left to right, and reappears from right to left. This can be pictured by changing the position of an observer: moving to the right until b is fully hidden behind a, and then, when moving back to the left, b reappears:


Another association, which I find particularly fascinating, is with the ancient Buddhist logical system catuskoti, the four corners. Unlike Aristotelian logic, with its principles of non-contradiction and the excluded middle, catuskoti has four possible values:

not being

being

both being and not being

neither being nor not being

I find the first three corresponding to the void, distinction, and re-entry, respectively, which is in line with Varela’s view that apart from the unmarked and the marked state, there should be a third one, which he calls the autonomous state.

The fourth value would represent anything that is unknown. If we take being as “true” and not-being as “false”, then every statement about the future is neither true nor false at the moment of uttering. And we make a lot of statements about the future, so it is common to have things in the fourth corner.

The fourth value reminds me of the Open World Assumption, and vice versa, which I find very useful in many cases, as I mentioned here, here, and here. It also tempts me to add a fourth statement to the initial three:

To not know is not to know there is not.

Catuskoti fits naturally with the Buddhist world-view, while being at odds with the Western one. At least until recently, when some multi-valued logics appeared.
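One such system is Belnap’s four-valued logic (true, false, both, neither), a modern relative of the four corners. In the sketch below, which is my own illustrative encoding rather than anything from catuskoti itself, a value is the set of classical truth values a receiver has been “told”, so both is {true, false} and neither is the empty set:

```python
# The four corners as sets of classical values "told" about a statement.
T = frozenset({True})
F = frozenset({False})
BOTH = frozenset({True, False})
NEITHER = frozenset()

def neg(x):
    """Negation flips each told value; BOTH and NEITHER are fixed points."""
    return frozenset(not v for v in x)

def conj(x, y):
    """Told true if both conjuncts are told true; told false if either is."""
    told = set()
    if True in x and True in y:
        told.add(True)
    if False in x or False in y:
        told.add(False)
    return frozenset(told)

print(neg(BOTH) == BOTH)            # True: a contradiction stays one
print(conj(T, NEITHER) == NEITHER)  # True: the unknown corner propagates
```

Note how the two extra corners behave differently under conjunction: BOTH conjoined with NEITHER comes out false, since the false half of BOTH is enough to make the whole conjunction told-false.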

George Spencer-Brown, Louis Kauffman and William Bricken demonstrated that many other mathematical and logical theories can be generated using the calculus of indications. For example, in elementary logic and set theory, negation, disjunction, conjunction, and entailment can be represented respectively with (A), AB, ((A)(B)), and (A)B, so that the classical syllogism ((A entails B) and (B entails C)) entails (A entails C) can be shown with the following form:


If that’s of interest, you can find explanations and many more examples in this paper by Louis Kauffman.
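To see that the bracket translation behaves as claimed, the syllogism can be checked directly in two-valued terms. This is a small check of my own, not something from Kauffman’s paper, reading the entailment form (A)B as “not A, or B”:

```python
from itertools import product

def implies(x: bool, y: bool) -> bool:
    """Entailment (A)B in the bracket notation: not A, or B."""
    return (not x) or y

# ((A entails B) and (B entails C)) entails (A entails C)
# holds, i.e. reduces to the marked state, for every assignment:
tautology = all(
    implies(implies(a, b) and implies(b, c), implies(a, c))
    for a, b, c in product([False, True], repeat=3)
)
print(tautology)  # True
```

In bracket terms, this is the observation that the form above reduces to the mark whichever of the two values is substituted for A, B and C.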

“Laws of Form” inspired extensions and applications in mathematics, second-order cybernetics, biology, cognitive and social sciences. It influenced prominent thinkers like Heinz von Foerster, Humberto Maturana, Francisco Varela, Louis Kauffman, William Bricken, Niklas Luhmann, and Dirk Baecker.


Self-reference is awkward: one may find the axioms in the explanation, the brain writing its own theory, a cell computing its own computer, the observer in the observed, the snake eating its own tail in a ceaseless generative process.

F. Varela, A Calculus for Self-reference

Is re-entry fundamental or a construct? According to George Spencer-Brown, it’s a construct. Varela, on the other hand, finds it not just fundamental but actually the third value, the autonomous state, and he brings some quite convincing arguments. For Kauffman, re-entry is based on distinction, just as distinction is based on re-entry:

the emergence of the mark itself requires self-reference, for there can be no mark without a distinction and there can be no distinction without indication (Spencer-Brown says there can be no indication without a distinction. This argument says it the other way around.). Indication is itself a distinction, and one sees that the act of distinction is necessarily circular.

That was the reason I presented three statements, and not only the first two, as a summary of the calculus of indications.

A similar kind of reasoning can be applied to sense-making. It can be seen as an interplay between autonomy and adaptivity. Autonomy makes the distinctions possible, and the other way round. Making distinctions on distinctions is in fact sense-making, but due to adaptivity it also changes the way distinctions are made. At this new level, distinctions become normative. They have value in the sense that the autonomous system has an attitude, a re-action determined by (and determining) that value. The simplest and most clearly distinguished attitudes are those of attraction, aversion and neutrality.

This narrative may imply that values are of a higher order. First distinctions are made, then sense, and then values, in a sort of a linear chain. But it is not linear at all.

As George Spencer-Brown points out, a distinction can only be made by an observer, and the observer has motive to make certain distinctions and not others:

If a content is of value,  a name can be taken to indicate this value.

Thus the calling of a name can be identified with the value of the content

Thus values enable distinctions and close the circle. Another re-entry. We can experience values due to the significance that our interaction with the world brings forth. This significance is based on making distinctions, and we can make distinctions because they have value for us.

But what is value and is it valuable at all? And if value is of any value, what is it that makes it such?

Ezequiel Di Paolo defines value as:

the extent to which a situation affects the viability of a self-sustaining and precarious network of processes that generates an identity

And then he adds that the “most intensely analysed such process is autopoiesis”.

In fact, the search for a calculus for autopoiesis was what attracted Varela to the mathematics of Laws of Form in the first place. It was a pursuit to explain life and cognition. Autopoiesis was also the main reason for Luhmann’s and Baecker’s interest, in their case for studying social systems.

Operationally closed networks of processes in general, and autopoiesis in particular, show both re-entry and distinction, enabled by this re-entry and sustaining it. In an operationally closed system, all processes enable and are enabled by other processes within the system. The autopoietic system is the stronger case, where the components of the processes are not just enabled but actually produced by them.

Both are cases of generating identity, which is making a distinction between the autonomous system and its environment. The environment is not everything surrounding the system, but only the niche which makes sense to it. This sense-making is not passive and static; it is a process enacted by the system, which brings about its niche.

Identity generation makes a distinction which is also what it is not: a unity. That is how living systems get more independent from the environment, which supplies the fuel for their independence and absorbs the exhaust of practising it. And more independence means more fuel, hence bigger dependence. The phenomenon of life is a “needful freedom”, as pointed out by Hans Jonas.

Zooming out, we come back to the observation of George Spencer-Brown:

the world we know is constructed in order to see itself.[…] but in any attempt to see itself, […]it must act so as to make itself distinct from, and therefore false to, itself.


Closing the circle from distinctions through sense-making and value-making to (new) distinctions solves the previous implication of linearity, but it may now be misunderstood to imply causality. First, my intention was to point these out as enabling conditions, leaving for now the question of whether they are necessary and sufficient. Second, the circle is enabled by and enables many others, the operationally closed self-generation of identity being of central interest so far. And third, singling out these three operations is itself a matter of distinctions, made by me, as an act of sense-making, and on the basis of certain values.