This blog is part of an online learning platform, which includes the Pathways to New Community Paradigms Wiki and a number of other Internet-based resources, to explore what is termed here 'new community paradigms': transformational changes brought about by members of a community.


It is intended to offer resources and explore ideas with the potential of purposefully directing the momentum needed for communities to create their own new community paradigms.


It seeks to help those interested in becoming active participants in the governance of their local communities rather than merely passive consumers of government service output. This blog seeks to assist individuals wanting to redefine their role in producing a more direct democratic form of governance by participating both in defining the political body and in establishing the policies that will have an impact on their community, so that new paradigms for their community can be chosen rather than imposed.


Friday, September 6, 2019

We are, each of us, a system. We are, each of us, systems in multitudes


We are, each of us, a system. We are, each of us, systems in multitudes. We are both at the same time. Continuing the discussion from the previous post and Still Learning to Understand Systems: we are ourselves, as systems, of the same nature as the systems manifested within the world to both our senses and our minds. We are separated from all that we are not, from what lies within us, and from other systems by boundaries that are placed there by us.

The most determinative way of assigning a boundary to a system, it seems to me, though the one carrying the least information, is to give it a label. Buckminster Fuller's definition of a system as "the first subdivision of universe into a conceivable entity" provides an important initial context without particularities.

Russell Ackoff, whose focus on systems was on addressing practical, real-world messes, the kind raised by uncertainty and complexity, said that any particular system can be characterized by three essential conditions, which I could also see as helping to establish the boundary of a system as defined by Fuller.

First, each element in a system has an effect on the behavior of the whole system. If it does not have such an effect, then it is not part of the system and belongs outside the boundary. Second, within a system each element is affected by at least one other element, and none of the elements has an independent effect on the whole. Every element then affects some other element or elements in the system and affects the whole only in conjunction with some other element or elements. Third, it is not possible to form totally independent subsystems from a subgrouping of a system's elements. Any subsystem that can be made totally independent belongs outside the system. It will likely take some experimenting to determine which elements fulfill all three conditions. This still leaves a potentially large portion of elements that can be placed on either side of a boundary depending upon how the boundary is defined; the placement then becomes a matter of consensus or coercion, both of which, again in my view, can be detrimental to understanding a system.

Fuller’s definition, however, is not merely an esoteric abstraction but can provide an immediate, visual illustration of the boundaries of a three-tier hierarchy anchored in geometry. 

The simplest possible three-dimensional configuration, the minimum set of relations representing a stable structure (one that can also be fractalized) within the "real world", is a tetrahedron: a four-sided, triangular-faced pyramid having four vertices, four faces, and six edges. It subdivides the world into all that is outside the system's structure (environment), the structure of the system itself (system), and the system's interior (bounded elements and connections), a three-way division with profound, practical implications for all system designers.
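The tetrahedron's counts can be checked against Euler's polyhedron formula, V − E + F = 2, which holds for any convex polyhedron; the tetrahedron satisfies it with the smallest possible numbers. A minimal sketch (the function name is mine, not Fuller's):

```python
# Euler's polyhedron formula: V - E + F = 2 for any convex polyhedron.
# The tetrahedron, Fuller's minimum system, satisfies it with the
# smallest possible counts of vertices, edges, and faces.
def euler_characteristic(vertices, edges, faces):
    return vertices - edges + faces

tetrahedron = {"vertices": 4, "edges": 6, "faces": 4}
assert euler_characteristic(**tetrahedron) == 2  # 4 - 6 + 4 = 2

# A cube also satisfies the formula, but with larger counts.
assert euler_characteristic(vertices=8, edges=12, faces=6) == 2
```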

Fuller's definition of a system offers a particularly significant illustration of verticality, or nestability: the means of forming a hierarchy, series, or sequence wherein each member, element, or set is contained in and contains the next. From a systems thinking view of the world, this concept of hierarchy finds expression in the natural world as a stratified organization of increasing levels of complexity, a sequence corresponding to levels of emergence in which novel qualitative properties distinguish one level from the next.

The sequence starts with elementary particles, atoms, and molecules becoming various forms of matter; leaps from the inorganic to simple organic life-forms; evolves into complex organisms and, in aggregate, into whole ecosystems; and leaps again to consciousness, to include humans and subsequently human society. Each level represents a cluster of interacting sub-components consisting of elements of the previous level.

Each level can be distinguished by the relative strength of the interactions by which it is constrained: interactions are stronger within a level and weaker between levels. It is the constrained internal bonds that allow the individual integrity of a level to stand out against the background of its environment and provide for a definition of boundary conditions.

This is very different from the strictly reductionist, top-down, complicated command-and-control hierarchies discussed in the last post, which feature little in the way of nested verticality. It means that from a systems thinking perspective one needs to optimize on at least three levels, the system under consideration, its environment, and its internal components, as a coherent, harmonious integration of relevant aspects for any constructive, systemic intervention. The familiar and conventional organizational or governance structures of our typical commercial, political, or social affairs are simply inadequate, in not being internally rich enough, to address the demands of an increasingly complex world.

Much of business and governance management perpetuates a model that imposes structures with grossly insufficient variety, such as conventional concepts of leadership that violate the law of requisite variety by popularly entrusting power to a single person. Consider the complex interactions that increasingly characterize today's society against the typical, still-prevailing, hierarchical, command-and-control structure, afflicted, I would add, by "complicatedness". Such low-variety models ultimately only impoverish the system that is supposedly "under control."

Another cybernetic term, due to Ross Ashby, is "ultrastability": the concept of regulation relating to the ability of a system to restore homeostatic equilibrium after unexpected perturbations even when a trajectory for doing so has not been pre-specified or built in. It is a more complex, dynamic form of adaptation than the typical homeostatic mechanism, in which a fixed decision rule triggers an appropriate corrective action whenever equilibrium is disturbed.

In more interesting cases, read as more complex cases, such as brain-like systems, societies, or ecosystems, a sufficient amount of variety can be "built" into a system so that its internal reconfiguration can be made to match unpredictable changes in its environment even if a specific decision rule is not already embedded in its structure. The general rule then becomes "keep changing internal configurations": basically, rewire the internal variety of the system in search of a subset that matches new demands in real time. The internal variety of a system, however, even if very high and ultrastable, is still finite; an entirely new environmental condition may require options that the system cannot generate.

This gives rise to Ashby's Law of Requisite Variety, which states that "only variety can absorb variety." Effective regulation can be achieved only when the regulating system contains, at a minimum, the same amount of variety as the system being regulated. The requirement for requisite variety applies regardless of the type of system, whether automated devices, technology processes, ecosystems, or social systems.
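Ashby's law can be illustrated with a toy regulation game; this is my own sketch, not from Ben-Eli's paper, and the scoring rule is invented for illustration. A regulator can hold the outcome steady only if it has at least one distinct response per possible disturbance:

```python
# Toy illustration of the Law of Requisite Variety.
# The environment produces disturbances 0..n-1; the regulator picks a
# response; the outcome is (disturbance - response) % n. The outcome
# can be held fixed at 0 for every disturbance only if the regulator's
# response variety matches the disturbance variety.
def outcomes(n_disturbances, responses):
    """Best-case set of outcomes a regulator can achieve."""
    results = set()
    for d in range(n_disturbances):
        # the regulator chooses whichever response best cancels d
        best = min((d - r) % n_disturbances for r in responses)
        results.add(best)
    return results

full = outcomes(4, responses=[0, 1, 2, 3])  # variety matches
poor = outcomes(4, responses=[0, 1])        # variety too low
assert full == {0}       # every disturbance absorbed
assert len(poor) > 1     # some disturbances leak through unregulated
```

With only two responses against four disturbances, some disturbances cannot be cancelled, whatever the regulator does: low variety impoverishes the regulation.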

One means of enhancing requisite variety is redundancy. Though commonly understood to mean "unnecessary," in information theory redundancy refers to protecting the integrity of information against deterioration from background noise by increasing information content or channel capacity.

At a state of maximum disorder or entropy, when no distinctions can be made, no information is discernible, and activity ceases, redundancy will be at zero. Redundancy then allows for more potential "possibilities." According to Heinz von Foerster, if the rate of change of a system's redundancy remains positive, then the system is self-organizing. This would logically seem to extend further to "redundancy of potential command".
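In Shannon's terms this can be written as R = 1 − H/H_max, where H is the entropy of the actual source and H_max the entropy of a maximally disordered one. A hedged sketch of that computation (the probability values are made up for illustration):

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def redundancy(probs):
    """R = 1 - H / H_max, where H_max = log2(number of symbols)."""
    h_max = math.log2(len(probs))
    return 1 - entropy(probs) / h_max

# Maximum disorder: every symbol equally likely, so redundancy is zero.
assert redundancy([0.25, 0.25, 0.25, 0.25]) == 0.0

# A highly constrained source has high redundancy (here close to 0.88).
skewed = redundancy([0.97, 0.01, 0.01, 0.01])
assert 0.8 < skewed < 0.95
```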

Internal complexity brought about by requisite variety allows for the emergence and re-emergence of different configurations in response to changing events. The important implication is that "living," self-organizing systems, including social systems of all types, depend on their internal complexity and inherent redundancy for resilience and long-term viability.

Distributing the processes of decision-making across a network-like organizational structure, determined by function and relevant knowledge rather than by authority assigned by rank and seniority, is termed "heterarchy." The potential for so-called "command" is thus distributed, or made redundant, over a large number of components, and its location shifts constantly within the network. It is not permanently localized, and no fixed vertical hierarchy of authority is discernible.

Fuller's definition of a system can be said to traverse the chasm between the solely conceptual and the countless entities with distinct and independent existence within our universe, lying between that which is conceivable but is not an entity within our universe and that which may perhaps be an entity but is not conceivable: in total, what we call our reality. It brings systems, conceptually defined, to a state of reality within which Norbert Wiener's cybernetics can be, and by necessity needs to be, applied.

The seemingly abstract, remote, and perhaps even esoteric concepts of variety, ultrastability, redundancy of potential command, synergy, and self-organization are all related, or constrained together, describing and arguably determining the characteristics of the regulating mechanisms that underlie the external behavior of complex systems. The practical implications are far-reaching and significant, not only in that they shape the conduct of human affairs, but in that they could be crucial in resolving the many sustainability-related challenges we are facing. The challenge of all interventions in any socio-ecosystemic domain would then be to keep an open, dynamic stance, working in tandem with the self-organizing properties of the system, rather than inadvertently destroying them.




Monday, September 2, 2019

Finding Our Way Through Complexity and Uncertainty


The last post looked at Michael Ben-Eli's paper, "Understanding Systems", part of the Sustainability Laboratory's online course on Systems Thinking and Systems Modeling. This should have been said then: the Understanding Systems paper needs to be read, even more so than this blog post series. This is only my interpretation, far more likely subject to error, jumbled with alternative and added thoughts to tie it to New Community Paradigms. The paper by Ben-Eli has far more insights from Stafford Beer, Ross Ashby, and others not included here.

The previous post was able to fully define the concept of "system", based on the insights in "Understanding Systems", solely through mutually supporting abstract concepts, without referring to anything physical. There wasn't any need for the particularity of specific instances. "Organization" could be seen as a concept without reference to a specific organization or particular type. Self-organization arose from the further internal interactions of mutually supporting concepts. There was an inference, with the mention of patterns, to our more immediate sensory world, but that world is a veil behind which the concept of patterns lies.

That conceptual reality, however, is not the one in which we have to survive, strive, and succeed. Even if the interactions within such a conceptual reality are constrained seamlessly, weaving together effortlessly, they are hidden from our own hard-knock reality by a veil of complexity, uncertainty, and randomness, and must be divined from the clues made apparent when we look for them.

The key to piercing the veil, so as to glimpse a system's internal workings, is to see the logic found in the circuitry of man-made, automatic, error-control mechanisms as analogous to the homeostatic logic found in physiological structures. These structures work to maintain specific physiological values, such as body temperature, body fluid composition, blood pressure, and blood sugar level, by triggering an appropriate corrective action when deviations from a norm occur.

Cybernetics focuses specifically on understanding the often obscured dynamics of a system's internal structures and the mechanisms that keep its dynamic organization invariant. The term "cybernetics" denotes the role of feedback mechanisms in processes of regulation and control. In cybernetics, the number of possible distinctions related to a system's internal states is termed "variety".

Norbert Wiener, the first pioneer in the field, and his colleagues established what I am going to call the immutable connection between the observable behavior of a system, or its output, and its internal structure in a 1943 paper, Behavior, Purpose, and Teleology.

They did so, first, by clarifying the question of purposive behavior, tying a system's behavior to its specific internal structure, which helped remove the need for "vitalism," or, as in the last post, a "vital force", to explain the special qualities of living systems. Second, by making it clear that to modify a system's behavior, change must be made to the system's core structure rather than directed at the behavior itself.

Wiener defined cybernetics as "the science of control and communication in the animal and the machine," highlighting two key ideas. One is establishing the validity of the theory for both man-made devices and living systems. The other is classifying "control" and "communication" as one, signifying the role of information in processes of control and regulation.

What then are the structures and mechanisms by which a system mediates its operation, viability, and performance, and by which it determines how it can regulate itself? How can it self-organize, and how can it adapt and evolve?

Regulatory processes mediate between the factors interacting to produce particular outcomes or behaviors, binding a system together to preserve (constrain) its singular identity. "Control" then is not meant to be a restricting concept.

Regulation in the cybernetic model is expressed in the context of a system’s capacity to maintain equilibrium states in the face of disturbances from an environment with which it interacts. Regulation can then be regarded as the manifestation of a system’s adaptive capacity.

Cybernetics extends the concept of regulation to evolutionary processes, by which a system can actually transform into a new entity with a higher regulation potency, amplifying potential variety through processes such as coalition formation. This highlights the importance of cooperation in evolution, as demonstrated by anthropologists looking at the work of Elinor Ostrom.

The number of different states that a system in the real world can assume, its potential internal variety, can be considered a measure of complexity. Complexity then can be thought of as the number of possible distinctions by which information can be known, or that can be determined, about a given system, independent of its size. A human cell can be more complex than a large distribution warehouse.

The number of possible states, or the variety, of a system increases rapidly and exponentially as the number of elements it contains grows, along with the number of interactions or connections between them and the possibility for each interaction to assume more than a single value.
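The growth is easy to see with binary elements; this is my own illustration, not a formula from the paper. If each of n elements is simply on or off, the elements alone have 2^n states, and letting each pairwise connection also be present or absent multiplies that by another 2^(n(n−1)/2):

```python
# Possible states of a system of n binary (on/off) elements, when the
# connection between each pair of elements can itself be present or absent.
def state_count(n):
    element_states = 2 ** n               # each element: on or off
    connections = n * (n - 1) // 2        # number of possible pairwise links
    connection_states = 2 ** connections  # each link: present or absent
    return element_states * connection_states

for n in (3, 5, 10):
    print(n, state_count(n))
# already at n = 10 the count is 2**55, more than 3.6e16 states
```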

A quantity of variety, however, also produces a measure of uncertainty. Uncertainty, or randomness, relates to our perception of order, or information about the different states of a system, though in a way contrary to how order relates to the more deterministic regularities in a system's behavior, regularities produced by a detectable sequence of the system's changing states.

The concepts of information and uncertainty are closely related, having similar mathematical expressions but opposite signs. If uncertainty is at a maximum, then all events in a given universe can occur with equal probability, which means that no information is available and redundancy is at zero. Uncertainty then is a measure of disorder in a system, or the level of entropy.

The process of conveying information about one state out of the variety of all possible states of a system removes some uncertainty, but never all of it; some of it is irreducible, as Donella Meadows has pointed out. Too much information can be as bad as too little, making it impossible to distinguish relevant information from background noise.

The behavior of whole systems, presumably those of sufficient complexity to require multiple subcomponents to achieve a high degree of internal variety, cannot be predicted from the behavior of the system's parts alone, due to the emergence of new, often unexpected properties. This perspective on an entire or whole system is termed synergy in cybernetics. Emergent properties ought to be assumed in all cases of social systems design and addressed, as they will find inevitable expression in social interactions.

The concept of variety, along with the concepts of organization, entropy, and order, and I would add synergy, sits at the nexus of our ideas involving physics, information theory, and the philosophy of science. Such regularities as can be found within a system are produced by constraints imposed on that system's potential variety, or its internal structure, as discussed in the last post. This, the Understanding Systems paper asserts, is profound.

The far more prevalent, traditional, reductionist analytical process, according to Ludwig von Bertalanffy as cited by the Understanding Systems paper, treats interactions between elements as non-existent or negligible, so that they can supposedly be ignored for the purpose of analysis. According to a strictly reductionist perspective, understanding the world should be possible by constructing a picture of it from the sum of detailed descriptions of its parts. There would be no need but to precisely measure what is immediately before us and add it up. We ignore the internal state of systems at our peril: assailed as we are by a world of complexity, uncertainty, and randomness, the reductionist perspective can be seen as myopic.

The gross deficiencies of currently prevailing top-down hierarchical command-and-control structures, including an inability to effectively address complexity, have been highlighted by many, including Dee Hock, founder and CEO emeritus of VISA International, who advocated instead for a creative combination of "order" and "chaos", or what he termed the "chaordic": a more versatile and dynamic, but also more complex, form of organization that allows for the ability to innovate and experiment.

Switching from a mechanistic, reductionist perspective of the world to a systems view opens one up, hopefully but also understandably hesitantly, to a variable, dynamic, and interdependent reality. That reality demonstrates not only the need for a transformative shift in values, attitudes, and actions in the world, but also the need for pathways able to achieve them.

Personally, I doubt that this switch would actually force acknowledgment of such a reality, as Ben-Eli asserts. We humans are far more resilient in our mental models than that; persuasion is still required. As he says elsewhere, individual observers can either keep or change their frame of reference. Even if they do change, they will likely have to come together with others in some manner to change their collective conclusions in order to change the system. Regardless, the implications of switching are still profound.
