Lessons from Dietrich Dörner’s The Logic of Failure

This collection of short statements from Dietrich Dörner's The Logic of Failure: Recognizing and Avoiding Error in Complex Situations is not a substitute for reading—for studying—his book. Dörner used computer simulations of real problems that participants of varied types and backgrounds were asked to solve. The data collected and the participants' own words show how people fail at thinking and what behaviors helped participants understand “dynamic systems” and think successfully.

 

You can recognize his words by the dark green color, with [ ] indicating text added to his words or a brief explanation of a change to his words. (Click here for the disclaimer.) Dörner's wisdom is not just about how we fail to think; it also applies to how to help ourselves think well (with fewer errors and in less time), how to improve education, and how to be effective citizens of our republic.

 

This webpage provides two ways to access the quotations from Dörner's The Logic of Failure:

§         Looking at Specific Issues Covered in the Quotations from Dörner's The Logic of Failure

What is a system?

Why is systems thinking necessary?

What conditions exist in the systems in our world that make them hard to understand?

Can systems be self-correcting?

Who thinks well? (Who produces the good outcomes they wanted?)

Who thinks poorly? (Who produces negative outcomes and outcomes that are opposite to what they declared was their intention?)

What attributes do NOT guarantee good thinking?

How does failure in thinking happen?

How can failure escalate?

How do failures in thinking produce cynicism and fundamental disregard for those people and goals that the decision makers claimed they wanted to help?

§         Looking at Quotations from Dörner's The Logic of Failure in Page Order

 

Looking at Specific Issues Covered in the Quotations from Dörner's The Logic of Failure

What is a system?

Dörner defines a system as “a network of many variables in causal relationships to one another.” (p. 73) A system may, however, best be understood by this visual image:

[W]e could liken a decision maker in a complex situation to a chess player whose set has many more than the normal number of pieces, several dozen, say. Furthermore, these chessmen are all linked to each other by rubber bands, so that the player cannot move just one figure alone. Also, his men and his opponent’s men can move on their own and in accordance with the rules the player does not fully understand or about which he has mistaken assumptions. And, to top things off, some of his own and his opponent’s men are surrounded by a fog that obscures their identity. (p. 42)

Why is systems thinking necessary?

…we face an array of closely—though often subtly—linked problems. The modern world is made up of innumerable interrelated subsystems, and we need to think in terms of these interrelations. In the past, such considerations were less important. What did the growth of Los Angeles matter to a Sacramento Valley farmer a hundred years ago? Nothing. Today, however, aqueducts running the length of the state make northern and southern Californians bitter competitors for water. Of what concern to us were religious differences in Islam forty years ago? Apparently none. The global interrelations of today make such dissension important everywhere.…

 

The need to see a problem embedded in the context of other problems rarely arose. For us, however, this is the rule, not the exception. Do our habits of thought measure up to the demands of thinking in systems? What errors are we prone to when we have to take side effects and long-term repercussions into account? (pp. 5-6) [bold added]

One basic error accounts for all catastrophes: none of the participants realized that they were dealing with a system in which, though not every element interacted with every other, many elements interacted with many others.… They did not take into account the side effects and repercussions of certain measures. They dealt with the entire system, not as a system but as a bundle of independent minisystems. And dealing with systems in this way breeds trouble: if we do not concern ourselves with the problems we do not have, we soon have them. (pp. 86-87)

What conditions exist in the systems in our world that make them hard to understand?

§         Intransparency

§         Complexity, and the difference in complexity with and without experience

§         Interrelationships, with their "side effects and long-term repercussions," and changes of the whole over time

§         Time changes

§         Exponential changes

What characteristics of ourselves and of information make a difference in success with systems?

§         Assumptions, recognized and not

§         Information without methods to know what it "means"; information overload

Can systems be self-correcting? What attributes make a safer system?

§         A “well-buffered system” with negative feedback for stability

Who thinks well? (Who produces the good outcomes they wanted?)

How a good thinker works initially

§         Uses analogy rigorously to determine the structure of the system, as part of the strategies Dörner suggests for systems thinking (an example)

§         Collects and PLOTs data (needed to see “covariations” that may have a “time lag”) 

§         Takes action but waits for measurement before acting again 

§         Takes more aspects into consideration 

§         Focuses on what they want to retain

§         Uses “reverse planning”

§         Follows a process - example

§         Develops goals that can adjust to changes and deals with contradictory goals

§         Makes more decisions

 

How a good thinker continues and completes their work

§         Does more testing of their own hypotheses (less assumption of "truth")

§         Has more accountability (less blame-shifting)

§         Has more tolerance of uncertainty

§         Encourages more critique

§         Continues to pay attention and, before acting, "submit[s]" our observations to "'strategic' scrutiny"

§         Is able to deal with changes over time

Who thinks poorly? (Who produces negative outcomes and outcomes that are opposite to what they declared was their intention?)

§         Summary of major signs of poor thinking

 

How poor thinkers work:

§         Action-oriented without checking whether solutions are working

§         “ballistic behavior” – always moving on to “new problems” because they never check the results of their past decisions

§         Lack of focus

§         Lack of "telling and frequent feedback" leading to “magical” hypotheses based on "local experience"

§         “Oversteering”

 

Certainty and poor thinkers:

§         Prior successful violation of rules

§         Group overconfidence and group pressure (groupthink)

§         Individual certainty

 

Response when their solutions do not work:

§         Prone to doublespeak

§         Prone to treat symptoms not problems

§         Prone to false simplification

§         Prone to refusing to look at what they don't want to see

§         Prone to "horizontal flight" – retreating to the “small, cozy corner of reality where we feel at home”

§         Prone to “vertical flight” – “no longer have to deal with reality but only with what we happen to think about it”

What attributes do NOT guarantee good thinking?

§         Having good intentions (and how “evil intentions plus intelligence" may be less dangerous than "good intentions plus stupidity")

§         Having a high IQ

§         Being an expert – positives to being an expert but negatives with group pressure (groupthink) and negatives when projecting change over time

How does failure in thinking happen?

§         Incremental, built-up small errors

§         Habits of thought and stress

§         Unclear goals

§         Focus on own skills in choosing problems to solve (working on the problems you like to solve, not the problems you need to solve)

§         Focus on a “single point” (in this case “satisfaction of the population”) and use of a visual that shows the “centralistic organization” of that single point

How can failure escalate?

§         Unintended consequences and side effects leading to catastrophe

§         Focus only on recognized problems (not those on the rise)

§         Focus only on what we want to change, not on what we "want to retain"

§         "repair service behavior"

§         Vicious circles of solving the problems you caused

§         "goal inversion" – renaming a problem as a sign that a solution was developing

§         "doublespeak" (including the misuse of the word "voluntary")

How do failures in thinking produce cynicism and fundamental disregard for those people and goals that the decision makers claimed they wanted to help?

§         Example 1

§         Example 2

 

Looking at Quotations from Dörner's The Logic of Failure in Page Order

Disclaimer

The use of these quotations does not indicate Dietrich Dörner's endorsement of the content on this webpage. Previously, Dörner gave permission to use these quotations as a resource for others. I have sent an email request to use these quotations here. If he does not agree, we will, of course, remove these quotations immediately.

 

Page

Quotation from Dietrich Dörner's Logic of Failure

04

In short, they solved some immediate problems but did not think about the new problems that solving the old ones would create.

05

No one is distressed by failing to see very subtle points that required specialized knowledge. We are distressed, however, if we overlook the obvious.

05-06

…we face an array of closely—though often subtly—linked problems. The modern world is made up of innumerable interrelated subsystems, and we need to think in terms of these interrelations. In the past, such considerations were less important. What did the growth of Los Angeles matter to a Sacramento Valley farmer a hundred years ago? Nothing. Today, however, aqueducts running the length of the state make northern and southern Californians bitter competitors for water. Of what concern to us were religious differences in Islam forty years ago? Apparently none. The global interrelations of today make such dissension important everywhere.…

 

[In the past], [t]he need to see a problem embedded in the context of other problems rarely arose. For us, however, this is the rule, not the exception. Do our habits of thought measure up to the demands of thinking in systems? What errors are we prone to when we have to take side effects and long-term repercussions into account? [bold added]

07

Our brains are not fundamentally flawed; we have simply developed bad habits. When we fail to solve a problem, we fail because we tend to make a small mistake here, a small mistake there, and these mistakes add up.

08

But the nurturing of good intentions is an utterly undemanding mental exercise, while drafting plans to realize those worthy goals is another matter. Moreover, it is far from clear whether “good intentions plus stupidity” or  “evil intentions plus intelligence” have wrought more harm in the world. People with good intentions usually have few qualms about pursuing their goals. As a result, incompetence that would otherwise have remained harmless often becomes dangerous, especially as incompetent people with good intentions rarely suffer the qualms of conscience that sometimes inhibit the doings of competent people with bad intentions. The conviction that our intentions are unquestionably good may sanctify the most questionable means.

09

[Description of the experiments performed that are the basis of Dörner's conclusions. To simplify, Dörner provides computer simulations of real problems that participants of varied types and backgrounds use. The data collected and the participants' own words show how people fail at thinking and what is more important for the long term—how people can think successfully.]

10

Failure does not strike like a bolt from the blue; it develops gradually according to its own logic. As we watch individuals attempt to solve problems, we will see that complicated situations seem to elicit habits of thought that set failure in motion from the beginning. From that point, the continuing complexity of the task and the growing apprehension of failure encourage methods of decision making that make failure even more likely and then inevitable.

 

We can learn, however. People court failure in predictable ways. …We need only apply the ample power of our minds to understanding and then breaking the logic of failure.

13

It did not occur to them that their measures [in the Tanaland simulations] had in fact set a time bomb ticking, and they were taken by complete surprise when the almost inevitable famines broke out in later years.

17

Their [the Tanaland simulation participants’] ultimate failure shows clearly, however, that more thinking and less action would have been the better choice.

17

…These participants did not consciously set out to redefine problems; the redefinitions crept up on them. [Gives the example of the focus on a canal project by one participant, instead of noticing a famine.]

17

Some participants reacted cynically to repeated reports of famine. At first, the reports evoked concern, but after participants had made vain attempts to solve the problem, we began to hear remarks like “They’ll just have to pull in their belts and make sacrifices for the sake of their grandchildren,” “Everybody has to die sometime,” “It’s mostly the old and weak who are dying, and that’s good for the population structure.”

18

The parallels to real situations were obvious. Here, as in the real world, we found that our decision makers

§         acted without prior analysis of the situation,

§         failed to anticipate side effects and long-term repercussions,

§         assumed that the absence of immediately obvious negative effects meant that correct measures had been taken,

§         let overinvolvement in “projects” blind them to emerging needs and changes in the situation, and

§         were prone to cynical reactions [Bullets added]

21

The first obvious difference is that the good participants [in the Greenvale simulations] made more decisions than the bad ones….

22

The participants acted “more complexly.” Their decisions took different aspects of the entire system into account, not just one aspect. This is clearly the more appropriate behavior in dealing with complicated systems.

24

The good participants differed from the bad ones, however, in how often they tested their hypotheses. The bad participants failed to do this. For them, to propose a hypothesis was to understand reality; testing that hypothesis was unnecessary. Instead of generating hypotheses, they generated “truths.”

Then, too, the good participants asked more why questions (as opposed to what questions). They were more interested in the causal links behind events….

27

When they could see no other way out, their last resort was to say: Tom or Dick or Harry should tend to the problem. This is a normal human dodge… But it has potentially serious consequences. If, the moment something goes wrong, we no longer hold ourselves responsible but push the blame onto others, we guarantee that we remain ignorant of the real reasons for poor decisions, namely inadequate plans and failures to anticipate the consequences.

27

What factors shaped the behavior of participants? The usual battery of psychological tests is useless in predicting participant behavior. We would assume that “intelligence” would determine behavior in complex situations like this, for complicated planning—formulating and carrying out of decisions—presumably places demands on what psychology has traditionally labeled “intelligence.” But there is no significant correlation between scores on IQ tests and performance in the Greenvale experiment or in any other complicated problem-solving experiment.

27

It seems likely that the capacity to tolerate uncertainty has something to do with how our participants behaved. [Goes on to criticize delegators.]

30-31

[Uses research on Chernobyl.] This tendency to “oversteer” is characteristic of human interaction with dynamic systems. [Refers to chapter 5 on time for details on overcorrection leading to failure.]

 

The operators obviously misjudged the situation. Why? Hardly because the dangers of instability had never been pointed out to them.… One [reason] was probably the time pressure they were under or felt they were under…. The other reason was probably that, although the operators may well have known “theoretically” about the danger of reactor instability, they could not conceive of the danger in a concrete way. Theoretical knowledge is not the same thing as hands-on knowledge.

31

Another likely reason for this violation of the safety rules was that operators had frequently violated them before. But as learning theory tells us, breaking safety rules is usually reinforced, which is to say, it pays off.… Safety rules are usually devised in such a way that a violator will not be instantly blown sky high, injured, or harmed in any other way but will instead find his life made easier. And this is precisely what leads people down the primrose path. The positive consequences of violating safety rules reinforce our tendency to violate them, so the likelihood of a disaster increases.

33-34

 

What kind of psychology do we find here? We find a tendency, under time pressure, to apply overdoses of established measures. We find an inability to think in terms of nonlinear networks of causation rather than of chains of causation—an inability, that is, to properly assess the side effects and repercussions of one’s behavior. We find an inadequate understanding of exponential development, an inability to see that a process that develops exponentially will, once it has begun, race to its conclusion with incredible speed. These are all mistakes of cognition.

 

The Ukrainian reactor operators were an experienced team of highly respected experts who had just won an award for keeping their reactor on the grid…. The great self-confidence of this team was doubtless a contributing factor in the accident. They thought they knew what they were dealing with, and they probably also thought themselves beyond the “ridiculous” safety rules, devices for tyro reactor operators, not for a team of experienced professionals.

34

The tendency of a group of experts to reinforce one another’s conviction that they are doing everything right, the tendency to let pressure to conform suppress self-criticism within the group. [Cites Irving Janis, The Victims of Groupthink: A Psychological Study of Foreign-Policy Decisions and Fiascos. Groupthink includes materials on President Kennedy’s Bay of Pigs disaster and his alteration of the system for foreign policy decisions before the Cuban Missile Crisis, thus accounting for a different outcome.]

 

Furthermore, the violations of the safety rules were by no means “exceptions” committed here for the first time. They had all been committed before—if not in this precise sequence—without consequence. They had become established habits in an established routine.

35

There is a difficulty in assessing side effects and long-term repercussions, that is, a tendency to think in terms of isolated cause-and-effect relationships.

37

All are, at least in part, “intransparent”; one cannot see everything one would like to see. And all develop independent of external control, according to their own internal dynamic.

37

Complexity is the label we will give to the existence of many interdependent variables in a given system. The more variables and the greater their interdependence, the greater that system’s complexity. Greater complexity places high demands on a planner’s capacities to gather information, integrate findings, and design effective actions. The links between the variables oblige us to attend to a great many features simultaneously, and that, concomitantly, makes it impossible for us to undertake only one action in a complex system.

37

A system of variables is “interrelated” if an action that affects or is meant to affect one part of the system will also always affect other parts of it. Interrelatedness guarantees that an action aimed at one variable will have side effects and long-term repercussions. A large number of variables will make it easy to overlook them. [bold added]

39

Complexity is not an objective factor but a subjective one. Take, for example, the everyday activity of driving a car.… The main difference between these two individuals [the beginner and the experienced driver] is that the experienced driver reacts to many “supersignals.” For her, a traffic situation is not made up of a multitude of elements that must be interpreted individually. It is a “gestalt”…

 

Supersignals reduce complexity, collapsing a number of features into one,

40

The dynamics inherent in systems make it important to understand developmental tendencies. We cannot content ourselves with observing and analyzing situations at any single moment but must instead try to determine where the whole system is heading over time. For many people this proves to be an extremely difficult task. [bold added]

40

[Gives Chernobyl case of intransparency.] [I]ntransparency—the condition in which decision makers “must make decisions affecting a system whose momentary features they can see only partially, unclearly, in blurred and shadowy outline—or possibly not at all.”

41

If we want to operate within a complex and dynamic system, we have to know not only what its current status is but what its status will be or could be in the future, and we have to know how certain actions we take will influence the situation. For this we need “structural knowledge,” knowledge of how the variables in a system are related and how they influence one another.

41

The totality of such assumptions in an individual’s mind—assumptions about the simple or complex links and the one-way or reciprocal influences between variables—constitutes what we call that individual’s “reality model.” A reality model may be explicit, always available to the individual in a conscious form, or it can be implicit, with the individual himself unaware that he is operating on a certain set of assumptions and unable to articulate what those assumptions are.

41

On the “implicit knowledge of experts”: Experts often display such intuition [not being able to articulate what are the factors in their decisions] in their specialties.

42

An individual’s reality model can be right or wrong, complete or incomplete. As a rule it will be incomplete and wrong, and one would do well to keep that probability in mind. But this is easier said than done. People are most inclined to insist they are right when they are wrong and when they are beset by uncertainty. (It even happens that people prefer their incorrect hypotheses to correct ones and will fight tooth and nail rather than abandon an idea that is demonstrably false.)

42

[W]e could liken a decision maker in a complex situation to a chess player whose set has many more than the normal number of pieces, several dozen, say. Furthermore, these chessmen are all linked to each other by rubber bands, so that the player cannot move just one figure alone. Also, his men and his opponent’s men can move on their own and in accordance with the rules the player does not fully understand or about which he has mistaken assumptions. And, to top things off, some of his own and his opponent’s men are surrounded by a fog that obscures their identity.

43

[Draws stages of "problem-solving process," including "review of effects of actions and revision of strategy.”]

44

And it is not just the normal citizen who lacks time to gather information. Politicians faced with the need to make a decision will rarely have time to digest even readily available information, much less to pursue new lines of inquiry.

44

We are constantly deciding how much information is enough.… We need, of course, to do more with information than simply gather it. We need to arrange it into an overall picture, a model of the reality we are dealing with. [bold added] Formless collections of data about random aspects of a situation merely add to the situation’s impenetrability and are no aid to decision making. We need a cohesive picture that lets us determine what is important and what unimportant, what belongs together and what does not—in short, that tells us what our information means. This kind of “structural knowledge” will allow us to find order in apparent chaos.

45

On the other hand, “Methodism,” as Carl von Clausewitz (1780-1831) called this tendency, can impose a crippling conservatism on our activity.

46

Action follows decision. Plans must be translated into reality. This, too, is a difficult enterprise, one that calls for constant self-observation and critique. Is what I expected to happen actually happening? … We must be prepared to acknowledge that a solution is not working.

50-51

The distinction between positive and negative goals may sound academic, but it is important. With a positive goal we want to achieve some definite condition. With a negative goal we want some condition not to exist.… it is inherent in the logic of “not” that negative goals are more likely to be vaguely defined. A “nonstove” or “nonchair” is more difficult to define than a “stove” or “chair” (though no less easy to recognize—thus a negative goal is not necessarily unclear).

 

Unclear goals are ones that lack a criterion by which we can decide with certainty whether the goal has been achieved….

 

[On page 51, he says such goals are frequently “really multiple goals,” and “Criteria for our goals can be linked in different ways” and may be “independent.”]

52

…in complex situations we cannot do only one thing. Similarly, we cannot pursue only one goal. If we try to, we may unintentionally create new problems. We may believe that we have been pursuing a single goal until we reach it and then realize—with amazement, annoyance, and horror—that in ridding ourselves of one plague we have created perhaps two others in different areas.

52

Implicit goal—[Example: Health is not a goal until one is sick.]

… the fact that most people’s actions are driven by an excessive (or exclusive) preoccupation with explicit goals accounts for a great deal of bad planning and counterproductive behavior. People concern themselves with the problems they have, not the ones they don’t have (yet). Consequently, they tend to overlook the possibility that solving a problem in area A may create one in area B.

52

[G]oals may be:

§         positive or negative

§         general or specific

§         clear or unclear

§         simple or multiple

§         implicit or explicit

53

[Required balancing act]:

§         [not too specific too early] –“Who knows how the game will develop?”

§         [but not avoiding specific goals] – “…if particular actions are not informed by an overall conception, behavior will respond only to the demands of the moment.”

53-54

[Covers “efficiency diversity” as a way out of a “rigid definition of final goals too early in the game”—something that can “blind him to the course of developments and limit his flexibility.”] A situation is characterized by high efficiency diversity if it offers different possibilities (“diversity”) for actions that have a high probability of success (“efficiency”).

55

By labeling a bundle of problems with a single conceptual label, we make dealing with that problem easier—provided we’re not interested in solving it. Phrases like “urgently needed measures for combating unemployment” roll off the tongue if we don’t have to do anything about unemployment. A simple label can’t make the complex nature of a problem go away, but it can so obscure complexity that we lose sight of it. And that, of course, is a great relief.

55

[Covers looking for “central problems.”]

56

[Covers dangers of “delegation.”]

57-58

[Covers an example of “implicit goals.”]

For example, although many people today consider DDT an ecological plague, it was regarded as a blessing when it was first developed: at last an effective means to prevent vast insect destruction of crops, particularly in the Third World—at last an effective weapon against hunger. The problems caused by the use of DDT came to light only gradually.

 

By developing DDT, scientists solved one problem, but that solution caused new problems. Why did no one anticipate those new problems? The easy answer is, “Because we didn’t know enough back then.” But I think the lack of knowledge is secondary. More important, it seems to me, is that no one took the trouble to acquire the necessary knowledge. When we are working on a given problem, we focus on that problem alone and not on problems that don’t exist yet. So the mistake is less not knowing than not wanting to know. And not wanting to know is a result not of ill will or egoism but of thinking that focuses on an immediately acute problem.

58

How can we avoid this pitfall? Simply by keeping in mind, whenever we undertake the solution of a problem, the features of the current situation that we want to retain. Simple? Apparently not.

 

As Brecht observed late in life, advocates of progress often have too low an opinion of what already exists. When we set out to change things, in other words, we do not pay enough attention to what we want to leave unchanged. But an analysis of what should be retained:

§         gives us our only opportunity to make implicit goals explicit

§         and to prevent the solution of each problem from generating new problems like heads of the Hydra.

59

The upshot is repair-service behavior: the mayor solves the problems that people bring to him.

 

In a traffic accident, for example, …minor injuries…the seriously wounded are no longer screaming and therefore no longer calling attention to their plight.

60

She solved not the problems she needed to solve but the ones she knew how to solve.

60

A goal that remains unclear, one that is not broken down into concrete partial goals, runs the risk of taking on a life of its own. Without concrete goals, there are no criteria that can be used to judge whether progress is in fact being made.

61

[A 50-50 chance of success will keep people playing; cites Mihaly Csikszentmihalyi’s term “flow experience.”]

62

An interim goal happened on by chance may seduce him [a problem solver] into a flow situation he is helpless to escape (and perhaps may not even want to escape).

[Gives example of becoming a computer jockey.]

63

[Cites “economist Charles Lindblom” on the value of “muddling through,” Lindblom’s term.

Cites Karl Popper on “pragmatic politics.”]

65-67

Contradictory goals are the rule, not the exception, in complex situations. In economic systems costs and benefits are almost always at odds. [bold added] … More dangerous are the situations in which the contradictory relation of partial goals is not evident.

[Gives example of goals of liberty and equality in “days of French revolution.”]

 

Unrecognized contradictory relations between partial goals lead to actions that inevitably replace one problem with another. A vicious circle is commonly the result. By solving problem X, we create problem Y. And if the interval between the solutions is long enough that we can forget that the solution of the first problem created the second one…someone is sure to come up with an old solution for whatever the currently pressing problem is and will not realize that the old solution will create problem X again and send the circle into another cycle.

 

The same thing happens when current problems are so urgent that we will do anything to be rid of them. This, too, can produce a vicious circle in which we flip-flop between two problematic solutions [as in headache/stomach cures].

67

When people recognize that they are caught in vicious circles of this kind [where solving one problem creates another—but not immediately and not where they recognize they caused new problem], they find different ways to deal with them. One of these is “goal inversion.” They give up one goal or even pursue the exact opposite of the original goal.

67

[Uses example of dam project.]

This was the occasion on which one “development director,” responding to reports about the inadequate food supply, declared, “They’ll just have to pull in their belts and make sacrifices for the sake of their grandchildren.” The misery this participant had created he now declared a necessary transitional phase on the road to a future paradise.

 

Another participant went a step further in this direction. He is the one who said, “It’s mostly the old and weak who are dying, and that’s good for the population structure.” Here the famine was not simply labeled a necessary transitional phase but was elevated to the status of a benefit. Instead of recognizing it as the catastrophe it was, this participant recast it as a desirable means of redressing the population problem.

67-68

…another means of coping when we find ourselves pursuing or having achieved contradictory goals is “conceptual integration,” or plainly put, doublespeak.

 

[Gives example of foreign policy problem plus “vast unemployment at home.”]

So our participant had created conflicting goals for himself. What did he do? He introduced voluntary conscription, commenting as he did so, “Everybody will surely understand the need for this.”

 

The intent behind these verbal unions of two incompatible realities seems clear to me. The perpetrator wants to keep one thing without losing the other—and verbally it seems to work. He may not even realize that he is trying to mix oil and water when he puts two such terms together.

 

It is worth noting in passing here that these verbal integrations of incompatibles can, over time, produce changes in the meaning of words.

69

[Gives a “black humor” example of workers allegedly “instantly shot” for “sabotage.”]

69

[Covers “conspiracy theories” by participants who were failing in thinking.]

69

[Covers “self-protection”—“best of intentions”]

72

Because of the way the goal had been defined, all effort had gone toward treating a symptom and none toward solving the underlying problem.

73

A system is a network of many variables in causal relationships to one another.

74-75

 

…the different ways the variables can affect one another and themselves. …

 

Positive feedback in a system means that an increase in a given variable produces a further increase in that variable; a decline, a further decline.… Positive feedback tends to undermine the stability of a system, and a system in which many variables are regulated by positive feedback can easily go haywire.

 

Negative feedback in a system means that an increase in one variable produces a decrease in another and vice versa. This kind of feedback tends to perpetuate the status quo. It maintains equilibrium in a system and should a disturbance occur, works to return the system to equilibrium….

 

A system incorporating many variables regulated by negative feedback is a well-buffered system. It can absorb a great many disturbances without becoming unstable [bold added]. But in natural systems, the capacities of buffers are usually limited. A feedback system consumes materials or energy, and if either one is exhausted, the system may collapse.… [Gives water wells as an example.]

 

The critical variables in a system are those that interact mutually with a large number of other variables in the system. They are, then, the key variables: if we alter them, we exert a major influence on the status of the entire system.

 

Indicator variables are those that depend on many other variables in the system but that themselves exert very little influence on the system. They provide important clues that help us assess the overall status of a system.
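[Editor's illustration, not from Dörner's book: the short Python sketch below uses made-up numbers to show the two feedback patterns described in the quotation above. Under positive feedback an increase produces a further increase, so the variable runs away; under negative feedback a deviation from a set point produces a correcting change, so the variable settles back down.]

def positive_feedback(value, gain=0.2, steps=10):
    # Each step adds a fraction of the current value: growth feeds further growth.
    history = [round(value, 2)]
    for _ in range(steps):
        value = value + gain * value             # an increase produces a further increase
        history.append(round(value, 2))
    return history

def negative_feedback(value, target=20.0, gain=0.2, steps=10):
    # Each step nudges the value back toward a set point: disturbances die out.
    history = [round(value, 2)]
    for _ in range(steps):
        value = value + gain * (target - value)  # a deviation produces a correcting change
        history.append(round(value, 2))
    return history

print(positive_feedback(1.0))    # 1.0, 1.2, 1.44, ... keeps growing (unstable)
print(negative_feedback(30.0))   # 30.0, 28.0, 26.4, ... settles toward 20.0

[A variable governed by the second pattern behaves like part of Dörner's “well-buffered system”: it absorbs a disturbance and returns toward equilibrium, while the first pattern is the kind that “can easily go haywire.”]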

76-77

[Uses the example of watch production and a person who used an analogy to the process of rolling her own cigarettes. Analogies are presented as a powerful way to learn. The participant was able to notice the commonalities in the “production process”:

§         The need for “raw materials”

§         The “certain sequence” and “set plan” with those raw materials to create the product

§         The need for “energy” and “how much is required”

§         The “skills [that] the makers of watches have to have”]

 

By thinking of watch production as analogous to rolling cigarettes, this participant was able to develop a mental picture of watch manufacturing. This gave her a basis for asking further questions about watch production and enabled her to grasp quickly the essentials of the field she had to work in.

 

This kind of analogous thinking is possible only if we consider things in the abstract. We must understand that making watches is only one narrow form of the broad concept of production process. And what all production processes have in common is using energy to put different materials together according to a set plan.

77

Thinking by analogy may seem, after the fact, a rather primitive and obvious step, but many of our participants never made use of it and therefore bogged down hopelessly in concrete situations. The prerequisite for making connections between watch production and rolling cigarettes—and therefore for thinking of useful questions to ask—is an abstract understanding of watch manufacturing as a production process.

79

To deal effectively with a system… [w]e need to know:

§         …. how the causal relationships among the variables in a system work together in that system.

§         … how the individual components of a system fit into a hierarchy of broad and narrow concepts. This can help us fill in by analogy those parts of a structure unfamiliar to us.

§         ….component parts into which the elements of a system can be broken and the larger complexes in which those elements are embedded. We need to know this so that we can propose hypotheses about previously unrecognized interactions between variables.

 

How do we acquire knowledge about the structure of a system? One important method is analogy, as illustrated above.

79

Another method…is to observe the changes that variables undergo over time. If we observe in a given ecosystem that an increase in animal population A is followed by an increase in animal population B and if we then observe a decline in population A followed by a decline in population B, we can assume that animals of type B feed on animals of type A and that the two populations form a predator-prey system.

 

The observation of covariations, between which there may be a time lag, is one way of acquiring structural knowledge, and all it requires is the collection and integrating of data over time. [Plotting the data so that change over time is visible is a key method; see the sketch after this quotation.]

 

Even after we know enough about a system to understand its structure, we must continue to gather information. We need to know about the system’s present status so as to predict future developments and assess the effects of past actions. These requirements make information essential for planning.[bold added]
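[Editor's illustration, not from Dörner's book: the population counts below are invented. The Python sketch shows the kind of "collection and integrating of data over time" the quotation describes: the two series are compared at several time shifts, and the shift with the strongest covariation suggests the time lag between the variables.]

# Hypothetical seasonal counts: the predator numbers echo the prey numbers two seasons later.
prey     = [40, 55, 70, 60, 45, 35, 30, 40, 55, 70, 60, 45]
predator = [10, 12, 15, 20, 25, 22, 16, 12, 11, 14, 19, 24]

def correlation(xs, ys):
    # Plain Pearson correlation, written out so no external library is needed.
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Compare predator counts against prey counts shifted by 0, 1, 2, and 3 seasons.
for lag in range(4):
    r = correlation(prey[:len(prey) - lag], predator[lag:])
    print(lag, round(r, 2))

[With this made-up data the comparison at a lag of two seasons stands out, which is the pattern Dörner describes: population B follows population A with a delay, suggesting a predator-prey relationship.]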

86-87

One basic error accounts for all catastrophes: none of the participants realized that they were dealing with a system in which, though not every element interacted with every other, many elements interacted with many others.… They did not take into account the side effects and repercussions of certain measures. They dealt with the entire system, not as a system but as a bundle of independent minisystems. And dealing with systems in this way breeds trouble: if we do not concern ourselves with the problems we do not have, we soon have them.

87-88

At the moment, we don’t have other problems, so why think about them? Or, to put it better still, why think that we should think about them?

 

Another reason is informational overload. Participants are given a lot of information, and to solve their problems, they have to gather a lot of data and address many aspects of the situation. There just doesn’t seem to be time enough to worry about problems that are not immediately pressing.

88

“It’s the Environment” [Title of section]

To deal with a system as if it were a bundle of unrelated individual systems is, on the one hand, the method that saves the most cognitive energy. On the other hand, it is the method that guarantees neglect of side effects and repercussions and therefore guarantees failure.

89-91

[Shows a visual that illustrates the hypothesis that customer satisfaction solves all problems. The visual shows a “centralistic organization” and places “Satisfaction of the population” at the center.]

 

[Such a hypothesis] has the virtue of making it easy to deal with the system. And in its individual assessments this hypothesis is not wrong. But in its overall assessment it is wrong because it is incomplete. It does not take into account the manifold feedback loops in the Greenvale system or the fact that in this system, and in many others, we are dealing not with a star-shaped network of interdependencies but rather with a network that more closely resembles an innerspring mattress. If we pull on one spring, we will move all the others, some more, some less. And if we press down on another, the same thing happens. There is no single central point. Not every point is a central one, but many are….

 

The point is that the satisfaction of the citizenry is in reality embedded in a network of positive- and negative-feedback loops, and knowing what will maintain citizen satisfaction at a high level in the long run is no simple matter.

 

The reductive hypothesis shown in figure 16 allows complicated considerations to be avoided. And that is why reductive hypotheses are so popular….

91-92

The person who feels moved for one reason or another to study the nature of our world or at least of our society and who concludes that we live in an “automobile society” or a “service society” or an “information society” or an “atomic society” or a “leisure society” proffers a reductive hypothesis that invites us to extrapolate a structure from it.

 

The fact that reductive hypotheses provide simplistic explanations for what goes on in the world accounts not only for their popularity but also for their persistence. … [The reality is] an unsurveyable system made up of interacting variables linked together in no immediately obvious hierarchy. Unsurveyability produces uncertainty; uncertainty produces fear. That is probably one reason people cling to reductive hypotheses.…

92

One excellent way to maintain a hypothesis indefinitely is to ignore information that does not conform to it.… We are infatuated with the hypotheses we propose because we assume they give us power over things. We therefore avoid exposing them to the harsh light of real experience, and we prefer to gather only information that supports our hypotheses.

94

[Covers “chair-ness” and illusion.]

94-95

[This is from a section about a participant in the computer simulations who persisted in a solution and could not see it was not applicable to new problems.] The experience of success left an indelible mark. “The promotion of tourism,” this participant believed, “pays off.” But this abstract formulation was an overgeneralization based on only one success story.

 

His promotion of tourism had proved successful only because it had coincided with a favorable constellation in the environment… But our participant did not take note of that constellation of conditions in formulating his abstract concepts….

 

This “deconditionalized” concept—this concept removed from the context of conditions bearing on it—led our participant into disaster.

95

The English psychologist James T. Reason thinks that this kind of error is the result of a general propensity for “similarity matching,” that is, a tendency to respond to similarities more than to differences.

95

The effectiveness of a measure almost always depends on the context within which the measure is pursued. A measure that produces good effects in one situation may do damage in another, and contextual dependencies mean that there are few general rules (rules that remain valid regardless of conditions surrounding them) that we can use to guide our actions. Every situation must be considered afresh.

98

A sensible and effective measure in one set of circumstances can become a dangerous course of action when conditions change. We must keep track of constantly changing conditions and never treat any image we form of a situation as permanent. Everything is in flux, and we must adapt accordingly. The need to adapt to particular circumstances, however, runs counter to our tendency to generalize and form abstract plans of action. We have here an example of how an important element of human intellectual activity can be both useful and harmful. Abstract concepts are useful in organizing and mastering complicated situations. Unfortunately, this advantage tempts us to use generalization and abstraction too freely. Before we apply an abstract concept to a concrete situation, we should submit it to “strategic” scrutiny to decide whether it is appropriate to the context.

99

To the ignorant, the world looks simple. If we pretty much dispense with gathering information, it is easy for us to form a clear picture of reality and come to clear decisions based on that picture.

100

Positive feedback between uncertainty and information gathering may explain why people sometimes deliberately refuse to take in information. It … is said that before his invasion of Poland Hitler deliberately ignored a report that England was serious about coming to the aid of its ally if Germany attacked Poland.

 

New information muddies the picture. Once we finally reach a decision we are relieved to have the uncertainty of decision making behind us. And now somebody turns up and tells us things that call the wisdom of the decision into question again. So we prefer not to listen.

104

We may resort to “horizontal flight,” pulling back into a small, cozy corner of reality where we feel at home, like the Greenvale mayor trained in social work who finally focused all her attention on one troubled child.

 

Or we may resort to “vertical flight,” kicking ourselves free of recalcitrant reality altogether and constructing a more cooperative image of that reality. Operating solely within our own minds, we no longer have to deal with reality but only with what we happen to think about it. We are free to concoct plans and strategies any way we like. The only thing we have to avoid at all cost is reestablishing contact with reality.

107-110

We live and act in a four-dimensional system. In addition to the three dimensions of space, this system includes the fourth dimension of time, which moves in one direction, and that direction is toward the future…. We rarely have trouble dealing with configurations in space. If we’re not entirely sure of what we’re looking at, we can take another look and resolve our uncertainty. We can normally look at forms in space again and again and in this way precisely determine their particular configuration. That is not true of configurations in time. A time configuration is available for examination only in retrospect. …

 

Because we are constantly presented with whole spatial configurations, we readily think in such terms.… Our experience with spatial forms also gives us great intuition about the “missing pieces”….

 

By contrast, we often overlook time configurations and treat successive steps in a temporal development as individual events…. In contrast to the rich set of spatial concepts we can use to understand patterns in space, we seem to rely on only a few mechanisms of prognostication to gain insight into the future.

 

The primary such mechanism is extrapolation from the moment. In other words, those aspects of the present that anger, worry, or delight us the most will play a key role in our predictions of the future….

 

Two factors come together in extrapolations from the moment: [bullets added]

§         first, the limited focus on a notable feature of the present
[Example from page 110: “Fixation” on this “brings with it the danger that too much significance is ascribed to present circumstances.” – A Hong Kong tourist in typhoon season expects “the colony’s imminent watery end.” On the other hand, a resident sees it “as unremarkable in the context of an entire year’s weather.”]

§         and, second, extension of the perceived trend in a more or less linear and “monotone” fashion (that is, without allowing for any change in direction).
[Example from page 110: “Fixation on linear future development may prevent us from anticipating changes in direction and pace.”]

 

Our ultimate concern in this chapter is how people form their ideas of the future. If we can identify the typical difficulties people have in dealing with time and in recognizing temporal patterns, we can suggest ways to overcome these difficulties and to improve temporal intuition.

110, 111-112, 116-117

Children, and many an adult, will be amazed at the answer to the following problem. There is one water lily growing in a pond with a surface area of 130,000 square feet. In early spring, the lily has one pad, and each lily pad has a surface area of one square foot. After a week, the lily has two pads, after the following week, four pads. After sixteen weeks the pond is half covered. How much longer will it take before the whole pond is covered? [The answer: if constant growth, “only one more week.” Dörner covers other examples, including AIDS, and experiments.]

 

A quantity is said to be growing “exponentially” when its value at any time … is its previous value multiplied by a particular number, the same number each time. … In a linear process, a quantity increases by the same amount, not the same multiple, at each step. … A number of psychological experiments have demonstrated that an incapacity to deal with nonlinear time configurations is a general phenomenon.

 

Clearly, … people tend to badly misjudge non-linear [exponential] growth.
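[Editor's note, not from Dörner's book: the lily-pad arithmetic above can be checked directly. The few lines of Python below simply apply the doubling rule from the quotation to the 130,000-square-foot pond.]

pond_area = 130_000          # square feet, from the quotation
pads = 1                     # one pad of one square foot in early spring
weeks = 0
while pads < pond_area / 2:  # grow until the pond is roughly half covered
    pads *= 2                # the lily doubles every week
    weeks += 1
print(weeks, pads)           # 16 weeks, 65,536 square feet: about half the pond
print(weeks + 1, pads * 2)   # one more doubling, 131,072 square feet: the whole pond

[Linear intuition says the second half of the pond should take another sixteen weeks; the doubling rule makes it a single week.]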

126-128

[Dörner covers that experts in a field can make errors in projections, not just laypeople. He notes that with experts “feeling still comes into play,” for example, in the selection of the mathematical model and its parameters. (pp. 126-128) He concludes the section with this statement:]

What I am saying about professional prediction should not be misunderstood, however, as an attack on prognosticators. I do not know how good or bad business or industry predictions are in general. I want only to call attention to the psychological weaknesses [those “feelings”] to which even rational professional prediction is prone.

128

When we have to cope with systems that do not operate in accordance with very simple temporal patterns, we run into major difficulties. (p. 128)  [If you want more detail on these experiments, click here.]

131

[Dörner focuses on the success of a “good participant” who tries a setting and waits to see what happens before changing the setting again.]

Participant 27a always waits a fairly long time before adjusting the regulator, and as a consequence, slowly develops a feel for the proper setting. He gradually lowers the settings and finally succeeds in bringing the storeroom down to the desired temperature.
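[Editor's illustration, not Dörner's storeroom simulation: the Python toy model below invents a system whose temperature responds to the regulator only after a delay of several steps. It is meant only to show why participant 27a's habit of waiting between adjustments pays off, while adjusting at every step reacts to stale readings, piles correction on correction, and “oversteers.”]

from collections import deque

def run(adjust_every, steps=40, target=10.0, delay=5):
    temp, setting = 20.0, 20.0
    pending = deque([setting] * delay)     # settings take `delay` steps to reach the room
    history = []
    for t in range(steps):
        pending.append(setting)
        effective = pending.popleft()      # the setting that is acting on the room right now
        temp += 0.5 * (effective - temp)   # the temperature drifts toward that older setting
        if t % adjust_every == 0:
            setting += target - temp       # correct by the error visible at this moment
        history.append(round(temp, 1))
    return history

print(run(adjust_every=1))    # impatient: corrections pile up and the temperature overshoots
print(run(adjust_every=10))   # patient, like participant 27a: settles near the target of 10

[The numbers and the lag rule here are invented; the shape of the behavior is the point. When a system answers with a time lag, reacting to every momentary reading invites the “oversteering” of pages 30-31, while acting, waiting, and observing lets a usable feel for the proper setting develop.]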

134-135

The difficulties the participants had in coming to a correct understanding of the basically simple laws governing the system emerge clearly in the hypotheses the participants developed about the connection between the temperature and the regulator settings [one of the computer simulations]…. [One of the major “categories” of false thinking is the] “magical” hypothesis based on “local experience.”

136-137

Our participants developed rituals [based on “magical hypotheses” when removed from “frequent and telling feedback.”] [I]n situations where feedback is not frequent and where the intervals between action and feedback are longer, we can expect ritualizations to wax luxuriant.

156-165

[Thinking systemically is possible. Dörner covers a variety of strategies, including “reverse planning,” a method he describes as a “standard method” for “mathematicians and logicians.” “Reverse planning” requires figuring out where you want to be and backing up from there—two complex actions, but hopeful. He suggests how to use such approaches as:

§         “[N]arrowing the problem sector” so you can remedy a smaller problem, with Dörner covering methods from pages 157 to 159, including “culling unsuccessful strategies,” something “particularly effective in helping us overcome deeply entrenched patterns of thought.”

§         Using analogy, including with “time lags”
“In the refrigerated-storeroom experiment, the essential point for participants to grasp was the time lag between an action and the effect produced by that action. A participant who saw an analogy between setting the regulator [for temperature] at a new value and sending bills to customers (‘I do not get my money instantly either’) may not have developed an earthshaking idea, but he did hit on the one idea he needed to solve his problem.” (p. 160)

§         Setting up “’a redundancy of potential command,’ that is, many individuals who are all capable of carrying out leadership tasks within the context of general directives”
Note: “…problem solvers working under stress used the personal pronoun I more…. Does this finding suggest a tendency to resort to centralized regimes in stressful situations?”]

164

[Covers how some hide behind planning activities.]

170-171

… this is an example of how experience does not always make us smart. Experience can also make us dumb.

 

The effects of “Methodism”—the unthinking application of a sequence of actions we have once learned—[can be disastrous]….

 

Methodism is dangerous because a change in some minor detail that does not alter the overall picture of the situation in any appreciable way can make completely different measures necessary to achieve the same goal.

176

[Covers the type of participants who resort to such phrases as “I’ve done xyz action” and “There’s nothing more that I can do.”]

177-179

[P]eople look for and find ways to avoid confronting the negative consequences of their actions. One of these ways is “ballistic behavior.” A cannonball [unlike a rocket] behaves ballistically. Once we have fired it, we have no further influence over it….

 

[W]e would expect that rational people faced with a system they cannot fully understand would seize every chance to learn more about it and therefore behave “nonballistically.” For the most part, however, the experiment participants did not do that. They shot off their decisions like cannonballs and gave hardly a second thought to where those cannonballs might land….

 

If we never look at the consequences of our behavior, we can always maintain the illusion of our competence. If we make a decision to correct a deficiency and then never check on the consequences of that decision, we can believe the deficiency has been corrected. We can turn to new problems. Ballistic behavior has the great advantage of relieving us of all accountability.

193-

Geniuses are geniuses by birth, whereas the wise gain their wisdom through experience. And it seems to me that the ability to deal with problems in the most appropriate way is the hallmark of wisdom rather than genius.

 

If that is so, then it must be possible both to teach and to learn how to think in complex situations. Some of the results presented in this book show that people can respond to circumstances and learn to deal with specific areas of reality….

 

Rather simple methods can, however, improve our ability to think. [Dörner describes several strategies for education and his “plea” for changes. The final quotation reveals his fundamental hopefulness.]

195-196

[I]n the real world, the consequences of our mistakes are slow in developing and may occur far from where we took action. After a long delay or at a great distance, we may not even recognize them as the results of our behavior. [bold added]

199

What matters is not, I think, development of the neglected right half of the brain, not the liberation of some mysterious creative potential, and not the mobilization of that fallow 90 percent of our mental capacity. There is only one thing that does in fact matter, and that is the development of our common sense.

 

If You Want More Detail on Pages 128-136: What Do We Do When We Must Cope with More Than “Very Simple Temporal Patterns”?

Dörner provides extensive examples of errors and details when people must deal with complex patterns in time.

When we have to cope with systems that do not operate in accordance with very simple temporal patterns, we run into major difficulties. (p. 128)

 

[Dörner uses an experiment in which participants have the challenge of needing to save food supplies by adjusting a regulator that behaves like a “residential thermostat that needs a certain amount of time before it brings a room to a constant temperature.” They had instructions for the thermostat, and those instructions were accurate. The experiment includes recordings of the participants “thinking out loud during the experiment.” (pp. 128-134) He identifies the hypotheses in “three categories”:]

§         The first and largest category consists of “magical” hypotheses. The participants say, for example, “Twenty-eight is a good number”…. “Odd numbers are good.” “You shouldn’t use multiples of ten”….
Magical hypotheses are probably the result of overgeneralizations on the basis of local experience. [The participants tried a number and the temperature changed.] Given the nature of the system, this intervention probably had little bearing on the effect observed. But the participant is pleased. He notes the “connection” between his setting and the temperature change in the right direction, and he generalizes from this. (p. 134)

§         The hypotheses in the second category look something like this: “Increments of five and ten have different effects.” … (p. 135) [Dörner notes this category results in “rituals.”] All we have to do is execute the rituals correctly. At this point, our actions are almost completely divorced from external conditions. We no longer pay any attention to what is happening in the outside world. All that matters is the ritual…. (p. 136)

§         [T]he third mode: “You have to set the regulator high to lower the temperature.” “High settings produce low temperatures….” (p. 136) The participants who voiced these hypotheses no longer trusted the instructions or the experiment director but suspected instead that they were the victims of some malevolent deception. (p. 136)

 

[To summarize, Dörner notes that these participants had feedback with “short lag times.”] This means that the tendencies we observed in the storeroom experiment will be much more pronounced in real situations. In the real world, people tend even more to generalize from local experience, to ritualize, and to believe that no rationally comprehensible principle is at work and that they are the dupes of some mean-spirited practical joke. (p. 137)

For information or problems with this link, please email using the email address below.

WCJC Department:

History – Dr. Bibus

Contact Information:

281.239.1577 or cjb_classes@yahoo.com

Last Updated:

2012 06/04

WCJC Home:

http://www.wcjc.edu/