
Thinking in Systems – Summary and Insights

Below are some of my key takeaways from reading the book, Thinking in Systems, by Donella H. Meadows. If you are interested in more detailed notes from this book, they are available here.

Thinking In Systems

Summary:

A brief synopsis of the book is reprinted below from Amazon:

Thinking in Systems is a concise and crucial book offering insight for problem solving on scales ranging from the personal to the global. Edited by the Sustainability Institute’s Diana Wright, this essential primer brings systems thinking out of the realm of computers and equations and into the tangible world, showing readers how to develop the systems-thinking skills that thought leaders across the globe consider critical for 21st-century life.

Some of the biggest problems facing the world―war, hunger, poverty, and environmental degradation―are essentially system failures. They cannot be solved by fixing one piece in isolation from the others, because even seemingly minor details have enormous power to undermine the best efforts of too-narrow thinking.

While readers will learn the conceptual tools and methods of systems thinking, the heart of the book is grander than methodology. Donella Meadows was known as much for nurturing positive outcomes as she was for delving into the science behind global dilemmas. She reminds readers to pay attention to what is important, not just what is quantifiable, to stay humble, and to stay a learner.

What is a System?

The author defines a system as “an interconnected set of elements that is coherently organized in a way that achieves something.” A system is composed of elements, interconnections, and a function or purpose. The elements that compose a system can be tangible or intangible. The interconnections are the relationships that hold the elements together. The purpose of a system is expressed through the operation of the system. Purposes are deduced from behavior, not from rhetoric or stated goals.

Systems produce their own pattern of behavior over time. Systems thinking allows us to model a system, observe it, and understand the relationship between the system’s structure and behavior. System behavior reveals itself as a series of events over time. We need to be able to see how the events accumulate into dynamic patterns of behavior so that we can have behavior-level understanding as opposed to shallow event-level understanding. Long-term behavior provides clues to the underlying system structure, and structure is the key to understanding not just what is happening but why.

All Models Are Wrong, But Some Are Useful

Everything we think we know about the world is a model. Our models usually have a strong congruence with the world, but they fall short of representing the world fully. The value of a model depends not on whether its driving scenarios are realistic (since no one can know that for sure) but on whether it responds with a realistic pattern of behavior. Asking whether the driving factors are plausible, and whether the system would actually respond the way the model does, helps us assess that value.

Since our models are inherently inaccurate, we often are surprised when events in the real world don’t match our expectations. There is no single, legitimate boundary to draw around a system. The world is a continuum. Where to draw a boundary around a system depends on the purpose of the discussion. We have to invent boundaries for clarity and sanity; and boundaries can produce problems when we forget that we’ve artificially created them. Modeling complex systems is challenging. Complex behaviors of systems often arise as the relative strengths of feedback loops shift, causing first one loop and then another to dominate behavior. Many relationships in systems are nonlinear.
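One way to see shifting loop dominance is with a tiny simulation. The sketch below uses a logistic growth model, which is my own illustrative choice rather than an example from the book: one loop dominates while the stock is small, and another takes over as the stock approaches a limit.

```python
# Illustrative sketch of shifting loop dominance (the model choice and numbers are
# my own assumptions, not from the book). Early on, growth compounds and looks
# exponential; as the stock nears the carrying capacity, the limiting loop
# dominates and growth levels off.

def logistic_growth(stock, growth_rate, capacity, steps):
    history = [stock]
    for _ in range(steps):
        # The relationship is nonlinear: growth depends on the stock *and* on
        # how much room remains below the capacity.
        stock += growth_rate * stock * (1 - stock / capacity)
        history.append(round(stock))
    return history

print(logistic_growth(stock=10, growth_rate=0.5, capacity=1000, steps=25))
```

Plotting the output gives the familiar S-curve: nearly exponential at first, then flattening as the limiting loop gains strength.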

The Language of Systems

The author talks about how written language is insufficient to describe systems.

Words and sentences must, by necessity, come only one at a time in linear, logical order. Systems happen all at once. They are connected not just in one direction, but in many directions simultaneously.

Pictures are better than words for systems. You can see all the parts of a picture at once. This reminded me of Scott Berkun’s preference for affinity diagrams over lists in his book, Making Things Happen. He described how lists promote linear thinking and become hard to manage at scale, while affinity diagrams are more powerful.

They help people see ideas as fluid and tangible things that can be moved around and easily reorganized. This fluidity helps people to question their assumptions, see new perspectives, and follow other people’s thoughts.

Systems theory developed tools to model systems. The models consist of stocks, flows, and feedback loops. A stock is the foundation of any system. Stocks are the elements of the system that you can see, feel, count, or measure at any given time. A flow is the material or information that enters or leaves a stock over a period of time. A feedback loop is formed when changes in a stock affect the flows into or out of that same stock. There are balancing feedback loops and reinforcing feedback loops.
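A minimal simulation helps make this vocabulary concrete. The sketch below steps a single stock forward under a constant inflow and outflow; the function name and values are my own assumptions, not from the book.

```python
# A stock accumulates whatever its flows add or drain over time:
# stock(t+1) = stock(t) + inflow - outflow.
# (All names and values here are illustrative assumptions.)

def simulate_stock(stock, inflow, outflow, steps):
    history = [stock]
    for _ in range(steps):
        stock += inflow - outflow  # flows change the stock; the stock is the accumulation
        history.append(stock)
    return history

# Inflow exceeds outflow, so the stock rises steadily; reverse them and it drains.
print(simulate_stock(stock=50, inflow=5, outflow=3, steps=10))
```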

Balancing feedback loops are goal-seeking or stability-seeking. Each tries to keep a stock at a given value or within a range of values. A reinforcing feedback loop enhances whatever direction of change is imposed on it. Reinforcing loops are found wherever a system element has the ability to reproduce itself or to grow as a constant fraction of itself. They are self-enhancing, leading to exponential growth or to runaway collapses over time.
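Under the same simple stepping scheme, the two loop types produce very different trajectories. In this sketch (again, the parameter names and values are my own assumptions), the reinforcing loop adds a constant fraction of the stock each step, while the balancing loop closes a fraction of the gap to a goal each step.

```python
# Reinforcing loop: the stock grows by a constant fraction of itself, so it
# compounds into exponential growth (or runaway decline if the fraction is negative).
def reinforcing_loop(stock, growth_fraction, steps):
    history = [stock]
    for _ in range(steps):
        stock += growth_fraction * stock
        history.append(round(stock, 1))
    return history

# Balancing loop: each step moves the stock a fraction of the way toward a goal,
# so the gap shrinks and the stock settles at the goal.
def balancing_loop(stock, goal, adjustment_fraction, steps):
    history = [stock]
    for _ in range(steps):
        stock += adjustment_fraction * (goal - stock)
        history.append(round(stock, 1))
    return history

print(reinforcing_loop(stock=100, growth_fraction=0.1, steps=10))
print(balancing_loop(stock=100, goal=70, adjustment_fraction=0.3, steps=10))
```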

Delays in feedback loops are critical determinants of system behavior. They are common causes of oscillations. A delay in a balancing feedback loop makes a system likely to oscillate.
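Adding a perception delay to the balancing loop above shows why delays produce oscillation. In this sketch (the delay length and adjustment fraction are my own assumptions), each correction is based on the stock level from several steps ago, so the loop keeps acting on stale information and overshoots.

```python
# Balancing loop with a delayed reading of the stock (values are illustrative
# assumptions). Because corrections are based on where the stock *was*, the
# loop overshoots the goal and the stock oscillates before settling.
def delayed_balancing_loop(stock, goal, adjustment_fraction, delay, steps):
    history = [stock] * (delay + 1)
    for _ in range(steps):
        perceived = history[-(delay + 1)]  # stale reading from `delay` steps ago
        stock += adjustment_fraction * (goal - perceived)
        history.append(round(stock, 1))
    return history[delay:]

# With a 3-step delay, the stock dips well below the goal of 70, climbs back
# above it, and oscillates with shrinking swings.
print(delayed_balancing_loop(stock=100, goal=70, adjustment_fraction=0.3, delay=3, steps=25))
```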

Developing Highly Functional Systems

Resilience, self-organization, and hierarchy – these are the properties of highly functional systems.

Resilience is a measure of a system’s ability to survive and persist within a variable environment, including its ability to restore or repair itself.

The capacity of a system to make its own structure more complex is called self-organization. Self-organization produces heterogeneity and unpredictability. It is likely to come up with whole new structures, whole new ways of doing things.

Hierarchical systems evolve from the bottom up. The purpose of the upper layers of the hierarchy is to serve the purposes of the lower layers. Complex systems can evolve from simple systems only if there are stable intermediate forms. The resulting complex forms will naturally be hierarchic.

To be a highly functional system, hierarchy must balance the welfare, freedoms, and responsibilities of the subsystems and total design – there must be enough central control to achieve coordination toward the large-system goal, and enough autonomy to keep all subsystems flourishing, functioning, and self-organizing. Hierarchies are brilliant systems inventions, not only because they give a system stability and resilience, but also because they reduce the amount of information that any part of the system has to keep track of.

People often mismanage systems, prioritizing productivity or stability. Systems also need to be managed for resilience, and that may not be easy without a whole-system view. Like resilience, self-organization is often sacrificed for purposes of short-term productivity and stability. Self-organization requires freedom, experimentation, and a certain amount of disorder. This can be intimidating to people and threaten existing power structures. Insistence on a single culture shuts down learning and cuts back resilience. Any system that can’t self-evolve won’t succeed in our highly variable environment.

At the end of the day, the objective of a business is to grow and increase market share so that its operations become ever more shielded from uncertainty.

Adjusting System Behavior

The book describes a variety of leverage points that can be used to influence the behavior of a system. As a Product Manager, I found these concepts helpful for developing strategies to enable a cross-functional team to achieve our desired outcomes more efficiently.

The arrangement of the stocks and flows of a system can have a large effect on how the system operates. The only way to fix a system that is laid out poorly is to rebuild it, if you can, but this is often the slowest and most expensive kind of change to make in a system. Therefore, the leverage is in understanding its limitations and bottlenecks, using it with maximum efficiency, and refraining from fluctuations or expansions that strain its capacity.

Product managers don’t often have formal authority over multiple departments, so our ability to restructure the organization is limited. Some working relationships with other business functions will be more effective than others. Recognizing this, you can determine where you can delegate and where you may need to spend more time and energy coordinating on joint efforts.

More leverage can be found in the information flows of a system. Information flows describe who does and does not have access to information. Missing information flows is one of the most common causes of system malfunction. There is a systematic tendency on the part of human beings to avoid accountability for their own decisions. That’s why there are so many missing feedback loops.

As Product Managers, we don’t build the product directly, but we do provide a wide context to our product team to enable them to make impactful product decisions. As Marine General Jim Mattis says, “What do I know? Who needs to know it? Have I told them?” We’re working on complex problems that require a variety of skill sets to tackle. We can’t be in every conversation or decision, and that wouldn’t be scalable anyway. Sharing information with the right people at the right time can ensure the team stays aligned as we evolve our understanding of the problem or solution we are focused on.

Lastly, the author states that one of the most powerful ways to influence the behavior of a system is through its purpose or goal. That’s because the goal is the direction setter of the system. Systems have a terrible tendency to produce exactly and only what you ask them to produce. Be careful what you ask them to produce.

We often set goals using frameworks such as OKRs, which define goals as specific, measurable outcomes we hope to achieve with the product. Measurable goals make it easy to define what success looks like and to continually track our progress. One failure mode is that teams can over-optimize for one metric to the detriment of other important metrics, resulting in unexpected consequences. One tactic is to pair a goal metric with one or more counter metrics: metrics that might be negatively impacted by achieving your goal metric. Monitoring the counter metrics can help prevent you from getting caught by unintended negative consequences of over-optimizing for the goal metric.
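A lightweight way to operationalize this is to review the goal metric and its counter metrics together. The sketch below is purely hypothetical; the metric names, targets, and guardrail thresholds are invented for illustration.

```python
# Hypothetical sketch of pairing a goal metric with counter metrics (all metric
# names, targets, and guardrails here are invented): only celebrate the goal
# if none of the counter metrics has degraded past its guardrail.

goal_metric = {"name": "weekly_active_users", "target": 50_000, "current": 53_200}

counter_metrics = [
    {"name": "support_tickets_per_user", "guardrail": 0.05, "current": 0.041},
    {"name": "p95_page_load_seconds", "guardrail": 3.0, "current": 3.4},
]

goal_met = goal_metric["current"] >= goal_metric["target"]
breaches = [m["name"] for m in counter_metrics if m["current"] > m["guardrail"]]

if goal_met and not breaches:
    print(f"{goal_metric['name']} hit its target with guardrails intact.")
elif goal_met:
    print(f"Goal met, but counter metrics breached: {', '.join(breaches)}")
else:
    print("Goal not yet met; keep iterating.")
```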

Bounded Rationality

Bounded rationality is defined as “the logic that leads to decisions or actions that make sense within one part of a system but are not reasonable within a broader context or when seen as part of the wider system.” Said another way, people make quite reasonable decisions based on the information they have. But they don’t have perfect information, especially about more distant parts of the system. The world presents us with multiple examples of people acting rationally in their short-term best interests and producing aggregate results that no one likes.

This is one of the key assumptions in modern economic theory. Adam Smith assumes first that homo economicus acts with perfect optimality on complete information, and second that when many of the species homo economicus do that, their actions add up to the best possible outcomes for everybody. Behavioral economists, such as Daniel Kahneman, would argue that people don’t always make the best decisions. Therefore, the government or other institutions should nudge people to make decisions that serve their own long-term interests or the interests of the group. They should also be protected from others who may deliberately exploit their weaknesses. As Donella Meadows notes, we need to step outside the limited information that can be seen from any single place in the system and get an overview. The state or other institutions can have this broader context and the ability to redesign the system to improve the information, incentives, disincentives, goals, stresses, and constraints that affect specific actors.
