Eric Barnhart

Systems of Systems & Emergent Behavior -Important Lessons

An Educational Chat With Grok

The other day I was conversing online with another system engineer who brought up emergent behavior as a defining characteristic of a system. I found this unusual, since the intent of systems engineering, and of good engineering practice in general, is to avoid emergent behavior entirely. Unfortunately we were having difficulty establishing a common vocabulary, so I decided to ask an AI assistant. In this case, I used Elon Musk’s Grok, which I have come to like. If I had researched this on my own, it would have taken days, and writing an article about it longer still. Engineers can read the chat I conducted with Grok below; I hope you find it useful.

Note that I used a conversational style, which made bringing this article together much easier. I think it took more time to assemble and format this article than to have the chat.

Let’s start simple to make sure we’re on the same page…

Grok: A system is a set of interconnected components that work together to achieve a common purpose or function. These components can be physical, biological, social, or abstract, and they interact within a defined boundary, often influenced by inputs and producing outputs. Systems can range from simple (e.g., a mechanical device) to complex (e.g., an ecosystem or organization), and they often exhibit properties like feedback, adaptation, or emergence that arise from the interactions of their parts. (emphasis mine)

(Note: This chat occurred in two sessions at different times. You’ll see me ask the next question again later.)

Grok: In the context of systems, emergence refers to the phenomenon where complex behaviors, properties, or patterns arise from the interactions of relatively simple components within the system, which could not be predicted or fully explained by analyzing the individual parts alone. These emergent properties are often novel and unique to the system as a whole.

For example:

•  In a biological system like an ant colony, individual ants follow simple rules, but their collective interactions lead to complex behaviors like efficient foraging or nest-building, which are emergent properties of the colony.

•  In a social system, the culture or economy of a society emerges from the interactions of individuals, even though no single person fully controls or predicts the outcome.

Emergence highlights how the whole system can exhibit characteristics greater than or different from the sum of its parts.

Grok: A System of Systems (SoS) is a collection of independent, self-contained systems that interoperate to achieve a higher-level purpose, distinct from a single, unified system. Identifying an SoS versus a standalone system involves evaluating specific characteristics and applying tests based on established criteria. Below are key tests and criteria to differentiate an SoS from a single system, grounded in systems engineering principles:

Key Characteristics of a System of Systems

Based on literature, such as Maier (1998) and subsequent SoS frameworks, an SoS typically exhibits:

1.  Operational Independence: Each constituent system can operate independently and fulfill its own purpose outside the SoS.

2.  Managerial Independence: Constituent systems are managed and governed independently, often by different organizations or stakeholders.

3.  Geographical Distribution: Systems are often physically or logically distributed, interacting across boundaries.

4.  Emergent Behavior: The SoS produces outcomes or capabilities that emerge from the collaboration of systems, not achievable by any single system alone.

5.  Evolutionary Development: The SoS evolves over time, with systems added, removed, or modified dynamically.

Tests to Identify a System of Systems

These tests assess whether a given entity qualifies as an SoS rather than a monolithic system:

1.  Independence Test:

•  Question: Can the constituent systems operate independently and still achieve their own objectives outside the context of the larger system?

•  Method: Evaluate whether each subsystem (e.g., a satellite, a hospital, a software application) can function standalone without relying on the SoS. For example, in a smart city SoS, traffic management, power grid, and communication networks can operate independently.

•  SoS Indicator: If subsystems maintain operational and managerial autonomy, it’s likely an SoS. In contrast, a single system (e.g., a car engine) has tightly coupled components that cannot function independently.

2.  Interoperability Test:

•  Question: Do the systems collaborate through defined interfaces or standards to achieve a collective goal?

•  Method: Check for standardized protocols, data exchanges, or communication frameworks enabling interaction (e.g., APIs in software SoS, or military communication protocols in defense SoS). For instance, in an air traffic control SoS, radar, aircraft, and control towers interoperate via standardized aviation protocols.

•  SoS Indicator: Loose coupling via interfaces, as opposed to tight integration in a single system (e.g., a single software application with internal modules).

3.  Emergent Capability Test:

•  Question: Does the collective system produce capabilities or behaviors that no individual system can achieve alone?

•  Method: Identify outcomes unique to the SoS. For example, a disaster response SoS (comprising hospitals, emergency services, and communication networks) enables coordinated recovery, which no single system could accomplish.

•  SoS Indicator: Presence of emergent properties (e.g., global internet functionality from networked systems) versus predictable outputs from a single system’s components.

4.  Governance and Ownership Test:

•  Question: Are the constituent systems managed or owned by different entities with potentially conflicting objectives?

•  Method: Analyze the organizational structure. For example, a global supply chain SoS involves independent companies (manufacturers, logistics, retailers) with separate management. Contrast this with a single system like a factory, managed under one authority.

•  SoS Indicator: Diverse governance and stakeholder conflicts suggest an SoS.

5.  Evolutionary Dynamics Test:

•  Question: Does the system evolve by adding, removing, or modifying constituent systems over time?

•  Method: Examine whether the system is designed to accommodate changes dynamically. For example, the internet as an SoS evolves with new servers, protocols, or devices joining or leaving, unlike a single system like a spacecraft with fixed components.

•  SoS Indicator: Dynamic reconfiguration and adaptability point to an SoS.

6.  Geographical or Logical Distribution Test:

•  Question: Are the constituent systems physically or logically dispersed, requiring coordination across boundaries?

•  Method: Assess the spatial or operational distribution. For example, a military SoS (ships, aircraft, satellites) operates across vast distances, unlike a single system like a tank with co-located components.

•  SoS Indicator: Significant distribution requiring networked coordination supports an SoS classification.

Practical Application

To apply these tests:

•  Map the System: Identify the components, their interactions, and boundaries.

•  Evaluate Against Criteria: Use the tests above systematically, documenting evidence for each characteristic.

•  Compare to Single System: A single system typically has tightly coupled components, unified management, and a fixed structure, lacking the independence or emergent scale of an SoS.

Examples for Clarity

•  SoS Example: The global air transportation system is an SoS. Airlines, airports, air traffic control, and weather systems are independent, interoperate via standards, and produce emergent outcomes like global travel networks.

•  Single System Example: An aircraft is a single system. Its components (engines, avionics, wings) are tightly integrated, managed by one entity (the manufacturer or operator), and lack independent operation.

Notes

•  Gray Areas: Some systems may exhibit partial SoS characteristics (e.g., large enterprise software with semi-independent modules). In such cases, the degree of independence and emergent behavior determines the classification.

•  Context Matters: The classification depends on the level of analysis. A system within an SoS (e.g., a hospital in a healthcare SoS) may itself be a complex system when viewed in isolation.

By systematically applying these tests, you can reliably distinguish a System of Systems from a standalone system based on independence, interoperability, emergent behavior, governance, evolution, and distribution. If you have a specific example in mind, I can help apply these tests to it!
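(As an aside, Grok’s six tests lend themselves to a simple scoring checklist. Here is a minimal Python sketch of that idea. The criteria names follow the chat above, but the scoring scheme, the threshold, and the example answers are purely my own illustrative assumptions, not part of any standard.)

# A minimal, illustrative checklist for Maier-style SoS tests.
# The criteria follow the chat above; the scoring scheme, threshold, and
# example answers are hypothetical and for illustration only.

SOS_TESTS = [
    "operational_independence",   # constituents fulfill their own purpose alone
    "managerial_independence",    # separately owned and governed
    "interoperability",           # collaborate through defined interfaces
    "emergent_capability",        # collective outcome no constituent achieves alone
    "evolutionary_development",   # constituents added, removed, or modified over time
    "distribution",               # geographically or logically dispersed
]

def classify(answers, threshold=5):
    """Return a rough classification from yes/no answers to the six tests."""
    score = sum(bool(answers.get(test, False)) for test in SOS_TESTS)
    if score >= threshold:
        return "likely a System of Systems"
    if score >= 3:
        return "gray area - partial SoS characteristics"
    return "likely a single (monolithic) system"

# Hypothetical example: global air transportation checks every box.
air_transport = {test: True for test in SOS_TESTS}
print(classify(air_transport))         # likely a System of Systems

# Hypothetical example: a single aircraft is tightly integrated.
aircraft = {"interoperability": True}
print(classify(aircraft))              # likely a single (monolithic) system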

Grok: The information provided on identifying a System of Systems (SoS) versus a single system is based on established systems engineering principles, particularly drawing from foundational work in the field. The primary concepts and criteria stem from:

•  Maier, M. W. (1998): His seminal paper, “Architecting Principles for Systems-of-Systems,” published in Systems Engineering (Vol. 1, No. 4), introduced key characteristics like operational and managerial independence, emergent behavior, and evolutionary development. These are widely used in SoS literature.

•  Systems Engineering Frameworks: Concepts from sources like the INCOSE Systems Engineering Handbook and publications by the U.S. Department of Defense (e.g., Systems of Systems Engineering Guide, 2008) refine SoS definitions, emphasizing interoperability, governance, and distribution.

•  Academic and Industry Literature: Additional insights come from peer-reviewed articles and books on systems engineering, such as those by Jamshidi (2009) in System of Systems Engineering: Innovations for the 21st Century, which expand on practical applications and tests.

I synthesized these sources, combining their criteria into a structured set of tests for clarity and applicability. No single source was quoted verbatim; instead, I generalized the knowledge to address your question comprehensively. If you need a deeper dive into a specific source or a particular example analyzed, let me know!

Grok: In the context of systems, emergent refers to properties, behaviors, or patterns that arise from the interactions of a system’s components, which are not predictable or fully explainable by analyzing the individual parts in isolation. These emergent characteristics are unique to the system as a whole and often appear only when the components work together.

For example:

  • In a flock of birds, the intricate, coordinated patterns of flight (like V-formations) are emergent, arising from simple rules each bird follows, such as maintaining distance from neighbors.
  • In a social system, cultural norms or market trends can emerge from individual interactions without any single person dictating the outcome.

Emergent phenomena reflect the idea that the whole is greater than or different from the sum of its parts, often revealing unexpected complexity or functionality.

Grok: No, a behavior that is expected or planned for is generally not considered emergent in the context of systems theory. Emergent behaviors or properties are characterized by being unpredictable or unanticipated from the properties and interactions of the individual components alone, even if the system’s design allows for their possibility. Here’s a deeper look:

Why Expected or Planned Behaviors Are Not Emergent

  • Definition of Emergence: Emergent behaviors arise unexpectedly from the interactions of a system’s parts, producing outcomes that are not explicitly designed or easily predictable from the individual components’ properties. If a behavior is planned or expected, it implies that the system’s designers or analysts foresaw and accounted for it, which negates the “unpredictable” aspect of emergence.
  • Designed vs. Emergent: In systems engineering, behaviors that are intentionally designed or anticipated are part of the system’s intended functionality. For example, in a car, the ability to accelerate when the gas pedal is pressed is a planned behavior, not emergent, because it results directly from the designed interactions of the engine, transmission, and other components.

Nuances and Gray Areas

  • Unintended Consequences of Planned Interactions: Sometimes, a system is designed with certain interactions in mind, but those interactions lead to unexpected outcomes. These outcomes can still be emergent. For instance, in a software system designed for user interaction, a specific pattern of user behavior (e.g., viral content spreading on a social media platform) might not have been explicitly planned but emerges from the designed interaction rules.
  • Degrees of Predictability: In complex systems, such as a System of Systems (SoS), some behaviors may be partially anticipated but not fully understood or controlled. If the behavior’s full scope or impact is unforeseen, it might still qualify as emergent. For example, in a smart city SoS, planners might expect traffic flow improvements from integrating traffic lights and navigation apps, but the specific patterns of reduced congestion or unexpected bottlenecks could be emergent.

Examples for Clarity

  • Non-Emergent (Planned/Expected): In an assembly line, the production of a finished product (e.g., a car) is a planned outcome of the system’s design. Each component (robots, workers, parts) is engineered to produce this result, so the outcome is not emergent.
  • Emergent (Unplanned/Unexpected): In an ecosystem, the introduction of a new species might lead to unexpected population dynamics or environmental changes (e.g., a predator altering the food chain). These outcomes arise from component interactions but were not explicitly planned or fully predictable.

Key Test for Emergence

To determine if a behavior is emergent, ask:

  • Is the behavior explicitly designed or fully predictable from the individual components’ properties and rules? If yes, it’s not emergent.
  • Does the behavior arise unexpectedly from the interactions of components, beyond what was anticipated? If yes, it’s likely emergent.

Conclusion

If a behavior is explicitly planned or expected as part of the system’s design, it does not qualify as emergent because it lacks the element of surprise or novelty that defines emergence. However, if planned interactions lead to unforeseen or partially unanticipated outcomes, those could still be considered emergent. If you have a specific example in mind, I can help analyze whether its behaviors are emergent or not!

(I didn’t use the correct term, 737 MAX, in my question, but Grok gets it right.)

Grok: The debate about whether emergent behavior is a characteristic of designed systems, and whether it’s inherently good or bad, is a nuanced one, especially when considering examples like the Boeing 737 MAX’s Maneuvering Characteristics Augmentation System (MCAS). Let’s unpack this, addressing the nature of emergent behavior in designed systems and the MCAS case, while keeping the response concise yet thorough.

Is Emergent Behavior a Characteristic of Designed Systems?

Emergent behavior, as defined earlier, refers to properties or behaviors that arise from the interactions of a system’s components, which are not fully predictable or explicitly intended from the individual parts alone. In designed systems—those intentionally engineered for specific purposes—emergent behavior can indeed occur, but whether it’s a defining characteristic depends on the system’s complexity and design approach.

  • Emergence in Designed Systems:
    • Complex Designed Systems: In complex systems like software platforms, autonomous vehicles, or large-scale infrastructure (e.g., smart grids), emergent behaviors often arise due to the intricate interactions of components, users, or external factors. For example, in a designed social media platform, viral content trends or echo chambers can emerge from user interactions, even if not explicitly programmed.
    • Simpler Designed Systems: In less complex systems (e.g., a mechanical clock), behaviors are typically more predictable and tightly controlled, leaving little room for true emergence. Here, outcomes align closely with design intent.
    • System of Systems (SoS): As discussed earlier, SoS (e.g., air traffic control, smart cities) are particularly prone to emergent behavior due to the operational and managerial independence of constituent systems, which can lead to unforeseen interactions.
  • Designed vs. Emergent: Not all behaviors in designed systems are emergent. Many are explicitly planned (e.g., a car’s braking system stopping the vehicle). However, as systems grow in complexity—incorporating more components, dynamic interactions, or external influences—unintended or unpredictable behaviors become more likely. Thus, emergence is not a universal characteristic of all designed systems but is increasingly relevant in complex or adaptive ones.
  • Your Debate: Your opponent’s stance likely hinges on whether they view designed systems as inherently deterministic (no emergence) or acknowledge that complexity introduces unpredictability. Emergence is a characteristic of some designed systems, particularly those with high complexity, loose coupling, or adaptive elements (e.g., AI-driven systems, large-scale networks). However, it’s not a defining trait of all designed systems, especially simpler, tightly controlled ones.

The MCAS Case: Emergent Behavior or Bad Engineering?

The Boeing 737 MAX’s MCAS provides a compelling case to explore whether emergent behavior can be problematic and whether it stems from engineering flaws. Let’s examine it:

  • What Happened with MCAS:
    • MCAS was designed to automatically adjust the aircraft’s pitch to prevent stalls, compensating for the 737 MAX’s new engine placement, which altered its aerodynamics.
    • In two fatal crashes (Lion Air Flight 610 in 2018 and Ethiopian Airlines Flight 302 in 2019), MCAS repeatedly pushed the nose down based on faulty sensor data (from a single Angle of Attack sensor), overriding pilot inputs and leading to catastrophic outcomes.
    • The behavior—unintended nose-down commands—was not what pilots or designers expected under normal operation, contributing to the crashes.
  • Was MCAS’s Behavior Emergent?
    • Arguments for Emergence:
      • The MCAS’s aggressive response to faulty sensor data produced an outcome (persistent nose-down commands) that was not fully anticipated by designers or pilots. This aligns with emergence as an unexpected behavior arising from component interactions (sensor, software, aircraft dynamics).
      • The system’s reliance on a single sensor and lack of robust error-checking allowed an unforeseen failure mode to dominate, which could be seen as an emergent property of the system’s design under specific conditions (e.g., sensor failure).
      • Pilots’ inability to override MCAS easily suggests the system’s behavior transcended its intended design, a hallmark of emergence in complex systems.
    • Arguments Against Emergence:
      • The behavior was arguably predictable with thorough risk analysis. Engineering principles (e.g., Failure Modes and Effects Analysis) could have identified the risk of a single-point failure in the sensor input, suggesting the issue was foreseeable, not emergent.
      • MCAS was designed to act in a specific way (push the nose down based on sensor data), and it did so, albeit with catastrophic results due to flawed inputs. This could be seen as a failure of execution rather than an unpredictable outcome.
      • The lack of redundancy, inadequate pilot training, and insufficient documentation reflect design flaws rather than a truly emergent phenomenon, which typically involves complex, non-linear interactions beyond the scope of design intent.
  • Bad Engineering or Emergent Behavior?
    • Bad Engineering as the Root Cause: The MCAS issues are widely attributed to poor engineering practices, including:
      • Single Point of Failure: Relying on one Angle of Attack sensor without redundancy violated standard aviation safety principles.
      • Inadequate Testing: Failure to simulate or anticipate sensor malfunctions in real-world conditions.
      • Poor Documentation and Training: Pilots were not adequately informed about MCAS’s operation or how to override it, limiting their ability to respond.
      • Regulatory Oversights: Boeing’s cost-driven decisions and incomplete safety assessments exacerbated the risks.
      • These factors suggest the problematic behavior was less about unpredictable emergence and more about preventable design flaws.
    • Emergence as a Contributing Factor: While the root cause was bad engineering, the way MCAS interacted with the aircraft and pilots in unexpected failure scenarios (e.g., relentless nose-down commands) can be seen as a limited form of emergent behavior. The system’s interaction with faulty data and pilot inputs created a dynamic that was not fully anticipated, even if it stemmed from design oversights.
  • Is Emergence a Bad Characteristic?
    • Not Inherently Bad: Emergent behaviors can be positive or negative. In designed systems, beneficial emergence (e.g., efficient traffic flow in a smart city) is often a goal. Negative emergence, like MCAS’s behavior, highlights risks in complex systems where interactions are not fully modeled or tested.
    • Context Matters: Emergence becomes problematic when it leads to unintended consequences, especially in safety-critical systems like aviation. Proper engineering (e.g., robust testing, redundancy, and failure analysis) can mitigate harmful emergent behaviors while harnessing beneficial ones.
    • MCAS Lesson: The MCAS case shows that emergence can occur in designed systems when complexity or incomplete design considerations allow unexpected interactions. However, the primary issue was bad engineering, as thorough risk assessment and redundant systems could have prevented the catastrophic outcomes.

Thoughts for Your Debate

  • Your Position: You can argue that emergent behavior is a characteristic of some designed systems, particularly complex ones, but it’s not universal. The MCAS case illustrates that undesirable emergent behaviors often result from engineering oversights rather than emergence being inherently bad. Emergence is a neutral phenomenon—its desirability depends on whether it aligns with system goals and how well engineers anticipate and manage it.
  • Countering Your Opponent:
    • If they claim emergence isn’t a characteristic of designed systems, point to examples like AI systems, social networks, or SoS, where complexity leads to unforeseen behaviors despite design intent.
    • If they argue emergence is always bad, highlight positive emergent behaviors (e.g., adaptive resilience in distributed networks) and emphasize that MCAS’s issues were primarily due to engineering failures, not emergence itself.
    • Use MCAS to show that while emergent-like behaviors occurred, they were rooted in predictable failure modes that good engineering could have addressed, reinforcing that emergence is a manageable aspect of complex systems.

Conclusion

Emergent behavior can be a characteristic of complex designed systems, but it’s not universal across all designed systems. The MCAS case on the Boeing 737 MAX demonstrates a mix of emergent-like behavior (unintended interactions from faulty data) and clear engineering failures (lack of redundancy, poor testing). While emergence isn’t inherently bad, it can lead to negative outcomes in safety-critical systems if not properly managed. The MCAS case underscores the importance of rigorous engineering to anticipate and mitigate potential emergent behaviors, rather than emergence itself being the problem.

If you want to refine your argument or analyze another example to strengthen your debate, let me know!

(I hadn’t explained my position in the debate, which was really just a discussion on the INCOSE internal Viva Engage site. Note that I also like to treat this as a discussion, and waste a lot of Elon Musk’s compute power.)

Grok: You’re right to zero in on the role of feedback loops and self-modifying behavior as key contributors to emergent behavior in systems, and your analysis of the MCAS case and traffic control systems aligns well with this perspective. Let’s explore your hypothesis that emergence is rooted in self-modifying behavior or feedback loops, using the MCAS and traffic control examples, and address the point about human subsystems introducing errors. I’ll keep this focused and concise while diving into the nuances of your argument.

Emergence and Feedback Loops/Self-Modifying Behavior

Emergent behavior often arises from feedback loops—where a system’s outputs influence its inputs, creating dynamic, iterative interactions that can lead to unexpected outcomes. Self-modifying behavior, where a system adapts or changes its own state based on internal or external stimuli, is a closely related mechanism. These concepts are central to why complex systems exhibit emergence:

  • Feedback Loops in Emergence: Feedback loops (positive or negative) amplify or stabilize system behavior in ways that can produce outcomes not explicitly programmed or anticipated. For example:
    • Positive Feedback: Amplifies changes, potentially leading to runaway effects (e.g., viral content on social media growing exponentially due to user sharing).
    • Negative Feedback: Stabilizes a system (e.g., a thermostat adjusting temperature), but unexpected interactions can still yield emergent patterns.
  • Self-Modifying Behavior: Systems that adapt their structure, rules, or state (e.g., AI learning algorithms, adaptive traffic systems) are prone to emergence because their evolution introduces new interaction dynamics. This aligns with your observation about systems modifying their own states in ways that lead to emergent outcomes.

MCAS and Emergence via Feedback Loops

The Boeing 737 MAX’s MCAS system is a great case to test your hypothesis:

  • MCAS Feedback Loop:
    • MCAS was designed to take input from an Angle of Attack (AoA) sensor and adjust the aircraft’s pitch (output) to prevent stalls, effectively creating a feedback loop: sensor data → MCAS command → aircraft state change → new sensor data.
    • Under normal conditions, this loop was meant to stabilize the aircraft. However, when the AoA sensor failed (an unplanned change in MCAS’s state, as you noted), the feedback loop went “wonky.” Faulty sensor data triggered repeated nose-down commands, altering the aircraft state in ways that were neither anticipated nor easily corrected by pilots.
    • This behavior can be seen as emergent because the persistent, aggressive nose-down adjustments were not a direct design intent but arose from the interaction of MCAS, faulty sensor input, and the aircraft’s dynamics.
  • Self-Modifying Aspect:
    • MCAS itself wasn’t self-modifying in the sense of rewriting its own code or learning (like an AI might). However, it dynamically altered the aircraft’s state (pitch) based on real-time inputs, and the feedback loop created a self-reinforcing cycle when the sensor failed. This led to an emergent outcome: the aircraft’s uncontrollable descent, which wasn’t predictable from the individual components (sensor, software, stabilizers) alone.
    • Your point about “unplanned changes in MCAS state” (sensor failure) is spot-on. The emergent behavior stemmed from the system’s response to an unanticipated input, amplified by the feedback loop’s lack of robust error-checking or redundancy.
  • Was This Emergence or Bad Design?:
    • As you suggested, the “wonky” behavior ties to bad engineering: relying on a single sensor, inadequate pilot override mechanisms, and insufficient testing for failure modes. These flaws allowed the feedback loop to produce catastrophic emergent behavior.
    • However, the behavior still qualifies as emergent because it was an unintended, complex outcome of component interactions under specific conditions (sensor failure). Your hypothesis holds here: the feedback loop (sensor → MCAS → aircraft state) was the mechanism driving the emergence, exacerbated by poor design that failed to constrain the loop’s behavior.

Traffic Control Systems and Feedback Loops

Your traffic control system example further supports the idea that emergence is tied to feedback loops:

  • Feedback in Traffic Systems:
    • A traffic control system (e.g., smart traffic lights) takes inputs like traffic flow data, adjusts signals (outputs), and modifies the traffic state (e.g., reducing congestion), which feeds back as new input (updated traffic patterns).
    • Emergent behaviors can arise, such as synchronized traffic flow across a city or unexpected bottlenecks when a signal change in one area cascades to others. These outcomes emerge from the interplay of lights, vehicles, and driver behavior, not from any single component.
    • The self-modifying aspect comes from the system’s ability to adapt signal timings dynamically based on real-time data, potentially leading to patterns (e.g., smoother traffic or gridlock) that weren’t explicitly programmed.
  • Emergence in Action:
    • Beneficial emergence: Adaptive traffic systems can produce city-wide efficiency, like reduced average travel times, that wasn’t directly engineered but arises from coordinated signal adjustments.
    • Problematic emergence: Unplanned events (e.g., a road closure) can disrupt the feedback loop, causing unexpected congestion patterns. This aligns with your point about systems modifying their own state in response to inputs, sometimes unpredictably.

Human Subsystems and Erroneous Control Modifications

Your point about human subsystems introducing errors is critical, especially in complex systems where humans are part of the feedback loop:

  • Humans as Subsystems:
    • In systems like MCAS or traffic control, humans (pilots, drivers) act as components, responding to system outputs and providing inputs (e.g., pilot corrections, driver decisions). Their actions create additional feedback loops, which can amplify or mitigate emergent behaviors.
    • Erroneous Human Inputs: If humans act incorrectly—due to lack of training, misinterpretation, or system opacity—the feedback loop can spiral into unintended outcomes. For example:
      • In MCAS, pilots’ inability to quickly diagnose and override the system (due to poor training and documentation) exacerbated the feedback loop’s runaway behavior, contributing to the emergent crashes.
      • In traffic systems, drivers reacting unpredictably to signal changes (e.g., aggressive driving or ignoring signals) can disrupt the system, leading to emergent congestion or accidents.
  • Emergence and Human Error:
    • Human errors can trigger or amplify emergent behaviors by introducing unexpected inputs into the system’s feedback loops. This supports your view that systems can “fall apart” when human subsystems act incorrectly.
    • However, this doesn’t mean emergence itself is the problem. Rather, it highlights the need for robust design to account for human variability. In MCAS, better training, clearer interfaces, or redundant sensors could have mitigated the impact of human or sensor errors on the feedback loop.

Synthesis: Is Emergence Rooted in Feedback Loops?

Your hypothesis that emergence is rooted in self-modifying behavior or feedback loops is strongly supported:

  • Feedback Loops as the Mechanism: In both MCAS and traffic control systems, emergent behaviors (catastrophic pitch adjustments, traffic flow patterns) arise from feedback loops where outputs (MCAS commands, signal changes) alter the system state, which feeds back as new inputs. These loops create dynamic, non-linear interactions that produce outcomes beyond the sum of parts.
  • Self-Modifying Behavior: Systems that adapt their state (e.g., MCAS adjusting pitch, traffic lights changing timing) are particularly prone to emergence because their dynamic responses introduce unpredictability, especially under unplanned conditions (e.g., sensor failure, road closures).
  • Human Role: Humans as subsystems add complexity to feedback loops, and their errors can trigger or amplify emergent behaviors, as seen in MCAS. This underscores the importance of designing systems to handle human variability.

Is Emergence a “Bad” Characteristic?

  • Neutral Nature: As discussed previously, emergence is neither inherently good nor bad. It’s a natural outcome of complex systems with feedback loops. In MCAS, the emergent behavior was catastrophic due to engineering flaws, not because emergence itself is problematic. In traffic systems, emergence can be beneficial (e.g., optimized flow) or harmful (e.g., gridlock), depending on design and conditions.
  • Engineering’s Role: The MCAS case shows that bad engineering (e.g., single-point failures, inadequate testing) can turn feedback-driven emergence into a liability. Good engineering, conversely, can harness emergence for positive outcomes, like adaptive traffic systems improving efficiency.

Implications for Your Debate

Your opponent (arguing that emergent behavior is a characteristic of designed systems) has a point, as complex designed systems often exhibit emergence due to feedback loops and self-modifying behavior, as you’ve noted. However, you can counter that:

  • Emergence isn’t universal to all designed systems—only those with sufficient complexity or dynamic interactions (e.g., MCAS, traffic systems).
  • The MCAS case shows emergence can occur but is often a symptom of engineering oversights rather than an inherent flaw of designed systems. The feedback loop’s “wonky” behavior was predictable with proper risk analysis, pointing to bad design rather than emergence being a “bad” characteristic.
  • Human subsystems, as you noted, amplify emergence when they introduce errors, but this is manageable with robust design (e.g., redundancy, clear interfaces, training).

Conclusion

Your insight that emergence is rooted in feedback loops and self-modifying behavior is spot-on. Both MCAS and traffic control systems illustrate how feedback-driven dynamics can lead to emergent outcomes—catastrophic in MCAS due to sensor failure and poor design, or adaptive in traffic systems. Human subsystems can indeed destabilize these loops with erroneous inputs, as seen in MCAS’s pilot interactions. Emergence isn’t inherently bad but becomes problematic when engineering fails to anticipate or constrain feedback loop behavior. For your debate, you can emphasize that while emergence occurs in complex designed systems, its negative impacts (like MCAS) are often due to engineering shortcomings, not the phenomenon itself.

If you want to refine this argument further or test it against another example, let me know how I can assist!
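(To make the feedback-loop point concrete, here is a toy simulation I sketched in Python. It is not a model of MCAS or of any real aircraft; the control law and every constant are invented for illustration. It simply shows how a loop that normally stabilizes a state can run away when one input gets stuck.)

# Toy pitch-control loop: a controller nudges the state toward a target based
# on a sensed value. All numbers and the control law are invented; this is not
# a model of MCAS or of any real aircraft.

def simulate(sensor_stuck_at=None, steps=10, gain=0.5, target=2.0):
    pitch = 5.0                              # actual state (arbitrary units)
    history = []
    for _ in range(steps):
        sensed = pitch if sensor_stuck_at is None else sensor_stuck_at
        command = gain * (target - sensed)   # negative feedback on the *sensed* value
        pitch += command                     # the command changes the real state
        history.append(round(pitch, 2))
    return history

print(simulate())                      # healthy sensor: the state converges toward 2.0
print(simulate(sensor_stuck_at=9.0))   # stuck-high sensor: the same nose-down command
                                       # is issued every step and the state runs away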

Grok: You’re absolutely right—social media platforms are a classic example of a feedback loop where user interactions (likes, shares, clicks) modify the feed’s content, which in turn influences further user behavior. This can lead to emergent phenomena like echo chambers or viral trends, which can “go wrong” (e.g., amplifying misinformation) when the feedback loop isn’t well-constrained by design. Thanks for pointing that out! If you want to dive into another example or refine your debate points, I’m here.


Installing an MBSE Process, Part 2

by Eric Barnhart

Introduction

In part 1 of this series, I presented several of the approaches companies use to address model-based systems engineering; all of those approaches are doomed to fail.  Now let’s take a look at approaches that have been demonstrated to be successful.  Part 2 addresses how to nurture an MBSE initiative.

Crawl-Walk-Run

[Image: Crawl Before Walking, Walk Before Running]

One of the overarching principles I advocate for bringing MBSE to a company is the Crawl-Walk-Run approach. This was a guiding principle at one company where our team successfully bootstrapped an MBSE process. It’s actually quite simple: learn to crawl before you try to walk, and learn to walk before you try to run. Trying to go right from lying on your back to running a marathon is bound to end in tears, skinned knees and hands, and bruised egos. Going straight from non-standard functional block diagrams in Visio to SysML internal block diagrams in Cameo Architect will end pretty much the same way, plus cost a lot of money. Don’t make that mistake. Start with the basics, and build up from there. Crawl-Walk-Run applies to all the following sections.

Preferably Pick a Prepared Process

One of the earliest things a company must do as part of its MBSE initiative is to select an MBSE process to implement. I suggest selecting an established process, such as OOSEM from INCOSE, IBM’s Rational Harmony SE process, the IBM Rational Unified Process for Systems Engineering, Vitech Corporation’s STRATA process, or one of several others. Why an established process? If your company is new to MBSE, it most likely doesn’t have the skills or expertise to develop a custom process. Start with something that already exists.

There are no hard and fast rules for selecting a process, as it depends on the needs of the company, its existing engineering knowledge base and the expectations of its customers. There are some heuristics you can use.

If object orientation is completely foreign to your technical staff, as well as your business development and project management staff, then selecting an object-oriented process is ill-advised. It’s easier to work with concepts your staff already understands, or that they can easily pick up on. You need to get the staff crawling before advancing to either walking or running.

If your typical customers prefer reports and drawings in a particular format, then it’s best to select a process that supports those reports and formats. For example, if your primary customers and their products are very mode- and state-focused, your process should revolve around handling modes and states. These require a different approach and a different process from products that are user-centric and use-case driven.

Customize the Complexity

Whatever prepared process you select, you will need to customize it to your own needs. Prepared processes are designed for any situation engineering can throw at them, and are generally complex. Most of your customization at this point will be subtracting unneeded complexity from the process. If the process has a series of steps that don’t apply to your company, pull them out of the pre-defined process as part of your customization. If your company is used to performing similar steps in a different order, feel free to change them up. Again, start crawling for your initial deployment before you try to walk.

With all this customization, you’re going to need to document your customized process. If it’s not written down, does it exist? If process documentation requires felling some trees in the forest and nobody reads the resulting paper, did it ever happen? Write your process down and formalize it. Be sure to apply your local configuration management practices to these company processes, and make sure the process documents get distributed.

Merge the Maps

A pre-existing process will define the MBSE process using its own map of the process territory. You will need to create a cross-reference between the MBSE process and your standard engineering process. This is necessary for two reasons. First, you need to make sure your selected and customized MBSE process covers all significant aspects of your standard, non-MBSE process. For example, if your standard process requires functional decomposition, you should be able to describe how the MBSE process also covers the equivalent of functional decomposition, even though the details will be different. If you can’t make a cross-reference between the two, you will need to understand and explain the exception.

Second, your engineers will need to grasp what they will be doing in an MBSE environment. Mapping an MBSE process against a non-MBSE process provides a necessary degree of familiarity for engineers. When they put the MBSE process into practice, they’ll be able to recognize that the new practice solves the same problems they were addressing with the old practice.

Without a “Rosetta Stone” for your process, your engineers simply won’t be able to translate it to something they understand. If they don’t understand the what and why of their tasks, they will put up resistance to your new process. Experience has shown that engineers can be stubborn that way.
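To show what I mean by a Rosetta Stone, here is a deliberately tiny, hypothetical cross-reference sketched in Python. Every entry is an invented placeholder; yours would come from your own documented processes.

# Hypothetical "Rosetta Stone": legacy process step -> MBSE-process equivalent.
# Every entry is an invented placeholder; build your own from your documented
# processes.

PROCESS_MAP = {
    "Functional decomposition":   "Behavior decomposition in activity diagrams",
    "Interface control document": "Interface definitions on internal block diagrams",
    "Requirements allocation":    "Requirement-to-block allocation relationships",
    "Trade studies":              "Parametric analysis of candidate architectures",
}

def unmapped(legacy_steps):
    """Legacy steps with no MBSE counterpart - each needs an explained exception."""
    return [step for step in legacy_steps if step not in PROCESS_MAP]

print(unmapped(["Functional decomposition", "Failure modes analysis"]))
# ['Failure modes analysis']  -> explain how (or whether) the MBSE process covers it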

Wield One Tool Well

Now that you’ve got a customized MBSE process that suits your company’s needs, you’re going to have to put it into practice using an MBSE tool.

[Image: Only One Tool at a Time]

Visio and PowerPoint are not going to make the cut for real MBSE, and using Excel databases to hold model entities actually somewhat scares me. (It should scare you too.) Pick a tool, but pick only one. That’s right, you’re still at the crawling phase, and you want singular focus on a single MBSE tool. Find the tools that support your process (you probably already know them) and make your best deal with the vendor. You’re going to need a lot of licenses and vendor support to make this work.

So far, you’ve got an MBSE process documented and you’ve got an MBSE tool. The next step is to translate your process into actual practice.

Pilot Projects Perfect Your Process for Practice

The next step in crawling from a set of MBSE process documentation to standing up an MBSE practice is to execute a pilot project using the new process. I recommend a simple project executed by the MBSE deployment team. The pilot should be common to your business and easy for the engineers to agree upon once they see it. During this pilot, the deployment team learns to walk by executing their process. The team uses the defined methodology and selected tool, but perhaps more importantly, the team documents every decision, menu selection, and mouse click.

The methodology might very well start with a group sketching on whiteboards and making analysis decisions. Document all the steps. For example, why is this a use case while some other alternative is not? Explain it. Why does an activity diagram use a fork here but a decision branch there? Explain it.

Next, move the artifacts into the tool. How is a new project created? Show the menu selections and mouse clicks. How do you organize a standard project? Show how to create packages and folders, or your tool’s equivalent. How do you build a functional behavior diagram? Again, show all the steps in the tool.

Once you complete your pilot project, you’ll have a small team of experts stood up and ready to walk. Plus you’ll have all you need to create material for the next phase of nurturing.

Embrace Enthusiasm for Education and Training

As I wrote about before, you can’t just throw a process, or even a detailed practice, out into the engineering community and expect engineers to embrace it. Like an aged and experienced elephant, the engineering team will go where it wants and follow the familiar trails. Experience has shown that engineering teams can be stubborn that way as well. You can’t force a team, or an elephant.

But you can train it.

[Image: Effective Training is Essential]

In fact, you need to make a commitment to training all of your system engineers in the MBSE process, and more importantly in the practice of it using the tool set. By “commitment,” I mean budget and time. If it’s important to the company, then the company will put money and time behind it. Money and time are the currencies of value in the language of corporations. A company that trains all its system engineers in MBSE clearly communicates the importance the company puts on the process, practice, and use of MBSE.

 

On the other hand, a company that expects engineers and instructors to participate in training during off days for free, or to take generic web-based training on their own time, is communicating a very clear message: “MBSE training is not important. Its value to the company, in our monetary language, is a big fat $0.00.”

Communicate the right message to nurture your MBSE initiative.

Call a Coach

Although your engineering teams will be well trained and ready to put the MBSE process and tools into practice, they are not really experts yet. They are walking, not yet running. They’re still developing paths through the MBSE jungle. You can make it easier for them by having an MBSE coaching team available for project teams to call on demand. Instead of fumbling through an initial MBSE program startup, the engineering team will have a small team of experts available to help, offer advice, and make the program bootstrap go much faster.

Summary

The alliterative steps I’ve outlined here, from picking a prepared process to calling a coach, all support an MBSE initiative from crawling through walking. These aren’t all the good ideas for nurturing an MBSE initiative, but I consider them the main ones. If you have more ideas or additional experience, feel free to share in the comments.

The more effort you put into these nurturing efforts, the more likely you are to successfully stand up an MBSE process initiative and move it forward to the benefit of your company. Walking forward will take you far, but as your system engineering skills improve, you’ll want more options, more tools, and higher quality outputs. You’ll probably want to add new MBSE tools and training beyond your initial selection so you can satisfy more customers, or add new analytical techniques to your repertoire and train the team to use them in the tool. When you reach this stage you can truly say you’re up and running, and your MBSE initiative has reached its goals.

In the meantime, keep calm and MBSE on.


Example Map – the Simple Requirements Model

(continued from part 2)

What makes a map of System Engineering valuable?  How does a system engineer use a map to get from one place to another in the engineering process?  How does a map support decision making?

In a previous article, I addressed what it takes to build a map of the Systems Engineering territory, or an information schema for SE data.  I included a real map of a schema created for a tool back in the ancient days of the 1990s.  That map was far too complicated to explore for an introductory discussion, so I put together a very simple map of systems engineering that I expect many SEs can understand and agree to, at least in principle.  I’ll call it the “Simple Requirements Model”, or just SRM.

In the rest of the article I’ll explore the SRM to establish some basic concepts about managing maps of information for systems engineering.

[Diagram: The Simple Requirements Model (SRM) – SE entities and their relationships]

Entities

The model graphically shows entities using blue rectangles.  Each entity is a collection of tightly coupled data which collectively define and uniquely identify that entity.  If you are an aficionado of relational databases, you might think of these entities as rows in a table.  If you’re an advocate of object-oriented methods, you might consider the entities to be objects that realize a particular class.  For this high-level discussion, the specific implementation is irrelevant.

Entities also have a set of business rules, which are unique to the type of entity.  The rules define processes for creating an entity, establishing its validity, performing configuration management on it, and more.  The business rules are not visible in the entity data; instead, they define the processes and operations behind the entities.  Business rules will be unique to the organization that creates the system engineering map.  For this introductory-level discussion, the business rules are kept to a minimum.

Relationships

Entities have relationships with one another, shown as lines with arrowheads to indicate the direction of the relationship.  Relationships show that entities have a correlation or linkage between them.  To capture the meaning of that linkage, every relationship needs at least a name to define it, and rules for how the linkage is made.

Consider your relationship to an automobile.  You might be the driver of the car, a relationship established when you get into the driver’s seat, start it, and drive off.  You might very well also be the owner of the car, a relationship established by the title, which is recorded by a local government entity.  You and the car are two very different entities, and have two (or more) very different relationships.

Exploring the Map

Now that we have some basics, let’s look  at the SRM map in more detail.

As described above, a well-constructed map of an information territory, or domain, represents well-defined entities, each with not only a name and perhaps a descriptive definition, but also a set of attributes necessary to characterize the entity in its particular context.  Each of the elements in the SRM needs its own set of attributes.  Let’s explore the entity at the center of the SRM, the requirement.

So what is a requirement?  According to INCOSE, a requirement is:

  1. A condition or capability needed by a user to solve a problem or achieve an objective.
  2. A condition or capability that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed documents.
  3. A documented representation of a condition or capability as in (1) or (2).

For our purposes, a requirement is most closely like definition 3, the documented representation of 1 and 2.  Since it is a defined element of this model, we shall refer to the entity as a «requirement», where the guillemets («») act as a flag that we are talking about a well-defined element of the SRM.

(If you choose to use an indicator of a defined entity in your documentation, you can choose any indicator you like, or no indicator at all.  I advocate flagging any reference that has a well-defined meaning to let readers know they’re looking at something using a precise definition, and not a random or ambiguous term.)

Not only does the «requirement» have a definition, it is also characterized by a specific set of attributes that describe the «requirement» and only the «requirement».  Here is a look at the attributes of a «requirement» in the SRM:

[Table: Attributes for a «Requirement» in the SRM]

In the context of the SRM (and frankly, in almost all practical contexts), a «requirement» can be characterized with only two required attributes: a unique ID and the text of the user need.  (SysML defines exactly these two.)  The SRM defines only 15 requirement attributes in total, and only those two are mandatory.
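If it helps to see the idea in something more concrete than prose, here is one possible representation of the «requirement» entity in Python.  Only the two mandatory attributes (the unique ID and the requirement text) come from the SRM; the optional fields shown are my own illustrative guesses, not the SRM’s actual attribute list.

from dataclasses import dataclass
from typing import Optional

# One possible representation of the SRM «requirement» entity. Only req_id and
# text are mandatory, matching the SRM; the optional attributes are illustrative
# guesses, not the SRM's actual list of 15.

@dataclass
class Requirement:
    req_id: str                       # unique ID (mandatory)
    text: str                         # the user-need statement (mandatory)
    rationale: Optional[str] = None   # illustrative optional attribute
    priority: Optional[str] = None    # illustrative optional attribute
    status: str = "draft"             # illustrative optional attribute
    parent_id: Optional[str] = None   # supports the «has parent» relationship

r = Requirement(req_id="SRM-001",
                text="The system shall report its position once per second.")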

Since the «requirement» entity is at the center of the SRM domain, the next step will be to see how it relates to the other entities in the SRM.

«Requirement» «has parent» «Requirement» Relationship

A set of «requirement»s tends to have a hierarchical organization, so a «requirement» can have a relationship to another «requirement» called its parent, or «has parent».  The relationship is shown with a directed arrow.  The hierarchy can be discerned by recursively following «has parent» relationships until we find a «requirement» that has no parent «requirement»s.

In a more complex model, the «has parent» relationship might be replaced or complemented with more detailed relationships, such as «decomposes», «disambiguates», or «derived from».  The SRM uses the simpler «has parent» relationship.
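Traversing «has parent» is mechanically simple.  Here is a small illustrative sketch using plain Python dictionaries and hypothetical requirement IDs, rather than any particular tool’s API.

# Recover the «requirement» hierarchy by walking «has parent» links.
# Requirements are plain dicts here (id -> parent id); the IDs are hypothetical
# and the sketch assumes the parent links contain no cycles.

requirements = {
    "SRM-001": None,        # no parent: a root requirement
    "SRM-002": "SRM-001",
    "SRM-003": "SRM-001",
    "SRM-004": "SRM-002",
}

def roots(reqs):
    """Requirements with no «has parent» relationship."""
    return [rid for rid, parent in reqs.items() if parent is None]

def ancestry(rid, reqs):
    """Follow «has parent» links from a requirement up to its root."""
    chain = [rid]
    while reqs.get(chain[-1]) is not None:
        chain.append(reqs[chain[-1]])
    return chain

print(roots(requirements))               # ['SRM-001']
print(ancestry("SRM-004", requirements)) # ['SRM-004', 'SRM-002', 'SRM-001']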

«Document» «reports on» «Requirement» Relationship

A hierarchical set of requirements is usually represented not as a simple list, but as a document or specification.  A «document» is a separate entity from a «requirement» because a document has a different set of characteristics.  Documents will have a formal ID, title, author, revision, multiple sections and subsections, and release sensitivity.  None of these apply to individual requirements.  It makes sense, then, that a «document» should be stored as a separate entity from a «requirement».

This is significantly different from legacy DOORS 9.x implementations, in which requirements and documents are misguidedly mixed together into a confusing morass. (It is possible to separate the two using DOORS 9.x, but that requires the design of the database to differ from the originally intended design of the DOORS application.)

Furthermore, «document»s differ from «requirement»s because they have different methods to operate on and maintain them.  A «document» «reports on» «requirement»s, or inversely, «requirement»s are «reported on» by «document»s.

Publishing a specification is a process of selecting the «requirement»s from a model based on a query of the requirement allocation to a component, and then building a «document» to organize and report on the selected «requirement»s.
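In code, that publication step is little more than a filter over the «allocated to» relationship followed by formatting.  Here is a minimal sketch; the requirement and component data are hypothetical.

# Publish a simple specification: select the «requirement»s «allocated to» a
# «component», then build a «document» from the selection. All data below is
# hypothetical.

requirements = [
    {"id": "SRM-010", "text": "Report position once per second.", "allocated_to": "GPS Receiver"},
    {"id": "SRM-011", "text": "Operate from -40 C to +70 C.",      "allocated_to": "GPS Receiver"},
    {"id": "SRM-012", "text": "Display position on the map.",      "allocated_to": "Display Unit"},
]

def publish(component, reqs, title=None):
    """Return the text of a «document» that «reports on» the selected «requirement»s."""
    selected = [r for r in reqs if r["allocated_to"] == component]
    lines = [title or f"Specification: {component}", "=" * 40]
    lines += [f'{r["id"]}: {r["text"]}' for r in selected]
    return "\n".join(lines)

print(publish("GPS Receiver", requirements))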

«Requirement» «allocated to» / «satisfies» «Component» Relationships

A «requirement» states what the customer needs, but that need is delivered as a «component» of the overall deliverable system.  This is how the SRM ties together the descriptive model of «requirement»s with the structural architecture model of «component»s.  A typical business rule maintains that a «requirement» is allocated to exactly one «component».  (If it were allocated to more than one, the «requirement» would be stating more than one goal and would need to be decomposed into multiple requirements.)  The inverse relationship says that a «component» satisfies a «requirement».  A «component» can satisfy multiple «requirement»s.

A «component» is a separate entity from a «requirement» because it has unique characteristics that describe it as a physical entity.  «Component»s will have a unique identifier, a set of functions, a set of constraints, physical characteristics that describe each one as a controllable physical item, procurement attributes, and others.  In addition, «component»s are managed and controlled as configuration items, not as «requirement»s.

«Component» «has parent» «Component» Relationship

The architecture of a system generally is a hierarchical structure of «component»s.  Similar to a «requirement», a «component» has a relationship to a parent «component» via the «has parent» relationship.  Traversing the «has parent» relationship reveals the structural hierarchy.

«System» «is a type of» «Component» Relationship

In the SRM, «requirement»s are «allocated to» «component»s, which allows some flexibility in how a business defines a «component».  Since I’m describing a system engineering model, the model needs to have a «system» in it.  In the SRM, I show that a «system» is a specific kind  of «component» using the relationship «is a type of».  In a larger and more useful SE model, the «system» and «component» would have a much more detailed model surrounding them and describing more relationships.

«Requirement» «satisfied during» «Increment» Relationship

In many projects, the development effort will be spread over several phases, or implemented in «increment»s.  A business rule might state that a «requirement» will be «satisfied during» exactly one «increment».  (If it were «satisfied during» more than one, the «requirement» would be stating more than one goal and would need to be decomposed into multiple requirements.)

«Test Case» «verifies» «Requirement» Relationship

In order to prove that a «requirement» is met, the «requirement» needs to be tested and verified.  A «test case» defines the testing and verification approach and is related to multiple «requirement»s through the «verifies» relationship.  A «test case» is a separate entity from a «requirement» not only because it has its own descriptions and goals, but also because it is finalized during a later lifecycle phase of the program, generally after the requirements set has been baselined.

«Change Request» «modifies» «Requirement» Relationship

In order to control changes to a requirements set in an orderly fashion, each «requirement» is generally placed under control of configuration management.  A business rule might state that a «change request» is necessary before modifying a «requirement».  The «change request» might have attributes that look a lot like the attributes of a «requirement», as well as additional attributes for status and tracking of the request.  The «change request» is handled differently from a «requirement» and lacks the other relations of a «requirement».  Given these factors, the «change request» is an independent entity in the SRM.

«Requirement» «describes» «Capability» Relationship

A «requirement» states what the customer needs, but ultimately the customer has a specific desired effect the customer wants to see in the end product.  That desired effect is captured by a «capability» entity.  Typically multiple «requirement»s will «describe» a «capability».  This is how the SRM ties together the descriptive model of requirements with the results-driven behavioral model of «capability»s.
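
Because the «describes» relationship is many-to-one, it is easy to invert it and see every requirement standing behind a capability.  A sketch with invented data:

```python
from collections import defaultdict

# Requirement id -> the «capability» it «describes» (hypothetical data).
describes = {
    "REQ-001": "CAP-Refuel",
    "REQ-002": "CAP-Refuel",
    "REQ-003": "CAP-Navigate",
}

def requirements_behind(describes: dict) -> dict:
    """Invert the «describes» links to list every requirement behind a capability."""
    by_capability = defaultdict(list)
    for req, cap in describes.items():
        by_capability[cap].append(req)
    return dict(by_capability)

print(requirements_behind(describes))
# {'CAP-Refuel': ['REQ-001', 'REQ-002'], 'CAP-Navigate': ['REQ-003']}
```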

«Component» «implements» «Capability» Relationship

Finally, the «component»s of the end product must exhibit well-defined, desired behavior to satisfy the customer.  The SRM models this by showing that a «component» «implements» one or more «capability»s.  A business rule might allow a «component» to «implement» many «capability»s, but insist that a «capability» be implemented by exactly one «component».  (If a «capability» were implemented by multiple «component»s, then the «capability» would need to be decomposed into multiple subcapabilities.  The SRM, being simple, does not support this.)
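
As with the increment rule, this one lends itself to a simple check.  A last illustrative sketch, again with made-up identifiers:

```python
from collections import defaultdict

# Component id -> capabilities it «implements» (hypothetical data).
implements = {
    "COMP-Pump": ["CAP-Refuel", "CAP-Drain"],
    "COMP-Nav":  ["CAP-Navigate"],
    "COMP-GPS":  ["CAP-Navigate"],   # violates the rule below
}

def capabilities_to_decompose(implements: dict) -> list:
    """Capabilities implemented by more than one component, which the simple rule
    says should instead be decomposed into sub-capabilities."""
    owners = defaultdict(list)
    for comp, caps in implements.items():
        for cap in caps:
            owners[cap].append(comp)
    return [cap for cap, comps in owners.items() if len(comps) > 1]

print(capabilities_to_decompose(implements))   # ['CAP-Navigate']
```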

Map Making Lessons

This map is so simple the reader no doubt found multiple deficiencies and inconsistencies between this simple model and their own company’s system engineering approach.  Still, this simple example provides a background to learn a few lessons.

One of the first lessons learned is that creating this very simple map, or schema, is a surprisingly tedious process.  It was probably even tedious to read it.  (Sorry about that.)

Part of what makes it so tedious is that we feel we already know these entities and relationships.  If we knew them so well, they would be written down, wouldn’t they?  If everybody knew them by the same exact definition, wouldn’t the system engineering process go much faster and be more accurate?  Despite feeling that we know them, almost no one has really taken the time to be explicit with them.  (I’m sure a Vitech Core sales representative will chime in here.  Feel free to comment.)  As a result, when moving from company to company, and even project to project, we waste a lot of time reinterpreting our map for the SE process, and stumbling over minor misconceptions and terminology.   SE is a discipline of communication and information management, yet we fail to do that for ourselves.

If your company actually goes to this level of detail, or more, please let me know in the comments.  I only encountered one organization in my 35-year career that actually gave this much thought to the engineering process.

Another humorous lesson is that using guillemets is itself tedious, especially in an article that references the underlying entities of a model a lot!

And we’re not done yet.  The next lesson to be learned is captured in the final installment of information management in systems engineering.  I’ll explore how to use the SRM to satisfy the needs of engineering users.

DISCLAIMER:  I am tool agnostic.  I am not associated with any tool or vendor, nor do I benefit from any tool sales or stock.

(part 4 coming soon)

Learn when the latest articles in this series are posted – Join the vmcse.com mailing list because you don’t want to miss out.

Installing an MBSE Process (part 1)

Introduction

Moving a company from traditional engineering processes to model-based processes (or any new process for that matter) is a tricky endeavor.  It’s easy to start enthusiastically, and then witness the whole effort crash and burn, or just fade into oblivion.  For those of us who are enthusiasts for modern practices, this can be extremely frustrating.  For those of us who have been given the tricky task of implementing a model-based process in our organization, it can be demoralizing to see our efforts fail. What can we do about these outcomes, other than run away in fear from a presumed MBSE monster?  If you want to avoid a monster movie playing out, this two-part article is for you.

Over my decades of systems engineering, I’ve seen many variations of attempts to install modeling processes of some form or another into the system engineering process. I’ve learned to predict accurately whether a company will succeed or fail with their efforts, and I can predict it within the first 90 days of an MBSE initiative.

Pretty bold claim? Well, I see things. I see dead processes. There are a lot more of those than living ones.  Part 1 addresses the issues that destroy your attempts at updating your process, and Part 2 addresses key elements to make it succeed.

Let’s start by getting a little more analytical about why efforts fail.

Part 1: Don’t Kill the Baby

You don’t want a process that’s DOI (Dead On Initiation) do you? You want an MBSE process that survives and benefits your organization. Before I tell you how to be successful, I feel I should give you an overview of what kills your MBSE process. After all, it makes no sense to nurture a new process at the same time you’re strangling it to death. So first take heed of these cautionary points to “process-proof” your organization before starting an MBSE initiative.

Grass roots efforts are doomed to fail

If you are relying on grass-roots efforts among the engineering staff to boot-strap your MBSE initiative, you’re dooming yourself to failure. Sure, the grass-roots, die-hard MBSE engineers are the ones who will jump aboard the MBSE band wagon when it shows up (if it shows up at all). They even provide the fertilizer to help the seeds of MBSE grow. On the other hand, they cannot direct other engineers to follow MBSE standards, nor direct change in programs or organizations.

If you’re relying on grass-roots efforts, your MBSE initiative has already succumbed to drought and chinch bugs, and died below the surface.

Dropping tools onto engineers’ computers is just fumbling the ball

So maybe you’ve decided to fork over some of your budget and purchase a set of licenses for the latest and greatest MBSE tool. Then you push the tool to all your engineers’ computers and let them have at it. Nice thought, but ultimately a bad idea. The engineers probably don’t have the time or charge number to learn how to use the tool. If they do find time, each engineer will fumble around until they develop their own approach and methods. There is no ability to transfer tool skills between work groups or programs if everybody is engineering their models differently.

If you think all you need is an MBSE tool pushed out to engineering, you are sadly mistaken. When push comes to shove, your lack of attention to your tool roll-out will have pulled your MBSE initiative to its death.

Executive mandates are ignored in the trenches

Maybe, in a fit of hubris, you decided to issue an executive mandate to use MBSE techniques on all programs, and use the company selected tool. I’ve even heard the suggestion to make MBSE techniques part of the yearly performance review criteria. On the surface this seems to make sense. It shows the company backs the MBSE initiative.

On the other hand, down in the engineering management trenches, the staff knows there’s a much more important mandate: get the job done on schedule and make a profit. As a result, the MBSE implementation will be absolutely minimal, that is, just enough to check the box that the MBSE mandate was followed. MBSE will ultimately have little or no impact on the program.

If you think all it takes to get MBSE off the ground is a mandate from the VP of Engineering, then feel free to tick that box on your management checklist. The box is checked, but the entire MBSE initiative checked out long before the use case analysis began.

Exceptions multiply like weeds

Maybe you’re smarter than most organizations and bootstrapped an MBSE initiative without falling into any of the previously described traps. Did you recognize that sometimes MBSE will not be appropriate, or that some engineers or managers will not be up to speed when necessary? If so, you’ve probably granted some exceptions to the MBSE process, or have been otherwise lax in enforcing your process. Maybe the engineers are bringing Visio and PowerPoint versions of their diagrams to peer reviews, or even to more formal reviews. After all, Visio makes a more pleasing diagram than most MBSE tools, right? Maybe the engineers are skipping use case analysis because they see no value to it. After all, they never needed to do it before. It’s easier to follow the old, familiar path than try this new MBSE stuff. The engineers can make the case that following the old path is faster than doing all these new, unfamiliar MBSE processes.

Sure, Visio drawings might look better, but you can’t accurately engineer a solution using non-standard artist’s renditions of the proposed solution. You need the engineering drawings, and those come out of your MBSE process and tool using standard notation. You need ALL the engineering artifacts in your process to be in your MBSE tool’s database in order to maintain correct and useful relationships from top to bottom.

And of course, there is a learning curve to deal with in any new endeavor. Don’t forget, the OLD processes had learning curves too.

Do remember that if you allow exceptions whenever the engineer feels like it, you’ll see everything grow except the MBSE process you wanted.

Bad Management Practices You Can’t Change

Naturally, these are not all the anti-patterns leading to failure. There are other systemic issues to be aware of. Changing executive management in the middle of an MBSE roll-out will often kill off the initiative. Changing corporate values to satisfy the shareholders at the sacrifice of company performance will do the same thing. These are not MBSE-specific issues, but overall management problems that run rampant through corporate culture. There’s little an engineering manager can do to fix these.

On the other hand, the engineering managers in charge of MBSE processes do have things in their power they can leverage to make MBSE work. Those will be addressed in part 2.

 


Subscribe to vmcse.com for updates and more articles addressing these information management problems and more.


Building a Map for System Engineering

(continued from part 1)

In order to better perform our jobs as system engineers, we need to understand the territory of systems engineering, guide ourselves from point A to point B in that territory, and be able to communicate our results to our stakeholders.  We understand our jobs by building and following a metaphorical map of the system engineering territory.  We make decisions about how to get from one point to another. We communicate to our customers by providing them another simplified map of the results that they can understand and relate to.  (Yes, the PowerPoint graphics so commonly used are a method of describing maps.)

You’re probably wondering why I discuss metaphorical maps and not the specific details of the system engineering process.  The reason is simple. Often we’re stuck in a very specific way of thinking.  An intellectual rut, if you will.  We’re locked into specific interpretations of our jobs and processes.  By thinking metaphorically, sometimes we can see relationships and open up to ideas we wouldn’t see without the metaphorical view.  If you’re willing, let’s explore this metaphor together.  Let’s back up, and ask ourselves, what is a map in general?

What is a Map?

For once, the dictionary fails us by not providing a good definition of map, so instead we turn to Wikipedia, where we learn that a map is:

…a symbolic depiction highlighting relationships between elements of some space, such as objects, regions and themes.

In a typical road map, as you might see on Google Maps, Bing, Apple Maps, or Waze (or many others) we see geographic regions and their relationships.  Some of the information includes relationships between cities via roads and relative locations.  We also see municipal boundaries and geopolitical extents.  We can see the relationships between cities and geographic entities such as rivers, lakes, mountains and shorelines.  Using more detailed views, we can delve down to the relationships between individual addresses and the corresponding cities and roads.

Of course, there are many different kinds of maps.  Maps have a specific scope and purpose, a specific theme.  Geographical, or topographical, maps are the most commonly used, but there are also many others.  Aeronautical maps show airspace restrictions, radio frequencies and general aviation information.  Topological maps show relationships and connections, but omit extraneous information (such as scale or distance) depending on the application.  Network maps show interconnectivity among communications nodes.  There are too many types of maps to mention here, but the interested reader can look into the subject of cartography on Wikipedia.

Given all the information on the maps, what do maps do for you?

  1. Provide information to find where you are, whether it be a city address or a network node (navigation)
  2. Describe the lay of the land
    • Show terrain, entities, and relative placement of the entities whether city addresses or network nodes
    • Show important landmarks
    • Show entities of interest for the subject area
  3. Enable guidance
    • Provide information needed to decide how to get from one node to another
    • Allow you to find the best routes

There are things that maps do not do.  For example, a map can enable guidance on your available travel options, but the map cannot tell you how to get from point A to point B.  Should you travel by car, or by airplane?  The map cannot answer that.  Should you detour to visit an attraction, or drive straight through?  The map cannot make decisions for you.  The map, by itself, cannot tell you the process of getting from A to B, nor the decisions along the way.   It only provides the information necessary to enable the decisions.  Without the information, you can’t make the decisions necessary to follow your process.

To cross terrain, we need both a map and a process for making travel decisions.  The common GPS system in your car or on your phone is a combination of a map, and an application to handle decision making.  In this case, the decision making is restricted to particular assumptions, such as travel by automobile, and that the goal is either the shortest distance or quickest route.
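
As a toy illustration of that split between the map and the decision application, here is a small Python sketch.  The road network and travel times are invented, and the route search stands in for whatever a real GPS application actually does:

```python
import heapq

# The "map": nodes and weighted edges (minutes of driving time, made-up numbers).
road_map = {
    "A": {"B": 10, "C": 25},
    "B": {"C": 10, "D": 30},
    "C": {"D": 12},
    "D": {},
}

def quickest_route(road_map, start, goal):
    """The "decision application": picks a route given a goal (here, least total time)."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, weight in road_map[node].items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + weight, nxt, path + [nxt]))
    return None

print(quickest_route(road_map, "A", "D"))   # (32, ['A', 'B', 'C', 'D'])
```

The map data never changes whether you optimize for time, distance or scenery; only the decision rule in the application does.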

If you think about it, a map comes pretty close to fulfilling my definition of a model:

  1. it’s a visual representation
  2. it’s a pattern of something else, which already exists in this case
  3. it captures information on structure, data and inferences
  4. often it is formalized, as in the case of GPS systems which store the map info in a database

Maps and Processes

System engineering is generally considered a set of processes with goals.  The main goal is to move from customer concept to design specification that can be handed off to other disciplines, like moving from one map point to another.  Where’s the information necessary to decide how to move? We describe our processes in System Engineering Management Plans (SEMPs) and measure our compliance to our processes, but we do a poor job of formalizing the necessary information.  I believe the missing ingredient is the map of the SE landscape that provides the information for the necessary decision-making.

In my opinion, system engineers have spent decades developing processes, but far too little time developing and formalizing the map of the SE territory.  In order to make decisions, we need a complete model of the system engineering territory, including a model of the SE discipline itself and a model of the customer needs.

When building models and maps of decision-making processes, we start to venture into the realm of Enterprise Architecture, or EA.  For the purposes of this article, I’ll avoid any Enterprise Architecture specific terms.  My views on EA will be discussed in another article.

The system engineering territory does not need to be explored like a poorly known Louisiana Purchase by the Lewis and Clark Expedition. (A part of early United States expansion in 1804-1806.)  We’ve already covered the SE territory, but we haven’t documented it very well.  There have been maps of SE, like the image below of the schema from an old SE tool from the 1990s, so the knowledge is there.  We just haven’t made that knowledge persistent.

RDD-100 Map

This particular map is of “Design Guide C” from the tool RDD-100 by Ascent Logic, Inc.  Oddly enough, the design guide (1 of 4 maps) was only documented in an alphabetically organized reference guide; this visual diagram was never provided by the vendor.  The picture only exists because I spent several days back in the 90’s exploring the reference guide to divine these visual relationships.

I believe that understanding the basic system engineering entities and their relationships is the key to improving the entire process of system engineering, and the solutions we generate.  In the next article, I’ll look at a simplified “map” of system engineering and what needs to be done to make it useful.

Part 3 is available here

Learn when the latest articles in this series are posted – Join the vmcse.com mailing list

The Information Side of System Engineering

First in a series

The Role of Information in System Engineering

As system engineers, we like to think that our jobs are to engineer at a system level, that is, we “enable the realization of successful systems” as INCOSE would say.  In order to do this, we address customer needs, operations, performance, cost and schedule, training and support, and to a lesser degree, test, manufacturing and disposal.  These are all very important areas to address if the system is to be successful.  If the system is to meet user needs, a discipline that coordinates and integrates these other disciplines is essential, hence system engineering.

System engineering is not a discipline of design.  We don’t design integrated circuits or printed circuit boards.  We don’t solder components to PCBs.  We don’t write production C++ or Java code.  We don’t install blade servers into racks and connect them with a specific network topology.  We don’t design mechanical parts to be machined on a Computerized Numerical Control (CNC) milling machine.  We don’t really do the “fun” stuff in engineering.

Instead system engineers have their own tasks.  We gather the user needs and wants, formalize them into requirements and create a requirements database.  When in doubt about a user need, we perform trade studies to determine the best implementable option, and document that in a trade study report.  We define the concepts of operation, system functions and logical architecture. We record all that in a set of specs and architecture documents, or better yet, in an MBSE database.  We create mappings between all the entities we defined and try to document those linkages.

Show Me the Data

This is nowhere near a comprehensive list of what system engineers do, but there should be a pattern emerging.  System engineers tend to create documents, databases, spreadsheets, relationships and all varieties of information.  SE doesn’t produce deliverable system end-products; no software, no hardware, no electronics.  Instead, system engineering produces intermediate products.  System engineering gathers and synthesizes the information necessary to design and build a product to satisfy the customer.   The individual disciplines actually build the product. System engineers answer the critical questions the other disciplines need to know up front to do their jobs right.  System engineering is a discipline of gathering and synthesizing information, then presenting it in such a way that it can be understood and used for communication with both the development disciplines and the customer.  System engineering is a discipline of information and coordination.

I am not the first to observe this.  Other authors have opined that system engineering is really an exercise of information management.  (I’d credit them if I could remember who inspired these ideas. If you know of anyone who also talks about this, please let me know so I can reference them and give due credit.)

Lately some educational institutions have begun issuing degrees in system engineering, but in general we’re usually electrical or software engineers who transition into the field.  Sometimes we are specialized analysts who can synthesize new technical information.   Sadly, very few system engineers are skilled in information management.  For example, how many system engineers know what a relational database is, or what fourth normal form is?  Raise your hands!

I know many requirements engineers who cannot even use an important tool of their trade, DOORS, relying on others to perform input and to export reports to Excel.  Engineers can use tools like Excel, yet must rely on others to help them get a DOORS export.  Heaven forbid they use Access or MySQL to manage information. Even fewer probably know how to use Microsoft Query to extract and cross-reference information.
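
To show how low the bar really is, here is a toy, in-memory relational example using Python’s built-in sqlite3 module.  Nothing here is DOORS- or vendor-specific; the table layout and sample data are invented purely to illustrate the kind of cross-reference query an engineer could run directly:

```python
import sqlite3

# A toy, in-memory relational store for requirements and components.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE component   (id TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE requirement (id TEXT PRIMARY KEY, req_text TEXT, component_id TEXT);
""")
db.executemany("INSERT INTO component VALUES (?, ?)",
               [("C1", "Fuel Pump"), ("C2", "Navigation Unit")])
db.executemany("INSERT INTO requirement VALUES (?, ?, ?)",
               [("R1", "The pump shall deliver 12 L/min.", "C1"),
                ("R2", "Position error shall be under 5 m.", "C2"),
                ("R3", "The pump shall tolerate dry running.", "C1")])

# Cross-reference: how many requirements are allocated to each component?
for name, count in db.execute("""
        SELECT c.name, COUNT(r.id)
        FROM component c LEFT JOIN requirement r ON r.component_id = c.id
        GROUP BY c.name"""):
    print(name, count)
```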

MBSE as Information Management

I believe MBSE is an industry attempt to do more information management for system engineering.  MBSE attempts to capture system engineering information in a single model and database.  Unfortunately MBSE stops well short of fulfilling its promise.  In general MBSE models will address architectural, behavioral and structural information, but fail to address other aspects of system engineering information.   Most MBSE tools, especially those based on SysML, cover a limited territory of information concepts.  Often these tools address the end product itself, and not the full field of information concepts that system engineers deal with.  Other tools are used to address other information, such as requirements, planning, testing, trade studies, simulations and visualization.  Although these tools have their own models, seldom are the models well integrated.

The Map and the Territory

It should be obvious that there is a very large territory of information that system engineering covers.  When communicating, we do so in the context of the map we’ve created for that information territory.  This can be an issue if somebody in a different context doesn’t understand the map we use.  If the map omits critical information for the other party, then the communication fails.  If the map presents information in an unusable format, communication fails.  If the terminology and concepts are incompatible, communication fails.  MBSE might provide a limited map that uses common symbols, but much of the remainder of the map only exists in our heads, making it difficult to share.

How do you determine what information other people need?  How do you anticipate their queries?  How do you store and present information in a flexible manner?  These are some of the issues that should be addressed to fulfill the information management needs of system engineering.

Do You Have an Information Management Problem?

Many organizations will dismiss this idea out of hand.  “We’re doing just fine information-wise, thank you.”  Perhaps your organization hasn’t fully examined the issue.  How do you know you have an information management problem in your system engineering department?  With a tip of the hat to Jeff Foxworthy, here are some real world clues you might have an information management problem.

If you’re not doing MBSE, you might have an information management problem.

If your engineers manipulate requirements in Excel and you have separate administrators who handle the DOORS database, you might have an information management problem.

If a requirement in DOORS has an attribute called “Proposed Change”, you might have an information management problem.

If your information is linked to your MBSE database solely via a text attribute called “Allocated CSCI”, you might have an information management problem.

If you don’t understand the difference between a configuration item and a component, you might have an information management problem.

And my personal favorite:  If a requirement in DOORS has over 100 attributes, you definitely have an information management problem.

Part 2 is available here.

Learn when the latest articles in this series are posted – Join the vmcse.com mailing list

How a Document-Based Approach Differs from Model-Based

Part 2, Addressing the Differences

In a previous post I derived a definition of MBSE.  The challenge that often comes up is, “Aren’t configuration controlled specification documents and requirements databases just another form of representation?  Another form of model?”  Indeed, they were the basis for models in the past, because in many cases the models resided only in the brains of the customers or designers.  The only way to communicate those models was through human language and pictures, hence specifications.  Nowadays specifications fail to fulfill modern systems engineering needs for a number of reasons.

First of all, specifications are not visual representations, but descriptive representations.  The system is represented in words, with an occasional graphic to either clarify, or possibly confuse, the description.  Descriptions rely on the underlying written language.  Written language is inherently flexible in interpretation.  Grammar is often flexible and dynamic, although more so in spoken language. The underlying information can easily be interpreted multiple ways, depending on the recipient.   Additionally, there are no rules on how to describe a system using written language and only the most simplistic rules on the structure of a formal description.

In MBSE, the language is tightly controlled so the possibility of misinterpretation of the model is greatly reduced.  Languages for MBSE, such as SysML, have specific rules for how to represent elements in the system and how language elements can interrelate.  Furthermore, the graphics generally are coupled to the language, further reducing ambiguity.

Secondly, in written language the strategic information is merely described; the information has no form itself.  A description of an element within the system might require many descriptive statements.  An engineer can’t point to a single form of that element.  At best, the engineer can point to a loose grouping of text that describes the element.  Textual representations fail to give form to the information.  In MBSE, the system elements and strategic information are individual entities in some type of repository or database.  Those entities carry their characteristics with them.  A repository of elements and information is what really makes MBSE “model based.”
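
To make the contrast concrete, here is a tiny sketch (the pump and its numbers are invented) of the same information as free text versus as an entity that carries its own characteristics:

```python
# As free text, the information has no form of its own:
spec_paragraph = ("The fuel pump, located in the aft bay, shall deliver 12 L/min "
                  "at 28 VDC and report its status over the 1553 bus.")

# As an entity in a repository, the same information carries its characteristics with it:
pump_element = {
    "id": "ELEM-017",
    "name": "Fuel Pump",
    "location": "aft bay",
    "flow_rate_lpm": 12,
    "supply_voltage_vdc": 28,
    "interfaces": ["MIL-STD-1553"],
}

# The entity can be queried, related and checked; the paragraph can only be read.
assert pump_element["flow_rate_lpm"] == 12
```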

For these two main reasons, legacy document-centric approaches to systems engineering simply do not meet the intent of modern model based approaches.  Yet, let’s face it; textual requirements are not going away any time soon.

The Legacy Runs Deep

Systems engineers still will use textual requirements for the foreseeable future.  It’s much easier to create a legal contract around a textual description than a graphical representation, even if the representation resides in a controlled database.  It’s much easier to develop verification procedures and checklists around single textual statements than around pictures.  Under MBSE the textual requirements get de-emphasized instead of being dropped completely.

Under MBSE, textual requirements should be derived from the content of the model.  If the tools to create that model were chosen well, the syntax of the model should be well-formed and translate into textual representations fairly easily.
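
As a rough sketch of what “derived from the model” can mean in practice, the snippet below renders hypothetical model elements as textual “shall” statements.  Real tools do this with far richer templates; the element structure and wording here are mine:

```python
# Hypothetical model elements from which requirement text could be generated.
allocations = [
    {"component": "Fuel Pump",       "behavior": "deliver fuel at 12 L/min"},
    {"component": "Navigation Unit", "behavior": "report position within 5 m"},
]

def generate_requirement(alloc: dict) -> str:
    """Render one model element as a textual 'shall' statement."""
    return f"The {alloc['component']} shall {alloc['behavior']}."

for alloc in allocations:
    print(generate_requirement(alloc))
# The Fuel Pump shall deliver fuel at 12 L/min.
# The Navigation Unit shall report position within 5 m.
```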

Summary

Documentation is inevitable, but under MBSE it changes from being the end goal of the systems engineer to being reports that describe the underlying system entities.  The specifications and other documents report on the model rather than being the model.  This is the essential difference between document-based systems engineering and MBSE.

 

Learn when the latest articles are posted – Join the vmcse.com mailing list

What is MBSE?

Part 1, Deriving a Definition

Every field of technology has its buzzwords and fads.  Systems engineering is no different.  The latest buzzword is Model-Based Systems Engineering, or MBSE for short.  Is MBSE just a buzzword, or a real change in how systems engineers do their jobs?  Not surprisingly, there are still systems engineers who don’t understand what MBSE is.

A key reason that some SEs still have a blank stare on their faces whenever MBSE comes up is that many don’t understand what the term really means.  They can expand the acronym, but still not have an understanding.  For many systems engineers, MBSE exists only in contrast to legacy systems engineering, which was document based.  A document-based approach was what most experienced systems engineers learned on the job, and have used throughout their careers. Without a real understanding of MBSE, systems engineers will return to what they know: textual documents, PowerPoint and Visio graphics.

To move SE forward to an MBSE mindset, we need to resolve this issue of understanding; we need to define what MBSE really is.  Let’s walk through a process of finding or deriving a definition for MBSE.  Along the way we might learn a few things too.

Unfortunately, when you have a buzzword or phrase with vague terms in it, you really have no choice but to delve into the definitions of those terms.  When in doubt about a definition, I first go to either a recognized authority, or the dictionary.  In this case, we can look to INCOSE as a recognized authority. INCOSE’s SE Vision 2020 provides this definition of MBSE:

“…the formalized application of modeling to support system requirements, design, analysis, verification and validation activities beginning in the conceptual design phase and continuing throughout development and later life cycle phases. MBSE is part of a long-term trend toward model-centric approaches adopted by other engineering disciplines, including mechanical, electrical and software. In particular, MBSE is expected to replace the document-centric approach that has been practiced by systems engineers in the past and to influence the future practice of systems engineering by being fully integrated into the definition of systems engineering processes.”

INCOSE also provides a definition of systems engineering:

“Systems Engineering is an interdisciplinary approach and means to enable the realization of successful systems. It focuses on defining customer needs and required functionality early in the development cycle, documenting requirements, then proceeding with design synthesis and system validation while considering the complete problem: Operations, Cost & Schedule, Performance, Training & Support, Test, Disposal, Manufacturing.

Systems Engineering integrates all the disciplines and specialty groups into a team effort forming a structured development process that proceeds from concept to production to operation. Systems Engineering considers both the business and the technical needs of all customers with the goal of providing a quality product that meets the user needs.”

Unfortunately, these definitions don’t help very much.  They essentially say only that MBSE is an application of modeling that supports the process of systems engineering.  So, what is modeling, or a model for that matter?  This time we can go to the dictionary.  Merriam-Webster has many definitions for “model”, but these seem to be the most pertinent:

Model; noun:
3: structural design
4: a usually miniature representation of something; also: a pattern of something to be made
11: a description or analogy used to help visualize something (as an atom) that cannot be directly observed
12: a system of postulates, data, and inferences presented as a mathematical description of an entity or state of affairs; also: a computer simulation based on such a system

For our purposes, I’ll combine these and synthesize one big definition:

Model; noun: a representation, generally visual, serving as a pattern of some thing to be made, including information such as the structure, data, states, behavior and inferences about the thing

We have a pretty good idea of what a model is now; it is the representation of important information about a system. That information will be unique to the specific system based on the purpose of the system, unique technology,  regulatory issues, and schedules, among other constraints.  The information will serve to plan, organize, direct and control the lifecycle development of the system, with an emphasis on design and development in the case of systems engineering.  We might even describe that information as strategic.

Still, there’s something missing.

Pinned to my office wall is a visual representation of a 10 step process to follow to perform systems architecture.  It’s a visual representation of a pattern of steps, and it captures strategic information, but it’s still just a collection of pixels printed on a large sheet of paper. I can manipulate the pixels to adjust size, shape and color, but I can’t manipulate the strategic information.  The information has no independent form.

The representation is just a means to visualize the information.  In order to manipulate the information underlying the representation, the information needs to have its own form.  It needs to exist in a repository, and there needs to be a single controlled copy of it.  Furthermore, it needs to have a syntax and meaning to it so that engineers all interpret it the same way.  The information needs to be “formalized,” which is supported by definitions in the Merriam-Webster dictionary.  (Reviewing the dictionary in this case is an exercise left for the reader.)

If we accept this derivation process, combining definitions of modeling with systems engineering, then our definition becomes

MBSE: the process of creating formalized representations, generally visual, that 1) serve as patterns for systems to be made and 2) capture strategic information about the systems, in order to realize those systems successfully.

Personally I think this is a better definition than INCOSE’s, but there is some pride in authorship.

The principles behind MBSE do change the way SEs approach the discipline.  The approach radically shifts from writing descriptive documents to an exercise of managing critical information that describes the system patterns.  Documentation is inevitable, but under MBSE it changes from being the end goal of the systems engineer to being reports that describe or visualize the underlying information architecture.  The documents report on the model.  The quality of the information in the model and its organization are the new goals of model based systems engineers.

The next time you need to explain what MBSE is, you have not only a better definition to use, but you should have a better understanding of how the definition came about.  For that matter, if you want to challenge the definition, you now have a basis for creating your own.

The typical challenge to this definition is addressed in another post.

 

Learn when the latest articles in this series are posted – Join the vmcse.com mailing list