Simulated Digital Shrapnel: Using Mission-Level M&S to Quantify Cyber Survivability in Full-Spectrum Environments

Modern warfighting environments are increasingly complex and characterized by a convergence of kinetic and nonkinetic threats that challenge the survivability of aircraft and weapon systems across multiple domains. To help address this challenge, Congress mandated in the 2022 National Defense Authorization Act that the Department of Defense (DoD) expand survivability and lethality testing to include evaluation against a range of threats, including kinetic; cyber; electromagnetic spectrum (EMS); directed energy (DE); and chemical, biological, radiological, and nuclear (CBRN) effects. The Act also defined full-spectrum survivability as “a series of assessments of the effects of kinetic and nonkinetic threats on the communications, firepower, mobility, catastrophic survivability, and lethality of a covered system” [1].

Meeting this mandate requires the ability to integrate different threat models, data sources, and testing results into a coherent analytical framework. Most cyber survivability and cybersecurity analyses use methods such as qualitative analysis and control selection to manage risk, which do not easily integrate with the survivability analysis techniques used in more traditional threat domains, such as kinetic weapons. Cyber threats should instead be treated more like kinetic survivability threats, using an approach such as that of Aircraft Cyber Combat Survivability (ACCS) [2].

Once a similar analysis methodology is used for cyber, the different threat areas still need to be combined. Modeling and simulation (M&S) provides a practical means to achieve this integration [3]. M&S environments can be cost-effective, repeatable, and scalable and can serve as a potential integrator for full-spectrum survivability evaluation. For example, M&S can capture the interdependencies between threat areas, such as when a cyber attack disables a radar or electronic countermeasures system, thus rendering an aircraft more susceptible to subsequent kinetic engagements.

Mission-level simulations, such as the Advanced Framework for Simulation, Integration, and Modeling (AFSIM), provide a viable means to measure cyber survivability within a full-spectrum survivability environment. This approach can provide mission-based quantitative analysis to support both system and mission engineering decisions, and the results can be validated using a combination of testing and exercises.

Risk Model

Effective measurement of cyber survivability requires a risk model that quantitatively links cyber threats to mission outcomes. The framework used in this article adopts the traditional definition of risk used in the DoD [4]:

Risk = Impact × Likelihood

Risk can then be measured in terms of Expected Mission Loss (EML), the product of impact and likelihood. Within the context of aircraft and weapon system survivability, impact in the preceding equation is the mission-level consequence of a successful cyber attack, such as degraded mission effectiveness, loss of capability, or mission failure. Likelihood represents the probability that an adversary successfully executes a cyber attack capable of producing those effects.

These two major components can then be broken down into four components, yielding the four-factor (4F) risk model shown in Figure 1 [5].

Figure 1. Four-Factor Risk Model.

Single system loss captures the percentage of mission capability that is lost if a cyber attack against a system is successful, while percent systems impacted is the percentage of systems that are expected to be affected by an attack. For example, an attack might affect only one system, half of the fielded systems, all the fielded systems, or any other percentage. Likelihood includes two terms: likelihood of attack launch and likelihood of attack success. Likelihood of attack launch represents the likelihood that an adversary will choose to launch a particular cyber attack. Likelihood of attack success is the probability that an adversary will achieve some mission impact given that the adversary chooses to launch a particular cyber attack. This parameter can be challenging to determine, but we can use probabilistic attack trees to further decompose and analyze this probability.
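The 4F multiplication can be sketched in a few lines of Python; all parameter values below are illustrative assumptions, not figures from this analysis.

```python
# Minimal sketch of the four-factor (4F) risk model computation.
# All parameter values are illustrative assumptions, not real data.

def expected_mission_loss(single_system_loss: float,
                          pct_systems_impacted: float,
                          p_attack_launch: float,
                          p_attack_success: float) -> float:
    """EML = (impact terms) x (likelihood terms)."""
    impact = single_system_loss * pct_systems_impacted
    likelihood = p_attack_launch * p_attack_success
    return impact * likelihood

# Hypothetical attack: removes 40% of mission capability on half the fleet,
# launched with 30% probability, and succeeds 10% of the time once launched.
eml = expected_mission_loss(0.40, 0.50, 0.30, 0.10)
print(f"EML = {eml:.3%}")  # prints: EML = 0.600%
```

Because all four factors multiply, a small likelihood can keep EML low even when the impact terms are severe, which is why the likelihood of attack success deserves the structured decomposition described next.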

Probabilistic Attack Trees

Traditional cyber risk assessments often rely on ordinal scoring methods, producing values such as “low,” “medium,” or “high,” which lack transparency, repeatability, and quantitative rigor. Probabilistic attack trees provide a more structured and defensible approach to quantifying the likelihood of a cyber attack’s success by representing the logical relationships between potential attack paths, system vulnerabilities, and adversary objectives. Each leaf in an attack tree represents a discrete event or precondition necessary for a cyber attack to succeed, while branches depict how those events combine to produce an overall probability of mission impact [6].

Probabilistic attack trees can be created using three primary sources of information: (1) historical or design-based data, which provide empirically grounded probabilities for events such as insider threats or component failures; (2) simple linear models, which leverage human-informed statistical models to estimate attack success probabilities when direct data are unavailable; and (3) subject-matter expert (SME) assessments, which fill residual knowledge gaps using structured elicitation methods [6]. An example attack tree is shown in Figure 2.

Figure 2. Example Notional Probabilistic Attack Tree.
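A minimal evaluator for trees of this kind can be sketched as follows, assuming independent leaf events; the tree structure and probabilities below are notional and not taken from Figure 2.

```python
# Sketch of probabilistic attack tree evaluation with AND/OR gates,
# assuming independent leaf events. Structure and values are notional.

def p_and(children):
    """All child events must occur (AND gate): multiply probabilities."""
    p = 1.0
    for c in children:
        p *= c
    return p

def p_or(children):
    """Any child event suffices (OR gate): complement of all failing."""
    q = 1.0
    for c in children:
        q *= (1.0 - c)
    return 1.0 - q

# Notional tree: the attack succeeds if the adversary gains access
# (via supply chain OR a maintenance laptop) AND the payload executes.
p_access = p_or([0.05, 0.02])        # two alternative access paths
p_success = p_and([p_access, 0.60])  # access AND payload execution
print(f"P(attack success) = {p_success:.4f}")
```

The leaf probabilities would come from the three sources named above (historical data, simple linear models, or SME elicitation), and the root value feeds the likelihood-of-attack-success term of the 4F model.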

Within the risk model framework, attack tree likelihood of attack success is multiplied by the other three 4F risk model terms to produce EML values. These parameters quantify the anticipated mission degradation due to cyber effects, allowing direct comparison with other threat domains, such as kinetic or EMS attacks. When validated through test and exercise data, these EML results provide a robust mechanism for linking cyber survivability metrics to mission risk.

MQ-99 Berserker Example

The MQ-99 Berserker is a completely notional unmanned aircraft system (UAS) created as a plausible early-concept vehicle for illustrating cyber survivability analysis and design tradeoffs. In the conceptual stage of design, the MQ-99 is a medium-sized, long-range autonomous UAS with an internal payload bay able to carry four GBU-39 small-diameter bombs (SDBs) or two AIM-120 advanced medium-range air-to-air missiles (AMRAAMs) [7]. The vehicle is launched from a transportable Electromagnetic Launch Runway System (ELRS) and supports mission updates from a ground station or airborne commander. In air-to-air loadouts, the Berserker acts as a force multiplier, loitering and receiving targeting direction from an airborne commander.

The conceptual architecture was built out in a model-based systems engineering tool and emphasizes cyber-physical interfaces: mission and flight computers, navigation, communications, propulsion and fuel-management controllers, and the ground control station. To illustrate defendable and resilient design, the concept includes a hardened cyber sentinel responsible for traffic monitoring, device resets, and transition of flight-critical systems into backup modes when malicious activity is detected.

To support quantitative risk analysis, a full set of probabilistic attack trees was constructed for the MQ-99. These attack trees map multiple attack pathways into explicit leaf events scored using historical data, simple linear models, and SME elicitation. The full set of attack trees contained more than 350 nodes and yielded 21 distinct cyber attacks grouped into 8 risk groups.

Mission-Focused M&S

Scenario Modeled

The 21 cyber attacks were incorporated into a larger full-spectrum threat scenario, including a range of kinetic and nonkinetic threats. This scenario was unclassified and used purely notional threat parameters. The modeled scenario was a strike mission simulated in AFSIM in which two MQ-99 Berserkers penetrated contested airspace to engage six surface targets using SDBs. The unmanned strike package was tactically controlled by a manned fighter, with an E-3 Sentry providing battle management and sensor support. Precise threat placement was randomized for each trial to sample a wide range of threat geometries.

The defended environment included 2 medium-range surface-to-air missiles (SAMs), 1 short-range SAM, 2 high-power microwave (HPM) emitters, 1 high-energy laser (HEL), 1 GPS-jamming electronic warfare (EW) system, and the 21 discrete cyber-attack events targeting the MQ-99s. The scenario was architected for high-volume Monte Carlo experimentation: individual threat elements or entire threat groups could be toggled on or off to isolate contributions to mission outcomes.

Each simulation run recorded the number of targets destroyed and number of Berserkers lost. This allowed easy aggregation into mission-performance statistics and conditional probabilities. The analysis conducted more than 1.2 million Monte Carlo runs to reduce error margins, producing statistically robust distributions of mission success and platform attrition to support sensitivity analysis, EML estimation, and risk attribution across specific threat domains.
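The aggregation of per-run outcomes into mission statistics with a 95% margin of error can be sketched as follows; the per-trial outcome model and probabilities below are invented stand-ins for AFSIM trial data, not the actual scenario values.

```python
# Sketch of Monte Carlo aggregation into mission-performance statistics.
# The trial model is a made-up stand-in for an AFSIM run.
import math
import random

random.seed(0)

def run_trial():
    """Notional trial: each of 6 targets destroyed with p=0.7,
    each of 2 Berserkers lost with p=0.2 (assumed values)."""
    targets = sum(random.random() < 0.7 for _ in range(6))
    lost = sum(random.random() < 0.2 for _ in range(2))
    return targets, lost

N = 100_000
results = [run_trial() for _ in range(N)]
mean_targets = sum(t for t, _ in results) / N
# 95% margin of error on the mean: 1.96 * sample std dev / sqrt(N)
var = sum((t - mean_targets) ** 2 for t, _ in results) / (N - 1)
moe = 1.96 * math.sqrt(var / N)
print(f"targets destroyed: {mean_targets:.3f} +/- {moe:.3f} (95%)")
```

Because the margin of error shrinks with the square root of the run count, driving it down far enough to resolve rare cyber events is what pushes the required number of runs into the millions.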

Noncyber Results

While the focus of this article is on the cyber results, noncyber threat effects are presented in Figure 3 for comparison. The blue bars on the left capture the improvement in mission performance, measured by the number of additional targets destroyed relative to the baseline case with all threats present. For example, when the GPS jammer was removed from the scenario, almost 10% more targets were destroyed. This approach of deleting one threat at a time was important because it captured each threat system's contribution in conjunction with all the other threats. The orange bars on the right capture the improvement in performance, measured by how many fewer MQ-99s were lost vs. the baseline case, and the black error bars present the 95% margin of error.

Figure 3. MQ-99 Threat Category Results.

In this notional scenario, it can be seen at a glance that the SAMs are the most impactful threats, although all the threats have some effect. Breaking the results down by individual threat system yields further insights. For example, the medium-range SAMs are most lethal to the MQ-99, while the short-range SAM has the greatest impact on targets destroyed since it intercepts many SDBs on their way to targets.
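The one-threat-at-a-time comparison behind these excursions can be sketched as follows; the baseline and excursion means below are invented for illustration only.

```python
# Sketch of one-threat-at-a-time excursion bookkeeping: compare each
# excursion (one threat removed) against the all-threats baseline.
# All run means below are invented illustrative values.

def pct_improvement(baseline_mean: float, excursion_mean: float) -> float:
    """Relative gain in mean targets destroyed when a threat is removed."""
    return (excursion_mean - baseline_mean) / baseline_mean * 100.0

baseline = 3.80  # hypothetical mean targets destroyed, all threats active
excursions = {"GPS jammer": 4.17, "short-range SAM": 4.45, "HEL": 3.91}

# Rank threats by how much mission performance improves when each is removed.
for threat, mean in sorted(excursions.items(),
                           key=lambda kv: pct_improvement(baseline, kv[1]),
                           reverse=True):
    print(f"{threat:16s} +{pct_improvement(baseline, mean):.1f}% targets destroyed")
```

The same bookkeeping applies to platform losses, yielding the paired blue and orange bars of Figure 3.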

Cyber Attack Modeling

For cyber attack modeling, likelihood inputs to AFSIM were specified as 90% confidence intervals, capturing uncertainty in each attack parameter, then converted into Gaussian mean and standard deviation values for Monte Carlo execution. Three key probabilistic inputs were defined for each cyber pathway: (1) the likelihood of attack success as determined from attack tree-derived probabilities, (2) the likelihood of adversary employment of that attack type during the mission, and (3) the percentage of Berserker systems expected to be affected should the attack succeed. The values of these inputs can be determined via intelligence-based estimates, cyber test data, the results of exercises involving the subject threats and systems, or SME judgments. These parameters were applied at run-time to determine whether an attack occurred as well as which systems were impacted.
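One common way to perform this conversion, assuming a symmetric interval on a normal distribution, can be sketched as follows; the interval values are illustrative, not from the study.

```python
# Sketch of converting a 90% confidence interval into Gaussian parameters
# for Monte Carlo sampling. Assumes a symmetric interval; values are notional.
import random

Z90 = 1.645  # two-sided z-score bounding the central 90% of a normal

def ci90_to_gaussian(lo: float, hi: float):
    """Return (mean, sigma) such that [lo, hi] spans the central 90%."""
    mean = (lo + hi) / 2.0
    sigma = (hi - lo) / (2.0 * Z90)
    return mean, sigma

# e.g., SMEs estimate likelihood of attack success between 2% and 10%:
mu, sigma = ci90_to_gaussian(0.02, 0.10)
# One run-time draw, clamped to remain a valid probability:
draw = min(max(random.gauss(mu, sigma), 0.0), 1.0)
print(f"mean={mu:.3f}, sigma={sigma:.4f}, sample={draw:.4f}")
```

Eliciting intervals rather than point estimates lets SMEs express their uncertainty directly, and the Monte Carlo draws then propagate that uncertainty into the mission-level results.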

The resulting cyber effects were then scripted into AFSIM as degradations to specific system functions, ranging from minor sensor inaccuracies to complete loss of navigation, communication, or weapon-release capability. A representative sample is shown in Table 1, which summarizes how some of the cyber attack effects were implemented in the simulation.

Table 1. Example Cyber Attacks Simulated in AFSIM.

These scripted mission-relevant degradations provided dynamic mapping of cyber effects to mission outcomes, linking probabilistic risk estimates to observable mission degradation within the AFSIM environment.

Cyber Survivability Results

The results from removing each of the 21 cyber attacks, one at a time, are detailed in Figure 4.

Figure 4. AFSIM 21 Cyber Attack Results.

Note that the scale of the Y axis is highly expanded; the top of the chart is only 2% EML. The black error bars represent the 95% margin of error, or the range within which the result would be expected to fall 95% of the time if the experiment were repeated numerous times. The large number of model iterations was required largely to shrink this error, since the probabilities associated with most of the cyber attacks were extremely small.

Several of the attacks, such as R1-0 and R3-1, predominantly affected the number of targets hit rather than MQ-99s lost, which makes sense, as R1-0 affected targeting accuracy and R3-1 inserted additional short-range SAMs in the target area that mostly targeted the incoming SDBs. R3-3 resulted in more MQ-99s being destroyed because it increased the probability of kill (Pk) of the medium-range SAMs. R6-6 looks strange, as fewer targets were hit but also fewer MQ-99s were destroyed when this attack was removed. This actually makes sense, as R6-6 was a false fuel-low warning that caused affected MQ-99s to turn around and return to base before entering the target area. Once again, all of these attacks were purely notional and not intended to represent any real attacks, but they do illustrate how a wide range of attacks can be modeled.

Additional Considerations

Verification and Validation

Results like those presented here, applied to real systems, will only be useful if they approximate what will actually happen in a contested cyber and full-spectrum threat environment. DoD policy rightly requires that any M&S model go through appropriate verification and validation, and these models should not be exempted.

However, cyber models are harder to compare to a baseline than physics-based models, which can be more easily verified with physical testing. Cyber testing is a good starting point to ensure that the probabilities reflect the real difficulty of the modeled attacks. Single attacks can also be scaled up on cyber ranges where attackers and defenders interact. Finally, and most importantly, the entire approach can be validated by applying it to large-force exercises, whose vulnerability periods normally include some without simulated cyber attacks and some with them. The model's results should be congruent with what is seen in such testing and exercises, and the models should be updated as appropriate, while noting that even large-force exercises are limited in their ability to replicate real combat.

Using With System and Mission Engineering

The results of this type of analysis can also provide a quantitative foundation for both systems engineering and mission engineering trade studies. Within systems engineering, the ability to model and quantify cyber and full-spectrum effects enables designers to evaluate alternative design approaches and mitigation strategies early in the life cycle, when changes are least costly and most impactful. To illustrate, a range of MQ-99 radar cross section (RCS) reductions was compared against the mission gain of designing a new SDB carriage mechanism that enabled carriage of six instead of four SDBs. Increasing the number of SDBs carried produced a greater mission gain (9.5% more targets hit and 5% fewer MQ-99s lost) than RCS reduction (1% more targets hit and 9% fewer MQ-99s lost). Simulating proposed design changes can help decision-makers optimize their available resources.

At the mission engineering level, the same simulation data can be extended to compare platform and capability options across different scenarios. Analysts can assess how combinations of manned and unmanned assets, alternative communication architectures, or new electronic protection measures influence aggregate mission success in contested environments. In this scenario, adding four air-launched decoys was simulated, along with increasing the accuracy of the SDBs and reducing the SDBs' RCS. On the mission engineering side, adding the air-launched decoys was the clear winner, resulting in 12.5% more targets hit and 18% fewer MQ-99s lost. All of these excursions, while purely notional, illustrate the types of tradeoff analysis that can link technical design choices directly to operational performance.

Conclusions

Mission-level simulation tools such as AFSIM provide significant potential for improving the evaluation of full-spectrum survivability. By integrating cyber, kinetic, electromagnetic, and directed-energy threats into a unified digital environment, AFSIM enables analysts to explore how combinations of threat domains interact to affect mission outcomes. A key technique in this approach is the systematic isolation of individual threat contributions, removing one threat at a time while holding the others constant, to quantify the marginal impact of each threat on overall mission success. This technique provides a transparent and repeatable framework for identifying dominant threat drivers and validating mitigation strategies.

This study illustrates that cyber survivability can be meaningfully measured within such a framework, provided that cyber attack mechanisms and likelihoods are quantified and their system-level consequences are well understood. Translating cyber effects into concrete events such as degraded targeting, communications loss, or false fuel indications allows cyber survivability to be analyzed alongside traditional threat types. Doing so transforms cyber vulnerabilities from abstract, qualitatively measured risks into quantifiable, mission-relevant risk, consistent with other established survivability analysis methodologies, and enables program managers to make better systems and mission engineering decisions.

However, the utility of M&S-based assessments does depend on verification and validation. The fidelity of modeled outcomes must be supported by empirical evidence derived from cyber testing, cyber ranges, and large-force live-fly exercises. Such events provide the operational data necessary to calibrate attack probabilities, effect magnitudes, and behavioral responses within the simulation environment. Establishing repeatable, data-driven correlations between simulated EML and observed test and exercise outcomes is therefore an essential next step.

Future work should focus on implementing this approach on real systems. The systems that are most appealing are those that already have robust M&S built, as adding the cyber effects to existing mission models should be relatively easy and inexpensive compared to starting a new M&S effort. Ultimately, validated mission-level simulation can evolve into a quantitative engine for survivability evaluation, providing the DoD with a consistent, scalable means to assess and enhance system resilience across all threat domains.

About the Author

Dr. William “Data” Bryant is a cyberspace defense and risk leader who is a Technical Fellow for Modern Technology Solutions, Incorporated (MTSI). His diverse background in operations, planning, and strategy includes more than 25 years of service in the Air Force, where he was a fighter pilot, planner, and strategist. Dr. Bryant helped create Task Force Cyber Secure, served as the Air Force Deputy Chief Information Security Officer, and helped to develop Aircraft Cyber Combat Survivability with Dr. Robert Ball. He holds multiple degrees in aeronautical engineering, space systems, military strategy, and organizational management and has authored numerous works on various aspects of defending cyber-physical systems and cyberspace superiority.

References

  1. 2022 National Defense Authorization Act, 117th Congress, §4172 (p. 2566) and §4173 (p. 2567).
  2. Bryant, W. D., and R. Ball. “Developing the Fundamentals of Aircraft Cyber Combat Survivability.” Parts 1–4, Aircraft Survivability, spring 2020 (part 1), summer 2020 (part 2), fall 2020 (part 3), and spring 2021 (part 4).
  3. Bryant, W., C. Fisher, D. Boseman, and J. Ivancik. “Digital Technology—A Universal Integrator—Enabling Full-Spectrum Survivability Evaluations.” Naval Engineers Journal, vol. 136, pp. 189–198, spring 2024.
  4. Committee on National Security Systems. Committee on National Security Systems (CNSS) Glossary. CNSSI No. 4009, Washington, DC, p. 169, 2015.
  5. Brown, A., W. Bryant, E. Moro, and M. Standard. “The Unified Risk Assessment and Measurement System (URAMS) Guidebook: Version 3.0.” Edited by W. Bryant, www.mtsi-va.com/weapon-systems-cybersecurity/, pp. 50–60, 2023.
  6. Bryant, W. D. “Predicting Cyber Attack Probability Using Probabilistic Attack Trees.” ITEA Journal of Test and Evaluation, vol. 46, no. 4, December 2025.
  7. Bryant, W. D. “The Unified Risk Assessment and Measurement System (URAMS) Guidebook: Version 2.0.” www.mtsi-va.com/weapon-systems-cybersecurity/, pp. 18–19, 2022.