By: William D. Bryant and Robert E. Ball


  • Learning Objective 3 — Describe the Three Primary ACCS Terms
  • Learning Objective 4 — Describe the Essential Actions and Events in the ACCS Kill Chain
  • Learning Objective 5 — Describe the ACCS Probabilistic Kill Chain


In Part 1 of this continuing series describing the development of the new Aircraft Cyber Combat Survivability (ACCS) design discipline (published in the spring issue of the Aircraft Survivability journal [1]), we examined the possibility that a cyber antiaircraft weapon could be developed and used effectively against modern, computer-controlled aircraft. Because of the similarities between the elements and operations of the postulated cyber antiaircraft weapon and those of the traditional kinetic energy (KE) antiaircraft weapons, such as guns with their ballistic projectiles and guided missiles with their proximity-fuzed high-explosive (HE) warheads, we used the existing fundamentals of the well-established Aircraft Combat Survivability (ACS) design discipline to begin the creation of a new ACCS design discipline. This new discipline, whose goal is the same as the ACS goal (the early identification and successful incorporation of those cyber survivability enhancement features [SEFs] that increase the combat cost-effectiveness of an aircraft), can then be used to design our military aircraft to better survive cyber antiaircraft weapon attacks.

We began our development of ACCS in Part 1 by identifying the three major elements of a cyber antiaircraft weapon based upon the analogous elements in the KE weapons. The three cyber weapon elements consist of (1) the software warhead, with its malware or malicious computer code, known as the weapon’s malfunction mechanism; (2) the weapon’s aircraft detection and tracking subsystem, referred to as the cyber radar; and (3) the warhead transporter subsystem, referred to as the cyber missile. This identification was followed by the description of how the warhead on a cyber weapon can kill an aircraft in flight by exploiting the aircraft’s internal cyber systems to execute the malicious computer code that causes internal critical components to malfunction, leading to one or more critical component “kills” within the flight- or mission-critical aircraft operational systems.

In this second part of the series, we first describe the actions and events that occur when one KE weapon attacks one aircraft, known as a one-on-one scenario in a man-made hostile KE environment. This is followed by a description of the ACS kill chain, consisting of the sequence of six essential scenario events that lead to a kill of the aircraft, either mission or attrition, which we then convert into the ACS probabilistic kill chain by introducing the probability that the event occurs, and the complementary probability that the event does not occur, to each of the events in the kill chain. Next, we define three of the most fundamental terms in ACS and their extension to the analogous terms in ACCS before turning to a description of the various cyber elements that could be involved in a cyber attack on an aircraft in a one-on-one scenario that takes place in a man-made hostile cyber environment.

With all of that in place, we are then able to develop the ACCS kill chain and corresponding probabilistic kill chain, which describe the sequential process of a cyber attack on an aircraft and can be used to determine the probability that the attack was successful in causing either a mission kill or a permanent kill of the aircraft. Finally, we make a brief examination of the general use of probabilistic kill chains in survivability modeling and simulation (M&S) and discuss the validity of the numerical probabilities used, with some recommendations on a more effective application.

(Note that the material describing the ACS terms and concepts throughout this article is largely taken from the second edition of the textbook The Fundamentals of Aircraft Combat Survivability Analysis and Design [2].)


For a KE weapon, a typical one-on-one scenario begins with our single aircraft flying toward, into, or over territory defended by an active KE antiaircraft weapon—for example, a ground-based guided missile system (such as the one shown in Figure 1) that is searching for aircraft to attack using radar, the weapon system’s target detection and tracking element. If the aircraft is detected by the searching weapon, the detected aircraft’s location is then tracked, and the aircraft is identified and targeted (if hostile).

Figure 1. A 2K12 Kub Surface-to-Air Missile Launch (U.S. Army Photo).

Subsequently, if the targeted aircraft enters into the weapon’s engagement zone, the enemy may engage the aircraft by launching a guided missile, with its HE warhead, toward the detected aircraft. The launched guided missile (the warhead transport element) will then fly out toward the targeted aircraft. Eventually, the missile may come sufficiently close to an intercept with the aircraft such that the proximity-fuzed HE warhead on the missile may detonate. One or more of the fragments and the blast from the detonation (the warhead damage mechanisms) may then physically hit the aircraft, or the missile with an unexploded warhead may make a direct hit on the aircraft, followed by the detonation of the warhead on or inside the aircraft. The final phase of the attack scenario consists of the response of the aircraft to all of the damage mechanism hits.

In the end, the aircraft either survives the one-on-one encounter with the KE weapon (and continues on its mission unimpeded) or doesn’t (and is instead forced to abort the mission due to damage suffered by one or more of the aircraft’s mission-critical components—which is a mission kill—or eventually crashes due to damage suffered by one or more flight-critical components and is permanently killed—which is an attrition kill). Those aircraft that are forced to abort the mission return to base, where any combat-caused physical damage may be repaired.

In ACS, the one-on-one scenario just described is said to take place in a man-made hostile KE environment. This environment is dynamic over time, starting with the active search for aircraft to attack, and it includes the attacker’s antiaircraft weapon, any supporting equipment (such as command and control elements), and the actions and any consequences of the actions taken during the scenario (such as searching for aircraft and firing a gun or launching a missile at a detected hostile aircraft) that must be contended with by the aircraft if it is to survive while operating in this potentially lethal hostile environment.


The classic one-on-one aircraft kill chain for the KE antiaircraft weapon consists of the time-wise sequence of the weapon’s actions and the subsequent scenario events that are essential to causing an aircraft kill. The typical one-on-one kill chain is shown in Figure 2a, where the essential scenario events are in blue on the left side of the figure, with time starting at the top and moving down. Likewise, the weapon’s actions are in black on the right side of the figure. The time duration between any two sequential events is referred to as a phase or chain link between two events. The assumption is made in this kill chain that the warhead transporter makes a direct hit on the aircraft.

Figure 2a. ACS Kill Chain.

As shown in the figure, the weapon must first be active and searching for any aircraft that enter into the defended area. Second, the active weapon must successfully detect and identify any intruder aircraft. Third, the detected hostile aircraft is tracked; a fire control solution is obtained; and a KE warhead transporter, such as a guided missile with its onboard warhead, must be fired or launched toward the aircraft. In the fourth phase, the fired or launched warhead transporter must fly out toward an intercept with the aircraft. Fifth, given a successful intercept, the warhead transporter must hit the aircraft with the warhead’s damage mechanisms, which can either be a direct hit by the transporter or an intercept properly oriented and close enough to the aircraft such that the warhead is effective, as in the detonation of a proximity-fuzed HE warhead such that one or more warhead damage mechanisms hit the aircraft. Sixth, and finally, the hit aircraft must be killed by the warhead’s damage mechanism hit(s).

Note that, as discussed in Part 1, the term “killed” can mean a “mission kill” that prevents the aircraft from accomplishing its mission or a permanent or “attrition kill,” where the aircraft is physically destroyed either by the hit(s) or by a subsequent crash. Thus, moving down the KE kill chain, the six essential events can be seen—from the initial activation of the weapon in preparation for the entry of any aircraft into the defended area; to the detection and identification of an aircraft; to the engagement of the aircraft (the firing or launch of a warhead transporter); to the transporter intercept of the aircraft; to the hit of the aircraft by the warhead damage mechanisms; and, finally, to the kill of the aircraft. If, after successfully proceeding down the chain to a particular phase, the next event does not occur, then none of the following actions and events can occur because the “chain” is said to be broken at that link and the aircraft is not killed (i.e., the aircraft has survived the attack).


For our aircraft to be more survivable when flying in a man-made hostile KE environment, we need to design and operate it so that the sequence of kill chain events is less likely to be successfully completed. However, before we can turn to identifying the aircraft’s potential SEFs that may be able to break the chain, we first need a way to measure how effective each feature is in breaking the chain so that we can balance cost and performance or effectiveness.

What we need is a measure of the likelihood that an attack on an aircraft by an enemy KE weapon will be successful from the enemy’s point of view. However, whether each of the events in the kill chain will have a successful outcome in a particular case is unknown in advance; there are simply too many unknown variables. As a result, we need to rely on event outcome probabilities. To accomplish this, we add to each event in the kill chain the probability that the event occurs, given the occurrence of the previous event, and the complementary probability that the event does not occur, given the occurrence of the previous event. Figure 2b illustrates the series of kill chain events with the accompanying probabilities of success and failure and is an example of a tree diagram in probability theory. It is referred to here as the probabilistic kill chain or the kill chain with probabilities.

Figure 2b. ACS Probabilistic Kill Chain.

The two conditional event outcome probabilities for each event are mutually exclusive and exhaustive. For example, in the first event in the kill chain—weapon active—P_A represents the probability that the weapon is active and is searching for aircraft. Conversely, 1 – P_A = P^C_A, which is the complementary probability that the weapon is not active and therefore is not searching for aircraft. The question is asked, “Weapon Active?” If yes, then the tree diagram moves down to the right to the next branch in the tree. If no, the tree diagram moves down to the left, and the aircraft survives. The important aspect of this phase is that an aircraft survives while operating in a man-made hostile environment when the defending KE weapon is not active and, therefore, cannot detect and eventually kill our aircraft. So, any friendly operations that suppress or destroy threat weapons before they can attack an aircraft are SEFs.

The second event in the kill chain involves the two possible outcomes, with complementary probabilities, of an aircraft being detected, given that the active weapon is searching, P_D|A, and an aircraft not being detected, given that the weapon is searching, 1 – P_D|A = P^C_D|A. There are, of course, a number of aircraft design features and operational actions that can increase the probability that the weapon will not detect the aircraft (e.g., low aircraft signatures and the actions of stand-off electronic countermeasure equipment, such as noise jamming). The subsequent events in the chain have a similar format. Finally, note in Figure 2b that the first five events capture the susceptibility phase of ACS and the last event covers the vulnerability phase (defined in Table 1).
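The probabilistic structure just described can be sketched numerically: the aircraft is killed only if every event in the sequence occurs, so the overall probability of kill is the product of the conditional event probabilities, and the chain is "broken" whenever any one factor fails to occur. A minimal sketch in Python, using purely hypothetical values (none drawn from any real weapon system):

```python
# Hypothetical conditional event probabilities for the six-event
# ACS kill chain (illustrative values only, not real system data).
p_active = 0.90      # P_A:   weapon is active and searching
p_detect = 0.80      # P_D|A: aircraft detected, given active weapon
p_launch = 0.75      # P_L|D: missile launched, given detection
p_intercept = 0.60   # P_I|L: intercept, given launch
p_hit = 0.50         # P_H|I: damage mechanisms hit, given intercept
p_kill = 0.40        # P_K|H: aircraft killed, given hit

# The chain completes only if every event occurs, so the overall
# probability of kill is the product of the conditional probabilities.
p_kill_overall = (p_active * p_detect * p_launch *
                  p_intercept * p_hit * p_kill)

# Complement: the aircraft survives if the chain breaks at any link.
p_survive = 1.0 - p_kill_overall
print(f"P(kill) = {p_kill_overall:.4f}, P(survive) = {p_survive:.4f}")
```

Note how even with individually high event probabilities, the product is small; breaking any single link (driving one factor toward zero) is enough to make the aircraft survive, which is the mathematical rationale behind SEFs.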

It is important to note here that, while this probability tree diagram is simple and mathematically sound, we do not mean to imply that it is simple to determine the actual numerical probability that an aircraft will be killed. Determining each probability in the chain can be extremely difficult. Furthermore, the engagement considered represents a simple one-on-one scenario, but many engagements are far more complex and involve multiple weapons and aircraft. The main value of this model is that the more we know how a weapon kills an aircraft, the more likely we can develop SEFs that enhance an aircraft’s survivability while operating in a man-made hostile environment. With this greater knowledge, we hope to reduce any antiaircraft weapon’s effectiveness by the maximum extent possible through the use of cost-effective SEFs. Although this probabilistic kill chain model is not meant to be a way to determine precise probabilities, if used appropriately and with caution, this model can certainly help with M&S and the selection of the most effective SEFs.


Table 1 contains definitions of three fundamental ACS terms (left column) and the analogous ACCS terms (right column). These definitions are the foundation of the ACS discipline.

We need to specifically address the meaning of the ACCS term “aircraft cyber vulnerability” here because of the different ways the traditional cybersecurity community uses the word “vulnerability.” One of the most fundamental concepts of ACS is that it breaks combat survivability into two parts—reducing aircraft susceptibility (which is an indication of how easy it is to hit an aircraft with a KE warhead’s damage mechanisms) and reducing aircraft vulnerability (which is an indication of how easy it is to cause a mission or attrition kill once an aircraft has been hit). This distinction between “how easy it is to hit an aircraft” and “how easy it is to kill an aircraft given that it is hit” is essential not only in understanding ACS but in developing ACCS.

We have noticed that other disciplines (such as cybersecurity) have not adopted this two-phase model. The Committee on National Security Systems (CNSS) Glossary [3] used by Department of Defense cybersecurity professionals has no definition for “susceptibility” but instead defines “vulnerability” broadly enough to cover both “before a hit occurs” and “after a hit occurs” during an attack. Vulnerability is defined there as a “weakness in an information system, system security procedures, internal controls, or implementation that could be exploited by a threat source.” This lack of distinction between avoiding getting hit and withstanding a hit contributes to a common issue with traditional information technology (IT) systems—they may have strong outer defenses, but once an adversary gains access, there is normally little in place to prevent attackers from achieving their objectives.

A recognition of this difficulty can be seen in the increasing discussion of “cyber resiliency.” In some modern discussions of traditional IT systems, “cybersecurity” can be thought of as being focused on reducing cyber susceptibility, while “cyber resiliency” is roughly analogous to reducing cyber vulnerability. In ACCS, we chose to achieve survivability by reducing both cyber susceptibility and cyber vulnerability primarily because there are very different engineering design considerations between designing a system to not get cyber hit and designing a system to continue functioning at an acceptable level after getting cyber hit. It appears that this distinction is also now being recognized by many in traditional cybersecurity.


Figure 3 illustrates the man-made hostile environment associated with a cyber weapon in a one-on-one scenario. The aircraft in the center of the figure is the potential target for the cyber weapon, and the cyber attacker can send a cyber missile on any of the available cyber pathways (shown by the dashed and solid lines). The aircraft’s “attack surface” shown is a generic example of some of the types of pathways available to cyber attackers, but the details will of course change for specific aircraft, scenarios, and configurations.

Figure 3. Typical Aircraft Attack Surface.

For ACS, the attack surface itself is the actual exterior surface of the aircraft that can be hit by a damage mechanism. For the cyber weapon, the attack surface typically consists of all the connection points between the aircraft and some external location in cyberspace, whether that is a wired connection or an antenna listening to a portion of the electromagnetic (EM) spectrum. These connection points then lead to the various internal cyber systems on the aircraft. The attacker does not have to send a complete warhead through the same access point; the attacker may instead send parts of it through different access points and potentially activate or trigger the assembled warhead through another, completely different access point.

The attacker also does not have to immediately activate the weapon’s malfunction mechanism once the launched cyber missile hits the aircraft’s attack surface. The cyber missile can be sent at any time before or during the aircraft’s mission, and then the adversary can choose to trigger the malfunction mechanism at the most opportune time to cause the most effective aircraft kill, such as when the aircraft is in flight and has a reduced ability to withstand the malfunction effects that could lead to a crash.


The kill chain for cyber weapons is similar to the kill chain for the KE weapons, but it requires a few modifications to account for some of the differences between the kinetic and cyber weapons discussed in Part 1 [1]. Figure 4 shows the ACCS probabilistic kill chain after adding in the event probabilities to the ACCS kill chain.

Figure 4. ACCS Probabilistic Kill Chain.

Note that there are still six events, the first of which is that the adversary has an active cyber weapon, with a potentially effective malfunction mechanism, that is searching for a target. With KE ACS, it is assumed, based upon system threat analyses, that the enemy has the particular antiaircraft weapon being modeled—such as an SA-8, a well-known system that many countries employ. Because cyber weapons are so much harder to discover, the probability that an adversary has actually developed one is included within this first probability that the weapon is active, because a weapon that does not exist cannot be active. For example, a nation with a nascent cyber attack program and no significant aircraft industry will likely have a lower probability of developing a complex cyber weapon that can kill an aircraft than a nation with both a highly developed cyber attack program and a sophisticated avionics industry.

In the second step of the kill chain, the adversary detects the aircraft in cyberspace using its detection and tracking element, referred to as a cyber radar. The connection between the attacker and the aircraft does not have to be “live,” as the attacker’s cyber radar signal may have to cross “air gaps” where there is no continuous connection between the attacker and the aircraft. Combat aircraft do not typically have a persistent connection to the Internet, and consequently many of the potential pathways shown in Figure 3 may not be available. Defenders should not be too complacent, however, as numerous examples have shown how determined attackers can jump across seemingly “impregnable” air gaps. Probably the most famous example is the Stuxnet attack, in which the attackers were somehow able to work their way onto air-gapped centrifuge controllers in an extremely secure facility [4].

In addition, because modern combat aircraft typically need to communicate with numerous off-board systems to be effective, their attack surface—or the number of pathways to access their internal systems (i.e., establish a connection and be able to communicate with the “cyber bubble” inside the aircraft)—is typically larger than might initially be thought.

In the third step of the kill chain, the adversary, using the cyber radar, determines what pathway to the target will be used and then launches the cyber missile to transport the cyber warhead to the aircraft. A complex cyber weapon typically has a section of code designed to cause the malicious effect an attacker is trying to create (the cyber warhead with its malfunction mechanism), with that code wrapped within another section of code designed to transport the warhead to its destination along the chosen pathway, gain access to the internal cyber systems, and implant the malfunction mechanism within the aircraft’s internal cyber systems.

In the fourth step, the launched cyber missile transports and delivers the cyber warhead to the target’s attack surface. It subsequently attempts to access the internal cyber systems and modify the code by implanting the malfunction mechanism using the code inside the cyber missile. This step is analogous to the target intercept on the kinetic ACS kill chain because it fulfills the same basic function of delivering the warhead—cyber or kinetic—to, or into, the target aircraft.

The fifth step is the activation of the cyber weapon’s previously implanted malfunction mechanism, which is the cyber equivalent of the aircraft being hit by the KE weapon’s damage mechanisms (i.e., the cyber hit) and thus is the boundary or demarcation between the ACCS susceptibility phase and the vulnerability phase of the scenario.

It is important to note here another difference between kinetic and cyber weapons. For KE weapons, there is normally no significant time delay between the end of a successful transporter intercept with the target aircraft and hitting the targeted aircraft with the warhead’s damage mechanisms when the warhead detonates in proximity to the aircraft or physically impacts the aircraft’s skin. On the other hand, as noted previously, the cyber weapon’s malfunction mechanism can remain dormant and wait for as long as the adversary desires once it is implanted. It is as if a surface-to-air missile were shot at an aircraft and embedded itself inside without anyone noticing, but then waited 3 years until a conflict started before detonating.

However, because most in-flight combat aircraft are difficult to access from cyberspace, activating or triggering a weapon in place when desired can be extremely difficult for an attacker. If the triggering mechanism is too easy, the malfunction mechanism may be triggered early, in which case the defender will find out about the weapon and remove it. Conversely, if the triggering mechanism is too hard, the triggering may fail, and the weapon will have no effect. There is also always the danger that a defender will stumble on an implanted malfunction mechanism by accident during routine operations or that some change to the system (such as an update) will prevent the weapon from being effective.

The final step in the kill chain, the eventual response of the aircraft to the activated malfunction mechanism(s), is that the aircraft is either killed by the component and system malfunctions caused by the triggered cyber warhead or it isn’t. Further discussion of these malfunctions and a few of the ways they can result in an aircraft mission or attrition kill can be found in Part 1 [1].

The main value of this ACCS probabilistic kill chain model is that, like the ACS probabilistic kill chain model, it leads to a greater understanding of the various steps a cyber weapon must go through to be successful. With this understanding, we can attempt to reduce a cyber antiaircraft weapon’s effectiveness by searching for and selecting those cyber survivability enhancement features that result in a cyber survivable, combat cost-effective aircraft. It is not meant to be a way to determine precise ACCS probabilities, which can be more difficult to determine than the ACS probabilities. However, when used appropriately and with caution, this model can certainly help with scenario M&S, as described in the following section.


Note that while the ACCS probabilistic kill chain shown in Figure 4 is principally useful as a way to understand the sequence of events during a cyber attack on an aircraft, it can also aid in the M&S of cyber attacks when searching for the “best” cyber survivability enhancement features (CSEFs) by providing an estimate of the likelihood an attack by a particular cyber weapon with a specific CSEF on a particular aircraft will be successful in killing the aircraft.

While modeling the probability of a successful cyber attack is extremely difficult, it is simply too important to ignore. We believe the ACCS probabilistic kill chain, with its kill chain events and set of probabilities, has the potential to become a promising approach when used with care and an understanding of the potential pitfalls. It is important to note here that the probabilities of each event in the ACS kill chain can be estimated by modeling the physics of each event and then verifying the models with live fire testing. Unfortunately for ACCS, determining the probabilities for each event in the ACCS kill chain is far more challenging. As noted in Part 1, cyber weapons are complex and interactive, so it is hard to capture or determine the probability that an attack will be successful in a laboratory experiment. In addition, there is little historical data to draw upon. Thus, the uncertainty in many of the probabilities will likely be much higher than for the kinetic weapons.

Fortunately, there is an approach to mitigating these problems for both the ACS and ACCS probabilistic kill chains, Applied Information Economics (AIE), which can be used to measure and develop probabilities that can handle both high levels of uncertainty and highly limited data sets. The AIE process, which is detailed in a text by Mr. Douglas Hubbard [5], focuses on using a variety of measurement concepts and techniques such as probability distributions, expert calibration, and Monte Carlo simulations (which are discussed later) to generate useful results in environments with high uncertainty.

Because uncertainty will be so high, the probabilities needed to go into either the ACS or ACCS model should incorporate how exactly or inexactly those values are known. A simple way of accomplishing this is to rely on a 90% confidence interval (CI) on a probability distribution instead of a single point value. So, instead of stating that in a particular scenario, the probability that an adversary would be able to implant a cyber weapon’s malfunction mechanism given a cyber missile launch (P_I|L) is 0.472, a 90% CI might be 0.3–0.7. This means that there is a 90% chance the actual probability lies between 0.30 and 0.70.

The shape of the probability distribution also matters a great deal, and it could take any number of shapes including a uniform distribution, the famous normal distribution, a power law distribution, or any number of other possibilities. The shape will depend on a number of factors about the underlying data, and while the normal distribution may be a good place to start in the absence of more detailed information, it can be highly inaccurate for some types of data, especially where there are overpowering “outliers.” (For a detailed explanation of the types of data that are not modeled well by the normal distribution, see Mr. Nassim Taleb’s text The Black Swan: The Impact of the Highly Improbable [6].)

Because there is such a small amount of hard data on cyber as an antiaircraft weapon, the probabilities in Figure 4 will largely need to be determined by experts in each level of the kill chain. Employing subject-matter experts to determine probabilities is extremely common, but it also has the potential to introduce large amounts of error due to the human tendency toward overconfidence in assessing one’s own accuracy.

For example, across 927 tests in which participants were asked to assign 90% CIs to general knowledge questions (meaning they should have gotten 90% of the questions right), they actually got only 53% of the answers correct, showing that they were significantly overconfident in their accuracy [5]. Fortunately, there is a proven way to increase the accuracy of expert estimates: calibration training, which teaches experts techniques to overcome typical psychological biases. Because the level of calibration of experts can easily be measured by a series of tests, only calibrated experts should be used to determine the 90% CI values for the various probabilities in Figure 4.

Using CIs instead of point values provides much more useful probabilities and greater information about the possible range of values; however, probability distributions cannot simply be multiplied together like point values. Fortunately, a technique called Monte Carlo simulation, which was invented during the Manhattan Project, easily allows the combination of probability distributions [2 (p. 848), 7]. In such a simulation, random values are selected according to the probability distribution for each input, and the result is calculated. Then the process is repeated thousands of times, and the results are averaged.

This type of simulation is a widely used method in the finance and insurance industries for handling data with significant uncertainty, and simple Monte Carlo calculations can easily be done in a basic spreadsheet program, such as Excel. For our purposes, the 90% CI values are input into a spreadsheet, which then builds a probability distribution of whatever shape is selected for each of the probabilities in Figure 4. The spreadsheet then runs the scenario across however many data points are desired and provides an overall probability of kill given the CIs and distributions entered. Just as importantly, the spreadsheet also provides a measure of the overall uncertainty of the final results given the uncertainty of the inputs.
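The same Monte Carlo procedure can be sketched in a few lines of Python using only the standard library. The 90% CIs below are purely hypothetical, and the choice of a clipped normal distribution for each event is an illustrative assumption only (Taleb's caution about the normal distribution applies here as well); any other distribution could be substituted per event:

```python
import random
import statistics

def sample_prob(ci_low, ci_high, rng):
    """Draw one probability from a normal distribution whose 90% CI is
    (ci_low, ci_high); 90% of a normal lies within about ±1.645 sigma."""
    mean = (ci_low + ci_high) / 2.0
    sigma = (ci_high - ci_low) / (2.0 * 1.645)
    # Clamp to [0, 1], since these are probabilities.
    return min(1.0, max(0.0, rng.gauss(mean, sigma)))

# Hypothetical 90% CIs for the six ACCS kill chain events
# (illustrative values only, not drawn from any real assessment).
event_cis = [
    (0.5, 0.9),   # cyber weapon exists and is active
    (0.3, 0.7),   # aircraft detected in cyberspace
    (0.4, 0.8),   # cyber missile launched
    (0.3, 0.7),   # warhead implanted, given launch (P_I|L)
    (0.2, 0.6),   # malfunction mechanism activated (cyber hit)
    (0.2, 0.5),   # aircraft killed, given activation
]

def run_monte_carlo(trials=10_000, seed=1):
    """Repeat the kill chain product many times with sampled inputs."""
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        p = 1.0
        for lo, hi in event_cis:
            p *= sample_prob(lo, hi, rng)
        results.append(p)
    return results

results = run_monte_carlo()
mean_pk = statistics.mean(results)
spread = statistics.stdev(results)
print(f"mean P(kill) ~ {mean_pk:.3f}, std dev ~ {spread:.3f}")
```

The standard deviation of the trial results is the point of the exercise: it carries the overall uncertainty of the answer forward from the uncertainty of the inputs, which a single multiplied point estimate cannot do.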


In this second part of the ACCS series, we have given the definitions of three of the most fundamental terms in ACS and their extension to the analogous terms in ACCS. We have also:

  • Described the scenario actions and events that occur in the one-on-one scenario in a man-made hostile KE environment.
  • Described the ACS kill chain, consisting of the sequence of six essential scenario events that can lead to a kill of the aircraft.
  • Converted the ACS kill chain into the probabilistic kill chain by adding the probability that the event occurs and the complementary probability that the event does not occur to each of the events in the kill chain.
  • Provided a description of the various cyber elements that could be involved in a similar cyber attack on an aircraft in a man-made hostile cyber environment.
  • Described the ACCS probabilistic kill chain analogous to the ACS probabilistic kill chain.
  • Examined the use of ACS and ACCS probabilistic kill chains in M&S regarding the validity of the numerical probabilities used, with some related recommendations for improvement.

Part 3 of this series will present the fundamentals and processes for enhancing the survivability of aircraft when threatened by a cyber weapon. The fundamentals and processes will be based upon the six ACS susceptibility reduction concepts and the six vulnerability reduction concepts that have been developed for KE weapons.


Dr. William D. “Data” Bryant is a cyberspace defense and risk leader who currently works for Modern Technology Solutions, Incorporated (MTSI). His diverse background in operations, planning, and strategy includes more than 25 years of service in the Air Force, where he was a fighter pilot, planner, and strategist. Dr. Bryant helped create Task Force Cyber Secure and also served as the Air Force Deputy Chief Information Security Officer while developing and successfully implementing numerous proposals and policies to improve the cyber defense of weapon systems. He holds multiple degrees in aeronautical engineering, space systems, military strategy, and organizational management. He has also authored numerous works on various aspects of defending cyber physical systems and cyberspace superiority, including International Conflict and Cyberspace Superiority: Theory and Practice [7].

Dr. Robert E. Ball is a Distinguished Professor Emeritus at the Naval Postgraduate School (NPS), where he has spent more than 33 years teaching ACS, structures, and structural dynamics. He has been the principal developer and presenter of the fundamentals of ACS over the past four decades and is the author of The Fundamentals of Aircraft Combat Survivability Analysis and Design (first and second editions) [2, 8]. In addition, his more than 57 years of experience have included serving as president of two companies (Structural Analytics, Inc., and Aerospace Educational Services, Inc.) and as a consultant to Anamet Labs, the SURVICE Engineering Company, and the Institute for Defense Analyses (IDA). Dr. Ball holds a B.S., M.S., and Ph.D. in structural engineering from Northwestern University.


  1. Bryant, William D., and Robert E. Ball. “Developing the Fundamentals of Aircraft Cyber Combat Survivability: Part 1.” Aircraft Survivability, spring 2020.
  2. Ball, Robert E. The Fundamentals of Aircraft Combat Survivability Analysis and Design. Second edition, American Institute of Aeronautics and Astronautics, 2003.
  3. Committee on National Security Systems. “Committee on National Security Systems (CNSS) Glossary.” CNSSI No. 3009, April 2015.
  4. Zetter, Kim. Countdown to Zero Day: Stuxnet and the Launch of the World’s First Digital Weapon. New York: Crown Publishers, 2014.
  5. Hubbard, Douglas W. How to Measure Anything: Finding the Value of “Intangibles” in Business. Third edition, Hoboken, NJ: Wiley, 2014.
  6. Taleb, Nassim Nicholas. The Black Swan: The Impact of the Highly Improbable. New York: Random House, 2012.
  7. Bryant, William D. International Conflict and Cyberspace Superiority: Theory and Practice. New York: Routledge, 2015.
  8. Ball, Robert E. The Fundamentals of Aircraft Combat Survivability Analysis and Design. First edition, American Institute of Aeronautics and Astronautics, 1985.