By William D. Bryant and Robert E. Ball


  • Learning Objective 6 — Describe the Cyber Susceptibility Reduction Concepts (CSRCs)
  • Learning Objective 7 — Describe the Cyber Vulnerability Reduction Concepts (CVRCs)


In Part 1 of this series, we introduced the cyber weapon as a potentially new category of antiaircraft weapons that can attack and kill aircraft in flight and that we believe can be included under the broader tent of the current kinetic energy weapon (KEW)-focused Aircraft Combat Survivability (ACS) discipline [1]. The ACS fundamentals proved to also be foundational in Part 2 when describing the operations of a cyber weapon attacking an aircraft in a sequenced probabilistic kill chain [2]. In this third part, we turn from examining cyber as an antiaircraft weapon to describing what aircraft developers, designers, and operators can do using the concepts of the ACS discipline to make our aircraft more survivable when under attack by cyber weapons.

When the development of the ACS discipline began about 50 years ago, the focus was on surviving while operating in an environment made hostile by the antiaircraft KEWs, such as guns and guided missiles [3]. Early in the development of ACS, 12 broad concepts that can enhance an aircraft’s survivability in combat were identified [4]. These 12 concepts are called Survivability Enhancement Concepts (SECs).

Six of the SECs enhance an aircraft’s survivability by reducing the aircraft’s susceptibility—threat warning, noise jammers and deceivers, signature reduction, expendables, threat suppression, and tactics—and are thus known as Susceptibility Reduction Concepts (SRCs). (Recall that susceptibility reduction enhances the capability of an aircraft to avoid the man-made hostile environment.) The remaining six SECs reduce the aircraft’s vulnerability—component redundancy with separation, component location, passive damage suppression, active damage suppression, component shielding, and component elimination—and are referred to as Vulnerability Reduction Concepts (VRCs). (Recall that vulnerability reduction enhances the capability of an aircraft to withstand the hostile environment.)

Over the past 100+ years, nearly all of the hundreds, if not thousands, of design and operational features that have enhanced an aircraft’s combat survivability have unknowingly (before the advent of the ACS discipline) or knowingly (after the advent) been based upon 1 of the 12 ACS SECs. These features are referred to as Survivability Enhancement Features (SEFs). (For examples of the SEFs used on the A-10, F/A-18A, and UH-60A, see pp. 135–140 of The Fundamentals of Aircraft Combat Survivability Analysis and Design, Second Edition [5].)

For example, one of the original vulnerability reduction SECs (a VRC) is “passive damage suppression.” In this VRC, any physical damage, or the effects of the damage, caused by a hit by a KEW warhead damage mechanism (such as a bullet, high-explosive case fragment, incendiary particle, or blast) on an aircraft is passively prevented or suppressed to the extent that the aircraft withstands the hit. One example of an SEF that uses this passive damage suppression VRC is the placing of explosion-suppression foam into the wing fuel tank that will passively prevent an internal tank explosion if the tank is hit by a penetrator with incendiaries.

A different example of the passive damage suppression VRC is the use of self-sealing fuel lines and fuel tank bladders that passively suppress the leakage of fuel from any bullet- or fragment-penetrated line or tank by using self-sealing materials that swell in the presence of fuel. (Note that the swelling of the self-sealing material is considered to be passive because no additional capability to sense damage is required.)

The point here is that there are two different SEFs using one SEC. In general, then, the broad SECs “say what to do,” and the individual SEFs “say how to do it.” A logical question to ask then is, “Can the ACS SECs be used in our extension of the ACS discipline to the new Aircraft Cyber Combat Survivability (ACCS) discipline?” For example, can the passive damage suppression VRC (i.e., the passive suppression of the physical damage, or the effects of the damage, caused by KEW damage mechanisms) be used in the ACCS discipline, where the aircraft receives one or more cyber hits by malfunction mechanisms intended to cause critical component malfunctions that result in an aircraft mission or attrition kill due to the loss of essential mission or flight functions? Our answer is yes; as described in the following text, the passive damage suppression SEC, as well as nearly all of the other ACS SECs, can be used.


Table 1 presents the definitions of the ACS and the analogous ACCS terms for an SEC and SEF, and Table 2 lists the current ACS SECs and the new corresponding analogous CSECs for the ACCS discipline.

Note that the ACCS CSECs track closely with the ACS SECs, but there are some areas where the differences between KE and cyber weapons require changes. One example is that the closest cyber analogs to the KE world’s expendables (e.g., chaff, flares, and expendable transmitters), such as honeypots and honeynets, align more closely with deception. Another is that the cyber world’s cybersecurity hardening (preventing access to the aircraft’s internal cyber systems through the use of cyber defenses) has no correlation in the KE world. Other than those two exceptions, each of the ACS SECs has a close analogy in a corresponding ACCS CSEC.

The CSECs listed in Table 2 and described in the following text are deliberately broad and intended to raise awareness of the types of designs or actions that are possible. Finally, the methodology used to search for and develop specific CSEFs that are the implementation of one of the CSECs into a particular design will be described in Part 4 using the probabilistic kill chain developed in Part 2.

Cyber Susceptibility Reduction Concepts (CSRCs)

Susceptibility is driven by how easily an adversary can accomplish the first five steps in the ACCS probabilistic kill chain, as discussed in Part 2. This includes having an active weapon, detecting the aircraft, launching the weapon, intercepting the aircraft and implanting the weapon, and finally triggering the weapon. Thus, the six CSECs to lower susceptibility each focus on reducing the probability that an attacker can successfully complete one or more of these links in the attack kill chain.

Situational Awareness (SA)

As noted in Part 1, when an aircraft is being targeted by kinetic weapons, there are a number of different possible indications of that targeting, such as gunfire, launched missiles, or electronic signals used by the KEWs that can be detected by the targeted aircraft. Cyber weapons can have similar emissions and effects that can be observed and reported to operators, but few of our aircraft today are deliberately designed to provide awareness to pilots and operators that they are under attack by a cyber weapon. Attackers will go out of their way to hide their effects because once they are detected, their attacks can normally be neutralized by defenders relatively quickly. So, despite what is often depicted in Hollywood movies, operators should not expect to see the infamous skull-and-crossbones images or Internet meme “All your aircraft are belong to us” messages appearing on their cockpit displays.

There are three main categories of potential CSEFs under the situational awareness CSEC: (1) education and reporting systems, (2) exterior information technology (IT)-based monitoring systems watching data flows onto and off of our aircraft, and (3) monitoring systems built into the design baseline inside the skin of the aircraft. Note that the cost and difficulty of adding situational awareness CSEFs, along with the potential effectiveness of the techniques, ramp up significantly across the three categories.

The cheapest and least effective CSEFs focus on human-based detection and reporting. One way that an attack might become apparent is that systems on the aircraft malfunction or begin to respond in unexpected ways. Unfortunately, these signals of a cyber attack could easily be missed by most modern pilots, who are well-accustomed to the routine (and non-cyber-related) malfunctions and unexpected and unusual behaviors of today’s complex aircraft. In fact, there are few military aircraft in the skies today that do not have some known deficiency documented, and none is without its own special set of quirks.

Also, remember that attackers will go out of their way to hide the observable effects of their attacks whenever possible. Stuxnet provides an excellent example of attackers working diligently to make it appear to the centrifuge operators that everything was fine, when it most definitely was not [7]. High-level aircraft attackers should be expected to operate in a similar fashion and ensure that built-in test (BIT) modules or other automated tests all report that nothing is wrong.

However, even if an attacker intends to stay unnoticed, the complexity of modern systems also means that the attacking weapon may have observable effects that the attacker did not intend. These effects are only of value to defenders if operators and maintainers understand that cyber attacks are a possibility and can have these kinds of observable effects so that they report them. While awareness in the field is increasing, there is still much to be done before the operational community at large is aware of potential attacks, before reporting systems are in place, and before personnel know how to use them. On many legacy systems, creating reporting systems and educating operators and maintainers in what to look for may be the best that can be accomplished in the short term; however, traditional-IT based monitoring systems can be implemented in the medium term.

The second, and more challenging, category of situational awareness CSEFs comprises those that rely on monitoring the data that flow through the traditional-IT systems that surround an aircraft. While numerous tools are available from the traditional-IT world to accomplish this monitoring, finding weapons that are packaged and deliberately hidden inside normal traffic can be extremely difficult.

Most of the current monitoring systems are essentially signature-based, meaning that they rely on knowing what they are looking for, an approach that works only after a particular weapon has been discovered somewhere. Experts can also look at data flows and logs using various techniques, and this approach will likely be the focus of most cyber defense of weapons systems in the medium term until specialized tools can be designed and built into aircraft in the longer term.
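To make the idea concrete, the following minimal Python sketch shows how signature-based detection works in principle: a monitor simply searches captured traffic for byte patterns of weapons that have already been discovered. The signature names and byte patterns here are invented for illustration; real systems use far larger signature databases and more sophisticated matching.

```python
# Illustrative signature-based scan: search captured traffic for the byte
# patterns ("signatures") of previously discovered weapons. The signatures
# and traffic below are hypothetical examples, not real malware indicators.

KNOWN_SIGNATURES = {
    "demo-weapon-1": bytes.fromhex("deadbeef"),  # invented byte pattern
    "demo-weapon-2": b"\x4d\x5a\x90\x00",        # invented byte pattern
}

def scan(traffic: bytes) -> list[str]:
    """Return the names of any known signatures found in the traffic."""
    return [name for name, sig in KNOWN_SIGNATURES.items() if sig in traffic]

capture = b"\x00\x01" + bytes.fromhex("deadbeef") + b"\x02"
print(scan(capture))  # → ['demo-weapon-1']
```

The limitation discussed above is visible in the sketch itself: a weapon whose bytes are not already in `KNOWN_SIGNATURES` passes through undetected.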

One special category of aircraft that could be more effectively defended by human cyber operators in the near term includes aircraft that are large enough to actually have a cyber defender on board, especially those whose mission systems mostly consist of traditional-IT based equipment. Command-and-control aircraft, such as the Airborne Warning and Control System (AWACS), are one obvious possibility.

The third, and most challenging, category of situational awareness CSEFs comprises those that are built into aircraft systems “inside the skin.” For most military aircraft, current cyber defenses cannot be easily integrated into the system. As has been discussed in earlier parts of this article series, cyber-physical aircraft are normally architected much differently than traditional-IT systems, and airworthiness requirements will make adding defenses challenging. However, there are several types of automated solutions that could be used to watch for cyber attacks.

One possible method is to monitor critical deterministic functional behavior, software code, and data that should not change. This monitoring can be accomplished using various methods, but the result is analogous to the famed practice (or SEF) in underground mining in which canaries were brought into coal mines to alert miners (via their own death) of the presence of a potentially fatal dose of toxic fumes and thus provide time for the miners to escape. Obviously, this type of approach would have to be designed into the baseline of the system and would need to be protected from attack itself. If attackers could easily subvert the system to report “everything is fine,” then no significant advance would be made.
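The canary-style monitoring described above can be sketched in a few lines: record a cryptographic digest of critical data that should never change, then periodically recompute and compare it. This is an illustrative sketch under the assumption that the critical data (say, a control-gain table) can be snapshotted as bytes; it is not any fielded aircraft implementation, and the configuration contents are invented.

```python
# A minimal "canary" integrity monitor: baseline a digest of data that
# should never change, then recheck it. A mismatch means something (or
# someone) has modified the monitored data.
import hashlib

def baseline(data: bytes) -> str:
    """Record the SHA-256 digest of the critical data at a known-good time."""
    return hashlib.sha256(data).hexdigest()

def check(data: bytes, expected: str) -> bool:
    """True if the monitored data still matches its recorded baseline."""
    return hashlib.sha256(data).hexdigest() == expected

config = b"GAIN_TABLE_v1: 0.42 0.57 0.91"   # invented critical data
ref = baseline(config)
assert check(config, ref)                    # undisturbed: canary is alive

tampered = config.replace(b"0.42", b"9.99")
assert not check(tampered, ref)              # any change trips the alarm
```

As the article notes, the monitor itself must be protected; if an attacker can replace `ref` or the `check` routine, the canary reports “everything is fine” regardless.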

Another more challenging, but potentially more effective, approach is to develop more comprehensive automated monitoring solutions that monitor traffic and behavior across the total system, as opposed to selected key areas. Various types of “smart agents” can do this monitoring at speeds that humans cannot match. Systems are already in development that show promise to monitor 1553 and other aircraft data bus or network traffic. Certain categories of attacks are easier to detect—such as malformed messages or subsystems executing commands that they should not (e.g., a radio overwriting the mission computer’s operating system)—while other types of attacks are harder to detect. Much can be learned from the last few decades of back and forth between attackers trying to hide and defenders trying to find attacks in the traditional-IT space, although the differences between aircraft and traditional-IT systems remain significant.

Some of those differences have been leveraged to create another approach to SA, which is broadly referred to as attestation. Similar to hashing in the traditional-IT world, attestation is a method to verify that avionics hardware and software have not changed from a baseline. A hash is a string created by a mathematical algorithm that is unique to whatever data were fed into the algorithm; if anything in the code changes, the hash will also change. A changed hash thus reveals that the data have changed in some way; however, this requires having first validated the correctness of the original code before the baseline hash was taken. A secure hash can only determine if something changed, not that it was correct from the start. Attestation accomplishes the same function for aircraft, relying on a combination of timing measurements and injecting known data into the system and reading the output.
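A toy challenge–response version of this idea can illustrate the mechanics. The verifier sends a fresh challenge, and the device must return a digest computed over the challenge plus its firmware; only a device holding unmodified firmware can produce the expected answer. All names and data here are hypothetical, and real attestation schemes additionally bound the response time and protect the reference measurement.

```python
# Toy challenge-response attestation sketch. The verifier holds a golden
# reference copy of the firmware; the device proves its firmware matches
# by hashing it together with a fresh challenge (nonce).
import hashlib

FIRMWARE = b"avionics firmware image v7"   # what the device actually holds
GOLDEN = FIRMWARE                          # verifier's reference copy

def device_respond(challenge: bytes, firmware: bytes) -> str:
    """Device side: digest of challenge plus its current firmware."""
    return hashlib.sha256(challenge + firmware).hexdigest()

def attest(challenge: bytes, response: str) -> bool:
    """Verifier side: does the response match the golden reference?"""
    return response == hashlib.sha256(challenge + GOLDEN).hexdigest()

c = b"nonce-1234"                                             # fresh per query
assert attest(c, device_respond(c, FIRMWARE))                 # unmodified
assert not attest(c, device_respond(c, FIRMWARE + b"patch"))  # tampered
```

The fresh nonce is what keeps a compromised device from simply replaying an old, valid answer.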

Signature Management (SM)

Awareness of an ongoing cyber attack is important, but avoiding the attack by “hiding” the aircraft from the attacker using signature management is even better. Signature management can be thought of as analogous to stealth and low-observable features used in the KEW world. As the kill chain in Part 2 illustrated, an attacker must “find” or detect and locate the aircraft in cyberspace and establish a way to reach that aircraft before they can successfully prosecute an attack. Anything that makes the aircraft harder to detect and track improves the position of the defender.

The most common and traditional way of making aircraft hard to detect in cyberspace is through the use of “air gaps.” An air gap refers to the physical separation of a cyber network or system from some other cyber network. Military aircraft are most often not directly connected to the broader Internet, even when they are on the ground, and measures taken to increase that separation generally make successful attacks more challenging. However, it has been shown numerous times that air gaps are not nearly as effective a defense as many analysts assume. If a person’s system connects to another system that connects to another system, etc., then the attacker has a way to reach that person. (As a forensics expert from the Air Force Office of Special Investigations once stated, “There is no such thing as an air gap, just high-latency networks.”) Stuxnet again provides a useful and well-known example of a cyber weapon that crossed a seemingly impregnable air gap [7]. In short, whatever air gap defenses one may have in place for an aircraft, they are likely not as strong as those of the Iranian nuclear facility at Natanz.

Strengthening an air gap can be an affordable and effective signature management CSEF, and a good way to start the process is by analyzing the connection points of one’s aircraft to the outside world. One can begin with the assumption that the aircraft is connected to the Internet through some intermediate chain of connections and then go looking for them; just how many connections exist, and how close they are to the aircraft, can be surprising. Once an initial analysis has been done, the next step is to bring in a “red team” that has expertise in cyber attacks, as red teams can often find undocumented pathways that should not be there but are.

Improving an aircraft’s air gap is an easily achievable signature management CSEF for most designed and fielded systems. However, more advanced techniques are also available for systems still in design. One possibility is software-defined networking, which gives the defender the ability to constantly shift the network and addressing scheme, making it difficult for an attacker to find an aircraft whose location in cyberspace continues to move over time. These types of evasive networks can be challenging and problematic to implement, so a balance must be struck to ensure defenses do not make the system unusable for operators.

One potential approach that can be borrowed from the electronic warfare community is the Wartime Reserve Mode (WARM). Instead of building a constantly shifting network inside the aircraft, defenders may create the ability to shift the network once at the start of a conflict. If the adversary did not get access to the plan, all of the careful reconnaissance work they did before the conflict may be wasted, and a single shift will be more manageable for defenders and operators while providing a reasonable balance between defense and functionality.

Deception

There are many ways that defenders can use deception to make it harder for attackers to successfully attack their aircraft through the essential events in the probabilistic kill chain. If an adversary thinks he is successfully attacking an aircraft when he is actually attacking a false target, the real aircraft is reasonably safe from that specific attack. The most common way this is accomplished in the traditional-IT world is through honeypots or honeynets.

A honeypot is a system that presents itself as a target of interest to an attacker, much like an IR flare (which is a kinetic expendable), but it is actually not a “real” system with operational importance. The honeypot is much like an instrumented “petri dish,” where the defender can observe and learn what the attacker is trying to do.

A honeynet is a collection of honeypots connected together, and they can become very elaborate. Establishing honeypots and honeynets can be accomplished using virtualized systems, and they can also be combined effectively with the shifting networks described previously. Using a constantly shifting cyber network with honeypots can make an attacker’s job extremely difficult.
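As a minimal illustration of the honeypot concept, the following Python sketch stands up a fake “service” on localhost that records every connection attempt and whatever the prober sends. The port, banner, and probe are all invented for this example; a real honeypot would emulate a believable target in far more depth and feed its observations to defenders.

```python
# Minimal honeypot sketch: a TCP listener that presents a bait banner and
# logs the source address and input of anyone who connects. Banner text
# and probe contents are hypothetical.
import socket
import threading

log = []  # defender's record of connection attempts

def honeypot(sock: socket.socket) -> None:
    conn, addr = sock.accept()
    with conn:
        conn.sendall(b"maintenance-port ready\n")  # bait banner
        data = conn.recv(1024)                     # capture attacker input
        log.append((addr[0], data))

server = socket.socket()
server.bind(("127.0.0.1", 0))   # ephemeral port on localhost
server.listen(1)
t = threading.Thread(target=honeypot, args=(server,))
t.start()

# Simulated attacker probes the fake service.
probe = socket.create_connection(server.getsockname())
probe.recv(64)                  # reads the bait banner
probe.sendall(b"GET /admin")    # invented probe string
probe.close()
t.join()
server.close()
print(log)  # e.g., [('127.0.0.1', b'GET /admin')]
```

Everything the “attacker” does lands in `log`, which is the petri-dish observation the article describes: the defender learns the techniques without exposing a real system.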

Once an attacker has been “captured” in a honeypot, the opportunities for defenders are only limited by their creativity and imagination. Depending on their objectives, the defenders may want to appear stronger or weaker than they really are. Defenders can also provide information that their systems are susceptible or vulnerable to particular attacks when they really are not. Ideally, an attacker will think he has successfully implanted his weapon, when all he has really done is demonstrated his attack techniques and provided a copy of his weapon’s warhead (i.e., the malicious code containing the malfunction mechanism) to the defenders. With that copy, defenders can easily check their internal cyber systems and aircraft to ensure the weapon was not successfully implanted somewhere else.

Another example of the value of deception in enhancing cyber survivability is deceiving the attacker into believing that a weapon was implanted when it actually was not. A clever defender who sees a weapon being implanted will stop the weapon from being effective, but forge messages back to the attacker from the weapon that report the weapon is in place and ready to create its effect. The attacker thinks his attack is in place, but when he triggers it, nothing happens.

This result could greatly decrease the attacker’s perceived value of cyber attacks on aircraft, which may also reduce the effort that the attacker puts into those types of attacks in the future. If an attacker is successfully fooled once, he is likely to become hesitant to accept any future information at face value. (A good example of this can be seen in the results of “Eligible Receiver” in Kaplan’s Dark Territory: The Secret History of Cyber War [8].) Even if all a defender can do is inject “noise” or misleading or irrelevant information into the data flowing back to an attacker, that can significantly slow an attacker’s decision-making speed and level of confidence [9]. An attacker can further be slowed by using other techniques to make it harder to access the aircraft.

Cybersecurity Hardening (CH)

SA enables defenders to know they are under attack, SM makes an aircraft hard to find in cyberspace, deception misdirects an attacker, and cybersecurity hardening makes it more difficult for the attacker to access our aircraft’s internal cyber system, modify the code by implanting the malfunction mechanism, and eventually actuate the weapon’s malfunction mechanism in step four of the probabilistic kill chain from Part 2. Cybersecurity hardening is where the focus of cybersecurity has been for many years, so we do not need to provide much detail here on specific techniques of implementation, such as the use of passwords and other personal identification methods, which are well documented. Instead, we focus on discussing those elements of cybersecurity hardening that are specific to aircraft.

The first, and easiest, place for defenders of our aircraft to start is by hardening the traditional-IT systems that interconnect and interact with the aircraft in some way. Maintenance and mission planning systems are an obvious starting point, but there may be other types of systems as well. If an adversary can access these systems (which are often connected to the Internet to procure updates), then the aircraft is open to attack. Therefore, any system that touches the aircraft should be restricted and secured to the maximum extent possible with stringent cybersecurity controls in place. When general computing systems are so inexpensive, there is no compelling reason to allow IT devices that touch aircraft to do anything other than their dedicated task. Having the capability to open Internet connections or access e-mail is not appropriate for an aircraft support system. As for the details of how to lock down IT systems, there are numerous appropriate standards and best practices available.

Once the IT systems have been locked down, defenders can focus on the aircraft itself. The communications pathways onto and off of the aircraft need to be explored and understood. Many cyber defenders find that there are communications pathways they were unaware of once they start looking, so testing of the actual hardware (a “live fire” test) is preferred over simply looking in the design documentation.

Once the interfaces are known, they should be prioritized for hardening; and all unnecessary interfaces should be physically removed or disabled whenever possible. Necessary interfaces should be restricted to allow interactions to support only those capabilities required for their mission.

In addition, updates and code loaded on an aircraft should be verified as correct and from a trusted source. Our smart phones have been doing this verification for many years now, and they will only load software from trusted sources unless overridden by the user. Unfortunately, our aircraft, for the most part, remain behind in this technology and continue to freely accept any code loaded into their avionics.

A number of techniques are available to verify software, starting from simple hash value verification by the maintainers to more complex certificate verification and public key infrastructure (PKI) systems. The more complex methods can be more secure but have additional implementation challenges. However, there is no reason why maintainers loading code on any system could not verify the code is what was intended by checking a hash provided “out of band” or through a different communications channel than the one that transmitted the software. This verification would require no hardware changes to the aircraft, and though it could be defeated by a determined adversary, it would greatly increase the attacker’s challenge for moderate cost to the defender.

Another level of cybersecurity hardening on aircraft would be to implement a whitelisting solution. This approach would need to be designed into the baseline, and so it is more feasible for aircraft in design or undergoing a major update. A well-implemented whitelisting solution will allow only programs and code that have previously been approved and verified to run on the system. Whitelisting can be a very powerful technique, but it is notoriously hard to implement on traditional-IT systems due to the large number of executable programs and how quickly those programs are updated on traditional-IT systems. For aircraft, however, whitelisting is more easily achievable as there are far fewer executable programs and the pace of updates and changes is much slower. With an effective whitelisting solution in place, if an adversary does get a malfunction mechanism implanted, it will never execute as it is not part of the trusted whitelist. Of course, as with all defenses, a determined attacker can get around this by simultaneously attacking the whitelisting solution, but it once again raises the bar and makes an attack more difficult.
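A whitelisting check can be illustrated in a few lines: compute the digest of each program image before it runs and allow execution only if that digest appears in the approved set built at integration time. This Python sketch is illustrative only; the program images are invented, and, as the article notes, a real solution must also protect the whitelist itself from modification.

```python
# Whitelisting sketch: only code whose digest is on the approved list may
# execute. The "images" below stand in for avionics program binaries.
import hashlib

# Approved set, built and signed off during integration (illustrative).
APPROVED = {
    hashlib.sha256(b"nav_app v1").hexdigest(),
    hashlib.sha256(b"radio_app v3").hexdigest(),
}

def may_execute(image: bytes) -> bool:
    """Gate execution: run only code whose digest is on the whitelist."""
    return hashlib.sha256(image).hexdigest() in APPROVED

assert may_execute(b"nav_app v1")                          # trusted code runs
assert not may_execute(b"implanted malfunction mechanism") # implant never executes
```

Note how this fits the article’s point about aircraft being a favorable case for whitelisting: `APPROVED` stays small and changes only at software-load time, unlike the churn of a general-purpose IT system.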

Threat Suppression (TS)

Another way to make an attacker’s job more difficult is to attack and disrupt them while they are attacking you. Conceptually, this is similar to the Suppression of Enemy Air Defenses (SEAD) or Destruction of Enemy Air Defenses (DEAD) in the KEW realm.

While any efforts that reduce an adversary’s attack capability are helpful, there are some structural difficulties driven by the nature of cyberspace that make it more challenging in this domain. Developing cyber weapons can be done in small isolated networks, and they do not require any large supporting infrastructure that can easily be found and targeted. In addition, actually sending the weapons can normally be accomplished from almost any laptop or even a mobile phone with an Internet connection. It thus will continue to be challenging to find adversary attackers before they strike and even harder to verify that all the implanted weapons have been discovered.

None of this is to say that cyber professionals should not be doing the best they can to find enemy attacks before they happen and disrupt them, but the level of expectation on how often that detection and disruption will be successful should be low. One of the more effective things cyber professionals could do with knowledge of upcoming enemy attacks is to inform the defenders of the details, who can then inoculate their systems against the attacks. Unfortunately, this flow of information is difficult when everything in the defense and intelligence cyber world is so highly classified. As David Lonsdale stated in The Nature of War in the Information Age, “The IW arena is among the most highly compartmentalized in the entire US defense establishment. The right hand quite simply does not know what the left hand can do, let alone what it is in fact doing [10].”

Finding ways to provide information to defenders without compromising sources is difficult, but well worth the effort. As Dr. Libicki stated in Cyberdeterrence and Cyberwar, “In this medium, the best defense is not necessarily a good offense; it is usually a good defense [11].” And that defense can be substantially strengthened by information and help from the offensive cyber professionals, as well as those who use and maintain the system.

Training and Tactics (T&T)

Improved knowledge and a better understanding of potential cyber attacks on aircraft for operations, maintenance, and sustainment personnel can make it more difficult for attackers to successfully prosecute attacks at several points in the kill chain. Many attackers in the traditional-IT world rely on users of systems to do things they should not do to enable the attacks, such as bypassing security checks or clicking on links. Unfortunately, many users do not understand the implications of their actions. Few users of aircraft systems in the field would knowingly disable systems designed to defend their aircraft from physical threats. Yet today they often make their systems more accessible to adversaries in cyberspace because they do not understand the ramifications of connecting something insecurely or, for convenience, accessing a web location or application that is not strictly required for the mission.

Educating the operational and maintenance communities is inexpensive compared to designing and installing new cybersecurity defenses inside aircraft. However, education and training need to be carefully targeted and focused on the operational audience. An enhanced version of the annual IT-focused cybersecurity training is not going to be effective. Maximum effectiveness will be achieved by tailoring the training both to the specific aircraft system the audience works with and to examples from the highest level of classification available to the audience. Higher classification levels normally allow for more specificity on what is possible and what has happened both in testing and in real-world incidents. This training should be delivered from credible sources, most likely from the operational community itself, versus outsiders; and the greatest effect will be achieved if cyber attacks become fully integrated into flying training and exercises where bad cyber choices can lead to meaningful mission impact.

Improved knowledge of aircraft cyber issues is only useful if it results in improved tactics and behavior. Improving operators’ knowledge of how cyber attacks work gives them greater ability to design tactics to mitigate or defeat those threats, just like they have historically done in the realm of KEW. History suggests that maintainers and aircrew who understand the threat will be able to develop approaches that will never occur to the cyber experts. Many of those tactics will focus on avoiding the adversary weapon in the first place (i.e., susceptibility reduction), but some may focus on ways to still get the mission done after a hit, which is the realm of vulnerability reduction.

Cyber Vulnerability Reduction Concepts (CVRCs)

Once an enemy cyber weapon has intercepted the targeted aircraft, accessed the aircraft’s internal cyber network, successfully embedded itself, and been triggered, we are now at the last phase of the probabilistic kill chain from Part 2; and the aircraft’s CVRCs are the last line of defense between the enemy cyber weapon and a loss of the mission or aircraft.

Component Location and Logical Separation (CL&LS)

One method to reduce the effect of the malicious functionality of a triggered weapon is to place critical components in locations that are hard for activated cyber weapons to access. The simplest approach to separation is to locate critical components on physically separate networks and buses on the aircraft; often this separation is already part of the design, primarily for airworthiness and reliability reasons. Unfortunately, separate physical networks are expensive and deny the system the benefits of having various devices connected to share data, so there is always a design tradeoff between how much separation vs. connection is desirable. One way to create some separation without the expense of running physically separated networks and hardware is through virtualization.

Virtualization is widely used in the traditional-IT world and enables multiple “computers” to run on the same set of hardware by sharing resources under the direction of a hypervisor that manages the virtual machines (VMs). The hypervisor can also perform numerous security and monitoring functions, but it becomes a potential Achilles heel in the system: an adversary who gains access to and control over the hypervisor can control every VM under it. The hypervisor therefore needs to be carefully protected and monitored.

Virtualization can assist with segmenting and partitioning, as it can be used to build multiple systems and networks that are virtually segmented from each other on virtual local area networks (VLANs) sharing the same hardware and wires. This can make it hard for a weapon on one VLAN to access a critical component on another VLAN. Of course, VLANs are not a 100% secure defense, and techniques exist for attackers to break out of them; but they certainly make some attacks much more difficult, at potentially far less cost to the defenders than building physically separate networks.
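As an illustration of the idea, the sketch below models VLAN segmentation as a simple reachability rule. All device names, VLAN numbers, and the single allowed cross-VLAN flow are hypothetical, and a real switch enforces this in hardware rather than in software; the point is only that a shared wire can still drop most cross-segment traffic.

```python
# Hypothetical VLAN assignments for devices sharing one physical network.
VLAN = {
    "mission_computer": 10,
    "stores_management": 10,
    "cabin_services": 20,     # notional non-critical segment
    "maintenance_port": 30,
}

# Explicitly permitted cross-VLAN flows -- the only holes in the segmentation.
ALLOWED_CROSS = {(30, 10)}    # e.g., maintenance may push loads to avionics

def can_reach(src: str, dst: str) -> bool:
    """A frame is delivered only within a VLAN or over an explicitly
    allowed cross-VLAN path; everything else is dropped even though
    all devices share the same wires."""
    s, d = VLAN[src], VLAN[dst]
    return s == d or (s, d) in ALLOWED_CROSS
```

A weapon that has compromised `cabin_services` cannot address `stores_management` at all under this rule, while devices on the same VLAN still share data freely.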

Software-defined and -scrambled networks and systems can also provide logical separation to make it even more difficult for cyber attackers to be successful. While these systems would theoretically be difficult to attack, they may also be too difficult to design and build with assurance while maintaining the required reliability and usability in the short term. So they may only be applicable in the far future, if at all.

System Redundancy (SR) With Effective Separation and Diversity

As in kinetic ACS, one does not want an adversary to be able to take out a critical mission capability with one “hit.” In the same way that it was a bad idea to pass all of the hydraulic lines from redundant hydraulic systems through the same location in the tail of some U.S. aircraft used in the Southeast Asia conflict in the 1960s and 1970s, it is also a bad idea to have all redundant cyber-critical components or systems on the same networks and buses.

Fortunately, airworthiness rules have often driven segmented designs, but that protection will only stay relevant if it is preserved. Designers need to be aware of the security implications of cross-connecting buses and networks and weigh whatever mission need is being enhanced against the resulting loss of security posture.

It is not enough to have multiple copies of the same exact component for redundancy, as is traditionally done in the kinetic world. A bullet that hits a hydraulic pump on the left side of an aircraft will generally not also hit the other pump on the right side. In a cyberspace attack on a modern aircraft, however, any number of identical subsystems could be taken out in a single hit if all the components are accessible across buses and networks due to a lack of logical separation between them. Theoretically, the most effective security performance is provided by multiple components that accomplish the same function using completely different hardware and software. However, the prohibitive cost involved makes this type of redundancy infeasible except in a few specialized cases.

As a less expensive option, other redundancy techniques can be enabled using virtualization. Virtualized systems can build as many redundant VMs as desired, assuming adequate memory and processing power. The supervisory position of the hypervisor enables it to compare the output of the various VMs; there could, for example, be three different virtual mission computers, and if one of them started providing output different from the other two, the hypervisor could take it down and rebuild it as a new VM. These sorts of systems provide some redundancy; but, as noted previously, the hypervisor remains a single point of failure in this type of architecture.
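The comparison logic described above can be sketched as a simple majority vote. This is our illustrative choice, not a description of any fielded hypervisor; it assumes the redundant VMs produce outputs that can be compared directly.

```python
from collections import Counter

def vote(outputs):
    """Majority-vote over the outputs of redundant virtual machines.

    Returns (voted_output, suspect_indices): the value agreed on by a
    strict majority, plus the indices of any VMs that disagreed and are
    candidates for the hypervisor to tear down and rebuild.
    """
    counts = Counter(outputs)
    voted, n = counts.most_common(1)[0]
    if n <= len(outputs) // 2:
        # No strict majority -- no output can be trusted.
        raise RuntimeError("no majority among redundant VMs")
    suspects = [i for i, out in enumerate(outputs) if out != voted]
    return voted, suspects
```

With three virtual mission computers reporting `[512, 512, 498]`, the voter passes 512 onward and flags the third VM for rebuild.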

Malfunction Suppression (Passive and Active) (MSP&A)

The malfunction or functional damage done to an aircraft by a cyber weapon can be suppressed by passive and active measures that reduce the weapon’s effects. KEW-world equivalents include fuel-tank foam to prevent onboard explosions and fire detection and suppression systems.

On the passive side, systems should be designed and built to respond securely to unexpected commands. The stores management system should likely never accept a command to “jettison all stores” from the radar altimeter, yet many of today’s system components will accept messages and commands from anywhere on the bus, or have no way to verify that a command really came from where it says it did. There are several technical approaches that can work but will have to be designed into the fundamental architecture in most cases. It is possible that specialized security devices might be able to accomplish some of these functions if added into legacy systems, but that is again a challenging engineering problem.
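One well-known technical approach to verifying where a command came from is message authentication: each authorized sender shares a key with the receiving component, and every command carries a keyed hash proving its origin. The sketch below uses Python's standard hmac module; the subsystem names, key, and message format are hypothetical, and a real avionics implementation would provision keys securely rather than embedding them.

```python
import hashlib
import hmac

# Hypothetical pre-shared keys, one per subsystem authorized to command
# the stores management system (provisioned at load time, not hard-coded).
AUTHORIZED_KEYS = {"mission_computer": b"demo-key-not-for-flight"}

def sign(sender: str, payload: bytes, key: bytes):
    """Sender side: attach an HMAC tag binding the payload to the sender."""
    tag = hmac.new(key, sender.encode() + b"|" + payload, hashlib.sha256).digest()
    return sender, payload, tag

def accept(sender: str, payload: bytes, tag: bytes) -> bool:
    """Receiver side: accept a command only if the claimed sender is
    authorized AND the tag proves the message really came from it."""
    key = AUTHORIZED_KEYS.get(sender)
    if key is None:
        return False  # e.g., the radar altimeter holds no command key at all
    expected = hmac.new(key, sender.encode() + b"|" + payload, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)
```

A "jettison all stores" command claiming to come from the radar altimeter is rejected outright, and a forged or replayed-with-modification tag fails the comparison.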

A second major area of passive malfunction suppression is designing systems to deal with unexpected or malformed data. Buffer overflows, in which attackers deliberately send more information than a system is expecting, are still common in the traditional-IT world; for avionics systems, an attack may be as simple as sending unexpected input that causes the system to fail. Most avionics systems are thoroughly tested, but only against the inputs they are expected to handle. “Fuzzing,” or sending random or unexpected data, should also be performed to make sure the avionics system handles unexpected data securely.
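A toy example of the idea, with a hypothetical 6-byte altitude message: the defensive parser rejects anything malformed with a controlled error, and the fuzz loop counts any other exception as an insecure failure. The message ID, field layout, and limits are invented for illustration.

```python
import random
import struct

def parse_altitude_msg(data: bytes) -> float:
    """Toy parser for a hypothetical 6-byte message: 2-byte ID + 4-byte float.
    The defensive version rejects malformed input instead of misbehaving."""
    if len(data) != 6:
        raise ValueError("bad length")
    msg_id = int.from_bytes(data[:2], "big")
    if msg_id != 0x0102:  # hypothetical 'altitude' message ID
        raise ValueError("unknown message id")
    (alt,) = struct.unpack(">f", data[2:])
    if not (-1500.0 <= alt <= 60000.0):  # rejects NaN and implausible values
        raise ValueError("altitude out of plausible range")
    return alt

def fuzz(parser, trials: int = 10_000, seed: int = 0) -> int:
    """Throw random byte strings at the parser; any exception other than a
    controlled ValueError counts as an insecure failure."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(0, 12)))
        try:
            parser(blob)
        except ValueError:
            pass           # controlled rejection -- the secure outcome
        except Exception:
            failures += 1  # crash or undefined behavior -- a test finding
    return failures
```

A parser that instead indexed blindly into the buffer, or trusted the length field, would rack up failures here long before an adversary found the same weakness.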

On the active side of malfunction suppression, a subsystem that has been attacked can be cut off from the network, ignored, or shut down. Of course, SA is again key, as the aircraft or operator will only know to do this if they know the system has become corrupted. A second active malfunction suppression technique overlaps with recovery; it includes the various techniques to reload and restore systems discussed in the next section.

Cyber defenders can also play a role in malfunction suppression, but because ACCS is focused on an aircraft in flight, those opportunities apply only in certain limited situations. Unmanned aerial systems can and should be actively defended by personnel watching the flow of data onto and off of the air vehicle as well as within the system. Large aircraft may be able to accommodate defenders who monitor the systems from a station onboard, as has been mentioned previously. For most manned military aircraft, however, it is an open question whether a link to cyber defenders on the ground should ever be built. Any such link could potentially be used by an attacker to attack the aircraft in flight, whether through broken cryptography or an insider threat. So, in most cases, the risk is likely not worth the potential benefit.

System Capability Recovery (SCR)

While physical “R2-D2” type robots that pop out of the back of fighters to fix physical damage remain firmly in the realm of science fiction, cyber systems can be designed to recover themselves after cyber weapons activate their malicious functionality. Recovery relies on detecting failure or malicious activity on the aircraft, so strong SA is a prerequisite to successful recovery. In some cases, simply resetting a system may clear the effects of an attack. More sophisticated attacks will attempt to remain persistent, but those persistence features may fail, so a reset is certainly worth trying.

One possible way to provide recovery capability at reasonable cost is to store current and older versions of software inside an avionics component, or perhaps in another component, on the aircraft. If an attack is detected, the affected component can be overwritten and reloaded, which may remove the cyber weapon’s effects. If the weapon can reinfect the component, an older version of the applicable software can be reloaded instead. Because cyber weapons are so specific and focused, a weapon may not be effective against an older version of the software, which can often still provide meaningful, if potentially degraded, mission capability. In addition, memory is now relatively inexpensive, so the ability to store multiple versions of programs could be built into new or redesigned systems. A design team could also build a special stripped-down basic version of the component’s functionality that could be loaded as well, providing just enough capability while removing whatever avenue the cyber weapon is using to attack. Flight control systems with basic “get home” modes are an equivalent from the traditional aircraft world.
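That fallback ladder might be sketched as follows. The Component class, the image names, and the is_compromised check (which stands in for whatever attack-detection SA the aircraft has) are all hypothetical.

```python
class Component:
    """Minimal stand-in for an avionics box with reloadable software."""
    def __init__(self):
        self.image = None

    def load(self, image):
        self.image = image

def recover(component: Component, images, is_compromised):
    """Walk the recovery ladder: reload the current image; if the weapon
    reinfects it, fall back through older stored versions, ending with a
    stripped-down 'get home' image.

    `images` is ordered newest-first, with the basic image last.
    Returns the image that restored clean operation."""
    for image in images:
        component.load(image)
        if not is_compromised(component):
            return image  # recovered, possibly with degraded capability
    raise RuntimeError("all stored images compromised -- isolate component")
```

If the weapon remains effective against v3 and v2 but not the older v1, the ladder stops at v1 and the component flies on with degraded but clean software.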

This type of system relies on knowing the aircraft is under attack, and automated versions would likely be most effective. As always, attackers will go after defensive systems, so defenders will need to ensure that attackers cannot simply find and modify the backup versions to include their malicious functionality.

Virtualization means the system need not wait for a reload and reboot, as multiple virtual avionics boxes could already be up and running in case they are needed. This capability could thus provide a seamless transition that the system operator does not even notice as mission effectiveness is preserved. Of course, the details of such a switching system and the logic by which it would switch to a backup will be a challenging design problem with potentially significant airworthiness implications.

Component Elimination or Replacement (CER)

Something that is not there cannot be an attack pathway, and some functionality may be inherently indefensible and not worth the risk. One potential example is a live channel for defenders to monitor aircraft systems in flight, as has previously been discussed. The risk associated with that capability may be so high that the only reasonable answer is not to build it in the first place. It is critical that this type of risk analysis be done early and repeatedly throughout the life cycle to avoid “engineering in risk” that could otherwise be avoided. The earlier these decisions are made, the easier they will be to make. Removing an indefensible system after millions of dollars have been spent on it will be much harder than if done at the conceptual stage, when removing it will actually save money.

One area where there has been continual tension between functionality and security has been in maintenance support systems. Maintenance professionals rightly want to connect everything and share data across devices as quickly as possible to enable them to be more effective and produce more sorties. Security professionals want to isolate everything and restrict data-sharing to prevent attacks. There is a balance that can be achieved between operational effectiveness and security risk. This issue is no different than other performance-risk trades that must be made. Having systems report maintenance information in flight, wireless connectivity on the ground, and data connections between different support systems are all areas that need to be carefully examined through both lenses before appropriate balancing decisions are made. Again, these decisions will be far easier early in the life cycle than later, when large amounts of money and effort have already been expended.

Component Shielding

Component shielding helps components resist functional damage after a cyber warhead has been triggered and can be thought of as the equivalent of armor against a KEW. Shielding can protect uninfected components from infected ones and also limit the ability of infected components to infect others. Separation is one important element in shielding components from attacks by other infected components; more generally, limiting access is key, so most of the elements discussed under cybersecurity hardening can also provide benefit here.

Hardware that is designed with a secure root of trust can make it much more difficult for attackers to implant their malicious logic. The techniques for creating these systems are beyond the scope of this article but generally rely on specialized cryptographic hardware built into the devices. This approach has become quite common in high-security traditional-IT devices and even industrial control systems (ICS). The Trusted Platform Module (TPM) chip in most new business-level notebooks is one example, but there are many ways to implement these functions. These types of approaches can make a system much more resistant to numerous categories of attacks.
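The core mechanism behind measured boot on TPM-style hardware can be illustrated with a hash chain: each boot stage "extends" a register so the final value depends on every stage in order, and any tampered stage produces a value different from the known-good one. This is a simplified software sketch (SHA-256, a single register), not the actual TPM PCR interface; the stage names are hypothetical.

```python
import hashlib

def extend(register: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new value = H(old value || H(measurement)).
    Order matters, and the register can never be rolled back directly."""
    return hashlib.sha256(register + hashlib.sha256(measurement).digest()).digest()

def measured_boot(stages) -> bytes:
    """Measure each boot stage into the register in sequence; the final
    value summarizes the entire chain of software that ran."""
    register = b"\x00" * 32
    for stage in stages:
        register = extend(register, stage)
    return register
```

Comparing the final register value against a known-good reference detects any substitution of bootloader, OS, or application code, which is how a secure root of trust makes implanting malicious logic much harder.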

One method of component shielding more appropriate to aircraft than the traditional-IT world is using various techniques to control writing to a component’s memory. Certificates and code signing are a good start; but because aircraft systems do not normally need to be updated while in operation, physical switches that prevent writing to the system unless activated can be a powerful additional technique. For example, an avionics box might have a “load software” button that a maintainer must depress on the box next to the loading port before it will accept a new software load. This type of protection can be extremely difficult for an attacker to get around if it is implemented correctly, as attackers often rely on inserting their malicious functionality surreptitiously from systems that are not actually supposed to be loading software.
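A sketch of such a load gate, combining the physical switch with signature verification, is shown below. The HMAC-based signing is a simplified stand-in for real code signing (which would use asymmetric keys so the verifier holds no signing secret), and all names and keys are hypothetical.

```python
import hashlib
import hmac

# Hypothetical symmetric key standing in for a real code-signing scheme.
LOADER_KEY = b"demo-signing-key-not-for-flight"

def sign_image(image: bytes) -> bytes:
    """Depot side: produce a signature over the software image."""
    return hmac.new(LOADER_KEY, image, hashlib.sha256).digest()

def accept_load(image: bytes, signature: bytes, write_enable_pressed: bool) -> bool:
    """Avionics side: accept a new software load only if the maintainer is
    physically holding the write-enable switch AND the signature verifies."""
    if not write_enable_pressed:
        return False  # a remote attacker cannot press a physical button
    expected = hmac.new(LOADER_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(signature, expected)
```

Requiring both conditions means a compromised bus device cannot push code while the aircraft flies, and a maintainer cannot accidentally load a tampered image even with the switch held down.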

Defenses must work together and support each other. A physical loading switch has less value if the attacker can insert himself into the regular loading process, so hardening maintenance systems and verifying that the code is what the manufacturer developed must also be accomplished. All of the CSECs and CSEFs are intended to work together to provide an acceptable level of aircraft survivability.


In this third part of the ACCS series, we have examined the 12 broad categories of cyber survivability enhancement concepts that can be used to reduce the probability of a successful adversary attack via the probabilistic kill chain discussed in Part 2. Once again, a tremendous amount of overlap and benefit has been gained from building on the KE ACS foundation, but there are also some areas in which the unique nature of cyberspace and cyber weapons has necessitated changes.

CSECs on the susceptibility side include situational awareness, which provides knowledge and warning of attacks, while signature management makes it harder for adversaries to find aircraft in cyberspace to launch their attacks. Deception makes it difficult for adversaries to know that they are attacking the right targets or that their attacks will be successful. Cybersecurity hardening makes it harder for adversaries to gain access to aircraft through a host of well-known defensive techniques. Threat suppression uses friendly offensive cyber capabilities to seek out and suppress attackers, while training and tactics can provide cost-effective increases in survivability with no system redesign.

Once a weapon has been implanted and triggered, the vulnerability CSECs come into play. Component location and logical separation make it harder for an adversary to get malicious functionality to critical components, and system redundancy with effective separation and diversity reduces the effectiveness of an attack, along with passive and active malfunction suppression. System capability recovery reduces the duration of an adversary’s effects, and component elimination or replacement can remove entire classes of attacks, while component shielding can make it harder for attackers to move across components.

These concepts do not provide specific implementations for particular aircraft, but instead provide the broad menu from which specific implementations and CSEFs can be selected. Part 4 of this series will develop and describe the process by which aircraft designers can use the probabilistic kill chain developed in Part 2 and the CSECs described here to select an optimized set of CSEFs that will provide the level of survivability desired for a particular aircraft in a particular threat environment.


Dr. William D. “Data” Bryant is a cyberspace defense and risk leader who currently works for Modern Technology Solutions, Incorporated (MTSI). His diverse background in operations, planning, and strategy includes more than 25 years of service in the Air Force, where he was a fighter pilot, planner, and strategist. Dr. Bryant helped create Task Force Cyber Secure and also served as the Air Force Deputy Chief Information Security Officer while developing and successfully implementing numerous proposals and policies to improve the cyber defense of weapon systems. He holds multiple degrees in aeronautical engineering, space systems, military strategy, and organizational management. He has also authored numerous works on various aspects of defending cyber physical systems and cyberspace superiority, including International Conflict and Cyberspace Superiority: Theory and Practice [12].

Dr. Robert E. Ball is a Distinguished Professor Emeritus at the Naval Postgraduate School (NPS), where he has spent more than 33 years teaching ACS, structures, and structural dynamics. He has been the principal developer and presenter of the fundamentals of ACS over the past four decades and is the author of The Fundamentals of Aircraft Combat Survivability Analysis and Design (first and second editions) [4, 5]. In addition, his more than 57 years of experience have included serving as president of two companies (Structural Analytics, Inc., and Aerospace Educational Services, Inc.) and as a consultant to Anamet Labs, the SURVICE Engineering Company, and the Institute for Defense Analyses (IDA). Dr. Ball holds a B.S., M.S., and Ph.D. in structural engineering from Northwestern University.


[1] Bryant, William D., and Robert E. Ball. “Developing the Fundamentals of Aircraft Cyber Combat Survivability: Part 1.” Aircraft Survivability, spring 2020.

[2] Bryant, William D., and Robert E. Ball. “Developing the Fundamentals of Aircraft Cyber Combat Survivability: Part 2.” Aircraft Survivability, summer 2020.

[3] Ball, Robert E., Mark Couch, and Christopher Adams. “The Development of Aircraft Combat Survivability as a Design Discipline Over the Past Half Century.” Aircraft Survivability, summer 2018.

[4] Ball, Robert E. The Fundamentals of Aircraft Combat Survivability Analysis and Design. First Edition, American Institute of Aeronautics and Astronautics, 1985.

[5] Ball, Robert E. The Fundamentals of Aircraft Combat Survivability Analysis and Design. Second Edition, American Institute of Aeronautics and Astronautics, 2003.

[6] Ball, Robert E. The Fundamentals of Aircraft Combat Survivability Analysis and Design. Third Edition, in draft.

[7] Zetter, Kim. Countdown to Zero Day: Stuxnet and the Launch of the World’s First Digital Weapon. New York: Crown Publishers, 2014.

[8] Kaplan, Fred. Dark Territory: The Secret History of Cyber War. New York: Simon and Schuster, 2016.

[9] Libicki, Martin C. Conquest in Cyberspace: National Security and Information Warfare. New York: Cambridge University Press, 2007.

[10] Lonsdale, David J. The Nature of War in the Information Age. London: Frank Cass, 2004.

[11] Libicki, Martin C. Cyberdeterrence and Cyberwar. Santa Monica: RAND Corporation, 2009.

[12] Bryant, William D. International Conflict and Cyberspace Superiority: Theory and Practice. New York: Routledge, 2015.