Lost in Translation: The Need for Comprehensive EA Ontologies in M&S and Testing
by Joshua Wells and Kyle Harrigan
The increasing complexity of modern electronic warfare (EW) system interactions has led to significant challenges in modeling and simulation (M&S), testing, and validation. Underlying many of these challenges is a fundamental and routine disconnect in the very definitions of the terms and expressions often used to depict EW interactions. This disconnect can lead to inaccurate or outright incorrect modeling, inefficient and ineffective analyses, and greatly complicated validation. While significant investments and progress are being made in the Joint community to address this issue, the lack of common, sufficiently open, and widely adopted standards and approaches remains a challenge.
BUILDING THE EA TREE SWING: A DEFINITION PROBLEM
Perhaps you’ve been there before. After sitting through a days-long detailed design review of an electronic attack (EA) system or concept, participants stand and leave with no consistent understanding of how the system really functions or the desired effect that each EA technique is designed to achieve (much less a detailed understanding of the nuances of each technique or the expected impacts on any specific radar system).
Nonetheless, the M&S teams and systems engineering teams return to their respective facilities to work on their various implementation tasks, based on (often) widely varying views of the system. The expected outcome of their work is a hardware prototype that can be tested against surrogate threat sensors or hardware-in-the-loop (HWIL) assets, as well as a model that accurately represents the physical EA system and can be exercised in fully constructive M&S environments. However, because of these differences in understanding among key contributors, there is sometimes little chance of achieving this outcome.
This all-too-common occurrence can be illustrated by the age-old “building a tree swing” analogy, with which many engineers and computer scientists are familiar. The analogy has been modified and applied to a wide range of disciplines (often humorously) over the years, but the basic idea for M&S, testing, and validation is that of a typical team of engineering managers, designers, analysts, programmers, sponsors, and users collaborating to construct a simple tree swing and the disconnects that can occur between what is intended, what is described, and what is ultimately produced (see Figure 1).
Despite its simplicity, the analogy serves as a good reminder of how definition problems early in the codevelopment of EA systems and their respective M&S counterparts may yield a situation with little or no chance of achieving successful validation. Figure 2 provides an initial application of the tree swing analogy to the EA codevelopment process, and Figure 3 further refines this application of the analogy to EA systems, showing some of the many opportunities for differences between M&S and systems interpretations.
First, in the early design stage, sufficient technical detail is often not available to accurately model the prototype EA system. Second, the supporting environments in which the EA system and its corresponding model are evaluated are not identical. Hardware tests are often designed to evaluate prototypes against surrogate sensors. Outputs from the same tests are often also intended as input to EA model validation, but aligning hardware test setups and modeling environments is difficult, as is finding comparable data products.
Designing tests using threat sensor emulators for the purposes of model validation often results in tests that are unacceptably complex or prohibitively expensive. Validation of an EA model against its analogous prototype is fundamentally an exercise in measuring the difference between the model and the hardware. This measurement requires a precise match between the model and hardware test environments, including threat sensors. To evaluate the difference between the EA hardware and its model, other potential sources of error must first be eliminated. Unfortunately, this process is often prohibitively expensive in terms of time and effort.
AN IDEALIZED DESIGN PROCESS
Figure 4 shows an idealized design process that helps motivate the use of ontologies. In the ideal process, an unambiguous EA system description is used as a common input for hardware and M&S development, reducing differences in interpretation. Rather than attempting to align system environments, the environments themselves are eliminated. System inputs (normally part of the environment) are explicitly defined and used for both the hardware and model. Finally, common data products that must be producible by the model and the hardware are defined (ahead of development).
By using the same types of information, produced from the same inputs, a direct comparison between the model and hardware becomes possible, providing a sound foundation for validation. This method avoids focusing on replicating the effects of techniques on sensors; instead, it ensures the fundamental behavior of the model matches the behavior of the hardware. The realization of such a design methodology hinges on developing the necessary standardized data products.
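To make the idea concrete, the “explicit inputs and common data products” requirement might be captured in a shared schema that both the hardware test harness and the model must emit. The sketch below is purely illustrative; the names and fields are our assumptions, not a published format.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch of a "common data product" that both the EA
# hardware test harness and the EA model would be required to emit.
# All names and fields are illustrative, not a published format.

@dataclass
class StimulusRecord:
    stimulus_id: str      # identifies the predefined input signal
    start_time_s: float   # when the stimulus was applied

@dataclass
class ResponseRecord:
    stimulus_id: str      # ties the response back to its stimulus
    emit_time_s: float    # when the EA response was emitted
    signal: dict          # the response signal, in a standardized description

@dataclass
class DataProduct:
    source: str           # "hardware" or "model"
    stimuli: List[StimulusRecord]
    responses: List[ResponseRecord]
```

Because both the hardware and the model produce the same record types from the same stimuli, comparing them reduces to comparing like with like.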
WHAT IS AN ONTOLOGY?
While the word ontology has numerous definitions, well-known computer scientist Thomas Gruber defines the term as that which “represents a specific collection of objects, concepts, and the relationships between them.” At a general level, an ontology can be used to describe the breadth of a particular subject. Practically, anytime an EW problem is discussed—whether it be about techniques, interactions, or expected effects—we are implicitly using such a domain of discourse [1].
Incidentally, this idea of considering and evaluating technical “things” and their functionalities with and through the concepts, language, and relationships associated with them is not new. In fact, the “Father of Modern Chemistry,” 18th-century French nobleman Antoine Lavoisier, once wrote [2]:
“It is impossible to dissociate language from science or science from language, because every natural science always involves three things: the sequence of phenomena on which the science is based; the abstract concepts which call these phenomena to mind; and the words in which the concepts are expressed. To call forth a concept, a word is needed; to portray a phenomenon, a concept is needed. All three mirror one and the same reality.”
Accordingly, a well-defined ontology that focuses on EA system-sensor interaction can establish a formal language for unambiguously communicating what the EA system is doing and enabling the evaluation of its unique impact on various sensors. A good EA ontology is also capable of being applied not only to modeling systems that support it but to the EA hardware and sensor hardware as well.
On the other hand, without an ontology (implicit or otherwise) that creates specific terminology, meanings, and relationships, there is no guaranteed means of communication among EA systems and sensors. If the scope of the defined ontology is limited, so too is its capability. And limitations in capability create far-reaching challenges in system specification, modeling, implementation, testing, and validation.
ONTOLOGIES IN MODELING
EW representations in M&S vary widely. EW interactions are often defined at a level of fidelity appropriate for a specific tool, making the definition of a common ontology among all tools difficult. Survivability modeling tools often define EA responses at a detailed (pulse or sample) level, where the impacts on radars can be assessed through emulative signal processing. However, models often use unique, custom methods for defining EA, creating a need to translate EA descriptions from some source representation into these custom formats. In the best case, this is a time-consuming and potentially error-prone process; in the worst case, the underlying language may be insufficient to capture the intricacies of the technique.
Mission-level models may define EA only at a so-called “effects” level, as appropriate for the large scale of their scenarios. A good example of a widely used, open, and accepted standard is the Institute of Electrical and Electronics Engineers (IEEE) 1278.1-2012 Distributed Interactive Simulation (DIS) standard [3]. The DIS standard defines a language by which simulation entities can communicate, and it includes a means of representing electromagnetic emissions interactions, as well as a Jamming Technique record that can provide details on jamming effects.
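As a small illustration of the kind of machine-readable structure DIS provides, the sketch below decodes a Jamming Technique record, assuming the IEEE 1278.1-2012 layout of four consecutive 8-bit fields (Kind, Category, Subcategory, Specific); the enumerated meanings of those values are defined separately in the SISO-REF-010 enumerations document.

```python
import struct

def parse_jamming_technique(record: bytes) -> dict:
    """Decode a DIS Jamming Technique record.

    Assumes the IEEE 1278.1-2012 layout: four consecutive 8-bit fields
    (Kind, Category, Subcategory, Specific). The enumerated meanings of
    the values are defined in the SISO-REF-010 enumerations document.
    """
    kind, category, subcategory, specific = struct.unpack(">BBBB", record[:4])
    return {"kind": kind, "category": category,
            "subcategory": subcategory, "specific": specific}

# Example: a 4-byte record extracted from an Electromagnetic Emission PDU.
print(parse_jamming_technique(bytes([1, 2, 0, 0])))
```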
More recently, many modeling environments are evolving toward pulse descriptor word (PDW)-based approaches for exchanging information. This is a positive evolution; however, careful attention must be paid to commonality, robustness, completeness, and efficiency/scalability. Many such examples exist, but releasability constraints are both an obstacle to citing them here and part of a broader issue that limits collaboration in this space.
AN EA ONTOLOGY
A useful ontology that maximizes interoperability between EA hardware and models should possess three fundamental qualities. The ontology must (1) define fundamental input/output (I/O) information between EA systems and sensors, (2) be capable of describing all known interaction types, and (3) with reasonable certainty, be able to describe future, unobserved interaction types. More concisely, a good ontology should provide a language that is capable of describing all interactions (observed and plausible) between EA systems and sensors.
An ontology focused on interfacing EA systems and sensors should define the fundamental inputs to and outputs from those systems. The fundamental I/O for these hardware systems is radio frequency (RF) energy. The proposed ontology describes the RF signal that is transmitted from one entity and sensed by another. Sensors (be they hardware or modeled) determine how EA transmissions may impact their perception.
An effective ontology must also be flexible enough to describe, with reasonable certainty, interactions or signals that have not yet been observed. This flexibility requires defining the interaction class carefully and ensuring the ontology provides full coverage of the class.
EXAMPLE: REPRESENTING EA SYSTEM I/O WITH AN ONTOLOGY
As mentioned, the fundamental interaction mechanism between an EA system and an RF sensor is ultimately RF energy or signals. An ontology for describing RF signal interactions should thus be capable of describing known sensor signal types (e.g., a simple pulsed waveform, a linear frequency modulation on pulse [LFMOP] waveform, binary phase-shift keying [BPSK]), as well as yet-to-be-defined signal types. The ontology should also describe these signals in a compact and intuitive manner. Using in-phase and quadrature (IQ) samples is enticing due to the ability to specify arbitrarily complex waveforms, but the primary drawbacks of such a representation are (1) inefficiency, (2) the lack of useful metadata, and (3) little opportunity for manipulation/optimization.
An example ontology attempts to represent signals as efficiently as possible while allowing for ease of use and full coverage of the signal space. The resulting ontology is divided into the three fundamental descriptor types—(1) PDW, (2) Noise Descriptor, and (3) Sampled Signal—as illustrated in Figure 5.
The PDW type handles conventional modulated waveforms. The Noise Descriptor type handles random signals that are best described stochastically. The Sampled Signal type handles signals that may not be well represented by PDW or Noise Descriptor, providing full coverage for any emitted signal.
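Rendered as code, the three descriptor types map naturally onto a tagged union. The following is a minimal sketch; the class and field names (and units) are our assumptions, not the ontology’s actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class PDW:
    """Conventional modulated waveform, described parametrically."""
    toa_s: float        # time of arrival
    width_s: float      # pulse width
    freq_hz: float      # carrier frequency
    power_dbm: float
    augmentations: list = field(default_factory=list)  # modulation layers (see below)

@dataclass
class NoiseDescriptor:
    """Stochastic signal, characterized by its power spectral density shape."""
    start_s: float
    duration_s: float
    psd_breakpoints_hz: List[float]     # frequencies at which the PSD is specified
    psd_levels_dbm_per_hz: List[float]  # PSD level at each breakpoint

@dataclass
class SampledSignal:
    """Direct IQ samples: full coverage of the signal space, used as a last resort."""
    start_s: float
    sample_rate_hz: float
    iq: List[complex]

# Any emitted signal component is exactly one of the three.
Descriptor = Union[PDW, NoiseDescriptor, SampledSignal]
```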
Layerable augmentations describe types of modulations applied to a base signal, and they can be applied to achieve common effects, such as LFMOP, BPSK, amplitude-shift keying (ASK), and frequency hopping. Using this method, the majority of radar signals can be expressed in an extremely compact form. Support for regularly repeating pulses also exists.
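Augmentations can then be pictured as modulation layers attached to a base PDW. A sketch building on the hypothetical classes above (the augmentation set and its parameters are ours, for illustration):

```python
from dataclasses import dataclass

@dataclass
class LinearFMOnPulse:
    """LFMOP: linear frequency sweep across the pulse."""
    sweep_bandwidth_hz: float

@dataclass
class BinaryPhaseCode:
    """BPSK: per-chip 0/180-degree phase code."""
    chips: str  # e.g., "1110010" for the 7-bit Barker code

# A 10-microsecond pulse at 9.5 GHz with a 5-MHz linear chirp applied:
chirped = PDW(toa_s=0.0, width_s=10e-6, freq_hz=9.5e9, power_dbm=30.0,
              augmentations=[LinearFMOnPulse(sweep_bandwidth_hz=5e6)])
```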
Noise signals are not well described by PDWs. Instead, the Noise Descriptor type characterizes such signals by defining the shape of the signal’s power spectral density. Complex overall noise descriptions can be created by combining multiple Noise Descriptors in a single Transmission.
The Sampled Signal type, which uses direct IQ samples, has proven coverage for all signals that can be transmitted and received by sensors and EA systems, though it is considered a “weapon of last resort.” These signals can be combined with other components to form more complex signal descriptions.
The approach is also extensible: PDW and noise types can be extended by adding augmentations or spectrum shapes in future versions.
Note that fundamental signal types can be used as building blocks for more complex signals. As shown in Figure 6, this ability is achieved with a generic Transmission Component, where multiple components can be used to define an overall Transmission. Transmission Components are used to narrow the focus of signal descriptions to regions of time/frequency that contain signal energy, allowing for compressed, efficient representations.
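Continuing the sketch, this composition might look like a Transmission holding a list of Transmission Components, each bounding its descriptor to the time/frequency region that actually contains energy (again, all names are illustrative):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TransmissionComponent:
    """Bounds one descriptor to a region of time/frequency containing energy."""
    start_s: float
    stop_s: float
    low_freq_hz: float
    high_freq_hz: float
    descriptor: object  # a PDW, NoiseDescriptor, or SampledSignal

@dataclass
class Transmission:
    """An overall emission, built from one or more components."""
    components: List[TransmissionComponent]

# Example: the chirped pulse from above plus a band of noise, in one Transmission.
tx = Transmission(components=[
    TransmissionComponent(0.0, 10e-6, 9.4975e9, 9.5025e9, chirped),
    TransmissionComponent(0.0, 1e-3, 9.45e9, 9.55e9,
                          NoiseDescriptor(0.0, 1e-3, [9.45e9, 9.55e9], [-90.0, -90.0])),
])
```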
INTERFACE IMPLICATIONS
The proposed ontology naturally translates to a well-defined data structure for conveying information about electromagnetic emissions. With this structure in hand, designing a standardized interface between a modeling environment and an EA system model becomes significantly easier, as illustrated in Figure 7.
Frequently, EA models are supported in modeling environments through direct modification of source code. With a proper interface, however, this approach is unnecessary. General modifications are made in an EW Module within the modeling environment, and that module conforms to a specific interface. Any EA system model that also conforms to the interface should immediately work in the modeling environment.
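In practice, such an interface could be as simple as an abstract contract that every EA system model implements and that the EW Module programs against. The following is a hypothetical sketch, reusing the Transmission type from the earlier examples:

```python
from abc import ABC, abstractmethod
from typing import List

class EASystemModel(ABC):
    """Hypothetical standardized contract between a modeling environment's
    EW Module and any EA system model."""

    @abstractmethod
    def receive(self, stimuli: List["Transmission"], time_s: float) -> None:
        """Deliver sensed RF energy, expressed in the ontology, to the model."""

    @abstractmethod
    def transmit(self, time_s: float) -> List["Transmission"]:
        """Collect the model's emitted responses, in the same ontology."""

# The EW Module can host any conforming model without source-code changes:
def step_ew_module(model: EASystemModel, inbound: List["Transmission"], t: float):
    model.receive(inbound, t)
    return model.transmit(t)
```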
EA system models developed to conform to a standardized interface have several advantages beyond a modeling-environment-agnostic development process. Interface-conforming models can be more easily tested and compared to references, without consideration of the modeling environment. More specific examples of these advantages are provided in the following section.
V&V IMPLICATIONS
A design approach using the described ontology and interface has significant implications for verification and validation (V&V). Standardized data products that are applicable to both EA hardware systems and EA models provide the opportunity to directly compare the behavior of EA systems and their counterpart models.
In the conventional approach, hardware and model verification often occur independently with different interpretations of system behavior. Hardware engineers verify the hardware performs within specifications. Model engineers verify the model behaves as expected based on (likely differing) interpretations of system behavior. Hardware references are often exercised against threat representations that differ from modeling counterparts, and in some cases validation occurs using surrogate sensors.
Model validation often attempts to ensure that the EA model and its effects on threat sensors match the EA hardware and its effects on surrogate threats. This task is difficult because of (1) misalignment in interpretation of the EA description, (2) differences between threat models and threat surrogates (some surrogates or models may not even exist), and (3) differing data products available from models and hardware tests. Even a moderate degree of complexity in the EA system all but guarantees failure of conventional validation.
The proposed approach aims to streamline the verification process by reducing differences in interpretation between EA model development and EA hardware development, as shown in Figure 8. EA systems are described in terms of algorithms and expected responses to predefined stimuli, with the defined ontology used to express each and avoid ambiguity. This approach contrasts with the typical focus on intended delivered effects against victim sensors. It also helps ensure model development and hardware development are independently working toward the same goals.
Similarly, the validation process is simplified by eliminating the reliance on threat sensor models for collecting data. Inputs and outputs for the EA model and EA hardware are directly comparable, allowing for the validation process shown in Figure 9, which focuses on this direct comparison between the EA system model and the hardware. Validation of the hardware proceeds as usual, while validation of the model is performed against an identical set of stimuli. The stimuli can be generated by a threat surrogate or simply defined based on a desired set of known threat waveforms. With all stimuli and responses described using the same ontology, a direct comparison of responses can be made to evaluate the validity of the model relative to the hardware.
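Because both sides speak the same ontology, the comparison itself can be largely mechanical. A minimal hypothetical harness, assuming hardware responses have already been captured in the common Transmission form and that a scalar difference metric has been agreed upon:

```python
def validate_model(model, stimuli, hardware_responses, compare):
    """Drive the model with the same stimuli used against the hardware
    and score each response pair.

    hardware_responses[i] is the captured hardware output for stimuli[i],
    already expressed in the common ontology; compare() returns a scalar
    difference measure. Both are assumptions for this sketch.
    """
    scores = []
    for stimulus, hw_response in zip(stimuli, hardware_responses):
        model.receive([stimulus], time_s=0.0)
        model_response = model.transmit(time_s=0.0)
        scores.append(compare(model_response, hw_response))
    return scores
```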
RELEVANT TECHNOLOGIES AND PROGRESS
It should be noted that progress is being made in this general area and that useful building blocks exist in the community to achieve the aforementioned goals. PDW-based approaches for exchanging information have become increasingly ubiquitous both in modeling and hardware, though standards, formats, and features still vary widely.
For example, the Overarching Dynamic Electronic-warfare System Standard Architecture (ODESSA) [4], which is being developed by the National Ground Intelligence Center (NGIC), will “define the radar parameters in a standard format . . . by ingesting these [PDWs] directly.” Additionally, the Naval Air Warfare Center Weapons Division (NAWCWD) has created and is improving a so-called Common Data File (CDF) format to help standardize EW parametric data formats.
The Georgia Tech Research Institute’s (GTRI’s) Technique Description Language (TDL) is designed to maximize flexibility and minimize the development time of techniques on digital radio frequency memory (DRFM) systems by using a domain-specific language that executes directly on hardware. TDL may also provide additional opportunities to align technique representations between the model and hardware.
Many other efforts in this area are also ongoing throughout the DoD community, but releasability constraints limit what can be discussed about them in this open format.
A WAY AHEAD
As discussed, modern EA systems are extremely and increasingly complex, requiring improved approaches to model development, alignment, and V&V. Moreover, movement to so-called Cognitive EA systems is sure to present additional challenges. The requirement for a near-perfect alignment between the test environment and the modeled environment (including any sensors present) is often not met. Comparable data products are also often not available due to the nature of calculated quantities in models vs. observable phenomena in test environments. The need for improved approaches is thus clear.
A robust ontology can function as a common language among modeling and hardware components and thereby help address problems with the design and testing of EA systems. This ontology can be used to help define system behavior and can, in turn, be used as a verification tool in the design process for both hardware prototypes and models. It can also be used with both the hardware and model to provide data products that are directly comparable, which is a crucial requirement of a sound validation approach. Finally, a good ontology can also encourage the use of standardized interfaces, which can provide transparency in EA model behavior, further facilitating an effective design process overall.
Community-wide agreement and adoption of a robust, widely published ontology for EA can be immensely beneficial for all members of the community, and there are building blocks to start from. The benefits include (1) improving the efficiency and efficacy of interorganizational collaboration; (2) allowing for models to be inspected, modified, or even created by individuals without extensive expert knowledge of M&S environments; and (3) greatly increasing validation efficacy while also reducing complexity and cost. The end result should be models that more accurately predict system performance and aid in the development of EA systems that are both understandable and effective.
About the Authors
Dr. Joshua Wells is a Senior Research Engineer with GTRI. His experience spans image/video/signal processing, target tracking, and power-efficient computing methods, emphasizing algorithmic optimization. His current work focuses on the M&S of radars, EA systems, and related electromagnetic phenomena. Dr. Wells is also an instructor in Georgia Tech’s School of Electrical and Computer Engineering, where he teaches courses on computer engineering and associated physics. He holds a bachelor’s degree in computer engineering from the University of North Carolina Charlotte and a master’s and doctorate in electrical and computer engineering from Georgia Tech.
Mr. Kyle Harrigan is a Senior Research Engineer with GTRI. He has spent the last 20+ years working in the areas of electromagnetic warfare and tactical data links, heavily focusing on M&S for a variety of DoD customers. He is also an instructor with Georgia Tech Professional Education, for which he teaches Basic RF Electromagnetic Warfare Concepts. Mr. Harrigan holds a bachelor’s degree in computer engineering and a master’s in electrical and computer engineering from Georgia Tech.
Acknowledgments
Portions of this work were funded by the Naval Air Warfare Center Aircraft Division’s Combat Survivability Division via the Simulated Engagement Analysis Laboratory. A version of this article was previously presented at the 2023 Joint Aircraft Survivability Program Model User’s Meeting [5].
References
- Gruber, T. R. “Toward Principles for the Design of Ontologies Used for Knowledge Sharing?” International Journal of Human-Computer Studies, vol. 43, pp. 907–928, November 1995.
- de Lavoisier, A.-L. Traité Élémentaire de Chimie. MAXTOR, France, 1789.
- Institute of Electrical and Electronics Engineers. “IEEE Standard for Distributed Interactive Simulation—Application Protocols.” IEEE Std 1278.1-2012 (Revision of IEEE Std 1278.1-1995), pp. 1–747, 19 December 2012.
- U.S. Army. “Department of Defense Fiscal Year (FY) 2024 Budget Estimates: Research, Development, Test & Evaluation, Army RDT&E − Volume III, Budget Activity 6.” March 2023.
- Harrigan, K., and J. Wells. “EA Ontologies for M&S and Test.” Presentation at the Joint Aircraft Survivability Program Model User’s Meeting, Atlanta, GA, 2023.