Real-time software boosts mission- and life-critical credibility

Aug. 2, 2021
The high-reliability underpinnings of real-time operating systems are also improving their capabilities in trusted computing and information security.

NASHUA, N.H. - Real-time software in embedded computing, such as real-time operating systems (RTOSs) in military and aviation applications, must work quickly and without errors to ensure mission success. Beyond reliability, experts in real-time technology say that software must provide strong trusted computing and information security for critical and classified data.

“The first trend is that system security is being taken more seriously,” explains Richard Jaenicke, director of marketing for Green Hills Software in Santa Barbara, Calif. “The number of breaches continues to grow each year. One part of the solution is a zero-trust approach, where each user and device is verified against approved actions in a security policy. That approach contrasts with relying on perimeter defenses such as an initial login.

“An executive order issued on 12 May 2021 requires the adoption of a zero-trust architecture by all federal agencies,” Jaenicke points out. “In an embedded system, serious software security starts with a separation kernel, a very small piece of software that is the only software running in privileged kernel mode, and its only function is to enforce the fundamental security policies of data isolation, fault isolation, and resource sanitization between applications. The best separation kernels are formally verified to provide that separation so that higher-level applications can count on the kernel to be non-bypassable, always invoked, and tamper-proof.”
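As a rough illustration of those separation-kernel policies, the minimal C sketch below shows how a kernel might mediate every inter-partition transfer against a static flow policy and scrub memory before reuse. The structures and function names are hypothetical and exist only for illustration; they are not the INTEGRITY-178 API, which Green Hills does not publish in this form.

```c
/* Conceptual sketch of separation-kernel policy enforcement.
 * Illustrative only; names and structures are hypothetical and
 * do not reflect any vendor's actual kernel interface.
 */
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define NUM_PARTITIONS 4

/* Static security policy: which partition may send data to which. */
static const bool flow_allowed[NUM_PARTITIONS][NUM_PARTITIONS] = {
    /*            to: P0     P1     P2     P3  */
    /* from P0 */ { false, true,  false, false },
    /* from P1 */ { false, false, true,  false },
    /* from P2 */ { false, false, false, true  },
    /* from P3 */ { false, false, false, false },
};

/* Every inter-partition transfer is mediated here ("always invoked");
 * applications have no other path between address spaces
 * ("non-bypassable"). */
int kernel_send(int from, int to, const void *msg, size_t len,
                void *dest_buf, size_t dest_len)
{
    if (from < 0 || from >= NUM_PARTITIONS ||
        to   < 0 || to   >= NUM_PARTITIONS)
        return -1;                      /* invalid partition ID        */
    if (!flow_allowed[from][to])
        return -1;                      /* data isolation: policy deny */
    if (len > dest_len)
        return -1;                      /* fault isolation: no overrun */
    memcpy(dest_buf, msg, len);         /* copy between address spaces */
    return 0;
}

/* Resource sanitization: scrub a buffer before a different partition
 * reuses it, so no data leaks through recycled memory. */
void kernel_sanitize(void *buf, size_t len)
{
    memset(buf, 0, len);
}
```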

Seeking security

Jaenicke cites the Green Hills INTEGRITY-178 RTOS as a prime example of a secure zero-trust software solution. In late 2020, the U.S. Army selected the INTEGRITY-178 Time-Variant Unified Multi-Processing (tuMP) RTOS as part of the operating system upgrade to the Improved Data Modem (IDM-401) program.

The IDM-401 digitizes Army aviation and is fielded on every modernized Army helicopter, including the CH-47 Chinook, AH-64 Apache, and UH-60 Black Hawk. The IDM helps connect several different helicopter radios and the Blue Force Tracker transceiver, and enables rapid data transfer. The program supports Open Systems Architecture (OSA), Future Airborne Capability Environment (FACE), and Common Operating Environment (COE) interoperability standards.

“For almost a decade, we have been talking about the problem of shared resource contention and multicore interference, the resulting lack of determinism, and its impact on safety,” Jaenicke says. “An RTOS-level solution has existed for the past five years, and finally we have two multicore avionics systems that have received technical standard order (TSO) authorization after having met DO-178C for airborne safety and CAST-32A for addressing multicore issues.”

Among the avionics companies with TSO authorization is CMC Electronics in Saint-Laurent, Quebec, whose authorizations cover the company’s PU-3000 avionics computer, which can act as a flight director, and the MFD-3068 multicore smart display, which can serve as the primary flight display, Jaenicke points out.

“Both systems are authorized to the highest safety rating (DAL A), and both systems depend on the robust multicore partitioning provided by the INTEGRITY-178 tuMP RTOS. Such multicore avionics systems enable greater functionality and further consolidation of flight functions to reduce the number of boxes and the overall size, weight, and power (SWaP),” Jaenicke says.

The Green Hills INTEGRITY-178 tuMP multicore RTOS addresses the interference challenges discussed in CAST-32A with its Bandwidth Allocation and Monitoring (BAM) capability. BAM was developed to DO-178C DAL A objectives to mitigate the interference risks for the IDM. The INTEGRITY-178 tuMP BAM monitors and enforces the bandwidth allocation of the chip-level interconnect to each of the cores, guaranteeing a particular allocation of shared resources. The supported bandwidth management technique emulates a high-rate hardware-based approach to ensure continuous allocation enforcement.
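In broad terms, that kind of bandwidth regulation pairs per-core budgets with high-rate monitoring of interconnect or memory-transaction counters. The C sketch below is a minimal, generic illustration of the idea only; Green Hills has not published BAM internals, and the counter-read and throttling hooks shown here are hypothetical placeholders.

```c
/* Generic sketch of per-core bandwidth allocation and monitoring.
 * The hardware hooks are hypothetical placeholders; actual RTOS
 * implementations differ.
 */
#include <stdint.h>

#define NUM_CORES 4

/* Interconnect/memory transactions each core may issue per
 * regulation period (values are illustrative). */
static const uint32_t budget[NUM_CORES] = { 40000, 20000, 20000, 10000 };

static uint32_t used[NUM_CORES];

/* Hypothetical hardware hooks. */
extern uint32_t read_mem_transactions(int core); /* counter since last reset   */
extern void     reset_mem_counter(int core);
extern void     throttle_core(int core);         /* stall or deprioritize core */
extern void     release_core(int core);

/* Called at a high, fixed rate (e.g., from a timer interrupt) to
 * emulate continuous hardware enforcement. */
void bandwidth_monitor_tick(void)
{
    for (int core = 0; core < NUM_CORES; core++) {
        used[core] += read_mem_transactions(core);
        reset_mem_counter(core);
        if (used[core] > budget[core])
            throttle_core(core);   /* core exhausted its allocation */
    }
}

/* Called once per regulation period to restore all allocations. */
void bandwidth_period_reset(void)
{
    for (int core = 0; core < NUM_CORES; core++) {
        used[core] = 0;
        release_core(core);
    }
}
```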

Embracing current tech

Michel Chabroux, the senior director of product management at Wind River Systems Inc. in Alameda, Calif., says another real-time software trend is to leverage developments that are coming out of the information technology (IT) sector.

“This includes the use of, for example, Rust or WebAssembly beyond traditional C and C++, and containers as a software deployment and management aid,” Chabroux explains. “The primary reason is to accelerate development and reduce time to market. Containers, for example, enable the use of existing IT technologies and infrastructure to deploy and manage software.”

Chabroux also says that the “use of multi-core [processing] is now there for good - many used multi-core hardware in single-core mode.”

Open standards

Wind River’s Chabroux and Green Hills’s Jaenicke say that there is a need for standards like FACE, ARINC, and others. “This trend has been slowly gaining momentum to the point now where most military electronics systems are being specified to follow a Modular Open Systems Approach (MOSA) as directed by the tri-services memo from [7 Jan. 2019],” Jaenicke says.

“MOSA works its way down to the RTOS in the form of standards such as the FACE technical standard,” Jaenicke continues. “The FACE Technical Standard is an open specification that leverages other open standards. In the operating system segment, that includes ARINC 653 and POSIX. But MOSA requires more than just open standards. It also requires modularity and certification of conformance. The FACE Technical Standard defines a systems architecture that breaks the software into five segments, including the operating system segment, to create the modularity of being able to modify or replace the solution for any segment without affecting the other segments.”
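For readers unfamiliar with the operating system segment’s interfaces, the sketch below shows what a periodic process inside an ARINC 653 partition looks like, using the standard APEX services (CREATE_PROCESS, START, SET_PARTITION_MODE, PERIODIC_WAIT). The header file name is a placeholder, since header organization varies by RTOS vendor, and the attribute values are illustrative rather than drawn from any fielded system.

```c
/* Sketch of a periodic process in an ARINC 653 partition using the
 * standard APEX services. The header name below is a placeholder;
 * actual headers vary by RTOS vendor.
 */
#include "apex_api.h"   /* hypothetical umbrella header for APEX types/services */
#include <string.h>

static void nav_task(void)
{
    RETURN_CODE_TYPE rc;
    for (;;) {
        /* ... read sensors, update the navigation solution ... */
        PERIODIC_WAIT(&rc);   /* suspend until the next period */
    }
}

void partition_main(void)     /* partition cold/warm start entry point */
{
    PROCESS_ATTRIBUTE_TYPE attr;
    PROCESS_ID_TYPE pid;
    RETURN_CODE_TYPE rc;

    memset(&attr, 0, sizeof(attr));
    strncpy(attr.NAME, "NAV_TASK", sizeof(attr.NAME) - 1);
    attr.ENTRY_POINT   = (SYSTEM_ADDRESS_TYPE)nav_task;
    attr.STACK_SIZE    = 8192;
    attr.BASE_PRIORITY = 10;
    attr.PERIOD        = 20000000;   /* 20 ms, APEX times are in nanoseconds */
    attr.TIME_CAPACITY = 5000000;    /* 5 ms execution budget per period     */
    attr.DEADLINE      = HARD;

    CREATE_PROCESS(&attr, &pid, &rc);
    START(pid, &rc);

    /* Leave initialization; the partition scheduler takes over. */
    SET_PARTITION_MODE(NORMAL, &rc);
}
```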

The differences between open-systems standards don’t end there. “Unlike most standards, the FACE Technical Standard has a companion conformance test suite and a requirement for independent verification of conformance,” Jaenicke says. “The INTEGRITY-178 tuMP RTOS was the first operating system segment certified conformant to the FACE Technical Standard, edition 3.0, including C, C++, and Ada runtimes on Arm, Intel, and Power architectures.”

Earlier this year, DDC-I Inc. in Phoenix announced FACE 3.0 conformance for the company’s Deos safety-critical DO-178 RTOS and Open Arbor development tools running on Arm and x86 processors. The certification covers the FACE Technical Standard Edition 3.0 Safety Base and Security Profiles for the Operating System Segment (OSS).

The Deos RTOS Platform for FACE Technical Standard 3.0 combines the time- and space-partitioned Deos RTOS and SafeMC multi-core technology with RTEMS (Real-Time Executive for Multiprocessor Systems), a mature, deterministic, open-systems, hard-real-time POSIX executive.

Deos provides ARINC 653 APEX interfaces and multi-core scheduling. A para-virtualized implementation of RTEMS, which runs in a secure Deos partition, provides POSIX interfaces and scheduling.

The integrated software platform combines the strengths and pedigree of ARINC 653 and POSIX RTOSs, providing industry-standard interfaces and features for conformance with the FACE Technical Standard Safety Base and Security Profiles for the Operating System Segment in a time- and space-partitioned, hard-real-time, multi-core execution model.

Deos is a safety-critical embedded RTOS that uses cache partitioning, memory pools, and safe scheduling to deliver high CPU utilization. First certified to DO-178 DAL A in 1998, Deos provides a FACE Safety Base Profile that features hard real-time response, time and space partitioning, and both ARINC 653 and POSIX interfaces.
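On the POSIX side of those profiles, a fixed-priority periodic thread is the typical building block. The sketch below is a generic illustration using standard POSIX calls (SCHED_FIFO scheduling and an absolute-time clock_nanosleep); it assumes a POSIX profile that includes threads and real-time scheduling, and the period and priority values are illustrative.

```c
/* Sketch of a fixed-priority periodic thread using standard POSIX APIs. */
#define _POSIX_C_SOURCE 200809L
#include <pthread.h>
#include <sched.h>
#include <time.h>
#include <stdio.h>

static void *control_loop(void *arg)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);
    for (;;) {
        /* ... run one control-law iteration ... */
        next.tv_nsec += 10000000;              /* 10 ms period */
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec  += 1;
        }
        /* Sleep to an absolute deadline to avoid cumulative drift. */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    struct sched_param sp = { .sched_priority = 50 };
    pthread_t tid;

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);  /* fixed-priority scheduling */
    pthread_attr_setschedparam(&attr, &sp);

    int rc = pthread_create(&tid, &attr, control_loop, NULL);
    if (rc != 0) {
        /* SCHED_FIFO typically requires elevated privileges on Linux. */
        fprintf(stderr, "pthread_create failed: %d\n", rc);
        return 1;
    }
    pthread_join(tid, NULL);
    return 0;
}
```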

Virtualization

Real-time experts note that hypervisors, the software or hardware layers that run virtual machines, are making a mark in this sector. Ian Ferguson, vice president of marketing and strategic alliances at Lynx Software Technologies in San Jose, Calif., told Military & Aerospace Electronics in 2020 that hypervisors are seeing increased use in mixed-criticality systems.

“Separating out resources that are doing video processes from other resources that are doing time sensitive stuff around GPS networks,” Ferguson says. “Increased use of hypervisors into those elements — that helps partition parts of the software that you have to take through certification and prove that you can isolate that from the other pieces of the system that is running on Linux typically.”

Green Hills’s Jaenicke cautions that much virtualization technology is not tailored to real-time requirements. “Virtualization became popular in servers, and many of the virtualization solutions for real-time systems are based on similar technology instead of being tailored for the lower latency and determinism required for real-time systems,” explains Jaenicke. “For example, a Type 1 hypervisor (or even a so-called Type 0 hypervisor) runs directly on the hardware and uses hardware features like a memory management unit (MMU) to enforce isolation of applications in different memory address ranges. That sounds like it should be the highest performance given that it runs directly on the hardware. However, it also means that each and every OS has to run on top of the hypervisor, no matter whether it is an RTOS or a general-purpose OS like Linux or Windows.”

There have been improvements, Jaenicke says. “A better approach for real-time systems is to have the RTOS running on the hardware and then layer virtualization on top of the RTOS only where it is needed to run Linux or a legacy OS. In that way, only the non-real-time applications have to pay the latency and determinism penalty of running on the hypervisor. Note that such an approach actually is more secure because it gets the huge virtualization code base out of the kernel running in privileged mode while still using the MMU and other hardware features to provide application isolation.”

In 2019, Wind River introduced the Helix Virtualization Platform, which was recognized as a Platinum honoree in the Military & Aerospace Electronics Innovators Awards. The Wind River Helix Virtualization Platform combines the company’s commercial RTOS and embedded Linux distribution into a software-development system for deployed systems that involve edge computing.

Updating legacy software

The platform enables other operating systems to run unmodified within the same framework, providing a common software-development environment across the Wind River portfolio. With Helix, legacy software can remain unchanged while running alongside newer applications, on a consistent, scalable, and agile platform for edge devices.

Helix addresses critical-infrastructure development needs, from dynamic environments without certification requirements, to regulated static applications such as avionics and industrial systems, to systems that mix safety-certified applications with non-certified ones, as in automotive.

The offering comprises VxWorks along with its virtualization technology, integrated with Wind River Linux and Wind River Simics for system simulation. It meets DO-178C, IEC 61508, and ISO 26262 safety standards, and is operating system-agnostic for deployed systems.

“There are some signal processing applications that have very low latency requirements,” says Wind River’s Chabroux. “It is important that an RTOS ensures latency is kept to a strict minimum. Looking at Time-Sensitive Networking (TSN), born in the industrial market, we are seeing cases where end-users are asking for single-digit nanosecond latency for network traffic.”

Helix also provides multi-core hardware support and availability on Arm, Intel, NXP, and Xilinx silicon platforms that enable 32- and 64-bit guest operating systems.

Lynx Software Technologies’s Ferguson noted in 2020 that certifying multi-core processors for cockpit avionics is difficult because the systems weren’t designed with that task in mind.

“They’re designed for servers, they’re designed for base stations, they’re designed for whatever other workloads in video technology; [they aren’t] designed with Lockheed as their primary customer focus,” Ferguson says of multi-core processors. “Certification and how it works around the current flavor of multi-core products is still a big challenge. How do you guarantee determinism on certain things? What happens when you have memory systems that have unpredictable access times and those pieces? There are people that have claimed to have solved multi-core processors for avionics; we are in the camp that thinks it isn’t solved yet. There are things you can do to mitigate it, but I think there’s going to need to be more work done on the underlying hardware to get to a place where software can help partner up with hardware to deliver...where the FAA can truly feel comfortable that a multicore system can be certified for all eventualities.”
