Joint efforts among the DoD, other government agencies, and industry over the past two years have driven the adoption of a common platform through the development of an open standard. The initiative begins with the government expressing what it needs from embedded systems companies, with all parties then working together to achieve those goals. The top objectives are to specify base system architectures for common systems, such as selecting a hardware standard (in this case, the existing OpenVPX standard), and to ensure system interoperability. Known as The Open Group Sensor Open Systems Architecture™ (SOSA™), this effort has enabled collaboration across industry boundaries that was not achievable before.

But the question of interoperability across modules remained. Restricting usage to a specific set of OpenVPX slot profiles has helped move that effort forward. Chassis monitoring and management, although long included in other open standards, is now a requirement within OpenVPX, as specified by VITA 46.11. (For a brief overview of SOSA and its development, visit our Alphabet Soup Blog.)

Added capabilities tend to mean increased power requirements, which in turn create a need for new techniques to cool the modules. Power interface definitions have been normalized to simplify power supplies and make them more interoperable at the system level. System management is another area reshaping the infrastructure, particularly in the normalization of power systems.
The platforms that packaging suppliers design are the base infrastructure needed to build rugged embedded systems. It's a natural progression, then, that as application requirements evolve, so does the framework of the system itself. For example, in many newer systems, high-speed channels are being implemented in the backplane to keep pace with data requirements across the network infrastructure. To address the need for precise timing across these channels, radial clocking has been introduced.
FIGURE 1: As system capabilities increase, the needs of the infrastructure must evolve as well.
So, what are some of the implications of these interfaces increasing in speed? First, backplanes now operate two and a half times faster, moving from 10 G to 25 G. This requires exceptional signal integrity modeling and simulation to verify design assumptions. Higher channel speeds drive the need for new high-speed materials that handle loss better but are also more expensive. Better connectors are required as well, which is why the embedded community has focused its efforts for some time on high-speed VPX connectors; these are now available and backward compatible across the 10 G to 25 G range.
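To see why the jump from 10 G to 25 G stresses the channel, it helps to run a back-of-the-envelope insertion-loss budget. The sketch below is illustrative only: the loss-per-inch and connector-loss figures are assumed placeholder values, not measured numbers for any particular laminate or connector, and the trace length and SerDes budget are likewise assumptions.

```python
# Back-of-the-envelope channel loss budget for a VPX backplane link.
# All material/connector numbers below are illustrative assumptions,
# not vendor-measured values.

def channel_loss_db(trace_in, db_per_inch, connector_db, n_connectors=2):
    """Total insertion loss: trace loss plus connector transitions."""
    return trace_in * db_per_inch + n_connectors * connector_db

# NRZ Nyquist frequency is half the bit rate: 10 Gbps -> 5 GHz, 25 Gbps -> 12.5 GHz.
# Assumed loss at Nyquist for a mid-grade laminate (hypothetical figures):
cases = {
    "10 Gbps NRZ (5 GHz)":    {"db_per_inch": 0.5, "connector_db": 0.7},
    "25 Gbps NRZ (12.5 GHz)": {"db_per_inch": 1.2, "connector_db": 1.5},
}

TRACE_LEN_IN = 12.0   # backplane + daughtercard routing, inches (assumed)
BUDGET_DB = 25.0      # end-to-end loss the SerDes can equalize (assumed)

for label, p in cases.items():
    loss = channel_loss_db(TRACE_LEN_IN, p["db_per_inch"], p["connector_db"])
    status = "OK" if loss <= BUDGET_DB else "needs better material/connectors"
    print(f"{label}: {loss:.1f} dB of {BUDGET_DB} dB budget -> {status}")
```

Even with these optimistic numbers, the 25 G case consumes more than twice the loss budget of the 10 G case, which is what pushes designers toward lower-loss laminates and improved connectors.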
The Ethernet interface has also been increasing in speed; today, we're primarily working with 10 Gbps Ethernet over the backplane, trending toward 40 and 100 Gbps down the road. Similarly, PCIe is moving from Gen 3 at 8 GT/s to Gen 4 at 16 GT/s.
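The PCIe figures translate to usable bandwidth fairly directly: both Gen 3 and Gen 4 use 128b/130b line encoding, so each generation roughly doubles per-lane throughput. The short sketch below works through the arithmetic; the x4 lane width is an assumption chosen for illustration.

```python
# Effective per-lane throughput for PCIe generations over a backplane link.
# Gen 3 and Gen 4 both use 128b/130b line encoding; the lane width (x4 here)
# is an assumption for illustration.

ENCODING_EFF = 128 / 130          # 128b/130b encoding efficiency (Gen 3/4)
LANES = 4                         # assumed x4 fat-pipe link

for gen, gts in (("Gen 3", 8.0), ("Gen 4", 16.0)):
    lane_gbps = gts * ENCODING_EFF            # GT/s -> usable Gbps per lane
    link_gbs = lane_gbps * LANES / 8          # aggregate link, GB/s
    print(f"PCIe {gen}: {lane_gbps:.2f} Gbps/lane, x{LANES} ~ {link_gbs:.2f} GB/s")
```

At x4, that is roughly 3.9 GB/s for Gen 3 versus 7.9 GB/s for Gen 4, which is why the same backplane channels must carry twice the signaling rate.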
Cooling techniques have also been affected as system capabilities and processing increase. With the traditional VITA 48.1 air-cooling and VITA 48.2 conduction-cooling standards, system designers hit a wall around 60 W to 75 W. New techniques are needed to mitigate higher heat dissipation, such as air-flow-through (AFT) cooling, defined in VITA 48.8, where air is forced up through the side of the module, across the fins, and then out of the chassis. This allows the module to dissipate up to twice as much heat, so a range of 60 W to 130 W is easily handled. VITA 48.4 provides for even greater cooling via liquid flow-through (LFT) techniques.

Developments in electronic packaging have also had to keep pace with the changing heat-mitigation techniques. Improved technologies in the simple card guide offer new channels for air to flow, for example. Developers of system modules will need to follow suit, in the form of complex heatsinks that may require new manufacturing techniques but will help designers meet the necessary cooling requirements. (Figure 2)
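A first-order way to see what flow-through cooling buys is the basic heat-transport relation for air, Q = ṁ·cp·ΔT. The sketch below estimates the airflow a module needs at the power levels discussed above; the allowed air-temperature rise through the module is an assumed design parameter, and the air properties are sea-level approximations.

```python
# First-order airflow estimate for an air-flow-through (AFT) module,
# from Q = m_dot * cp * dT. Air properties are sea-level approximations;
# the allowed temperature rise is an assumed design parameter.

RHO_AIR = 1.2       # kg/m^3, air density at sea level
CP_AIR = 1005.0     # J/(kg*K), specific heat of air
M3S_TO_CFM = 2118.88

def required_cfm(power_w, delta_t_c):
    """Volumetric airflow needed to carry power_w with a delta_t_c air rise."""
    m3_per_s = power_w / (RHO_AIR * CP_AIR * delta_t_c)
    return m3_per_s * M3S_TO_CFM

DELTA_T = 15.0  # degC air temperature rise through the module (assumed)

for watts in (75, 130):
    print(f"{watts} W module: ~{required_cfm(watts, DELTA_T):.1f} CFM at dT={DELTA_T} C")
```

The airflow requirement scales linearly with dissipated power, so moving from the 75 W conduction-cooling ceiling to a 130 W AFT module demands nearly twice the air volume through the same slot pitch.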
FIGURE 2: New cooling methods are increasing the wattage dissipated across high-density embedded systems.

Efforts to simplify the number of rails coming out of a power supply also contribute to solving power challenges. A typical power supply provided +12 V, +5 V, +3.3 V, and +/-12 V. To normalize and simplify the backplane power interface, the rails were reduced to +12 V only, with 3.3 V AUX, which has required that power supply modules be re-architected. Along with this change, power supply modules have become smart, with VITA 46.11 embedding management capabilities. Now, via IPMB, an IPMC, and a platform management module, the system can query the power supply and board modules for ongoing system-health monitoring information.
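To make that query flow concrete, the sketch below assembles an IPMB "Get Sensor Reading" request of the kind a platform manager might send to a power supply's IPMC. The frame layout and checksums follow the standard IPMB framing rules; the slave addresses and sensor number are hypothetical example values, not assignments from VITA 46.11.

```python
# Build an IPMB "Get Sensor Reading" request as a platform manager
# might send to a power supply's IPMC under VITA 46.11-style management.
# Addresses and the sensor number are hypothetical example values.

def ipmb_checksum(data: bytes) -> int:
    """Two's-complement checksum: the byte that makes the sum mod 256 zero."""
    return (-sum(data)) & 0xFF

def ipmb_request(rs_sa, rq_sa, net_fn, cmd, data=b"", rq_seq=0, lun=0):
    """Assemble an IPMB request frame per the IPMB v1.0 framing rules."""
    hdr = bytes([rs_sa, (net_fn << 2) | lun])
    hdr += bytes([ipmb_checksum(hdr)])                    # header checksum
    body = bytes([rq_sa, (rq_seq << 2) | lun, cmd]) + data
    return hdr + body + bytes([ipmb_checksum(body)])      # payload checksum

NETFN_SENSOR = 0x04              # Sensor/Event network function (request)
CMD_GET_SENSOR_READING = 0x2D    # standard IPMI Get Sensor Reading command

# Hypothetical addresses: 0x20 = platform manager, 0x52 = PSU IPMC, sensor 0x10.
frame = ipmb_request(rs_sa=0x52, rq_sa=0x20,
                     net_fn=NETFN_SENSOR,
                     cmd=CMD_GET_SENSOR_READING,
                     data=bytes([0x10]))
print(" ".join(f"{b:02X}" for b in frame))
```

The same request/response pattern lets the platform manager poll voltages, currents, and temperatures across every smart module in the chassis.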
The speeds of the interfaces show a clear trend toward RF and fiber optics. On both the backplanes and the modules, connectors are changing. A fiber optic connector on the module can receive the I/O through a mating connector on the backplane. Likewise, as RF bandwidth increases and density requirements grow, we are seeing new RF connector modules.
As capabilities increase and interoperability becomes the norm, having clearly defined open standards ensures that all parties involved are working efficiently and effectively toward a common goal. This system of checks and balances also helps pave the way for newer and better innovations, allowing rugged applications to adopt changes in technology standards.
In the past few years, several end-of-life (EOL) announcements in the embedded computing market have created both angst and opportunity. Making the shift away from a tried-and-true solution always brings with it the need to review not only the mechanical elements of an embedded system, but the integration and networking elements as well. And when that review is forced upon a designer, as in the case of an EOL announcement, it may mean forced choices of less-than-optimal alternatives. Or it could be something different altogether.
Rugged platforms for demanding applications have historically been constrained by the limited operational temperature ranges of high-performance processors and other key system components. These applications often operate in challenging temperature environments, and high-performance processors aren't generally offered with the required operational temperature ranges. Until now.