AI-based computing is enabling multiple levels of insights and safety advancements throughout the embedded computing industry. We’re seeing a huge increase in the need for high-computation systems that operate in challenging environments, and it’s AI-based platforms that can handle the processing requirements behind object detection and tracking, video surveillance, target recognition and condition-based monitoring.
Operating systems built for AI computing provide optimized visualization capabilities that combine video and other vision sensors into one unified viewer application, which can subsequently be used for simultaneous localization and mapping (SLAM) in robots.
This sets the stage for more intuitive applications, such as human pose estimation to train robots to follow trajectories, which eventually can be used in autonomous navigation systems, as well as facial feature extraction for automated visual interpretation, human face recognition and tracking. These activities are designed to enhance security and surveillance, motion capture and augmented reality (AR).
Complex GPGPU inference computing at the edge is also enabling this visual intelligence, including high-resolution sensor systems, movement-tracking security systems, automatic target recognition, and threat location detection and prediction. Areas like machine condition-based monitoring and predictive maintenance, semi-autonomous driving and driver advisory systems are also relying on the parallel processing architecture of GPGPUs.
Much of the high-compute processing taking place within these critical embedded systems relies on NVIDIA compact supercomputers and their associated CUDA cores and deep learning SDKs used to develop data-driven applications. Traffic control, human-computer interaction and visual surveillance, as well as rapid deployment of AI-based perception processing, are all areas where data inputs can be turned into actionable intelligence.
The NVIDIA Jetson AGX Xavier sets a new bar for compute density, energy efficiency and AI inferencing capabilities on edge devices. It is a quantum jump in intelligent machine processing, marrying the flexibility of an 8-core Arm processor with the sheer number-crunching performance of 512 NVIDIA CUDA cores and 64 Tensor cores.
With its industry-leading performance, power efficiency, integrated deep learning capabilities and rich I/O, Xavier enables emerging technologies with compute-intensive requirements. Elma’s new Jetsys-5320, for example, employs the Xavier module to meet the growing data processing needs of extremely rugged and mobile embedded computing applications. It easily handles data-intensive computation tasks and provides for deep learning (DL) and machine learning (ML) operations in AI applications.
Speeds are increasing, pushing board and backplane suppliers to produce new designs capable of 25 Gb/s per lane that support high-speed PCIe Gen 3 and Gen 4 designs. Sensors will also start to make use of 100 GbE to transfer data within and between chassis.
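As a back-of-the-envelope check on the rates above, aggregate link bandwidth is just lanes times per-lane signaling rate; the sketch below uses nominal figures (100 GbE commonly carried as 4 × 25 Gb/s, PCIe Gen 3 at 8 GT/s and Gen 4 at 16 GT/s per lane) and ignores encoding overhead, so usable throughput is somewhat lower in practice.

```python
# Nominal per-lane signaling rates; real usable throughput is lower
# once line-coding and protocol overhead are subtracted.

def raw_bandwidth_gbps(lanes: int, rate_per_lane_gbps: float) -> float:
    """Aggregate raw signaling bandwidth across all lanes, in Gb/s."""
    return lanes * rate_per_lane_gbps

# 100 GbE is commonly implemented as 4 lanes at 25 Gb/s each.
assert raw_bandwidth_gbps(4, 25.0) == 100.0

# PCIe Gen 3 signals at 8 GT/s per lane, Gen 4 at 16 GT/s per lane,
# so an x8 Gen 4 link carries 128 Gb/s of raw bandwidth.
print(raw_bandwidth_gbps(8, 16.0))  # 128.0
```

This is why a single 100 GbE sensor feed can saturate anything narrower than a Gen 4 x8 link once overhead is accounted for.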
When a system is capable of running high-performance deep learning-based inference engines, it can reliably perform advanced data and video processing tasks, such as object detection and image segmentation, on multiple video streams captured through HD-SDI, Ethernet and USB 3.0 cameras and interfaced through high-speed circular connectors.
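The control flow of such a multi-stream pipeline can be sketched as below. This is a minimal, hypothetical outline: `detect_objects` is a stand-in for a real GPU-backed inference call (e.g., a TensorRT engine), and the byte strings stand in for frames pulled from HD-SDI, Ethernet or USB 3.0 capture devices.

```python
# Minimal sketch of a multi-stream object detection loop.
# detect_objects is a stub for a real deep-learning inference call.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Detection:
    label: str
    confidence: float
    box: tuple  # (x, y, w, h) in pixels

def detect_objects(frame: bytes) -> List[Detection]:
    """Stand-in for GPU inference on one frame (hypothetical result)."""
    # Real code would hand the frame to a GPU-resident network here.
    return [Detection("vehicle", 0.91, (120, 80, 64, 48))]

def process_streams(streams: Dict[str, List[bytes]]) -> Dict[str, list]:
    """Run detection over every captured frame of every stream."""
    return {name: [detect_objects(f) for f in frames]
            for name, frames in streams.items()}

# Two simulated camera streams, three frames each.
cams = {"sdi0": [b"f0", b"f1", b"f2"], "usb0": [b"f0", b"f1", b"f2"]}
out = process_streams(cams)
print(sum(len(v) for v in out.values()))  # 6 frames processed
```

In a deployed system the per-stream loops would run concurrently, with frames batched onto the GPU to keep the CUDA cores fed.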
Newer software environments will lead to interchangeable accelerators and GPGPUs amongst suppliers. In open standards-based environments like The Open Group’s Sensor Open Systems Architecture™ (SOSA) initiative, the high-bandwidth local connections required between SBCs and GPGPUs, where two plug-in cards (PICs) may form one SOSA module, may need to be scaled to meet growing data needs.
Today’s rugged embedded systems designers are craving mission-critical SFF autonomy with server-class AI processing to deploy in remote locations and overcome challenging connectivity. These systems need real-time responsiveness, minimal latency and low power consumption. Advanced AI systems that facilitate data processing from the edge to the cloud redefine the possibilities for using rugged, compact technologies in autonomous, harsh and mobile environments.
System integration challenges have changed over the past few years, with new demands being put on manufacturers for integration, troubleshooting and system upgrades. This blog explores how Elma and its partners Interface Concept, Concurrent Technologies and EIZO Rugged Solutions define what partnering means within our ecosystem.
Similar to how cloud computing evolved over the last decade into the de facto way of storing and managing data, Edge AI is taking off. Edge AI is one of the most notable trends in artificial intelligence, as it allows people to run AI processes without having to be concerned about security or slowdowns due to data transmission. Its impact is notable in industrial embedded computing, since it allows platforms to react quickly to inputs without access to the cloud. We asked some Edge AI partners: If analytics can be performed in the cloud, what is the benefit of an Edge AI approach, especially as it relates to industrial embedded computing?