The rise of robots and self-driving vehicles demands greater intelligence at the local level. To keep latency to a minimum, more processing will need to be done at the edge, and compute time must be reduced through the use of more sophisticated techniques.
How Texas Instruments Sitara Processors Bring Greater Intelligence to the Edge
See how the ever closer interaction of human beings with cobots and self-driving vehicles will call for greater intelligence at the local level. Consequently, the current set-up, in which the sensing/data acquisition hardware and the data processing/AI side are isolated from one another, will have to change. To keep latency to a minimum, more processing will need to be done at the edge, and compute time must be reduced through the use of more sophisticated techniques.
Read this exclusive Texas Instruments technical article covering AI and edge computing, and learn how TI’s family of Sitara processors enables deep learning inference to be run at the edge.
We are entering an era that will see ever closer interaction between human beings and technology. On the factory floor, human operatives will start to work alongside collaborative robots (or cobots) – allowing the respective strengths of each to be taken advantage of (with cobots handling the mundane and repetitive tasks, while their organic counterparts take responsibility for the fine tuning and suchlike, where greater subtlety is required). In the automotive world, the migration towards elevated levels of vehicle autonomy is already underway, with the long-term goal of having completely self-driving cars on our roads.
The advent of ‘Big Data’, investment in more capacious SSD data storage reserves and the heftier computational power that is now available have resulted in considerable traction for cloud-based artificial intelligence (AI) – with everyday examples that are familiar to us being product buying recommendations on websites, online automated translation services, and the use of natural language processing by virtual assistants. Here all the heavy lifting can be done in expansive, well-resourced data centres.
The next key phase in the ongoing progression of AI will be how data acquired by various in-situ sensors can be utilised. The problem here is that having these sensors connect to the cloud is often impractical – with power, bandwidth, connection reliability and security implications all needing to be taken into account.
Let’s go back to the cobot and autonomous vehicle examples already given. In both these cases, there are serious dangers associated with the human/machine interactions taking place, and lives may potentially be put at risk if circumstances arise that need to be acted on quickly. Thus, for the reasons just outlined (particularly the latency and reliability aspects, but also due to security vulnerabilities) interfacing with the cloud is not a viable option.
In order to keep latency to a minimum (and to ease the pressure on available bandwidth), much more of the processing workload will need to be done at the edge rather than in the cloud – and this calls for a radical change of architecture: sensing, inference at the edge and immediate actuation all happening locally. The current set-up, where the sensing/data acquisition hardware and the data processing and machine learning side are isolated from one another, simply won’t be a valid strategy. What is needed instead is a more integrated approach, in which the respective analog (the sensors, signal chain, etc.) and processing elements are more closely aligned.
In an autonomous driving context, machine learning will help improve the safety of road users (vehicle occupants, pedestrians, etc.). Likewise, in relation to cobots, it will dramatically reduce the chances of workers being harmed, as well as aiding the whole collaborative process. This will enable boosts in overall efficiency and productivity.
Employing an appropriate sensor technology
Cobots and vehicles’ advanced driver assistance system (ADAS) implementations will require access to a breadth of sophisticated sensor devices, via which the data they require can be derived in a timely manner and with a high degree of precision. Data concerning various parameters may need to be captured – in particular the position, orientation and movement of objects/humans that the cobot or vehicle needs to be aware of, so as to avoid an accident occurring.
In the past, sensor devices were mainly electro-mechanical in nature. They were bulky, expensive and not always that reliable. Innovations in micro-machining, optoelectronics and RF have allowed a new breed of sensors to be introduced over the last couple of decades that have a wealth of compelling characteristics. They are more cost effective to produce, deliver heightened performance benchmarks and can also fit into compact form factors.
Relatively low-resolution infrared (IR) image sensor arrays can be used for occupancy detection purposes, allowing the system to determine whether there is a human in the vicinity by picking up on the heat given off. Pyroelectric sensors are a low-cost means of detecting human presence by perceiving movement (though they cannot detect humans who remain stationary). Laser-based LiDAR imaging will be utilised by automobiles and within the robotics sector over the course of the coming years – to gain accurate and constantly updating 3D renderings of the surrounding environment, so that potential collisions can be prevented. The widespread adoption of time-of-flight (ToF) technology is also expected by automotive and robotics manufacturers in the near future. This enables 3D imaging through the emission and subsequent detection of IR beams, so that the distance from objects can be calculated in real time.
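The ToF principle comes down to a simple round-trip calculation: distance is the speed of light multiplied by the measured out-and-back travel time of the IR beam, divided by two. A minimal sketch of that arithmetic is shown below (the values are purely illustrative):

```python
# Time-of-flight (ToF) distance calculation: an IR pulse is emitted,
# reflected by an object and detected again; the measured round-trip
# time gives the distance to the object.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to the reflecting object in metres.
    The division by 2 accounts for the out-and-back path of the beam."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# Example: a round-trip time of 10 ns corresponds to roughly 1.5 m.
print(f"{tof_distance(10e-9):.2f} m")
```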
Another key technology with potential is millimetre wave (mmWave). By employing it, data concerning the position, velocity and direction of movement can be obtained. Texas Instruments’ mmWave sensor ICs incorporate a microcontroller, data conversion and digital signal processing (DSP) functions, and are marketed in both industrial and automotive variants. Specifying them will mean that a cobot is able to determine the position of nearby objects with an exactitude of <100μm. A vehicle’s ADAS will be able to chart the movement of other vehicles approaching from distances of 60m away at velocities of as much as 100km/h. As well as delivering industry-leading accuracy, these devices occupy minimal PCB area and have a very low power budget – both of which are key characteristics for edge-based deployment. They also have a major advantage over IR and ToF technologies in that they are not dependent on line of sight, but can operate through obstructions that would prevent these other technologies from working (such as walls or the plastic curtains used in factories). Furthermore, they are not affected by dust, fog or smoke in the air.
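To give a feel for how position and velocity are extracted from mmWave (FMCW radar) data, the short sketch below runs the two standard FFT stages over a synthetic frame of chirp samples: a range FFT along each chirp and a Doppler FFT across chirps. It is a generic illustration only; the frame dimensions are arbitrary assumptions and do not correspond to any specific TI mmWave device or its driver API.

```python
# Generic FMCW radar processing sketch with synthetic data: a range FFT
# across each chirp resolves distance, and a Doppler FFT across chirps
# resolves radial velocity. Parameters are illustrative only.
import numpy as np

num_chirps = 64           # chirps per frame
samples_per_chirp = 256   # ADC samples per chirp

# Placeholder for one frame of complex baseband ADC data
# (shape: chirps x samples). A real frame would come from the sensor.
frame = np.zeros((num_chirps, samples_per_chirp), dtype=np.complex64)

# Range FFT: turns the beat frequency within each chirp into distance bins.
range_fft = np.fft.fft(frame, axis=1)

# Doppler FFT: turns the chirp-to-chirp phase rotation at each range bin
# into velocity bins (fftshift centres zero velocity).
range_doppler = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)

# A detection stage (e.g. CFAR) would then search this range-Doppler map
# for targets, yielding their distance and radial velocity.
magnitude_map = np.abs(range_doppler)
print(magnitude_map.shape)  # (64, 256): velocity bins x range bins
```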
Figure 1: Examples of the mmWave sensors available from Texas Instruments
So, there are a multitude of sensors that can be utilised. But how is the data acquired from these sensors processed? The Sitara AM574x system-on-chip (SoC) devices developed by Texas Instruments have the processing capacity needed to bring greater intelligence to the edge (supporting 10,500DMIPS operation). Each has a pair of Arm® Cortex-A15 processor cores (running at speeds of up to 1.5GHz), along with two floating-point DSP cores, making them highly suited to power-constrained applications. Pin compatibility between these ICs presents engineers with a scalable platform, so a specific device can be replaced with another option if performance or budget requirements change.
Figure 2: Functional block diagram for the AM574x
The AM5749 device is the flagship IC in this series – with 750MHz DSP cores, 1080p resolution video encode and decode functionality, a 2D graphics accelerator, plus a 3D graphics processing unit (GPU). Thanks to its programmable, 32-bit, dual-core embedded vision engine (EVE) processing subsystem (where each core runs at 650MHz) it can accelerate applied neural network layers – delivering up to 20.8GMACs/s operation. It is equipped to run deep learning inferences even when there are only limited power reserves available (drawing a mere 650mW while idle and 2.5-4W when in full operation). Working in conjunction with the Texas Instruments Deep Learning (TIDL) development flow, it supports both the widely used Caffe and TensorFlow AI frameworks.
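As a rough illustration of the starting point for such a flow, the sketch below defines and saves a small TensorFlow/Keras classifier of the kind the TIDL development flow is designed to consume. The layer sizes, class count and file name are arbitrary assumptions, and the TIDL-specific import/conversion step for the EVE and DSP cores is not shown.

```python
# Hedged sketch: define and export a small TensorFlow/Keras classifier.
# A trained, exported network like this is the kind of artifact that would
# then be handed to TI's TIDL development flow for conversion and execution
# on the AM5749's EVE/DSP cores; that device-specific step is not shown.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),                 # e.g. small camera patches
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),   # 10 example classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Training on real sensor/image data would happen here (model.fit(...)).
# The trained network is then saved so it can be imported by the target
# device's deep learning tooling.
model.save("edge_classifier.h5")
```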
Transferring data captured at the edge to the cloud for processing, and then returning the results to their origin, is too laborious, inefficient and bandwidth-consuming to be applicable in most circumstances – particularly in time-critical use cases. Moving forward, dedicated hardware that can take trained neural network models and then run them in more constrained edge-based environments will be required.
Embedded analog intelligence will provide systems in which the sensors are directly connected to a processor capable of neural network inference at the edge, combining low-latency reaction with power efficiency. By bringing together advanced real-time embedded systems (utilising edge-optimised inference algorithms) with the latest sensor technology, it will be possible to execute deep learning inferences at the edge. This will mean that greater autonomy can be placed at the edge and, in doing so, negate the current resource-intensive dependence on cloud-based solutions.
Written By: TI Expert Matthieu Chevrier, Texas Instruments