Machine vision infused with innovations becomes instrumental to smart manufacturing
According to Liang, in the foreseeable future machine vision will not only perform quality inspection but also give robots human-like vision, allowing them to carry out loading, picking, gripping and packing operations without an intricate guiding process. To achieve this, machine vision systems must be able to collect, analyze and process large amounts of data in real time while communicating closely with other devices. How machine vision can keep pace with technologies such as edge computing, OPC-UA, ROS 2 and vision-guided robotics (VGR) to meet smart manufacturing requirements for high efficiency, high precision and low latency has therefore become the next topic and challenge.
Take edge computing for example. Traditional machine vision designs separate the camera module from the processing unit (an industrial PC). However, rapidly growing data volumes place ever higher demands on computational power, so the camera module now has to become an edge computing node that pre-processes data and offloads part of the burden from the processing unit.
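To make the idea concrete, the following is a minimal sketch of the kind of pre-processing a camera-side edge node might run before anything reaches the industrial PC. It assumes OpenCV and NumPy are available on the camera module; the function name, thresholds and camera index are illustrative placeholders, not any vendor's actual implementation.

```python
# Illustrative sketch: pre-process frames on a camera-side edge node so that
# only compact results (not raw images) are forwarded to the industrial PC.
# Assumes OpenCV and NumPy; names and parameters are hypothetical.
import cv2
import numpy as np

def preprocess_frame(frame: np.ndarray) -> dict:
    """Reduce a raw frame to lightweight inspection features at the edge."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Simple defect-candidate detection via thresholding and contour extraction
    _, mask = cv2.threshold(blurred, 60, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]
    # Only this metadata travels upstream, cutting bandwidth and IPC load
    return {"defect_candidates": candidates, "frame_shape": frame.shape[:2]}

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)          # camera index is illustrative
    ok, frame = cap.read()
    if ok:
        result = preprocess_frame(frame)
        print(result)                   # in practice this would be sent to the IPC
    cap.release()
```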
OPC-UA, a machine-to-machine communication protocol for industrial automation, enables heterogeneous platforms and devices in a smart factory to communicate and exchange data. In the past, machine vision systems communicated with PLC, I/O or motion control equipment through a variety of proprietary protocols or customized functions, making integration difficult. OPC-UA can resolve such problems by offering a single, standardized interface.
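As a rough illustration of how a vision system might expose results over that standardized interface, the sketch below publishes an inspection result as OPC-UA variables that a PLC or MES client could read. It assumes the open-source python-opcua ("opcua") package; the endpoint, namespace URI and node names are hypothetical.

```python
# Illustrative sketch: expose a vision inspection result as OPC-UA variables
# so that PLCs or MES software can read it over a standard protocol.
# Assumes the open-source "opcua" (python-opcua) package; names are hypothetical.
import time
from opcua import Server

server = Server()
server.set_endpoint("opc.tcp://0.0.0.0:4840/vision/")
idx = server.register_namespace("http://example.com/vision")   # placeholder URI

objects = server.get_objects_node()
station = objects.add_object(idx, "InspectionStation")
pass_fail = station.add_variable(idx, "LastResultPass", True)
defects = station.add_variable(idx, "DefectCount", 0)

server.start()
try:
    # In a real system these values would come from the vision pipeline
    defects.set_value(2)
    pass_fail.set_value(False)
    time.sleep(1.0)   # keep the server alive briefly so clients can read
finally:
    server.stop()
```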
The combination of ROS 2 and VGR equips robots and automated guided vehicles (AGV) with machine vision to enhance their efficiency and their ability to work in synchronization. ROS (Robot Operating System) is an open-source robotics software framework. The first-generation ROS 1 is based on TCP/IP, while ROS 2 is built on a UDP-based DDS architecture, providing stronger support for real-time data sharing between devices along with robust security. Major robot manufacturers worldwide have implemented ROS 2 support, drawing on common SLAM, navigation, perception and manipulation resources and algorithms. This not only enables seamless communication across robotic systems but also builds a broad development platform for machine vision. Future factories will no longer consist of isolated devices or workstations; instead, robotic arms, AGVs and other machinery from different brands can be connected to meet wide-ranging production needs through the new VGR concept.
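The following is a minimal sketch of how a vision node could feed guidance data into that shared ROS 2 ecosystem: an rclpy node publishes a detected object pose over DDS, and any ROS 2-capable robot arm or AGV can subscribe to it. The topic name, frame ID and pose values are illustrative placeholders.

```python
# Illustrative sketch: a ROS 2 node (rclpy) that publishes a detected object pose
# over DDS so that any ROS 2-capable robot arm or AGV can subscribe to it.
# Topic name and pose values are hypothetical placeholders.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PoseStamped

class VisionGuidancePublisher(Node):
    def __init__(self):
        super().__init__("vision_guidance_publisher")
        self.publisher_ = self.create_publisher(PoseStamped, "detected_object_pose", 10)
        self.timer = self.create_timer(0.5, self.publish_pose)

    def publish_pose(self):
        msg = PoseStamped()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.header.frame_id = "camera_frame"
        msg.pose.position.x = 0.25   # would come from the vision pipeline
        msg.pose.position.y = -0.10
        msg.pose.position.z = 0.05
        msg.pose.orientation.w = 1.0
        self.publisher_.publish(msg)

def main():
    rclpy.init()
    node = VisionGuidancePublisher()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == "__main__":
    main()
```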
In addition to the above trends, the development of deep learning should also be considered when assessing where machine vision technologies and applications are headed, commented Liang. Deep learning is not a new technology; it imitates the workings of the human brain, carrying out recognition, decision-making and prediction after training on a neural network model. Deep learning workloads used to run on high-performance CPUs, but the costs were high and processing was slow. Advances in GPU technology in recent years have made GPUs far more efficient than CPUs at the highly parallel computation deep learning requires, so deep learning that leverages GPU processing power has become highly cost-effective. The marriage of deep learning and machine vision can be expected to create far-reaching synergy in the future.
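As a small sketch of GPU-accelerated inference in a vision setting, the code below runs a pretrained classifier on the GPU when one is available and falls back to the CPU otherwise. It assumes PyTorch, torchvision and Pillow; the choice of model, the image path and the inspection interpretation are placeholders, not a specific product's pipeline.

```python
# Illustrative sketch: run a pretrained classifier on the GPU when available,
# falling back to the CPU otherwise. Assumes PyTorch, torchvision and Pillow;
# the model choice and the inspection interpretation are placeholders.
import torch
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = models.resnet18(pretrained=True).to(device)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def classify(image_path: str) -> int:
    """Return the index of the most likely class for one inspection image."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0).to(device)   # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(batch)
    return int(logits.argmax(dim=1).item())

# Example usage (path is hypothetical):
# print(classify("inspection_images/part_0001.png"))
```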
Looking ahead to 2018, ADLINK aims to provide total solutions that enable rapid deployment and deliver optimal value for Industry 4.0, smart factory and IIoT applications. To that end, ADLINK will make edge computing, ROS 2 and deep learning its focus R&D areas. Integrating these innovative technologies will complement the company's machine vision product portfolio, including smart cameras and image processing systems.