3D machine vision systems are a key enabling technology for the latest industry innovation wave, dubbed “Industry 4.0.” This article explores the history of machine vision and explains how 3D machine vision systems are being used today.
Image Credit: Gorodenkoff/Shutterstock.com
3D machine vision systems can sense and digitally process physical space in three dimensions, feeding live spatial data to computers, machines, and robots. This enables industrial robotics applications that require less supervision and correction from human operators.
Markets are paying attention to the potential of this technology, with analysts recently predicting a CAGR of 9.12% for the 3D machine vision market, which would bring it up to $3.56 billion by 2030.
Advancing Machine Vision from Two Dimensions to Three
Machine vision was an important technology for developing industrial automation in the twentieth century. As an inspection and quality control tool, it is widely used in the manufacturing industry today. The pharmaceutical and semiconductor industries rely on fast, automated machine vision systems for critical quality control applications, and food processing, automotive, and goods manufacturers have all adopted automated machine vision to identify product defects, spot inefficient processes, and manage stock.
Computers first got their eyes in 1959, with the invention of the first digital image scanner, which translated pictures into grids of numbers. In the early 1960s, Larry Roberts (widely acknowledged as the father of computer vision) defended an MIT Ph.D. thesis describing how 3D spatial information could be extracted from a 2D perspective view of “polyhedra,” or blocks. Roberts’s thesis, Machine Perception of Three-Dimensional Solids, was published in 1965; in it, he showed how a computer could construct such a 3D model from a single 2D photograph.
Engineering innovation continued through the 1960s and 1970s as digital photography began to mature, and early systems for digitally generating 3D models from photographs were developed. By the mid-1980s, smart cameras were being introduced to industry. These built on the optical mouse developed by Xerox engineer Richard Lyon in 1981, which combined an imaging device and a processing unit in a single system.
Machine vision systems that could make sense of 3D environments were developed for autonomous driving as early as the 1990s, with the U.S. military acting as a key driver and funder of research in this area. The popular mobile game Pokémon Go, for example, builds on U.S. military-funded developments in geospatial data processing.
By the 2000s, the algorithms, computer processing power, data transmission capacity, and sensing abilities of machine vision technology had advanced far enough for real-time facial recognition systems to be possible.
Now, 2D machine vision is cheap, fast, and accurate. Powerful algorithms running on modern processors allow machines to work through vast amounts of image data very quickly. As a result of this, along with other manufacturing breakthroughs such as RFID tagging, the previous wave of industrial automation innovation has likely reached its performance limits.
Robust 3D machine vision systems will be necessary for the next paradigm in robotics and automation technology. Robots are increasingly being asked to interact with humans and other robots, move through space, and respond to dynamically changing requirements and environments. 3D machine vision makes these capabilities possible, opening the next phase of industrial innovation.
How Does 3D Machine Vision Work?
3D machine vision is a combination of technologies working together to create live, accurate 3D models that computers and robots can process.
Most 3D vision systems work by projecting one or more light patterns onto the target environment or object. By analyzing how these patterns appear in the captured images, computers can calculate how far each illuminated point is from the sensor and from the other points.
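To make the geometry concrete, the short Python sketch below shows the triangulation step under assumed conditions: a calibrated sensor with a known focal length and a known baseline between the pattern projector and the camera. The focal length, baseline, and disparity values here are purely illustrative, not taken from any specific product.

```python
# A minimal sketch of the triangulation step used by many structured-light and
# stereo-style 3D vision systems. All numeric values below are illustrative.
import numpy as np

def disparity_to_depth(disparity_px: np.ndarray,
                       focal_length_px: float,
                       baseline_m: float) -> np.ndarray:
    """Convert per-pixel disparity (pattern offset) to depth in metres.

    Depth follows the standard triangulation relation Z = f * B / d, where
    f is the focal length in pixels, B the projector-camera baseline in
    metres, and d the measured disparity in pixels.
    """
    depth = np.full_like(disparity_px, np.inf, dtype=np.float64)
    valid = disparity_px > 0                      # zero disparity = no match
    depth[valid] = focal_length_px * baseline_m / disparity_px[valid]
    return depth

# Example: a 2x3 patch of measured disparities (pixels) for a sensor with a
# 700 px focal length and a 10 cm projector-camera baseline.
disparity = np.array([[35.0, 70.0, 0.0],
                      [14.0, 28.0, 56.0]])
print(disparity_to_depth(disparity, focal_length_px=700.0, baseline_m=0.10))
```

In practice, commercial systems wrap this step in calibration, filtering, and pattern-decoding stages, but the underlying distance calculation is the same triangulation relation.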
The algorithms that convert these images into digital 3D models must be highly efficient to enable live machine vision. Today, machine learning approaches are also used to train machine vision algorithms and increase their accuracy.
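As a simple illustration of what converting images into a digital 3D model involves, the sketch below back-projects a depth image into a 3D point cloud using a pinhole camera model. The intrinsic parameters (fx, fy, cx, cy) are hypothetical values; a real system would obtain them from camera calibration.

```python
# A minimal sketch of turning a depth image into a 3D point cloud with a
# pinhole camera model. Intrinsics (fx, fy, cx, cy) are assumed values.
import numpy as np

def depth_to_point_cloud(depth_m: np.ndarray,
                         fx: float, fy: float,
                         cx: float, cy: float) -> np.ndarray:
    """Back-project an H x W depth map (metres) to an (N, 3) point cloud."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    x = (u - cx) * depth_m / fx                      # metres, camera frame
    y = (v - cy) * depth_m / fy
    points = np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)
    return points[np.isfinite(points).all(axis=1)]   # drop invalid depths

# Example: a tiny 2x2 depth map (metres) with one invalid pixel.
depth = np.array([[1.0, 1.2],
                  [np.inf, 0.8]])
cloud = depth_to_point_cloud(depth, fx=700.0, fy=700.0, cx=1.0, cy=1.0)
print(cloud)  # three valid 3D points in metres
```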
Image Credit: pixelparticle/Shutterstock.com
Applications for 3D Machine Vision Today
Many of the technology areas that support the Industry 4.0 concept will depend on cutting-edge 3D machine vision systems in their next stages of development. 3D machine vision will enable more intuitive, safer robot-to-human and robot-to-robot industrial processes. This increased collaboration between human workers and machines is a key element of Industry 4.0 as outlined by the German government in 2011.
For example, bulk-picking robots equipped with 2D vision systems cannot pick parts out of disorganized piles; they only work on tightly controlled production lines with little room for error. Modern manufacturing, however, demands automation systems that can respond to dynamic events in the manufacturing environment.
3D vision systems also outperform 2D systems in many of the applications where the latter have traditionally been used. For example, they are more effective at identifying shiny objects and at working in dark or very bright environments, and their depth perception makes them more precise.
Still, 3D machine vision is not without competitors. Robots and machines can build 3D models with other technologies, such as LiDAR (Light Detection and Ranging) sensors, which generally offer higher resolution than 3D machine vision. Ultrasonic sensors are also used for presence sensing; the automotive industry already relies on them for reversing and parking assistance.
Many robots, however, combine several presence and environment sensors to gain the advantages of each. The future challenge for 3D machine vision systems will be accuracy, to keep pace with LiDAR, and operating speed, to replace 2D systems.
References and Further Reading
Alonso, V., et al. (2019). Industry 4.0 implications in machine vision metrology: an overview. Procedia Manufacturing. doi.org/10.1016/j.promfg.2019.09.020.
Deamer, L. (2018). New sensor technologies employed in today’s robots. [Online] Electronic Specifier. Available at: https://www.electronicspecifier.com/products/sensors/new-sensor-technologies-employed-in-today-s-robots
Hennemann, M. (2021). The Impact of 3D Machine Vision Technology. [Online] AZO Optics. Available at: https://www.azooptics.com/Article.aspx?ArticleID=2059
The History Of Machine Vision – Timeline. (2021). [Online] Robotics Biz. Available at: https://roboticsbiz.com/the-history-of-machine-vision-timeline/
What Is 3D Machine Vision and How Does It Work? [Online] Ainas.
3D Machine Vision Market is Expected to Hit USD 3.56 Billion at a CAGR of 9.12% by 2030 - Report by Market Research Future (MRFR). (2022). [Online] Yahoo Finance. Available at: https://finance.yahoo.com/news/3d-machine-vision-market-expected-084000486.html