Time of Flight - 3D Imaging

Although 3D imaging technology has existed for several decades, commercial 3D imaging products have only become widely available in the last 10-15 years, a shift that accelerated once High Definition (HD) video cameras became popular with major film studios for the production of 3D movies. Since then, 3D imaging technology has rapidly expanded into a variety of consumer markets, as well as into the machine vision industry.

Despite this popularity, there remains a pressing need for higher levels of process monitoring and automation, especially where these technologies are applied within the machine vision industry.

For these purposes, the traditional 2D approach is no longer sufficient, as it cannot achieve the accuracy and distance measurement required for complex object recognition and dimensioning applications. Traditional 2D imaging also cannot handle complex interaction scenarios, such as those required for human/robot co-working.

Overview of 3D Imaging

The four main techniques for obtaining a 3D image are stereo vision, structured light 3D imaging, laser triangulation and Time of Flight (ToF). The last three belong to the ‘active’ imaging family, which requires an artificial light source to obtain an image.

Stereo Vision

Stereo vision uses two mounted cameras to obtain different visual perspectives of an object. Calibration involves aligning the pixel information between the two cameras and extracting the depth information from the resulting disparity, in a manner similar to how our brains visually measure distance. Transposing this cognitive process into a system requires significant computational effort from the imaging hardware.


Figure 1. Stereo vision – Source Tech Briefs

Incorporating standard image sensors into stereo vision reduces the overall cost of these cameras compared with more sophisticated options, such as high-performance or global shutter sensors, which typically raise the cost of the complete system. However, the distance range is limited by mechanical constraints: achieving a sufficient physical baseline requires modules of larger dimensions.

Precise mechanical alignment and recalibration of these systems are also essential for accurate measurements. One limitation of the stereo vision technique is that it often performs poorly in changing light conditions, as it depends heavily on the object’s reflective characteristics.
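The depth computation itself follows from similar triangles, which also explains the baseline constraint mentioned above: a longer baseline yields larger disparities and therefore better depth resolution at range. Below is a minimal sketch of depth-from-disparity for a rectified stereo pair; the focal length, baseline and disparity values are illustrative assumptions, not parameters of any specific product.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point seen by a rectified stereo pair.

    Z = f * b / d, where f is the focal length in pixels, b the baseline
    between the two cameras in metres, and d the disparity (the pixel
    offset of the same point between the left and right images).
    """
    if disparity_px <= 0:
        raise ValueError("Disparity must be positive for a finite depth.")
    return focal_px * baseline_m / disparity_px

# Example (illustrative values): 1200 px focal length, 10 cm baseline,
# 24 px disparity -> the point lies at 5 m.
print(depth_from_disparity(1200.0, 0.10, 24.0))  # 5.0
```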

Structured Light


Figure 2. Structured light – Source University of Kentucky, Laser Focus World

In the structured light technique, a predetermined light pattern is projected onto an object, and the image’s depth information is obtained by analyzing the distortion of that pattern. Because there is no conceptual limit on frame times, motion blur is avoided, and this robust technique is protected from multi-path interference.

Note that active illumination requires a complex camera in which precise and stable mechanical alignment between the lens and the pattern projector must be maintained, and there is a risk of de-calibration over time. Additionally, the reflected pattern is sensitive to optical interference from the environment, so the technique is best suited to indoor applications.

Laser Triangulation

Laser triangulation systems measure the geometrical offset of a line of light, whose value is directly related to the height of the object. As the camera scans the object, this one-dimensional imaging technique depends entirely on determining the distance between the laser and its point on the surface of the object, which is derived from the position of the laser dot within the camera’s field of view. The term triangulation indicates that the laser dot, the camera and the laser emitter form a triangle.


Figure 3. Laser triangulation
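The triangle described above can be solved with elementary trigonometry. The sketch below assumes a generic configuration in which the emitter projects at a known angle and the camera observes the dot at an angle recovered from its pixel position; the geometry, symbols and values are illustrative assumptions, not taken from a specific system.

```python
import math

def triangulated_height(baseline_m: float, laser_angle_rad: float,
                        camera_angle_rad: float) -> float:
    """Height of the laser dot above the emitter-camera baseline.

    The emitter, camera and dot form a triangle. The law of sines gives
    the emitter-to-dot distance, and projecting it onto the vertical gives
        h = b * sin(theta_laser) * sin(theta_camera) / sin(theta_laser + theta_camera)
    In practice theta_camera is recovered from the laser dot's pixel
    position in the camera's field of view.
    """
    return (baseline_m * math.sin(laser_angle_rad) * math.sin(camera_angle_rad)
            / math.sin(laser_angle_rad + camera_angle_rad))

# Example (illustrative): 20 cm baseline, laser projecting at 60 degrees,
# camera observing the dot at 70 degrees from the baseline.
h = triangulated_height(0.20, math.radians(60), math.radians(70))
print(f"{h:.3f} m")  # ~0.213 m
```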

High-resolution lasers are typically used for monitoring applications where high accuracy, stability and low temperature drift are required to investigate the displacement and position of objects.

This technique is limited to scanning applications, as it covers only a short range and is sensitive to ambient light as well as to complex surface structures. Complex algorithms and calibration are also required.

Time of Flight

Time of Flight (ToF) denotes the family of methods that measure distance by directly calculating the round-trip travel time of photons between the camera and the scene. This measurement is performed either directly (D-ToF) or indirectly (I-ToF). Whereas D-ToF requires a complex and constraining time-resolved apparatus, I-ToF is simpler: a light source is synchronized with an image sensor.

The pulse of light is emitted by the source in phase with the shuttering of the camera, and the phase shift between the emitted and returned pulses is used to calculate the ToF of the photons and thus the distance between the point of emission and the object. A direct measurement of the depth and amplitude of every measurable pixel is used to create the final image, otherwise referred to as the depth map.
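As an illustration of the indirect measurement, the sketch below implements the common four-phase I-ToF demodulation scheme. The source text does not specify which demodulation the sensor uses, so the scheme, the symbols and the modulation frequency here are assumptions for illustration only.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_distance(a0: float, a1: float, a2: float, a3: float,
                  mod_freq_hz: float) -> float:
    """Distance from four samples taken at 0, 90, 180 and 270 degree
    shutter phase offsets relative to the modulated light source.

    The phase delay of the returned light is
        phi = atan2(a3 - a1, a0 - a2)
    and the distance follows as d = c * phi / (4 * pi * f_mod).
    """
    phi = math.atan2(a3 - a1, a0 - a2) % (2 * math.pi)
    return C * phi / (4 * math.pi * mod_freq_hz)

# Example (illustrative values): phase samples for a target at ~1.9 m
# with 20 MHz modulation (unambiguous range c / (2 * f_mod) ~= 7.5 m).
print(f"{itof_distance(80.0, 50.0, 80.0, 90.0, 20e6):.2f} m")
```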

The ToF system has a small form factor and a monocular approach that requires calibration only once in the lifetime of the device. These properties also allow it to operate reliably in ambient light conditions. Despite these advantages, ToF requires active illumination synchronization, which can lead to multi-path interference and distance aliasing in the depth map.


Figure 4. ToF operating principle

Comparing Techniques

Though still few in number, most currently deployed 3D systems are based on 3D stereo vision, structured light cameras or laser triangulation, all of which typically operate at fixed working distances and require significant calibration to cover specific areas of detection.

ToF systems are therefore particularly advantageous compared with these other 3D imaging techniques, as they overcome these limiting constraints and often give users greater flexibility across potential applications. However, due to the pixel complexity and/or power consumption of most commercial solutions, image resolution is often limited to Video Graphics Array (VGA) or less.

Table 1. 3D imaging techniques ‘top-level’ comparison

CMOS Sensor Solution for ToF

ToF technology offers broad application prospects, which has prompted Teledyne e2v to develop the first 3D ToF solution exhibiting a true 1.3 megapixel (MP) depth resolution in a 1 inch optical format. This solution is based on a specific high sensitivity, high dynamic range CMOS sensor that enables grey scale image and depth fusion capability.

Additional product features of the CMOS sensor for ToF imaging include:

  • State-of-the-art 1.3 MP depth map resolution, with the depth map acquired at full resolution
  • Accuracy of ±1 cm
  • High speed: 3D imaging of rapidly moving objects at up to 120 frames per second (fps), with a 30 fps depth map at full resolution, while maintaining high global shutter efficiency
  • 3D detection range of 0.5 m to 5 m
  • High Dynamic Range (HDR): 90 dB
  • High sensitivity in the visible and NIR: 50% Quantum Efficiency (QE) at 850 nm
  • Night/Day vision
  • Embedded 3D processing of multiple regions of interest (ROI), including two windows with binning and/or on-chip contextual histogram data

A demonstrator platform has been developed to evaluate the unique 1.3 MP depth resolution, which is output in either a depth map or a point cloud format.

As shown in Figure 5, the Teledyne e2v ToF system consists of a compact 1 inch optical format board camera based on the high sensitivity 1.3 MP sensor. The system is equipped with an embedded multi-integration on-chip function (gated sensor), a light source and optics, allowing it to maintain its full 1.3 MP resolution.


Figure 5. The ToF demonstrator platform – Source Teledyne e2v

Active Imaging Using ToF with an Adapted 5T CMOS Sensor

One example of active imaging, a technique that uses an artificial light source, is the assisted autofocus feature found in most cameras, which uses an infrared signal to measure the distance between the camera and an object in low light conditions. Active imaging allows images to be produced despite harsh weather conditions, such as rain and/or fog, using range gating and ToF.

Range Gating

Range gating combines a pulsed light wavefront, which is sent towards the target and reflected back from the plane of reflection, with a specific high speed shuttering camera that turns on at just the right moment. Range gating thus allows an image plane distance to be selected through the synchronization of the light source and the sensor.

For example, when the target object and the camera are separated by a harsh environmental condition, such as rain, fog or a high concentration of aerosol particles within the environment, some of the ‘ballistic photons’ are still capable of crossing the medium back towards the camera.

Although these photons are few in number, synchronizing their capture allows an image to be formed through the diffusing medium. Range gating works at long distances in most situations, depending on the power of the light source.

ToF differs from range gating in that it determines the distance and location of the reflection plane relative to the camera, which requires a very fast global shutter camera when the object is a short distance away. Unlike plane-selective active imaging, ToF does not focus on a specific image plane, allowing the system to perform distance imaging directly across the range of interest.

As shown in Figure 6, the implementation of range gating is based on a synchronized camera/light source system that can be operated in slave or master mode, depending on the constraints of the application. With an extremely fast global shutter, on the order of hundreds of nanoseconds, range gating is a particularly capable technique.

A pulse of light is emitted by the source on a trigger from the camera at starting time τ0. The pulse reaches the gated range at instant τ1, where it is either reflected or not, depending on whether an object is present. In the case of a reflection, the light travels the distance back towards the camera, arriving at τ2. At the instant τ3 = τ0 + 2τ, where τ represents the one-way travel time of the light, the camera shutter opens. The images captured by the camera then contain only the reflected signal.

This cycle is repeated thousands of times within the frame duration to accumulate sufficient signal relative to the readout noise. The image produced is a grey scale image corresponding only to objects present within the gated range. To produce a depth image, a set of images must be swept in range gating mode at several depths by adjusting the delay τ. The distance of each point is then computed from this set of images.
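The synchronization arithmetic is simple: the shutter delay that selects a slice at range R is the round-trip time 2R/c. Below is a minimal sketch of such a depth sweep under that assumption; the ranges, step size and function names are illustrative, not taken from the Teledyne implementation.

```python
C = 299_792_458.0  # speed of light, m/s

def gate_delay_s(range_m: float) -> float:
    """Shutter-open delay after the light pulse trigger that selects a
    reflection plane at the given range (tau3 - tau0 = 2R / c)."""
    return 2.0 * range_m / C

def sweep_delays(r_min_m: float, r_max_m: float, step_m: float):
    """Delays for a range-gated depth sweep: one gated frame per slice,
    from which per-pixel distance is later computed."""
    r = r_min_m
    while r <= r_max_m:
        yield r, gate_delay_s(r)
        r += step_m

# Example (illustrative): slices every 0.5 m between 0.5 m and 5 m.
for r, d in sweep_delays(0.5, 5.0, 0.5):
    print(f"range {r:4.1f} m -> delay {d * 1e9:6.2f} ns")
```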


Figure 6. Range Gating principle


Figure 7. Global Shutter pixel

As shown in Figure 7, pixel image sensors can produce short, synchronous integration times, otherwise known as a global shutter, through the use of a five-transistor (5T) pixel associated with a dedicated phase driver. The signal integration phase can therefore be carried out during continuous motion by accumulating synchronous micro-integrations.

Teledyne e2v has developed a proprietary technique based on a five-transistor pixel with timing generation on alternate lines, narrowing the integration time steps (Δt) down to approximately 10 nanoseconds, a significant improvement in temporal resolution. Combining this high sensitivity with low noise, the 1.3 MP CMOS image sensor includes a multi-integration or ‘accumulation’ mode.
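To put the 10 nanosecond figure in perspective, light travels about 3 m in 10 ns, so a single gate step corresponds to roughly 1.5 m of range after halving for the round trip (c·Δt/2). The snippet below is just this arithmetic; finer depth accuracy is obtained from the accumulated signal rather than from the raw gate width alone, a general property of gated systems rather than a stated detail of this sensor.

```python
C = 299_792_458.0  # speed of light, m/s

dt = 10e-9  # approximate gate timing step from the text, in seconds
range_per_step = C * dt / 2.0  # halve for the round trip
print(f"{range_per_step:.2f} m of range per 10 ns gate step")  # ~1.50 m
```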

A high extinction ratio between the pinned photodiode and the storage node capacitor, otherwise known as the Parasitic Light Sensitivity (PLS) ratio, is required to reject excess background and further sharpen images by suppressing parasitic light during the camera gating ‘off’ period.
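As a rough illustration of why this ratio matters, consider the residual background collected while the gate is off. The sketch below estimates it with a simple linear leakage model; the numbers and the model itself are assumptions for illustration, not measured characteristics of the sensor.

```python
def residual_background(background_rate_e_per_s: float,
                        gate_off_time_s: float,
                        extinction_ratio: float) -> float:
    """Charge (in electrons) leaking into the storage node while the
    shutter is off, assuming leakage scales as 1 / extinction ratio."""
    return background_rate_e_per_s * gate_off_time_s / extinction_ratio

# Example (illustrative): 1e9 e-/s of ambient background, 1 ms of
# gate-off time per frame, and two hypothetical extinction ratios.
for ratio in (1_000.0, 100_000.0):
    e = residual_background(1e9, 1e-3, ratio)
    print(f"extinction ratio {ratio:>9.0f} -> {e:8.1f} e- of parasitic signal")
```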


Figure 8. 5T pixel CMOS with adapted timing and sync circuit needs an adequate extinction ratio to reject the scene background

Continuing the advancement of ToF technology, Teledyne e2v has developed the novel BORA 1.3 MP CMOS image sensor for systems operating at short range. One of the only sensors of its kind currently available for industrial use, the BORA features an optimized multi-integration mode, excellent performance in low light conditions and an electronic global shutter, while maintaining the accuracy and frame rate of previously existing ToF systems.

Originally released in the fall of 2017, the BORA sensor is currently available for consumer and commercial use. A complete support service is available to assist users interested in building systems to their specific application requirements. The competitive performance of the BORA sensor is shown in Table 3.

Table 3. ToF platform performance comparison

(1) Accuracy gives the gap between the measured value and the actual value.
(2) Temporal noise gives the RMS precision of measurement from frame to frame, which represents the repeatability of the system.

Summary

To improve the effectiveness and autonomy of industrial systems, vision systems for guided robotics and other autonomous machines now require the integration of 3D vision for object recognition with superior accuracy. Several 3D techniques exist, each with its own advantages and limitations depending on the requirements of the specific application. Time of Flight (ToF) technology currently offers extraordinary prospects for 3D vision, and is therefore driving the design of a new generation of dedicated CMOS image sensors.

This information has been sourced, reviewed and adapted from materials provided by Teledyne E2V.

For more information on this source, please visit Teledyne E2V.

