By Carlos Osorio

Single-Pixel Near-Infrared 3D Image Reconstruction in Outdoor Conditions


In the last decade, vision systems have improved their ability to capture 3D images in bad-weather scenarios. Several techniques exist for image acquisition in foggy or rainy conditions that use infrared (IR) sensors. Because light scattering is reduced in the IR spectrum, objects in a scene can be discriminated more easily than in images obtained in the visible spectrum. In this work, we therefore propose 3D image generation in foggy conditions using the single-pixel imaging (SPI) active-illumination approach combined with the Time-of-Flight (ToF) technique at a 1550 nm wavelength. For the generation of 3D images, we use space-filling projection with compressed sensing (CS-SRCNN) and depth information based on ToF. To evaluate the performance of the vision system, we designed a test chamber that simulates different fog and background-illumination environments and computed the parameters related to image quality.
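As a rough illustration of the single-pixel acquisition model behind this approach, the sketch below simulates fully sampled Hadamard measurements of an 8 × 8 scene and reconstructs it by the inverse transform. This is a minimal sketch under simplifying assumptions: the real system uses compressed sensing (fewer patterns than pixels) with an OMP solver, and the scene values here are synthetic.

```python
import numpy as np

def hadamard(n):
    # Sylvester construction of an n x n Hadamard matrix (n a power of two)
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

N = 64                               # 8 x 8 scene flattened to 64 pixels
H = hadamard(N)                      # one +1/-1 pattern per row
rng = np.random.default_rng(0)
scene = rng.random(N)                # synthetic ground-truth reflectance

# Single-pixel measurements: one photodiode reading per projected pattern
y = H @ scene

# Fully sampled reconstruction: Hadamard matrices satisfy H @ H.T = N * I,
# so the inverse transform is just H.T @ y / N
recon = (H.T @ y) / N
```

In the compressed-sensing regime the system keeps only a subset of the rows of `H` and recovers the scene with a sparse solver (here, OMP on the GPU), which is what makes acquisition with a single photodiode fast enough in practice.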


NIR-SPI System Test Architecture



Fig. 1. Sequence of the algorithm used to generate 2D/3D images.


In this work, we propose an NIR-SPI vision system based on the structured-illumination scheme depicted in Figure 1. However, instead of using an SLM or a DMD to generate the structured illumination patterns, an array of 8 × 8 NIR LEDs emitting at the wavelength λ = 1550 nm is used. The NIR-SPI system architecture is divided into two stages. The first stage controls the elements used to generate images by applying the single-pixel imaging principle explained above: an InGaAs photodetector (diode FGA015 @ 1550 nm) accompanied by the array of 8 × 8 NIR LEDs.

Fig. 2. 3D NIR-SPI camera system developed.

Nevertheless, the depth resolution achieved by applying the Shape-From-Shading (SFS) method with the unified reflectance model, even with additional mesh-enhancement algorithms, remains far from the target of below 10 mm at a distance of 3 m. Thus, four control spots were incorporated into the system's illumination array, consisting of NIR lasers with controlled variable light intensity that emulate a sinusoidally time-modulated illumination signal, plus four additional InGaAs photodiode pairs to measure the distance to the objects in the scene with much higher precision using the indirect Time-of-Flight (iToF) ranging method (see Figure 3a). The second stage of the system processes the signals captured by the photodiode module through an analog-to-digital converter (ADC) controlled by a Graphics Processing Unit (GPU) (see Figure 2). The GPU (a Jetson Nano) generates the Hadamard patterns and processes the data converted by the ADC. The 2D/3D image reconstruction is performed using the OMP-GPU algorithm.
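The iToF ranging step described above can be sketched with the standard four-bucket phase demodulation: the sinusoidally modulated laser signal is correlated at four phase offsets (0°, 90°, 180°, 270°), the phase shift is recovered with an arctangent, and distance follows from the modulation frequency. The modulation frequency below is an assumed value for illustration, not one stated in this post.

```python
import numpy as np

C = 299_792_458.0      # speed of light, m/s
F_MOD = 10e6           # modulation frequency (assumed value, 10 MHz)

def itof_distance(q0, q1, q2, q3, f=F_MOD):
    # Four-bucket demodulation: q_k is the correlation sample at k * 90 deg.
    # The differences cancel ambient offset; arctan2 recovers the phase shift.
    phase = np.arctan2(q1 - q3, q0 - q2) % (2 * np.pi)
    return C * phase / (4 * np.pi * f)

# Simulate the four correlation samples for a target at 3 m
d_true = 3.0
phi = 4 * np.pi * F_MOD * d_true / C
q0, q1, q2, q3 = (np.cos(phi - k * np.pi / 2) for k in range(4))
d_est = itof_distance(q0, q1, q2, q3)
```

At 10 MHz the unambiguous range is c / (2f) = 15 m, comfortably beyond the 3 m working distance targeted here; the four laser spots give the system absolute depth anchors that the SFS surface reconstruction alone cannot provide.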



 

