
Research Blog

Carlos Osorio

Depth map estimation is a computer vision technique for determining the distance of objects in an image or video from the camera. It is a critical component of applications such as augmented reality, autonomous driving, and 3D reconstruction. Estimating a depth map is computationally intensive and demands substantial processing power. In recent years, Graphics Processing Units (GPUs) have emerged as a powerful tool for accelerating the depth map estimation process.



A depth map is a 2D representation of scene depth in which the intensity of each pixel corresponds to the distance of the imaged surface from the camera. The goal of depth map estimation is to estimate the depth of every pixel in an image, in some settings from a single RGB image alone.


The depth map estimation process can be divided into two main steps: feature extraction and depth estimation. Feature extraction is the process of detecting unique features in the image that can be used to estimate the depth. The most common features used for depth estimation are edges, corners, and texture.
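To make the feature-extraction step concrete, here is a minimal, NumPy-only sketch of edge extraction with 3x3 Sobel filters. The image, the threshold, and the function names are illustrative choices for this sketch, not details from the post; corner detectors such as Harris build on the same gradients by combining their products over a local window.

```python
import numpy as np

def sobel_gradients(img):
    """Horizontal and vertical image gradients via 3x3 Sobel filters."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    # Correlate by summing shifted copies weighted by the kernel entries.
    for i in range(3):
        for j in range(3):
            patch = pad[i:i + h, j:j + w]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return gx, gy

def edge_map(img, thresh=1.0):
    """Binary edge map: gradient magnitude above a threshold."""
    gx, gy = sobel_gradients(img)
    return np.hypot(gx, gy) > thresh
```

A vertical intensity step in a test image produces a column of edge pixels at the step, while flat regions stay empty.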


Depth estimation is the process of using the extracted features to estimate the depth of the objects in the image. There are several methods for depth estimation, including stereo matching, monocular depth estimation, and structured light.


Stereo matching is the process of finding correspondences between the pixels in two images taken from different viewpoints. By finding the correspondences, the depth of objects in the image can be estimated. Stereo matching is an accurate method for depth estimation, but it requires two images and is computationally intensive.
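As an illustration of the correspondence search, here is a brute-force sum-of-absolute-differences (SAD) block matcher in NumPy. The window size, search range, and the synthetic test scene are all made up for the sketch; production systems use optimized implementations with sub-pixel refinement and consistency checks.

```python
import numpy as np

def block_match_disparity(left, right, max_disp=8, block=5):
    """For each pixel in the left image, find the horizontal shift into
    the right image with the lowest SAD cost over a small window."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=int)
    L = np.pad(left.astype(float), half, mode="edge")
    R = np.pad(right.astype(float), half, mode="edge")
    for y in range(h):
        for x in range(w):
            best, best_d = np.inf, 0
            patch_l = L[y:y + block, x:x + block]
            for d in range(min(max_disp, x) + 1):
                patch_r = R[y:y + block, x - d:x - d + block]
                cost = np.abs(patch_l - patch_r).sum()
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Depth follows from disparity as z = f * B / d
# (focal length f in pixels, camera baseline B).
```

Shifting a textured image horizontally by a known amount and matching it against the original recovers that shift as the disparity in the interior of the image.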


Monocular depth estimation is the process of estimating the depth of objects in an image using only a single RGB image. Monocular depth estimation is less accurate than stereo matching, but it is more convenient and can be used in real-time applications. Structured light is a method for depth estimation that involves projecting a pattern of light onto the objects in the image and then capturing the deformed pattern. The depth of the objects can then be estimated based on the deformation of the pattern.
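The structured-light idea can be sketched with phase-shifting fringe projection: several phase-shifted sinusoidal patterns are projected, and the phase recovered at each pixel encodes depth. The linear phase-to-depth model below is a deliberate toy assumption for the sketch; real systems calibrate this relation via triangulation between projector and camera.

```python
import numpy as np

def capture(depth, n, steps=4, freq=0.5):
    """Simulate one camera frame of a projected sinusoid whose phase is
    shifted by the surface depth (toy model: phase = freq * depth)."""
    shift = 2 * np.pi * n / steps
    return 1.0 + 0.5 * np.cos(freq * depth + shift)

def recover_depth(frames, steps=4, freq=0.5):
    """N-step phase shifting: recover the per-pixel phase, then invert
    the toy model.  Valid while freq * depth stays within (-pi, pi]."""
    num = sum(f * np.sin(2 * np.pi * k / steps) for k, f in enumerate(frames))
    den = sum(f * np.cos(2 * np.pi * k / steps) for k, f in enumerate(frames))
    phase = -np.arctan2(num, den)
    return phase / freq
```

Capturing four shifted frames of a known depth map and running the recovery reproduces the depth map, pixel by pixel, within the unwrapped phase range.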


GPUs are well suited to depth map estimation because they are designed to perform many computations in parallel. Both the feature extraction and depth estimation steps operate independently on many pixels at once, so both can be parallelized for a significant speedup. Using GPUs for this workload offers faster processing and, because the work is so highly parallel, better performance per watt and per dollar than general-purpose CPUs. With the increasing demand for real-time depth map estimation, GPUs have become an essential tool for solving this challenging problem.
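The parallelism being exploited is per-pixel data parallelism: each output pixel depends only on its own inputs. A minimal sketch, using NumPy vectorization as a CPU stand-in for a GPU kernel; on an actual GPU the same array expression can run essentially unchanged under a drop-in array library such as CuPy (an assumption about the toolchain, not something the post specifies). The focal length and baseline values are illustrative.

```python
import numpy as np

def disparity_to_depth(disp, focal_px=700.0, baseline_m=0.12):
    """Per-pixel depth from disparity, z = f * B / d, computed for every
    pixel at once.  Each output depends only on its own input pixel,
    which is exactly the data parallelism a GPU kernel exploits."""
    disp = np.asarray(disp, dtype=float)
    depth = np.full(disp.shape, np.inf)
    valid = disp > 0                      # zero disparity means "at infinity"
    depth[valid] = focal_px * baseline_m / disp[valid]
    return depth
```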


In conclusion, depth map estimation is a critical component of several computer vision applications and is becoming increasingly important for real-time use. GPUs have emerged as a powerful tool for accelerating it, offering faster processing and better efficiency for this highly parallel workload. As the demand for real-time depth map estimation continues to grow, the use of GPUs will become increasingly prevalent in this field.

 


Radar technology has been widely used for applications such as air traffic control, weather monitoring, and military surveillance. With recent advances in communication technology, multiple-input multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM) has been introduced as a new technique for radar imaging. In this article, we discuss the basics of 2D RADAR imaging using MIMO-OFDM and its advantages over conventional RADAR systems.



What is MIMO-OFDM?

MIMO-OFDM is a combination of multiple-input multiple-output (MIMO) technology and orthogonal frequency-division multiplexing (OFDM) technology. MIMO refers to the use of multiple antennas at both the transmitting and receiving ends of a communication system. OFDM is a digital modulation technique that divides the available bandwidth into multiple subcarriers and transmits data symbols over each subcarrier. MIMO-OFDM combines the advantages of MIMO and OFDM to provide high data rates and reliable communication.
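The OFDM half of this can be sketched in a few lines of NumPy: place one complex data symbol on each subcarrier, move to the time domain with an IFFT, prepend a cyclic prefix, and invert the steps at the receiver. The block and prefix sizes below are arbitrary choices for the sketch.

```python
import numpy as np

def ofdm_modulate(symbols, cp_len=16):
    """One complex symbol per subcarrier: IFFT to the time domain,
    then prepend a cyclic prefix (the last cp_len samples)."""
    time = np.fft.ifft(symbols)
    return np.concatenate([time[-cp_len:], time])

def ofdm_demodulate(signal, n_sub, cp_len=16):
    """Drop the cyclic prefix and recover the subcarrier symbols
    with an FFT."""
    return np.fft.fft(signal[cp_len:cp_len + n_sub])
```

Over an ideal channel the demodulated symbols match the transmitted ones exactly; the cyclic prefix matters once the channel spreads the signal in time.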


How does MIMO-OFDM work in RADAR imaging?

In RADAR imaging, MIMO-OFDM can be used to transmit multiple signals simultaneously over different antennas, resulting in a higher resolution image. The transmitted signals are modulated onto orthogonal subcarriers using OFDM, which helps to mitigate multi-path fading and improve the signal-to-noise ratio (SNR). The received echoes from the target are then demodulated and processed to produce a 2D image of the target.
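A common way to process the echoes, sketched below under toy assumptions (one ideal point target, no noise, arbitrary subcarrier count), is symbol division: dividing the received subcarrier symbols by the transmitted ones strips the data and leaves only the channel's phase ramp, and an IFFT turns that ramp into a peak at the target's range bin.

```python
import numpy as np

def range_profile(tx_syms, rx_syms):
    """Element-wise division removes the transmitted data from each
    subcarrier; the IFFT converts the remaining linear phase ramp
    (caused by the round-trip delay) into a peak at the range bin."""
    return np.abs(np.fft.ifft(rx_syms / tx_syms))

# Toy scene: one target whose round-trip delay spans `delay_bins` samples.
n_sub = 256
rng = np.random.default_rng(2)
tx = np.exp(1j * np.pi / 2 * rng.integers(0, 4, n_sub))   # QPSK per subcarrier
delay_bins = 40
k = np.arange(n_sub)
rx = tx * np.exp(-2j * np.pi * k * delay_bins / n_sub)    # delay -> phase ramp
profile = range_profile(tx, rx)
print(int(np.argmax(profile)))  # 40
```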


Advantages of 2D RADAR imaging using MIMO-OFDM:

  1. High Resolution: MIMO-OFDM provides a higher resolution image compared to conventional RADAR systems. This is due to the use of multiple antennas at both the transmitting and receiving ends, which results in a higher spatial resolution.

  2. Improved SNR: The use of OFDM in MIMO-OFDM helps to mitigate multi-path fading and improve the SNR. This results in a clearer image of the target, even in adverse weather conditions.

  3. Increased Bandwidth Utilization: MIMO-OFDM allows for multiple signals to be transmitted simultaneously, increasing the utilization of the available bandwidth. This results in a higher data rate and improved overall performance.

  4. Robustness: MIMO-OFDM is more robust to interference compared to conventional RADAR systems, as it uses orthogonal subcarriers to transmit data. This results in improved reliability and better target detection.
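The pieces above can be combined into a toy 2D image: symbol division plus an IFFT over subcarriers gives range, and an FFT across the (virtual) MIMO array gives angle. Everything here is a simplification chosen for the sketch: a single ideal target, noise-free echoes, and sign conventions for the array phase that depend on geometry in a real system.

```python
import numpy as np

n_sub, n_virt = 128, 16            # subcarriers, virtual array elements
rng = np.random.default_rng(3)
tx = np.exp(1j * np.pi / 2 * rng.integers(0, 4, n_sub))   # QPSK per subcarrier

# One toy target: a delay of 25 range bins, and a direction that produces
# 5 cycles of phase across the virtual array.
delay, ang_bin = 25, 5
k = np.arange(n_sub)[:, None]      # subcarrier index
m = np.arange(n_virt)[None, :]     # virtual element index
rx = tx[:, None] * np.exp(2j * np.pi * (m * ang_bin / n_virt
                                        - k * delay / n_sub))

# Symbol division per element removes the data; an IFFT over subcarriers
# gives range, and an FFT across the array gives angle.
image = np.abs(np.fft.fft(np.fft.ifft(rx / tx[:, None], axis=0), axis=1))
r, a = np.unravel_index(np.argmax(image), image.shape)
print(r, a)   # 25 5
```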

Conclusion


In conclusion, 2D RADAR imaging using MIMO-OFDM provides several advantages over conventional RADAR systems. The combination of MIMO and OFDM results in a higher resolution image, improved SNR, increased bandwidth utilization, and robustness to interference. As technology continues to advance, it is likely that MIMO-OFDM will become increasingly popular for RADAR imaging applications.


Updated: Dec 16, 2022

The capacity to capture images with RGB silicon-based sensors in the visible (VIS) range can be limited in outdoor conditions, both by illumination variations in the scene and by scattering caused by the interaction of light with micrometer-size particles in rain, fog, or smoke, which reduces the depth of visibility and the contrast of the image. In applications that depend on vision systems for navigation, such as UAVs or autonomous vehicles, this loss of sensor performance is critical. One solution is a hyperspectral vision system that can operate in different spectral bands in the near-infrared (NIR) range. These bands suffer less light attenuation because the imaging takes place in the Mie regime, where the size of the scattering particles is comparable to the illumination wavelength. In this work, we propose a vision system based on NIR active-illumination single-pixel imaging (SPI) with time-of-flight (ToF) measurement at wavelengths of 850 and 1550 nm for 2D image reconstruction under rain or fog conditions.
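Although the abstract gives no implementation details, the single-pixel imaging principle it relies on can be sketched briefly: each projected pattern yields one bucket-detector measurement, and with an orthogonal Hadamard pattern basis the image is recovered by a transpose. The scene size and pattern basis below are illustrative choices, not details from the work.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def spi_measure(scene, patterns):
    """One bucket-detector value per projected pattern: the inner
    product of the pattern with the (flattened) scene."""
    return patterns @ scene.ravel()

def spi_reconstruct(measurements, patterns, shape):
    """Sylvester-Hadamard patterns satisfy H @ H.T = n * I, so the scene
    is recovered by the transpose, up to a 1/n normalisation."""
    n = patterns.shape[0]
    return (patterns.T @ measurements / n).reshape(shape)
```

Measuring a small test scene with all 16 patterns of a 16x16 Hadamard basis and applying the transpose reconstructs the scene exactly; compressive variants use fewer patterns and a sparsity-based solver instead.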




 


