
Research Blog


Millimeter-wave radar is a class of radar technology that uses short-wavelength electromagnetic waves. The electromagnetic signal emitted by the radar system is reflected by objects in the emission path, and by capturing the reflected signal the radar system can determine information such as the distance, speed, and angle of the objects located within its field of view. Since the target-echo distance information of a linear-frequency-modulated continuous-wave (LFMCW) radar can be recovered simply by applying the Fast Fourier Transform (FFT), this has become the most widely used modulation method and the most deeply researched form of chirp continuous-wave radar.


Fig.1. Block diagram of the FMCW radar system


The LFMCW radar system emits linear frequency-modulated (FM) pulse signals and captures the signals reflected by objects in its emission path. Figure 1 shows a simplified block diagram of the LFMCW radar system. The system works as follows: first, the signal source generates a linear FM pulse that is transmitted by the antenna. An object in the path reflects this pulse, and the reflected linear FM pulse is captured by the receiving antenna. The mixer then combines the transmitted and received signals to generate an intermediate frequency (IF) signal.


Fig.2. Amplitude of the linear FM pulse varying over time.


The mixer mixes the receiver (RX) and transmitter (TX) signals to generate the intermediate frequency (IF) signal. The output of the mixer contains two components: the sum and the difference of the RX and TX chirp frequencies. In the FMCW radar system, the frequency of the emitted signal increases linearly over time; this type of signal is known as a linear FM pulse (chirp), whose amplitude varies over time as shown in Fig. 2. The radar waveform used is the sawtooth wave, where Tc is the sawtooth period and B is the frequency-modulation bandwidth.
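As an illustration, the range of a reflector follows directly from its beat (IF) frequency, the bandwidth B, and the sawtooth period Tc. The sketch below uses these standard FMCW relations; the numeric parameters in the example are illustrative assumptions, not the actual system values:

```python
C = 3.0e8  # speed of light (m/s)

def beat_to_range(f_if_hz, bandwidth_hz, chirp_period_s):
    """Range of a reflector from its IF (beat) frequency.

    For a sawtooth chirp with bandwidth B and period Tc, the chirp
    slope is S = B / Tc and the beat frequency is f_IF = S * 2R / c,
    so R = c * f_IF / (2 * S).
    """
    slope = bandwidth_hz / chirp_period_s
    return C * f_if_hz / (2.0 * slope)

# Example (assumed parameters): B = 4 GHz, Tc = 40 us -> slope 1e14 Hz/s;
# a 10 MHz beat frequency then corresponds to a reflector at 15 m.
```

This is why a single FFT of the IF signal suffices for ranging: each reflector appears as one spectral peak whose frequency is proportional to its distance.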


Generation of 3D Images with the RADAR system


The radar signal received is the FMCW RADAR beat signal, and the point reflectors in the scene are modeled accordingly (see Fig. 3b). From the simulated data, we can synthesize the RADAR heatmap top view (see Fig. 3c) and front view (see Fig. 3d) by applying a 2D-FFT. From the RADAR heatmap front-view information, the incident angles of the RADAR heatmap matrix are estimated, and by applying a Taylor series expansion over the modeled incidence reflection it is possible to determine the point surface using the 2D-FFT.

Fig.3. Simulation of the RADAR-generated image: (a) test object, (b) scanning surface obtained with radar, (c) Radar heatmap top view, and (d) Radar heatmap front view.
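The heatmap synthesis step can be illustrated with a minimal simulation (illustrative only; the array sizes and reflector position are assumptions, not the values used in the experiments): a point reflector produces a 2D complex sinusoid across fast-time samples and antenna elements, which the 2D-FFT maps to a single peak in the range-angle heatmap.

```python
import numpy as np

N_SAMPLES, N_ANT = 64, 8            # fast-time samples x antenna elements
RANGE_BIN, ANGLE_BIN = 10, 3        # assumed reflector location (in FFT bins)

n = np.arange(N_SAMPLES)[:, None]   # fast-time index (range axis)
a = np.arange(N_ANT)[None, :]       # antenna index (angle axis)

# Beat signal of a single point reflector: a 2D complex exponential
beat = np.exp(2j * np.pi * (RANGE_BIN * n / N_SAMPLES + ANGLE_BIN * a / N_ANT))

# The 2D-FFT turns the reflector into one peak of the range-angle heatmap
heatmap = np.abs(np.fft.fft2(beat))
peak = np.unravel_index(np.argmax(heatmap), heatmap.shape)  # -> (10, 3)
```

Summing such contributions over many reflectors and slicing the resulting volume gives the top- and front-view heatmaps of Fig. 3c and 3d.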


The system can generate images in the near and far field of objects with dimensions greater than 40 mm, given the spatial-resolution limitations of the RADAR system. In the tests, the near field was defined as a distance of 25 cm (near field < 1 m) and the far field as 1.5 m (far field > 1 m). The test objects were a sphere with a diameter of 65 mm, a cube of 45 × 45 × 45 mm, and a cylinder of 150 × 80 mm. In the near-field test, we generated the 2D image based on a heatmap (Fig. 4).

Fig.4. Reconstruction of RADAR images in the near field of an object placed at 25 cm: (a, e, i) test objects: a sphere with a diameter of 65 mm, a cube of 45 × 45 × 45 mm, and a cylinder of 150 × 80 mm; (b, f, j) near-field radar image of an object placed at 30 cm; (c, g, k) 3D image reconstruction; (d, h, l) 3D image reconstruction with points.







During the last decade, radio detection and ranging (RADAR) technology evolved from the linear-frequency-modulated (LFM) systems developed in the 1970s to the orthogonal frequency-division multiplexing (OFDM) systems developed in the early 2000s. In the mid-2010s, systems were proposed that combined the radar principle with optical solutions developed for imaging and ranging tasks, following a hyperspectral embedded-systems approach.

The idea was to profit, on the one side, from the ability of RADAR systems to work in harsh environments: using emitted radio waves, they can detect mainly metallic objects placed far away (hundreds of meters or even kilometers) from the detection system, with positioning spatial resolutions in the tens of centimeters, even when non-metallic barriers such as walls stand in between. This capability is then expanded with optical systems (e.g., light detection and ranging, LIDAR, systems) that use visible-light active illumination and can generate 2D and 3D images of objects placed at much smaller distances from the detector, but with much higher spatial resolutions (in the millimeter range).

To reduce the atmospheric absorption of the emitted active illumination and to increase the emitted optical power allowed for these systems, so that they can function correctly even in harsh environments, we propose shifting the active-illumination wavelengths from the visible range to the near-infrared (NIR) range, e.g., to 1550 nm. Lacking affordable image sensors fabricated in InGaAs technology, capable of detecting NIR radiation, in this work we propose a hyperspectral imaging system that uses a single, very low-power, commercially available InGaAs photodiode to generate 2D images following the single-pixel imaging (SPI) approach based on compressive sensing (CS), together with an array of NIR light-emitting diodes (LEDs), combined with an 80 GHz millimeter-band RADAR.
The system is conceived to deliver a maximum radar range of 150 m with a spatial resolution of ≤ 5 cm for a RADAR cross-section (RCS) of 10–50 m², combined with an optical system capable of generating 24 fps video streams from SPI-generated images, with a maximum ranging depth of 10 m and a spatial resolution of < 1 cm. The proposed system will be used in unmanned ground vehicle (UGV) applications, enabling continuous decision-making. The power consumption, dimensions, and weight of the hyperspectral ranging system will be adjusted to the targeted UGV applications.
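The SPI principle behind the optical arm can be sketched as follows. This is a minimal, idealized sketch under stated assumptions: a full orthogonal (Hadamard) pattern set and a noiseless detector; the actual system uses compressive sensing, i.e., fewer patterns plus a sparse-recovery solver.

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix; n must be a power of two."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

N = 16                        # 4x4 scene, flattened to a vector
scene = np.arange(N, dtype=float)

H = hadamard(N)               # illumination patterns, one per row
y = H @ scene                 # single-pixel measurements: one scalar per pattern
recovered = (H.T @ y) / N     # orthogonality H.T @ H = N * I inverts the sensing
```

With CS, only a subset of the rows of H would be projected, and `recovered` would instead be obtained with a solver such as OMP exploiting the scene's sparsity.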


RADAR SYSTEM


Due to the advantages offered by orthogonal frequency-division multiplexing (OFDM) radars over LFM modulation, both in terms of bandwidth, controlled in these systems using multiple carriers, and in terms of ambiguity, we propose using an OFDM radar imaging system with an operating frequency of 2.8 GHz together with the Software Defined Radio (SDR) tool, ETTUS B200 modules, and an antenna array. The proposed radar system was implemented and finally tested on a UGV following this configuration. The scene scan is performed in front of the vehicle during the driving process. The RADAR imaging information is obtained from the radio signals reflected by the (primarily metallic) surrounding objects. Using the gathered information, a 2D image is generated: on the one hand, along the x-axis in "cross-range" mode, following Eq. (1), where dmax is the distance to the object, λ the emitted radiation wavelength, and Leff the effective length of the emitter antenna; on the other hand, the "down-range" image is generated along the y-axis, defined using Eq. (2), where c is the speed of light, Nf is the number of frequencies, and Δf is the subcarrier spacing.

Δcr = λ · dmax / (2 · Leff)        (1)

Δdr = c / (2 · Nf · Δf)        (2)
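Assuming the standard forms of these expressions, δcr = λ·dmax/(2·Leff) for the cross-range of Eq. (1) and δdr = c/(2·Nf·Δf) for the down-range of Eq. (2), the resolutions can be evaluated with a short helper (the numeric values in the example are illustrative assumptions, not the measured system parameters):

```python
C = 3.0e8  # speed of light (m/s)

def cross_range_resolution(wavelength_m, d_max_m, l_eff_m):
    # Cross-range resolution along the x-axis: lambda * dmax / (2 * Leff)
    return wavelength_m * d_max_m / (2.0 * l_eff_m)

def down_range_resolution(n_f, delta_f_hz):
    # Down-range resolution along the y-axis: c / (2 * Nf * delta_f)
    return C / (2.0 * n_f * delta_f_hz)

# Example (assumed values): a 2.8 GHz carrier (lambda ~ 0.107 m), an object
# at 10 m, a 0.5 m effective antenna length, and Nf = 1024 subcarriers
# spaced 100 kHz apart.
```

Note how the down-range resolution improves with the total occupied bandwidth Nf·Δf, while the cross-range resolution degrades linearly with distance to the object.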

Fig.1. Photographs of two test scenarios created for the performance evaluation of the OFDM RADAR system placed on top of a UGV (shown on the left of both pictures): (a) in the first test scenario, a cylindrical metallic object was placed at a distance of 90 cm in front of the UGV and was properly imaged by the RADAR, as can be seen in the graph on the right; (b) in the second test scenario, two objects identical to the one used in (a) were placed at the same spot, 15 cm apart from each other, and were once again imaged adequately by the system, as shown in the graph on the right.


 

Carlos Osorio

Single-image super-resolution (SR) methods aim at recovering high-resolution (HR) images from given low-resolution (LR) ones. SR algorithms are mostly learning-based methods that learn a mapping between the LR and HR image spaces (see Fig. 1). Among the SR methods present in the literature, the Super-Resolution Convolutional Neural Network (SRCNN) and the Fast Super-Resolution Convolutional Neural Network (FSRCNN) are among the most widely used.



Fig.1. Comparative performance of the SRCNN and FSRCNN super-resolution methods.



FSRCNN: an improvement on the SRCNN method that adopts the original low-resolution image as input. The method is divided into five parts:

  • Feature extraction: the bicubic interpolation used in the earlier SRCNN is replaced by a 5×5 convolution.

  • Shrinking: a 1×1 convolution reduces the number of feature maps from d to s, where s << d.

  • Non-linear mapping: multiple 3×3 layers replace a single wide one.

  • Expanding: a 1×1 convolution increases the number of feature maps from s back to d.

  • Deconvolution: 9×9 filters are used to reconstruct the HR image.


The FSRCNN differs from SRCNN mainly in three aspects. First, FSRCNN adopts the original low-resolution image as input without bicubic interpolation; a deconvolution layer is introduced at the end of the network to perform the upsampling. Second, the non-linear mapping step in SRCNN is replaced by three steps in FSRCNN: the shrinking, mapping, and expanding steps. Third, FSRCNN adopts smaller filter sizes and a deeper network structure. These improvements give FSRCNN better performance at a lower computational cost than SRCNN.

Fig.2. Network structures of the SRCNN and FSRCNN methods.


Structure of FSRCNN

For the implementation of the FSRCNN, we implemented a model with eight layers, where layer 1 performs the feature extraction, layer 2 the shrinking, layers 3–6 the non-linear mapping (see Fig. 3), layer 7 the expanding, and layer 8 the deconvolution. The layers are defined as follows:


  • Conv. Layer 1 "Feature extraction": 56 filters of size 1 × 5 × 5. Activation function: PReLU. Output: 56 feature maps; parameters: 1 × 5 × 5 × 56 = 1400 weights and 56 biases.

  • Conv. Layer 2 "Shrinking": 12 filters of size 56 × 1 × 1. Activation function: PReLU. Output: 12 feature maps; parameters: 56 × 1 × 1 × 12 = 672 weights and 12 biases.

  • Conv. Layers 3–6 "Mapping": 4 × 12 filters of size 12 × 3 × 3. Activation function: PReLU. Output: 12 feature maps per layer; parameters: 4 × 12 × 3 × 3 × 12 = 5184 weights and 48 biases.

  • Conv. Layer 7 "Expanding": 56 filters of size 12 × 1 × 1. Activation function: PReLU. Output: 56 feature maps; parameters: 12 × 1 × 1 × 56 = 672 weights and 56 biases.

  • DeConv Layer 8 "Deconvolution": one filter of size 56 × 9 × 9. Output: the reconstructed HR image; parameters: 56 × 9 × 9 × 1 = 4536 weights and 1 bias.


Fig.3. Structure of the FSRCNN.
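The eight layers listed above can be sketched in PyTorch. This is a sketch under assumptions: a single-channel input and an upscaling factor of 2, neither of which is stated above; the filter counts follow the layer list.

```python
import torch
import torch.nn as nn

class FSRCNN(nn.Module):
    """FSRCNN sketch with d = 56, s = 12 and four mapping layers."""

    def __init__(self, upscale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 56, 5, padding=2), nn.PReLU(56),    # 1: feature extraction
            nn.Conv2d(56, 12, 1), nn.PReLU(12),              # 2: shrinking
            nn.Conv2d(12, 12, 3, padding=1), nn.PReLU(12),   # 3-6: mapping
            nn.Conv2d(12, 12, 3, padding=1), nn.PReLU(12),
            nn.Conv2d(12, 12, 3, padding=1), nn.PReLU(12),
            nn.Conv2d(12, 12, 3, padding=1), nn.PReLU(12),
            nn.Conv2d(12, 56, 1), nn.PReLU(56),              # 7: expanding
            nn.ConvTranspose2d(56, 1, 9, stride=upscale,     # 8: deconvolution
                               padding=4, output_padding=upscale - 1),
        )

    def forward(self, x):
        return self.body(x)

# The convolutional weights and biases match the counts listed above:
# 1456 + 684 + 4 * 1308 + 728 + 4537 = 12637 parameters.
```

The deconvolution stride sets the upscaling factor, so the same trained body can target other scales by swapping only the last layer.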





Fig.4. SPI 2D image reconstruction using the Batch-OMP algorithm in combination with FSRCNN. The test object was a sphere with a diameter of 50 mm placed at a 25 cm focal length: (a-d) SPI reconstruction of the 8×8 image using the Basic, Hilbert, Zig-Zag, and Spiral scanning methods, respectively; (e-h) post-processing based on the application of a bilateral filter; (i-l) SPI image obtained after applying the FSRCNN approach.


