
Research Blog

Carlos Osorio

In disaster-stricken areas, locating victims swiftly is of utmost importance. One of the most effective ways to achieve this is by detecting radio frequency (RF) signals emitted from communication devices. These signals, originating from cellular networks, radio broadcasts, and satellite communications, provide crucial indicators of human presence. However, efficiently scanning across many frequencies to identify the relevant signals remains a challenge.



System Overview


Our system is built around three core components:

  1. RTL-SDR Hardware: Provides a flexible and affordable means to scan RF signals over a wide range of frequencies.

  2. FPGA-Based Processing: Accelerates real-time signal processing, ensuring fast and efficient classification of detected signals.

  3. Deep Neural Networks (DNNs): Three different architectures were implemented and tested to enhance the accuracy of signal classification.


This integration allows real-time detection of crucial signals in a disaster area, aiding rescue teams in pinpointing survivors and optimizing their response strategies.
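
To make the acquisition step concrete, the sketch below shows how raw IQ samples might be captured from an RTL-SDR dongle and framed for a classifier. It assumes the third-party pyrtlsdr package and illustrative settings (2.4 MS/s sample rate, a 100 MHz center frequency, 1024-sample frames); it is not the exact capture pipeline used in our system.

```python
# Minimal sketch, assuming the pyrtlsdr package and an attached RTL-SDR dongle;
# frequencies, rates, and frame length are illustrative, not the system's settings.
import numpy as np
from rtlsdr import RtlSdr

sdr = RtlSdr()
sdr.sample_rate = 2.4e6        # samples per second
sdr.center_freq = 100e6        # Hz; e.g. the FM broadcast band
sdr.gain = 'auto'

iq = sdr.read_samples(256 * 1024)   # complex IQ samples
sdr.close()

# Split the stream into fixed-length frames and normalize each one,
# matching the kind of input a modulation classifier expects.
frame_len = 1024
n_frames = len(iq) // frame_len
frames = iq[: n_frames * frame_len].reshape(n_frames, frame_len)
frames /= np.max(np.abs(frames), axis=1, keepdims=True)
x = np.stack([frames.real, frames.imag], axis=1).astype(np.float32)  # (N, 2, 1024)
```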


Deep Learning Models and FPGA Integration


To maximize the system’s accuracy, we implemented three deep neural network architectures, trained on a diverse dataset of radio modulations. The networks were optimized for FPGA-based acceleration, leveraging DPU (Deep Learning Processing Unit) cores for real-time inference. The system is capable of recognizing:


  • AM-SSB-WC (Amplitude Modulation, Single Sideband with Carrier)

  • AM-DSB-SC (Amplitude Modulation, Double Sideband Suppressed Carrier)

  • FM (Frequency Modulation)

  • QPSK (Quadrature Phase Shift Keying)

  • GMSK (Gaussian Minimum Shift Keying)

  • 16QAM (16-Quadrature Amplitude Modulation)

  • OQPSK (Offset Quadrature Phase Shift Keying)

  • 8PSK (8-Phase Shift Keying)

  • BPSK (Binary Phase Shift Keying)

  • OOK (On-Off Keying)


Through extensive testing, the best-performing model achieved a classification accuracy of up to 98%. This level of accuracy significantly improves the reliability of RF-based emergency detection systems, reducing the risk that distress signals are overlooked.
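
To illustrate the classification stage, the sketch below implements a small 1-D convolutional classifier over two-channel (real/imaginary) IQ frames in PyTorch. The layer sizes and the PyTorch implementation are assumptions for illustration; this is not one of the three architectures above, nor the quantized models deployed on the DPU.

```python
# Illustrative sketch in PyTorch; layer widths and depths are assumptions,
# not the architectures deployed on the FPGA.
import torch
import torch.nn as nn

MODULATIONS = ["AM-SSB-WC", "AM-DSB-SC", "FM", "QPSK", "GMSK",
               "16QAM", "OQPSK", "8PSK", "BPSK", "OOK"]

class ModulationCNN(nn.Module):
    def __init__(self, n_classes=len(MODULATIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                   # global pooling over time
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                              # x: (batch, 2, 1024)
        return self.classifier(self.features(x).squeeze(-1))

model = ModulationCNN()
logits = model(torch.randn(8, 2, 1024))                # dummy batch of IQ frames
print(MODULATIONS[logits.argmax(dim=1)[0].item()])     # predicted class of frame 0
```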


Future Applications and UAV Integration


Given its high accuracy and real-time processing capabilities, our system is a strong candidate for UAV-based emergency response operations. Unmanned Aerial Vehicles (UAVs) equipped with this technology can autonomously scan large disaster zones, detecting and locating RF signals from survivors’ communication devices. This approach could drastically reduce the time required to identify individuals in need of assistance.


Conclusion


Our research demonstrates that cost-effective, FPGA-integrated RF signal detection is a viable solution for emergency response applications. By combining RTL-SDR hardware, FPGA-based processing, and deep learning, we have developed a system that achieves high accuracy in detecting crucial radio signals. The promising results open avenues for further development, particularly in UAV-based implementations, ensuring rapid and efficient victim detection in future disaster scenarios. With ongoing advancements in AI and hardware acceleration, this technology has the potential to revolutionize search and rescue operations, making emergency responses more efficient and saving more lives.

Carlos Osorio

Single-pixel imaging (SPI) is a powerful technique for capturing images under challenging conditions, such as low-light environments or spectral bands where traditional multi-pixel sensors are not readily available. This is particularly crucial in near-infrared (NIR) imaging, covering wavelengths from 850 to 1550 nm, where conventional imaging systems often struggle. In this blog post, we introduce a hybrid approach that leverages Deep Image Prior (DIP) and Generative Adversarial Networks (GANs) to enhance the resolution of SPI-based images.



The Challenge of SPI Resolution


SPI reconstructs images from a series of intensity measurements using a single photodetector. While this method offers advantages in low-light and specialized spectral ranges, it suffers from resolution limitations due to the inherent under-sampling of spatial information. Traditional deep learning-based super-resolution techniques require extensive labeled datasets, which are difficult to acquire for SPI in NIR bands. Our proposed approach mitigates this limitation by utilizing an unsupervised learning framework.
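
To make the under-sampling problem concrete, the sketch below simulates the SPI forward model with Hadamard patterns and a simple correlation-style reconstruction. The 32 x 32 scene, the 25% sampling ratio, and the use of NumPy/SciPy are illustrative assumptions, not our experimental configuration.

```python
# Illustrative SPI forward model; resolution, sampling ratio, and the random
# test scene are assumptions for demonstration only.
import numpy as np
from scipy.linalg import hadamard

n = 32                              # reconstruct a 32 x 32 scene
H = hadamard(n * n)                 # one binary (+1/-1) pattern per row
scene = np.random.rand(n, n)        # stand-in for the unknown NIR image
y = H @ scene.ravel()               # bucket-detector measurements, one per pattern

# Keeping only a fraction of the patterns models the under-sampling that
# degrades SPI resolution and that the DIP-GAN stage is meant to compensate for.
m = (n * n) // 4                    # 25% of the measurements
x_hat = (H[:m].T @ y[:m]) / (n * n) # correlation-style (transpose) reconstruction
low_res_estimate = x_hat.reshape(n, n)
```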


Hybrid Approach: DIP Meets GAN


Deep Image Prior (DIP) is a compelling technique that reconstructs high-quality images without requiring a large training dataset. By coupling DIP with a Generative Adversarial Network (GAN), we improve SPI resolution through an unsupervised learning paradigm. This approach offers several advantages (a minimal sketch of the DIP fitting step follows the list below):


  • Reduced Data Dependency: Unlike supervised methods, DIP leverages image priors, reducing the need for extensive SPI datasets.

  • Enhanced Super-Resolution: The GAN component learns to refine the image quality, making it more detailed and perceptually accurate.

  • Optimized Neural Architectures: We enhance the performance by leveraging variations of UNet and GAN architectures across four different neural network configurations.
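
The sketch below illustrates the DIP fitting step mentioned above: a randomly initialized convolutional network is optimized so that its output, passed through the SPI measurement operator, matches the recorded measurements. The tiny network, the PyTorch implementation, and the omission of the adversarial (GAN) refinement term are simplifying assumptions; the UNet and GAN configurations studied in the paper are more elaborate.

```python
# Minimal DIP sketch in PyTorch; the small network and plain MSE data term are
# assumptions -- the GAN refinement stage described above is omitted here.
import torch
import torch.nn as nn

def dip_reconstruct(A, y, n=32, steps=2000, lr=1e-3):
    """A: (m, n*n) measurement matrix, y: (m,) single-pixel measurements (tensors)."""
    z = torch.randn(1, 8, n, n)                      # fixed random input code
    net = nn.Sequential(                             # untrained convolutional prior
        nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
    )
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x = net(z).view(-1)                          # candidate image, flattened
        loss = torch.mean((A @ x - y) ** 2)          # consistency with measurements
        loss.backward()
        opt.step()
    return net(z).detach().view(n, n)
```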


Implementation and Results


We conducted both numerical simulations and experimental validations to assess the performance of our hybrid model. Key findings include:


  • Improved Image Quality: Our model consistently enhances SPI image resolution, particularly in the NIR range.

  • Robustness to Noise: The DIP-GAN approach exhibits strong resilience to noisy measurements, a common challenge in SPI applications.

  • Architectural Refinements: By optimizing UNet and GAN structures, we achieve significant improvements in feature extraction and detail preservation.


Future Perspectives


Our results demonstrate that combining DIP with GANs is a promising direction for SPI super-resolution, particularly for niche applications in biomedical imaging, remote sensing, and defense technology. Future research could explore:


  • Real-time implementations for SPI-based imaging systems.

  • Adaptations to other spectral bands beyond NIR.

  • Hybrid models incorporating physics-informed neural networks (PINNs) for further refinement.


Conclusion


By integrating DIP and GANs, we propose an innovative, unsupervised approach to improving SPI resolution. This hybrid model significantly reduces the need for large SPI datasets while maintaining high-quality reconstructions, making it a valuable advancement in computational imaging for the NIR spectrum. Our experimental and numerical results validate its effectiveness, paving the way for broader applications in optical imaging and beyond.


 

BibTeX

@article{OsorioQuero:25,
author = {Carlos Osorio Quero and Irving Rondon and Jose Martinez-Carranza},
journal = {J. Opt. Soc. Am. A},
keywords = {Ghost imaging; Neural networks; Single pixel imaging; Spatial light modulators; Three dimensional imaging; Underwater imaging},
number = {2},
pages = {201--210},
publisher = {Optica Publishing Group},
title = {Improving NIR single-pixel imaging: using deep image prior and GANs},
volume = {42},
month = {Feb},
year = {2025},
url = {https://opg.optica.org/josaa/abstract.cfm?URI=josaa-42-2-201},
doi = {10.1364/JOSAA.541763}
}

Unmanned Aerial Systems (UAS), commonly known as drones, have rapidly emerged as indispensable tools in Search and Rescue (SAR) operations. Their versatility, swift deployment capabilities, and high mobility make them uniquely suited for rapid assessment and intervention missions. As SAR operations become increasingly dependent on advanced technologies, the integration of Artificial Intelligence (AI), improved sensor payloads, and multi-drone coordination has significantly enhanced the efficiency and effectiveness of these aerial platforms.


The Role of Drones in SAR Missions


Drones have revolutionized SAR missions by enabling rapid aerial surveys over vast and often inaccessible areas. Traditional search operations, reliant on ground teams and manned aircraft, are frequently constrained by terrain, weather conditions, and response time. In contrast, drones provide real-time situational awareness, allowing rescue teams to locate survivors, assess hazards, and plan rescue operations more effectively.

Multi-UAV coordination has further expanded the operational capabilities of SAR teams. Swarm technologies allow multiple drones to work collaboratively, increasing coverage and efficiency. These coordinated systems can divide search areas, optimize flight paths, and share data instantaneously, leading to improved accuracy and reduced mission duration.


Advancements in Sensor Technologies


One of the most significant technological advancements in SAR drones is the integration of enhanced sensor payloads. Cutting-edge sensor technologies have dramatically improved the ability of drones to detect and identify survivors under challenging conditions. Some key sensor advancements include:

  • Infrared and Thermal Imaging: Essential for detecting human heat signatures in low-visibility conditions, such as at night or in densely forested areas.

  • Radar and Lidar Systems: Effective for mapping terrain, detecting obstacles, and identifying survivors hidden under debris or foliage.

  • Biometric Monitoring: Emerging technologies allow drones to assess vital signs remotely, offering critical data for medical response teams.

  • High-Resolution Optical and Multispectral Cameras: Providing clear imaging and real-time video feeds to enhance operational decision-making.


These sensor technologies collectively enhance the capability of SAR drones to operate in diverse and demanding environments, making them invaluable in disaster response scenarios.


Artificial Intelligence and Autonomy in SAR Operations


Artificial Intelligence has played a pivotal role in elevating drone efficiency in SAR missions. AI-powered image recognition and machine learning algorithms enable drones to autonomously detect survivors, identify hazards, and differentiate between natural and artificial objects. Key AI-driven enhancements include:


  • Automated Object Recognition: AI systems can analyze drone-captured imagery to identify survivors, vehicles, and relevant objects of interest with high accuracy.

  • Predictive Analytics: Machine learning models help anticipate the movement of lost individuals based on terrain data and environmental conditions.

  • Path Optimization: AI-powered drones can autonomously determine the most efficient flight paths, reducing time spent searching and maximizing coverage.

  • Real-Time Decision Support: AI algorithms process sensor data in real-time, assisting SAR teams in making informed decisions rapidly.


The increasing sophistication of AI enables drones to conduct complex missions with minimal human intervention, significantly reducing response times and enhancing mission success rates.
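
As a simple illustration of the path-optimization idea, the sketch below generates boustrophedon ("lawnmower") waypoints for a rectangular search area and splits the lanes among several UAVs. This is a naive baseline rather than any planner discussed in the literature reviewed here, and the area size, swath width, and two-drone split are hypothetical.

```python
# Illustrative boustrophedon coverage planner; parameters are hypothetical and
# the lane split is the naive baseline that AI-based planners improve on.
def lawnmower_waypoints(width_m, height_m, swath_m, n_uavs=1):
    """Return one list of (x, y) waypoints per UAV covering a rectangle."""
    n_lanes = max(1, int(width_m // swath_m))
    lanes = [i * swath_m + swath_m / 2 for i in range(n_lanes)]
    per_uav = -(-n_lanes // n_uavs)                  # ceiling division
    paths = []
    for u in range(n_uavs):
        waypoints = []
        for k, x in enumerate(lanes[u * per_uav:(u + 1) * per_uav]):
            ends = (0.0, height_m) if k % 2 == 0 else (height_m, 0.0)
            waypoints += [(x, ends[0]), (x, ends[1])]  # sweep one lane, alternating direction
        paths.append(waypoints)
    return paths

# Example: a 400 m x 300 m zone, 20 m sensor swath, shared by two drones.
for i, path in enumerate(lawnmower_waypoints(400, 300, 20, n_uavs=2)):
    print(f"UAV {i}: {len(path)} waypoints, starting at {path[0]}")
```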


The Promise of Digital Twin Simulations


Digital twin simulations represent a groundbreaking innovation in SAR operations. These simulations create virtual replicas of real-world environments, allowing SAR teams to:

  • Test and optimize drone deployment strategies before actual missions.

  • Simulate various search scenarios and refine operational procedures.

  • Train AI algorithms in realistic conditions, improving their decision-making accuracy.

By leveraging digital twins, SAR teams can enhance their preparedness and response efficiency, ultimately leading to more successful rescue missions.


Challenges and Future Directions


Despite the significant progress in SAR drone technology, several challenges persist:


  • Regulatory Restrictions: Legal and regulatory frameworks governing drone operations vary across jurisdictions, often limiting their deployment in emergencies.

  • Battery Life Constraints: Limited flight endurance remains a significant challenge, necessitating advancements in battery technology or alternative power sources such as solar or hydrogen fuel cells.

  • Payload Limitations: While sensor technologies continue to improve, payload capacity remains a limiting factor, restricting the types and number of sensors that can be carried simultaneously.

  • Weather and Environmental Limitations: Adverse weather conditions, such as strong winds, heavy rain, and extreme temperatures, can impact drone performance and reliability.


Addressing these challenges will require continued research, policy advancements, and technological breakthroughs. Future innovations in AI, autonomous navigation, and sensor miniaturization will further enhance the effectiveness of drones in SAR missions.


Conclusion


The transformative potential of evolving drone technologies in SAR operations is undeniable. By combining AI-driven analytics, advanced sensor integration, and multi-UAV coordination, drones are revolutionizing search and rescue missions, enabling faster, more efficient, and more effective responses. As research and development efforts continue, the next generation of SAR drones will push the boundaries of what is possible, ultimately saving lives through improved real-time decision-making and operational capabilities. The future of SAR operations is increasingly automated, intelligent, and capable, paving the way for unprecedented advancements in disaster response and humanitarian aid.


 

BibTeX

@article{QUERO2025105199,
title = {Unmanned aerial systems in search and rescue: A global perspective on current challenges and future applications},
journal = {International Journal of Disaster Risk Reduction},
pages = {105199},
year = {2025},
issn = {2212-4209},
doi = {10.1016/j.ijdrr.2025.105199},
url = {https://www.sciencedirect.com/science/article/pii/S2212420925000238},
author = {Carlos Osorio Quero and Jose Martinez-Carranza},
keywords = {Unmanned aerial vehicles (UAV), Unmanned aerial systems (UAS), Search and rescue (SAR), Multi-sensor technology, Automatic control, Disaster response, Intelligent autonomous system}
}

Contact Information

National Institute of Astrophysics, Optics and Electronics (INAOE)

Annie J. Cannon 47, Santa María Tonantzintla, 72840.

Puebla, Mexico
