A novel image dehazing framework for robust vision-based intelligent systems

Farah Deeba, Fayaz A. Dharejo, Muhammad Zawish, Fida H. Memon, Kapal Dev, Rizwan A. Naqvi, Yuanchun Zhou, Yi Du

Research output: Contribution to journal › Article › peer-review

6 Citations (Scopus)


Apart from high-level computer vision tasks, deep learning has also made significant progress in low-level tasks, including single image dehazing. A well-detailed image looks realistic and natural, with clear edges and balanced colour. To achieve a clearer and more vivid view, we exploit the role of edges and colours as a central part of the proposed work. A progressive two-stage image dehazing network is presented to overcome the shortcomings of current image dehazing algorithms. The framework is divided into two stages: in the first stage, an encoder-decoder structure extracts multiscale image features; the second stage consists of a Color Correction Model (CCM), which recovers balanced colour close to the ground truth. The encoder-decoder network is built from dense residual attention units (DRAU), each combining a channel attention mechanism with a pixel attention mechanism. Without DRAU, feature weighting is uniform even though haze density varies across pixels and across channel-specific features. DRAU treats different features and pixels unequally, which offers more versatility in handling various types of detailed information. Our proposed two-stage network exceeds state-of-the-art algorithms in both visual and quantitative terms, achieving best-published peak signal-to-noise ratios of 33.55 dB and 33.44 dB and SSIM values of 0.9619 and 0.9714 on the SOTS indoor and outdoor test sets, respectively.
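The channel-plus-pixel attention idea behind DRAU can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the weight shapes, the two-layer gating MLP, and the 1x1-style pixel gate are illustrative assumptions, showing only how a channel gate rescales whole feature maps while a pixel gate rescales individual spatial locations:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    # feat: (C, H, W). Global average pool -> two-layer MLP -> per-channel sigmoid gate.
    pooled = feat.mean(axis=(1, 2))                   # (C,)
    gate = sigmoid(w2 @ np.maximum(w1 @ pooled, 0))   # (C,), ReLU then sigmoid
    return feat * gate[:, None, None]                 # rescale each channel as a whole

def pixel_attention(feat, wp):
    # 1x1-convolution-style mixing of channels into one per-pixel sigmoid gate.
    gate = sigmoid(np.einsum('c,chw->hw', wp, feat))  # (H, W)
    return feat * gate[None, :, :]                    # rescale each spatial location

# Hypothetical shapes for demonstration only.
rng = np.random.default_rng(0)
C, H, W = 8, 4, 4
feat = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // 2, C))
w2 = rng.standard_normal((C, C // 2))
wp = rng.standard_normal(C)

out = pixel_attention(channel_attention(feat, w1, w2), wp)
print(out.shape)  # (8, 4, 4)
```

Because both gates lie in (0, 1), the output attenuates the input feature map unevenly: haze-heavy pixels and informative channels can be emphasised differently, which is the flexibility the abstract attributes to DRAU.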

Original language: English
Journal: International Journal of Intelligent Systems
Publication status: Accepted/In press - 2021


  • attention mechanism
  • colour correction
  • image dehazing
  • residual learning


