CycleGAN for day-to-night image translation: a comparative study

Muhammad Feriansyah Raihan Taufiq, Laksmita Rahadianti

Abstract


Computer vision tasks often fail when applied to night images, because the models are usually trained on clear daytime images only. This creates the need to augment the training data with more nighttime images to increase robustness. In this study, we consider day-to-night image translation using both traditional image processing approaches and deep learning models. This study employs a hybrid framework of traditional image processing followed by a CycleGAN-based deep learning model for day-to-night image translation. We then conduct a comparative study of various generator architectures in our CycleGAN model. This research compares four different CycleGAN models: the original CycleGAN, a feature pyramid network (FPN)-based CycleGAN, the original U-Net vision transformer based UVCGAN, and a modified UVCGAN with an additional edge loss. The experimental results show that the original UVCGAN obtains a Fréchet inception distance (FID) score of 16.68 and a structural similarity index measure (SSIM) of 0.42, leading in terms of FID. Meanwhile, FPN-CycleGAN obtains an FID score of 104.46 and an SSIM score of 0.44, leading in terms of SSIM. Considering FPN-CycleGAN's poor FID score and visual observation, we conclude that UVCGAN is more effective at generating synthetic nighttime images.
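To illustrate the SSIM metric reported above, the sketch below computes a simplified, single-window (global-statistics) SSIM between two grayscale images given as flat pixel lists. This is an assumption-laden illustration, not the sliding-window SSIM implementation used in the paper's evaluation; the stabilizing constants follow the standard SSIM defaults (K1 = 0.01, K2 = 0.03).

```python
def global_ssim(x, y, data_range=255.0):
    """Simplified global SSIM between two equal-length flat pixel lists.

    Uses whole-image statistics instead of a sliding window, so it only
    sketches the structure of the metric: a product of luminance and
    contrast/structure comparison terms, each stabilized by a constant.
    """
    c1 = (0.01 * data_range) ** 2  # stabilizer for the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizer for the contrast term
    n = len(x)
    mu_x = sum(x) / n
    mu_y = sum(y) / n
    var_x = sum((v - mu_x) ** 2 for v in x) / n
    var_y = sum((v - mu_y) ** 2 for v in y) / n
    cov = sum((a - mu_x) * (b - mu_y) for a, b in zip(x, y)) / n
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )


# Identical images score 1.0; a contrast-inverted pair scores much lower.
day = [10.0, 20.0, 30.0, 40.0]
print(global_ssim(day, day))
print(global_ssim([0.0, 255.0, 0.0, 255.0], [255.0, 0.0, 255.0, 0.0]))
```

In practice one would use a windowed implementation such as `skimage.metrics.structural_similarity`, which averages local SSIM values over the image rather than using global statistics.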

Keywords


CycleGAN; Deep learning; Image processing; Image translation; Synthetic nighttime images


DOI: http://doi.org/10.11591/ijai.v14.i3.pp2347-2357


This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

IAES International Journal of Artificial Intelligence (IJ-AI)
ISSN/e-ISSN 2089-4872/2252-8938 
This journal is published by the Institute of Advanced Engineering and Science (IAES).
