ITMO

ISSN: 1023-5086

Opticheskii Zhurnal, a scientific and technical journal

A full-text English translation of the journal is published by Optica Publishing Group under the title “Journal of Optical Technology”

DOI: 10.17586/1023-5086-2020-87-10-05-14

UDC: 004.932.4

Single-frame Noise2Noise: method of training a neural network without using reference data for video sequence image enhancement

For Russian citation (Opticheskii Zhurnal):

Бойко A.A., Малашин Р.О. Single frame Noise2Noise: метод обучения нейронных сетей без использования эталонных данных в задаче улучшения изображения видеопоследовательности // Оптический журнал. 2020. Т. 87. № 10. С. 5–14. http://doi.org/10.17586/1023-5086-2020-87-10-05-14

 

Boiko A.A., Malashin R.O. Single-frame Noise2Noise: method of training a neural network without using reference data for video sequence image enhancement [in Russian] // Opticheskii Zhurnal. 2020. V. 87. № 10. P. 5–14. http://doi.org/10.17586/1023-5086-2020-87-10-05-14

 

For citation (Journal of Optical Technology):

A. A. Boiko and R. O. Malashin, "Single-frame Noise2Noise: method of training a neural network without using reference data for video sequence image enhancement," Journal of Optical Technology 87(10), 567–573 (2020). https://doi.org/10.1364/JOT.87.000567

Abstract:

A method of training neural networks for image enhancement without reference data is proposed. It is based on the assumption that spatially proximate image pixels carry similar signals but independent noise components. This assumption makes it possible to form a training dataset from each frame of a video sequence by decimating the frame into its even and odd rows and columns. Training for image restoration can also take into account markers of the dynamic properties of objects in the image. The efficiency and limitations of the proposed method are studied, and its performance is evaluated on a database of images captured at low light intensity.
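
As an illustration of the decimation step described in the abstract, the following minimal sketch (Python/NumPy; the function names, the particular pairing of sub-images, and the assumption of even frame dimensions are illustrative choices, not taken from the paper) forms noisy input/target training pairs from single frames:

    import numpy as np

    def decimate_frame(frame):
        # Split one noisy frame (a 2D array with even dimensions) into
        # four half-resolution sub-images by taking even/odd rows and
        # even/odd columns. Spatially proximate pixels are assumed to
        # carry a similar signal but independent noise, so two such
        # sub-images can act as a (noisy input, noisy target) pair for
        # Noise2Noise-style training.
        ee = frame[0::2, 0::2]  # even rows, even columns
        eo = frame[0::2, 1::2]  # even rows, odd columns
        oe = frame[1::2, 0::2]  # odd rows, even columns
        oo = frame[1::2, 1::2]  # odd rows, odd columns
        return ee, eo, oe, oo

    def make_training_pairs(video):
        # Yield (input, target) pairs from every frame of a video
        # sequence; no clean reference images are required.
        for frame in video:
            ee, eo, oe, oo = decimate_frame(frame)
            yield ee, eo  # horizontally adjacent pixels
            yield oe, oo
            yield ee, oe  # vertically adjacent pixels
            yield eo, oo

    # Usage: one synthetic 480x640 noisy frame
    video = [np.random.rand(480, 640).astype(np.float32)]
    pairs = list(make_training_pairs(video))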

Keywords:

noise suppression, image restoration, training without reference data

OCIS codes: 100.2980, 150.1135
