In this study, the authors propose a novel multi-frame super-resolution method that uses frame selection and multiple fusion to enhance the quality of atmospherically distorted, zoomed-in images. When a small region of an image of a target located several kilometres from a fixed camera is enlarged, its quality is poor owing to low resolution, spatial deformation and noise, which are caused mainly by the long distance and atmospheric turbulence. The authors therefore propose an adaptive frame selection method that keeps only the few frames with little blur, identified from the relative clarity of their edges. They further propose a multiple fusion scheme that reconstructs a high-resolution image from the selected frames with each frame used in turn as the reference and then integrates the results, so that deformation and noise are effectively removed without high computational cost. The proposed method outperforms state-of-the-art image super-resolution methods in accuracy, efficiency and ease of implementation, making it suitable for enhancing images captured with a general digital camera or a smartphone.
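The pipeline described above (selecting the least-blurred frames by edge clarity, reconstructing a high-resolution result with each selected frame as the reference, and integrating the results) can be sketched as follows. This is a minimal illustration under assumed simplifications, not the authors' implementation: the Laplacian-variance sharpness score, phase-correlation registration, plain averaging and bicubic upscaling are stand-ins for the corresponding components of the paper, and the input is assumed to be a list of greyscale uint8 frames of equal size.

```python
import numpy as np
import cv2


def select_sharp_frames(frames, keep_ratio=0.2):
    """Keep only the sharpest fraction of frames, scoring each frame by the
    variance of its Laplacian (a common edge-clarity proxy; the paper's own
    selection criterion may differ)."""
    scores = [cv2.Laplacian(f, cv2.CV_64F).var() for f in frames]
    order = np.argsort(scores)[::-1]                     # sharpest first
    n_keep = max(1, int(len(frames) * keep_ratio))
    return [frames[i] for i in order[:n_keep]]


def fuse_with_each_reference(frames, scale=2):
    """Treat every selected frame in turn as the reference: register the
    others to it by global phase correlation, average, and upscale; then
    average the per-reference results.  A crude proxy for the multiple
    fusion scheme, not the authors' implementation."""
    per_reference = []
    for ref in frames:
        aligned = []
        for f in frames:
            # Estimate the translation of f relative to ref and undo it.
            (dx, dy), _ = cv2.phaseCorrelate(np.float32(ref), np.float32(f))
            m = np.float32([[1, 0, -dx], [0, 1, -dy]])
            aligned.append(cv2.warpAffine(f, m, (f.shape[1], f.shape[0])))
        fused = np.mean(aligned, axis=0).astype(np.float32)   # suppress noise
        up = cv2.resize(fused, None, fx=scale, fy=scale,
                        interpolation=cv2.INTER_CUBIC)
        per_reference.append(up)
    # Integrating all per-reference results avoids committing to the
    # deformation of any single reference frame.
    return np.clip(np.mean(per_reference, axis=0), 0, 255).astype(np.uint8)
```

Using every selected frame as the reference before integrating means no single deformed frame dictates the geometry of the final result, which is the intuition behind the multiple fusion step.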