Image deblurring via multi-scale feature fusion and multi-input multi-output encoder-decoder
Abstract
A deblurring method combining multi-scale feature fusion with a multi-input multi-output encoder-decoder is proposed for non-uniform blur caused by camera shake, fast motion of the captured object, and low shutter speed. First, the initial features of smaller-scale blurred images are extracted by a multi-scale feature extraction module that uses dilated convolution to obtain a larger receptive field with fewer parameters. Second, a feature attention module adaptively learns useful information across scales; by generating attention maps from the features of small-scale images, it effectively suppresses redundant features. Finally, a multi-scale feature progressive fusion module gradually fuses features at different scales so that information from each scale complements the others. In contrast to recent multi-scale methods that stack multiple subnetworks, we extract multi-scale features with a single network, which reduces training difficulty. To evaluate deblurring quality and generalization, the proposed method is tested on the benchmark datasets GoPro and HIDE as well as the real-world dataset RealBlur. It achieves peak signal-to-noise ratios of 31.73 dB and 29.39 dB and structural similarity values of 0.951 and 0.923 on GoPro and HIDE, respectively, surpassing recent state-of-the-art deblurring methods, and it also performs well on the real scenes of RealBlur. The experimental results demonstrate that the proposed method restores edge contours and texture details more effectively than recent deblurring methods. In addition, it can improve the robustness of downstream high-level computer vision tasks.
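To make the dilated-convolution idea concrete, the following is a minimal PyTorch sketch of a multi-scale feature extraction block that enlarges the receptive field with parallel dilated convolutions while keeping the parameter count close to that of an ordinary convolution stack. It is an illustration under stated assumptions, not the authors' implementation; the module name, channel widths, and dilation rates are all assumptions.

```python
# Illustrative sketch (not the paper's code): parallel dilated convolutions
# give each branch a different receptive field while all branches share the
# same 3x3 kernel size, so the parameter count stays small.
import torch
import torch.nn as nn

class MultiScaleFeatureExtraction(nn.Module):
    """Extract features with parallel dilated 3x3 convolutions."""

    def __init__(self, in_channels: int = 3, channels: int = 32):
        super().__init__()
        # Dilation rates 1, 2, 4 are assumed values; padding = dilation
        # keeps the spatial resolution unchanged for a 3x3 kernel.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_channels, channels, kernel_size=3,
                      padding=d, dilation=d)
            for d in (1, 2, 4)
        ])
        # A 1x1 convolution fuses the concatenated branch outputs.
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [self.act(branch(x)) for branch in self.branches]
        return self.act(self.fuse(torch.cat(feats, dim=1)))

if __name__ == "__main__":
    # A half-resolution blurred input, as in a small-scale branch.
    x = torch.randn(1, 3, 128, 128)
    out = MultiScaleFeatureExtraction()(x)
    print(out.shape)  # torch.Size([1, 32, 128, 128])
```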