Accelerating Optimization Algorithms With Dynamic Parameter Selections Using Convolutional Neural Networks For Inverse Problems In Image Processing

Byung Hyun Lee, Se Young Chun

Recent advances using deep neural networks (DNNs) for solving inverse problems in image processing have significantly outperformed conventional methods based on optimization algorithms. Most works train DNNs to learn 1) forward models and image priors implicitly, for direct mappings from given measurements to solutions, 2) data-driven priors as proximal operators in conventional iterative algorithms, or 3) forward models, priors, and/or static stepsizes in unfolded structures of optimization iterations. Here we investigate another way of utilizing a convolutional neural network (CNN): empirically accelerating conventional optimization for solving inverse problems in image processing. We propose a CNN that yields parameters of optimization algorithms which have typically been chosen heuristically, but which have been shown to be crucial for good empirical performance. Our CNN-incorporated scaled gradient projection methods, without compromising theoretical properties, significantly improve the empirical convergence rate over conventional optimization-based methods in large-scale inverse problems such as image inpainting, compressive image recovery with partial Fourier samples, deblurring, and sparse-view CT. During testing, our proposed methods dynamically select parameters at every iteration to speed up convergence robustly across different degradation levels, noise, and regularization parameters, as compared to direct mapping approaches.
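The idea of dynamically selecting a stepsize inside a projected gradient iteration can be sketched as follows. This is a minimal illustration, not the paper's method: the `predict_stepsize` function is a hypothetical stand-in for the proposed CNN, here replaced by a fixed 1/L rule that keeps the iteration convergent, and the problem is a small nonnegative least-squares instance with synthetic data.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 40, 60
A = rng.standard_normal((m, n)) / np.sqrt(m)   # forward model (synthetic)
x_true = np.maximum(rng.standard_normal(n), 0.0)  # nonnegative ground truth
y = A @ x_true                                  # noiseless measurements

def project(x):
    # Projection onto the constraint set; nonnegativity as an example.
    return np.maximum(x, 0.0)

L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient

def predict_stepsize(x, grad, k):
    # Stand-in for the CNN that yields per-iteration parameters.
    # The paper's CNN would output a stepsize (and scaling) from the
    # current iterate; a fixed 1/L step is a safe conventional choice.
    return 1.0 / L

x = np.zeros(n)
for k in range(200):
    grad = A.T @ (A @ x - y)        # gradient of 0.5*||Ax - y||^2
    alpha = predict_stepsize(x, grad, k)
    x = project(x - alpha * grad)   # projected gradient step

final_res = np.linalg.norm(A @ x - y)
```

The point of the paper's approach is that replacing the fixed rule in `predict_stepsize` with a learned, iterate-dependent prediction can accelerate convergence empirically while the projected-gradient structure preserves the algorithm's theoretical guarantees.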
