
Good Image Priors for Non-blind Deconvolution

Generic vs. Specific

Libin Sun1, Sunghyun Cho2, Jue Wang2, and James Hays1

1Brown University, Providence, RI 02912, USA
lbsun@cs.brown.edu
hays@cs.brown.edu

2Adobe Research, Seattle, WA 98103, USA
sodomau@postech.ac.kr
juewang@adobe.com

Abstract. Most image restoration techniques build “universal” image priors, trained on a variety of scenes, which can guide the restoration of any image. But what if we have more specific training examples, e.g., sharp images of similar scenes? Surprisingly, state-of-the-art image priors do not seem to benefit from context-specific training examples: re-training generic image priors using ideal sharp example images provides minimal improvement in non-blind deconvolution. To help understand this phenomenon, we explore non-blind deblurring performance over a broad spectrum of training image scenarios. We discover two strategies that become beneficial as example images become more context-appropriate: (1) locally adapted priors trained from region-level correspondences significantly outperform globally trained priors, and (2) a novel multi-scale patch-pyramid formulation is more successful at transferring mid- and high-frequency details from example scenes. Combining these two key strategies, we can qualitatively and quantitatively outperform leading generic non-blind deconvolution methods when context-appropriate example images are available. We also compare to recent work which, like ours, tries to make use of context-specific examples.

Keywords: deblur, non-blind deconvolution, Gaussian mixtures, image pyramid, image priors, camera shake
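For readers unfamiliar with the terminology in the abstract, non-blind deconvolution recovers a sharp image from a blurred observation when the blur kernel is already known. The sketch below is a minimal, generic frequency-domain (Wiener-style) baseline in which a flat noise-to-signal regularizer stands in for an image prior; it is not the authors' locally adapted GMM or patch-pyramid prior, and the function and parameter names are illustrative assumptions.

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, noise_to_signal=1e-2):
    """Generic non-blind deconvolution baseline (Wiener-style).

    blurred         : 2-D grayscale image (known to be blurred by `kernel`)
    kernel          : known blur kernel (PSF), smaller than the image
    noise_to_signal : scalar regularizer acting as a crude "universal" prior,
                      damping frequencies the kernel nearly destroys
    """
    H, W = blurred.shape
    kh, kw = kernel.shape

    # Embed the kernel in an image-sized array and center it at the origin
    # so its FFT matches the image's frequency grid.
    psf = np.zeros((H, W))
    psf[:kh, :kw] = kernel
    psf = np.roll(psf, (-(kh // 2), -(kw // 2)), axis=(0, 1))

    K = np.fft.fft2(psf)
    B = np.fft.fft2(blurred)

    # Wiener filter: conj(K) / (|K|^2 + NSR), applied to the blurred spectrum.
    X = np.conj(K) * B / (np.abs(K) ** 2 + noise_to_signal)
    return np.real(np.fft.ifft2(X))
```

The paper's contribution lies in replacing the flat regularizer above with priors learned from context-appropriate example images, adapted locally and across a multi-scale patch pyramid.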

LNCS 8692, p. 231 ff.



© Springer International Publishing Switzerland 2014