Investigating image quality loss while using statistical methods to filter grayscale Gaussian noise

Presented by:
Aidan Draper (Elon University)
Abstract:

Statisticians, as well as machine learning and computer vision experts, have studied image enhancement through denoising across many domains of photography, such as textual documentation, tomography, astronomical imaging, and low-light photography. With the surge of interest in machine learning and deep learning, many in the computer vision field feel that effective image denoising is moving away from statistical inference methods and toward these subfields of artificial intelligence. This paper, however, highlights current applications showing that statistical inference-based methods and frameworks relying on conditional probability and Bayes' theorem remain prevalent today. We reconstruct several inferential kernel filters in the R and Python languages and compare their effectiveness in denoising RAW images. We also surveyed Elon students about the perceived quality of a single photo filtered by each of the methods. Because many scientists believe that noise filters cause blurring and loss of image quality, we investigated whether people felt that denoising degraded quality relative to the original images. Respondents scored the quality of each denoised photo against its true no-noise counterpart on a scale of 1 to 10. Survey scores for the various methods were then compared to benchmark metrics, such as peak signal-to-noise ratio, mean squared error, and R-squared, to determine whether visual scores correlated with the benchmark results.
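
For readers unfamiliar with the benchmark metrics named above, the following is a minimal sketch, not the authors' code, of how peak signal-to-noise ratio, mean squared error, and R-squared can be computed between a denoised image and its no-noise reference. It assumes 8-bit grayscale images stored as NumPy arrays; the function names and the synthetic test images are illustrative only.

```python
import numpy as np

def mse(reference, denoised):
    """Mean squared error between the no-noise reference and a denoised image."""
    diff = reference.astype(np.float64) - denoised.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(reference, denoised, max_value=255.0):
    """Peak signal-to-noise ratio in decibels; higher means less distortion."""
    err = mse(reference, denoised)
    if err == 0:
        return float("inf")  # images are identical
    return 10.0 * np.log10(max_value ** 2 / err)

def r_squared(reference, denoised):
    """Coefficient of determination, treating reference pixels as the target."""
    ref = reference.astype(np.float64).ravel()
    den = denoised.astype(np.float64).ravel()
    ss_res = np.sum((ref - den) ** 2)        # residual sum of squares
    ss_tot = np.sum((ref - ref.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

# Hypothetical usage: score a noisy image against its clean counterpart.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
noisy = np.clip(clean + rng.normal(0, 15, clean.shape), 0, 255).astype(np.uint8)
print(f"MSE:  {mse(clean, noisy):.2f}")
print(f"PSNR: {psnr(clean, noisy):.2f} dB")
print(f"R^2:  {r_squared(clean, noisy):.3f}")
```

In the study described above, these scores would be computed for each filter's output against the true no-noise image and then correlated with the 1-to-10 survey ratings.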