Projects

Compressed sensing using generative models

Deep generative models are capable of learning from training images and generating new ones. For example, the images below were produced by a model trained on representative images; the scenes they depict do not exist in real life.

[Figure: sample images generated by a deep generative model. Image courtesy of BigGAN by Brock et al.]

We consider the use of generative models in the problem of compressed sensing, where the goal is to recover an unknown signal \(x^* \in \mathbb{R}^n\), e.g., an image, from noisy observations of the form

\[ y = Ax^* + \eta, \]

where \(y \in \mathbb{R}^m\) are the measurements, \(A\in \mathbb{R}^{m \times n}\) is a measurement matrix, and \(\eta\) is some noise in the measurement process.
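As a concrete toy instance of this model, the snippet below draws a random Gaussian measurement matrix and forms noisy observations; the dimensions, the Gaussian choice of \(A\), and the noise level are illustrative assumptions, not requirements of the setup.

```python
import numpy as np

# Toy instance of the measurement model y = A x* + eta.
# The dimensions, Gaussian measurement matrix, and noise level
# are illustrative assumptions, not prescribed by the text.
n, m = 1000, 100                              # signal dim, number of measurements
rng = np.random.default_rng(0)

x_star = rng.standard_normal(n)               # unknown signal x*
A = rng.standard_normal((m, n)) / np.sqrt(m)  # measurement matrix A
eta = 0.01 * rng.standard_normal(m)           # measurement noise eta

y = A @ x_star + eta                          # observations y in R^m
```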

This model arises in many real-world applications. For example, in MRI, \(A\) is a subset of the rows of the \(n \times n\) Fourier matrix. To get a high-quality MRI scan, we would like the number of measurements \(m\) to be as large as possible; however, each measurement takes scanner time, making large \(m\) slow and expensive. This raises the question: can we recover \(x^*\) using very few measurements, i.e., \(m \ll n\)? In general this is impossible, since if \(m < n\) we have more unknowns than measurements, and infinitely many signals are consistent with the observations. However, if we have additional structural information about \(x^*\), then there is hope of recovering it.
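To make the MRI example concrete, here is a minimal sketch of subsampled Fourier measurements; the uniform random sampling pattern is an assumption made purely for illustration.

```python
import numpy as np

# Sketch of the MRI-style setting: observe m randomly chosen rows of
# the n x n Fourier matrix, i.e. m out of n frequencies of x*.
# The uniform random sampling pattern is an illustrative assumption.
n, m = 1024, 128
rng = np.random.default_rng(0)

x_star = rng.standard_normal(n)
rows = rng.choice(n, size=m, replace=False)  # observed frequencies

y = np.fft.fft(x_star)[rows]                 # m << n measurements

# With m < n the system is underdetermined: many signals are
# consistent with y, so recovery needs a prior on x*.
```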

We show that if \(x^*\) is well approximated by a deep generative model, then we can provably recover \(x^*\) using \(m \ll n\) measurements. Our experiments show that, compared to classical algorithms, our algorithm requires \(5\times\) to \(10\times\) fewer measurements to achieve the same reconstruction quality. Please see our paper, published at ICML 2017 in Sydney, for more details.
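Recovery with a generative prior can be phrased as searching the generator's latent space for a code whose output explains the measurements. Below is a minimal sketch in that spirit, assuming the objective \(\min_z \|AG(z) - y\|^2\); the tiny untrained generator, the dimensions, and the optimizer settings are all placeholders for illustration, not the setup used in the paper.

```python
import torch

# Minimal sketch of recovery with a generative prior: find a latent
# code z whose output G(z) matches the measurements, i.e. minimize
# ||A G(z) - y||^2 over z. The tiny untrained generator and all
# hyperparameters here are placeholders for illustration only.
torch.manual_seed(0)
k, n, m = 20, 100, 25                  # latent dim, signal dim, measurements

G = torch.nn.Sequential(               # stand-in generator G: R^k -> R^n
    torch.nn.Linear(k, 64), torch.nn.ReLU(), torch.nn.Linear(64, n)
)
for p in G.parameters():               # generator weights stay fixed
    p.requires_grad_(False)

A = torch.randn(m, n) / m ** 0.5       # measurement matrix
x_star = G(torch.randn(k))             # ground truth in the generator's range
y = A @ x_star + 0.01 * torch.randn(m)

z = torch.zeros(k, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)
for _ in range(500):                   # gradient descent in latent space
    opt.zero_grad()
    loss = ((A @ G(z) - y) ** 2).sum()
    loss.backward()
    opt.step()

x_hat = G(z).detach()                  # reconstruction of x*
```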

Robust compressed sensing

The algorithm we proposed above works well when the noise is well behaved. For example, if the noise is Gaussian, our algorithm still recovers something close to the true image.

However, in settings such as medical and astronomical imaging, the measurements can contain a large number of outliers that break imaging algorithms. A considerable amount of manual human effort is spent cleaning up the measurements so that only the good, clean data are used.

We find that the algorithm from the previous section fails in the presence of outliers in the measurements. This motivates algorithms that are robust and do not depend on human supervision to remove outliers. As we show in our new paper, accepted at NeurIPS 2020 (Vancouver, Canada), our new and improved algorithm is robust to outliers while using almost the same number of measurements as the original algorithm.
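One common route to robustness against measurement outliers is to replace the average squared loss in the latent-space objective with a median-of-means estimate, so that a few grossly corrupted measurements cannot dominate the objective. The sketch below illustrates that substitution; it is a generic robustness device shown for intuition, not necessarily the exact algorithm in the paper.

```python
import torch

def mom_loss(residuals: torch.Tensor, num_batches: int = 8) -> torch.Tensor:
    """Median-of-means of squared residuals (illustrative robust loss):
    randomly split the residuals into batches, average the squared
    residuals within each batch, and take the median across batches.
    Outliers corrupt only a few batches, which the median ignores."""
    perm = torch.randperm(residuals.numel())
    batch_means = torch.stack(
        [(b ** 2).mean() for b in residuals[perm].chunk(num_batches)]
    )
    return batch_means.median()

# Drop-in replacement for the squared loss in the recovery loop above:
#   loss = mom_loss(A @ G(z) - y)
```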