Ajil Jalal


PhD Student
Department of Electrical Engineering
The University of Texas at Austin
Google Scholar page
GitHub page
ajiljalal [at] utexas [dot] edu


  • May 2021: Three new papers at ICML 2021. We prove that posterior sampling is instance-optimal for compressed sensing, and we propose new definitions of fairness for generative processes. Code and models are available here: link to code.

  • October 2020: Our paper has been selected for an oral presentation at the NeurIPS 2020 Workshop on Deep Learning and Inverse Problems, Vancouver, Canada. We show that conditional sampling is provably good for solving linear inverse problems using full-dimensional generative models.

  • October 2020: Our paper showing how generative models improve channel estimation performance will appear in the IEEE JSAC Series on Machine Learning for Communications and Networks.

  • September 2020: Our paper has been accepted as a poster presentation at NeurIPS 2020, Vancouver, Canada. We develop a new robust algorithm for compressed sensing using generative models.

  • May 2020: New survey paper on deep learning techniques for inverse problems in imaging. We introduce a taxonomy of supervised versus unsupervised methods.

  • January 2020: Invited talk at IIT Madras.

  • December 2019: I organized the Deep Learning and Inverse Problems social at NeurIPS 2019, Vancouver, Canada! I secured $1000 in funding for this social.

  • November 2019: Invited talk at Asilomar Conference on Signals, Systems, and Computers, in California, USA.

About Me

I am a fifth year PhD student at UT Austin, advised by Prof. Alex Dimakis. I received my B.Tech. in Electrical Engineering from IIT Madras in 2016, where I worked closely with Prof. Rahul Vaze, Prof. Umang Bhaskar, and Prof. Krishna Jagannathan.

My research explores the theoretical and practical aspects of using deep generative models to solve signal processing problems. On the theoretical side, my work has produced Bayesian and frequentist algorithms that are provably robust and achieve optimal statistical complexity. On the practical side, our algorithms have shown substantial performance improvements over the prior state of the art.
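To give a flavor of this line of work, here is a minimal, self-contained sketch of compressed sensing with a generative prior: recover a signal from a few random linear measurements by searching over the latent space of a generator. All names below are illustrative; the "generator" is a toy linear map `G(z) = W @ z` rather than a trained network, which keeps the gradient in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: the signal lies in the range of a known linear "generator" W.
n, k, m = 100, 5, 30                 # ambient dim, latent dim, measurements
W = rng.standard_normal((n, k))      # stand-in for a trained generator
z_true = rng.standard_normal(k)
x_true = W @ z_true                  # ground-truth signal in the generator's range

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = A @ x_true                                  # compressed measurements (m << n)

# Recover the latent code by gradient descent on f(z) = ||A W z - y||^2.
M = A @ W
step = 1.0 / (2 * np.linalg.norm(M, 2) ** 2)    # 1 / Lipschitz constant of grad f
z = np.zeros(k)
for _ in range(500):
    grad = 2 * M.T @ (M @ z - y)
    z = z - step * grad

x_hat = W @ z
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

With a deep (nonlinear) generator the same objective is optimized with autodiff instead of the closed-form gradient, and the landscape is no longer convex, which is where the robustness and sample-complexity questions above become interesting.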