We adopt an innovation-driven framework and investigate the sparse/compressible distributions obtained by linearly measuring or expanding continuous-domain stochastic models. Starting from first principles, we show that all such distributions are necessarily infinitely divisible. This property is satisfied by many distributions used in statistical learning, such as the Gaussian and Laplace laws, as well as a wide range of fat-tailed distributions, including the Student's t and alpha-stable laws. However, it excludes some popular distributions used in compressed sensing, such as the Bernoulli-Gaussian distribution and distributions that decay like exp(-O(|x|^p)) for 1 < p < 2. We further explore the implications of infinite divisibility and conclude that tail decay and unimodality are preserved by all linear functionals of the same continuous-domain process. We explain how these results help in distinguishing suitable variational techniques for statistically solving inverse problems such as denoising.
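As a quick numerical illustration of the infinite-divisibility property claimed above (a sketch, not part of the paper; sample sizes, the choice n = 5, and the Gamma-difference decomposition are our own), the Laplace law with characteristic function 1/(1 + t^2) factors for every n as [(1 + t^2)^(-1/n)]^n, so a Laplace variable decomposes into n i.i.d. summands, each the difference of two Gamma(1/n, 1) variables:

```python
import numpy as np

rng = np.random.default_rng(0)

# Infinite divisibility of the Laplace law: for every n, a Laplace(0, 1)
# variable decomposes into n i.i.d. summands.  Each summand is the difference
# of two independent Gamma(1/n, 1) variables, because the Laplace
# characteristic function 1/(1 + t^2) factors as [(1 + t^2)^(-1/n)]^n.
n, samples = 5, 200_000
parts = (rng.gamma(1.0 / n, size=(n, samples))
         - rng.gamma(1.0 / n, size=(n, samples)))
laplace_like = parts.sum(axis=0)           # sum of n i.i.d. components
reference = rng.laplace(size=samples)      # direct Laplace(0, 1) samples

# Both samples should agree in distribution; the Laplace(0, 1) variance is 2.
print(laplace_like.var(), reference.var())
```

A Bernoulli-Gaussian variable admits no such decomposition for any n, which is one way to see why it falls outside the family characterized in the paper.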
Rakesh Chawla, Andrea Rizzi, Matthias Finger, Federica Legger, Matteo Galli, Sun Hee Kim, Jian Zhao, João Miguel das Neves Duarte, Tagir Aushev, Hua Zhang, Alexis Kalogeropoulos, Yixing Chen, Tian Cheng, Ioannis Papadopoulos, Gabriele Grosso, Valérie Scheurer, Meng Xiao, Qian Wang, Michele Bianco, Varun Sharma, Joao Varela, Sourav Sen, Ashish Sharma, Seungkyu Ha, David Vannerom, Csaba Hajdu, Sanjeev Kumar, Sebastiana Gianì, Kun Shi, Abhisek Datta, Siyuan Wang, Anton Petrov, Jian Wang, Yi Zhang, Muhammad Ansar Iqbal, Yong Yang, Xin Sun, Muhammad Ahmad, Donghyun Kim, Matthias Wolf, Anna Mascellani, Paolo Ronchese