3 - Compressed Sensing via Compression Codes

Published online by Cambridge University Press: 22 March 2021

Miguel R. D. Rodrigues, University College London
Yonina C. Eldar, Weizmann Institute of Science, Israel

Summary

In compressed sensing (CS) a signal x ∈ R^n is measured as y = Ax + z, where A ∈ R^(m×n) (m < n) is the sensing matrix and z ∈ R^m is the measurement noise. The goal is to recover x from the measurements y even though m < n. CS is possible because the signals we typically want to capture are highly structured, and recovery algorithms take advantage of a signal’s structure to solve the under-determined system of linear equations. As in CS, data-compression codes take advantage of a signal’s structure to encode it efficiently. The structures used by compression codes, however, are much more elaborate than those used by CS algorithms. Using more complex structures in CS, like those employed by data-compression codes, potentially leads to more efficient recovery methods that require fewer linear measurements or achieve better reconstruction quality. We establish connections between data compression and CS, giving CS recovery methods based on compression codes, which indirectly take advantage of all the structures used by the compression codes. This elevates the class of structures exploited by CS algorithms to those used by compression codes, leading to more efficient CS recovery methods.
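To make the idea concrete, below is a minimal Python sketch (not the chapter's own algorithm) of compression-based CS recovery by projected gradient descent: a gradient step on the data-fit term ||y - Ax||^2 is alternated with a "projection" onto the signals the compression code can represent, implemented by compressing and then decompressing the current estimate. The compression code here is a hypothetical stand-in that keeps the k largest-magnitude DCT coefficients, and all function names and parameters are illustrative assumptions.

import numpy as np
from scipy.fft import dct, idct

def toy_compress_decompress(x, k):
    # Hypothetical stand-in for a compression code's encoder/decoder pair:
    # keep only the k largest-magnitude DCT coefficients of x.
    c = dct(x, norm='ortho')
    small = np.argsort(np.abs(c))[:-k]      # indices of all but the k largest coefficients
    c[small] = 0.0
    return idct(c, norm='ortho')

def compression_based_recovery(y, A, k, steps=200):
    # Projected-gradient sketch: x <- decode(encode(x + mu * A^T (y - A x))).
    m, n = A.shape
    mu = 1.0 / np.linalg.norm(A, 2) ** 2    # step size from the spectral norm of A
    x = np.zeros(n)
    for _ in range(steps):
        x = x + mu * A.T @ (y - A @ x)      # gradient step on (1/2)||y - A x||^2
        x = toy_compress_decompress(x, k)   # enforce the compression-code structure
    return x

# Toy example: a signal the stand-in code compresses well (k-sparse in the DCT domain).
rng = np.random.default_rng(0)
n, m, k = 256, 96, 8
coeffs = np.zeros(n)
coeffs[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
x_true = idct(coeffs, norm='ortho')
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = compression_based_recovery(y, A, k)
print('relative error:', np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))

Replacing the toy DCT thresholding with a full image or video compression code leaves the loop unchanged while letting the recovery inherit whatever structure that code exploits, which is the premise developed in the chapter.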

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2021

