L1 convex optimization signal recovery from incomplete, inaccurate data, Emmanuel J Candès & Justin Romberg: Rich Murray 2010.02.27


Rich Murray
L1 convex optimization signal recovery from incomplete, inaccurate data,
Emmanuel J Candès & Justin Romberg: Rich Murray 2010.02.27
http://rmforall.blogspot.com/2010_02_01_archive.htm
Saturday, February 27, 2010
http://groups.yahoo.com/group/rmforall/message/95
_____________________________________________________


Emmanuel Jean Candès & Justin Romberg

http://en.wikipedia.org/wiki/Compressed_sensing

fine brief summary with many links and sources

"The classical solution to such problems would be minimizing the L2
norm -- that is, minimizing the amount of energy in the system.
This is usually simple mathematically (involving only a matrix
multiplication by the pseudo-inverse of the basis sampled in).
However, this leads to poor results for most practical applications,
as the unknown (not sampled) coefficients seldom have zero energy.

A more attractive solution would be minimizing the L0 norm, or
equivalently maximizing the number of zero coefficients in the new basis.

However, this is NP-hard (it contains the subset-sum problem),
and so is computationally infeasible for all but the tiniest data sets.

Thus, following Tao et al., the L1 norm, or the sum of the
absolute values, is usually what is minimized.
Finding the candidate with the smallest L1 norm can be expressed
relatively easily as a linear program,
for which efficient solution methods already exist.
This leads to results comparable to using the L0 norm,
often yielding results with many coefficients being exactly zero."
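The contrast above can be sketched numerically. This is an illustrative example, not code from the article: the matrix sizes, random seed, and the use of scipy's linprog for the L1 problem are my own choices. The minimum-energy solution is one pseudo-inverse multiply; the minimum-L1 solution is posed as a linear program by splitting x into nonnegative parts.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 200, 40                      # m unknowns, n measurements (n << m)
A = rng.standard_normal((n, m)) / np.sqrt(n)

# Sparse ground truth: only 5 of 200 coefficients are nonzero.
x0 = np.zeros(m)
support = rng.choice(m, size=5, replace=False)
x0[support] = rng.standard_normal(5)
y = A @ x0                          # incomplete (underdetermined) observations

# Classical minimum-energy (L2) solution: pseudo-inverse.
# Spreads energy over all coefficients, so it misses the sparse x0.
x_l2 = np.linalg.pinv(A) @ y

# Minimum-L1 solution as a linear program:
# write x = u - v with u, v >= 0, minimize sum(u + v) s.t. A(u - v) = y.
c = np.ones(2 * m)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * m))
x_l1 = res.x[:m] - res.x[m:]

print("L2 recovery error:", np.linalg.norm(x_l2 - x0))   # large
print("L1 recovery error:", np.linalg.norm(x_l1 - x0))   # near zero
```

With these dimensions the L1 program typically recovers x0 essentially exactly, while the pseudo-inverse solution is dense and far from x0.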


http://en.wikipedia.org/wiki/Emmanuel_Cand%C3%A8s

http://www-stat.stanford.edu/~candes/

http://www-stat.stanford.edu/~candes/publications.html
1998-2010

http://www-stat.stanford.edu/~candes/papers/StableRecovery.pdf
2005  15p

E. J. Candès, J. Romberg and T. Tao.
Stable signal recovery from incomplete and inaccurate measurements.
Comm. Pure Appl. Math., 59, 1207-1223 (2006).

Stable Signal Recovery from Incomplete and Inaccurate Measurements
Emmanuel Candes a, Justin Romberg a, and Terence Tao  b
a Applied and Computational Mathematics, Caltech,
Pasadena, CA 91125
b Department of Mathematics, University of California,
Los Angeles, CA 90095
February, 2005; Revised June, 2005
Abstract
Suppose we wish to recover a vector x0 ∈ R^m
(e.g. a digital signal or image) from incomplete and contaminated
observations y = A x0 + e;
A is an n by m matrix with far fewer rows than columns (n << m)
and e is an error term.
Is it possible to recover x0 accurately based on the data y?
To recover x0,
we consider the solution x# to the L1-regularization problem
min ||x||_L1  subject to  ||Ax - y||_L2 <= epsilon,
where epsilon is the size of the error term e.
We show that if A obeys a uniform uncertainty principle
(with unit-normed columns) and if the vector x0 is sufficiently sparse,
then the solution is within the noise level:
||x# - x0||_L2 <= C · epsilon.
As a first example, suppose that A is a Gaussian random matrix;
then stable recovery occurs for almost all such A's
provided that the number of nonzeros of x0 is of
about the same order as the number of observations.
As a second instance,
suppose one observes a few Fourier samples of x0;
then stable recovery occurs for almost any set of n coefficients
provided that the number of nonzeros is of the order of n/(log m)^6.
In the case where the error term vanishes,
the recovery is of course exact,
and this work actually provides novel insights
on the exact recovery phenomenon discussed in earlier papers.
The methodology also explains why one can also very nearly recover
approximately sparse signals.
Keywords: L1-minimization, basis pursuit, restricted orthonormality,
sparsity, singular values of random matrices.
Acknowledgments:
E. C. is partially supported by a National Science Foundation grant
DMS 01-40698 (FRG) and by an Alfred P. Sloan Fellowship.
J. R. is supported by National Science Foundation
grants DMS 01-40698 and ITR ACI-0204932.
T. T. is supported in part by grants from the Packard Foundation.
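The noisy setting of the abstract can also be sketched. The constrained problem min ||x||_L1 subject to ||Ax - y||_L2 <= epsilon is not a linear program, so this sketch instead solves the closely related Lagrangian (Lasso) form 0.5·||Ax - y||^2 + lam·||x||_1 by iterative soft-thresholding (ISTA); the dimensions, noise level, and penalty lam are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 200, 80
A = rng.standard_normal((n, m)) / np.sqrt(n)   # approximately unit-normed columns

# Sparse signal plus a small Gaussian error term e, as in y = A x0 + e.
x0 = np.zeros(m)
idx = rng.choice(m, size=8, replace=False)
x0[idx] = rng.standard_normal(8)
e = 0.01 * rng.standard_normal(n)
y = A @ x0 + e

# ISTA: gradient step on 0.5*||Ax - y||^2, then soft-threshold by lam/L,
# where L is a Lipschitz constant for the gradient (spectral norm squared).
lam = 0.01
L = np.linalg.norm(A, 2) ** 2
x = np.zeros(m)
for _ in range(3000):
    g = A.T @ (A @ x - y)
    z = x - g / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

print("recovery error:", np.linalg.norm(x - x0))  # small, near the noise level
```

As the stability result predicts, the recovery error stays on the order of the noise level rather than blowing up, even though only 80 of 200 coordinates are (indirectly) observed.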

http://www-stat.stanford.edu/~candes/software.html

Emmanuel's code


equivalent to Turing machine? approximation for complex algorithms
(travelling salesman problem level)? compressed sensing 2004
Emmanuel Candès & Justin Romberg, article Wired 2010 March,
Jordan Ellenberg: Rich Murray 2010.02.26
http://rmforall.blogspot.com/2010_02_01_archive.htm
Friday, February 26, 2010
http://groups.yahoo.com/group/rmforall/message/94
_____________________________________________________


Rich Murray, MA
Boston University Graduate School 1967 psychology,
BS MIT 1964, history and physics,
1943 Otowi Road, Santa Fe, New Mexico 87505
505-501-2298  [hidden email]

http://groups.yahoo.com/group/rmforall/messages

http://groups.yahoo.com/group/AstroDeep/messages

http://RMForAll.blogspot.com new primary archive

http://groups.yahoo.com/group/aspartameNM/messages
group with 145 members, 1,595 posts in a public archive

http://groups.yahoo.com/group/aspartame/messages
group with 1215 members, 24,020 posts in a public archive

participant, Santa Fe Complex www.sfcomplex.org
__________________________________________________








============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org