The papers in this section deal with the following problem: can a signal be represented and reconstructed from a subset of its samples? To what extent? How stable is the reconstruction? This can be seen as a decoding problem over an erasure channel (when the positions of the missing samples are known). The problem is also related to the frame representations discussed in the section Frames, codes and reconstruction. The (nonlinear) problem of detecting the positions of incorrect samples (corrupted by, say, impulsive noise) and then correcting them is discussed in the section Real and DFT codes.
The papers can be further grouped: those that deal with stability issues, and consequently with eigenvalues and condition numbers; those that deal with iterative methods (including the papers on frames and the frame algorithm); and those that deal with noniterative solutions, or matrix formulations of the problem (to which any standard method, iterative or noniterative, can be applied, including Chebyshev acceleration and the conjugate gradient method).
This paper deals with two-channel sampling using an oversampled filter bank. It emphasizes sampling of one function and its derivative, at a rate higher than the critical minimum rate (oversampling), and the problem of reconstructing the function even when a finite number of samples (of the function or of its derivative) are unknown. The motivation for considering this problem is the need to add redundancy to data, as in conventional channel coding, and the convenience of doing it in a domain suited to the data, as in joint source-channel coding.
This paper proposes two fast algorithms for the extrapolation of band-limited images (or, equivalently, for spectrum extrapolation of finite-support objects). The algorithms are noniterative and based on the Cholesky decomposition of certain Hermitian positive-definite matrices. One of the algorithms is based on an image-domain formulation of the problem, whereas the other is based on the dual frequency-domain formulation. Time-domain and frequency-domain formulations were introduced in Interpolation in the time and frequency domains, for 1D signals. Both formulations lead to a wide range of possible algorithms, but the proposed methods seem well suited to the problem under consideration. Some of the issues that should be taken into consideration when designing algorithms for specific problems are discussed. The connection between these reconstruction algorithms and error control coding in the complex field is briefly mentioned.
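The paper's 2-D algorithms are not reproduced here, but the general idea can be hinted at with a hypothetical 1-D sketch (signal, band, and missing-sample positions are illustrative assumptions): the system that arises in the discrete missing-samples problem is symmetric positive definite, so it can be solved noniteratively through a Cholesky factorization and two triangular solves.

```python
import numpy as np

# Illustrative 1-D setup: a DFT-band-limited signal with three lost samples.
N = 32
n = np.arange(N)
x = np.cos(2*np.pi*2*n/N) + 0.5*np.sin(2*np.pi*3*n/N)  # DFT nonzero only on bins {2, 3, 29, 30}
band = np.array([2, 3, N-3, N-2])
lost = np.array([5, 12, 20])
known = np.setdiff1d(n, lost)

# Circulant projector onto the passband: x = P x for band-limited x.
W = np.exp(-2j*np.pi*np.outer(n, n)/N)                 # DFT matrix
d = np.zeros(N); d[band] = 1.0
P = ((W.conj().T * d) @ W / N).real                    # real, since the band is symmetric

# (I - P_TT) y = P_TK x_K is symmetric positive definite.
S = np.eye(len(lost)) - P[np.ix_(lost, lost)]
b = P[np.ix_(lost, known)] @ x[known]

L = np.linalg.cholesky(S)                              # S = L L^T
y = np.linalg.solve(L.T, np.linalg.solve(L, b))        # forward/back substitution
assert np.allclose(y, x[lost])                         # lost samples recovered
```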
This book chapter discusses a class of iterative and noniterative methods for the reconstruction of partially known band-limited signals. Band-limiting is equivalent to the vanishing of a known subset of the samples of the Discrete Fourier Transform (DFT) of the signals. The signals are partially known in the sense that only a subset of their samples is assumed to be available for measurement. The problem is to determine the signal from the available samples. Although this is in essence a nonuniform sampling problem, the methods employed differ considerably from the "infinite series" approaches. This is due to the nature of the finite-dimensional spaces in which we work, for which matrix formulations are the most natural tool. The chapter is written as a tutorial. Although it is not a comprehensive review of the many methods that have been proposed for the solution of the missing data and interpolation problems, it covers a variety of possible methods, their interrelations, possible implementations, stability, and relative performance. Understanding the connections between the methods is particularly useful, since it helps the reader navigate the literature and contributes to a better grasp of the subject and of the main options and algorithms. The methods described can be extended, for example, to signal and image interpolation, extrapolation, and superresolution.
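A minimal sketch of this problem setup (sizes, band, and missing-sample positions are illustrative assumptions, not taken from the chapter): a signal band-limited in the DFT sense, with a few samples missing, can be reconstructed by fitting its nonzero DFT coefficients to the available samples.

```python
import numpy as np

N = 32
n = np.arange(N)
x = np.cos(2*np.pi*2*n/N) + 0.5*np.sin(2*np.pi*3*n/N)  # DFT vanishes outside bins {2, 3, 29, 30}
band = np.array([2, 3, N-3, N-2])                      # known passband
lost = np.array([5, 12, 20])                           # unavailable samples
known = np.setdiff1d(n, lost)

# Inverse-DFT synthesis restricted to the passband.
F = np.exp(2j*np.pi*np.outer(n, band)/N) / N

# Fit the passband DFT coefficients to the available samples only.
Xb, *_ = np.linalg.lstsq(F[known], x[known].astype(complex), rcond=None)
x_rec = (F @ Xb).real

assert np.allclose(x_rec[lost], x[lost])               # missing samples recovered
```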
This paper proposes a classification of several algorithms for solving the discrete-discrete band-limited interpolation and extrapolation problems, based on the dimension of certain underlying vector spaces, and distinguishing between time-domain and frequency-domain methods. This perspective allows a synthetic view of the problem and clarifies the connections between some of the existing algorithms. The main issues that arise when designing efficient algorithms for the solution of a specific problem are discussed.
This article studies the conditioning of complex Vandermonde matrices, with reference to applications such as superresolution and missing samples. The results include bounds for the singular values of Vandermonde matrices whose nodes are complex numbers on the unit circle. Contrary to what happens in the real case, such matrices can be quite well conditioned.
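A quick numerical illustration of this contrast (the node choices are illustrative): a square Vandermonde matrix with equispaced real nodes is severely ill-conditioned, while one whose nodes are the roots of unity is perfectly conditioned, since it is a scaled DFT matrix.

```python
import numpy as np

m = 12

# Equispaced real nodes in [-1, 1]: condition number grows exponentially with m.
V_real = np.vander(np.linspace(-1.0, 1.0, m), increasing=True)

# Nodes on the unit circle (m-th roots of unity): V / sqrt(m) is unitary.
V_unit = np.vander(np.exp(2j*np.pi*np.arange(m)/m), increasing=True)

print(np.linalg.cond(V_real))   # large: ill-conditioned
print(np.linalg.cond(V_unit))   # 1: perfectly conditioned
```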
This paper addresses the problem of estimating the eigenvalues and condition numbers of matrices of a certain form. It mentions some of the problems in which such matrices occur, and to which the results obtained can be applied. Examples of such problems include (i) approximation by sums of irregular translates and (ii) the missing data problem and incomplete sampling series. The paper describes a method for estimating the eigenvalues and condition number of the class of matrices considered, and discusses some open issues.
This paper studies the stability of a direct method for superresolution proposed by Walsh and Delaney. It gives exact and best possible approximations for the determinant of the matrix of the method, and bounds for its smallest eigenvalue. The singular values of the matrix are also studied and related to the eigenvalues of the matrices of two other direct methods.
The eigenvalues of the matrices that occur in certain finite-dimensional interpolation problems are directly related to their numerical stability, and strongly depend on the distribution of the interpolation knots, that is, on the sampling set. We study this dependency as a function of the sampling set itself, and give accurate bounds for the eigenvalues of the interpolation matrices. The bounds can be evaluated in as few as four arithmetic operations, and so they greatly simplify the assessment of sampling sets regarding numerical stability. The accuracy and usefulness of the bounds are illustrated with examples. The paper Stability Issues in Error Control Coding in the Complex Field, Interpolation, and Frame Bounds (see the section on Frames, codes and reconstruction) considers bounds for more general sampling sets.
This paper addresses the problem of finding unknown samples in a signal, band-limited in the sense that its DFT contains a contiguous set of zero coefficients. It formulates the problem in two different ways, which lead to two sets of linear equations of different sizes. The time-domain formulation leads to as many equations as the number of unknown samples, whereas the frequency-domain formulation yields as many equations as there are unknown DFT coefficients. The solution of the time-domain equations yields the unknown samples directly, whereas the solution of the frequency-domain equations yields the unknown DFT coefficients. The paper also clarifies the connections between two apparently unrelated approaches to the problem: that of Gröchenig / Strohmer and the minimum dimension method. See, for example, the paper Noniterative and faster iterative methods for interpolation and extrapolation. This paper shows that the two methods are, in a certain sense, duals of each other. The advantages of recognizing this duality are discussed.
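The size difference can be sketched as follows (signal, band, and lost-sample positions are illustrative assumptions): with three missing samples and four nonzero DFT coefficients, the time-domain system below is 3 × 3, whereas the dual frequency-domain system would have four unknowns.

```python
import numpy as np

N = 32
n = np.arange(N)
x = np.cos(2*np.pi*2*n/N) + 0.5*np.sin(2*np.pi*3*n/N)  # passband: bins {2, 3, 29, 30}
band = np.array([2, 3, N-3, N-2])
lost = np.array([5, 12, 20])
known = np.setdiff1d(n, lost)

# Circulant projector onto the passband: x = P x for band-limited x.
W = np.exp(-2j*np.pi*np.outer(n, n)/N)
d = np.zeros(N); d[band] = 1.0
P = ((W.conj().T * d) @ W / N).real

# Time-domain system: one equation per missing sample (3 here),
# versus 4 unknowns (passband coefficients) in the frequency domain.
S = np.eye(len(lost)) - P[np.ix_(lost, lost)]
b = P[np.ix_(lost, known)] @ x[known]
x_lost = np.linalg.solve(S, b)                         # unknown samples, directly

assert np.allclose(x_lost, x[lost])
```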
This paper studies the eigenvalues of a matrix that arises in the recovery of lost samples from oversampled band-limited signals. Emphasis is placed on the variation of the eigenvalues as a function of the distribution of the missing samples and as a function of the oversampling parameter. The paper presents a number of results which help to understand the numerical difficulties that may occur in this class of problems, and ways to circumvent them. If you are interested in the discrete, finite-dimensional version of this problem, in which the Fourier transform is replaced by the DFT, see the paper The eigenvalues of matrices that occur in certain interpolation problems.
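In the discrete analogue this dependence is easy to observe directly (the parameters below are illustrative assumptions): the largest eigenvalue of the submatrix of the band-limiting projector indexed by the missing samples governs the conditioning of the recovery, and it is much closer to 1 when the missing samples are contiguous than when they are spread out.

```python
import numpy as np

N = 64
n = np.arange(N)
band = np.r_[0:5, N-4:N]                 # low-pass band, 9 nonzero DFT bins
W = np.exp(-2j*np.pi*np.outer(n, n)/N)
d = np.zeros(N); d[band] = 1.0
P = ((W.conj().T * d) @ W / N).real      # projector onto the passband

def max_eig(lost):
    """Largest eigenvalue of P restricted to the lost-sample positions."""
    return np.linalg.eigvalsh(P[np.ix_(lost, lost)]).max()

contiguous = np.arange(20, 26)           # 6 consecutive missing samples
scattered = np.arange(6) * 10            # 6 spread-out missing samples

print(max_eig(contiguous))               # close to 1: nearly singular recovery
print(max_eig(scattered))                # much smaller: well-conditioned recovery
```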
This paper is also related to frames and the frame algorithm, but gives a direct (noniterative) solution. For more on the connection between frames and the interpolation problem, see the book chapter Mathematics for Multimedia Signal Processing II — Discrete Finite Frames and Signal Reconstruction and the section on Frames, codes and reconstruction.
The paper studies the missing samples, or band-limited interpolation and extrapolation, problems for finite-dimensional signals. It shows that these problems can easily be reduced to the solution of a set of linear equations with a real symmetric positive-definite matrix whose spectral radius is less than unity. Thus, the equations can be solved directly or by successive approximation. A number of other well-known methods that may substantially increase the convergence rate can also be readily applied and are briefly discussed. Conditions for their convergence are stated, and their comparative performance is illustrated with an example.
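A minimal sketch of the successive-approximation route (the setup is illustrative, not taken from the paper): writing the system as (I − P_TT) y = b, the iteration y ← P_TT y + b converges geometrically, because the spectral radius of P_TT is below one.

```python
import numpy as np

N = 32
n = np.arange(N)
x = np.cos(2*np.pi*2*n/N) + 0.5*np.sin(2*np.pi*3*n/N)  # band-limited test signal
band = np.array([2, 3, N-3, N-2])
lost = np.array([5, 12, 20])
known = np.setdiff1d(n, lost)

W = np.exp(-2j*np.pi*np.outer(n, n)/N)
d = np.zeros(N); d[band] = 1.0
P = ((W.conj().T * d) @ W / N).real      # band-limiting projector

Ptt = P[np.ix_(lost, lost)]              # spectral radius < 1
b = P[np.ix_(lost, known)] @ x[known]

y = np.zeros(len(lost))                  # successive approximation
for _ in range(200):
    y = Ptt @ y + b

assert np.allclose(y, x[lost])           # converged to the missing samples
```

Accelerated variants (Chebyshev acceleration, conjugate gradients) apply to the same system and improve on this plain iteration.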
It is well known that a bandlimited oversampled signal is completely determined even if an arbitrary finite number of samples is lost. This paper (i) gives an alternative simple proof of this fact and (ii) shows that it carries over to generalized sampling expansions. More precisely, it is shown that any finite number of missing samples can be recovered from the remaining ones, in the case of generalized Kramer sampling expansions, if an appropriate oversampling constraint is satisfied. The recovery can be accomplished either iteratively or noniteratively.