Approximation and coding
This section contains papers that explore ideas related to approximation or coding, such as approximation using splines, approximation by superpositions of Gaussian functions, and the role of the image total variation in the lossless coding of images with sparse histograms.
The advantages of B-splines for signal representation are well known. This paper stresses a fact that seems to be less well known, namely, the possibility of using linear combinations of B-splines to obtain representations that are more stable than the usual ones. We give the best possible Riesz bounds for these linear combinations and calculate their duals, in a generalized sampling context.
To put this into perspective, consider an approximation scheme based on a certain kernel function. Often, such a scheme leads to a Riesz sequence or a Riesz basis. Riesz bases are a generalization of orthonormal bases, and can be regarded as the result of applying a bounded invertible operator to the elements of an orthonormal basis. A Riesz sequence, on the other hand, is an "incomplete basis": it is merely a Riesz basis for the closed linear span of the set of functions under consideration.
The numerical stability of such representations is important in practice, as it dictates the magnitude of the adverse effects of noisy data. The stability of Riesz bases and Riesz sequences can be measured by the size of their Riesz bounds. Orthonormal bases are perfect from the viewpoint of numerical stability: their Riesz bounds are both equal to unity. As the ratio of the upper to the lower bound increases, the numerical stability of the representation decreases; the tighter (closer to each other) the bounds, the less any small perturbation of the input data is felt at the output. This paper points out that the Riesz bounds associated with bases built from certain linear combinations of B-splines are better, from the stability point of view, than those of bases built directly on B-splines.
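To make these quantities concrete, here is a minimal numerical sketch (in Python with NumPy; it is not the paper's construction, and the function names and truncation parameters are ours) that estimates the Riesz bounds of the integer shifts of a degree-n B-spline as the extrema of the periodized energy of its Fourier transform, A(w) = sum_k |phi_hat(w + 2 pi k)|^2.

    import numpy as np

    def bspline_ft(w, n):
        # Fourier transform of the centered B-spline of degree n:
        # phi_hat(w) = (sin(w/2) / (w/2))**(n+1), with value 1 at w = 0.
        x = w / 2.0
        s = np.ones_like(x)
        nz = x != 0
        s[nz] = np.sin(x[nz]) / x[nz]
        return s ** (n + 1)

    def riesz_bounds(n, K=50, num=4096):
        # Estimate the Riesz bounds of the integer shifts of the degree-n
        # B-spline as the extrema of A(w) = sum_k |phi_hat(w + 2*pi*k)|^2
        # over [0, pi]; the sum over k is truncated to |k| <= K.
        w = np.linspace(0.0, np.pi, num)
        A = np.zeros_like(w)
        for k in range(-K, K + 1):
            A += bspline_ft(w + 2.0 * np.pi * k, n) ** 2
        return A.min(), A.max()

    for n in range(4):
        lo, hi = riesz_bounds(n)
        print("degree %d: A = %.6f, B = %.6f, B/A = %.3f" % (n, lo, hi, hi / lo))

For degree 0 the shifts are orthonormal and both bounds equal unity; as the degree grows, the ratio B/A grows, which is precisely the loss of stability that suitable linear combinations of B-splines can mitigate.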
The performance of state-of-the-art lossless image coding methods (such as JPEG-LS, lossless JPEG-2000, and CALIC) can be considerably improved by a recently introduced preprocessing technique (histogram packing), which can be applied whenever the images have sparse histograms. Bitrate savings of up to 50% have been reported, but so far no theoretical explanation of the fact has been advanced. This paper addresses this issue, and analyzes the effect of the technique in terms of the interplay between histogram packing and the image total variation, emphasizing the lossless JPEG-2000 case.
State-of-the-art lossless image compression methods, such as JPEG-LS, lossless JPEG-2000, and CALIC, perform much better on images with sparse histograms when a simple preprocessing technique is used. This paper attempts to explain how that preprocessing stage, which essentially packs the histogram of the image, affects the image total variation and, as a result, the effectiveness of the compression algorithms.
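As an illustration of what such preprocessing does, here is a minimal toy sketch of histogram packing (our own version, assuming NumPy; the actual method may differ in details such as how the inverse table is signaled to the decoder): the gray levels that actually occur are remapped to consecutive integers, which shrinks the jumps between neighboring pixels, and hence the image total variation, while remaining perfectly invertible.

    import numpy as np

    def pack_histogram(img):
        # Remap the gray levels actually present to 0, 1, 2, ...;
        # return the packed image and the table needed to invert the map.
        levels = np.unique(img)
        lut = np.zeros(int(levels.max()) + 1, dtype=levels.dtype)
        lut[levels] = np.arange(levels.size, dtype=levels.dtype)
        return lut[img], levels

    def unpack_histogram(packed, levels):
        # Lossless inverse: look the original levels back up.
        return levels[packed]

    # Tiny image with a sparse histogram (only 4 of 256 levels occur).
    img = np.array([[0, 64, 64, 255],
                    [192, 0, 255, 64]], dtype=np.uint8)
    packed, levels = pack_histogram(img)
    assert np.array_equal(unpack_histogram(packed, levels), img)
    print(packed)   # entries in {0, 1, 2, 3}: much smaller total variation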
The approximation properties of Gaussian functions have been used for coding purposes. This paper shows that those properties can be obtained very directly from results due to Norbert Wiener on the closure of translations in L_1 and L_2. This observation not only simplifies the proof of the approximation property but also renders the result applicable, in a more general setting, to other functions (not necessarily Gaussian). The problem of approximating with superpositions of Gaussian functions is of course nonlinear (two of the parameters to adjust are the positions and the widths, or variances, of the Gaussians). For more works dealing with nonlinear problems, see the section Nonlinear systems.
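To see the nonlinearity concretely, here is a small sketch (our own illustration, assuming NumPy and SciPy; the target signal, the number of Gaussians, and the optimizer are arbitrary choices, not the paper's) that fits a superposition of Gaussians to a signal by adjusting amplitudes, positions, and widths simultaneously:

    import numpy as np
    from scipy.optimize import least_squares

    def gaussian_mix(params, t):
        # Superposition of Gaussians; params = [a1, c1, s1, a2, c2, s2, ...]
        # with amplitudes a, centers c, and widths s all adjustable --
        # c and s enter nonlinearly, hence the nonlinear fitting problem.
        a, c, s = params[0::3], params[1::3], params[2::3]
        return np.sum(a[:, None]
                      * np.exp(-((t[None, :] - c[:, None]) / s[:, None]) ** 2),
                      axis=0)

    t = np.linspace(0.0, 1.0, 400)
    f = np.where(t < 0.5, t, 1.0 - t)          # target: a triangle pulse

    K = 6                                      # number of Gaussians
    p0 = np.ravel(np.column_stack([np.full(K, 0.2),           # amplitudes
                                   np.linspace(0.1, 0.9, K),  # centers
                                   np.full(K, 0.1)]))         # widths

    res = least_squares(lambda p: gaussian_mix(p, t) - f, p0)
    print("sup-norm error:", np.max(np.abs(gaussian_mix(res.x, t) - f)))

If the centers and widths were frozen, the problem would reduce to linear least squares in the amplitudes alone; it is letting them vary that makes the problem nonlinear.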
This paper shows that the approximation error between a smoothed version of the signal (obtained with an ideal low-pass filter of bandwidth 2w) and a nonuniform sampling approximation of the signal based on the sinc kernel is O(w log w / N) in the sup norm.
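The flavor of the result can be reproduced in a toy experiment (our own, assuming NumPy; it is not the paper's construction, and the printed errors need not track the O(w log w / N) rate, which depends on the sample geometry and on truncation effects this sketch ignores): approximate a band-limited signal from jittered samples by a combination of sinc kernels and measure the sup-norm error on a dense grid.

    import numpy as np

    rng = np.random.default_rng(0)
    w = 2.0                                     # kernel bandwidth (our choice)

    def sinc_kernel(t):
        # Ideal low-pass kernel of bandwidth w: sin(w t) / (w t).
        return np.sinc(w * t / np.pi)

    def f(t):
        # Band-limited test signal (frequencies below w).
        return np.cos(0.8 * t) + 0.5 * np.cos(1.5 * t + 1.0)

    t = np.linspace(-8.0, 8.0, 2000)            # evaluate away from the ends
    for N in (50, 100, 200, 400):
        h = 20.0 / (N - 1)                      # mean spacing on [-10, 10]
        tk = np.linspace(-10.0, 10.0, N) + rng.uniform(-0.25, 0.25, N) * h
        S = sinc_kernel(tk[:, None] - tk[None, :])      # collocation matrix
        c, *_ = np.linalg.lstsq(S, f(tk), rcond=None)   # kernel weights
        err = np.max(np.abs(sinc_kernel(t[:, None] - tk[None, :]) @ c - f(t)))
        print("N = %4d: sup-norm error = %.3e" % (N, err))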