Smoothing, Normalization, Correction and Reduction
David M.W. Powers
School of Informatics and Engineering, Flinders University of South Australia
...
However, since SVD and ICA are linear operations, and the FFT and IFFT are also linear, SVD and ICA recover the same sources when applied directly to the time-domain data as when applied to the frequency-domain data; apart from rounding and training error, the frequency data extracted is the same whether the FFT is performed first or last. This leads to two conclusions: there is no point in performing an IFFT after SVD to take the sources back into the time domain, and it is more efficient to perform the SVD dimension reduction before the FFT than after. Note that whilst the SVD and ICA matrices need to be calculated from the data (though theoretically they are then stable for stationary sources), the FFT and IFFT are fixed matrix multiplications, and a Symmetric Fourier Transform (SFT) expresses all the sine (imaginary) and cosine (real) components of the FFT in a rectangular real-valued matrix whose transpose acts as a pseudoinverse (viz. ISFT = SFT′).
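These two claims can be sketched in NumPy. The construction below is illustrative rather than the paper's own code: the rows of the matrix hold the cosine and sine components of the DFT, with one conventional scaling that makes the matrix orthogonal, so its transpose inverts it (ISFT = SFT′); the final check shows that a linear unmixing (here an SVD channel basis) commutes with the transform, which is just associativity of linear maps.

```python
import numpy as np

def sft_matrix(n):
    # Real-valued Symmetric Fourier Transform: rows are the cosine (real)
    # and sine (imaginary) components of the DFT, scaled so the matrix is
    # orthogonal and its transpose acts as the (pseudo)inverse.
    rows = [np.ones(n) / np.sqrt(n)]                    # DC component
    t = np.arange(n)
    for k in range(1, (n + 1) // 2):
        rows.append(np.sqrt(2.0 / n) * np.cos(2 * np.pi * k * t / n))
        rows.append(np.sqrt(2.0 / n) * np.sin(2 * np.pi * k * t / n))
    if n % 2 == 0:                                      # Nyquist row for even n
        rows.append(np.cos(np.pi * t) / np.sqrt(n))
    return np.vstack(rows)

n = 8
S = sft_matrix(n)
# The transpose inverts the transform: S @ S.T is the identity.
print(np.allclose(S @ S.T, np.eye(n)))

rng = np.random.default_rng(0)
X = rng.standard_normal((4, n))          # 4 channels of time-domain data
U, _, _ = np.linalg.svd(X, full_matrices=False)
# Unmixing then transforming equals transforming then unmixing
# (associativity), so the order of SVD and SFT does not matter.
print(np.allclose((U.T @ X) @ S.T, U.T @ (X @ S.T)))
```

Because the SFT is a fixed orthogonal matrix, reducing dimension with SVD first means the (cheaper) transform is applied to fewer rows, as the text argues.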
To the extent that other forms of frequency analysis are linear and invertible, this would also hold for those techniques. In general, however, the other techniques do not have these properties, and even the windowing techniques applied to the FFT represent deviations from them. The DCT, for example, throws away phase information to double the frequency resolution. Wavelets sacrifice linearity for time specificity. Linear Predictive Coding (LPC) and Auto-Regression (AR) sacrifice the ability to distinguish phase, frequency and reverberation for a smoother frequency envelope. The cepstrum takes the logarithm of power between the FFT and IFFT operations, sacrificing linearity for increased frequency specificity and rejection of reverberation whilst counting the frequency of frequencies, which is particularly useful for establishing the fundamental where many harmonics are present.
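A minimal real-cepstrum sketch of the last point, with NumPy. The sample rate, fundamental and search window here are illustrative assumptions: taking the log of the power spectrum turns a harmonic series at f0 into a ripple that repeats every f0 along the frequency axis, and the inverse FFT of that ripple peaks at the quefrency fs/f0 samples, recovering the fundamental even though the energy is spread over many harmonics.

```python
import numpy as np

fs = 8000                                  # sample rate (Hz), illustrative
t = np.arange(4000) / fs                   # exactly 100 periods of f0
f0 = 200.0
# Harmonic-rich signal: 8 harmonics of f0 with 1/h amplitudes.
x = sum(np.sin(2 * np.pi * f0 * h * t) / h for h in range(1, 9))

spectrum = np.fft.rfft(x)
log_power = np.log(np.abs(spectrum) ** 2 + 1e-12)   # epsilon guards log(0)
cepstrum = np.fft.irfft(log_power)

# The fundamental's period is fs/f0 = 40 samples; search a window of
# quefrencies away from the low-quefrency spectral envelope.
q = np.argmax(cepstrum[20:60]) + 20
print(fs / q)                              # approximately 200 Hz
```

The log between the forward and inverse transforms is exactly the nonlinearity the text describes: it flattens the harmonics to near-equal height so their regular spacing, rather than their individual strengths, dominates the result.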