ABSTRACT

CONTENTS

22.1 Introduction
22.2 Revisiting Stationarity
    22.2.1 A Time-Frequency Perspective
    22.2.2 Stationarization via Surrogates
22.3 Time-Frequency Learning Machines
    22.3.1 Reproducing Kernels
    22.3.2 The Kernel Trick, the Representer Theorem
    22.3.3 Time-Frequency Learning Machines: General Principles
    22.3.4 Wigner Distribution versus Spectrogram
22.4 A Nonsupervised Classification Approach
    22.4.1 An Overview on One-Class Classification
    22.4.2 One-Class SVM for Testing Stationarity
    22.4.3 Spherical Multidimensional Scaling
22.5 Illustration
22.6 Conclusion
References

22.1 INTRODUCTION

Time-frequency representations provide a powerful tool for nonstationary signal analysis and classification, supporting a wide range of applications [12]. As opposed to conventional Fourier analysis, these techniques reveal the evolution in time of the spectral content of signals. In Refs. [7,38], time-frequency analysis is used to test the stationarity of any signal. The proposed method consists of a comparison between global and local time-frequency features. Its originality lies in the use of a family of stationary surrogate signals to define the null hypothesis of stationarity and, based on this information, to derive statistical tests. An open question remains, however, as to how to choose relevant time-frequency features.
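To fix ideas, the sketch below illustrates the surrogate-based testing principle in its simplest form: stationary surrogates are obtained by keeping the magnitude spectrum of the observed signal and randomizing its Fourier phases, and a local-versus-global spectrogram contrast of the signal is then compared with its distribution over the surrogate ensemble. The particular contrast feature, the helper names (make_surrogate, local_vs_global_contrast), and the three-sigma decision rule are illustrative assumptions for this sketch, not the statistics developed in this chapter.

import numpy as np
from scipy.signal import spectrogram

def make_surrogate(x, rng):
    """Stationarize a signal: keep its magnitude spectrum, randomize the phases."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=X.shape)
    phases[0] = 0.0                       # keep the DC bin real
    if x.size % 2 == 0:
        phases[-1] = 0.0                  # keep the Nyquist bin real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=x.size)

def local_vs_global_contrast(x, fs=1.0, nperseg=128):
    """Toy time-frequency feature: mean squared deviation of the local spectra
    (spectrogram columns) from the globally averaged spectrum."""
    _, _, Sxx = spectrogram(x, fs=fs, nperseg=nperseg)
    global_spectrum = Sxx.mean(axis=1, keepdims=True)
    return np.mean((Sxx - global_spectrum) ** 2)

# Toy test on a chirp (nonstationary by construction): compare the observed
# feature with its distribution under the stationary null, estimated from surrogates.
rng = np.random.default_rng(0)
n = 2048
t = np.arange(n)
x = np.sin(2 * np.pi * (0.05 + 0.10 * t / n) * t)

theta_obs = local_vs_global_contrast(x)
theta_null = np.array([local_vs_global_contrast(make_surrogate(x, rng))
                       for _ in range(50)])

# Crude one-sided rule: reject stationarity if the observed contrast lies well
# above what the surrogate ensemble produces.
print("observed:", theta_obs, "surrogate mean:", theta_null.mean())
print("reject stationarity:", theta_obs > theta_null.mean() + 3 * theta_null.std())

In this toy setting the chirp is flagged as nonstationary because its local spectra drift away from the global average, whereas the phase-randomized surrogates, which share the same global spectrum but no temporal structure, do not.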