Statistical Signal Processing of Complex-Valued Data: The Theory of Improper and Noncircular Signals

Publication subtitle: The Theory of Improper and Noncircular Signals

Authors: Peter J. Schreier; Louis L. Scharf

Publisher: Cambridge University Press

Publication year: 2010

E-ISBN: 9780511686696

P-ISBN(Paperback): 9780521897723

Subject: TN911.7 signal processing

Keyword: Civil engineering, surveying & building

Language: ENG

Description

Complex-valued random signals are embedded in the very fabric of science and engineering, yet the usual assumptions made about their statistical behavior are often a poor representation of the underlying physics. This book deals with improper and noncircular complex signals, which do not conform to classical assumptions, and it demonstrates how correct treatment of these signals can have significant payoffs. The book begins with detailed coverage of the fundamental theory and presents a variety of tools and algorithms for dealing with improper and noncircular signals. It provides a comprehensive account of the main applications, covering detection, estimation, and signal analysis of stationary, nonstationary, and cyclostationary processes. A systematic development, from the origin of complex signals to their probabilistic description, makes the theory accessible to newcomers. This book is ideal for graduate students and researchers working with complex data in a range of research areas from communications to oceanography.
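To make the central concept of the description concrete: a complex random signal z is proper when its complementary covariance E[zz] (no conjugate) vanishes, and improper otherwise. The following sketch, which is illustrative and not code from the book, estimates the ratio of complementary variance to variance for a proper and an improper complex Gaussian; the function name `circularity_quotient` is an assumed label for this ratio.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Proper (circularly symmetric) complex Gaussian: independent real and
# imaginary parts with equal variance.
z_proper = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

# Improper example: all the power sits in the real part, so E[z z] != 0.
z_improper = rng.standard_normal(n).astype(complex)

def circularity_quotient(z):
    """|complementary variance| / variance for a zero-mean sample:
    near 0 for proper signals, near 1 for maximally improper ones."""
    variance = np.mean(np.abs(z) ** 2)   # estimates E[z z*]
    comp_var = np.mean(z * z)            # estimates E[z z] (no conjugate)
    return np.abs(comp_var) / variance

print(f"proper:   {circularity_quotient(z_proper):.3f}")   # near 0
print(f"improper: {circularity_quotient(z_improper):.3f}") # near 1
```

Treating the improper sample with proper-signal formulas would discard the information carried by E[zz]; exploiting it is what widely linear processing, covered in Chapters 2 and 5, is about.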

Contents

1.4.3 Complex demodulation

1.4.4 Bedrosian's theorem: the Hilbert transform of a product

1.4.5 Instantaneous amplitude, frequency, and phase

1.4.6 Hilbert transform and SSB modulation

1.4.7 Passband filtering at baseband

1.5 Complex signals for the efficient use of the FFT

1.5.1 Complex DFT

1.5.2 Twofer: two real DFTs from one complex DFT

1.5.3 Twofer: one real 2N-DFT from one complex N-DFT

1.6 The bivariate Gaussian distribution and its complex representation

1.6.1 Bivariate Gaussian distribution

1.6.2 Complex representation of the bivariate Gaussian distribution

1.6.3 Polar coordinates and marginal pdfs

1.7 Second-order analysis of the polarization ellipse

1.8 Mathematical framework

1.9 A brief survey of applications

2 Introduction to complex random vectors and processes

2.1 Connection between real and complex descriptions

2.1.1 Widely linear transformations

2.1.2 Inner products and quadratic forms

2.2 Second-order statistical properties

2.2.1 Extending definitions from the real to the complex domain

2.2.2 Characterization of augmented covariance matrices

2.2.3 Power and entropy

2.3 Probability distributions and densities

2.3.1 Complex Gaussian distribution

2.3.2 Conditional complex Gaussian distribution

2.3.3 Scalar complex Gaussian distribution

2.3.4 Complex elliptical distribution

2.4 Sufficient statistics and ML estimators for covariances: complex Wishart distribution

Complex Wishart distribution

2.5 Characteristic function and higher-order statistical description

2.5.1 Characteristic functions of Gaussian and elliptical distributions

2.5.2 Higher-order moments

2.5.3 Cumulant-generating function

2.5.4 Circularity

Do circular random vectors have spherical pdf contours?

2.6 Complex random processes

2.6.1 Wide-sense stationary processes

2.6.2 Widely linear shift-invariant filtering

Notes

Part II Complex random vectors

3 Second-order description of complex random vectors

3.1 Eigenvalue decomposition

3.1.1 Principal components

3.1.2 Rank reduction and transform coding

3.2 Circularity coefficients

3.2.1 Entropy

3.2.2 Strong uncorrelating transform (SUT)

3.2.3 Characterization of complementary covariance matrices

3.3 Degree of impropriety

3.3.1 Upper and lower bounds

Least improper analog

3.3.2 Eigenvalue spread of the augmented covariance matrix

3.3.3 Maximally improper vectors

3.4 Testing for impropriety

3.5 Independent component analysis

Notes

4 Correlation analysis

4.1 Foundations for measuring multivariate association between two complex random vectors

4.1.1 Rotational, reflectional, and total correlations for complex scalars

4.1.2 Principle of multivariate correlation analysis

4.1.3 Rotational, reflectional, and total correlations for complex vectors

4.1.4 Transformations into latent variables

4.2 Invariance properties

4.2.1 Canonical correlations

4.2.2 Multivariate linear regression (half-canonical correlations)

Weighted MLR

4.2.3 Partial least squares

4.3 Correlation coefficients for complex vectors

4.3.1 Canonical correlations

4.3.2 Multivariate linear regression (half-canonical correlations)

4.3.3 Partial least squares

4.4 Correlation spread

4.5 Testing for correlation structure

4.5.1 Sphericity

4.5.2 Independence within one data set

4.5.3 Independence between two data sets

Notes

5 Estimation

5.1 Hilbert-space geometry of second-order random variables

5.2 Minimum mean-squared error estimation

5.3 Linear MMSE estimation

5.3.1 The signal-plus-noise channel model

5.3.2 The measurement-plus-error channel model

5.3.3 Filtering models

5.3.4 Nonzero means

5.3.5 Concentration ellipsoids

5.3.6 Special cases

Signal plus noise

The Gaussian case

5.4 Widely linear MMSE estimation

5.4.1 Special cases

5.4.2 Performance comparison between LMMSE and WLMMSE estimation

5.5 Reduced-rank widely linear estimation

5.5.1 Minimize mean-squared error (min-trace problem)

5.5.2 Maximize mutual information (min-det problem)

5.6 Linear and widely linear minimum-variance distortionless response estimators

5.6.1 Rank-one LMVDR receiver

Relation to LMMSE estimator

5.6.2 Generalized sidelobe canceler

5.6.3 Multi-rank LMVDR receiver

Generalized sidelobe canceler

5.6.4 Subspace identification for beamforming and spectrum analysis

5.6.5 Extension to WLMVDR receiver

5.7 Widely linear-quadratic estimation

5.7.1 Connection between real and complex quadratic forms

5.7.2 WLQMMSE estimation

Notes

6 Performance bounds for parameter estimation

6.1 Frequentists and Bayesians

6.1.1 Bias, error covariance, and mean-squared error

6.1.2 Connection between frequentist and Bayesian approaches

6.1.3 Extension to augmented errors

6.2 Quadratic frequentist bounds

6.2.1 The virtual two-channel experiment and the quadratic frequentist bound

Good and bad scores

6.2.2 Projection-operator and integral-operator representations of quadratic frequentist bounds

6.2.3 Extension of the quadratic frequentist bound to improper errors and scores

6.3 Fisher score and the Cramér-Rao bound

6.3.1 Nuisance parameters

6.3.2 The Cramér-Rao bound in the proper multivariate Gaussian model

6.3.3 The separable linear statistical model and the geometry of the Cramér-Rao bound

6.3.4 Extension of Fisher score and the Cramér-Rao bound to improper errors and scores

6.3.5 The Cramér-Rao bound in the improper multivariate Gaussian model

6.3.6 Fisher score and Cramér-Rao bounds for functions of parameters

6.4 Quadratic Bayesian bounds

6.5 Fisher-Bayes score and Fisher-Bayes bound

6.5.1 Fisher-Bayes score and information

6.5.2 Fisher-Bayes bound

6.6 Connections and orderings among bounds

Notes

7 Detection

7.1 Binary hypothesis testing

7.1.1 The Neyman-Pearson lemma

7.1.2 Bayes detectors

7.1.3 Adaptive Neyman-Pearson and empirical Bayes detectors

7.2 Sufficiency and invariance

7.3 Receiver operating characteristic

7.4 Simple hypothesis testing in the improper Gaussian model

7.4.1 Uncommon means and common covariance

7.4.2 Common mean and uncommon covariances

7.4.3 Comparison between linear and widely linear detection

7.5 Composite hypothesis testing and the Karlin-Rubin theorem

7.6 Invariance in hypothesis testing

7.6.1 Matched subspace detector

7.6.2 CFAR matched subspace detector

Notes

Part III Complex random processes

8 Wide-sense stationary processes

8.1 Spectral representation and power spectral density

8.2 Filtering

8.2.1 Analytic and complex baseband signals

8.2.2 Noncausal Wiener filter

8.3 Causal Wiener filter

8.3.1 Spectral factorization

8.3.2 Causal synthesis, analysis, and Wiener filters

8.4 Rotary-component and polarization analysis

8.4.1 Rotary components

8.4.2 Rotary components of random signals

Interpretation of the random ellipse

Statistical properties of the random ellipse

8.4.3 Polarization and coherence

8.4.4 Stokes and Jones vectors

8.4.5 Joint analysis of two signals

8.5 Higher-order spectra

8.5.1 Moment spectra and principal domains

8.5.2 Analytic signals

Notes

9 Nonstationary processes

9.1 Karhunen-Loève expansion

9.1.1 Estimation

9.1.2 Detection

9.2 Cramér-Loève spectral representation

9.2.1 Four-corners diagram

9.2.2 Energy and power spectral densities

Wide-sense stationary signals

Nonstationary signals

9.2.3 Analytic signals

9.2.4 Discrete-time signals

9.3 Rihaczek time-frequency representation

9.3.1 Interpretation

9.3.2 Kernel estimators

Statistical properties

9.4 Rotary-component and polarization analysis

9.4.1 Ellipse properties

9.4.2 Analytic signals

9.5 Higher-order statistics

Notes

10 Cyclostationary processes

10.1 Characterization and spectral properties

10.1.1 Cyclic power spectral density

10.1.2 Cyclic spectral coherence

10.1.3 Estimating the cyclic power-spectral density

10.2 Linearly modulated communication signals

10.2.1 Symbol-rate-related cyclostationarity

10.2.2 Carrier-frequency-related cyclostationarity

10.2.3 Cyclostationarity as frequency diversity

10.3 Cyclic Wiener filter

10.4 Causal filter-bank implementation of the cyclic Wiener filter

10.4.1 Connection between scalar CS and vector WSS processes

10.4.2 Sliding-window filter bank

10.4.3 Equivalence to FRESH filtering

10.4.4 Causal approximation

Notes

Appendix 1: Rudiments of matrix analysis

A1.1 Matrix factorizations

A1.1.1 Partitioned matrices

A1.1.2 Eigenvalue decomposition

2 × 2 matrix

A1.1.3 Singular value decomposition

A1.2 Positive definite matrices

A1.2.1 Matrix square root and Cholesky decomposition

A1.2.2 Updating the Cholesky factors of a Grammian matrix

A1.2.3 Partial ordering

A1.2.4 Inequalities

A1.3 Matrix inverses

A1.3.1 Partitioned matrices

A1.3.2 Moore-Penrose pseudo-inverse

A1.3.3 Projections

Appendix 2: Complex differential calculus (Wirtinger calculus)

A2.1 Complex gradients

A2.1.1 Holomorphic functions

A2.1.2 Complex gradients and Jacobians

A2.1.3 Properties of Wirtinger derivatives

A2.2 Special cases

A2.3 Complex Hessians

A2.3.1 Properties

A2.3.2 Extension to complex-valued functions

Appendix 3: Introduction to majorization

A3.1 Basic definitions

A3.1.1 Majorization

A3.1.2 Schur-convex functions

A3.2 Tests for Schur-convexity

A3.2.1 Specialized tests

A3.2.2 Functions defined on D

A3.3 Eigenvalues and singular values

A3.3.1 Diagonal elements and eigenvalues

A3.3.2 Diagonal elements and singular values

A3.3.3 Partitioned matrices

References

Index
