Advanced Signal Processing: A Concise Guide

"A comprehensive introduction to the mathematical principles and algorithms in statistical signal processing and modern neural networks. This text is an expanded version of a graduate course on advanced signal processing at the Johns Hopkins University Whiting School program for professionals,...

Bibliographic Details
Classification: Electronic Book
Main Authors: Najmi, Amir-Homayoon (Author), Moon, Todd K. (Author)
Format: Electronic eBook
Language: English
Published: New York, N.Y.: McGraw-Hill Education, [2020]
Edition: First edition.
Series: McGraw-Hill's AccessEngineering.
Subjects:
Online Access: Full text
Table of Contents:
  • Cover
  • About the Authors
  • Title Page
  • Copyright Page
  • Dedication
  • Contents
  • List of Figures
  • List of Tables
  • Acronyms
  • Preface
  • Acknowledgments
  • 1 Mathematical Structures of Signal Spaces
  • 1.1 Introduction
  • 1.2 Vector Spaces, Norms, and Inner Products
  • 1.3 Orthonormal Vectors and the Gram-Schmidt Method
  • 1.4 Complete and Orthonormal Bases
  • 1.5 Linear Operators in Function Spaces
  • 1.6 Matrix Determinant, Eigenvectors, and Eigenvalues
  • 1.7 Matrix Norms
  • 1.8 Solutions to Ax = b
  • 1.9 Projections in a Hilbert Space
  • 1.10 The Prolate Spheroidal Functions
  • 1.11 The Approximation Problem and the Orthogonality Principle
  • 1.12 The Haar Wavelet
  • 1.13 MRA Subspaces and Discrete Orthogonal Wavelet Bases
  • 1.14 Compressive Sensing
  • 2 Matrix Factorizations and the Least Squares Problem
  • 2.1 Introduction
  • 2.2 QR Factorization
  • 2.3 QR Factorization Using Givens Rotations
  • 2.4 QR Using Householder Reflections
  • 2.5 QR Factorization and Full Rank Least Squares
  • 2.6 Cholesky Factorization and Full Rank Least Squares
  • 2.7 Singular Value Decomposition (SVD)
  • 2.8 SVD and Reduced Rank Approximation
  • 2.9 SVD and Matrix Subspaces
  • 2.10 SVD: Full Rank Least Squares and Minimum Norm Solutions
  • 2.11 Total Least Squares
  • 2.12 SVD and the Orthogonal Procrustes Problem
  • 3 Linear Time-Invariant Systems and Transforms
  • 3.1 Introduction
  • 3.2 The Laplace Transform
  • 3.3 Phase and Group Delay Response: Continuous Time
  • 3.4 The Z Transform
  • 3.5 Phase and Group Delay Response: Discrete Time
  • 3.6 Minimum Phase and Front Loading Property
  • 3.7 The Fourier Transform
  • 3.8 The Short-Time Fourier Transform and the Spectrogram
  • 3.9 The Discrete Time Fourier Transform
  • 3.10 The Chirp Z Transform
  • 3.11 Finite Convolutions
  • 3.12 The Cepstrum
  • 3.13 The Orthogonal Discrete Wavelet Transform
  • 3.14 The Hilbert Transform Relations
  • 3.15 The Analytic Signal and Instantaneous Frequency
  • 3.16 Time-Frequency Distribution Functions
  • 4 Least Squares Filters
  • 4.1 Introduction
  • 4.2 Quadratic Minimization Problems
  • 4.3 Frequency Domain Least Squares Filters
  • 4.4 Time Domain Least Squares Shaping Filters
  • 4.5 Gradient Descent Iterative Solution to Least Squares Filtering
  • 4.6 Time Delay Estimation
  • 5 Random Variables and Estimation Theory
  • 5.1 Real Random Variables and Random Vectors
  • 5.2 Complex Random Variables and Random Vectors
  • 5.3 Random Processes
  • 5.4 Gaussian Random Variables and Random Vectors
  • 5.5 Gram-Schmidt Decorrelation
  • 5.6 Principal Components Analysis
  • 5.7 The Karhunen-Loève Transformation
  • 5.8 Statistical Properties of the Least Squares Filter
  • 5.9 Estimation of Random Variables
  • 5.10 Jointly Gaussian Random Vectors, the Conditional Mean and Covariance
  • 5.11 The Conditional Mean and the Linear Model
  • 5.12 The Kalman Filter
  • 5.13 Parameter Estimation and the Cramér-Rao Lower Bound
  • 5.14 Linear MVU and Maximum Likelihood Estimators
  • 5.15 MLE of the Parameter Vector of a Linear Model
  • 5.16 MLE of Complex Amplitude of a Complex Sinusoid in Gaussian Noise
  • 5.17 MLE of a First Order Gaussian Markov Process
  • 5.18 Information Theory: Entropy and Mutual Information
  • 5.19 Independent Components Analysis
  • 5.20 Maximum Likelihood ICA
  • 6 WSS Random Processes
  • 6.1 Auto-Correlation and the Power Spectral Density
  • 6.2 Complex Sinusoids in Zero-Mean White Noise
  • 6.3 The MUSIC Algorithm
  • 6.4 Pisarenko Harmonic Decomposition (PHD)
  • 6.5 The ESPRIT Algorithm
  • 6.6 The Auto-Correlation Matrix for Time Reversed Signal Vectors
  • 7 Linear Systems and Stochastic Inputs
  • 7.1 Filtered Random Processes
  • 7.2 Detection of a Known Non-Random Signal in WSS Noise
  • 7.3 Detection of a WSS Random Signal in WSS Random Noise
  • 7.4 Canonical Factorization
  • 7.5 The Continuous-Time Causal Wiener Filter
  • 7.6 The Discrete-Time Causal Wiener Filter
  • 7.7 The Causal Wiener Filter and the Kalman Filter
  • 7.8 The Non-Causal Wiener Filter and the Coherence Function
  • 7.9 Generalized Cross-Correlation and Time-Delay Estimation
  • 7.10 Random Fields
  • 8 PSD Estimation and Signal Models
  • 8.1 Introduction
  • 8.2 Ergodicity
  • 8.3 Sample Estimates of Mean and Correlation Functions
  • 8.4 The Periodogram
  • 8.5 Statistical Properties of the Periodogram
  • 8.6 Reducing the Periodogram Variance
  • 8.7 The Multitaper Method
  • 8.8 Example Applications of Classical Spectral Estimation
  • 8.9 Minimum Variance Distortionless Spectral Estimator
  • 8.10 Autoregressive Moving Average (ARMA) Signal Models
  • 8.11 Autoregressive Signal Models
  • 8.12 Maximum Entropy and the AR(P) Process
  • 8.13 Spectral Flatness and the AR(P) Process
  • 8.14 AR(P) Process Examples
  • 8.15 The Levinson-Durbin Algorithm
  • 8.16 The Relationship Between MVD and AR Spectra
  • 8.17 AR Model of a Zero-Mean WSS Random Signal
  • 8.18 AR Model of a Complex Sinusoid in White Noise
  • 8.19 AR Model of Multiple Complex Sinusoids in White Noise
  • 8.20 Resolution of AR Models
  • 8.21 AR Model Parameter Estimation
  • 8.22 AR Parameter Estimation: Auto-Correlation Method
  • 8.23 AR Parameter Estimation: Covariance Method
  • 8.24 Model Order Selection
  • 8.25 Akaike Information Criterion
  • 8.26 Bayesian Model Order Selection
  • 8.27 Minimum Description Length
  • 9 Linear Prediction
  • 9.1 Introduction
  • 9.2 The Discrete Time FIR Wiener Filter
  • 9.3 The Forward Prediction Problem
  • 9.4 The Backward Prediction Problem
  • 9.5 Prediction Error Sequences and Partial Correlations
  • 9.6 Lattice Filters
  • 9.7 The Minimum Phase Property of the Forward PEF
  • 9.8 AR Parameter Estimation: the Burg Method
  • 9.9 Linear Prediction and Speech Recognition
  • 10 Adaptive Filters
  • 10.1 Introduction
  • 10.2 The LMS Algorithm
  • 10.3 Complex LMS
  • 10.4 Sign Adaptive LMS Algorithms
  • 10.5 Normalized LMS Algorithm
  • 10.6 Equalizing LMS Convergence Rates
  • 10.7 Recursive Least Squares (RLS)
  • 10.8 RLS Implementation
  • 11 Optimal Processing of Linear Arrays
  • 11.1 Uniform Linear Array (ULA)
  • 11.2 The Signal Model on a ULA
  • 11.3 Beamforming
  • 11.4 Optimal Beamforming
  • 11.5 Performance of the Optimal Beamformer
  • 11.6 Optimal Beamforming in Practice
  • 11.7 Recursive Methods in SMI Beamforming
  • 11.8 PCA and Dominant Mode Rejection (DMR) Beamforming
  • 11.9 Direction of Arrival (DOA) Estimation
  • 12 Neural Networks
  • 12.1 Introduction
  • 12.2 The Perceptron
  • 12.3 Fully Connected Feed Forward Neural Networks
  • 12.4 The Backpropagation Algorithm
  • 12.5 Loss Functions in Neural Network Training
  • 12.6 Gradient Descent Variants
  • 12.7 Single Hidden Layer and Multiple Hidden Layers
  • 12.8 Mini-Batch Training and Normalization
  • 12.9 Network Initialization
  • 12.10 Regularization
  • 12.11 Convolutional Neural Networks (CNNs)
  • 12.12 Time Series Classification with a CNN
  • 12.13 Image Classification with a CNN
  • 12.14 Recurrent Neural Networks (RNNs)
  • 12.15 Unsupervised Learning
  • 12.16 Generative Adversarial Networks
  • 12.17 Perspective
  • References
  • Index.