|
|
|
|
LEADER |
00000nam a2200000 i 4500 |
001 |
MGH_AEACE20102002 |
003 |
IN-ChSCO |
005 |
20210416124538.0 |
006 |
m||||||||||||||||| |
007 |
cr |n||||||||n |
008 |
210416s2020||||nyu|||||o|||||||||||eng|| |
010 |
|
|
|z 2020018309
|
020 |
|
|
|a 9781260458947 (e-ISBN)
|
020 |
|
|
|a 1260458946 (e-ISBN)
|
020 |
|
|
|a 9781260458930 (print-ISBN)
|
020 |
|
|
|a 1260458938 (print-ISBN)
|
035 |
|
|
|a (OCoLC)1202624902
|
040 |
|
|
|a IN-ChSCO
|b eng
|e rda
|
041 |
0 |
|
|a eng
|
050 |
|
4 |
|a TK5102.9
|
072 |
|
7 |
|a TEC
|x 007000
|2 bisacsh
|
082 |
0 |
4 |
|a 621.382/2
|2 23
|
100 |
1 |
|
|a Najmi, Amir-Homayoon,
|e author.
|
245 |
1 |
0 |
|a Advanced Signal Processing :
|b A Concise Guide /
|c Amir-Homayoon Najmi, Todd K. Moon.
|
250 |
|
|
|a First edition.
|
264 |
|
1 |
|a New York, N.Y. :
|b McGraw-Hill Education,
|c [2020]
|
264 |
|
4 |
|c ©2020
|
300 |
|
|
|a 1 online resource (353 pages) :
|b illustrations.
|
336 |
|
|
|a text
|2 rdacontent
|
337 |
|
|
|a computer
|2 rdamedia
|
338 |
|
|
|a online resource
|2 rdacarrier
|
490 |
1 |
|
|a McGraw-Hill's AccessEngineering
|
504 |
|
|
|a Includes bibliographical references and index.
|
505 |
0 |
|
|a Cover --
|t About the Authors --
|t Title Page --
|t Copyright Page --
|t Dedication --
|t Contents --
|t List of Figures --
|t List of Tables --
|t Acronyms --
|t Preface --
|t Acknowledgments --
|t 1 Mathematical Structures of Signal Spaces --
|t 1.1 Introduction --
|t 1.2 Vector Spaces, Norms, and Inner Products --
|t 1.3 Orthonormal Vectors and the Gram-Schmidt Method --
|t 1.4 Complete and Orthonormal Bases --
|t 1.5 Linear Operators in Function Spaces --
|t 1.6 Matrix Determinant, Eigenvectors, and Eigenvalues --
|t 1.7 Matrix Norms --
|t 1.8 Solutions to Ax = b --
|t 1.9 Projections in a Hilbert Space --
|t 1.10 The Prolate Spheroidal Functions --
|t 1.11 The Approximation Problem and the Orthogonality Principle --
|t 1.12 The Haar Wavelet --
|t 1.13 MRA Subspaces and Discrete Orthogonal Wavelet Bases --
|t 1.14 Compressive Sensing --
|t 2 Matrix Factorizations and the Least Squares Problem --
|t 2.1 Introduction --
|t 2.2 QR Factorization --
|t 2.3 QR Factorization Using Givens Rotations --
|t 2.4 QR Using Householder Reflections --
|t 2.5 QR Factorization and Full Rank Least Squares --
|t 2.6 Cholesky Factorization and Full Rank Least Squares --
|t 2.7 Singular Value Decomposition (SVD) --
|t 2.8 SVD and Reduced Rank Approximation --
|t 2.9 SVD and Matrix Subspaces --
|t 2.10 SVD: Full Rank Least Squares and Minimum Norm Solutions --
|t 2.11 Total Least Squares --
|t 2.12 SVD and the Orthogonal Procrustes Problem --
|t 3 Linear Time-Invariant Systems and Transforms --
|t 3.1 Introduction --
|t 3.2 The Laplace Transform --
|t 3.3 Phase and Group Delay Response: Continuous Time --
|t 3.4 The Z Transform --
|t 3.5 Phase and Group Delay Response: Discrete Time --
|t 3.6 Minimum Phase and Front Loading Property --
|t 3.7 The Fourier Transform --
|t 3.8 The Short-Time Fourier Transform and the Spectrogram --
|t 3.9 The Discrete Time Fourier Transform --
|t 3.10 The Chirp Z Transform --
|t 3.11 Finite Convolutions --
|t 3.12 The Cepstrum --
|t 3.13 The Orthogonal Discrete Wavelet Transform --
|t 3.14 The Hilbert Transform Relations --
|t 3.15 The Analytic Signal and Instantaneous Frequency --
|t 3.16 Time-Frequency Distribution Functions --
|t 4 Least Squares Filters --
|t 4.1 Introduction --
|t 4.2 Quadratic Minimization Problems --
|t 4.3 Frequency Domain Least Squares Filters --
|t 4.4 Time Domain Least Squares Shaping Filters --
|t 4.5 Gradient Descent Iterative Solution to Least Squares Filtering --
|t 4.6 Time Delay Estimation --
|t 5 Random Variables and Estimation Theory --
|t 5.1 Real Random Variables and Random Vectors --
|t 5.2 Complex Random Variables and Random Vectors --
|t 5.3 Random Processes --
|t 5.4 Gaussian Random Variables and Random Vectors --
|t 5.5 Gram-Schmidt Decorrelation --
|t 5.6 Principal Components Analysis --
|t 5.7 The Karhunen-Loève Transformation --
|t 5.8 Statistical Properties of the Least Squares Filter --
|t 5.9 Estimation of Random Variables --
|t 5.10 Jointly Gaussian Random Vectors, the Conditional Mean and Covariance --
|t 5.11 The Conditional Mean and the Linear Model --
|t 5.12 The Kalman Filter --
|t 5.13 Parameter Estimation and the Cramér-Rao Lower Bound --
|t 5.14 Linear MVU and Maximum Likelihood Estimators --
|t 5.15 MLE of the Parameter Vector of a Linear Model --
|t 5.16 MLE of Complex Amplitude of a Complex Sinusoid in Gaussian Noise --
|t 5.17 MLE of a First Order Gaussian Markov Process --
|t 5.18 Information Theory: Entropy and Mutual Information --
|t 5.19 Independent Components Analysis --
|t 5.20 Maximum Likelihood ICA --
|t 6 WSS Random Processes --
|t 6.1 Auto-Correlation and the Power Spectral Density --
|t 6.2 Complex Sinusoids in Zero-Mean White Noise --
|t 6.3 The MUSIC Algorithm --
|t 6.4 Pisarenko Harmonic Decomposition (PHD) --
|t 6.5 The ESPRIT Algorithm --
|t 6.6 The Auto-Correlation Matrix for Time Reversed Signal Vectors --
|t 7 Linear Systems and Stochastic Inputs --
|t 7.1 Filtered Random Processes --
|t 7.2 Detection of a Known Non-Random Signal in WSS Noise --
|t 7.3 Detection of a WSS Random Signal in WSS Random Noise --
|t 7.4 Canonical Factorization --
|t 7.5 The Continuous-Time Causal Wiener Filter --
|t 7.6 The Discrete-Time Causal Wiener Filter --
|t 7.7 The Causal Wiener Filter and the Kalman Filter --
|t 7.8 The Non-Causal Wiener Filter and the Coherence Function --
|t 7.9 Generalized Cross-Correlation and Time-Delay Estimation --
|t 7.10 Random Fields --
|t 8 PSD Estimation and Signal Models --
|t 8.1 Introduction --
|t 8.2 Ergodicity --
|t 8.3 Sample Estimates of Mean and Correlation Functions --
|t 8.4 The Periodogram --
|t 8.5 Statistical Properties of the Periodogram --
|t 8.6 Reducing the Periodogram Variance --
|t 8.7 The Multitaper Method --
|t 8.8 Example Applications of Classical Spectral Estimation --
|t 8.9 Minimum Variance Distortionless Spectral Estimator --
|t 8.10 Autoregressive Moving Average (ARMA) Signal Models --
|t 8.11 Autoregressive Signal Models --
|t 8.12 Maximum Entropy and the AR(P) Process --
|t 8.13 Spectral Flatness and the AR(P) Process --
|t 8.14 AR(P) Process Examples --
|t 8.15 The Levinson-Durbin Algorithm --
|t 8.16 The Relationship Between MVD and AR Spectra --
|t 8.17 AR Model of a Zero-Mean WSS Random Signal --
|t 8.18 AR Model of a Complex Sinusoid in White Noise --
|t 8.19 AR Model of Multiple Complex Sinusoids in White Noise --
|t 8.20 Resolution of AR Models --
|t 8.21 AR Model Parameter Estimation --
|t 8.22 AR Parameter Estimation: Auto-Correlation Method --
|t 8.23 AR Parameter Estimation: Covariance Method --
|t 8.24 Model Order Selection --
|t 8.25 Akaike Information Criterion --
|t 8.26 Bayesian Model Order Selection --
|t 8.27 Minimum Description Length --
|t 9 Linear Prediction --
|t 9.1 Introduction --
|t 9.2 The Discrete Time FIR Wiener Filter --
|t 9.3 The Forward Prediction Problem --
|t 9.4 The Backward Prediction Problem --
|t 9.5 Prediction Error Sequences and Partial Correlations --
|t 9.6 Lattice Filters --
|t 9.7 The Minimum Phase Property of the Forward PEF --
|t 9.8 AR Parameter Estimation: the Burg Method --
|t 9.9 Linear Prediction and Speech Recognition --
|t 10 Adaptive Filters --
|t 10.1 Introduction --
|t 10.2 The LMS Algorithm --
|t 10.3 Complex LMS --
|t 10.4 Sign Adaptive LMS Algorithms --
|t 10.5 Normalized LMS Algorithm --
|t 10.6 Equalizing LMS Convergence Rates --
|t 10.7 Recursive Least Squares (RLS) --
|t 10.8 RLS Implementation --
|t 11 Optimal Processing of Linear Arrays --
|t 11.1 Uniform Linear Array (ULA) --
|t 11.2 The Signal Model on a ULA --
|t 11.3 Beamforming --
|t 11.4 Optimal Beamforming --
|t 11.5 Performance of the Optimal Beamformer --
|t 11.6 Optimal Beamforming in Practice --
|t 11.7 Recursive Methods in SMI Beamforming --
|t 11.8 PCA and Dominant Mode Rejection (DMR) Beamforming --
|t 11.9 Direction of Arrival (DOA) Estimation --
|t 12 Neural Networks --
|t 12.1 Introduction --
|t 12.2 The Perceptron --
|t 12.3 Fully Connected Feed Forward Neural Networks --
|t 12.4 The Backpropagation Algorithm --
|t 12.5 Loss Functions in Neural Network Training --
|t 12.6 Gradient Descent Variants --
|t 12.7 Single Hidden Layer and Multiple Hidden Layers --
|t 12.8 Mini-Batch Training and Normalization --
|t 12.9 Network Initialization --
|t 12.10 Regularization --
|t 12.11 Convolutional Neural Networks (CNNs) --
|t 12.12 Time Series Classification with a CNN --
|t 12.13 Image Classification with a CNN --
|t 12.14 Recurrent Neural Networks (RNNs) --
|t 12.15 Unsupervised Learning --
|t 12.16 Generative Adversarial Networks --
|t 12.17 Perspective --
|t References --
|t Index.
|
520 |
0 |
|
|a "A comprehensive introduction to the mathematical principles and algorithms in statistical signal processing and modern neural networks. This text is an expanded version of a graduate course on advanced signal processing at the Johns Hopkins University Whiting School program for professionals, with students from electrical engineering, physics, computer and data science, and mathematics backgrounds. It covers the theory underlying applications in statistical signal processing, including spectral estimation, linear prediction, adaptive filters, and optimal processing of uniform spatial arrays. Unique among books on the subject, it also includes a comprehensive introduction to modern neural networks with examples in time series forecasting and image classification."--Publisher's description.
|
530 |
|
|
|a Also available in print edition.
|
533 |
|
|
|a Electronic reproduction.
|b New York, N.Y. :
|c McGraw Hill,
|d 2020.
|n Mode of access: World Wide Web.
|n System requirements: Web browser.
|n Access may be restricted to users at subscribing institutions.
|
538 |
|
|
|a Mode of access: Internet via World Wide Web.
|
546 |
|
|
|a In English.
|
588 |
|
|
|a Description based on e-Publication PDF.
|
650 |
|
7 |
|a TECHNOLOGY & ENGINEERING / Electrical
|2 bisacsh
|
650 |
|
0 |
|a Signal processing
|v Textbooks.
|
650 |
|
0 |
|a Signal processing.
|
655 |
|
0 |
|a Electronic books.
|
700 |
1 |
|
|a Moon, Todd K.,
|e author.
|
776 |
0 |
|
|i Print version:
|t Advanced Signal Processing : A Concise Guide.
|b First edition.
|d New York, N.Y. : McGraw-Hill Education, 2020
|w (OCoLC)1138607054
|
830 |
|
0 |
|a McGraw-Hill's AccessEngineering.
|
856 |
4 |
0 |
|u https://accessengineeringlibrary.uam.elogim.com/content/book/9781260458930
|z Full text
|