Tools for Signal Compression

This book presents the tools and algorithms required to compress and decompress signals such as speech and music. These algorithms are widely used in mobile phones, DVD players, HDTV sets, etc. In a first, rather theoretical part, the book presents the standard tools used in compression systems: scalar and...

Bibliographic Details
Classification: Electronic book
Main author: Moreau, Nicolas, 1945-
Format: Electronic eBook
Language: English, French
Published: London : ISTE ; Hoboken, N.J. : Wiley, 2011.
Series: ISTE publications.
Subjects:
Online access: Full text (requires prior registration with an institutional email address)
Table of Contents:
  • pt. 1 TOOLS FOR SIGNAL COMPRESSION
  • ch. 1 Scalar Quantization
  • 1.1. Introduction
  • 1.2. Optimum scalar quantization
  • 1.2.1. Necessary conditions for optimization
  • 1.2.2. Quantization error power
  • 1.2.3. Further information
  • 1.2.3.1. Lloyd-Max algorithm
  • 1.2.3.2. Non-linear transformation
  • 1.2.3.3. Scale factor
  • 1.3. Predictive scalar quantization
  • 1.3.1. Principle
  • 1.3.2. Reminders on the theory of linear prediction
  • 1.3.2.1. Introduction: least squares minimization
  • 1.3.2.2. Theoretical approach
  • 1.3.2.3. Comparing the two approaches
  • 1.3.2.4. Whitening filter
  • 1.3.2.5. Levinson algorithm
  • 1.3.3. Prediction gain
  • 1.3.3.1. Definition
  • 1.3.4. Asymptotic value of the prediction gain
  • 1.3.5. Closed-loop predictive scalar quantization
  • ch. 2 Vector Quantization
  • 2.1. Introduction
  • 2.2. Rationale
  • 2.3. Optimum codebook generation
  • 2.4. Optimum quantizer performance
  • 2.5. Using the quantizer
  • 2.5.1. Tree-structured vector quantization
  • 2.5.2. Cartesian product vector quantization
  • 2.5.3. Gain-shape vector quantization
  • 2.5.4. Multistage vector quantization
  • 2.5.5. Vector quantization by transform
  • 2.5.6. Algebraic vector quantization
  • 2.6. Gain-shape vector quantization
  • 2.6.1. Nearest neighbor rule
  • 2.6.2. Lloyd-Max algorithm
  • ch. 3 Sub-band Transform Coding
  • 3.1. Introduction
  • 3.2. Equivalence of filter banks and transforms
  • 3.3. Bit allocation
  • 3.3.1. Defining the problem
  • 3.3.2. Optimum bit allocation
  • 3.3.3. Practical algorithm
  • 3.3.4. Further information
  • 3.4. Optimum transform
  • 3.5. Performance
  • 3.5.1. Transform gain
  • 3.5.2. Simulation results
  • ch. 4 Entropy Coding
  • 4.1. Introduction
  • 4.2. Noiseless coding of discrete, memoryless sources
  • 4.2.1. Entropy of a source
  • 4.2.2. Coding a source
  • 4.2.2.1. Definitions
  • 4.2.2.2. Uniquely decodable instantaneous code
  • 4.2.2.3. Kraft inequality
  • 4.2.2.4. Optimal code
  • 4.2.3. Theorem of noiseless coding of a memoryless discrete source
  • 4.2.3.1. Proposition 1
  • 4.2.3.2. Proposition 2
  • 4.2.3.3. Proposition 3
  • 4.2.3.4. Theorem
  • 4.2.4. Constructing a code
  • 4.2.4.1. Shannon code
  • 4.2.4.2. Huffman algorithm
  • 4.2.4.3. Example 1
  • 4.2.5. Generalization
  • 4.2.5.1. Theorem
  • 4.2.5.2. Example 2
  • 4.2.6. Arithmetic coding
  • 4.3. Noiseless coding of a discrete source with memory
  • 4.3.1. New definitions
  • 4.3.2. Theorem of noiseless coding of a discrete source with memory
  • 4.3.3. Example of a Markov source
  • 4.3.3.1. General details
  • 4.3.3.2. Example of transmitting documents by fax
  • 4.4. Scalar quantizer with entropy constraint
  • 4.4.1. Introduction
  • 4.4.2. Lloyd-Max quantizer
  • 4.4.3. Quantizer with entropy constraint
  • 4.4.3.1. Expression for the entropy
  • 4.4.3.2. Jensen inequality
  • 4.4.3.3. Optimum quantizer
  • 4.4.3.4. Gaussian source
  • 4.5. Capacity of a discrete memoryless channel
  • 4.5.1. Introduction
  • 4.5.2. Mutual information
  • 4.5.3. Noisy-channel coding theorem
  • 4.5.4. Example: symmetrical binary channel
  • 4.6. Coding a discrete source with a fidelity criterion
  • 4.6.1. Problem
  • 4.6.2. Rate-distortion function
  • 4.6.3. Theorems
  • 4.6.3.1. Source coding theorem
  • 4.6.3.2. Combined source-channel coding
  • 4.6.4. Special case: quadratic distortion measure
  • 4.6.4.1. Shannon's lower bound for a memoryless source
  • 4.6.4.2. Source with memory
  • 4.6.5. Generalization
  • pt. 2 AUDIO SIGNAL APPLICATIONS
  • ch. 5 Introduction to Audio Signals
  • 5.1. Speech signal characteristics
  • 5.2. Characteristics of music signals
  • 5.3. Standards and recommendations
  • 5.3.1. Telephone-band speech signals
  • 5.3.1.1. Public telephone network
  • 5.3.1.2. Mobile communication
  • 5.3.1.3. Other applications
  • 5.3.2. Wideband speech signals
  • 5.3.3. High-fidelity audio signals
  • 5.3.3.1. MPEG-1
  • 5.3.3.2. MPEG-2
  • 5.3.3.3. MPEG-4
  • 5.3.3.4. MPEG-7 and MPEG-21
  • 5.3.4. Evaluating the quality
  • ch. 6 Speech Coding
  • 6.1. PCM and ADPCM coders
  • 6.2. The 2.4 bit/s LPC-10 coder
  • 6.2.1. Determining the filter coefficients
  • 6.2.2. Unvoiced sounds
  • 6.2.3. Voiced sounds
  • 6.2.4. Determining voiced and unvoiced sounds
  • 6.2.5. Bit rate constraint
  • 6.3. The CELP coder
  • 6.3.1. Introduction
  • 6.3.2. Determining the synthesis filter coefficients
  • 6.3.3. Modeling the excitation
  • 6.3.3.1. Introducing a perceptual factor
  • 6.3.3.2. Selecting the excitation model
  • 6.3.3.3. Filtered codebook
  • 6.3.3.4. Least squares minimization
  • 6.3.3.5. Standard iterative algorithm
  • 6.3.3.6. Choosing the excitation codebook
  • 6.3.3.7. Introducing an adaptive codebook
  • 6.3.4. Conclusion
  • ch. 7 Audio Coding
  • 7.1. Principles of "perceptual coders"
  • 7.2. MPEG-1 layer 1 coder
  • 7.2.1. Time/frequency transform
  • 7.2.2. Psychoacoustic modeling and bit allocation
  • 7.2.3. Quantization
  • 7.3. MPEG-2 AAC coder
  • 7.4. Dolby AC-3 coder
  • 7.5. Psychoacoustic model: calculating a masking threshold
  • 7.5.1. Introduction
  • 7.5.2. The ear
  • 7.5.3. Critical bands
  • 7.5.4. Masking curves
  • 7.5.5. Masking threshold
  • ch. 8 Audio Coding: Additional Information
  • 8.1. Low bit rate/acceptable quality coders
  • 8.1.1. Tool one: SBR
  • 8.1.2. Tool two: PS
  • 8.1.2.1. Historical overview
  • 8.1.2.2. Principle of PS audio coding
  • 8.1.2.3. Results
  • 8.1.3. Sound space perception
  • 8.2. High bit rate lossless or almost lossless coders
  • 8.2.1. Introduction
  • 8.2.2. ISO/IEC MPEG-4 standardization
  • 8.2.2.1. Principle
  • 8.2.2.2. Some details
  • ch. 9 Stereo Coding: A Synthetic Presentation
  • 9.1. Basic hypothesis and notation
  • 9.2. Determining the inter-channel indices
  • 9.2.1. Estimating the power and the intercovariance
  • 9.2.2. Calculating the inter-channel indices
  • 9.2.3. Conclusion
  • 9.3. Downmixing procedure
  • 9.3.1. Development in the time domain
  • 9.3.2. In the frequency domain
  • 9.4. At the receiver
  • 9.4.1. Stereo signal reconstruction
  • 9.4.2. Power adjustment
  • 9.4.3. Phase alignment
  • 9.4.4. Information transmitted via the channel
  • 9.5. Draft International Standard
  • pt. 3 MATLAB® PROGRAMS
  • ch. 10 A Speech Coder
  • 10.1. Introduction
  • 10.2. Script for the calling function
  • 10.3. Script for called functions
  • ch. 11 A Music Coder
  • 11.1. Introduction
  • 11.2. Script for the calling function
  • 11.3. Script for called functions