Invariant Features and Enhanced Speaker Normalization for Automatic Speech Recognition

Author: Florian Müller
Publisher: Logos Verlag Berlin GmbH
Total Pages: 247
Release: 2013
Genre: Computers
ISBN: 3832533192


Download Invariant Features and Enhanced Speaker Normalization for Automatic Speech Recognition Book in PDF, Epub and Kindle

Automatic speech recognition systems have to handle various kinds of variability sufficiently well in order to achieve high recognition rates in practice. One of the factors with a major impact on performance is the vocal tract length of the speakers. Normalization of the features and adaptation of the acoustic models are commonly used methods in speech recognition systems. In contrast, a third approach follows the idea of extracting features with transforms that are invariant to vocal tract length changes. This work presents several approaches for extracting invariant features for automatic speech recognition systems. The robustness of these features under various training-test conditions is evaluated, and it is described how their robustness to noise can be increased. Furthermore, it is shown how the spectral effects of different vocal tract lengths can be estimated with a registration method and how this estimate can be used for speaker normalization.
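
As a rough illustration of the feature-normalization route that the description contrasts with invariant features, here is a minimal Python sketch of piecewise-linear vocal tract length normalization (VTLN) frequency warping. The warp factor, cut-off frequency, and helper names are illustrative assumptions, not the method developed in the book.

```python
import numpy as np

def warp_frequency_axis(freqs_hz, alpha, f_cut=4800.0, f_max=8000.0):
    """Piecewise-linear VTLN warp: scale frequencies below f_cut by alpha,
    then map the remainder linearly so that f_max stays fixed."""
    freqs = np.asarray(freqs_hz, dtype=float)
    return np.where(
        freqs <= f_cut,
        alpha * freqs,
        alpha * f_cut + (f_max - alpha * f_cut) * (freqs - f_cut) / (f_max - f_cut),
    )

def normalize_spectrum(spectrum, alpha, sample_rate=16000.0):
    """Resample a magnitude spectrum onto a warped frequency axis so that
    spectra from speakers with different vocal tract lengths become more comparable."""
    n_bins = len(spectrum)
    freqs = np.linspace(0.0, sample_rate / 2.0, n_bins)
    warped = warp_frequency_axis(freqs, alpha, f_max=sample_rate / 2.0)
    # Interpolate the original spectrum at the warped frequencies.
    return np.interp(warped, freqs, spectrum)

# Example: warp one frame's spectrum with a warp factor that, in a real system,
# would be estimated per speaker (e.g. by a maximum-likelihood grid search).
spec = np.abs(np.fft.rfft(np.random.randn(512)))
spec_norm = normalize_spectrum(spec, alpha=0.92)
```

In practice the warp factor alpha is chosen per speaker or per utterance; the invariant-feature approach described above avoids this estimation step altogether.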

Automatic Speech and Speaker Recognition

Author: Joseph Keshet
Publisher: John Wiley & Sons
Total Pages: 268
Release: 2009-04-27
Genre: Technology & Engineering
ISBN: 9780470742037


Download Automatic Speech and Speaker Recognition Book in PDF, Epub and Kindle

This book discusses large margin and kernel methods for speech and speaker recognition. Speech and Speaker Recognition: Large Margin and Kernel Methods is a collation of research on recent advances in large margin and kernel methods, as applied to the field of speech and speaker recognition. It presents theoretical and practical foundations of these methods, from support vector machines to large margin methods for structured learning. It also provides examples of large margin based acoustic modelling for continuous speech recognizers, where the grounds for practical large margin sequence learning are set. Large margin methods for discriminative language modelling and text-independent speaker verification are also addressed in this book.

Key Features:
- Provides an up-to-date snapshot of the current state of research in this field
- Covers important aspects of extending the binary support vector machine to speech and speaker recognition applications
- Discusses large margin and kernel method algorithms for sequence prediction required for acoustic modeling
- Reviews past and present work on discriminative training of language models, and describes different large margin algorithms for the application of part-of-speech tagging
- Surveys recent work on the use of kernel approaches to text-independent speaker verification, and introduces the main concepts and algorithms
- Surveys recent work on kernel approaches to learning a similarity matrix from data

This book will be of interest to researchers, practitioners, engineers, and scientists in speech processing and machine learning fields.
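
For readers unfamiliar with the binary large-margin formulation the chapters build on, below is a minimal, hedged sketch of an RBF-kernel SVM used as a toy text-independent speaker verifier on fixed-length utterance embeddings, using scikit-learn. The methods in the book go further (sequence kernels, structured learning); all data, thresholds, and names here are synthetic illustrations.

```python
import numpy as np
from sklearn.svm import SVC

# Toy data: one fixed-length embedding per utterance (e.g., averaged cepstra).
rng = np.random.default_rng(0)
target = rng.normal(loc=0.5, size=(50, 20))      # utterances from the claimed speaker
impostor = rng.normal(loc=-0.5, size=(200, 20))  # utterances from other speakers

X = np.vstack([target, impostor])
y = np.concatenate([np.ones(len(target)), np.zeros(len(impostor))])

# RBF-kernel SVM: accept the identity claim when the margin score exceeds a threshold.
verifier = SVC(kernel="rbf", C=1.0, gamma="scale")
verifier.fit(X, y)

test_utterance = rng.normal(loc=0.5, size=(1, 20))
score = verifier.decision_function(test_utterance)[0]
print("accept" if score > 0.0 else "reject", score)
```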

Acoustical and Environmental Robustness in Automatic Speech Recognition

Author: Alex Acero
Publisher: Springer Science & Business Media
Total Pages: 216
Release: 1992-11-30
Genre: Technology & Engineering
ISBN: 9780792392842


Download Acoustical and Environmental Robustness in Automatic Speech Recognition Book in PDF, Epub and Kindle

The need for automatic speech recognition systems to be robust with respect to changes in their acoustical environment has become more widely appreciated in recent years, as more systems are finding their way into practical applications. Although the issue of environmental robustness has received only a small fraction of the attention devoted to speaker independence, even speech recognition systems that are designed to be speaker independent frequently perform very poorly when they are tested using a different type of microphone or acoustical environment from the one with which they were trained. The use of microphones other than a "close-talking" headset also tends to severely degrade speech recognition performance. Even in relatively quiet office environments, speech is degraded by additive noise from fans, slamming doors, and other conversations, as well as by the effects of unknown linear filtering arising from reverberation due to surface reflections in a room, or from spectral shaping by microphones or the vocal tracts of individual speakers. Speech recognition systems designed for long-distance telephone lines, or applications deployed in more adverse acoustical environments such as motor vehicles, factory floors, or outdoors, demand far greater degrees of environmental robustness. There are several different ways of building acoustical robustness into speech recognition systems. Arrays of microphones can be used to develop a directionally sensitive system that resists interference from competing talkers and other noise sources that are spatially separated from the source of the desired speech signal.
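
To make the effect of unknown linear filtering concrete, here is a small Python sketch of cepstral (log-spectral) mean normalization, a standard baseline for channel compensation. It is not the specific compensation method developed in this book, and the helper names are assumptions.

```python
import numpy as np

def log_spectral_features(frames):
    """Stand-in for a real front end: log magnitude spectra per frame."""
    return np.log(np.abs(np.fft.rfft(frames, axis=1)) + 1e-10)

def cepstral_mean_normalize(features):
    """Subtract the per-utterance mean in the log-spectral/cepstral domain.
    A fixed linear channel adds a constant log|H(f)| to every frame, so the
    subtraction removes it (additive noise needs other techniques)."""
    return features - features.mean(axis=0, keepdims=True)

# Toy example: the same frames seen through two different "microphones".
rng = np.random.default_rng(1)
frames = rng.normal(size=(100, 400))
clean = log_spectral_features(frames)
channel = np.linspace(0.5, -0.5, clean.shape[1])   # hypothetical channel log-response
filtered = clean + channel                         # the channel is additive in the log domain

print(np.allclose(cepstral_mean_normalize(clean),
                  cepstral_mean_normalize(filtered)))   # True: channel effect removed
```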

New Era for Robust Speech Recognition

Author: Shinji Watanabe
Publisher: Springer
Total Pages: 433
Release: 2017-10-30
Genre: Computers
ISBN: 331964680X


Download New Era for Robust Speech Recognition Book in PDF, Epub and Kindle

This book covers the state-of-the-art in deep neural-network-based methods for noise robustness in distant speech recognition applications. It provides insights and detailed descriptions of some of the new concepts and key technologies in the field, including novel architectures for speech enhancement, microphone arrays, robust features, acoustic model adaptation, training data augmentation, and training criteria. The contributed chapters also include descriptions of real-world applications, benchmark tools and datasets widely used in the field. This book is intended for researchers and practitioners working in the field of speech processing and recognition who are interested in the latest deep learning techniques for noise robustness. It will also be of interest to graduate students in electrical engineering or computer science, who will find it a useful guide to this field of research.
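
As one concrete instance of the training data augmentation mentioned above, the following is a minimal Python sketch of mixing noise into clean speech at target signal-to-noise ratios. The function name and the chosen SNR values are illustrative assumptions rather than a recipe from the book.

```python
import numpy as np

def add_noise_at_snr(speech, noise, snr_db):
    """Mix a noise recording into clean speech at a target SNR (in dB),
    a common augmentation for training noise-robust acoustic models."""
    # Tile or trim the noise to the speech length.
    reps = int(np.ceil(len(speech) / len(noise)))
    noise = np.tile(noise, reps)[: len(speech)]
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    # Scale the noise so that 10*log10(speech_power / scaled_noise_power) == snr_db.
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10.0)))
    return speech + scale * noise

# Each clean utterance is replicated at several SNRs to enlarge the training set.
rng = np.random.default_rng(2)
clean = rng.normal(size=16000)   # stand-in for one second of speech at 16 kHz
babble = rng.normal(size=8000)   # stand-in for a noise recording
augmented = [add_noise_at_snr(clean, babble, snr) for snr in (20, 10, 5, 0)]
```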

Diphone-Based Speech Recognition Using Neural Networks

Author: Mark E. Cantrell
Publisher:
Total Pages: 357
Release: 1996-06-01
Genre: Automatic speech recognition
ISBN: 9781423584988


Download Diphone-Based Speech Recognition Using Neural Networks Book in PDF, Epub and Kindle

Speaker-independent automatic speech recognition (ASR) is a problem of long-standing interest to the Department of Defense. Unfortunately, existing systems are still too limited in capability for many military purposes. Most large-vocabulary systems use phonemes (individual speech sounds, including vowels and consonants) as recognition units. This research explores the use of diphones (pairings of phonemes) as recognition units. Diphones are acoustically easier to recognize because coarticulation effects between the diphone's phonemes become recognition features, rather than confounding variables as in phoneme recognition. Also, diphones carry more information than phonemes, giving the lexical analyzer two chances to detect every phoneme in the word. Research results confirm these theoretical advantages. In testing with 4490 speech samples from 163 speakers, 70.2% of 157 test diphones were correctly identified by one trained neural network. In the same tests, the correct diphone was one of the top three outputs 89.0% of the time. During word recognition tests, the correct word was detected 85% of the time in continuous speech. Of those detections, the correct diphone was ranked first 41.6% of the time and among the top six 74% of the time. In addition, new methods of pitch-based frequency normalization and network feedback-based time alignment are introduced. Both of these techniques improved recognition accuracy on male and female speech samples from all eight dialect regions in the U.S. In one test set, frequency normalization reduced errors by 34%. Similarly, feedback-based time alignment reduced another network's test-set errors from 32.8% to 11.0%.
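
To illustrate why every interior phoneme is covered twice, here is a tiny, hypothetical Python sketch of turning a word's phoneme sequence into diphone units; the symbol set and boundary marker are assumptions, not the thesis's actual lexicon format.

```python
def phonemes_to_diphones(phonemes, boundary="sil"):
    """Turn a word's phoneme sequence into overlapping diphone units.
    Every interior phoneme appears in two diphones, which is what gives the
    lexical analyzer two chances to detect it."""
    padded = [boundary] + list(phonemes) + [boundary]
    return [f"{a}-{b}" for a, b in zip(padded[:-1], padded[1:])]

# Hypothetical lexicon entry for "speech" (ARPAbet-style symbols).
print(phonemes_to_diphones(["s", "p", "iy", "ch"]))
# ['sil-s', 's-p', 'p-iy', 'iy-ch', 'ch-sil']
```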

Fundamentals of Speaker Recognition

Author: Homayoon Beigi
Publisher: Springer Science & Business Media
Total Pages: 984
Release: 2011-12-09
Genre: Technology & Engineering
ISBN: 0387775927


Download Fundamentals of Speaker Recognition Book in PDF, Epub and Kindle

An emerging technology, Speaker Recognition is becoming well known for providing voice authentication over the telephone for helpdesks, call centres, and other enterprises seeking business process automation. "Fundamentals of Speaker Recognition" introduces Speaker Identification, Speaker Verification, Speaker (Audio Event) Classification, Speaker Detection, Speaker Tracking and more. The technical problems are rigorously defined, and a complete picture is given of the relevance of the discussed algorithms and their usage in building a comprehensive Speaker Recognition System. Designed as a textbook with examples and exercises at the end of each chapter, "Fundamentals of Speaker Recognition" is suitable for advanced-level students in computer science and engineering concentrating on biometrics, speech recognition, pattern recognition, signal processing and, specifically, speaker recognition. It is also a valuable reference for developers of commercial technology and for speech scientists.

Automatic Speech and Speaker Recognition

Author: N. Rex Dixon
Publisher: John Wiley & Sons
Total Pages: 433
Release: 1979
Genre: Automatic speech recognition
ISBN: 9780471058335


Download Automatic Speech and Speaker Recognition Book in PDF, Epub and Kindle

A book of selected reprints. Includes a chapter on automatic speech recognition.

Speech Normalization and Data Augmentation Techniques Based on Acoustical and Physiological Constraints and Their Applications to Child Speech Recognition

Author: Gary Joseph Yeung
Publisher:
Total Pages: 90
Release: 2021
Genre:
ISBN:


Download Speech Normalization and Data Augmentation Techniques Based on Acoustical and Physiological Constraints and Their Applications to Child Speech Recognition Book in PDF, Epub and Kindle

Recently, adult automatic speech recognition (ASR) system performance has improved dramatically. In contrast, the performance of child ASR systems remains inadequate in an era where demand for child speech technology is on the rise. While adult speech data is abundant, publicly available child speech data is sparse due, in part, to privacy concerns. Hence, many child ASR systems are trained using adult speech data. However, child ASR systems perform poorly when trained on adult speech because of the acoustic mismatch that results from differences in body size, especially in the vocal folds and the vocal tract, as well as the high variability of child speech. This research analyzes the acoustical properties of child speech across various ages and compares them to the acoustic properties of adult speech. Specifically, the subglottal resonances (SGRs), fundamental frequency (fo), and formant frequencies of vowel productions are investigated. These acoustic features are shown to be capable of predicting acoustic structure across speakers. As such, we propose feature extraction methods utilizing these properties to normalize the acoustic structure across speakers and reduce the acoustic mismatch between adult and child speech. This allows child ASR systems to leverage adult data for training and suggests a framework for a universal ASR system that need not be adult or child dependent. Furthermore, we demonstrate that when child speech data is limited, these feature normalization methods are capable of producing significant improvements in child ASR for both Gaussian mixture model (GMM) and deep neural network (DNN)-based systems.
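
As a rough illustration of formant-based speaker normalization in the spirit of this work (though not its actual SGR-based method), below is a minimal Python sketch that estimates a per-speaker scale factor by least squares against hypothetical adult reference formants and uses it to compress a child speaker's formant space.

```python
import numpy as np

# Hypothetical reference formants (Hz) for an average adult vowel; real systems
# would use subglottal resonances or statistics over many vowels and speakers.
ADULT_REF_FORMANTS = np.array([660.0, 1720.0, 2410.0])

def estimate_warp_factor(speaker_formants, reference=ADULT_REF_FORMANTS):
    """Least-squares scale alpha that maps the speaker's formants onto the
    reference: minimize ||alpha * speaker - reference||^2."""
    speaker = np.asarray(speaker_formants, dtype=float)
    return float(np.dot(speaker, reference) / np.dot(speaker, speaker))

def normalize_formants(speaker_formants):
    alpha = estimate_warp_factor(speaker_formants)
    return alpha * np.asarray(speaker_formants, dtype=float), alpha

child_formants = [900.0, 2300.0, 3200.0]        # higher than the adult reference
normalized, alpha = normalize_formants(child_formants)
print(alpha, normalized)   # alpha < 1 shifts the child's formants toward adult values
```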