Prediction-driven Computational Auditory Scene Analysis
Author: Daniel P. W. Ellis
ISBN: 9780805827262
Author: Daniel P. W. Ellis
Total Pages: 180
Release: 1996
ISBN: 9780805827255
Author: David F. Rosenthal
Publisher: CRC Press
Total Pages: 417
Release: 2021-02-01
Genre: Technology & Engineering
ISBN: 1000149323
AI's interest in problems of understanding sound has a rich history, dating back to the ARPA Speech Understanding Project of the 1970s. While a great deal has been learned from this and subsequent speech-understanding research, the goal of building systems that can understand general acoustic signals--continuous speech and/or non-speech sounds--in unconstrained environments remains unrealized. Instead, there are now systems that understand "clean" speech well in relatively noiseless laboratory settings but break down in more realistic, noisier environments.

As the "cocktail-party effect" shows, humans and other mammals can selectively attend to sound from a particular source even when it is mixed with other sounds. Computers likewise need to decide which parts of a mixed acoustic signal are relevant to a particular purpose--which part should be interpreted as speech, and which as a door closing, an air conditioner humming, or another person interrupting. Observations such as these have led a number of researchers to conclude that research on speech understanding and on non-speech understanding needs to be united within a more general framework. Researchers have also begun to study computational auditory frameworks as parts of larger perception systems whose purpose is to give a computer integrated information about the real world; inspiration for this work ranges from research on integrating different sensors to models of how the human auditory apparatus works in concert with vision, proprioception, and other senses.

Representing some of the most advanced work on computers understanding speech, this collection of papers covers efforts to integrate speech and non-speech understanding in computer systems.
Author: David F. Rosenthal
Publisher: CRC Press
Total Pages: 414
Release: 2021-01-31
Genre: Technology & Engineering
ISBN: 100010611X
Author: Deliang Wang
Publisher: Wiley-IEEE Press
Total Pages: 432
Release: 2006-09-29
Genre: Medical
Provides a comprehensive and coherent account of the state of the art in CASA, in terms of the underlying principles, the algorithms and system architectures that are employed, and the potential applications of this exciting new technology.
Author: Martin Cooke
Total Pages: 212
Release: 1999
Author: Joanna Luberadzka
Release: 2023
Author: Tuomas Virtanen
Publisher: Springer
Total Pages: 417
Release: 2017-09-21
Genre: Technology & Engineering
ISBN: 331963450X
This book presents computational methods for extracting the useful information from audio signals, collecting the state of the art in the field of sound event and scene analysis. The authors cover the entire procedure for developing such methods, ranging from data acquisition and labeling, through the design of taxonomies used in the systems, to signal processing methods for feature extraction and machine learning methods for sound recognition. The book also covers advanced techniques for dealing with environmental variation and multiple overlapping sound sources, and taking advantage of multiple microphones or other modalities. The book gives examples of usage scenarios in large media databases, acoustic monitoring, bioacoustics, and context-aware devices. Graphical illustrations of sound signals and their spectrographic representations are presented, as well as block diagrams and pseudocode of algorithms.
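As a minimal illustration of the feature-extraction step such systems typically begin with (a sketch under simple assumptions, not code taken from the book), a log-magnitude spectrogram can be computed from a raw signal with plain NumPy; practical pipelines would usually add a mel filterbank and normalization on top:

```python
import numpy as np

def log_spectrogram(signal, frame_len=400, hop=160):
    """Frame a mono signal, apply a Hann window, and return a
    log-magnitude spectrogram of shape (n_frames, frame_len // 2 + 1)."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))  # magnitude spectrum per frame
    return np.log(mag + 1e-10)                 # log compression for dynamic range

# Example: one second of a 440 Hz tone sampled at 16 kHz
t = np.arange(16000) / 16000.0
spec = log_spectrogram(np.sin(2 * np.pi * 440 * t))
```

With 400-sample frames at 16 kHz, each frequency bin spans 40 Hz, so the 440 Hz tone peaks at bin 11.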
Author: International Joint Conferences on Artificial Intelligence
Publisher: Morgan Kaufmann
Total Pages: 1720
Release: 1997
Genre: Artificial intelligence
ISBN: 9781558604803
Author: Soundararajan Srinivasan
Total Pages: 186
Release: 2006
Genre: Automatic speech recognition
Abstract: Speech perception studies indicate that the robustness of human speech recognition is primarily due to our ability to segregate a target sound source from interference. This perceptual process of auditory scene analysis (ASA) is of two types: primitive and schema-driven. This dissertation investigates several aspects of integrating computational ASA (CASA) and automatic speech recognition (ASR): bottom-up CASA is used as a front-end for ASR to improve its robustness, while ASR provides top-down information to enhance primitive segregation.

Listeners are able to restore masked phonemes by utilizing lexical context. We present a schema-based model for phonemic restoration. The model employs missing-data ASR to decode masked speech and activates word templates via dynamic time warping. A systematic evaluation shows that the model restores both voiced and unvoiced phonemes with high spectral quality.

Missing-data ASR requires a binary mask from bottom-up CASA that identifies the speech-dominant time-frequency regions of a noisy mixture. We propose a two-pass system that performs segregation and recognition in tandem. First, an n-best lattice consistent with bottom-up speech separation is generated. Second, the lattice is re-scored using a model-based hypothesis test to improve mask estimation and recognition accuracy concurrently.

By combining CASA and ASR, we present a model that simulates listeners' ability to attend to a target speaker degraded by both energetic and informational masking: missing-data ASR accounts for energetic masking, and the degradation in the output of CASA models informational masking. The model successfully simulates several quantitative aspects of listener performance.

The degradation in the output of CASA-based front-ends leads to uncertain ASR inputs. We estimate feature uncertainties in the spectral domain and transform them into the cepstral domain via nonlinear regression. The estimated uncertainty substantially improves recognition accuracy. We also investigate the effect of vocabulary size on conventional and missing-data ASR. Based on binaural cues, we extract the speech signal with a Wiener filter for conventional ASR, and estimate a binary mask for missing-data ASR. We find that while missing-data ASR outperforms conventional ASR on a small-vocabulary task, the relative performance reverses on a larger-vocabulary task.
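The binary mask at the heart of the missing-data approach can be illustrated with the ideal binary mask, an oracle computed here from separately known target and noise spectrograms (an idealization; a CASA front-end such as the one in this dissertation must estimate the mask from the mixture alone, and the threshold name `lc_db` is an illustrative choice):

```python
import numpy as np

def ideal_binary_mask(target_spec, noise_spec, lc_db=0.0):
    """Return 1 in time-frequency cells where the target dominates the noise.

    target_spec, noise_spec: magnitude spectrograms of identical shape.
    lc_db: local SNR criterion in dB (0 dB means target energy >= noise energy).
    """
    eps = 1e-12  # avoid log(0) in silent cells
    local_snr_db = 20.0 * np.log10((target_spec + eps) / (noise_spec + eps))
    return (local_snr_db >= lc_db).astype(np.uint8)

# Toy example: the target dominates the first column, the noise the second
target = np.array([[4.0, 0.1], [3.0, 0.2]])
noise = np.array([[1.0, 2.0], [0.5, 4.0]])
mask = ideal_binary_mask(target, noise)
```

Cells where the mask is 1 are treated as reliable speech evidence by a missing-data recognizer; cells where it is 0 are marginalized or reconstructed.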