Improving classification of speech transcripts
Off-the-shelf speech recognition systems can yield useful results and accelerate application development, but general-purpose systems applied to specialized domains can introduce acoustically small but semantically catastrophic errors. Furthermore, sufficient audio data may not be available to develop custom acoustic models for niche tasks. To address these problems, we propose a method to improve performance on text classification tasks that take speech transcripts as input, without requiring any in-domain audio data. Our method augments the available typed training text with inferred phonetic information so that the classifier learns semantically important acoustic regularities, making it more robust to transcription errors from the general-purpose ASR system. We successfully pilot our method in a speech-based virtual patient used for medical training, recovering up to 62% of the errors incurred by feeding a small test set of speech transcripts to a classification model trained on typed text.
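As a rough illustration of the augmentation idea, typed training examples might be extended with phone sequences from a grapheme-to-phoneme lookup before being fed to the classifier. The tiny G2P dictionary and feature scheme below are illustrative assumptions, not the project's actual pipeline:

```python
# Sketch (assumed details): extend typed training text with inferred phone
# tokens so a classifier can learn acoustically grounded regularities.
# This toy dictionary is a stand-in for a real grapheme-to-phoneme model.
TOY_G2P = {
    "back": ["B", "AE", "K"],
    "pain": ["P", "EY", "N"],
}

def augment_with_phones(sentence, g2p):
    """Return word tokens plus their inferred phone tokens as one feature sequence."""
    words = sentence.lower().split()
    phones = [p for w in words for p in g2p.get(w, ["<unk>"])]
    return words + phones

# augment_with_phones("back pain", TOY_G2P)
# -> ['back', 'pain', 'B', 'AE', 'K', 'P', 'EY', 'N']
```

A classifier trained on such joint word-plus-phone sequences can then match an ASR mistranscription to the intended question via the shared phonetic material.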
Exploring Mimic Loss for Robust ASR
We have recently devised a non-local criterion, called mimic loss, for training speech denoising models. This objective, which uses feedback from a senone classifier trained on clean speech, ensures that the denoising model produces spectral features that are useful for speech recognition. We combine this knowledge-transfer technique with the traditional local criterion to train the speech enhancer. Feeding the denoised outputs to an off-the-shelf Kaldi recipe, we achieve a new state of the art on the CHiME-2 corpus. An in-depth analysis of mimic loss reveals that this performance correlates with better reproduction of consonants with low average energy.
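The combined objective can be pictured as a weighted sum of a local spectral-fidelity term and the mimic term computed through the frozen senone classifier. The squared-error form and the interpolation weight below are assumptions for illustration, not the exact published formulation:

```python
import numpy as np

def combined_loss(denoised, clean, senone_net, alpha=0.5):
    """Local fidelity loss plus mimic loss through a frozen senone classifier.

    `senone_net` stands in for the classifier trained on clean speech;
    `alpha` is an assumed interpolation weight.
    """
    # local criterion: frame-level squared error against clean spectral features
    local = np.mean((denoised - clean) ** 2)
    # mimic criterion: match the classifier's responses on denoised vs. clean input
    mimic = np.mean((senone_net(denoised) - senone_net(clean)) ** 2)
    return alpha * local + (1.0 - alpha) * mimic
```

The mimic term is what makes the criterion non-local: gradients flow through the classifier, pushing the enhancer toward features that preserve phonetic distinctions rather than just spectral closeness.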
Explicitly Incorporating Tense/Aspect to Facilitate Creation of New Virtual Patients
The Virtual Patient project has collected a fair amount of data from student interactions with a patient presenting with back pain, but there is a desire to include a more diverse array of patients. With adequate training examples, treating the question identification task as a single-label classification problem has been fairly successful. However, the current approach is not expected to work well for identifying the novel questions that are important for patients with different circumstances, because these new questions have little training support. Exploring the label sets reveals some generalities across patients, including the importance of temporal properties of the symptoms. Including temporal information in the canonical question representations may allow us to leverage external data to mitigate the data sparsity issue for questions unique to new patients. I will solicit feedback on an approach to creating a frame-like question representation that incorporates this temporal information, as revealed by the tense and linguistic aspect of clauses in the queries.
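One way to picture the kind of frame-like representation under discussion, with tense and aspect as explicit slots, is a simple record type. All field names and values below are purely hypothetical, offered only to make the idea concrete:

```python
from dataclasses import dataclass

@dataclass
class QuestionFrame:
    """Hypothetical frame-like question representation; slot names are illustrative."""
    predicate: str  # the event or state being asked about
    topic: str      # the symptom under discussion
    tense: str      # e.g. "past", "present"
    aspect: str     # e.g. "progressive", "perfective", "habitual"

# "Are you experiencing back pain?" vs. "Did you ever have back pain?"
now = QuestionFrame("experience", "back_pain", "present", "progressive")
ever = QuestionFrame("experience", "back_pain", "past", "perfective")
```

Distinguishing the two frames above by tense and aspect alone, while sharing the predicate and topic slots, is what would let training support transfer across patients whose symptoms differ.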
Alternate Uses for Domain Adaptation and Neural Machine Translation
Recent advances in Neural Machine Translation (NMT) have had ripple effects in other areas of NLP. The advances I am concerned with in this talk have to do with using NMT sentence encodings in downstream NLP tasks. After replicating an experiment in which Wang et al. (2017) used this technique for sentence selection, I would now like to apply the approach to paraphrase identification. In this talk, I will discuss Wang et al.'s experiment, my reimplementation, and my plans for integrating similar approaches to augment the data used in the Virtual Patient project.
Tailoring “language-agnostic” black boxes to Arabic Dialects
Many state-of-the-art NLP technologies aspire to be language-agnostic but perform disproportionately poorly on Arabic and its dialects. Identifying and understanding the linguistic phenomena that cause these performance drops, and developing language-specific solutions, can shed light on how such technologies might be adapted to broaden their typological coverage. This talk will discuss several recent projects involving Arabic dialects that I have worked on, including pan-dialectal dictionary induction, morphological modeling, and spelling normalization. For each of these projects, I will discuss the linguistic traits of Arabic that challenge language-agnostic approaches and the language-specific adaptations we employed to resolve those challenges; finally, I will speculate on the generalizability of our solutions to other languages.
Learning from the best: A teacher-student framework for multilingual models in low-resource languages
Automatic Speech Recognition (ASR) for low-resource languages is difficult because of the scarcity of transcribed speech: the amount of training data for any specific language in this category does not exceed 100 hours. Recently, it has been found that knowledge obtained from a large multilingual dataset (~1500 hours) benefits ASR systems in low-resource settings; i.e., neural speech recognition models pre-trained on this dataset and then fine-tuned on language-specific data outperform models trained on language-specific data alone. However, pre-training these models requires considerable time and resources, especially for models with recurrent connections. This work investigates the effectiveness of Teacher-Student (TS) learning for transferring knowledge from a recurrent speech recognition model (TDNN-LSTM) to a non-recurrent model (TDNN) in the context of multilingual speech recognition. Our results are interesting on more than one level. First, we find that student TDNN models trained with TS learning from a recurrent teacher (TDNN-LSTM) perform much better than their counterparts pre-trained with supervised learning. Second, these student models are trained only on language-specific data rather than the bulky multilingual dataset. Finally, the TS architecture allows us to leverage untranscribed data (previously untouched during supervised training), further improving the performance of the student TDNNs.
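The core of TS learning is that the student is trained against the teacher's output distribution rather than a transcript, which is why untranscribed audio becomes usable. A minimal sketch of such a distillation loss is shown below; the cross-entropy form and the optional temperature are illustrative assumptions, not the exact recipe used in the talk:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Numerically stable softmax with optional temperature scaling."""
    z = np.exp((logits - logits.max()) / temperature)
    return z / z.sum()

def ts_loss(student_logits, teacher_logits, temperature=1.0):
    """Cross-entropy between teacher and student posteriors over senones.

    The target comes from the teacher, not a transcript, so the same loss
    applies unchanged to untranscribed audio.
    """
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return float(-np.sum(p_teacher * np.log(p_student + 1e-12)))
```

Minimizing this quantity is lowest when the student reproduces the teacher's posteriors exactly, which is how knowledge from the recurrent TDNN-LSTM can be compressed into the cheaper TDNN.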
I’ll be presenting next Tuesday on incremental coreference as it relates to linguistic and psycholinguistic accuracy. Specifically, I’ll first discuss some human reading time results from coreference-based predictors, and reasons to think humans are processing coreference in an online way. The second part will cover ongoing work to add coreference prediction to an existing incremental left-corner parser, and give a sketch of linguistic and future psycholinguistic evaluation using such a parser.
Depth-bounding a grammar has been a popular technique for applying cognitively motivated restrictions to grammar induction algorithms, limiting the search space of possible grammars. In this talk I will introduce two Bayesian depth-bounded grammar induction models that induce probabilistic context-free grammars (PCFGs) from raw text. Both first depth-bound a normal PCFG and then sample trees using the depth-bounded PCFG, but with different sampling algorithms. Several analyses show that depth-bounding is indeed effective in limiting the search space of the inducer. Results are also presented for successful unbounded PCFG induction with minimal constraints, which has usually been thought to be very difficult. Parsing results on three different languages show that our models produce parse trees better than or competitive with state-of-the-art constituency grammar induction models in terms of parsing accuracy.
This talk proposes deconvolutional time series regression (DTSR), a general-purpose regression technique for modeling sequential data in which effects can reasonably be assumed to be temporally diffuse, and applies it to discover temporal structure in three existing psycholinguistic datasets. DTSR borrows from digital signal processing by recasting time series modeling as temporal deconvolution. It thus learns latent impulse response functions (IRFs) that mediate the temporal relationship between two signals: the independent variable(s) on the one hand and the dependent variable on the other. Synthetic experiments show that DTSR successfully recovers true latent IRFs, and psycholinguistic experiments demonstrate (1) important patterns of temporal diffusion that have not previously been quantified in psycholinguistic reading time experiments, (2) the ability to provide evidence for the absence of temporal diffusion, and (3) comparable (or in some cases substantially improved) prediction quality in comparison to more heavily parameterized statistical models. DTSR can thus be used to detect the existence of temporal diffusion and, when it exists, determine data-driven impulse response functions to control for it. This suggests that DTSR can be an important component of any analysis pipeline for time series data.
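The forward model underlying this kind of deconvolution can be sketched as convolving each predictor's impulses with a parametric IRF to produce the predicted response. The shifted-gamma IRF family below is one plausible choice; in DTSR its parameters would be fitted rather than fixed, and the unnormalized form here is an assumption for illustration:

```python
import numpy as np

def shifted_gamma_irf(t, alpha=2.0, beta=1.0, delta=0.0):
    """A parametric IRF family (unnormalized shifted gamma); parameters are
    fixed here for illustration but would be learned from data in DTSR."""
    t = np.maximum(np.asarray(t, dtype=float) - delta, 0.0)
    return t ** (alpha - 1) * np.exp(-beta * t)

def convolve_predictor(event_times, event_values, query_times, irf):
    """Predicted response at each query time: sum of IRF-weighted past impulses."""
    event_times = np.asarray(event_times, dtype=float)
    event_values = np.asarray(event_values, dtype=float)
    preds = []
    for qt in query_times:
        lags = qt - event_times
        mask = lags >= 0.0  # only past events contribute to the response
        preds.append(np.sum(event_values[mask] * irf(lags[mask])))
    return np.array(preds)
```

Fitting the IRF parameters to minimize error between this convolved prediction and the observed dependent variable is what lets the model recover how far an effect is spread out in time.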
Evaluation Order Effects in Dynamic Continuized CCG:
From Negative Polarity Items to Balanced Punctuation
Combinatory Categorial Grammar’s (CCG; Steedman, 2000) flexible treatment of word order and constituency enables it to employ a compact lexicon, an important factor in its successful application to a range of NLP problems. However, this word order flexibility can be problematic for linguistic phenomena where linear order plays a key role. In this talk, I’ll show that the enhanced control over evaluation order afforded by Continuized CCG (Barker & Shan, 2014) makes it possible to formulate improved analyses of negative polarity items and balanced punctuation, and I’ll discuss their implementation as a refinement to a prototype parser for Dynamic Continuized CCG (White et al., 2017).