Aasish Project Abstract


Title: Named Entity/OOV detection from ASR hypothesis

What do you plan to do with what data?

In this project,

  • We will investigate and analyze the performance of named entity detection on broadcast news speech recognition output.
  • We will explore various rescoring techniques and the usefulness of external resources for providing additional information beyond the local context surrounding a named-entity candidate (a rough rescoring sketch follows this list).

  • 1998 HUB4 Broadcast News Evaluation English Test Material: 175 hours of speech (~1.2M words) as training data.
  • 2003 NIST Rich Transcription Evaluation Data.


External resources for additional context information:
  * NYT, APW, Wikipedia.
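
As one illustration of the kind of rescoring we have in mind, here is a minimal sketch (not a settled design) that interpolates an ASR score with count-based evidence for a candidate name in its local context, where the counts would come from an external corpus such as NYT, APW, or Wikipedia. The function name, smoothing, weighting, and example counts are all placeholders.

 # Sketch only: combine an ASR score with external-corpus evidence for a
 # named-entity candidate in its local word context. Counts are hypothetical.
 import math

 def rescore_candidate(asr_score, candidate, left_word, right_word,
                       corpus_unigrams, corpus_bigrams, alpha=0.7):
     """Interpolate the ASR score with log-count evidence from an external
     corpus (e.g. NYT/APW/Wikipedia) for the candidate and its context."""
     unigram = corpus_unigrams.get(candidate, 0) + 1                  # add-one smoothing
     left_bg = corpus_bigrams.get((left_word, candidate), 0) + 1
     right_bg = corpus_bigrams.get((candidate, right_word), 0) + 1
     external = math.log(unigram) + math.log(left_bg) + math.log(right_bg)
     return alpha * asr_score + (1 - alpha) * external

 # Example: rescoring the candidate "Yeltsin" in the context "president _ said"
 unigrams = {"Yeltsin": 5200}
 bigrams = {("president", "Yeltsin"): 800, ("Yeltsin", "said"): 300}
 print(rescore_candidate(-4.2, "Yeltsin", "president", "said", unigrams, bigrams))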

Why do you think it's interesting?

  • Recognizing named entities from speech, or at least detecting the errors they cause, has been a challenging problem in itself.
  • Although this problem has been around for a while, it is still interesting given the trend that spoken dialog interfaces like Google Voice Search will compete with traditional textual interfaces for search.

  • This problem may extend into the domain of less well-behaved text and other non-conventional uses of named entities, as in the case of spontaneous speech, OCR output, etc.

  • Miller et al. '97 observed a motivational reason to consider NE extraction from speech as a problem of interest: they found that the OOV rate for words that are part of named entities can be as much as a factor of ten greater than the baseline OOV rate for non-name words.


  • Related work:
  * Miller et al. '97 (Named Entity Extraction from Noisy Input)
  * Benoit et al. '05 (similar work on French broadcast news)
  * Palmer and Ostendorf '05 (Improving out-of-vocabulary name resolution)
  * Palmer and Ostendorf '01 (Improving Information Extraction by Modeling Errors in Speech Recognizer Output)
  * Chung et al. '04 (A Dynamic Vocabulary Spoken Dialogue Interface)


How do you plan to evaluate your work? We will evaluate on the test set (3 hours of news, ~25k words) provided by the 1998 HUB-4 dataset.
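
A rough sketch of how the detection output could be scored against the reference annotations: span-level precision, recall, and F1. The span representation is an assumption, and alignment of spans across ASR word errors is ignored here for brevity.

 # Sketch only: span-level precision/recall/F1 for named-entity detection.
 def prf(hypothesis_spans, reference_spans):
     """Each span is a (start_word_index, end_word_index, entity_type) tuple."""
     hyp, ref = set(hypothesis_spans), set(reference_spans)
     true_pos = len(hyp & ref)
     precision = true_pos / len(hyp) if hyp else 0.0
     recall = true_pos / len(ref) if ref else 0.0
     f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
     return precision, recall, f1

 hyp = [(0, 1, "PER"), (7, 8, "LOC")]
 ref = [(0, 1, "PER"), (7, 9, "LOC")]
 print(prf(hyp, ref))  # -> (0.5, 0.5, 0.5)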

What techniques do you plan to use?

We plan to divide our work into two subtasks:

  • OOV detection: we will implement the error-detection algorithm described in Palmer and Ostendorf.
  • Error resolution: choose the final word from a candidate list of named entities pruned on the basis of phonetic distance, possibly an adapted language-model score, or a second recognition pass (a rough sketch of the phonetic-distance pruning follows this list).
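
Below is a minimal sketch of the phonetic-distance pruning step, using plain edit distance over phone sequences as a stand-in for a real pronunciation lexicon with phone-confusion costs. The pronunciations and the distance threshold are hypothetical.

 # Sketch only: prune named-entity candidates by phonetic distance to the
 # phones hypothesized for a suspected error region.
 def edit_distance(a, b):
     """Levenshtein distance between two phone sequences."""
     prev = list(range(len(b) + 1))
     for i, pa in enumerate(a, 1):
         cur = [i]
         for j, pb in enumerate(b, 1):
             cur.append(min(prev[j] + 1,                # deletion
                            cur[j - 1] + 1,             # insertion
                            prev[j - 1] + (pa != pb)))  # substitution
         prev = cur
     return prev[-1]

 def prune_candidates(region_phones, lexicon, max_distance=2):
     """Keep candidates whose pronunciation is within max_distance phones."""
     return [name for name, phones in lexicon.items()
             if edit_distance(region_phones, phones) <= max_distance]

 # Hypothetical pronunciations, not taken from a real lexicon
 lexicon = {"YELTSIN": ["Y", "EH", "L", "T", "S", "IH", "N"],
            "JOHNSON": ["JH", "AA", "N", "S", "AH", "N"]}
 print(prune_candidates(["Y", "EH", "L", "S", "IH", "N"], lexicon))  # -> ['YELTSIN']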

What question do you want to answer?

  • Do external resources "really" help improve speech-based IE systems?

Who might you work with?

  • Minh Duong

References

  • David Miller, Sean Boisen, Richard Schwartz, Rebecca Stone, Ralph Weischedel. Named Entity Extraction from Noisy Input: Speech and OCR. ACL 1997.
  • David D. Palmer, Mari Ostendorf. Improving Out-of-Vocabulary Name Resolution. Computer Speech and Language 19 (2005), 107-128.
  • David D. Palmer, Mari Ostendorf. Improving Information Extraction by Modeling Errors in Speech Recognizer Output. Proceedings of the First International Conference on Human Language Technology Research, 2001.
  • G. Chung, S. Seneff, C. Wang, L. Hetherington. A Dynamic Vocabulary Spoken Dialogue Interface. Proc. ICSLP, 2004.
  • Benoit Favre, Frederic Bechet, Pascal Nocera. Robust Named Entity Extraction from Large Spoken Archives. HLT/EMNLP 2005, 491-498.