Paper:Takeuchi and Collier, CoNLL 2002

Citation

Use of Support Vector Machines in Extended Named Entity Recognition, Takeuchi and Collier, CoNLL 2002

Online Version

Here is the online version of the paper.

Summary

This paper explores the use of Support Vector Machines (SVMs) for an extended named entity (NE) task and compares their performance with a standard HMM bigram model. The authors distinguish between traditional NE and extended NE (referred to as NE+), the latter being able to capture types, i.e. instances of conceptual classes, as well as individuals. NE's main role, identifying expressions such as the names of people, places, and organizations, is hard to accomplish with traditional NLP because there is an infinite variety of such expressions and new ones are constantly being invented. Extended NE expressions (NE+) require richer contextual evidence than regular NEs, e.g. knowledge of the head noun or the predicate.

The authors implement two learning methods (SVM, HMM) and compare them on two datasets, MUC-6 and Bio1.

SVM

SVMs are known to handle large feature sets robustly and to develop models that maximize generalizability, which makes them well suited to the NE+ task. In this implementation, each training pattern is given as a vector representing lexical features of a focus word and its context. The lexical features include surface word forms, part of speech, orthographic features, and previous word class tags; the orthographic features are the ones described in Collier et al., 2000. The full context window considered in the experiments is a fixed number of words on either side of the focus word. In NE+ chunk identification, each word is assigned a tag from the set {B_C, I_C, O}, where C is the class, B_C stands for a beginning-of-chunk tag, I_C stands for an in-chunk tag, and O stands for outside of chunk, i.e. not a member of the given class. Two versions of the SVM were implemented: one uses the full window around the focus word and is implemented with a polynomial kernel function; the other uses only features of the focus word and the previous word.
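Below is a minimal sketch of the feature encoding and classifier setup described above, assuming scikit-learn (DictVectorizer plus SVC with a polynomial kernel). The window size, kernel degree, feature names, and toy data are illustrative assumptions, not the authors' exact configuration.

  # Sketch only: feature names, window size, kernel degree, and data
  # are assumptions for illustration, not the paper's exact setup.
  from sklearn.feature_extraction import DictVectorizer
  from sklearn.svm import SVC

  def orthographic(word):
      # Simplified orthographic classes in the spirit of Collier et al., 2000.
      if word.isupper():
          return "AllCaps"
      if word[0].isupper():
          return "InitCap"
      if word.isdigit():
          return "Digits"
      return "Lower"

  def features(words, pos_tags, prev_tags, i, window=2):
      # One training pattern per focus word: surface forms, POS, and
      # orthography in a window around position i, plus the previous
      # word's class tag (gold at training time).
      f = {}
      for d in range(-window, window + 1):
          j = i + d
          if 0 <= j < len(words):
              f["w%+d" % d] = words[j]
              f["pos%+d" % d] = pos_tags[j]
              f["orth%+d" % d] = orthographic(words[j])
      if i > 0:
          f["prev_tag"] = prev_tags[i - 1]
      return f

  # Toy sentence with B/I/O chunk tags for a single class, PERSON.
  words = ["Mr.", "Smith", "visited", "Boston", "."]
  pos   = ["NNP", "NNP", "VBD", "NNP", "."]
  tags  = ["B-PERSON", "I-PERSON", "O", "O", "O"]

  X = [features(words, pos, tags, i) for i in range(len(words))]
  vec = DictVectorizer()
  clf = SVC(kernel="poly", degree=2)  # polynomial kernel, as in the paper
  clf.fit(vec.fit_transform(X), tags)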

HMM

The HMM considered here is the one fully described in Collier et al., 2000. It is a linear interpolating HMM trained using maximum likelihood estimates from bigrams of the surface word and an orthographic feature chosen deterministically.
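As a rough illustration of the training estimates involved, here is a minimal sketch of bigram maximum likelihood estimation with linear interpolation. The fixed interpolation weight and the toy token stream are assumptions, and the sketch uses plain surface words only, not the paper's word/orthographic-feature pairs.

  from collections import Counter

  def interpolated_bigram(tokens, lam=0.8):
      # Linear interpolation of bigram and unigram MLE estimates:
      # P(w2 | w1) = lam * c(w1, w2) / c(w1) + (1 - lam) * c(w2) / N
      # lam is a fixed weight here, an assumption, not the paper's scheme.
      unigrams = Counter(tokens)
      bigrams = Counter(zip(tokens, tokens[1:]))
      n = len(tokens)

      def prob(w1, w2):
          bi = bigrams[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0
          uni = unigrams[w2] / n
          return lam * bi + (1 - lam) * uni

      return prob

  # Toy usage: estimate P("protein" | "the") from a small token stream.
  tokens = "the protein binds the receptor and the protein folds".split()
  p = interpolated_bigram(tokens)
  print(p("the", "protein"))  # 0.8 * 2/3 + 0.2 * 2/9, about 0.578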

The results show that the SVM outperforms the HMM by a significant margin on both the MUC-6 and Bio1 datasets when it is given a wide context window and a rich feature set. The authors also note that the SVM lacked sufficient knowledge about complex structures in NE+ expressions to achieve its best performance on Bio1.

Experimental Results

Results are given as F-scores. The following table shows the overall F-scores for the three models and the two collections, calculated using 10-fold cross validation on the total test collection. One set of results is for models using surface word and orthographic features but not POS features; the other is for models using surface word, orthographic, and POS features.

[Table 1: overall F-scores for the three models on the MUC-6 and Bio1 collections (image: Table1 Collier.png)]
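For reference, the F-score combines precision and recall. The sketch below assumes the balanced F1 measure (beta = 1) with toy counts, which may not match the exact MUC scoring conventions.

  def f_score(tp, fp, fn, beta=1.0):
      # Precision: fraction of predicted entities that are correct.
      # Recall: fraction of gold entities that were found.
      precision = tp / (tp + fp)
      recall = tp / (tp + fn)
      b2 = beta ** 2
      return (1 + b2) * precision * recall / (b2 * precision + recall)

  # Toy counts: 80 correct, 20 spurious, 10 missed -> F1 of about 0.842
  print(round(f_score(80, 20, 10), 3))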

There is a clear and sustained advantage of the SVM over the HMM for the NE task in MUC-6 and the NE+ task in Bio1. The only drawback observed with the SVM was that it seemed quite weak on very low frequency classes. However, by exploiting the SVM's capability to easily handle large feature sets, including a wide context window and POS tags, the results suggest that the SVM will perform at a significantly higher level than the HMM.

Related papers

This paper compares and contrasts the SVM approach with the HMM implementation in Collier et al., 2000.