Bikel et al MLJ 1999



Citation

D. M. Bikel, R. L. Schwartz, and R. M. Weischedel. An algorithm that learns what's in a name. Machine Learning Journal, 34:211–231, 1999.

Summary

In this paper the authors present IdentiFinder, a Hidden Markov Model approach to the Named Entity Recognition problem. Most techniques used in Named Entity Recognition up to the time of the paper were based on handcrafted patterns that are completely language dependent and not flexible to different kinds of input (speech input, upper-case text, etc.).

This was the first paper to address Named Entity Recognition with HMMs, recognizing structure in the identification of named entities and formulating it as a classification problem in which each word either belongs to one of the name-classes or to none of them.

The proposed solution can be trained efficiently, achieving results equivalent to the best NE system that existed at the time (approximately 95%). The latter was a hand-crafted solution that required experts to maintain its rules and was completely language dependent. The authors also show how the language-dependent component of the model affects the performance of the solution, demonstrating that although it helps, it is not nearly as significant as the other parts of the model.

Brief Description of the Method

Their solution has a model for each name-class and a model for the not-a-name text. Additionally, there are two special states, START-OF-SENTENCE and END-OF-SENTENCE. The figure below provides a graphical representation of the model (the dashed edges ensure the graph is complete).

[Figure: BikelHmmGraph.png]

Each of the regions in the above graph was modeled with a different statistical bigram language model (the likelihood of words occurring within that region), meaning that each type of name is treated as a separate language with its own bigram probabilities. Formally, one is trying to find the most likely sequence of name-classes NC given a sequence of words W, i.e., to maximize Pr(NC | W). By Bayes' rule Pr(NC | W) = Pr(W, NC) / Pr(W), and since Pr(W) is constant for a given sentence, this amounts to maximizing the joint probability Pr(W, NC). Using the Viterbi algorithm one can then search the space of all possible name-class assignments efficiently for the one that maximizes this quantity.
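As a rough illustration of this decoding step, the sketch below runs the Viterbi search over name-class sequences. It is a simplified sketch, not the authors' implementation: CLASSES, transition_logprob and bigram_logprob are illustrative stand-ins for the trained model components, and a class-transition score is applied at every word, glossing over the first-word special case described further below.

  CLASSES = ["PERSON", "ORGANIZATION", "LOCATION", "NOT-A-NAME"]

  def viterbi(words, transition_logprob, bigram_logprob):
      # transition_logprob(prev_nc, nc, prev_w): log Pr(NC | NC-1, w-1)
      # bigram_logprob(nc, prev_w, w):           log Pr(w | w-1, NC)
      best = {c: (transition_logprob("START-OF-SENTENCE", c, "<s>")
                  + bigram_logprob(c, "<s>", words[0]), [c])
              for c in CLASSES}
      for i in range(1, len(words)):
          new_best = {}
          for c in CLASSES:
              # Keep only the best-scoring path ending in class c at word i.
              new_best[c] = max(
                  (score + transition_logprob(p, c, words[i - 1])
                   + bigram_logprob(c, words[i - 1], words[i]), path + [c])
                  for p, (score, path) in best.items())
          best = new_best
      score, path = max(best.values())
      return path, score

Because only the best path per name-class is kept at each position, decoding is linear in sentence length, which is part of what makes training and tagging efficient.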


Additionally, the authors represent each word occurrence as a two-element vector <w, f>, where w is the text of the word and f is a word-feature assigned to it. The set of features, as well as the motivation behind them, can be found in the figure below. Computing these features requires some language-dependent work, but it is simple and deterministic.

[Figure: BikelWordFeatures.png]
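As an illustration of how such a feature can be computed, the sketch below covers a few representative cases. The feature names and their priority order follow the spirit of the figure above but are not guaranteed to reproduce it exactly.

  import re

  def word_feature(word, is_first_word=False):
      # Deterministic checks, applied in priority order (first match wins).
      if re.fullmatch(r"\d{2}", word):
          return "twoDigitNum"             # e.g. "90", likely a year
      if re.fullmatch(r"\d{4}", word):
          return "fourDigitNum"            # e.g. "1990"
      if any(ch.isdigit() for ch in word) and "-" in word:
          return "containsDigitAndDash"    # e.g. "09-96", likely a date
      if word.isupper():
          return "allCaps"                 # e.g. "BBN", likely an organization
      if is_first_word:
          return "firstWord"               # capitalization is uninformative here
      if word[:1].isupper():
          return "initCap"                 # e.g. "Sally", likely a name
      if word.islower():
          return "lowerCase"
      return "other"

  # Each word occurrence is then the two-element vector <w, f>:
  token = ("Sally", word_feature("Sally"))   # -> ("Sally", "initCap")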


The final model is divided into two parts: the Top Level Model and the Back-off Model & Smoothing.

The Top Level Model, the most accurate and powerful, generates the words and name-classes in three steps:

  • (1) Select a name-class, conditioning on the previous name-class and the previous word, i.e., Pr(NC | NC-1, w-1).
  • (2) Generate the first word inside that name-class, conditioning on the current and previous name-classes, i.e., Pr(<w, f>_first | NC, NC-1).
  • (3) Generate all subsequent words inside that name-class, each conditioned on its immediate predecessor, i.e., Pr(<w, f> | <w, f>-1, NC).

These probabilities are estimated directly as ratios of counts, where c(·) is the count function, i.e., the number of times the corresponding events occurred in the training data; for example, Pr(NC | NC-1, w-1) = c(NC, NC-1, w-1) / c(NC-1, w-1).
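A compact sketch of how the count-based estimate for step (1) can be computed is given below. The input format (each sentence as a list of (word, name-class) pairs) and the function names are assumptions for illustration, not the authors' code.

  from collections import Counter

  def estimate_class_model(sentences):
      # Maximum-likelihood estimate of Pr(NC | NC-1, w-1) = c(NC, NC-1, w-1) / c(NC-1, w-1).
      joint, context = Counter(), Counter()
      for sent in sentences:
          prev_w, prev_nc = "<s>", "START-OF-SENTENCE"
          for w, nc in sent:
              joint[(nc, prev_nc, prev_w)] += 1    # c(NC, NC-1, w-1)
              context[(prev_nc, prev_w)] += 1      # c(NC-1, w-1)
              prev_w, prev_nc = w, nc

      def p_class(nc, prev_nc, prev_w):
          denom = context[(prev_nc, prev_w)]
          return joint[(nc, prev_nc, prev_w)] / denom if denom else 0.0

      return p_class

Steps (2) and (3) are estimated analogously from their own counts, e.g. Pr(<w, f> | <w, f>-1, NC) = c(<w, f>, <w, f>-1, NC) / c(<w, f>-1, NC).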

The Back-off Model & Smoothing component deals with the fact that there is never enough training data to estimate accurate probabilities for every possible event. This happens either when a word was not seen in training or when an event occurred too rarely for a reliable estimate. In those cases the model backs off to progressively less specific distributions (down to unigram and uniform models) and smooths between the levels.
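The sketch below shows only the general shape of this idea. In the paper the mixing weight is derived from training counts (taking into account how many distinct outcomes were seen for a given history); here lam is just an illustrative parameter.

  def smoothed_prob(p_full, p_backoff, lam):
      # p_full:    estimate from the full-context model (0.0 if the event was never seen)
      # p_backoff: estimate from the next, less specific model in the back-off chain
      # lam:       confidence in the fuller model; in the paper it is computed from counts
      return lam * p_full + (1.0 - lam) * p_backoff

  # Roughly, the back-off chain for the word model goes from most to least specific:
  #   Pr(<w,f> | <w,f>-1, NC)  ->  Pr(<w,f> | NC)  ->  Pr(w | NC) * Pr(f | NC)  ->  uniform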

Experimental Results

The model was tested on the MUC-6 Named Entity task:

  • With only 100k words of training data, the system already reaches approximately 90% performance.

Related Papers