Ritter et al, EMNLP 2011. Named Entity Recognition in Tweets: An Experimental Study

From Cohen Courses
Revision as of 17:56, 24 September 2011 by Ysim

Named Entity Recognition in Tweets: An Experimental Study, by A. Ritter, S. Clark, Mausam, O. Etzioni. In Empirical Methods in Natural Language Processing, 2011.

This paper is available online [1].

Under Construction

Summary

This paper seeks to design an NLP pipeline from the ground up (POS tagging through chunking to named entity recognition) for Twitter tweets. Off-the-shelf NER systems are not able to perform NER on tweets effectively due to their noisy (misspellings, abbreviations, slang) and terse (140-character limit) nature. Tweets also contain a large number of distinctive named entity types.

The authors experimentally evaluate the performance of off-the-shelf, news-trained NLP tools on Twitter data. POS tagging accuracy is reported to drop from 0.97 to 0.80.

In addition, the authors introduce a new approach to distant supervision (Mintz et al., 2009) using topic models.

Brief description of the method

Part-of-Speech Tagging

The authors manually annotated 800 tweets using the Penn Treebank tagset.

EG Algorithm

Given a set of distributions $\alpha_i$ (one per training instance $i$, over candidate outputs $y$), the update equations are

$\alpha_{i,y} \leftarrow \frac{\alpha_{i,y} \exp(-\eta \nabla_{i,y})}{\sum_{y'} \alpha_{i,y'} \exp(-\eta \nabla_{i,y'})}$

where

$\nabla_{i,y} = \frac{\partial Q(\alpha)}{\partial \alpha_{i,y}}$

and $\eta > 0$ is the learning rate ($Q(\alpha)$ being the objective to minimize).
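The multiplicative update above can be sketched in a few lines of plain Python (a minimal sketch; the toy distribution and gradient values below are illustrative, not from the paper):

```python
import math

def eg_update(alpha, grad, eta=0.5):
    """One exponentiated-gradient step on a single distribution.

    alpha: non-negative weights summing to 1 (a distribution over labels)
    grad:  partial derivatives of the objective w.r.t. each alpha[y]
    eta:   learning rate (illustrative default)
    """
    # Multiplicative update followed by renormalization keeps
    # alpha on the probability simplex.
    unnorm = [a * math.exp(-eta * g) for a, g in zip(alpha, grad)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Toy example: uniform distribution over 3 labels, arbitrary gradient.
alpha = [1/3, 1/3, 1/3]
grad = [0.2, -0.1, 0.5]
alpha = eg_update(alpha, grad)
print(alpha)  # mass shifts toward the label with the smallest gradient
```

Note that, unlike additive gradient descent, no projection step is needed: the exponential keeps every weight positive and the normalization keeps the weights summing to one.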

Batch learning

At each iteration, all the distributions $\alpha_i$ are updated simultaneously, using gradients computed from all (or a subset of) the available training instances.

Online learning

At each iteration, we choose a single training instance $i$ and update only the corresponding distribution $\alpha_i$.
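The two training regimes can be sketched as follows (a minimal illustration, assuming each training instance i carries a distribution alphas[i] over its candidate labels and grad_fn(i, alphas) returns the corresponding gradient; all names are illustrative):

```python
import math
import random

def eg_step(alpha, grad, eta):
    """Single EG step: multiplicative update plus renormalization."""
    w = [a * math.exp(-eta * g) for a, g in zip(alpha, grad)]
    z = sum(w)
    return [x / z for x in w]

def batch_eg(alphas, grad_fn, eta=0.5, iters=100):
    """Batch regime: every distribution is updated simultaneously
    at each iteration, from gradients over the full training set."""
    for _ in range(iters):
        grads = [grad_fn(i, alphas) for i in range(len(alphas))]
        alphas = [eg_step(a, g, eta) for a, g in zip(alphas, grads)]
    return alphas

def online_eg(alphas, grad_fn, eta=0.5, iters=100, seed=0):
    """Online regime: one randomly chosen instance is updated per
    iteration; all other distributions are left unchanged."""
    rng = random.Random(seed)
    for _ in range(iters):
        i = rng.randrange(len(alphas))
        alphas[i] = eg_step(alphas[i], grad_fn(i, alphas), eta)
    return alphas
```

The trade-off mirrors that of any batch-versus-online method: the batch variant makes a full pass per update, while the online variant makes cheap per-instance updates at the cost of stochastic noise.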

Convergence rate of batch algorithm

To get within $\epsilon$ of the optimum parameters, we need $O(1/\epsilon)$ iterations.

Experimental Result

The authors compared the performance of the EG algorithm to conjugate-gradient and L-BFGS methods.

Multiclass classification

The authors used a subset of the MNIST handwritten digit classification dataset.

Multiclass.png

It can be seen that the EG algorithm converges considerably faster than the other methods.

Structured learning (dependency parsing)

The authors used the Slovene data from the CoNLL-X Shared Task on multilingual dependency parsing.

Depparse.png

It can be seen that the EG algorithm converges faster, both in objective function value and in accuracy.

Related Papers

The approach here is also similar to the use of EG algorithms for large-margin structured classification in Bartlett et al., NIPS 2004.