Named Entity Recognition in Tweets: An Experimental Study, by A. Ritter, S. Clark, Mausam, O. Etzioni. In Empirical Methods in Natural Language Processing, 2011.

This paper is available online [1].

Under Construction

== Summary ==

This paper seeks to design an NLP pipeline from the ground up (POS tagging through chunking to Named Entity Recognition) for Twitter tweets. Off-the-shelf NER systems are not able to perform NER on tweets effectively because of their noisy (misspellings, abbreviations, slang) and terse (140-character limit) nature. Tweets also contain a large number of distinctive named entity types.

The authors experimentally evaluate the performance of off-the-shelf, news-trained NLP tools on Twitter data. POS tagging accuracy is reported to drop from 0.97 to 0.80.

In addition, the authors introduce a new approach to ''distant supervision'' ([[Mintz et al 2009]]) using a [[UsesMethod::topic model]].

== Brief description of the method ==

Consider a supervised learning setting with objects <math>x_i</math> and corresponding labels <math>y_i</math>, which may be trees, sequences, or other high-dimensional structures. Also assume we are given a function <math>\phi</math> that maps pairs <math>(x, y)</math> to feature vectors <math>\phi(x, y) \in \mathbb{R}^d</math>. Given a parameter vector <math>\mathbf{w} \in \mathbb{R}^d</math>, a conditional log-linear model defines a distribution over labels as:

<math>p(y \mid x; \mathbf{w}) = \frac{\exp\left(\mathbf{w} \cdot \phi(x, y)\right)}{Z_x}</math>

where <math>Z_x = \sum_{y'} \exp\left(\mathbf{w} \cdot \phi(x, y')\right)</math> is a partition function.
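As a concrete illustration (a sketch, not code from the paper), this distribution is just a softmax over label scores whenever the candidate labels of an example can be enumerated; the function name and the <code>(num_labels, d)</code> layout of <code>phi_xy</code> below are assumptions for the example:

<pre>
import numpy as np

def conditional_log_linear(w, phi_xy):
    """p(y | x; w) over candidate labels.

    phi_xy: (num_labels, d) array whose row y is the feature vector phi(x, y).
    """
    scores = phi_xy @ w                   # w . phi(x, y) for every candidate y
    scores -= scores.max()                # shift scores for numerical stability
    exp_scores = np.exp(scores)
    return exp_scores / exp_scores.sum()  # normalize by the partition function Z_x
</pre>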

The problem of learning <math>\mathbf{w}</math> from the training data <math>\{(x_i, y_i)\}_{i=1}^{n}</math> is thus finding the <math>\mathbf{w}^*</math> which maximizes the regularized log-likelihood:

<math>\mathbf{w}^* = \arg\max_{\mathbf{w}} \sum_{i} \log p(y_i \mid x_i; \mathbf{w}) - \frac{C}{2} \left\| \mathbf{w} \right\|^2</math>

where <math>C</math> is the regularization parameter.
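Though not spelled out on this page, the gradient of this objective takes the standard moment-matching form for log-linear models, which is useful to keep in mind when comparing against the dual updates below:

<math>\nabla_{\mathbf{w}} = \sum_{i} \left( \phi(x_i, y_i) - \sum_{y} p(y \mid x_i; \mathbf{w}) \, \phi(x_i, y) \right) - C \, \mathbf{w}</math>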

The above equation has a convex dual, which is derived in [[Lebanon and Lafferty NIPS 2001]]. With dual variables <math>u_{i,y}</math>, and <math>\psi_{i,y} = \phi(x_i, y_i) - \phi(x_i, y)</math>, we define:

<math>Q(\mathbf{u}) = \frac{C}{2} \left\| \mathbf{w}(\mathbf{u}) \right\|^2 + \sum_{i} \sum_{y} u_{i,y} \log u_{i,y}</math>

where

<math>\mathbf{w}(\mathbf{u}) = \frac{1}{C} \sum_{i,y} u_{i,y} \, \psi_{i,y}</math>

The dual problem is thus

<math>\min_{\mathbf{u}} Q(\mathbf{u}) \quad \text{subject to} \quad u_{i,y} \geq 0, \;\; \sum_{y} u_{i,y} = 1 \;\; \forall i</math>

and the primal solution is recovered as <math>\mathbf{w}^* = \mathbf{w}(\mathbf{u}^*)</math>.
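Under the same enumerable-label assumption as the sketch above, <math>\mathbf{w}(\mathbf{u})</math> and <math>Q(\mathbf{u})</math> can be computed as follows (the layout of the <code>psi</code> array is an assumption for illustration):

<pre>
import numpy as np

def w_of_u(u, psi, C):
    """w(u) = (1/C) * sum_{i,y} u[i,y] * psi[i,y].

    u:   (n, num_labels) array; row i is the distribution u_i over labels.
    psi: (n, num_labels, d) array; psi[i, y] = phi(x_i, y_i) - phi(x_i, y).
    """
    return np.einsum('iy,iyd->d', u, psi) / C

def dual_objective(u, psi, C):
    """Q(u) = (C/2) * ||w(u)||^2 + sum_{i,y} u[i,y] * log(u[i,y]).

    Assumes all entries of u are strictly positive.
    """
    w = w_of_u(u, psi, C)
    return 0.5 * C * (w @ w) + np.sum(u * np.log(u))
</pre>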

=== EG Algorithm ===

Given a set of distributions <math>\mathbf{u} = \{u_{i,y}\}</math>, the update equations are

<math>u'_{i,y} = \frac{u_{i,y} \, \exp(-\eta \nabla_{i,y})}{\sum_{\hat{y}} u_{i,\hat{y}} \, \exp(-\eta \nabla_{i,\hat{y}})}</math>

where <math>\eta > 0</math> is a learning rate and

<math>\nabla_{i,y} = \frac{\partial Q(\mathbf{u})}{\partial u_{i,y}} = 1 + \log u_{i,y} + \mathbf{w}(\mathbf{u}) \cdot \psi_{i,y}</math>

=== Batch learning ===

At each iteration, <math>\mathbf{u}</math> is updated simultaneously for all (or a subset of) the available training instances.
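A minimal sketch of one such batch step (an illustration under the assumed array layout from earlier, not the authors' implementation):

<pre>
import numpy as np

def eg_batch_step(u, psi, C, eta):
    """One batch EG update applied to every training instance at once.

    u:   (n, num_labels) array of per-example label distributions.
    psi: (n, num_labels, d) array; psi[i, y] = phi(x_i, y_i) - phi(x_i, y).
    """
    w = np.einsum('iy,iyd->d', u, psi) / C   # w(u)
    grad = 1.0 + np.log(u) + psi @ w         # gradient of Q at every (i, y)
    u_new = u * np.exp(-eta * grad)          # multiplicative (exponentiated) update
    return u_new / u_new.sum(axis=1, keepdims=True)  # renormalize each u_i
</pre>

The per-row renormalization keeps every <math>u_i</math> on the probability simplex, which is exactly the constraint of the dual problem.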

=== Online learning ===

At each iteration, we choose a single training instance <math>i</math> and update only its distribution <math>u_i</math>, leaving the rest of <math>\mathbf{u}</math> unchanged.
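Under the same assumptions as the batch sketch, the corresponding online step touches only row <math>i</math> of the dual variables:

<pre>
import numpy as np

def eg_online_step(u, psi, C, eta, i):
    """One online EG update: only example i's distribution changes."""
    w = np.einsum('iy,iyd->d', u, psi) / C     # w(u) under the current duals
    grad_i = 1.0 + np.log(u[i]) + psi[i] @ w   # gradient restricted to example i
    u = u.copy()
    u[i] = u[i] * np.exp(-eta * grad_i)
    u[i] /= u[i].sum()                         # keep u_i a distribution
    return u
</pre>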

=== Convergence rate of batch algorithm ===

To get within <math>\epsilon</math> of the optimum parameters, we need <math>O(\log(1/\epsilon))</math> iterations.

== Experimental Results ==

The authors compared the performance of the EG algorithm to conjugate gradient and L-BFGS methods.

=== Multiclass classification ===

The authors used a subset of the MNIST handwritten digit classification dataset.

[[File:Multiclass.png]]

It can be seen that the EG algorithm converges considerably faster than the other methods.

=== Structured learning (dependency parsing) ===

The authors used the Slovene data from the [[UsesDataset::CoNLL-X]] shared task on multilingual dependency parsing.

[[File:Depparse.png]]

It can be seen that the EG algorithm converges faster in terms of both the objective function and accuracy measures.

== Related Papers ==

The approach here is also similar to the use of EG algorithms for large-margin structured classification in [[Bartlett et al NIPS 2004]].