Models of metaphor in NLP

Citation

E. Shutova. 2010. Models of Metaphor in NLP. In Proceedings of ACL 2010, Uppsala, Sweden.

Online version

ACL anthology

Introduction

This is a review paper on modeling metaphor in NLP. The author divides the problem into two main tasks: "metaphor recognition" and "metaphor interpretation".

Metaphor Recognition

Met* System (Fass, 1991)

  • First attempt to automatically identify and interpret metaphorical expressions
  • Uses selectional preferences and a hand-coded knowledge base
  • Three-stage approach (a toy sketch follows below)
  1. Detect a selectional preference violation
  2. If a violation is found, test whether it is a metonymic relation using hand-coded patterns
  3. If it is not metonymy, search the knowledge base for a relevant analogy in order to identify a metaphorical relation
  • Problems
  1. Detects any kind of non-literalness in language (metaphors, metonymies and others), not only metaphors
  2. Fails to detect highly conventionalized metaphors (which often do not violate selectional preferences)
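
The three-stage decision procedure can be pictured roughly as follows. This is a minimal sketch: the tiny dictionaries stand in for the hand-coded knowledge base and are illustrative assumptions, not Fass's actual data structures or interfaces.

 # Toy sketch of the met* three-stage decision procedure (assumptions throughout)

 # Stage 1 data: semantic types a verb prefers for its direct object
 PREFERENCES = {"drink": {"liquid"}}
 # Crude semantic types for a few nouns
 NOUN_TYPE = {"water": "liquid", "glass": "container", "car": "machine"}
 # Stage 2 data: hand-coded metonymic patterns (e.g. CONTAINER-FOR-CONTENTS)
 METONYMY_PATTERNS = {("drink", "container")}
 # Stage 3 data: analogies that license a metaphorical reading
 ANALOGIES = {("drink", "machine")}  # e.g. "the car drinks petrol"

 def classify(verb, noun):
     noun_type = NOUN_TYPE.get(noun, "unknown")
     # Stage 1: no selectional preference violation -> literal
     if noun_type in PREFERENCES.get(verb, set()):
         return "literal"
     # Stage 2: the violation matches a known metonymic pattern
     if (verb, noun_type) in METONYMY_PATTERNS:
         return "metonymy"
     # Stage 3: otherwise search for a relevant analogy -> metaphor
     if (verb, noun_type) in ANALOGIES:
         return "metaphor"
     return "anomalous"

 print(classify("drink", "water"))  # literal
 print(classify("drink", "glass"))  # metonymy
 print(classify("drink", "car"))    # metaphor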

Goatly (1997)

  • Identifies a set of linguistic cues that signal metaphor (a minimal cue-matching sketch follows below)
    • metaphorically speaking, utterly, completely, so to speak and, surprisingly, literally
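
Flagging such cues reduces to simple pattern matching over the sentence. The sketch below is only an illustration (the cue list is taken from the examples above; the matching strategy is an assumption): Goatly's work is a manual linguistic analysis, not a program.

 import re

 # Cue list taken from the examples above; matching strategy is an assumption
 CUES = ["metaphorically speaking", "utterly", "completely", "so to speak", "literally"]

 def has_metaphor_cue(sentence):
     """Return True if the sentence contains one of the surface cues."""
     lowered = sentence.lower()
     return any(re.search(r"\b" + re.escape(cue) + r"\b", lowered) for cue in CUES)

 print(has_metaphor_cue("He was, so to speak, drowning in paperwork."))  # True
 print(has_metaphor_cue("He drank a glass of water."))                   # False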

Peters & Peters (2000)

  • Detects figurative language in WordNet
  • Searches for systematic polysemy, which makes it possible to capture metonymic and metaphorical relations (see the sketch below)
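
One way to look for systematic polysemy with off-the-shelf tools is to collect lemmas whose noun senses span two WordNet lexicographer files. The sketch below uses NLTK's WordNet interface; the particular file pair (noun.artifact vs. noun.communication) and the spanning criterion are simplifying assumptions, not Peters & Peters' actual procedure.

 # Requires: pip install nltk; then nltk.download('wordnet')
 from collections import defaultdict
 from nltk.corpus import wordnet as wn

 def lemmas_spanning(lexfile_a, lexfile_b):
     """Lemmas that have at least one noun sense in each lexicographer file."""
     files_by_lemma = defaultdict(set)
     for synset in wn.all_synsets(pos=wn.NOUN):
         for lemma in synset.lemma_names():
             files_by_lemma[lemma].add(synset.lexname())
     return sorted(lemma for lemma, files in files_by_lemma.items()
                   if lexfile_a in files and lexfile_b in files)

 # e.g. words like 'book', whose artifact and content senses are related
 # by a regular (metonymic) sense alternation
 print(lemmas_spanning("noun.artifact", "noun.communication")[:20])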

CorMet System (Mason, 2004)

  • The first attempt at source–target domain mapping
  • A corpus-based approach that finds systematic variations in domain-specific selectional preferences (a toy illustration follows below)
  • Takes the Master Metaphor List (Lakoff et al., 1991) as a baseline and achieves an accuracy of 77% (as judged by humans)
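
The core intuition can be illustrated by comparing a verb's preferred object class in two domain corpora: a systematic shift in the preference hints at a source–target mapping. Everything below (the two tiny "corpora", the noun-to-class lexicon, the naive object extraction) is an illustrative assumption, not CorMet's actual implementation.

 from collections import Counter

 DOMAIN_CORPORA = {
     "LAB":     ["pour the solution", "pour the liquid", "pour the acid"],
     "FINANCE": ["pour money", "pour funds", "pour capital"],
 }
 # Tiny noun-to-class lexicon standing in for WordNet-derived classes
 NOUN_CLASS = {"solution": "LIQUID", "liquid": "LIQUID", "acid": "LIQUID",
               "money": "MONEY", "funds": "MONEY", "capital": "MONEY"}

 def preferred_object_class(sentences, verb="pour"):
     """Most frequent semantic class of the verb's (naively extracted) object."""
     counts = Counter()
     for sent in sentences:
         tokens = sent.split()
         if verb in tokens:
             head = tokens[-1]            # crude: last token as the object head
             counts[NOUN_CLASS.get(head, "OTHER")] += 1
     return counts.most_common(1)[0][0]

 source = preferred_object_class(DOMAIN_CORPORA["LAB"])
 target = preferred_object_class(DOMAIN_CORPORA["FINANCE"])
 print(f"'pour': {source} (LAB) vs {target} (FINANCE)")  # LIQUID vs MONEY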

TroFi System (Birke & Sarkar, 2006)

  • Sentence-clustering approach for non-literal language recognition
  • Inspired by a similarity-based word sense disambiguation method
  • Approach (a simplified sketch follows below)
  1. Use a set of human-annotated seed sentences
  2. Compute the similarity between (1) the sentence containing the word to be disambiguated and (2) each of the seed sentences
  3. Select the sense corresponding to the annotation of the most similar seed sentences
  • F1-score = 0.538, but the task is not clearly defined
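
A stripped-down version of the similarity step might look like the following, using TF-IDF cosine similarity as a stand-in for TroFi's sentence similarity. The seed sentences and labels are invented for illustration, and TroFi's actual algorithm is an iterative clustering procedure rather than a single nearest-seed lookup.

 # Requires: pip install scikit-learn
 from sklearn.feature_extraction.text import TfidfVectorizer
 from sklearn.metrics.pairwise import cosine_similarity

 # Invented seed sentences; TroFi's real seed sets come from human annotation
 SEEDS = [
     ("The dog attacked the mailman", "literal"),
     ("The senator attacked the new policy", "nonliteral"),
 ]

 def label_by_nearest_seed(sentence):
     """Assign the label of the most similar seed sentence (TF-IDF cosine)."""
     texts = [s for s, _ in SEEDS] + [sentence]
     matrix = TfidfVectorizer().fit_transform(texts).toarray()
     sims = cosine_similarity(matrix[-1:], matrix[:-1])[0]
     return SEEDS[sims.argmax()][1]

 print(label_by_nearest_seed("A stray dog attacked a jogger"))            # literal
 print(label_by_nearest_seed("Critics attacked the policy in the press")) # nonliteral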

Gedigian et al. (2006)

Krishnakumaran & Zhu (2007)

Metaphor Interpretation

MIDAS System (Martin, 1990)

KARMA System (Narayanan, 1997), ATT-Meta (Barnden and Lee, 2002)

Veale and Hao (2008)

Shutova (2010)

Metaphor Resources

Metaphor Annotation in Corpora

Metaphor & Polysemy

Metaphor Identification

Pragglejaz Procedure

Source–Target Domain Vocabulary

Annotating Source and Target Domains

Related papers

The widely cited Pang et al. (EMNLP 2002) paper was influenced by this paper, but considers supervised learning techniques. The choice of movie reviews as the domain was suggested by the (relatively) poor performance of Turney's method on movies.

An interesting follow-up paper is Turney and Littman (TOIS 2003), which focuses on evaluating the technique of using PMI to predict the semantic orientation of words.