Models of metaphor in NLP
Citation
E. Shutova. 2010. Models of Metaphor in NLP. In Proceedings of ACL 2010, Uppsala, Sweden.
Online version
Introduction
This is a review paper on modeling metaphor in NLP. The author divides the work into two main tasks: "metaphor recognition" and "metaphor interpretation".
Metaphor Recognition
Met* System (Fass, 1991)
- First attempt to automatically identify and interpret metaphorical expressions
- Uses selectional preference violations and a hand-coded knowledge base
- 3-stage approach
- Detect selectional preference violations
- If a violation is found, test whether it is a metonymic relation using hand-coded patterns
- If it is not metonymy, search the knowledge base for a relevant analogy in order to discriminate metaphorical relations from anomalous ones
- Problems
- Detects any kind of non-literalness in language (metaphors, metonymies and others), not only metaphors
- Fails on highly conventionalized metaphors, which no longer violate selectional preferences (a toy version of the violation check is sketched below)
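To make the first stage concrete, here is a minimal sketch of a selectional-preference-violation check. The preference table and semantic classes are invented stand-ins for the hand-coded knowledge base, not Fass's actual representation:

# Minimal sketch of stage 1 (selectional preference violation detection).
# PREFERENCES and SEMANTIC_CLASS are hypothetical stand-ins for a hand-coded
# knowledge base; met* itself uses a much richer representation.

PREFERENCES = {
    # verb -> semantic classes its arguments literally accept
    "drink": {"animate"},
    "devour": {"animate"},
}

SEMANTIC_CLASS = {
    "man": "animate",
    "dog": "animate",
    "car": "machine",         # "the car drinks gasoline" -> violation
    "inflation": "abstract",  # "inflation devoured his savings" -> violation
}

def violates_preference(verb, argument):
    """Return True if the argument's class falls outside the verb's literal preferences."""
    allowed = PREFERENCES.get(verb)
    cls = SEMANTIC_CLASS.get(argument)
    if allowed is None or cls is None:
        return False  # unknown words give no evidence of a violation
    return cls not in allowed

print(violates_preference("drink", "man"))  # False -> literal
print(violates_preference("drink", "car"))  # True  -> candidate metaphor/metonymy, passed to stages 2-3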
Goatly (1997)
- Identifies a set of linguistic cues that signal metaphor (a simple cue matcher along these lines is sketched below), for example:
- metaphorically speaking, utterly, completely, so to speak and, surprisingly, literally.
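A cue-based detector of this kind reduces to simple pattern matching. The cue list below is restricted to the examples above (Goatly's full inventory is larger), so this is only an illustrative sketch:

import re

# Hypothetical cue list limited to the examples above; Goatly's inventory is larger.
CUES = ["metaphorically speaking", "utterly", "completely", "so to speak", "literally"]
CUE_RE = re.compile(r"\b(" + "|".join(re.escape(c) for c in CUES) + r")\b", re.IGNORECASE)

def has_metaphor_cue(sentence):
    """Flag sentences containing one of the surface cues."""
    return bool(CUE_RE.search(sentence))

print(has_metaphor_cue("He was, metaphorically speaking, drowning in paperwork."))  # True
print(has_metaphor_cue("She drank a glass of water."))                              # False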
Peters & Peters (2000)
- Detects figurative language within WordNet
- Searches for systematic polysemy, which makes it possible to capture metonymic and metaphorical relations (see the WordNet sketch below)
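One way to approximate the idea is to look for words whose WordNet senses fall under two different coarse hypernyms, the signature of a systematic sense alternation. The building/organization pair below is just one illustrative alternation, not the actual inventory Peters & Peters extract (requires the NLTK WordNet data, nltk.download('wordnet')):

from nltk.corpus import wordnet as wn

def senses_under(word, root):
    """Noun senses of the word that have the given root among their hypernym ancestors."""
    hits = []
    for s in wn.synsets(word, pos=wn.NOUN):
        if root in set(s.closure(lambda x: x.hypernyms())):
            hits.append(s)
    return hits

building = wn.synset("building.n.01")
organization = wn.synset("social_group.n.01")

# Words with senses under both roots show a building/organization alternation,
# e.g. "the school collapsed" (building) vs. "the school announced..." (institution).
for word in ["school", "church", "bank"]:
    if senses_under(word, building) and senses_under(word, organization):
        print(word, "shows a building/organization sense alternation")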
CorMet System (Mason, 2004)
- The first attempt at source-target domain mapping
- A corpus-based approach that finds systematic variations in domain-specific selectional preferences (illustrated in the toy example below)
- Evaluated against the Master Metaphor List (Lakoff et al., 1991), achieving an accuracy of 77% (as judged by humans)
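The kind of evidence CorMet relies on can be illustrated with a toy comparison of a verb's argument preferences in two domain corpora. The verb, semantic classes and counts below are invented; CorMet derives such preferences from parsed, domain-specific corpora:

from collections import Counter

def argument_class_distribution(pairs, verb):
    """Distribution over the semantic classes of the verb's observed arguments."""
    counts = Counter(cls for v, cls in pairs if v == verb)
    total = sum(counts.values()) or 1
    return {cls: n / total for cls, n in counts.items()}

# (verb, coarse semantic class of its object) pairs, as might be extracted
# from parsed domain corpora (invented counts)
lab_corpus     = [("flow", "LIQUID")] * 8 + [("flow", "MONEY")] * 1
finance_corpus = [("flow", "MONEY")] * 7 + [("flow", "LIQUID")] * 2

print("LAB preferences:    ", argument_class_distribution(lab_corpus, "flow"))
print("FINANCE preferences:", argument_class_distribution(finance_corpus, "flow"))
# The class preferred in the source domain (LIQUID) is displaced by MONEY in the
# target domain; such systematic shifts are the signal for a MONEY-as-LIQUID
# source-target mapping.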
TroFi System (Birke & Sarkar, 2006)
- Sentence clustering approach for non-literal language recognition
- Inspired by a similarity-based word sense disambiguation method
- Approach
- Uses a set of human-annotated seed sentences
- Computes the similarity between (1) the sentence containing the word to be disambiguated and (2) all of the seed sentences
- Select the sense corresponding to the annotation in the most similar seed sentences
- Achieves an F1-score of 0.538, but the task itself is not clearly defined (a toy version of the decision rule is sketched below)
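A toy version of the decision rule (not TroFi's actual similarity measure or clustering) might use word-overlap cosine similarity against the seed sets; the seed sentences below are invented:

from collections import Counter
import math

def cosine(a, b):
    """Word-overlap cosine similarity between two sentences."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(v * v for v in va.values())) * math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

# Invented seed sets for the target verb "absorb"
literal_seeds    = ["He absorbed the water with a sponge."]
nonliteral_seeds = ["She absorbed the cost of the repairs."]

def classify(sentence):
    """Label a sentence with the annotation of its most similar seed sentence."""
    best_literal    = max(cosine(sentence, s) for s in literal_seeds)
    best_nonliteral = max(cosine(sentence, s) for s in nonliteral_seeds)
    return "literal" if best_literal >= best_nonliteral else "nonliteral"

print(classify("The company absorbed the extra cost."))  # nonliteral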
Gedigian et al. (2006)
Krishnakumaran & Zhu (2007)
Metaphor Interpretation
MIDAS System (Martin, 1990)
KARMA System (Narayanan, 1997), ATT-Meta (Barnden and Lee, 2002)
Veale and Hao (2008)
Shutova (2010)
Metaphor Resources
Metaphor Annotation in Corpora
Metaphor & Polysemy
Metaphor Identification
Pragglejaz Procedure
Source - Target Domain Vocabulary
Annotating Source and Target Domains
Related papers