Turney et al., EMNLP 2011


Citation

 @inproceedings{turney2011literal,
   title     = {Literal and metaphorical sense identification through concrete and abstract context},
   author    = {Turney, Peter D. and Neuman, Yair and Assaf, Dan and Cohen, Yohai},
   booktitle = {Proceedings of the Conference on Empirical Methods in Natural Language Processing},
   series    = {EMNLP '11},
   year      = {2011},
   isbn      = {978-1-937284-11-4},
   location  = {Edinburgh, United Kingdom},
   pages     = {680--690},
 }


Abstract from the paper

Metaphor is ubiquitous in text, even in highly technical text. Correct inference about textual entailment requires computers to distinguish the literal and metaphorical senses of a word. Past work has treated this problem as a classical word sense disambiguation task. In this paper, we take a new approach, based on research in cognitive linguistics that views metaphor as a method for transferring knowledge from a familiar, well-understood, or concrete domain to an unfamiliar, less understood, or more abstract domain. This view leads to the hypothesis that metaphorical word usage is correlated with the degree of abstractness of the word’s context. We introduce an algorithm that uses this hypothesis to classify a word sense in a given context as either literal (denotative) or metaphorical (connotative). We evaluate this algorithm with a set of adjective-noun phrases (e.g., in dark comedy, the adjective dark is used metaphorically; in dark hair, it is used literally) and with the TroFi (Trope Finder) Example Base of literal and nonliteral usage for fifty verbs. We achieve state-of-the-art performance on both datasets.

Online version

pdf link to the paper

Summary of approach

  • The main goal of this article is to distinguish between literal and metaphorical senses of the same word. For example, for ‘shot down’, the algorithm should tag the sentence ‘He shot down my plane’ as literal and ‘He shot down my argument’ as metaphorical.
  • The authors hypothesize that the degree of abstractness in a word’s context is correlated with the likelihood that the word is used metaphorically. In the previous example, ‘plane’ is a relatively concrete concept and ‘argument’ is a relatively abstract one; therefore, ‘shot down’ near ‘argument’ is more likely to have a metaphorical sense.
  • To compute the abstractness of words, the authors use a variation of Turney and Littman’s algorithm for rating words according to their semantic orientation. The abstractness of a given word is computed by comparing it to twenty abstract words and twenty concrete words that serve as paradigms of abstractness and concreteness; LSA is used to measure the semantic similarity between each pair of words.
  • A feature vector is generated for each word, with one feature per part of speech; each feature is the average abstractness rating of the context words carrying that part-of-speech tag. For example, the first feature corresponds to the average abstractness rating of all nouns that follow the candidate word. Given these feature vectors, a logistic regression classifier is used to relate degrees of abstractness to the classes literal and metaphorical.
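The abstractness rating and the per-part-of-speech context features described above can be sketched as follows. This is a toy illustration, not the paper's implementation: the paradigm lists are short stand-ins for the paper's twenty-word lists, the 2-d vectors are invented stand-ins for real LSA vectors, and the tag set is hypothetical. A real system would derive vectors from a large corpus and feed the resulting feature vectors to a logistic regression classifier.

```python
import math

# Invented stand-ins: short paradigm lists and toy 2-d "LSA" vectors.
ABSTRACT_PARADIGMS = ["idea", "concept"]
CONCRETE_PARADIGMS = ["rock", "chair"]

VECTORS = {
    "idea":     (0.9, 0.1),
    "concept":  (0.8, 0.2),
    "rock":     (0.1, 0.9),
    "chair":    (0.2, 0.8),
    "argument": (0.85, 0.15),
    "plane":    (0.15, 0.85),
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def abstractness(word):
    """Turney-and-Littman-style rating: total similarity to the abstract
    paradigm words minus total similarity to the concrete ones."""
    v = VECTORS[word]
    pos = sum(cosine(v, VECTORS[w]) for w in ABSTRACT_PARADIGMS)
    neg = sum(cosine(v, VECTORS[w]) for w in CONCRETE_PARADIGMS)
    return pos - neg

def context_features(context_by_pos, tags=("NOUN", "VERB", "ADJ")):
    """One feature per part-of-speech tag: the average abstractness of
    the context words with that tag (0.0 when the tag is absent)."""
    feats = []
    for tag in tags:
        words = context_by_pos.get(tag, [])
        avg = sum(abstractness(w) for w in words) / len(words) if words else 0.0
        feats.append(avg)
    return feats

# 'argument' rates as more abstract than 'plane', so a metaphorical
# context yields a higher noun feature.
print(abstractness("argument") > abstractness("plane"))  # True
print(context_features({"NOUN": ["argument"]}))
```

With real LSA vectors, these feature vectors and the gold literal/metaphorical labels would be the training input to the logistic regression step.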


Experiments and results

  • In the first experiment the algorithm is evaluated on one hundred adjective-noun phrases labeled literal or metaphorical by five annotators, according to the sense of the adjective. For instance, deep snow is labeled literal and deep appreciation is labeled metaphorical. The algorithm is able to predict the labels of the annotators with an average accuracy of 79%.

The next two experiments use the TroFi (Trope Finder) Example Base [1] of literal and nonliteral usage for fifty verbs which occur in 3,737 sentences from the Wall Street Journal (WSJ) corpus. In each sentence, the target verb is labeled L (literal) or N (nonliteral), according to the sense of the verb that is invoked by the sentence.

  • In the second experiment, the authors reproduce the setup of Birke and Sarkar (2006) on a subset of twenty-five of the fifty verbs. The result is an average f-score of 63.9%, compared to Birke and Sarkar’s (2006) 64.9%.
  • In the third experiment, the algorithm is trained on the twenty-five new verbs that were not used by Birke and Sarkar (2006) and then tested on the old verbs. That is, the algorithm is tested with verbs that it has never seen before. In this experiment, the average f-score is 68.1%.


Related Papers

  • A Clustering Approach for the Nearly Unsupervised Recognition of Nonliteral Language. Julia Birke and Anoop Sarkar. In Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics, EACL-2006. Trento, Italy. April 3-7, 2006. pdf
  • Active Learning for the Identification of Nonliteral Language. Julia Birke and Anoop Sarkar. In Proceedings of the Workshop on Computational Approaches to Figurative Language, NAACL-HLT 2007 workshop. Rochester, NY. April 26, 2007. pdf


Study Plan

Papers you may want to read to understand this paper.

  • George Lakoff and Mark Johnson. 1980. Metaphors We Live By. University Of Chicago Press, Chicago, IL. pdf
  • Thomas Landauer, Peter W. Foltz, and Darrell Laham. 1998. Introduction to Latent Semantic Analysis. Discourse Processes, 25(2–3):259–284. doi:10.1080/01638539809545028. pdf
  • Peter D. Turney and Michael L. Littman. 2003. Measuring praise and criticism: Inference of semantic orientation from association. ACM Transactions on Information Systems, 21(4):315–346. pdf