Difference between revisions of "A Clustering Approach for the Nearly Unsupervised Recognition of Nonliteral Language, EACL-2006"
Revision as of 17:00, 7 November 2012
Citation
Birke, J. and A. Sarkar. 2006. A clustering approach for the nearly unsupervised recognition of nonliteral language. In Proceedings of EACL-06, pages 329–336.
Online Version
Method Summary
- TroFi (TropeFinder) System
- Task: Classifying literal and nonliteral usages of verbs
- Approach: Use nearly unsupervised word-sense disambiguation and clustering techniques
- Processing Steps
- KE Algorithm: Similarity-based word-sense disambiguation algorithm
- Similarities are calculated between:
- Sentences containing the word we wish to disambiguate (the target word)
- Collections of seed sentences (feedback sets)
- Clean the Feedback Sets
- Done in order to remove false attractions (seed sentences that wrongly attract target sentences)
- 4 Principles of Scrubbing
- Human annotations (in DoKMIE) are reliable
- Phrasal/expression verbs are often indicative of nonliteral usage
- Content words appearing in both feedback sets should be avoided
- Learning & voting: Use four learners (A, B, C, D) to vote on the best scrubbing action
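The attraction step above can be sketched in a few lines. This is a deliberately simplified illustration, not the paper's actual KE algorithm: TroFi uses an iterative similarity-based word-sense disambiguation procedure, whereas here plain Jaccard word overlap stands in for the similarity function, and the seed sentences are invented examples.

```python
def tokens(sentence):
    """Lowercased bag of word types for a sentence."""
    return set(sentence.lower().split())

def similarity(sentence, feedback_set):
    """Best Jaccard word overlap between the sentence and any seed sentence."""
    s = tokens(sentence)
    best = 0.0
    for seed in feedback_set:
        t = tokens(seed)
        if s | t:
            best = max(best, len(s & t) / len(s | t))
    return best

def classify(sentence, literal_seeds, nonliteral_seeds):
    """Attract the target sentence to whichever feedback set it is more similar to."""
    lit = similarity(sentence, literal_seeds)
    non = similarity(sentence, nonliteral_seeds)
    return "literal" if lit >= non else "nonliteral"

# Hypothetical feedback sets for the target verb "absorb"
literal_seeds = ["the sponge absorbed all the water"]
nonliteral_seeds = ["the firm absorbed the losses quietly"]

print(classify("the sponge absorbed the liquid", literal_seeds, nonliteral_seeds))
# -> literal
```

Scrubbing then amounts to deleting or moving seed sentences (e.g. those whose content words appear in both feedback sets) before this attraction step, so that neither set exerts false attraction.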
Result
- TroFi achieved an F1-score of 0.538, outperforming the baseline by 24.4% (on human-labeled data)
- Built the TroFi Example Base, a freely available metaphor-annotated resource.
Discussion and Thought
- This work explores an approach to metaphor identification that has received relatively little attention. Compared with selectional-restriction modeling or lexicon-based methods, it requires less human involvement and adopts well-developed techniques borrowed from word sense disambiguation.
- Models_of_metaphor_in_NLP (Shutova, 2012) criticized that