Comparison of Birke07 and Birke06
From Cohen Courses
Latest revision as of 12:01, 8 November 2012
== Papers ==
* Active learning for the identification of nonliteral language (Birke07)
* A Clustering Approach for the Nearly Unsupervised Recognition of Nonliteral Language (Birke06)
== Big Idea ==
* The (Birke 07) paper is a further development of (Birke 06). In (Birke 06), they use a literal seed set and a nonliteral seed set to perform word sense disambiguation on the verb in a given sentence, and thus distinguish nonliteral uses from literal ones. However, the results in (Birke 06) were noisy enough that they adopted heuristic rules and a voting mechanism to clean the data. In (Birke 07), they use active learning instead of heuristic rules or voting, so that human annotation can actively improve the results rather than merely "clean" them.
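The seed-set-plus-active-learning loop described above can be sketched roughly as follows. This is a hedged illustration, not the actual algorithm from (Birke 07): the word-overlap `similarity` function, the uncertainty margin, and all names here are illustrative assumptions.

```python
def similarity(sentence, seed_set):
    """Toy similarity: fraction of the sentence's words shared with the seed set.
    (Illustrative stand-in for the papers' actual similarity model.)"""
    words = set(sentence.split())
    seed_words = set(w for s in seed_set for w in s.split())
    return len(words & seed_words) / max(len(words), 1)


def active_learn(unlabeled, literal_seeds, nonliteral_seeds,
                 ask_human, budget=10, margin_threshold=0.1):
    """Label sentences, querying a human only for the most uncertain ones."""
    labeled = {}
    for _ in range(budget):
        # Uncertainty = small gap between the two seed-set similarities.
        scored = [(abs(similarity(s, literal_seeds)
                       - similarity(s, nonliteral_seeds)), s)
                  for s in unlabeled if s not in labeled]
        if not scored:
            break
        margin, s = min(scored)
        if margin > margin_threshold:    # remaining sentences look confident
            break
        label = ask_human(s)             # query the annotator
        labeled[s] = label
        # Fold the human-labeled sentence back into the seed sets.
        (literal_seeds if label == "literal" else nonliteral_seeds).append(s)
    # Auto-label whatever remains by the nearer seed set.
    for s in unlabeled:
        if s not in labeled:
            lit = similarity(s, literal_seeds)
            non = similarity(s, nonliteral_seeds)
            labeled[s] = "literal" if lit >= non else "nonliteral"
    return labeled
```

The key contrast with (Birke 06) is where the human effort goes: instead of post-hoc cleaning, each annotation is spent on the example the model is least sure about, and it is immediately folded back into the seed sets.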
== Comparison ==
* In (Birke 06), clustering without active learning, the model obtains an f-score of 53.8%; in (Birke 07), the model with active learning obtains 64.9%.
* (Birke 07) and (Birke 06) work on almost the same task, with the same data set and the same evaluation metrics and method. My only concern is that the active learning in (Birke 07) involves much more human-annotated gold-standard data than the method of (Birke 06). I'm not quite sure this comparison is really fair.
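For reference, an f-score combines precision and recall. Assuming the standard F1 (harmonic mean) is what the papers report, a quick sanity check looks like this; the precision/recall inputs below are made up for illustration and are not the papers' numbers.

```python
def f_score(precision, recall):
    """F1: harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Made-up values for illustration only.
print(f_score(0.6, 0.4))
```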
== Other Questions ==
* How much time did you spend reading the (new, non-wikified) paper you summarized?
** 4 hours.
* How much time did you spend reading the old wikified paper?
** 30 minutes.
* How much time did you spend reading the summary of the old paper?
** 1.5 hours.
* How much time did you spend reading background material?
** The problem is very relevant to my own research project, so not much, only about 1 hour.
* Was there a study plan for the old paper?
** Yes.
* If so, did you read any of the items suggested by the study plan, and how much time did you spend reading them?
** Not exactly the same papers, but I took a look at some other word sense disambiguation papers; in total, maybe about 1.5 hours.
* Give us any additional feedback you might have about this assignment.
** This is a good assignment; it let us really understand what those papers are about.