Comparison Das et al WSDM 2011 and Zhao et al AAAI 2007

This is a comparison of two related papers in event detection and temporal information extraction.

Papers

The papers are Das et al, WSDM 2011 and Zhao et al, AAAI 2007.

Comparative analysis of both papers

At a high level, both papers are interested in discovering events from large amounts of temporal information. Both leverage user-generated content: Das et al use Wikipedia as their dataset, while Zhao et al use the Enron email corpus and Dailykos blogs.

In Das et al, the first task is to discover pairs of entities that are co-bursting in the same time period (a week); co-bursting means that both entities are mentioned significantly more during that period than during other time periods. The next step is to discover the relationships between such entities. This forms the foundation for an event, defined as an n-ary relationship between entities that are bursty in the same time period. Likewise, Zhao et al's task is to discover events, exploiting both the temporal burstiness of entities and text and the "social" aspect, where an event is talked about more than usual by "social actors".
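
As a rough illustration of the co-bursting idea (a sketch under assumptions, not either paper's actual algorithm), the code below flags an entity as bursty in a week when its mention count is well above its own weekly average, and then pairs up entities whose bursty weeks overlap. The weekly binning and the threshold factor are assumptions made only for this example.

 from itertools import combinations
 
 def bursty_weeks(weekly_counts, factor=3.0):
     """Return the weeks in which an entity is mentioned far more than its own average.
 
     weekly_counts: dict mapping week index -> mention count for one entity.
     factor: illustrative burst threshold (an assumption, not taken from either paper).
     """
     if not weekly_counts:
         return set()
     mean = sum(weekly_counts.values()) / len(weekly_counts)
     return {week for week, count in weekly_counts.items() if count > factor * mean}
 
 def co_bursting_pairs(entity_counts):
     """Map (entity_a, entity_b) -> set of weeks in which both entities are bursty."""
     bursts = {entity: bursty_weeks(counts) for entity, counts in entity_counts.items()}
     pairs = {}
     for a, b in combinations(sorted(bursts), 2):
         shared = bursts[a] & bursts[b]
         if shared:
             pairs[(a, b)] = shared
     return pairs
 
 # Toy usage with hypothetical weekly mention counts per entity.
 counts = {
     "EntityA": {0: 2, 1: 3, 2: 40, 3: 1},
     "EntityB": {0: 1, 1: 2, 2: 35, 3: 2},
     "EntityC": {0: 5, 1: 4, 2: 6, 3: 5},
 }
 print(co_bursting_pairs(counts))  # {('EntityA', 'EntityB'): {2}}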

Method-wise, both papers frame the problem of identifying relationships in terms of graphs. In Das et al, vertices are entities and edge weights describe how much the bursty time periods of two entities overlap, so two entities that are mentioned more during the same periods have a stronger edge between them. In Zhao et al, vertices are social actors. Unlike in Das et al, social actors are not entities directly involved in an event; they are simply actors who converse (through text) about the event that is taking place. Edges between social actors are thus weighted by how intensely pairs of social actors communicate during the time period.
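
Below is a minimal sketch of the Das et al style graph construction described above, assuming the edge weight is simply the Jaccard overlap of two entities' sets of bursty weeks; the actual weighting in the paper is more involved, so this is illustrative only.

 from itertools import combinations
 
 def burst_overlap_graph(bursts):
     """Build an entity graph whose edge weight is the Jaccard overlap of the
     two entities' sets of bursty weeks.
 
     bursts: dict mapping entity -> set of bursty week indices
             (e.g. the bursty_weeks output from the sketch above).
     Returns a dict mapping (entity_a, entity_b) -> weight in (0, 1].
     """
     edges = {}
     for a, b in combinations(sorted(bursts), 2):
         union = bursts[a] | bursts[b]
         if not union:
             continue
         weight = len(bursts[a] & bursts[b]) / len(union)
         if weight > 0:
             edges[(a, b)] = weight
     return edges
 
 # Toy usage with hypothetical bursty weeks per entity.
 bursts = {"EntityA": {2, 7}, "EntityB": {2, 7, 9}, "EntityC": {5}}
 print(burst_overlap_graph(bursts))  # {('EntityA', 'EntityB'): 0.666...}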

In Das et al's approach, events are thus assumed to be associated with two or more public entities, while Zhao et al's events are associated more with the topical nature of the ongoing discussions. The advantage of Das et al's approach is that events are easily interpretable, especially in the context of public news (entertainment news, political news, etc.), which is often about specific public figures or organizations. However, it cannot capture abstract events that have no specific associated entities, say a natural disaster. Zhao et al's approach, on the other hand, can identify such abstract events, but their event topics may not be as easily identifiable.

Both papers make use of algorithms from time series analysis (for burst detection) and graph clustering to solve their respective problems.
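
As a stand-in for the clustering step, the sketch below groups vertices of a weighted graph into candidate events by taking connected components over sufficiently strong edges. Both papers use more sophisticated graph clustering; the threshold and the component-based grouping are assumptions made for illustration.

 def event_clusters(edges, threshold=0.5):
     """Group vertices into candidate events via connected components over
     edges whose weight is at least the threshold.
 
     edges: dict mapping (node_a, node_b) -> weight, e.g. the output of
            burst_overlap_graph in the sketch above.
     """
     adjacency = {}
     for (a, b), weight in edges.items():
         if weight >= threshold:
             adjacency.setdefault(a, set()).add(b)
             adjacency.setdefault(b, set()).add(a)
 
     seen, clusters = set(), []
     for node in adjacency:
         if node in seen:
             continue
         stack, component = [node], set()
         while stack:
             current = stack.pop()
             if current in component:
                 continue
             component.add(current)
             stack.extend(adjacency[current] - component)
         seen |= component
         clusters.append(component)
     return clusters
 
 # Toy usage: only the strong EntityA-EntityB edge survives the threshold.
 print(event_clusters({("EntityA", "EntityB"): 0.67, ("EntityB", "EntityC"): 0.2}))
 # e.g. [{'EntityA', 'EntityB'}]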

Related papers

Questions

  1. How much time did you spend reading the (new, non-wikified) paper you summarized? About 35 minutes.
  2. How much time did you spend reading the old wikified paper? About 35 minutes.
  3. How much time did you spend reading the summary of the old paper? About 15 minutes.
  4. How much time did you spend reading background material? About 30 minutes.
  5. Was there a study plan for the old paper? There wasn't an explicit study plan, but the article did provide good background on related papers that would be useful.
    1. If so, did you read any of the items suggested by the study plan, and how much time did you spend reading them? Yes. I did a quick read of the Chambers, N. and Jurafsky, D. Template-based information extraction without the templates, ACL 2011 paper. It took me about 10 minutes.
  6. Give us any additional feedback you might have about this assignment. The paper pairing was well chosen (at least for the papers I read). Doing a comparative analysis of two papers enabled me to think more deeply about the different approaches to the same or similar problem and to identify the pros, cons, and assumptions of each.