Unsupervised Modeling of Dialog Acts in Asynchronous Conversation
Citation
Shafiq Joty, Giuseppe Carenini, and Chin-Yew Lin. Unsupervised Modeling of Dialog Acts in Asynchronous Conversations. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI), 2011, Barcelona, Spain.
Introduction
This paper aims at modeling dialog acts in asynchronous conversations in an unsupervised setting. Twelve different dialog acts were targeted (self-explanatory by their names), among them Statement, Polite Mechanism, Yes-no question, Action motivator, Wh-question, Accept response, Open-ended question, and Acknowledge and appreciate.

The approach summarized here extracts patterns from the web for the task of domain-specific information extraction. The domain under consideration was "terrorist events". The authors started with seed patterns extracted from the given MUC-4 terrorism corpus, and then searched the web for additional, similar patterns that had the required semantic affinity for the semantic classes identified for the terrorism domain. The similarity metric used was pointwise mutual information (PMI). After retrieving these additional patterns from the web, the full set of patterns was used to extract the required information from the MUC-4 terrorism corpus.
Dataset
The dataset used was the MUC-4 terrorism corpus, which contains 1700 terrorism stories, most of them news stories related to Latin American terrorism. Each story also has answer key templates containing the information that is supposed to be extracted from it. Per the authors' analysis, the dataset is difficult for an IE task because all of the text is upper-case and nearly half of the stories do not pertain to a terrorist event. Even among the remaining half that do pertain to terrorist events, many stories describe multiple such events. In addition, the authors downloaded 6182 news articles related to terrorism from the CNN News website (cnn.com) and used them for the task of extracting more patterns.
Extracting Seed Patterns
The authors used the AutoSlog-TS system [1] for extracting the seed patterns from the MUC corpus. AutoSlog-TS works by extracting syntactic patterns for all the noun phrases present in a text. These patterns are extracted both from text that is relevant to the domain and from text that is irrelevant to it, and a ranked list of the patterns is prepared based on a relevance score. The relevance score the authors used for this task was the RlogF score, which is defined as:
RlogF(pattern_i) = (relfreq_i / totalfreq_i) * log2(relfreq_i)

where relfreq_i is the frequency of the i-th pattern in the text that is relevant to the domain, and totalfreq_i is the frequency of the i-th pattern in the whole corpus.
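The RlogF ranking can be sketched as follows; the pattern strings and counts are made up for illustration and do not come from the paper.

```python
import math

def rlogf(rel_freq, total_freq):
    """RlogF relevance score: (rel_freq / total_freq) * log2(rel_freq)."""
    if rel_freq == 0:
        return 0.0  # a pattern never seen in relevant text gets no score
    return (rel_freq / total_freq) * math.log2(rel_freq)

# Toy counts: pattern -> (freq in domain-relevant texts, freq in whole corpus).
counts = {
    "died in <np>": (40, 50),    # mostly occurs in relevant stories
    "<subj> said": (30, 300),    # frequent everywhere, so low relevance
}
ranked = sorted(counts, key=lambda p: rlogf(*counts[p]), reverse=True)
```

The log2 factor rewards patterns that are frequent in relevant text, while the ratio penalizes patterns that occur just as often in irrelevant text.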
The noun phrases are identified by a heuristic algorithm, and typical extracted patterns are of the form "died in <np>", "<group> claimed responsibility", etc.
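AutoSlog-TS's real heuristics operate over a shallow parse; the toy sketch below assumes the noun phrases are already bracketed in the input (an assumption not in the paper) and just illustrates the idea of slotting each noun phrase into a short contextual pattern.

```python
import re

def extract_patterns(tagged_sentence):
    """Emit one slotted context pattern per pre-bracketed noun phrase."""
    patterns = []
    nps = re.findall(r"\[([^\]]+)\]", tagged_sentence)
    for np in nps:
        # Replace this noun phrase with a <np> slot, drop other brackets.
        slotted = tagged_sentence.replace(f"[{np}]", "<np>")
        words = slotted.replace("[", "").replace("]", "").split()
        i = words.index("<np>")
        # Keep a small window of words around the slot as the pattern.
        patterns.append(" ".join(words[max(0, i - 2): i + 2]))
    return patterns

extract_patterns("[three people] died in [the attack]")
# yields a pattern per noun phrase, e.g. "died in <np>"
```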
Extracting Terrorism-Domain Text from the Web
To extract more patterns relevant to the terrorism domain, the first step was to retrieve data from the web that actually pertained to this domain. The authors did not take on the task of classifying web pages as relevant or non-relevant to the domain. Instead, they queried the CNN news site (cnn.com) with specific search queries aimed at retrieving terrorism-related articles (using the Google search APIs), and collected 6182 news articles related to terrorism.
Extracting Similar Patterns from the Web Text
As with the text in the MUC corpus, all possible patterns were extracted from the news-story data downloaded from the web. The task then was to identify which of these patterns were as similar to, and as useful as, the seed patterns. For this, a two-step approach was taken. First, candidate patterns from the web text were selected using PMI as the metric: if a pattern in a news story co-occurred with a seed pattern in the same sentence, that pattern was selected. In the second step, the "semantic affinity" of each selected pattern was calculated. Semantic affinity measures how strongly a pattern relates to a particular semantic class; in other words, it judges the pattern's capability to extract information relevant to that class. In the context of the terrorism domain, the identified semantic classes were: target, victim, perpetrator, organization, weapon, and other. Mathematically, the semantic affinity of a pattern is defined as:
affinity(pattern, class) = (f_class / f_pattern) * log2(f_class)

where f_class is the frequency of occurrence of the pattern where it had a noun phrase from the semantic class "class", and f_pattern is the total frequency of occurrence of that pattern in the corpus.
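The computation can be sketched over a hypothetical extraction log (the patterns, classes, and counts below are invented for illustration, not taken from the paper):

```python
import math

# Hypothetical extraction log: (pattern, semantic class of the noun phrase
# it extracted), with "other" for anything unclassified.
extractions = [
    ("<subj> was kidnapped", "victim"),
    ("<subj> was kidnapped", "victim"),
    ("<subj> was kidnapped", "victim"),
    ("<subj> was kidnapped", "other"),
    ("attack on <np>", "target"),
    ("attack on <np>", "other"),
]

def semantic_affinity(pattern, sem_class):
    """(f_class / f_pattern) * log2(f_class), mirroring the RlogF form."""
    f_class = sum(1 for p, c in extractions if p == pattern and c == sem_class)
    f_pattern = sum(1 for p, c in extractions if p == pattern)
    if f_class == 0:
        return 0.0
    return (f_class / f_pattern) * math.log2(f_class)
```

A pattern that mostly extracts victims thus scores high for the victim class and near zero for the others, which is exactly the ranking signal used later for selecting web-learnt patterns.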
Experiments and Results
The complete set of extracted patterns (the seeds plus those learnt from the web data) was used to identify target and victim information in the MUC-4 test corpus. The average Precision, Recall, and F-score results are presented below. Baseline scores are for the experiment with just the seed patterns; n+baseline scores are for information extraction with the seed patterns plus the top n patterns from the larger set learnt from the web data, ranked according to their semantic affinity scores.
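The evaluation metrics are the standard set-based ones; a minimal sketch, with toy answer sets rather than real MUC-4 keys:

```python
def prf(extracted, gold):
    """Precision, recall, and F-score of an extracted set against a gold set."""
    tp = len(extracted & gold)  # correct extractions
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

# Toy example: two of three extracted answers match the answer key.
p, r, f = prf({"guerrillas", "farc", "bomb"}, {"guerrillas", "farc"})
```

Adding web-learnt patterns typically trades precision for recall, which is why the results are reported at several cutoffs n.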
References
[1] Ellen Riloff and William Phillips, "An Introduction to the Sundance and AutoSlog Systems", Technical Report, School of Computing, University of Utah.