Automated Template Extraction

From Cohen Courses
Revision as of 17:45, 6 October 2011

Team Member(s)

Proposal

Template-based information extraction methods have one glaring weakness: they rely on - you guessed it - templates. These templates are often hand-crafted, which means they either require a significant amount of time and painstaking tuning, or they are prone to errors. Neither alternative is ideal, so it would be beneficial if we could automatically produce these templates from data.

The paper referenced below by Chambers and Jurafsky is what we plan to use as a "jumping-off" point, so to speak.

We'd like to look more into the paper's methodology, apply it to a new domain, and potentially improve upon some of the methodology used.

Goal

Our goal is threefold:

  • Develop an algorithm for automated template extraction, most likely unsupervised or possibly semi-supervised
    • It will likely be similar to that of the Chambers and Jurafsky paper, though not identical, since we will be combining many out-of-the-box components
  • Compare the results on MUC-4 to the results from Chambers and Jurafsky
  • Apply the algorithm to a new dataset
    • This will not have a baseline

Intuition

Templates in an information extraction task generally represent important information to pull from a subset of all the documents. Our intuition is that the information we seek is usually a specific semantic role within a specific action (e.g. who performed action X). If that is the case, then finding the semantic relations within a given document should yield most of the important templates.

We will also need to devise a way to filter out bad templates, i.e. templates that are not indicative of the domain. There are many ways to do this: something as simple as keeping the N templates that occur most often in the data is one option, while more complex clustering like that of Chambers and Jurafsky is another.
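The simple frequency-based filter can be sketched in a few lines. The candidate predicates below are purely illustrative; in practice they would come from the semantic role labeler.

```python
from collections import Counter

def top_n_templates(candidates, n):
    """Keep the n template predicates that occur most often in the data."""
    counts = Counter(candidates)
    return [template for template, _ in counts.most_common(n)]

# Hypothetical candidate predicates extracted from a corpus:
candidates = ["bomb", "kidnap", "bomb", "attack", "bomb", "kidnap", "meet"]
print(top_n_templates(candidates, 2))  # ['bomb', 'kidnap']
```

A frequency cutoff like this is crude but gives us a working filter to compare the clustering approach against.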

One of the nice things about this idea is that it is unsupervised. Assuming we have the tools for semantic role labeling, clustering, and template selection, we can apply the technique to any domain. It also need not be limited to information extraction: using semantic role labeling in this unsupervised manner could help with document summarization, as well as with gaining domain knowledge.

Methodology

The components we will need:

  • Part of Speech Tagging
  • Named Entity Recognition
  • Semantic Role Labeling
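Once the pipeline components above are in place, the glue between them is straightforward: each SRL frame (a predicate with labeled arguments) yields candidate template slots. The frame dictionary format here is an assumption for illustration, not the output format of any particular SRL tool.

```python
def frames_to_candidates(frames):
    """Turn SRL frames (predicate plus labeled arguments) into
    (predicate, role, filler) slot candidates for a template."""
    slots = []
    for frame in frames:
        predicate = frame["predicate"]
        for role, filler in frame["arguments"].items():
            slots.append((predicate, role, filler))
    return slots

# Hypothetical SRL output for one sentence:
frames = [{"predicate": "detonate",
           "arguments": {"ARG0": "the group", "ARG1": "a car bomb"}}]
print(frames_to_candidates(frames))
# [('detonate', 'ARG0', 'the group'), ('detonate', 'ARG1', 'a car bomb')]
```

Aggregating these candidates across a corpus is what feeds the frequency or clustering filters described in the Intuition section.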

Chambers and Jurafsky also use clustering algorithms to conclude that two templates are the same (e.g. detonate and destroy). We will begin in a very simple manner (likely comparing predicates with WordNet and taking the N best) and then implement improvements.
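The simplest version of this merging step reduces to a predicate-equivalence test. The synonym groups below are a hand-built stand-in for WordNet; a real implementation would query WordNet synsets (e.g. via NLTK) instead.

```python
# Illustrative stand-in for WordNet synonym lookup (not real WordNet data).
SYNONYM_GROUPS = [
    {"detonate", "explode", "destroy"},
    {"kidnap", "abduct"},
]

def same_template(pred_a, pred_b):
    """Treat two predicates as the same template if they are identical
    or share a synonym group."""
    if pred_a == pred_b:
        return True
    return any(pred_a in group and pred_b in group for group in SYNONYM_GROUPS)

print(same_template("detonate", "destroy"))  # True
print(same_template("detonate", "kidnap"))   # False
```

Swapping the hand-built groups for WordNet path or Lin similarity with a threshold would be the first improvement, before moving to the richer clustering of Chambers and Jurafsky.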

Baseline & Dataset

The Chambers and Jurafsky paper uses the MUC-4 data set on terrorism. To give ourselves a good baseline, we will use the same set.

We will compare our results on MUC-4 with the results from the Chambers and Jurafsky paper.
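For the comparison to be meaningful we will score extracted slot fills against the gold-standard MUC-4 answer keys with the usual precision/recall/F1 metrics. A minimal sketch (the slot tuples are hypothetical):

```python
def precision_recall_f1(extracted, gold):
    """Score a set of extracted slot fills against gold-standard fills."""
    extracted, gold = set(extracted), set(gold)
    true_positives = len(extracted & gold)
    precision = true_positives / len(extracted) if extracted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1

p, r, f1 = precision_recall_f1(
    {("bomb", "PERP", "the group")},
    {("bomb", "PERP", "the group"), ("kidnap", "VICTIM", "the mayor")})
# p = 1.0, r = 0.5, f1 ≈ 0.667
```

Note that MUC-style evaluation proper allows partial template matches; exact-match scoring as above is a simplification we would refine when comparing against published numbers.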

Second Dataset

One of the strengths of automatically generating templates is that it can be done in an unsupervised manner. We will show that the approach not only expands easily to new domains, but can also be used to learn significant information about those domains.

Related Work

Other Links