== Citation ==

Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An Annotated Corpus of Semantic Roles. Computational Linguistics, 31(1):71–106.

== Online version ==

MIT Press

== Summary ==
The [[Category::paper]] describes how the Proposition Bank ([[PropBank]]) corpus was built, and presents an automatic system for [[AddressesProblem::Semantic Role Labeling]] trained on the [[UsesDataset::PropBank]] corpus.
  
For the automatic determination of semantic role labels, they applied the features and probability model of [[RelatedPaper::Gildea and Jurafsky Computational Linguistics 2002]] to PropBank. Unlike [[Gildea and Jurafsky Computational Linguistics 2002]], who had no gold-standard parse trees available, PropBank is annotated on top of the Penn Treebank and therefore provides gold-standard parses, and using them improves the performance of the system.
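The probability model is only named above; the following is a minimal sketch of a count-based backoff classifier in the spirit of [[Gildea and Jurafsky Computational Linguistics 2002]]. The class name, the feature subsets, and the backoff order are illustrative assumptions, and the real model interpolates several conditional distributions rather than taking the first non-empty one.

<pre>
# Minimal sketch (not the authors' code) of a backoff role classifier:
# role distributions are estimated from counts conditioned on feature
# subsets, falling back from specific to general subsets when a
# combination of feature values was never observed in training.
from collections import Counter, defaultdict

# Illustrative feature subsets, ordered from most specific to most general.
BACKOFF_ORDER = [
    ("head_word", "phrase_type", "predicate"),
    ("path", "predicate"),
    ("phrase_type", "position", "voice", "predicate"),
    ("phrase_type", "position", "voice"),
]

class BackoffRoleModel:
    def __init__(self):
        # counts[subset][feature-value tuple] -> Counter over role labels
        self.counts = {s: defaultdict(Counter) for s in BACKOFF_ORDER}

    def train(self, instances):
        """instances: iterable of (feature_dict, role_label) pairs."""
        for features, role in instances:
            for subset in BACKOFF_ORDER:
                key = tuple(features.get(f) for f in subset)
                self.counts[subset][key][role] += 1

    def predict(self, features):
        for subset in BACKOFF_ORDER:
            key = tuple(features.get(f) for f in subset)
            dist = self.counts[subset][key]
            if dist:  # back off until some subset has been observed
                return dist.most_common(1)[0][0]
        return "Arg1"  # a corpus-frequent role as a last resort
</pre>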
  
  
The dataset contains annotations for 72,109 predicate-argument structures with 190,815 individual arguments, covering examples from 2,462 lexical predicate types.
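To make the annotation unit concrete, here is a hypothetical in-memory representation of one predicate-argument structure; PropBank labels arguments with verb-specific numbered roles (Arg0, Arg1, ...) plus ArgM-* modifiers. The class names, fields, and the argument numbering of the example are assumptions for illustration only.

<pre>
# Hypothetical representation of one PropBank predicate-argument structure.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Argument:
    label: str              # e.g. "Arg0", "Arg1", "ArgM-TMP"
    span: Tuple[int, int]   # (first_token, last_token) indices
    text: str

@dataclass
class PredArgStructure:
    roleset: str            # predicate lemma plus sense id, e.g. "blame.01"
    arguments: List[Argument]

# "She blames the Government for failing to do enough to help."
example = PredArgStructure(
    roleset="blame.01",
    arguments=[
        Argument("Arg0", (0, 0), "She"),
        Argument("Arg1", (2, 3), "the Government"),
        Argument("Arg2", (4, 10), "for failing to do enough to help"),
    ],
)
</pre>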
  
  
The features used by the system are the phrase type, the parse tree path, the position of the constituent relative to the predicate, the voice (active or passive), and the head word.
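As a rough illustration of how these features are read off a parse tree, the sketch below assumes a simple parent-pointer Node type; the helper names are invented, and only the path notation (e.g. NP↑S↓VP↓VB) follows the convention of [[Gildea and Jurafsky Computational Linguistics 2002]].

<pre>
# Minimal sketch of extracting the five features for one candidate
# constituent; the Node type and helpers are assumptions for illustration.

class Node:
    def __init__(self, label, parent=None, start=0, head=None):
        self.label, self.parent, self.start, self.head = label, parent, start, head

def ancestors(node):
    """The node itself plus all its ancestors, bottom-up."""
    chain = []
    while node is not None:
        chain.append(node)
        node = node.parent
    return chain

def parse_tree_path(constituent, predicate):
    """Path up from the constituent to the lowest common ancestor,
    then down to the predicate, e.g. NP↑S↓VP↓VB."""
    up = ancestors(constituent)
    down = ancestors(predicate)
    common = next(n for n in up if n in down)
    up_labels = [n.label for n in up[: up.index(common) + 1]]
    down_labels = [n.label for n in reversed(down[: down.index(common)])]
    return "↑".join(up_labels) + "↓" + "↓".join(down_labels)

def extract_features(constituent, predicate, passive=False):
    return {
        "phrase_type": constituent.label,
        "path": parse_tree_path(constituent, predicate),
        "position": "before" if constituent.start < predicate.start else "after",
        "voice": "passive" if passive else "active",
        "head_word": constituent.head,
    }

# (S (NP She) (VP (VB blames) ...)): path from the subject NP to the verb
s = Node("S")
np = Node("NP", s, start=0, head="She")
vp = Node("VP", s, start=1)
vb = Node("VB", vp, start=1, head="blames")
print(extract_features(np, vb))  # path: NP↑S↓VP↓VB, position: before, ...
</pre>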
  
  
The system was evaluated in two settings: (1) predicting the correct semantic role of each constituent, given the constituents that are arguments of the predicate; and (2) both finding the argument constituents in the sentence and predicting their semantic roles.
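The two settings call for different metrics: accuracy when the argument constituents are given, and precision/recall over (span, role) predictions when they must also be found. A toy sketch (the function names are mine):

<pre>
def role_accuracy(gold_roles, predicted_roles):
    # Setting 1: fraction of pre-segmented constituents labeled correctly.
    correct = sum(g == p for g, p in zip(gold_roles, predicted_roles))
    return correct / len(gold_roles)

def span_role_precision_recall(gold_pairs, predicted_pairs):
    # Setting 2: gold_pairs / predicted_pairs are sets of ((start, end), role).
    hits = len(gold_pairs & predicted_pairs)
    return hits / len(predicted_pairs), hits / len(gold_pairs)
</pre>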
  
  
On pre-segmented constituents, the system predicts the correct semantic role with 80.9% accuracy using automatic parses, and with 82.8% accuracy using gold-standard parses. When it must both find the arguments and predict their semantic roles, it achieves 82% precision and 74.7% recall.
  
  
In addition, they showed that full parse trees are considerably more informative than a chunked (shallow) representation for labeling semantic roles: the system achieves 74.3% precision and 66.4% recall with full parses, versus 49.5% precision and 35.1% recall with chunks.
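The summary quotes precision and recall but not F1; the harmonic mean can be derived directly from the reported figures:

<pre>
# F1 = 2PR / (P + R), computed from the precision/recall figures above.
def f1(p, r):
    return 2 * p * r / (p + r)

print(round(f1(0.820, 0.747), 3))  # 0.782  finding and labeling arguments
print(round(f1(0.743, 0.664), 3))  # 0.701  labeling with full parses
print(round(f1(0.495, 0.351), 3))  # 0.411  labeling with chunks only
</pre>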
  
 
== Key Contribution ==

This paper describes the process of building the PropBank corpus, one of the most widely used corpora for semantic role labeling, and tests the statistical model of [[Gildea and Jurafsky Computational Linguistics 2002]], the first statistical approach to semantic role labeling, on the new corpus. The paper is also commonly used as a baseline for experiments on PropBank.
