Gildea and Jurafsky Computational Linguistics 2002

From Cohen Courses
 
== Summary ==
 
The [[Category::paper]] presents a system for [[AddressesProblem::Semantic Role Labeling]]. The paper describes the semantic role labeling process in detail and is a helpful end-to-end account of the whole pipeline. The task is divided into two subtasks: identifying the boundaries of frame elements and assigning a semantic role to each identified constituent.
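The two-stage decomposition can be sketched as below; the classifier stubs, the constituent encoding, and the example sentence are hypothetical illustrations of the architecture, not the paper's actual model:

```python
def is_frame_element(constituent):
    """Stage 1 (stub): decide whether a parse constituent is a frame element."""
    return constituent["path_to_target"] is not None

def assign_role(constituent):
    """Stage 2 (stub): label a frame element with its most likely role."""
    return "Agent" if constituent["position"] == "before" else "Theme"

def label_sentence(constituents):
    """Run both stages: keep frame elements, then label each one."""
    return [(c["text"], assign_role(c))
            for c in constituents if is_frame_element(c)]

# Toy constituents for "He opened the door quickly".
constituents = [
    {"text": "He", "position": "before", "path_to_target": "NP>S<VP"},
    {"text": "the door", "position": "after", "path_to_target": "NP<VP"},
    {"text": "quickly", "position": "after", "path_to_target": None},
]
print(label_sentence(constituents))  # [('He', 'Agent'), ('the door', 'Theme')]
```

The point of the sketch is only the control flow: boundary identification filters the candidate constituents before role assignment ever sees them.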
  
They define semantic roles at the level of individual frames, using the FrameNet database, whose target words include nouns and adjectives as well as verbs. They also report experiments on labeling with more general semantic (thematic) roles.
Features used in the system include phrase type, governing category, parse-tree path, position, voice, and head word. Because the full combination of feature values is seen only a handful of times in training, a single distribution conditioned on all features would be poorly estimated; they therefore built the classifier by combining probabilities from distributions conditioned on a variety of feature subsets. To combine the strengths of these distributions they evaluated several ways of estimating the full distribution: linear interpolation, EM linear interpolation, the geometric mean, backoff linear interpolation, and backoff geometric mean.
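The subset-combination idea can be sketched as follows; the feature subsets, toy counts, and interpolation weights here are illustrative assumptions, not the paper's exact backoff lattice:

```python
from collections import Counter, defaultdict

# Toy training instances: (features, role). Feature names follow the paper;
# the data and weights are made up for illustration.
TRAIN = [
    ({"phrase_type": "NP", "position": "before", "voice": "active", "head": "he"}, "Agent"),
    ({"phrase_type": "NP", "position": "before", "voice": "active", "head": "she"}, "Agent"),
    ({"phrase_type": "NP", "position": "after", "voice": "active", "head": "door"}, "Theme"),
    ({"phrase_type": "PP", "position": "after", "voice": "active", "head": "kitchen"}, "Goal"),
]

# Feature subsets to condition on, from most to least specific.
SUBSETS = [
    ("phrase_type", "position", "voice", "head"),
    ("phrase_type", "position", "voice"),
    ("phrase_type",),
]

def estimate(train, subsets):
    """Relative-frequency P(role | feature subset) for each subset."""
    dists = []
    for subset in subsets:
        counts = defaultdict(Counter)
        for feats, role in train:
            key = tuple(feats[f] for f in subset)
            counts[key][role] += 1
        dists.append(counts)
    return dists

def interpolate(dists, subsets, feats, role, weights):
    """Linear interpolation: weighted sum over the subset-conditioned distributions."""
    p = 0.0
    for dist, subset, w in zip(dists, subsets, weights):
        key = tuple(feats[f] for f in subset)
        total = sum(dist[key].values())
        if total:  # skip subsets whose value combination was never observed
            p += w * dist[key][role] / total
    return p

dists = estimate(TRAIN, SUBSETS)
# The most specific distribution has never seen head="they", but the
# coarser distributions still contribute probability mass.
query = {"phrase_type": "NP", "position": "before", "voice": "active", "head": "they"}
print(interpolate(dists, SUBSETS, query, "Agent", weights=[0.5, 0.3, 0.2]))
```

The backoff variants the paper compares differ mainly in *which* of these subset distributions are consulted (only the most specific one with enough data, versus all of them at once) and in how the weights are set.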
  
To generalize lexical statistics to unseen head words, they compared three approaches: automatic clustering, a hand-built ontological resource (the WordNet hierarchy), and bootstrapping. Automatic clustering and the WordNet hierarchy were applied only to noun phrases.
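Backing off from head words to word classes can be sketched like this; the hand-made clusters and toy counts are illustrative stand-ins for the automatically derived clusters and the WordNet hierarchy:

```python
from collections import Counter, defaultdict

# Illustrative word classes standing in for automatic clusters / WordNet synsets.
CLUSTER = {"he": "PERSON", "she": "PERSON", "boy": "PERSON",
           "door": "ARTIFACT", "window": "ARTIFACT"}

# Toy (head word, role) observations.
OBS = [("he", "Agent"), ("she", "Agent"), ("door", "Theme")]

head_counts = defaultdict(Counter)
cluster_counts = defaultdict(Counter)
for head, role in OBS:
    head_counts[head][role] += 1
    cluster_counts[CLUSTER[head]][role] += 1

def p_role_given_head(role, head):
    """P(role | head), backing off to P(role | cluster) for unseen heads."""
    if head in head_counts:
        dist = head_counts[head]
    else:
        dist = cluster_counts.get(CLUSTER.get(head), Counter())
    total = sum(dist.values())
    return dist[role] / total if total else 0.0

# "boy" never occurs in training, but its cluster PERSON does.
print(p_role_given_head("Agent", "boy"))  # falls back to the PERSON cluster
```

The same skeleton covers all three of the paper's approaches: only the source of the word-class mapping changes.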
  
The system achieves 82% accuracy in identifying the semantic roles of pre-segmented constituents, and 65% precision with 61% recall on the harder task of both segmenting constituents and identifying their semantic roles. The paper reports a wide range of experiments isolating how individual features, algorithms, and techniques affect system performance.
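For reference, the segment-and-label figures correspond to an F1 of about 0.63; this number is derived here from the quoted precision and recall, not taken from the paper:

```python
# Harmonic mean (F1) of the reported segment-and-label precision and recall.
precision, recall = 0.65, 0.61
f1 = 2 * precision * recall / (precision + recall)
print(f"F1 = {f1:.3f}")  # prints "F1 = 0.629"
```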
  
 
== Related papers ==
 
 
An interesting follow-up paper is [[RelatedPaper::Denecke and Bernauer AIME 2007]] which uses semantic structures to extract medical information.
 

Revision as of 03:02, 31 October 2010

== Citation ==

Daniel Gildea and Daniel Jurafsky. 2002. Automatic Labeling of Semantic Roles. Computational Linguistics, 28(3):245-288.

== Online version ==

MIT Press
