Bartlett et al., ACL-HLT 2008. Automatic Syllabification with Structured SVMs for Letter-to-Phoneme Conversion

Revision as of 17:55, 25 September 2011

Citation

Susan Bartlett, Grzegorz Kondrak and Colin Cherry. 2008. Automatic syllabification with structured SVMs for letter-to-phoneme conversion. In Proceedings of ACL-08: HLT, pp. 568–576.

Online Version

Automatic syllabification with structured SVMs for letter-to-phoneme conversion.

Summary

This paper describes one of the first successful attempts at integrating automatic syllabification into a letter-to-phoneme conversion system using structured SVMs. The authors obtain substantial reductions in syllabification word error rate (WER) compared with the then state-of-the-art approach. They model the problem as orthographic syllabification (inserting syllable breaks directly into the spelled word) rather than phonological syllabification, treat it as a sequence tagging problem, and define new tagging schemes. The method is applied to English as well as German and Dutch.
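To make the sequence-tagging view concrete, here is a minimal sketch of one hypothetical tagging scheme (not necessarily one of the paper's schemes): each letter is tagged "B" if it begins a new syllable and "I" otherwise, and the tag sequence determines where syllable breaks are inserted.

```python
# Illustrative only: a hypothetical per-letter tagging scheme for
# orthographic syllabification. "B" = letter begins a syllable,
# "I" = letter continues the current syllable.
def apply_tags(word, tags):
    """Insert '-' before every non-initial letter tagged 'B'."""
    out = []
    for letter, tag in zip(word, tags):
        if tag == "B" and out:
            out.append("-")
        out.append(letter)
    return "".join(out)

# "syllable" with tags B I I B I B I I yields "syl-la-ble"
print(apply_tags("syllable", list("BIIBIBII")))
```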

Method

Structured SVMs

The structured SVM is a large-margin training method for predicting structured outputs such as tag sequences. The method described in this paper uses structured SVMs that learn tag sequences from training data and perform structured output prediction by finding the highest-scoring tag sequence with the Viterbi algorithm. Hence, the decoding problem in structured SVMs resembles that of an HMM.
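The decoding step can be sketched as standard Viterbi search over tag sequences, where the path score is a sum of (here hypothetical) emission and transition scores supplied by the trained model:

```python
# Illustrative sketch, not the paper's implementation: Viterbi decoding
# for a linear sequence model, as used to find the highest-scoring tag
# sequence in a structured SVM.
def viterbi(obs, tags, emit_score, trans_score):
    """Return the highest-scoring tag sequence for the observations."""
    # best[t] = (score of best path ending in tag t, that path)
    best = {t: (emit_score(obs[0], t), [t]) for t in tags}
    for x in obs[1:]:
        new_best = {}
        for t in tags:
            # pick the predecessor tag that maximizes the path score
            prev, (s, path) = max(
                best.items(),
                key=lambda kv: kv[1][0] + trans_score(kv[0], t))
            new_best[t] = (s + trans_score(prev, t) + emit_score(x, t),
                           path + [t])
        best = new_best
    return max(best.values(), key=lambda v: v[0])[1]
```

In the full system the emission and transition scores come from the learned weight vector; here they are left as plug-in functions.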

Training structured SVMs is viewed as a multi-class classification problem. For a given training instance x_i, a correct tag sequence y_i is drawn from a set of possible tag sequences Y_i. Each input sequence x has a feature space representation ψ(x, y) to represent a candidate output sequence y.
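One common concrete choice for such a joint feature map, sketched here under the assumption of simple emission and transition features (the paper's actual feature set is richer), counts (letter, tag) and (tag, tag) pairs so that the model score is a dot product w · ψ(x, y):

```python
# Illustrative sketch of a joint feature map psi(x, y) with
# emission features ("emit", letter, tag) and transition
# features ("trans", prev_tag, tag).
from collections import Counter

def psi(x, y):
    """Count emission and transition features of sequence pair (x, y)."""
    feats = Counter()
    for letter, tag in zip(x, y):
        feats[("emit", letter, tag)] += 1
    for prev_tag, tag in zip(y, y[1:]):
        feats[("trans", prev_tag, tag)] += 1
    return feats
```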

Experiments and Results

Related Papers