Bartlett et al., ACL-HLT 2008. Automatic Syllabification with Structured SVMs for Letter-to-Phoneme Conversion

== Citation ==

Susan Bartlett, Grzegorz Kondrak and Colin Cherry. 2008. Automatic Syllabification with Structured SVMs for Letter-to-Phoneme Conversion. In ''Proceedings of ACL-08: HLT'', pages 568–576.

== Online Version ==

Automatic syllabification with structured SVMs for letter-to-phoneme conversion.
== Summary ==

This [[Category::paper]] describes one of the first successful attempts at integrating automatic syllabification into a [[AddressesProblem::grapheme to phoneme | letter-to-phoneme conversion]] system using structured [[UsesMethod::Support_Vector_Machines | SVMs]]. The authors substantially reduce the syllabification word error rate (WER) relative to the then state-of-the-art approach. They model the problem as orthographic rather than phonological syllabification, treating it as a sequence tagging problem with newly defined tagging schemes. The method is applied to German and Dutch in addition to English.
== Method ==

=== Structured SVMs ===

The [[UsesMethod::Structured SVMs | structured SVM]] [[Tsochantaridis,_Joachims_,_Support_vector_machine_learning_for_interdependent_and_structured_output_spaces_2004 | (Tsochantaridis et al., 2004)]] is a large-margin training method for predicting structured outputs such as tag sequences. The method described in this paper learns tag sequences from training data and performs structured output prediction by finding the highest-scoring tag sequence with the Viterbi algorithm. Hence, the decoding problem in [http://en.wikipedia.org/wiki/Structured_SVM structured SVMs] resembles that of an [[HMM|HMM]].
Training structured SVMs is viewed as a multi-class classification problem. For a given training instance <math>x_i</math>, the correct tag sequence <math>y_i</math> is one member of a set of possible tag sequences <math>Y_i</math>. Each input sequence <math>x</math> and candidate output sequence <math>y</math> are mapped jointly to a feature space representation <math>\Psi(x,y)</math>.
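For the Viterbi decoding mentioned above to apply, the joint feature map must decompose over adjacent tag pairs. A typical first-order decomposition, given here as an assumption consistent with the HMM-style decoding rather than as the paper's exact feature set, is:

:::::::<math>
\Psi(x,y) = \sum_{t=1}^{|x|} \phi_{\mathrm{emit}}(x, t, y_t) + \sum_{t=2}^{|x|} \phi_{\mathrm{trans}}(y_{t-1}, y_t)
</math>

where <math>\phi_{\mathrm{emit}}</math> conjoins features of the letters around position <math>t</math> with the tag <math>y_t</math>, and <math>\phi_{\mathrm{trans}}</math> fires indicator features on consecutive tag pairs.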
The training process learns a weight vector <math>w</math> that scores a correct tag sequence higher than every incorrect one, maximizing the margin between the correct and incorrect sequences. Mathematically, the constraints can be expressed as:

:::::::<math>
\forall_i \; \forall_{y \in Y_i,\, y \neq y_i} \; \left[ \Psi(x_i,y_i) \cdot w > \Psi(x_i,y) \cdot w \right]
</math>
The model then predicts the highest-scoring sequence as follows:

:::::::<math>
\underset{y \in Y_i}{\operatorname{argmax}} \left[ \Psi(x_i,y) \cdot w \right]
</math>
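As the next paragraph notes, this argmax is computed with the Viterbi algorithm. Here is a minimal sketch of such a decoder under the first-order feature decomposition sketched above; the function name and the precomputed score matrices are illustrative assumptions, not the authors' implementation:

<pre>
import numpy as np

def viterbi_decode(emit_scores, trans_scores):
    """Return the highest-scoring tag sequence as a list of tag indices.

    emit_scores:  (T, K) array; emit_scores[t, k] = w . phi_emit(x, t, k)
    trans_scores: (K, K) array; trans_scores[j, k] = w . phi_trans(j, k)
    """
    T, K = emit_scores.shape
    best = np.empty((T, K))             # best[t, k]: best score of a prefix ending in tag k
    back = np.zeros((T, K), dtype=int)  # back-pointers for recovering the argmax sequence
    best[0] = emit_scores[0]
    for t in range(1, T):
        # cand[j, k]: score of the best prefix ending in tag j, extended with tag k
        cand = best[t - 1][:, None] + trans_scores + emit_scores[t][None, :]
        back[t] = cand.argmax(axis=0)
        best[t] = cand.max(axis=0)
    # follow back-pointers from the best final tag
    tags = [int(best[-1].argmax())]
    for t in range(T - 1, 0, -1):
        tags.append(int(back[t, tags[-1]]))
    return tags[::-1]
</pre>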
The search for the highest-scoring sequence is performed using the Viterbi algorithm. The set of incorrect sequences in <math>Y_i</math> is potentially exponential in the length of <math>x_i</math>, so enumerating all margin constraints is infeasible. Structured SVMs address this problem by limiting the search space of potential output sequences with an iterative online learning approach, outlined in the steps below (a schematic code sketch follows the list):
1. Find the most damaging incorrect sequence <math>y</math> (the highest-scoring violating sequence) according to the current weight vector <math>w</math>.

2. Add <math>y</math> to an iteratively growing list <math>\bar{Y}_i</math> of incorrect sequences.

3. Update the weight vector <math>w</math> according to the objective function, using the partial sets <math>\bar{Y}_i</math> in place of <math>Y_i</math>.

4. Move on to the next training instance and repeat from step 1 until convergence.
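The following sketch renders this loop in code. It is schematic only: the helper <code>most_damaging_sequence</code> is hypothetical (in practice a constrained Viterbi search), and the update step is simplified to a perceptron-style correction rather than the true SVM margin optimization over <math>\bar{Y}_i</math>:

<pre>
import numpy as np

def train_structured(data, psi, num_epochs=10, lr=0.1):
    """Schematic constraint-generation training loop.

    data: list of (x, y_gold) pairs
    psi:  joint feature map, psi(x, y) -> numpy feature vector
    """
    w = np.zeros_like(psi(*data[0]))
    incorrect = [[] for _ in data]  # the growing sets Y-bar_i, one per instance
    for _ in range(num_epochs):
        for i, (x, y_gold) in enumerate(data):
            # Step 1: most damaging incorrect sequence under the current w
            y_bad = most_damaging_sequence(x, w, exclude=y_gold)  # hypothetical helper
            # Step 2: grow the working set of incorrect sequences
            incorrect[i].append(y_bad)
            # Step 3: update w; a perceptron-style stand-in for the SVM
            # optimization over the partial sets
            if psi(x, y_gold) @ w <= psi(x, y_bad) @ w:
                w = w + lr * (psi(x, y_gold) - psi(x, y_bad))
            # Step 4: continue with the next instance; epochs loop to convergence
    return w
</pre>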
=== Tagging Scheme ===

==== Positional Tags ====

The authors use a modified '''NB tag''' scheme for syllabification, which also takes into account the sequential position of the <math>N</math> tags, thereby encoding information about syllable length. They call this scheme the '''Numbered NB tag''' scheme.
==== Structural Tags ====

A modification of the '''ONC tag scheme''', which tags the phonemes of a syllable as ''onset'', ''nucleus'', or ''coda'', is introduced. The modified scheme also incorporates the numbered positions of each of these tags.

The final tag set combines positional and structural tags, encoding information about explicit syllable boundaries as well as the internal structure of syllables. This scheme is labeled the '''Break ONC tag''' scheme.
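As a concrete illustration, the sketch below derives Numbered NB tags from a hyphenated word. The example word and the exact conventions (e.g., whether the letter before a break keeps a number or is tagged plain '''B''') are assumptions for illustration, not the paper's definitions:

<pre>
def numbered_nb_tags(hyphenated):
    """Derive Numbered NB tags from a hyphenated word.

    Example: 'fish-er-man' ->
        ['N1', 'N2', 'N3', 'B', 'N1', 'B', 'N1', 'N2', 'N3']
    N_i: i-th letter of its syllable, no break after it; B: a break follows.
    """
    syllables = hyphenated.split('-')
    tags = []
    for s_idx, syl in enumerate(syllables):
        for i in range(len(syl)):
            last_letter = (i == len(syl) - 1)
            final_syllable = (s_idx == len(syllables) - 1)
            if last_letter and not final_syllable:
                tags.append('B')          # a syllable break follows this letter
            else:
                tags.append('N%d' % (i + 1))
    return tags
</pre>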
== Experiments and Results ==

=== Dataset ===

* The authors report results on the CELEX and NETtalk corpora; of the two, CELEX employs the more consistent syllabification strategy.
* [[NETtalk corpus | NETtalk]] contains 20k words, divided into a training set of 13k words and a test set of 7k words.
* The [[celex corpus | CELEX]] corpus is divided into 14k words for training, 25k words for testing, and 5k words as the development set.
=== Evaluation ===

The paper reports word accuracies for both the syllabification task and the grapheme-to-phoneme conversion task.
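Both are word-level accuracies: a word counts as correct only if its entire predicted syllabification (or phoneme string) matches the reference, so word accuracy is the complement of the word error rate (WER) cited in the summary:

:::::::<math>
\text{word accuracy} = \frac{\#\,\text{words predicted entirely correctly}}{\#\,\text{words}} = 1 - \text{WER}
</math>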
=== Results ===

Syllabification performance is summarized in Table 1 below. Word accuracy percentages for the grapheme-to-phoneme task, with and without syllabification information, are also given.

[[File:syllabification_svm.jpg]] [[File:g2p_svm.jpg]]
== Related Papers ==

[1] [[RelatedPaper::Tsochantaridis,_Joachims_,_Support_vector_machine_learning_for_interdependent_and_structured_output_spaces_2004 | Ioannis Tsochantaridis, Thomas Hofmann, Thorsten Joachims, and Yasemin Altun. 2004. Support vector machine learning for interdependent and structured output spaces. In ''Proceedings of the 21st International Conference on Machine Learning (ICML)'', pages 104–111.]]

[2] [[RelatedPaper::Altun et al., ICML, 2003. | Yasemin Altun, Ioannis Tsochantaridis, and Thomas Hofmann. 2003. Hidden Markov support vector machines. In ''Proceedings of the 20th International Conference on Machine Learning (ICML)'', pages 3–10.]]