Daume ICML 2009
In progress by Francis Keith
Citation
"Unsupervised Search-based Structured Prediction", Hal Daume III, ICML 2009
Online Version
An online version of the paper can be found here [1]
Summary
This paper details a methodology for unsupervised SEARN. It compares the approach against other methods, first on synthetic data, and then on unsupervised dependency parsing.
Algorithm
The basic SEARN algorithm is described (see the SEARN wiki article for more background).
In the supervised form, the algorithm uses a sample space of pairs (x, y), where x is the input and y is the true output. In the unsupervised case, we must account for the fact that the algorithm is run on an input of simply x, with the classifier still producing y.
The proposed solution is essentially to predict y first, and then perform the normal prediction. The loss function will only be dependent on x, as, while we will be predicting y, we do not have any observed "true" outputs to score it against.
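Concretely, the augmented "output" can be sketched as follows (a hypothetical illustration; the function and variable names are not from the paper):

```python
# Sketch of the augmented output used by unsupervised SEARN: for an
# input x of length N, the "true output" has length 2N -- the first N
# slots hold a (latent) labeling, the last N slots hold x itself.

def augmented_output(latent, x):
    """Concatenate a candidate latent structure with the observed input."""
    assert len(latent) == len(x)
    return list(latent) + list(x)

x = ["the", "dog", "barks"]
latent = ["DT", "NN", "VB"]          # one possible latent labeling
print(augmented_output(latent, x))
# -> ['DT', 'NN', 'VB', 'the', 'dog', 'barks']
```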
Given an input x, which is a sequence of length N, we consider the true output to be of length 2N, where the first N components are drawn from the possible vocabulary for the output (that is, the possible values of y) and represent the latent structure (which we will refer to as y); the last N components are drawn from the vocabulary for the input, x. We can then use SEARN on this input. It is important to note that constructing features for these two parts is different:
- For the first N components (the latent structure), features can be based on x and the partial latent structure (i.e., at position n, we can look at y_1, ..., y_{n-1})
- For the last N components (the input components), features can be based on x and y. However, the paper notes that in designing features here, the ideal feature set depends on correctly predicting the latent structure. This makes intuitive sense: if we correctly predict y, then we have an "optimal" label set.
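The two feature regimes in the bullets above can be sketched like this (hypothetical feature templates for illustration, not the paper's actual feature set):

```python
def latent_features(x, latent_prefix, n):
    """Features for the n-th latent component: may look at the input x
    and the latent structure predicted so far (positions 0..n-1)."""
    feats = {"word=" + x[n]: 1.0}
    if latent_prefix:
        feats["prev_label=" + latent_prefix[-1]] = 1.0
    return feats

def input_features(x, latent, n):
    """Features for the n-th input component: may condition on the full
    predicted latent structure -- useful only when it is accurate."""
    prev = latent[n - 1] if n > 0 else "<s>"
    return {"label=" + latent[n]: 1.0, "prev_label=" + prev: 1.0}
```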
As for the optimal (and thus initial) policy π*: it already predicts the input components perfectly (they are simply the observed input), and it can arbitrarily/randomly predict the latent components. This gives us the ability to run the first iteration of SEARN.
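A minimal sketch of this initial policy over the length-2N output (function and parameter names are hypothetical; the paper gives no code):

```python
import random

def pi_star(x, position, tags, rng=random):
    """Initial (optimal) policy for the length-2N output: it has no
    information about the latent slots (positions 0..N-1), so it guesses
    arbitrarily; for the input slots (positions N..2N-1) it simply
    copies the observed symbol, which is always correct."""
    n = len(x)
    if position < n:                   # latent component: arbitrary guess
        return rng.choice(tags)
    return x[position - n]             # input component: copy the truth
```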
The paper describes the unsupervised algorithm as follows:
- First iteration: Use π* to classify the sequence of latent and input components. Note that π* randomly or arbitrarily produces classifications for the latent components, and all of their costs are 0, so this won't induce any update to the classifier for these components. The important piece is that π* does predict classifications for the input components, and recall that these predictions produce input symbols, taken from the true input.
- Second iteration: Due to the update, we are no longer guaranteed to produce zero-cost classifications for the input components. Since features for the input components are based partially on the latent structure, latent classifications now receive either high costs (if they lead the classifier to reconstruct the input poorly) or low costs (if it performs well). As a result, the classifier begins to predict useful classifications for the latent components.
- More iterations cause the output to move closer to the learned classifier and further from the "optimal" policy (which arbitrarily produced the latent classifications).
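The drift described in the last bullet comes from SEARN's standard policy interpolation; a toy sketch (the mixing parameter name `beta` and the stand-in policies are illustrative, not from the paper):

```python
import random

def mix_policies(old_policy, classifier, beta, rng=random.Random(0)):
    """SEARN interpolation: at each decision, consult the newly learned
    classifier with probability beta, otherwise fall back to the
    previous policy. Repeated mixing drives the overall policy away
    from the initial (optimal) policy."""
    def mixed(state):
        if rng.random() < beta:
            return classifier(state)
        return old_policy(state)
    return mixed

initial_policy = lambda state: "arbitrary"  # stand-in for pi*
h = lambda state: "learned"                 # stand-in for a trained classifier

policy = initial_policy
for _ in range(3):                          # three SEARN iterations
    policy = mix_policies(policy, h, beta=0.3)
# A decision now reaches the initial policy only with probability (1 - 0.3)**3.
```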
The paper also shows that, under a certain set of definitions, this approach is essentially equivalent to the EM algorithm. It is also noteworthy that the unsupervised algorithm no longer carries all of the theoretical guarantees of the supervised algorithm.
Experiments
The paper presents two experiments. The first is a synthetic experiment using data sampled from HMMs and comparing against other methods; the second applies the method to unsupervised dependency parsing and compares it against other, similar methods.
Synthetic Experiments
The first experiments involve generating synthetic data from a first-order HMM and a second-order HMM. The "truth" is produced by using the generating HMM to predict the sequence. The methods compared are: EM, SEARN with HMM features using a naive Bayes classifier, and SEARN with a logistic regression classifier.
Model | States | Truth (Error Rate) | EM (Error Rate) | SEARN-NB (Error Rate) | SEARN-LR (Error Rate)
---|---|---|---|---|---
1st Order HMM | K = 2 | | | |
1st Order HMM | K = 5 | | | |
1st Order HMM | K = 10 | | | |
2nd Order HMM | K = 2 | | | |
2nd Order HMM | K = 5 | | | |
2nd Order HMM | K = 10 | | | |
The results for the first-order HMMs are uninteresting, as all methods are comparable (i.e., within the standard deviation of the optimal algorithm).
Among the second-order HMM results, the best performer is always SEARN with logistic regression, owing to its richer feature set.
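The synthetic data generation described above can be sketched as follows; the transition and emission parameters here are made-up toy values, not the paper's:

```python
import random

def sample_hmm(trans, emit, length, rng=random.Random(0)):
    """Sample a (states, observations) pair from a first-order HMM.
    trans[s] and emit[s] are dicts mapping outcomes to probabilities."""
    def draw(dist):
        r, acc = rng.random(), 0.0
        for outcome, p in dist.items():
            acc += p
            if r < acc:
                return outcome
        return outcome                 # guard against rounding shortfall

    states, obs = [], []
    s = draw(trans["<start>"])
    for _ in range(length):
        states.append(s)
        obs.append(draw(emit[s]))
        s = draw(trans[s])
    return states, obs

# Toy 2-state HMM (K = 2) with made-up parameters.
trans = {"<start>": {0: 0.5, 1: 0.5},
         0: {0: 0.8, 1: 0.2},
         1: {0: 0.3, 1: 0.7}}
emit = {0: {"a": 0.9, "b": 0.1},
        1: {"a": 0.2, "b": 0.8}}
states, obs = sample_hmm(trans, emit, length=10)
```

A second-order HMM would condition each transition on the previous two states; the same sampling scheme applies with state pairs as keys.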
Dependency Parsing Experiments
The more interesting experiment is about dependency parsing. It uses the WSJ10 corpus, with:
- 5301 sentences of training data
- 531 sentences of development data
- 530 sentences of blind test data
The systems compared are as follows (full paper citations at the end):
- Rand-Gen is a random generative baseline
- Rand-SEARN is the baseline given by the SEARN initial policy
- Two versions of the EM-based model from Klein and Manning:
- K+M:Rand-Init is initialized with a random value
- K+M:Smart-Init is initialized intelligently
- 3 variants on Smith and Eisner (S+E:Length, S+E:DelOrTrans1, S+E:Trans1)
- SEARN: Unsup is unsupervised SEARN
- S+E: Sup is Smith and Eisner's supervised model (for an upper bound)
- SEARN: Sup is supervised SEARN, also for an upper bound
The following table reports the accuracy of each method.
Algorithm | Training Accuracy | Test Accuracy | # of iterations
---|---|---|---
Rand-Gen | | |
Rand-SEARN | | |
K+M:Rand-Init | | | 63.3
K+M:Smart-Init | | | 64.1
S+E:Length | | | 173.1
S+E:DelOrTrans1 | | | 132.2
S+E:Trans1 | | | 173.4
SEARN: Unsup | | | 27.6
S+E: Sup | | | 350.5
SEARN: Sup | | | 24.4
Related Papers
- Smith, N. A., & Eisner, J. (2005). Guiding unsupervised grammar induction using contrastive estimation. IJCAI Wk. on Grammatical Inference Apps (pp. 73-82).
- Klein, D., & Manning, C. (2004). Corpus-based induction of syntactic structure: Models of dependency and constituency. Proc. Conf. of the Assoc. for Computational Linguistics (pp. 478-485).