'''Aggregate and mixed-order Markov models for statistical language processing'''
This [[Category::Paper|paper]] can be found at [http://acl.ldc.upenn.edu/W/W97/W97-0309.pdf].
==Citation==
Lawrence Saul and Fernando Pereira. Aggregate and mixed-order Markov models for statistical language processing. In Proceedings of the Second Conference on Empirical Methods in Natural Language Processing, pages 81–89, Providence, Rhode Island, USA, August 1997.

==Summary==
The authors set out to train language models with far fewer parameters than an n-gram model while still performing well, and to that end developed aggregate and mixed-order Markov models. Both retain the Markov property of n-gram models, but because probability mass is shared through latent word classes (aggregate models) or through mixtures over skipped contexts (mixed-order models), they assign zero probability to far fewer events.
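For reference, the two model forms are roughly as follows (notation is ours, reconstructed from the standard class-based and skip-bigram formulations the paper builds on). An aggregate Markov model routes the bigram dependency through <math>C</math> latent word classes:

<math>P(w_t \mid w_{t-1}) = \sum_{c=1}^{C} P(w_t \mid c)\, P(c \mid w_{t-1})</math>

This needs on the order of <math>2VC</math> parameters rather than the <math>V^2</math> of a full bigram table. A mixed-order model of order <math>m</math> mixes skip-<math>k</math> bigram predictors with word-conditioned weights <math>\lambda_k</math>:

<math>P(w_t \mid w_{t-1}, \ldots, w_{t-m}) = \sum_{k=1}^{m} \lambda_k(w_{t-k})\, P_k(w_t \mid w_{t-k}) \prod_{j=1}^{k-1} \left( 1 - \lambda_j(w_{t-j}) \right)</math>

Both are trained with EM, and since every word keeps some probability mass through the classes or the skipped contexts, far fewer events receive zero probability than under a raw n-gram model.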

==Future Work==
Aggregate Markov models can be viewed as low-rank approximations of the full bigram model, which suggests a connection to [[SVD]] and related matrix-factorization techniques worth exploring (see the sketch below). The classic question of generative versus discriminative models also comes up: the authors argue that their generative approach is adequate because the mixture components are fit directly to the empirical distribution. Their models also train in a fraction of the time required by Rosenfeld's maximum entropy models.
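To make the SVD analogy concrete, here is a minimal numpy sketch (toy sizes, with randomly initialized parameters standing in for EM-trained ones, so the numbers are purely illustrative): an aggregate model is a rank-<math>C</math> factorization of the <math>V \times V</math> bigram matrix into two row-stochastic factors.

<pre>
import numpy as np

V, C = 1000, 32  # toy vocabulary size and number of latent classes

rng = np.random.default_rng(0)

# Aggregate-model parameters (random here; the paper fits them with EM).
# Each row of each matrix is a probability distribution.
P_c_given_w = rng.dirichlet(np.ones(C), size=V)  # V x C: P(c | w_prev)
P_w_given_c = rng.dirichlet(np.ones(V), size=C)  # C x V: P(w | c)

# The implied bigram matrix is their product: a rank-C, row-stochastic
# approximation of the full V x V bigram table.
P_bigram = P_c_given_w @ P_w_given_c

print("aggregate parameters:", P_c_given_w.size + P_w_given_c.size)  # 64000
print("full bigram table:", V * V)                                   # 1000000
print("rank bounded by C:", np.linalg.matrix_rank(P_bigram) <= C)    # True
</pre>

Unlike an SVD, both factors are constrained to be nonnegative and row-stochastic, so the product remains a valid conditional distribution; that constraint is what makes EM, rather than a spectral method, the natural training procedure.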

==Related Work==
* [[RelatedPaper::Rosenfeld, Computer Speech and Language 1996]] approaches language modeling from a discriminative (maximum entropy) stance.
* [[RelatedPaper::Jelinek et al, Advances in Speech Signal Processing 1992]]