Selen writeup of Klein and Manning

From Cohen Courses

This is a review of the paper Klein_2002_conditional_structure_versus_conditional_estimation_in_nlp_models by user:Selen.

In this paper they compare different methods used in statistical NLP tasks to see which property of a technique actually makes the difference in performance.

They first take one model, Naive Bayes, and test its performance using different objective functions (joint versus conditional likelihood) and optimization techniques.
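To make the distinction concrete, here is a small sketch (mine, not from the paper) that scores the same Naive Bayes model under the two objectives: the joint log-likelihood sums log P(c, x), while the conditional log-likelihood sums log P(c | x). The toy dataset and add-one smoothing are illustrative assumptions.

```python
import math
from collections import Counter

# toy labeled data: (label, feature words) -- illustrative only
data = [("pos", ["good", "fun"]), ("pos", ["good"]),
        ("neg", ["bad"]), ("neg", ["bad", "fun"])]

vocab = {w for _, ws in data for w in ws}
labels = {c for c, _ in data}

# generative (maximum-likelihood) estimates with add-one smoothing
label_counts = Counter(c for c, _ in data)
word_counts = Counter((c, w) for c, ws in data for w in ws)

def log_p_label(c):
    return math.log(label_counts[c] / len(data))

def log_p_word(w, c):
    total = sum(word_counts[(c, v)] for v in vocab)
    return math.log((word_counts[(c, w)] + 1) / (total + len(vocab)))

def log_joint(c, ws):
    # joint objective term: log P(c, x) = log P(c) + sum_w log P(w | c)
    return log_p_label(c) + sum(log_p_word(w, c) for w in ws)

def log_conditional(c, ws):
    # conditional objective term: log P(c | x), normalized over labels
    scores = {k: log_joint(k, ws) for k in labels}
    z = math.log(sum(math.exp(s) for s in scores.values()))
    return scores[c] - z

joint_ll = sum(log_joint(c, ws) for c, ws in data)
cond_ll = sum(log_conditional(c, ws) for c, ws in data)
print(joint_ll, cond_ll)
```

The same parameters get different scores under the two objectives; the paper's point is that one can hold the model structure fixed and vary only which of these quantities is optimized.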

Then they compare different model structures: HMMs and CMMs.
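Roughly, the two structures factor the sequence probability differently (notation mine, not copied from the paper): the HMM defines a joint distribution over states s and observations o,

P(s, o) = prod_t P(s_t | s_{t-1}) * P(o_t | s_t),

while the CMM conditions each state directly on the observation,

P(s | o) = prod_t P(s_t | s_{t-1}, o_t).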

They finally conclude that performance depends heavily on the features used, and that once the features are fixed, it depends mainly on model structure.

I like this paper in the sense that they apply a scientific method to find out which property of a method makes it perform better, instead of throwing everything in and then comparing results.

I would also love to see a comparison with CRFs and MEMMs.