Apappu writeup of Cohen and Carvalho


Cohen_2005_stacked_sequential_learning by user:Apappu

Stacked sequential learning

  • This paper is about improving performance on sequence partitioning tasks by augmenting an arbitrary base learner with a meta-learning algorithm, in this case stacked sequential learning (sketched below).
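
A minimal sketch of the method as I read it, assuming scikit-learn-style estimators and a single long sequence of examples; all names here are mine, and the predicted labels are appended as raw numeric features for brevity (one-hot encoding them would be more faithful):

```python
# Minimal sketch of stacked sequential learning: cross-validated base-learner
# predictions become extra windowed features for a second-stage learner.
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import KFold
from sklearn.naive_bayes import GaussianNB

def windowed(y_hat, W):
    """Predicted labels in a +/-W window around each position (-1 is an assumed padding value)."""
    padded = np.concatenate([np.full(W, -1), y_hat, np.full(W, -1)])
    return np.stack([padded[i:i + 2 * W + 1] for i in range(len(y_hat))])

def train_stacked(X, y, base, W=5, K=5):
    # 1. Cross-validated predictions, so the meta-learner is trained on the
    #    kind of noisy labels it will actually see at test time.
    y_hat = np.empty_like(y)
    for train_idx, test_idx in KFold(n_splits=K).split(X):
        f_j = clone(base).fit(X[train_idx], y[train_idx])
        y_hat[test_idx] = f_j.predict(X[test_idx])
    # 2. Base learner f trained on all of the data (used for the first pass).
    f = clone(base).fit(X, y)
    # 3. Meta-learner f' trained on the features extended with the windowed
    #    cross-validated predictions (history and future).
    f_prime = clone(base).fit(np.hstack([X, windowed(y_hat, W)]), y)
    return f, f_prime, W

def predict_stacked(model, X):
    f, f_prime, W = model
    y_hat = f.predict(X)  # first pass: base predictions
    return f_prime.predict(np.hstack([X, windowed(y_hat, W)]))  # second pass

# Usage with the paper's best settings, e.g.:
# model = train_stacked(X, y, GaussianNB(), W=5, K=5)
# y_pred = predict_stacked(model, X_test)
```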


  • I liked this paper because it addresses the recurring problem of mismatch between the data used for training and for testing: an MEMM, for example, is trained on the true previous labels but must condition on its own noisy predictions at test time. This issue is well illustrated by the poor performance of MEMMs without stacked learning in place.


  • This meta-learning method accounts not only for the history but also for the future. The experiments on benchmark tasks show that a window of size 5 (over both history and future) and a split size K of 5 improve the performance of ME and CRF learners once they are augmented with the stacked sequential learner; a tiny worked example of the windowed features follows.
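
To make the window concrete, here is a small worked example (W = 1 for readability rather than the paper's 5; -1 is an assumed padding value for positions past the sequence boundary):

```python
import numpy as np

y_hat = np.array([0, 1, 1, 0])  # first-pass predicted labels
W = 1
padded = np.concatenate([np.full(W, -1), y_hat, np.full(W, -1)])
windows = np.stack([padded[i:i + 2 * W + 1] for i in range(len(y_hat))])
print(windows)
# [[-1  0  1]
#  [ 0  1  1]
#  [ 1  1  0]
#  [ 1  0 -1]]
```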


  • The discussion section mentions that in a scenario where data is plentiful but training time is limited, it is fine to split the data into just two halves, since for a linear-time learner the cross-validation step then costs no more than a single training run. It would be interesting to see how the performance changes on these benchmark tasks, or on additional tasks like music-2 or music-5.


  • If you use stacked sequential learning with naive Bayes as the base learner and K = 10, it should be K + 2 = 12 times slower! A back-of-the-envelope version of this cost accounting is sketched below.
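
A rough version of that arithmetic (my own accounting, not code from the paper, assuming training cost is dominated by base-learner runs: K cross-validation runs, one base-learner run on all the data, and one meta-learner run):

```python
# Back-of-the-envelope slowdown of stacked sequential learning relative to
# training the base learner alone.
def slowdown(K, linear_time=False):
    if linear_time:
        # Each of the K cross-validation runs sees (K-1)/K of the data, so for
        # a linear-time learner (e.g. naive Bayes) they together cost about
        # K * (K-1)/K = K - 1 full runs.
        return (K - 1) + 1 + 1  # CV runs + base run on all data + meta run
    # Counting every cross-validation run as a full run gives the K + 2 bound.
    return K + 2

print(slowdown(10))                   # 12 -- the K + 2 = 12 figure above
print(slowdown(2, linear_time=True))  # 3 -- two halves add only a small constant factor
```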