Martins et al. EMNLP 2011. Structured Sparsity in Structured Prediction

Structured Sparsity in Structured Prediction, by A. Martins, N. Smith, P. Aguiar, and M. Figueiredo. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2011.

This paper is available online [1].

Under Construction

== Summary ==

This paper is concerned with the problem of model selection in learning. The authors seek to make joint decisions about candidate features and to promote structured sparsity, rather than merely limiting the number of active features. Their approach is to use regularizers that encode prior knowledge about the structure of the feature space, and hence guide feature selection.
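The kind of structure involved can be made concrete with the group lasso, whose <math>\ell_{2,1}</math> mixed-norm penalty selects or discards whole groups of features together. The sketch below is illustrative only, not the paper's implementation; the group partition and all identifiers are hypothetical.

<pre>
import numpy as np

def group_lasso_penalty(w, groups, lam):
    """Omega(w) = lam * sum_g ||w_g||_2, summed over feature groups."""
    return lam * sum(np.linalg.norm(w[g]) for g in groups)

def prox_group_lasso(w, groups, step):
    """Proximal operator of the group-lasso penalty: shrink each group's
    L2 norm by `step`, zeroing the entire group when its norm is below `step`."""
    out = w.copy()
    for g in groups:
        norm = np.linalg.norm(w[g])
        out[g] = 0.0 if norm <= step else (1.0 - step / norm) * w[g]
    return out

# Toy usage: two hypothetical feature groups (e.g., features from the same
# template). The weak second group is eliminated jointly, not weight by weight.
w = np.array([0.9, 1.2, 0.05, -0.03])
groups = [np.array([0, 1]), np.array([2, 3])]
print(group_lasso_penalty(w, groups, lam=0.1))
print(prox_group_lasso(w, groups, step=0.1))  # second group -> all zeros
</pre>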

In NLP, the <math>L_1</math> and <math>L_2</math> regularizers are by far the most common choices of regularization.
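For reference, with regularization strength <math>\lambda</math>, these correspond to the penalties <math>\Omega_{L_1}(w) = \lambda \sum_i |w_i|</math> and <math>\Omega_{L_2}(w) = \lambda \sum_i w_i^2</math>: the <math>L_1</math> penalty drives individual weights exactly to zero, while the <math>L_2</math> penalty only shrinks them, and neither models relationships among features.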

== Brief description of the method ==

== Experimental Results ==

== Dataset ==

== Related Papers ==