Xuehan Xiong's project abstract
 


Team

Xuehan Xiong [xxiong@andrew.cmu.edu]

Motivation

In many NLP tasks, given only a limited amount of labeled data, semi-supervised learning can take advantage of "cheap" unlabeled data and outperform purely supervised techniques. Stacked sequential learning has likewise shown advantages over probabilistic graphical models on various NLP tasks. However, little work has been done on extending stacking into a semi-supervised framework.

Goal

1. Extend stacked sequential learning to use a semi-supervised base learner.

2. Compare this algorithm with other semi-supervised algorithms for structured prediction.

3. Compare this approach with the original supervised stacking.

4. Analyze why it performs better or worse than supervised stacking.

Techniques

First, try out some basic semi-supervised learning algorithms as the base learner of stacking, such as those of K. Nigam et al. (http://www.kamalnigam.com/papers/emcat-mlj99.pdf), Y. Grandvalet (http://www.eprints.pascal-network.org/archive/00001978/01/grandvalet05.pdf), and K. P. Bennett (http://www1.cs.columbia.edu/~dplewis/candidacy/bennett98semisupervised.pdf). Then, based on the outcomes, I will try other ways to improve the algorithm.
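As a concrete starting point, here is a minimal sketch of what semi-supervised stacking could look like. Everything here is an illustrative assumption rather than part of the proposal: scikit-learn's SelfTrainingClassifier stands in for a Nigam-style EM base learner, logistic regression is used at both levels, the helper names (add_neighbor_predictions, fit_semisup_stacking), the prediction window of +/-2, and the use of -1 to mark unlabeled tokens are all arbitrary choices.

```python
# Minimal sketch of stacking with a semi-supervised base learner.
# Assumptions (not part of the proposal): token-level feature matrix X
# (n_tokens x n_features) in document order; integer labels y with -1
# marking unlabeled tokens (scikit-learn's convention); scikit-learn's
# SelfTrainingClassifier standing in for a Nigam-style EM base learner.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.semi_supervised import SelfTrainingClassifier

def add_neighbor_predictions(X, pred, window=2):
    """Append the predicted labels of the +/-window neighbors of each
    token (and the token itself) to its feature vector; positions that
    fall outside the sequence get -1. Label ids are used directly as
    numeric features for brevity; one-hot encoding would be cleaner."""
    n = len(pred)
    cols = []
    for offset in range(-window, window + 1):
        shifted = np.full(n, -1)
        if offset < 0:
            shifted[-offset:] = pred[:offset]
        elif offset > 0:
            shifted[:-offset] = pred[offset:]
        else:
            shifted = pred
        cols.append(shifted)
    return np.hstack([X, np.column_stack(cols)])

def fit_semisup_stacking(X, y, window=2, folds=5):
    labeled = y != -1
    # Level 0: self-training base learner that also uses unlabeled tokens.
    base = SelfTrainingClassifier(LogisticRegression(max_iter=1000))
    base.fit(X, y)
    # Cross-validated predictions on the labeled tokens, so the level-1
    # learner never sees a base prediction fit on that token's own label.
    # (A plain supervised learner is cross-fit here for simplicity;
    # cross-fitting the self-training learner would be more faithful.)
    cv_pred = cross_val_predict(
        LogisticRegression(max_iter=1000), X[labeled], y[labeled], cv=folds)
    pred = base.predict(X)
    pred[labeled] = cv_pred
    # Level 1: refit on features extended with neighboring predictions.
    X_ext = add_neighbor_predictions(X, pred, window)
    meta = LogisticRegression(max_iter=1000).fit(X_ext[labeled], y[labeled])
    return base, meta
```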

Experiments

To better understand the pros and cons of my algorithm, I will run the algorithms on several different tasks, time permitting. The planned experiments are as follows:

1. I will evaluate my algorithm on the task of named entity recognition for emails, using a publicly available email dataset [4].

2. I will also run my algorithm on another popular task, web page classification. Co-training has been shown to be very effective on this task, so it would be interesting to compare my algorithm against it. This dataset [5] contains web pages from four universities, labeled as professor, student, project, or other pages.

3. The same experiments that W. Cohen ran in his stacking paper. In this case we can directly compare supervised stacking with the semi-supervised version. This depends on the availability of the data.
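To make the comparisons in experiments 1 and 2 concrete, one simple protocol (a sketch under the same assumptions as the Techniques section; evaluate is a hypothetical helper, and fit_semisup_stacking / add_neighbor_predictions are the illustrative functions defined there) is to hide all but a small fraction of the training labels, fit the stacker, and score token-level macro F1 on a held-out set, repeating at several labeled fractions for both the supervised and semi-supervised variants:

```python
# Sketch of the planned evaluation: hide most training labels, fit the
# semi-supervised stacking model, and score token-level macro F1 on a
# held-out set. fit_semisup_stacking and add_neighbor_predictions are
# the hypothetical helpers from the Techniques sketch above.
import numpy as np
from sklearn.metrics import f1_score

def evaluate(X_train, y_train, X_test, y_test, labeled_frac=0.1, seed=0):
    rng = np.random.default_rng(seed)
    y_partial = y_train.copy()
    # Hide all but labeled_frac of the training labels (-1 = unlabeled).
    y_partial[rng.random(len(y_train)) > labeled_frac] = -1
    base, meta = fit_semisup_stacking(X_train, y_partial)
    # Extend test features with the base learner's neighbor predictions.
    X_ext = add_neighbor_predictions(X_test, base.predict(X_test))
    return f1_score(y_test, meta.predict(X_ext), average="macro")
```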