Xuehan Xiong's project abstract

From Cohen Courses
Latest revision as of 01:12, 8 October 2010

Team

Xuehan Xiong. [xxiong@andrew.cmu.edu]

Motivation

In many NLP tasks, given only a limited amount of labeled data, semi-supervised learning is able to take advantage of "cheap" unlabeled data and outperform the corresponding supervised techniques. Stacked Sequential Learning has also shown an advantage over probabilistic graphical models on various NLP tasks. However, little work has been done on extending stacking into a semi-supervised framework.

Goal

1. Extend stacked sequential learning to use a semi-supervised base learner.

2. Compare this algorithm with other structural semi-supervised algorithms.

3. Compare this approach with the original stacking.

4. Analyze why it performs better or worse than supervised stacking.
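To make the object of Goal 1 concrete, stacked sequential learning can be sketched as follows: a base learner is trained, its predictions over a window of neighboring positions are appended to each example's features, and a second-stage learner is trained on the extended examples. The toy majority-class base learner and all function names below are illustrative assumptions of mine, not code from the original paper (which also cross-validates the base predictions).

```python
def train_base(examples):
    # toy "base learner": always predicts the most frequent training label
    labels = [y for _, y in examples]
    majority = max(set(labels), key=labels.count)
    return lambda x: majority

def stack_features(x, preds, i, w=1):
    # extend the original features x with the base learner's predictions
    # for a window of w positions on each side of position i
    window = [preds[j] if 0 <= j < len(preds) else "PAD"
              for j in range(i - w, i + w + 1)]
    return x + tuple(window)

def stacked_train(sequence):
    # sequence: list of (features, label) pairs for one token sequence
    base = train_base(sequence)
    preds = [base(x) for x, _ in sequence]  # cross-validated in the real method
    stacked = [(stack_features(x, preds, i), y)
               for i, (x, y) in enumerate(sequence)]
    meta = train_base(stacked)              # second-stage learner
    return base, meta
```

Goal 1 amounts to replacing `train_base` for the first stage with a semi-supervised learner that can also consume unlabeled sequences.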

Techniques

First, I will try out some basic semi-supervised learning algorithms as the base learner for stacking, such as those of K. Nigam et al. [http://www.kamalnigam.com/papers/emcat-mlj99.pdf], Y. Grandvalet [http://www.eprints.pascal-network.org/archive/00001978/01/grandvalet05.pdf], and K. P. Bennett [http://www1.cs.columbia.edu/~dplewis/candidacy/bennett98semisupervised.pdf]. Then, based on the outcomes, I will try other ways to improve the algorithm.
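One simple semi-supervised scheme of this kind is self-training: train on the labeled data, pseudo-label the unlabeled examples the model is confident about, and retrain. The sketch below is generic, in the spirit of the cited work rather than taken from it; `train_fn`, the confidence threshold, and the toy nearest-neighbor learner are my own illustrative assumptions.

```python
def self_train(train_fn, labeled, unlabeled, rounds=3, threshold=0.8):
    # train_fn(pairs) -> model; model(x) -> (label, confidence)
    data = list(labeled)
    for _ in range(rounds):
        model = train_fn(data)
        confident = [x for x in unlabeled if model(x)[1] >= threshold]
        if not confident:
            break
        # absorb confidently pseudo-labeled examples into the training set
        data.extend((x, model(x)[0]) for x in confident)
        unlabeled = [x for x in unlabeled if model(x)[1] < threshold]
    return train_fn(data)

def nearest_label(data):
    # toy 1-D nearest-neighbor "learner", used only to demonstrate the loop
    def model(x):
        nearest = min(data, key=lambda p: abs(p[0] - x))
        return nearest[1], (1.0 if abs(nearest[0] - x) <= 1 else 0.5)
    return model
```

Any learner exposing this `train_fn` interface could then be dropped in as the first stage of stacking.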

Experiments

To better understand the pros and cons of my algorithm, I will run the different algorithms over different tasks, if time allows. The experiments are as follows:

1. I will evaluate my algorithm on the task of Named Entity Recognition for emails. I will use a publicly available email dataset. [http://www.cs.cmu.edu/~einat/datasets.html]

2. I will also run my algorithm on another popular task, web page classification. Co-training has been shown to be very effective on this task, so it would be interesting to compare my algorithm with co-training. This dataset [http://www-2.cs.cmu.edu/~webkb/] contains web pages from 4 universities, labeled with whether they are professor, student, project, or other pages.
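Co-training, the baseline mentioned above, assumes each example has two conditionally independent views (for web pages, classically the page text and the anchor text of incoming links); two classifiers, one per view, label unlabeled examples for each other. The sketch below is a rough rendering of that idea; the round-robin example selection and the `train_fn` interface are simplifying assumptions of mine — the real algorithm grows each training set with the other classifier's most confident predictions.

```python
def co_train(train_fn, labeled, unlabeled, rounds=3):
    # labeled: [((view1, view2), label)]; unlabeled: [(view1, view2)]
    l1 = [(v1, y) for (v1, v2), y in labeled]  # training set for view-1 model
    l2 = [(v2, y) for (v1, v2), y in labeled]  # training set for view-2 model
    pool = list(unlabeled)
    for _ in range(rounds):
        m1, m2 = train_fn(l1), train_fn(l2)
        if not pool:
            break
        # each model labels an unlabeled example for the *other* model
        # (confidence-based selection omitted for brevity)
        v1, v2 = pool.pop(0)
        l2.append((v2, m1(v1)))
        l1.append((v1, m2(v2)))
    return train_fn(l1), train_fn(l2)

def nearest_label(data):
    # toy 1-D nearest-neighbor "learner", for demonstration only
    return lambda x: min(data, key=lambda p: abs(p[0] - x))[1]
```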

3. The same experiments that W. Cohen ran in his stacking paper. In this case we can directly compare the supervised and semi-supervised versions of stacking. This depends on the availability of the data.