== Project Proposal I ==

== Team Members ==

Xuehan Xiong [xxiong@andrew.cmu.edu]
== Goal ==

1. A revisit of boosting.

2. Extend a stacked hierarchical model recently developed for vision tasks and apply it to
== Motivation ==

1. In traditional boosting, the samples misclassified in one iteration are weighted more heavily in the next round. However, these errors are measured on the training data itself. In my algorithm, I will instead give more weight to the samples that are mislabeled during a cross-validation process, as in stacking.

2. The intuition behind the stacked hierarchical model is that predictions from one level of the hierarchy should help predict the entities in the levels above and below. Besides neighbors' predictions, parent and/or child predictions may also be "stacked" into an entity's feature vector. Unlike LDA, this model can only be used in a supervised mode.
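The reweighting idea in point 1 can be sketched as an AdaBoost-style loop in which each round's errors come from out-of-fold predictions rather than the training fit. This is a minimal illustrative sketch, not the proposed algorithm itself: the decision-stump weak learner, fold count, round count, and toy update rule are all my assumptions.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeClassifier

def cv_boost(X, y, n_rounds=10, n_folds=3, seed=0):
    """AdaBoost-style loop whose reweighting uses cross-validated errors.
    Labels y must be in {-1, +1}. A sketch, not the author's method."""
    n = len(y)
    w = np.full(n, 1.0 / n)                 # uniform initial sample weights
    learners, alphas = [], []
    for _ in range(n_rounds):
        # Out-of-fold predictions: each sample is predicted by a stump
        # that did not see it during fitting in this round.
        oof = np.empty(n)
        kf = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
        for tr, te in kf.split(X):
            stump = DecisionTreeClassifier(max_depth=1)
            stump.fit(X[tr], y[tr], sample_weight=w[tr])
            oof[te] = stump.predict(X[te])
        # Weighted error from cross-validated (not training) mistakes.
        err = np.clip(np.sum(w * (oof != y)), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        # The ensemble member itself is refit on all of the data.
        member = DecisionTreeClassifier(max_depth=1)
        member.fit(X, y, sample_weight=w)
        learners.append(member)
        alphas.append(alpha)
        # Upweight the samples misclassified under cross-validation.
        w *= np.exp(-alpha * y * oof)
        w /= w.sum()
    def predict(Xq):
        score = sum(a * m.predict(Xq) for a, m in zip(alphas, learners))
        return np.where(score >= 0, 1, -1)
    return predict
```

The only change from standard AdaBoost here is that `err` and the weight update read `oof` instead of the weak learner's own training predictions, so samples that look easy in-sample but fail under cross-validation still get upweighted.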
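The "stacking" of hierarchy predictions in point 2 amounts to feature augmentation. A minimal sketch of one direction (parent into child), assuming a two-level hierarchy with a parent-index array; the function name and array layout are mine, purely for illustration:

```python
import numpy as np

def stack_hierarchy_features(node_feats, parent_probs, parent_of):
    """node_feats: (n_nodes, d) base features at one hierarchy level;
    parent_probs: (n_parents, k) predicted class distributions from the
    level above; parent_of: (n_nodes,) index of each node's parent.
    Returns (n_nodes, d + k): each node's features with its parent's
    predictions appended. A sketch of the stacking idea only."""
    return np.hstack([node_feats, parent_probs[parent_of]])

# Three nodes, two parents: nodes 0 and 1 share parent 0, node 2 has parent 1.
feats = np.ones((3, 2))
probs = np.array([[0.9, 0.1], [0.2, 0.8]])
stacked = stack_hierarchy_features(feats, probs, np.array([0, 0, 1]))
```

The same pattern runs in the other direction by pooling (e.g. averaging) the children's predicted distributions into each parent's feature vector, alongside the neighbor predictions already used.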
== Dataset ==

== Superpowers ==

Experience with CRF and stacking in the domain of computer vision.
== What question you want to answer ==

1. I want to know whether the proposed algorithm will outperform traditional AdaBoost.

2.