Multi-Domain Learning: When Do Domains Matter?
This paper was discussed in Social Media Analysis 11-772 in Autumn 2012.
Citation
Multi-Domain Learning: When Do Domains Matter? Mahesh Joshi, Mark Dredze, William Cohen and Carolyn Rosé. EMNLP-CoNLL-2012.
Online version
Multi-Domain Learning: When Do Domains Matter?
Summary
This paper studies existing multi-domain learning approaches with respect to two questions. First, are multi-domain learning improvements simply the result of ensemble learning effects? Second, do multi-domain methods improve because they capture domain-specific class biases?
They experiment on two datasets: the Amazon product reviews dataset and the ConVote dataset.
Evaluation
They compare three existing multi-domain methods, namely Frustratingly Easy Domain Adaptation (FEDA), Multi-Domain Regularization (MDR), and Multi-Task Relationship Learning (MTRL), with access to 'true domain' labels, against both a single classifier and the same methods run with 'random domain' labels, to test whether domain information actually helps. The results show that some multi-domain learning methods do indeed benefit simply from an ensemble learning effect: randomly assigned domains yield much of the same gain as true domains.
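Of the three methods, FEDA is the simplest to illustrate: each feature vector is copied into a shared block plus one block per domain, so a linear classifier can learn both shared and domain-specific weights. Below is a minimal sketch of that augmentation (the function name and array layout are my own, not from the paper):

```python
import numpy as np

def feda_augment(x, domain, domains):
    """FEDA-style feature augmentation (Daume III, 2007).

    Returns a vector of length d * (K + 1) for K domains:
    the first block is the shared copy of x, and only the
    block for the example's own domain is non-zero after it.
    """
    d = len(x)
    out = np.zeros(d * (len(domains) + 1))
    out[:d] = x                              # shared copy, seen by all domains
    i = domains.index(domain)
    out[d * (i + 1): d * (i + 2)] = x        # domain-specific copy
    return out

x = np.array([1.0, 2.0])
aug = feda_augment(x, "books", ["books", "dvd"])
# aug == [1, 2, 1, 2, 0, 0]: shared block, then the "books" block, then zeros
```

Training any standard linear model on these augmented vectors gives the FEDA baseline; the 'random domain' variant in the paper amounts to passing a randomly chosen domain label into an augmentation like this one.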
To test whether multi-domain learning improvements come from capturing domain-specific class biases, they create four random versions of the data, each with a controlled amount of domain-specific class bias. In contrast to the previous experiments, the results here show significant improvements in almost all cases, leading to the conclusion that multi-domain learning results can be highly influenced by systematic differences in class bias across domains.
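The construction can be sketched as follows: examples are assigned to random pseudo-domains, but the assignment is skewed by class label so that each pseudo-domain over-represents one class. This is an illustrative two-domain sketch under my own parameterization, not the paper's exact procedure:

```python
import random

def biased_random_domains(labels, bias=0.8, seed=0):
    """Assign each example to one of two random pseudo-domains,
    skewed by class: with probability `bias`, a positive example
    (label 1) goes to domain 0 and a negative one to domain 1.
    bias=0.5 gives unbiased random domains (pure ensemble effect);
    bias near 1.0 makes domain identity highly predictive of class.
    """
    rng = random.Random(seed)
    domains = []
    for y in labels:
        if rng.random() < bias:
            domains.append(0 if y == 1 else 1)  # biased assignment
        else:
            domains.append(1 if y == 1 else 0)  # flipped assignment
    return domains

# With bias=1.0 the mapping is deterministic: class 1 -> domain 0
print(biased_random_domains([1, 0, 1, 0], bias=1.0))  # [0, 1, 0, 1]
```

Varying `bias` across several such splits mirrors the paper's design of comparing random-domain data with and without systematic class-bias differences.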
Discussion
Multi-domain learning has been a hot topic for many years, but the improvements over the baselines are consistently small. This paper again suggests that most of the improvement comes from ensemble learning effects and domain-specific class bias. Weighed against the additional computational cost, in many cases multi-domain learning is not worth the effort.