Boosting Multi-Task Weak Learners and Applications
Jean-Baptiste Faddoul, Boris Chidlovskii, Fabien Torre, Rémi Gilleron
Learning multiple related tasks from data simultaneously can improve predictive performance relative to learning these tasks independently. In this paper we propose a novel multi-task learning approach called MT-Adaboost: it extends the Adaboost algorithm [1] to the multi-task setting by introducing a new multi-task weak learner, a multi-task decision stump, which learns the relatedness between tasks in different regions of the input space. MT-Adaboost can learn multiple tasks without imposing the constraint of sharing the same label set and/or examples between tasks. Moreover, it relaxes the conventional hypothesis that related tasks behave similarly over the whole input space.
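To make the idea concrete, the following is a minimal sketch of boosting with a multi-task weak learner: a stump that picks one shared (feature, threshold) split but assigns a separate label per task on each side, so tasks may agree in one region of the space and disagree in another. All function names (`train_mt_stump`, `mt_adaboost`, etc.) and implementation details are illustrative assumptions, not the authors' actual code.

```python
import math

def train_mt_stump(X, y, task, w):
    """Fit a multi-task stump: one shared (feature, threshold) split,
    with a separate +/-1 label per (task, side). Illustrative sketch."""
    n, d = len(X), len(X[0])
    tasks = sorted(set(task))
    best = None
    for f in range(d):
        for thr in sorted(set(x[f] for x in X)):
            err = 0.0
            side_label = {}
            for t in tasks:
                for side in (0, 1):
                    # weighted mass of each class in this (task, side) region
                    pos = sum(w[i] for i in range(n)
                              if task[i] == t and int(X[i][f] > thr) == side and y[i] == 1)
                    neg = sum(w[i] for i in range(n)
                              if task[i] == t and int(X[i][f] > thr) == side and y[i] == -1)
                    side_label[(t, side)] = 1 if pos >= neg else -1
                    err += min(pos, neg)  # mass the majority label gets wrong
            if best is None or err < best[0]:
                best = (err, f, thr, dict(side_label))
    return best  # (weighted error, feature, threshold, per-task side labels)

def predict_stump(stump, x, t):
    _, f, thr, side_label = stump
    return side_label[(t, int(x[f] > thr))]

def mt_adaboost(X, y, task, rounds=10):
    """Adaboost loop over multi-task stumps; one weight per example,
    shared across tasks so hard examples of any task get boosted."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        stump = train_mt_stump(X, y, task, w)
        err = max(stump[0], 1e-10)
        if err >= 0.5:
            break
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, stump))
        # standard Adaboost reweighting: increase misclassified examples
        w = [wi * math.exp(-alpha * y[i] * predict_stump(stump, X[i], task[i]))
             for i, wi in enumerate(w)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x, t):
    score = sum(a * predict_stump(s, x, t) for a, s in ensemble)
    return 1 if score >= 0 else -1
```

Note how a single stump can classify two tasks with opposite labelings: the shared split captures the common structure, while the per-task side labels encode where the tasks diverge.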
Our approach is analyzed theoretically and tested empirically, showing that its iterative algorithm converges to an accurate model that outperforms single-task learning models.
9th International Conference on Machine Learning and Applications, Washington, US, 12-14 December 2010