Adaptive weight assignment scheme for multi-task learning

Aminul Huq, Mst. Tasnim Pervin

Abstract


Deep learning based models are used regularly in many applications nowadays. Generally, we train a single model on a single task, but under multi-task learning (MTL) we can train a single model on multiple tasks. This offers several benefits: shorter training time, a single model serving multiple tasks, reduced overfitting, and improved performance. To train a model in a multi-task learning setting, we must sum the loss values from the different tasks. Vanilla multi-task learning assigns equal weights, but since not all tasks are equally difficult, more weight should be allocated to the harder tasks; moreover, improper weight assignment degrades the model's performance. In this paper we propose a simple weight assignment scheme that improves the performance of the model and puts more emphasis on difficult tasks. We tested our method's performance on both image and textual data, comparing it against two popular weight assignment methods. Empirical results suggest that our proposed method achieves better results than these popular methods.
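As a toy illustration of the weighting idea described above (a sketch only, not the exact scheme proposed in the paper), one can weight each task's loss by its normalized magnitude, so that tasks with currently higher loss, i.e. harder tasks, receive more emphasis in the combined loss:

```python
def adaptive_weights(task_losses):
    """Weight each task proportionally to its current loss value.

    Illustrative scheme: harder tasks (higher loss) get larger weights.
    Not the exact method from the paper.
    """
    total = sum(task_losses)
    n = len(task_losses)
    if total == 0:
        return [1.0 / n] * n  # fall back to equal (vanilla MTL) weights
    return [loss / total for loss in task_losses]


def combined_loss(task_losses):
    """Weighted sum of per-task losses under the adaptive weights."""
    weights = adaptive_weights(task_losses)
    return sum(w * loss for w, loss in zip(weights, task_losses))


# Example: the second task is harder (loss 3.0) and gets the larger weight.
losses = [1.0, 3.0]
print(adaptive_weights(losses))  # [0.25, 0.75]
print(combined_loss(losses))     # 0.25*1.0 + 0.75*3.0 = 2.5
```

In a real training loop the weights would be recomputed each step (or epoch) from the latest per-task losses, so the emphasis shifts dynamically as tasks converge at different rates.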

Keywords


Adaptive weight assignment; Deep learning; Dynamic weight average; Multi-task learning; Uncertainty weights

DOI: http://doi.org/10.11591/ijai.v11.i1.pp173-178




This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.