Publication | Closed Access
End-To-End Multi-Task Learning With Attention
Citations: 1.1K
References: 20
Year: 2019
Venue: CVPR 2019
Few-shot Learning, Structured Prediction, Convolutional Neural Network, Engineering, Machine Learning, Multimodal Learning, Multi-task Attention Network, Natural Language Processing, Image Analysis, Data Science, Pattern Recognition, Multi-task Learning, Robot Learning, Video Transformer, Machine Translation, End-to-end Multi-task Learning, Machine Vision, Global Feature Pool, Vision Language Model, Computer Science, Deep Learning, Computer Vision
The authors propose a novel multi-task learning architecture that learns task-specific feature-level attention. Their Multi-Task Attention Network (MTAN) pairs a single shared backbone, which holds a global feature pool, with a soft-attention module per task, so each task learns its own features while still sharing features with the others. The network is end-to-end trainable, parameter efficient, and can be built on top of any feed-forward architecture. MTAN achieves state-of-the-art performance on image-to-image prediction and image classification benchmarks, and is less sensitive to the choice of multi-task loss weighting scheme than existing methods.
We propose a novel multi-task learning architecture, which allows learning of task-specific feature-level attention. Our design, the Multi-Task Attention Network (MTAN), consists of a single shared network containing a global feature pool, together with a soft-attention module for each task. These modules allow for learning of task-specific features from the global features, whilst simultaneously allowing for features to be shared across different tasks. The architecture can be trained end-to-end and can be built upon any feed-forward neural network, is simple to implement, and is parameter efficient. We evaluate our approach on a variety of datasets, across both image-to-image predictions and image classification tasks. We show that our architecture is state-of-the-art in multi-task learning compared to existing methods, and is also less sensitive to various weighting schemes in the multi-task loss function. Code is available at https://github.com/lorenmt/mtan.
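The core idea of the abstract can be sketched in a few lines of PyTorch: a shared backbone produces a global feature pool, and each task applies its own learned soft-attention mask (a sigmoid gate) to those shared features before its prediction head. This is a minimal illustrative sketch, not the authors' implementation (see the linked repository for that); the class names, channel sizes, and two-task setup here are assumptions for demonstration.

```python
import torch
import torch.nn as nn

class TaskAttention(nn.Module):
    """Per-task soft-attention module: learns a [0, 1] mask over shared features."""
    def __init__(self, channels):
        super().__init__()
        self.att = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
            nn.Sigmoid(),  # soft mask in [0, 1]
        )

    def forward(self, shared_features):
        # Element-wise gating selects task-specific features from the shared pool.
        return self.att(shared_features) * shared_features

class TinyMTAN(nn.Module):
    """Toy version: shared backbone (global feature pool) + one attention module
    and one prediction head per task. All sizes are illustrative."""
    def __init__(self, in_ch=3, feat_ch=16, num_tasks=2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.attentions = nn.ModuleList(TaskAttention(feat_ch) for _ in range(num_tasks))
        self.heads = nn.ModuleList(nn.Conv2d(feat_ch, 1, kernel_size=1) for _ in range(num_tasks))

    def forward(self, x):
        shared = self.backbone(x)  # computed once, shared across tasks
        return [head(att(shared)) for att, head in zip(self.attentions, self.heads)]

x = torch.randn(1, 3, 8, 8)
outs = TinyMTAN()(x)  # one dense prediction map per task
```

Because every component is an ordinary feed-forward module, the whole model trains end-to-end with a single (possibly weighted) sum of per-task losses, and the only per-task parameters are the small attention modules and heads.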