ARBERT & MARBERT: Deep Bidirectional Transformers for Arabic

TLDR

Pre-trained language models are central to NLP, yet multilingual variants are costly at inference time and limited by the size and diversity of the non-English data in their pre-training. The authors introduce ARBERT and MARBERT, two deep bidirectional transformer models, to address these limitations for diverse Arabic varieties, and present the ARLUE benchmark, comprising 42 datasets across six task clusters, for standardized evaluation. Fine-tuned on ARLUE, the models achieve state-of-the-art results on 37 of 48 classification tasks, with the best model attaining the top ARLUE score of 77.40 and outperforming XLM-R Large. Both models and the benchmark are publicly released.
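
For concreteness, the sketch below shows how a benchmark-level score of this kind can be computed, assuming the ARLUE score macro-averages one metric score per task cluster. The cluster names and values are illustrative placeholders, not results from the paper.

    # Minimal sketch of an ARLUE-style aggregate: macro-average one metric
    # score (e.g., macro-F1 x 100) per task cluster. Cluster names and
    # values are illustrative placeholders, not the paper's results.
    cluster_scores = {
        "cluster_1": 80.0,
        "cluster_2": 75.0,
        "cluster_3": 78.5,
        "cluster_4": 74.0,
        "cluster_5": 79.0,
        "cluster_6": 77.9,
    }

    def arlue_style_score(scores):
        """Macro-average of per-cluster scores."""
        return sum(scores.values()) / len(scores)

    print(f"Aggregate score: {arlue_style_score(cluster_scores):.2f}")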

Abstract

Pre-trained language models (LMs) are currently integral to many natural language processing systems. Although multilingual LMs were also introduced to serve many languages, these have limitations such as high inference cost and the limited size and diversity of the non-English data involved in their pre-training. We remedy these issues for a collection of diverse Arabic varieties by introducing two powerful deep bidirectional transformer-based models, ARBERT and MARBERT. To evaluate our models, we also introduce ARLUE, a new benchmark for multi-dialectal Arabic language understanding evaluation. ARLUE is built using 42 datasets targeting six different task clusters, allowing us to offer a series of standardized experiments under rich conditions. When fine-tuned on ARLUE, our models collectively achieve new state-of-the-art results across the majority of tasks (37 out of 48 classification tasks, on the 42 datasets). Our best model achieves the highest ARLUE score (77.40) across all six task clusters, outperforming all other models including XLM-R Large (~3.4× larger). Our models are publicly available at this https URL and ARLUE will be released through the same repository.
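
Since the models are released as pre-trained checkpoints, fine-tuning on an ARLUE-style classification task follows the standard transformer recipe. The sketch below shows one plausible setup with the Hugging Face transformers library; the checkpoint ID "UBC-NLP/MARBERT", the binary label set, and the toy training data are assumptions for illustration, not the paper's exact configuration.

    # Sketch: fine-tuning a pre-trained Arabic BERT-style checkpoint for a
    # binary classification task. The Hub ID "UBC-NLP/MARBERT" is assumed;
    # verify the actual ID in the authors' repository. Data is toy data.
    from datasets import Dataset
    from transformers import (
        AutoModelForSequenceClassification,
        AutoTokenizer,
        Trainer,
        TrainingArguments,
    )

    MODEL_ID = "UBC-NLP/MARBERT"  # assumed checkpoint name

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSequenceClassification.from_pretrained(
        MODEL_ID, num_labels=2
    )

    # Toy sentiment-style examples standing in for one ARLUE task.
    train = Dataset.from_dict({
        "text": ["الفيلم رائع", "الخدمة سيئة"],  # "the movie is great" / "the service is bad"
        "label": [1, 0],
    })

    def tokenize(batch):
        return tokenizer(
            batch["text"], truncation=True, padding="max_length", max_length=128
        )

    train = train.map(tokenize, batched=True)

    args = TrainingArguments(
        output_dir="marbert-finetuned",
        num_train_epochs=3,
        per_device_train_batch_size=16,
        learning_rate=2e-5,
    )

    Trainer(model=model, args=args, train_dataset=train).train()

In the same spirit, each ARLUE task cluster would get its own fine-tuned classification head on top of the shared pre-trained encoder.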
