M6: A Chinese Multimodal Pretrainer
Citations: 48 | References: 34 | Year: 2021
Topics: Chinese Multimodal Pretrainer, Engineering, Machine Learning, Multimodal Learning, Motor Control, Single Modality, Speech Recognition, Natural Language Processing, Multimodal LLM, Image Analysis, Text-to-image Retrieval, Data Science, Robot Learning, Machine Translation, Health Sciences, Multiple Modalities, Vision Language Model, Multimodal Translation, Deep Learning, Computer Vision, Largest Dataset, Speech Processing
In this work, we construct the largest dataset for multimodal pretraining in Chinese, consisting of over 1.9 TB of images and 292 GB of text covering a wide range of domains. We propose a cross-modal pretraining method called M6, referring to Multi-Modality to Multi-Modality Multitask Mega-transformer, for unified pretraining on data of a single modality and of multiple modalities. We scale the model up to 10 billion and 100 billion parameters, building the largest pretrained model in Chinese. We apply the model to a series of downstream applications and demonstrate its outstanding performance in comparison with strong baselines. Furthermore, we specifically design a downstream task of text-guided image generation, and show that the finetuned M6 can create high-resolution images with abundant details.
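To make the idea of unified pretraining on single- and multi-modality data more concrete, the sketch below shows one common way such a model can be organized: image-patch features and text tokens are projected into a shared embedding space and processed by a single transformer, so the same network handles text-only and image-plus-text inputs. This is an illustrative assumption, not M6's actual implementation; all names, dimensions, and the choice of pre-extracted patch features are hypothetical.

```python
import torch
import torch.nn as nn

class UnifiedMultimodalEncoder(nn.Module):
    """Toy transformer that accepts text tokens alone (single modality)
    or image-patch features plus text tokens (multiple modalities) in
    one shared sequence. Illustration only; not the M6 code."""

    def __init__(self, vocab_size=32000, d_model=256, n_heads=4,
                 n_layers=2, patch_dim=2048):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        # Project pre-extracted image-patch features into the text space
        # (assumed setup; the real model's featurization may differ).
        self.patch_proj = nn.Linear(patch_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, text_ids, patch_feats=None):
        x = self.token_emb(text_ids)                 # (B, T, d)
        if patch_feats is not None:                  # multimodal case
            img = self.patch_proj(patch_feats)       # (B, P, d)
            x = torch.cat([img, x], dim=1)           # one shared sequence
        return self.encoder(x)

model = UnifiedMultimodalEncoder()
text = torch.randint(0, 32000, (2, 16))      # batch of token ids
patches = torch.randn(2, 8, 2048)            # 8 patch features per image
print(model(text).shape)                     # text-only: (2, 16, 256)
print(model(text, patches).shape)            # multimodal: (2, 24, 256)
```

The key design point this illustrates is that one set of transformer weights serves both input regimes, which is what allows pretraining tasks over text-only and image-text data to share a single model.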