Concepedia

TLDR

Foundation models are large, pre‑trained, task‑agnostic systems that can be fine‑tuned or used in few‑shot settings, yet no such models exist for geospatial artificial intelligence (GeoAI). This study investigates the promises and challenges of creating multimodal foundation models for GeoAI and discusses the unique risks associated with their development. The authors evaluate existing foundation models on seven geospatial tasks spanning semantics, health, urban geography, and remote sensing, and propose a multimodal model that aligns diverse geospatial data types. They find that on text‑only tasks, large language models can outperform task‑specific models in zero‑shot or few‑shot settings, but on multimodal tasks they lag behind, highlighting multimodality as a key challenge.

Abstract

Large pre-trained models, also known as foundation models (FMs), are trained in a task-agnostic manner on large-scale data and can be adapted to a wide range of downstream tasks by fine-tuning, few-shot learning, or even zero-shot learning. Despite their successes in language and vision tasks, we have yet to see an attempt to develop foundation models for geospatial artificial intelligence (GeoAI). In this work, we explore the promises and challenges of developing multimodal foundation models for GeoAI. We first investigate the potential of many existing FMs by testing their performance on seven tasks across multiple geospatial subdomains, including Geospatial Semantics, Health Geography, Urban Geography, and Remote Sensing. Our results indicate that on several geospatial tasks that involve only the text modality, such as toponym recognition, location description recognition, and US state-level/county-level dementia time series forecasting, these task-agnostic LLMs can outperform task-specific, fully supervised models in a zero-shot or few-shot learning setting. However, on other geospatial tasks, especially those that involve multiple data modalities (e.g., POI-based urban function classification, street view image-based urban noise intensity classification, and remote sensing image scene classification), existing foundation models still underperform task-specific models. Based on these observations, we propose that one of the major challenges of developing an FM for GeoAI is addressing the multimodal nature of geospatial tasks. After discussing the distinct challenges of each geospatial data modality, we suggest the possibility of a multimodal foundation model that can reason over various types of geospatial data through geospatial alignments. We conclude by discussing the unique risks and challenges of developing such a model for GeoAI.