Publication | Open Access
Semantic-Enhanced Graph Convolutional Neural Networks for Multi-Scale Urban Functional-Feature Identification Based on Human Mobility
Citations: 11 | References: 60 | Year: 2024
Keywords: Convolutional Neural Network · Graph Representation Learning · Machine Learning · Engineering · Smart City · Spatiotemporal Data Fusion · Graph Signal Processing · Urban Science · Social Sciences · Representation Learning · Spatial Network · Data Science · Precise Identification · Global Urban Planning · Human Mobility · Mobility Data · Spatiotemporal Diagnostics · Feature Learning · Deep Learning Method · Geography · Urban Planning · Deep Learning · Urban Geography · Urban Spatial Units · Graph Neural Network · Big Spatiotemporal Data Analytics
Precise identification of the functional features of urban spatial units is a precondition for urban planning and policy-making. However, inferring unknown attributes of urban spatial units by mining spatial-interaction data remains a challenge in geographic information science. Although neural-network approaches have been widely applied in this field, urban dynamics, spatial semantics, and their relationship with urban functional features have not been deeply examined. To this end, we proposed semantic-enhanced graph convolutional neural networks (GCNNs) to facilitate the multi-scale embedding of urban spatial units, on which basis urban land use is identified by leveraging characteristics of human mobility extracted from the largest mobile phone datasets to date. Given the heterogeneity of multi-modal spatial data, we introduced a combination of a systematic data-alignment method and a generative feature-fusion method for the robust construction of heterogeneous graphs, providing an adaptive solution that improves GCNN performance in node-classification tasks. For the first time, our work explicitly examined the scale effect on GCNN backbones. The results show that large-scale tasks are more sensitive to the directionality of spatial interaction, whereas small-scale tasks are more sensitive to the adjacency of spatial interaction. Quantitative experiments conducted in Shenzhen demonstrate the superior performance of our proposed framework compared to state-of-the-art methods: the best accuracy is achieved by the inductive GraphSAGE model at the 250 m scale, exceeding the baseline by 25.4%. Furthermore, we explained the role of spatial-interaction factors in the identification of urban land use through deep learning methods.
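To illustrate the kind of inductive model the abstract credits with the best accuracy, the sketch below implements one GraphSAGE-style layer with mean aggregation over a toy spatial-interaction graph. The graph, feature dimensions, and weight matrices are illustrative assumptions for demonstration only, not data or architecture details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed, not from the paper): 5 spatial units with
# 4-dimensional mobility-derived features, embedded into 3 dimensions.
num_nodes, in_dim, out_dim = 5, 4, 3
features = rng.normal(size=(num_nodes, in_dim))

# Undirected adjacency list standing in for spatial-interaction links.
adj = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2, 4], 4: [3]}

W_self = rng.normal(size=(in_dim, out_dim))
W_neigh = rng.normal(size=(in_dim, out_dim))

def sage_layer(h, adj, W_self, W_neigh):
    """One GraphSAGE layer: combine each node's own features with the
    mean of its neighbours' features, then apply a ReLU nonlinearity."""
    out = np.zeros((h.shape[0], W_self.shape[1]))
    for v in range(h.shape[0]):
        if adj[v]:
            neigh = h[adj[v]].mean(axis=0)
        else:
            neigh = np.zeros(h.shape[1])
        out[v] = h[v] @ W_self + neigh @ W_neigh
    return np.maximum(out, 0.0)

# Stacking such layers and adding a softmax head yields node
# classification, i.e. land-use labels per spatial unit.
embeddings = sage_layer(features, adj, W_self, W_neigh)
print(embeddings.shape)  # (5, 3)
```

Because the aggregation depends only on local neighbourhoods, a trained model of this form can embed spatial units unseen during training, which is what makes GraphSAGE inductive rather than transductive.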