Real-time human mobility modeling with multi-view learning

Desheng Zhang, Tian He, Fan Zhang

Research output: Contribution to journal › Article › peer-review

15 Scopus citations

Abstract

Real-time human mobility modeling is essential to various urban applications. To model such human mobility, numerous data-driven techniques have been proposed. However, existing techniques are mostly driven by data from a single view, for example, a transportation view or a cellphone view, which leads to overfitting of these single-view models. To address this issue, we propose a human mobility modeling technique based on a generic multi-view learning framework called coMobile. In coMobile, we first improve the performance of single-view models based on tensor decomposition with correlated contexts, and then we integrate these improved single-view models together for multi-view learning to iteratively obtain mutually reinforced knowledge for real-time human mobility at urban scale. We implement coMobile based on an extremely large dataset in the Chinese city of Shenzhen, including data about taxi, bus, and subway passengers along with cellphone users, capturing more than 27 thousand vehicles and 10 million urban residents. The evaluation results show that our approach outperforms a single-view model by 51% on average. More importantly, we design a novel application where urban taxis are dispatched based on unaccounted mobility demand inferred by coMobile.
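The paper itself is not reproduced on this page, so the details of coMobile's decomposition are not given here. As a rough illustration of the single-view step the abstract describes, the sketch below fits a CP (CANDECOMP/PARAFAC) decomposition to a 3-way tensor by alternating least squares, using only numpy. The tensor layout (region × time slot × transport mode) and all function names are assumptions for this example, not the authors' actual model.

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding in C order: move the given axis to the front, flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    # Column-wise Khatri-Rao product of (I x R) and (J x R) -> (I*J x R).
    r = A.shape[1]
    return (A[:, None, :] * B[None, :, :]).reshape(-1, r)

def cp_als(T, rank, iters=100, seed=0):
    # Fit a rank-R CP decomposition of a 3-way tensor by alternating least squares.
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((d, rank)) for d in T.shape]
    for _ in range(iters):
        for mode in range(3):
            others = [factors[m] for m in range(3) if m != mode]
            kr = khatri_rao(others[0], others[1])  # design matrix for this mode
            gram = (others[0].T @ others[0]) * (others[1].T @ others[1])
            factors[mode] = unfold(T, mode) @ kr @ np.linalg.pinv(gram)
    return factors

def reconstruct(factors):
    # Rebuild the tensor from its three factor matrices.
    A, B, C = factors
    return np.einsum('ir,jr,kr->ijk', A, B, C)

# Hypothetical mobility tensor: 6 regions x 8 time slots x 3 transport modes,
# built here as an exact rank-2 tensor so ALS can recover it.
rng = np.random.default_rng(1)
true = reconstruct([rng.random((6, 2)), rng.random((8, 2)), rng.random((3, 2))])
factors = cp_als(true, rank=2)
rel_err = np.linalg.norm(reconstruct(factors) - true) / np.linalg.norm(true)
```

In a mobility setting, the recovered low-rank structure can be used to smooth or impute sparse single-view observations before the views are fused; how coMobile incorporates "correlated contexts" into the factorization is specified in the paper, not in this generic sketch.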

Original language: English (US)
Article number: 22
Journal: ACM Transactions on Intelligent Systems and Technology
Volume: 9
Issue number: 3
DOIs
State: Published - Dec 2017

Bibliographical note

Publisher Copyright:
© 2017 ACM.

Keywords

  • Mobility model
  • Model integration
  • Smart cities
