Many real-world phenomena are observed at multiple resolutions. Predictive models for these phenomena typically treat each resolution separately. This approach can be limiting in applications where predictions are desired at fine resolutions but training data at those resolutions is scarce. In this paper, we propose classification algorithms that leverage supervision from coarser resolutions to help train models on finer resolutions. The different resolutions are modeled as different views of the data in a multi-view framework that exploits the complementarity of features across views to improve models on both views. Unlike traditional multi-view learning problems, the key challenge in our setting is that there is no one-to-one correspondence between instances across views, which requires explicitly modeling the correspondence of instances across resolutions. We propose to use the features of instances at different resolutions to learn this correspondence using an attention mechanism. Experiments on the real-world application of mapping urban areas using satellite observations and on sentiment classification on text data show the effectiveness of the proposed methods.
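The attention-based soft correspondence described in the abstract can be sketched minimally as follows. This is an illustrative assumption, not the paper's actual implementation: fine-resolution instance features score every coarse-resolution instance through a learnable bilinear form, and a softmax turns the scores into soft alignment weights that let fine instances borrow coarse-level supervision. All names, shapes, and the bilinear scoring choice here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)

# Hypothetical setup: 6 fine-resolution instances and 2 coarse-resolution
# instances, each with d-dimensional features and no given 1-to-1 mapping.
d = 4
fine = rng.normal(size=(6, d))     # fine-view instance features
coarse = rng.normal(size=(2, d))   # coarse-view instance features

# Learnable bilinear scoring matrix (randomly initialized in this sketch;
# in training it would be updated by gradient descent).
W = rng.normal(size=(d, d))

# Attention scores: each fine instance scores every coarse instance.
scores = fine @ W @ coarse.T           # shape (6, 2)
weights = softmax(scores, axis=1)      # soft correspondence; rows sum to 1

# Each fine instance can now receive supervision from coarse-level labels,
# weighted by the learned soft correspondence.
coarse_labels = np.array([1.0, 0.0])   # e.g. coarse-level class scores
fine_supervision = weights @ coarse_labels  # shape (6,)
```

In a full model, `W` (or an equivalent attention parameterization) would be trained jointly with the per-view classifiers so that the correspondence and the predictions improve together.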
|Original language||English (US)|
|Title of host publication||Proceedings of the 2020 SIAM International Conference on Data Mining, SDM 2020|
|Editors||Carlotta Demeniconi, Nitesh Chawla|
|Publisher||Society for Industrial and Applied Mathematics Publications|
|Number of pages||9|
|State||Published - 2020|
|Event||2020 SIAM International Conference on Data Mining, SDM 2020 - Cincinnati, United States|
Duration: May 7 2020 → May 9 2020
|Name||Proceedings of the 2020 SIAM International Conference on Data Mining, SDM 2020|
|Conference||2020 SIAM International Conference on Data Mining, SDM 2020|
|Period||5/7/20 → 5/9/20|
Bibliographical note (Funding Information):
This research was supported by the National Science Foundation under grant IIS-1838159. Access to computing facilities was provided by the University of Minnesota Supercomputing Institute.
Copyright © 2020 by SIAM.