Learning to Minimize the Remainder in Supervised Learning

Yan Luo, Yongkang Wong, Mohan S. Kankanhalli, Catherine Zhao

Research output: Contribution to journal › Article › peer-review

Abstract

The learning process of deep learning methods usually updates the model's parameters over multiple iterations. Each iteration can be viewed as a first-order approximation via Taylor series expansion, and the remainder, which consists of the higher-order terms, is usually ignored in the learning process for simplicity. This learning scheme empowers various multimedia applications, such as image retrieval, recommendation systems, and video search. Generally, multimedia data (e.g., images) are semantics-rich and high-dimensional, so the remainders of the approximations are possibly non-zero. In this work, we consider the remainder to be informative and study how it affects the learning process. To this end, we propose a new learning approach, namely gradient adjustment learning (GAL), which leverages the knowledge learned from past training iterations to adjust the vanilla gradients, such that the remainders are minimized and the approximations are improved. The proposed GAL is model- and optimizer-agnostic, and is easy to adapt to the standard learning framework. It is evaluated on three tasks, i.e., image classification, object detection, and regression, with state-of-the-art models and optimizers. The experiments show that the proposed GAL consistently enhances the evaluated models, and the ablation studies validate various aspects of the proposed GAL. The code is available at https://github.com/luoyan407/gradient_adjustment.git.
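A minimal sketch of the gradient-adjustment idea described in the abstract, assuming a PyTorch-style training loop. The `adjuster` callable is hypothetical and stands in for the learned adjustment; it is not the authors' implementation, which is available in the linked repository.

    # Taylor view of one update: L(theta - lr*g) = L(theta) - lr*g^T g + R(theta),
    # where R collects the higher-order terms. The sketch below applies an
    # (assumed) adjustment to the vanilla gradients before the optimizer step,
    # in the spirit of minimizing that remainder.
    import torch

    def adjusted_step(model, loss_fn, inputs, targets, optimizer, adjuster=None):
        """One training step with an optional gradient adjustment.

        `adjuster` is a hypothetical callable mapping a gradient tensor to an
        adjusted gradient; the official GAL formulation is given in the paper.
        """
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()  # compute the vanilla gradients g
        if adjuster is not None:
            with torch.no_grad():
                for p in model.parameters():
                    if p.grad is not None:
                        p.grad.copy_(adjuster(p.grad))  # g <- adjusted g
        optimizer.step()  # first-order update using the adjusted gradients
        return loss.item()

Because the adjustment is applied to the gradients rather than to a specific model or optimizer, a wrapper of this shape is model- and optimizer-agnostic, which matches the property claimed in the abstract.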

Original language: English (US)
Pages (from-to): 1738-1748
Number of pages: 11
Journal: IEEE Transactions on Multimedia
Volume: 25
DOIs
State: Published - 2023

Bibliographical note

Publisher Copyright:
IEEE

Keywords

  • Deep learning
  • gradient adjustment
  • remainder
  • supervised learning
