Learning to Minimize the Remainder in Supervised Learning

Yan Luo, Yongkang Wong, Mohan S. Kankanhalli, Catherine Zhao

Research output: Contribution to journal › Article › peer-review


The learning process of deep learning methods usually updates the model's parameters over multiple iterations. Each iteration can be viewed as a first-order approximation of the Taylor series expansion, while the remainder, which consists of the higher-order terms, is usually ignored in the learning process for simplicity. This learning scheme empowers various multimedia applications, such as image retrieval, recommendation systems, and video search. Since multimedia data (e.g., images) are generally semantics-rich and high-dimensional, the remainders of the approximations are possibly non-zero. In this work, we consider the remainder to be informative and study how it affects the learning process. To this end, we propose a new learning approach, namely gradient adjustment learning (GAL), which leverages the knowledge learned from past training iterations to adjust the vanilla gradients, such that the remainders are minimized and the approximations are improved. The proposed GAL is model- and optimizer-agnostic and is easy to integrate into the standard learning framework. It is evaluated on three tasks, i.e., image classification, object detection, and regression, with state-of-the-art models and optimizers. The experiments show that the proposed GAL consistently enhances the evaluated models, while the ablation studies validate various aspects of the proposed GAL. The code is available at \url{https://github.com/luoyan407/gradient_adjustment.git}.
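To make the idea concrete, the following is a minimal sketch of adjusting a vanilla gradient with knowledge accumulated from past iterations before the optimizer step. The adjustment rule used here (an exponential average of past gradients, weighted by `alpha`) and the toy linear-regression task are illustrative assumptions for exposition, not the paper's exact GAL formulation.

```python
import numpy as np

def loss_and_grad(w, X, y):
    """Mean-squared-error loss and its gradient for linear regression."""
    residual = X @ w - y
    return 0.5 * np.mean(residual ** 2), X.T @ residual / len(y)

def train(X, y, lr=0.1, alpha=0.5, steps=200):
    """Gradient descent where each vanilla gradient is adjusted by a
    term derived from past iterations (hypothetical adjustment rule)."""
    w = np.zeros(X.shape[1])
    history = np.zeros_like(w)              # knowledge from past iterations
    for _ in range(steps):
        _, g = loss_and_grad(w, X, y)       # vanilla gradient
        history = 0.9 * history + 0.1 * g   # accumulate past gradients
        g_adj = g + alpha * history         # adjusted gradient
        w -= lr * g_adj                     # standard optimizer step
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w_hat = train(X, y)
```

Because the adjustment only modifies the gradient handed to the update rule, the same wrapper applies unchanged to any model and any optimizer, which is the sense in which such a scheme is model- and optimizer-agnostic.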

Original language: English (US)
Journal: IEEE Transactions on Multimedia
State: Accepted/In press - 2022


Keywords

  • Computational modeling
  • Object detection
  • Optimization methods
  • Standards
  • Stochastic processes
  • Supervised learning
  • Task analysis
  • Training
  • deep learning
  • gradient adjustment
  • remainder


