TY - JOUR
T1 - Learning to Minimize the Remainder in Supervised Learning
AU - Luo, Yan
AU - Wong, Yongkang
AU - Kankanhalli, Mohan S.
AU - Zhao, Catherine
N1 - Publisher Copyright: IEEE
PY - 2023
Y1 - 2023
N2 - The learning process of deep learning methods usually updates the model's parameters over multiple iterations. Each iteration can be viewed as a first-order approximation of a Taylor series expansion. The remainder, which consists of the higher-order terms, is usually ignored in the learning process for simplicity. This learning scheme empowers various multimedia-based applications, such as image retrieval, recommendation systems, and video search. Generally, multimedia data (e.g., images) are semantics-rich and high-dimensional, hence the remainders of the approximations are possibly non-zero. In this work, we consider that the remainder is informative and study how it affects the learning process. To this end, we propose a new learning approach, namely gradient adjustment learning (GAL), which leverages the knowledge learned from past training iterations to adjust vanilla gradients, such that the remainders are minimized and the approximations are improved. The proposed GAL is model- and optimizer-agnostic and is easy to adapt to the standard learning framework. It is evaluated on three tasks, i.e., image classification, object detection, and regression, with state-of-the-art models and optimizers. The experiments show that the proposed GAL consistently enhances the evaluated models, and the ablation studies validate various aspects of the proposed GAL. The code is available at https://github.com/luoyan407/gradient_adjustment.git.
AB - The learning process of deep learning methods usually updates the model's parameters over multiple iterations. Each iteration can be viewed as a first-order approximation of a Taylor series expansion. The remainder, which consists of the higher-order terms, is usually ignored in the learning process for simplicity. This learning scheme empowers various multimedia-based applications, such as image retrieval, recommendation systems, and video search. Generally, multimedia data (e.g., images) are semantics-rich and high-dimensional, hence the remainders of the approximations are possibly non-zero. In this work, we consider that the remainder is informative and study how it affects the learning process. To this end, we propose a new learning approach, namely gradient adjustment learning (GAL), which leverages the knowledge learned from past training iterations to adjust vanilla gradients, such that the remainders are minimized and the approximations are improved. The proposed GAL is model- and optimizer-agnostic and is easy to adapt to the standard learning framework. It is evaluated on three tasks, i.e., image classification, object detection, and regression, with state-of-the-art models and optimizers. The experiments show that the proposed GAL consistently enhances the evaluated models, and the ablation studies validate various aspects of the proposed GAL. The code is available at https://github.com/luoyan407/gradient_adjustment.git.
KW - Deep learning
KW - gradient adjustment
KW - remainder
KW - supervised learning
UR - http://www.scopus.com/inward/record.url?scp=85126308772&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85126308772&partnerID=8YFLogxK
U2 - 10.1109/TMM.2022.3158066
DO - 10.1109/TMM.2022.3158066
M3 - Article
AN - SCOPUS:85126308772
SN - 1520-9210
VL - 25
SP - 1738
EP - 1748
JO - IEEE Transactions on Multimedia
JF - IEEE Transactions on Multimedia
ER -