Training Artificial Neural Networks by Generalized Likelihood Ratio Method: An Effective Way to Improve Robustness

Li Xiao, Yijie Peng, L. Jeff Hong, Zewu Ke, Shuhuai Yang

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)


Abstract

In this work, we propose a generalized likelihood ratio method that trains artificial neural networks with greater flexibility: (a) it can train networks with discrete activation and loss functions, which the traditional backpropagation method cannot handle; (b) it incorporates neuronal noise during both training and prediction, which increases the model's degrees of freedom and makes it more adaptable to real environments, especially when environmental noise is present. Numerical results show that the robustness of various artificial neural networks trained by the new method improves significantly when the input data are corrupted by both natural noise and adversarial attacks.
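The core idea of likelihood-ratio-style training can be illustrated with a small score-function gradient estimator. The sketch below is not the paper's generalized estimator: the single threshold neuron, Gaussian pre-activation noise with scale sigma, squared-error loss, and Monte Carlo sample count are all illustrative assumptions. It only shows how injected neuronal noise lets one estimate a gradient for a discrete activation, where backpropagation would see zero gradient almost everywhere.

```python
# Minimal score-function (likelihood ratio) gradient sketch for one noisy neuron
# with a discrete threshold activation. Illustrative only, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

def lr_gradient_estimate(w, b, x, target, sigma=0.5, n_samples=1000):
    """Monte Carlo estimate of d E[loss] / d(w, b).

    The neuron outputs y = 1[w.x + b + eps > 0] with eps ~ N(0, sigma^2),
    and loss = (y - target)^2. The pre-activation density depends on (w, b)
    through its mean, so d log p / d(mean) = eps / sigma^2.
    """
    grad_w = np.zeros_like(w)
    grad_b = 0.0
    for _ in range(n_samples):
        eps = rng.normal(0.0, sigma)
        y = float(w @ x + b + eps > 0.0)   # discrete (threshold) activation
        loss = (y - target) ** 2
        score = eps / sigma**2             # score of the noisy pre-activation
        grad_w += loss * score * x         # chain rule: d(mean)/dw = x
        grad_b += loss * score             # d(mean)/db = 1
    return grad_w / n_samples, grad_b / n_samples

# Usage: one gradient step on a single input/target pair.
w, b = np.array([0.1, -0.2]), 0.0
gw, gb = lr_gradient_estimate(w, b, x=np.array([1.0, 2.0]), target=1.0)
w, b = w - 0.1 * gw, b - 0.1 * gb
```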

Original language: English (US)
Title of host publication: 2020 IEEE 16th International Conference on Automation Science and Engineering, CASE 2020
Publisher: IEEE Computer Society
Pages: 1343-1348
Number of pages: 6
ISBN (Electronic): 9781728169040
DOIs
State: Published - Aug 2020
Externally published: Yes
Event: 16th IEEE International Conference on Automation Science and Engineering, CASE 2020 - Hong Kong, Hong Kong
Duration: Aug 20, 2020 - Aug 21, 2020

Publication series

Name: IEEE International Conference on Automation Science and Engineering
Volume: 2020-August
ISSN (Print): 2161-8070
ISSN (Electronic): 2161-8089

Conference

Conference: 16th IEEE International Conference on Automation Science and Engineering, CASE 2020
Country/Territory: Hong Kong
City: Hong Kong
Period: 8/20/20 - 8/21/20

Bibliographical note

Publisher Copyright:
© 2020 IEEE.
