A deep reinforcement learning framework for energy management of extended range electric delivery vehicles

Pengyue Wang, Yan Li, Shashi Shekhar, William Northrop

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

Rule-based (RB) energy management strategies are widely used in hybrid-electric vehicles because they are easy to implement and can be used without prior knowledge about future trips. In the literature, the parameters of RB methods are typically designed and tuned using known driving cycles. Although promising results have been demonstrated, it is difficult to apply such cycle-specific methods to real trips of last-mile delivery vehicles, which differ significantly from trip to trip in distance and energy intensity. In this paper, a reinforcement learning method and an RB strategy are used together to improve the fuel economy of an in-use extended range electric vehicle (EREV) operating in a last-mile package delivery application. An intelligent agent is trained on historical trips of a single delivery vehicle to tune a parameter in the engine-generator control logic during the trip using real-time information. The method is demonstrated on actual historical delivery trips in a simulation environment. An average fuel efficiency improvement of 19.5%, in miles per gallon gasoline equivalent, is achieved on 44 test trips ranging from 31 to 54 miles that were not used for training, demonstrating the method's promise to generalize. The presented framework is extendable to other RB methods and to EREV applications such as transit buses and commuter vehicles, where similar trips are repeated day-to-day.
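To make the control scheme concrete, the sketch below shows the loop the abstract describes: an agent observes coarse real-time trip state at each decision point and nudges a single rule-based parameter. This is an illustrative stand-in, not the authors' implementation: tabular Q-learning substitutes for the paper's deep RL agent, and a toy EREV model with a hypothetical battery state-of-charge (SOC) threshold stands in for the real engine-generator control logic. All dynamics, reward terms, and names below are assumptions.

```python
# Minimal sketch (not the authors' code): tabular Q-learning stands in for the
# paper's deep RL agent. At each decision point the agent nudges one rule-based
# parameter -- here, a hypothetical battery SOC threshold below which the
# engine-generator turns on -- based on coarse real-time trip state.
import random

import numpy as np

N_SOC_BINS = 5                 # discretized battery state of charge
N_DIST_BINS = 5                # discretized fraction of trip completed
ACTIONS = [-0.05, 0.0, 0.05]   # lower / keep / raise the SOC threshold

Q = np.zeros((N_SOC_BINS, N_DIST_BINS, len(ACTIONS)))

def to_state(soc, dist_frac):
    """Map continuous SOC and trip progress onto discrete bin indices."""
    i = min(int(soc * N_SOC_BINS), N_SOC_BINS - 1)
    j = min(int(dist_frac * N_DIST_BINS), N_DIST_BINS - 1)
    return i, j

def step_vehicle(soc, threshold):
    """Toy EREV dynamics: the battery drains each step; the engine-generator
    runs (burning fuel, recharging the battery) while SOC is below threshold."""
    engine_on = soc < threshold
    soc = max(0.0, min(1.0, soc - 0.02 + (0.03 if engine_on else 0.0)))
    fuel = 0.1 if engine_on else 0.0
    return soc, fuel

ALPHA, GAMMA, EPSILON, STEPS = 0.1, 0.95, 0.1, 50
for trip in range(2000):                    # "historical trips" for training
    soc, threshold = 0.9, 0.3
    for t in range(STEPS):
        s = to_state(soc, t / STEPS)
        if random.random() < EPSILON:       # epsilon-greedy exploration
            a = random.randrange(len(ACTIONS))
        else:
            a = int(np.argmax(Q[s]))
        threshold = min(0.9, max(0.1, threshold + ACTIONS[a]))
        soc, fuel = step_vehicle(soc, threshold)
        # Penalize fuel burned, plus a penalty for nearly depleting the battery.
        reward = -fuel - (1.0 if soc < 0.05 else 0.0)
        s_next = to_state(soc, (t + 1) / STEPS)
        Q[s][a] += ALPHA * (reward + GAMMA * Q[s_next].max() - Q[s][a])

print("Greedy action index per (SOC bin, distance bin):")
print(np.argmax(Q, axis=2))
```

In the paper's setting, the lookup table would be replaced by a neural network over richer real-time trip features, and the toy dynamics by the simulation environment built from logged delivery trips.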

Original language: English (US)
Title of host publication: 2019 IEEE Intelligent Vehicles Symposium, IV 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1837-1842
Number of pages: 6
ISBN (Electronic): 9781728105604
DOI: 10.1109/IVS.2019.8813890
State: Published - Jun 1, 2019
Event: 30th IEEE Intelligent Vehicles Symposium, IV 2019 - Paris, France
Duration: Jun 9, 2019 - Jun 12, 2019

Publication series

Name: IEEE Intelligent Vehicles Symposium, Proceedings
Volume: 2019-June

Conference

Conference: 30th IEEE Intelligent Vehicles Symposium, IV 2019
Country: France
City: Paris
Period: 6/9/19 - 6/12/19

Fingerprint

Energy management
Reinforcement learning
Electric vehicles
Hybrid electric vehicles
Intelligent agents
Fuel economy
Gasoline
Engines
Generators
Driving cycles
Simulation environment
Prior knowledge
Framework

Cite this

Wang, P., Li, Y., Shekhar, S., & Northrop, W. (2019). A deep reinforcement learning framework for energy management of extended range electric delivery vehicles. In 2019 IEEE Intelligent Vehicles Symposium, IV 2019 (pp. 1837-1842). [8813890] (IEEE Intelligent Vehicles Symposium, Proceedings; Vol. 2019-June). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/IVS.2019.8813890

