AI-based Energy Management Strategies for P2 Plug-in Hybrid Electric Vehicles

Abstract

Plug-in Hybrid Electric Vehicles (PHEVs) offer a promising solution to the growing CO2 emissions problem. However, the fuel economy of PHEVs strongly depends on the control strategy that decides the power split between the Internal Combustion Engine (ICE) and the electric battery. Traditional rule-based control strategies are no longer practical given the increasingly complex control objectives introduced by emerging technologies such as automated driving and connected vehicles. In this study, an advanced Energy Management Strategy (EMS) based on Deterministic Dynamic Programming (DDP) and Reinforcement Learning (RL) is developed. DDP solves a finite-horizon optimization problem with the driving cycle known a priori, yielding a globally optimal power distribution that maximizes fuel economy. The DDP results are used to benchmark the performance of the subsequently developed RL algorithms. In the newly proposed control strategy, an adaptive online-learning RL agent is introduced into the existing Hybrid Control Unit (HCU) architecture to solve the EMS for near-optimal solutions. The objective is to minimize the vehicle's expected total fuel consumption while maintaining a proper battery depletion rate and penalizing frequent engine on/off switching. Several RL-based algorithms were evaluated in a vehicle model simulation. As a result, the thesis proposes an Extended Deep Q-Network (E-DQN) agent, trained on one driving cycle and deployed on two other cycles to evaluate its performance. The findings show that the E-DQN agent outperformed the rule-based strategy, achieving up to a 10.46% improvement in fuel economy, approaching the DDP benchmark, while adequately satisfying the vehicle drivability and driver comfort objectives.
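The composite objective stated above (minimizing fuel consumption, tracking a target battery depletion rate, and penalizing frequent engine on/off switching) can be illustrated with a per-step reward function of the kind commonly used in RL-based EMS work. This is a minimal sketch under assumed signal names and weights; it is not the thesis's actual formulation.

```python
def ems_reward(fuel_rate_gps, soc, soc_ref, engine_switched,
               w_fuel=1.0, w_soc=50.0, w_switch=0.1):
    """Illustrative per-step reward for an RL-based EMS (assumed form).

    fuel_rate_gps   : instantaneous fuel rate [g/s] (to be minimized)
    soc, soc_ref    : actual vs. reference battery state of charge
    engine_switched : True if the engine toggled on/off this step
    Weights w_fuel, w_soc, w_switch are hypothetical tuning parameters.
    """
    r = -w_fuel * fuel_rate_gps          # penalize fuel use
    r -= w_soc * (soc - soc_ref) ** 2    # track the planned SOC depletion trajectory
    if engine_switched:
        r -= w_switch                    # discourage frequent engine on/off cycling
    return r
```

In this formulation, the quadratic SOC term keeps the battery near its planned depletion trajectory, while the switching penalty addresses the drivability and comfort objectives mentioned in the abstract.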

Type