Partially Connected Automated Vehicle Cooperative Control Strategy with a Deep Reinforcement Learning Approach

Haotian Shi, Yang Zhou, Keshu Wu, Xin Wang, Yangxin Lin, Bin Ran

This paper proposes a cooperative longitudinal control strategy for connected and automated vehicles (CAVs) in a partially connected and automated traffic environment, based on a deep reinforcement learning (DRL) algorithm, which enhances the string stability of mixed traffic, car-following efficiency, and energy efficiency. Since the vehicle sequences in mixed traffic are combinatorial, we decompose the mixed traffic into multiple subsystems, each comprising a human-driven vehicle (HDV) followed by cooperative CAVs, to reduce the training dimension and alleviate the communication burden. On this basis, a cooperative CAV control strategy is developed with a DRL algorithm, enabling CAVs to learn the leading HDV's characteristics and make longitudinal control decisions cooperatively, improving the performance of each subsystem locally and consequently enhancing that of the whole mixed traffic flow. For training, distributed proximal policy optimization is applied to ensure convergence of the proposed DRL model. To verify the effectiveness of the proposed method, simulated experiments are conducted; the results show that the proposed model generalizes well, dampening oscillations and fulfilling car-following and energy-saving tasks efficiently under different penetration rates and various leading-HDV behaviors.

Keywords: partially connected automated traffic environment, cooperative control, deep reinforcement learning, traffic oscillation dampening, energy efficiency
