2022 5th International Conference on Power Electronics and Control Engineering (ICPECE 2022)

Prof. Yajun Liu

Experience: 

Prof. Yajun Liu is from the School of Mechanical and Automotive Engineering, South China University of Technology. His research interests are the processes and mechanisms of manufacturing systems and optimization control technology. Prof. Liu has won several academic awards, including the "Guangdong Provincial Outstanding Postgraduate and First Prize of Tsang Hin-chi Scholarship," the "Elec & Eltek Innovation Award," the "Research Paper Award in Modern Manufacturing," and the "Outstanding Paper Award of the 20th International Conference on Machinery and Electronics 2016." His research achievements include developing the world's first intelligent fuel-filling controller to reach industrialized promotion and chairing core control-system optimization design projects for several industry-leading enterprises (Tokheim, Johnson Controls, Henglitai, etc.), achieving improvements in key technical parameters.

His representative patents are: 

(1) Frequency variable oil gas recovery control system for oiling machine with self-calibrated gas liquid ratio, PCT number: PCT/CN2015/082837;

(2) Frequency variable oil gas recovery control system for oiling machine with self-calibrated gas liquid ratio, patent number: 2014208249107;

(3) Variable-frequency hydraulic control system for fuel dispenser and novel precision control method for quantitative fuel filling thereof, patent number: CN102173372B.


Speech Title: Controller Parameters Autotuning Based on Hierarchical Reinforcement Learning Algorithms

Abstract: Reinforcement learning (RL) algorithms have emerged as a promising approach for tuning control parameters in micro-controllers. However, existing algorithms have not been adapted to the limited computing power of micro-controllers. To apply the empirical knowledge of an RL agent to micro-controllers effectively, a split-type RL algorithm is proposed. The core of the algorithm is that the actor network runs on the micro-controller while the critic network runs on the host computer. Since actor-critic RL agents use the actor network to make decisions, this approach enables micro-controllers to exploit RL empirical knowledge with low computational demands. To improve sample efficiency, an improved asynchronous update strategy is applied. To show that these improvements are necessary and effective, a real physical system is designed and built, and the proposed algorithm's ability to suppress disturbances is demonstrated. The neural-network controller designed by this method reduces the computational cost of applying reinforcement learning experience and avoids the impact of the “train-to-real” gap. The algorithm also guarantees the safety of the exploration and deployment processes and fully considers actual engineering conditions.
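To make the division of labor concrete, below is a minimal Python sketch of the split actor-critic idea described in the abstract: a tiny actor whose forward pass is all the micro-controller must run, and a host-side critic trained from logged transitions. The network sizes, the stand-in plant, and the last-layer-only critic update are illustrative assumptions, and the actor's own policy update and the weight transfer to the MCU are only indicated in comments; this is a sketch of the concept, not the speaker's published implementation.

```python
# Sketch of a split actor-critic: actor inference on the micro-controller,
# critic training on the host. All sizes and update rules are assumptions.
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM, HIDDEN = 3, 1, 8

def init_mlp(n_in, n_hidden, n_out):
    """Two-layer perceptron parameters with small random weights."""
    return {
        "W1": rng.normal(0, 0.1, (n_hidden, n_in)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(0, 0.1, (n_out, n_hidden)),
        "b2": np.zeros(n_out),
    }

def forward(params, x):
    """Forward pass only -- the sole computation the MCU has to perform."""
    h = np.tanh(params["W1"] @ x + params["b1"])
    return params["W2"] @ h + params["b2"]

# Actor: state -> controller parameter; a copy of it runs on the MCU.
actor = init_mlp(STATE_DIM, HIDDEN, ACTION_DIM)
# Critic: (state, action) -> value estimate; stays on the host computer.
critic = init_mlp(STATE_DIM + ACTION_DIM, HIDDEN, 1)

def mcu_act(state):
    """MCU side: cheap inference with the latest received actor weights."""
    return forward(actor, state)

def host_critic_update(batch, lr=1e-2, gamma=0.99):
    """Host side: one-step TD update of the critic. Only the output layer
    is corrected here, enough to show the data flow; a real implementation
    would backpropagate through both layers and also update the actor via
    the critic's gradient before pushing new weights to the MCU."""
    for s, a, r, s2 in batch:
        a2 = forward(actor, s2)                       # bootstrap action
        target = r + gamma * forward(critic, np.concatenate([s2, a2]))[0]
        pred = forward(critic, np.concatenate([s, a]))[0]
        td_error = target - pred
        h = np.tanh(critic["W1"] @ np.concatenate([s, a]) + critic["b1"])
        critic["W2"] += lr * td_error * h[None, :]
        critic["b2"] += lr * td_error

# Asynchronous loop: the MCU keeps acting with its current actor copy while
# the host consumes logged transitions; every N steps the host would send
# updated actor weights down to the micro-controller.
for step in range(100):
    s = rng.normal(size=STATE_DIM)
    a = mcu_act(s)                                    # on-device decision
    r, s2 = -float(np.sum(s**2)), 0.9 * s             # stand-in plant response
    host_critic_update([(s, a, r, s2)])
```

Keeping only the actor's forward pass on the device is what makes the scheme fit a low-power micro-controller: the memory- and compute-heavy critic training, and any exploration bookkeeping, stay on the host.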