Tianjin University
Deep Reinforcement Learning Laboratory
Our lab has several open Ph.D. and Master's positions. If you are interested in our research, please send us your CV (jianye.hao@tju.edu.cn / yanzheng@tju.edu.cn).

The lab welcomes outstanding students for visits and exchanges, as well as students joining to pursue Master's or Ph.D. degrees. Students interested in the school/faculty summer camp activities are also welcome to get in touch by email!
News
May 5, 2023 - Three papers accepted by ICML 2023:
"ChiPFormer: Transferable Chip Placement via Offline Decision Transformer", "MetaDiffuser: Diffusion Model as Conditional Planner for Offline Meta-RL", "RACE: Improve Multi-Agent Reinforcement Learning with Representation Asymmetry and Collaborative Evolution"
Jan 21, 2023 - Seven papers accepted by ICLR 2023:
"ERL-Re2: Efficient Evolutionary Reinforcement Learning with Shared State Representation and Individual Policy Representation", "Breaking the Curse of Dimensionality in Multiagent State Space: A Unified Agent Permutation Framework", "EUCLID: Towards Efficient Unsupervised Reinforcement Learning with Multi-choice Dynamics Model", "Learnable Behavior Control: Breaking Atari Human World Records via Sample-Efficient Behavior Selection", "DAG Matters! GFlowNets Enhanced Explainer for Graph Neural Networks", "CFlowNets: Continuous Control with Generative Flow Networks", "Out-of-distribution Detection with Implicit Outlier Transformation"
Nov 25, 2022 - Four papers accepted by AAAI 2023:
"SplitNet: A Reinforcement Learning based Sequence Splitting Method for the MinMax Multiple Travelling Salesman Problem", "Neighbor Auto-Grouping Graph Neural Networks for Handover Parameter Configuration in Cellular Network", "Structure Aware Incremental Learning with Personalized Imitation Weights for Recommender Systems", "Models as Agents: Optimizing Multi-Step Predictions of Interactive Local Models in Model-Based Multi-Agent Reinforcement Learning"
Sep 15, 2022 - Seven papers accepted by NeurIPS 2022:
"Multiagent Q-learning with Sub-Team Coordination", "Transformer-based Working Memory for Multiagent Reinforcement Learning with Action Parsing", "GALOIS: Boosting Deep Reinforcement Learning via Generalizable Logic Synthesis", "Versatile Multi-stage Graph Neural Network for Circuit Representation", "The Policy-gradient Placement and Generative Routing Neural Networks for Chip Design", "DOMINO: Decomposed Mutual Information Optimization for Generalized Context in Meta-Reinforcement Learning", "Plan To Predict: Learning an Uncertainty-Foreseeing Model For Model-Based Reinforcement Learning"
Recent Research
ERL-Re2: Efficient Evolutionary Reinforcement Learning with Shared State Representation and Individual Policy Representation
2023-09-10: Deep Reinforcement Learning (Deep RL) and Evolutionary Algorithms (EA) are two major paradigms of policy optimization with distinct learning principles. However, existing works on combining Deep RL and EA have two common drawbacks. In this paper, we propose Evolutionary Reinforcement Learning with Two-scale State Representation and Policy Representation.
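The core idea of sharing one state representation across an evolutionary population while keeping per-member policy heads can be illustrated with a minimal NumPy sketch. This is a hypothetical toy, not the paper's implementation; the class and parameter names (`TwoScalePolicy`, `shared_w`, `heads`) are invented for illustration.

```python
import numpy as np

class TwoScalePolicy:
    """Toy sketch of the two-scale idea: every policy in the
    evolutionary population shares one state-representation layer,
    while each member keeps its own lightweight policy head."""
    def __init__(self, obs_dim, hid_dim, act_dim, pop_size, seed=0):
        rng = np.random.default_rng(seed)
        # shared scale: one representation network for the whole population
        self.shared_w = rng.normal(size=(obs_dim, hid_dim))
        # individual scale: one small head per population member
        self.heads = [rng.normal(size=(hid_dim, act_dim))
                      for _ in range(pop_size)]

    def act(self, obs, member):
        state_repr = np.tanh(obs @ self.shared_w)       # common knowledge
        return np.tanh(state_repr @ self.heads[member])  # member-specific behavior

pop = TwoScalePolicy(obs_dim=8, hid_dim=16, act_dim=2, pop_size=5)
obs = np.ones(8)
actions = [pop.act(obs, i) for i in range(5)]  # five distinct behaviors, one encoder
```

Only the small heads differ between members, so evolutionary perturbations stay cheap while representation learning is amortized across the population.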
Boosting Multiagent Reinforcement Learning via Permutation Invariant and Permutation Equivariant Networks
2023-09-09: The state space in Multiagent Reinforcement Learning (MARL) grows exponentially with the number of agents. This curse of dimensionality results in poor scalability and low sample efficiency, which has inhibited MARL for decades. To break this curse, we propose a unified agent permutation framework that exploits the permutation invariance (PI) and permutation equivariance (PE) inductive biases to reduce the multiagent state space. Our insight is that permuting the order of entities in the factored multiagent state space does not change the information.
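The permutation-invariance insight above can be sketched in a few lines: if each entity is embedded with shared weights and the embeddings are pooled with an order-independent operation, shuffling the entities leaves the encoding unchanged. This is a minimal illustration with an invented function name (`pi_encode`), not the framework proposed in the paper.

```python
import numpy as np

def pi_encode(entity_states, w):
    """Permutation-invariant encoding: embed each entity with the
    same weights, then sum-pool, so reordering entities does not
    change the output."""
    embedded = np.tanh(entity_states @ w)  # shared per-entity embedding
    return embedded.sum(axis=0)            # order-independent pooling

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8))
entities = rng.normal(size=(5, 4))    # 5 agents, 4 features each
shuffled = entities[[3, 1, 4, 0, 2]]  # permute the agent order
# both orderings map to the identical encoding
assert np.allclose(pi_encode(entities, w), pi_encode(shuffled, w))
```

Because the encoder no longer distinguishes entity orderings, the effective state space shrinks by a factor of up to n! for n interchangeable entities.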
MetaDiffuser: Diffusion Model as Conditional Planner for Offline Meta-RL
2023-08-01: Recently, diffusion models have emerged as a promising backbone for sequence modeling in offline reinforcement learning. However, these works mostly lack the ability to generalize across tasks with reward or dynamics changes. To tackle this challenge, we propose a task-oriented conditioned diffusion planner for offline meta-RL (MetaDiffuser), which treats the generalization problem as a conditional trajectory generation task with contextual representation. The key is to learn a context-conditioned diffusion model that can generate task-oriented trajectories for planning across diverse tasks. To enhance the dynamics consistency of the generated trajectories while encouraging them to achieve high returns, we further design a dual-guided module in the sampling process of the diffusion model. The proposed framework is robust to the quality of warm-start data collected from the test task, and flexible enough to incorporate different task representation methods. Experimental results on MuJoCo benchmarks show that MetaDiffuser outperforms other strong offline meta-RL baselines, demonstrating the outstanding conditional generation ability of the diffusion architecture.
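The dual-guided sampling idea, steering each denoising step with a return term and a dynamics-consistency term, can be sketched abstractly as follows. This is a schematic toy under invented names (`guided_denoise_step`, `return_grad`, `dyn_grad`) and stand-in gradient functions; the actual MetaDiffuser module is learned, not hand-written.

```python
import numpy as np

def guided_denoise_step(traj, denoise_fn, return_grad, dyn_grad,
                        alpha=0.1, beta=0.1):
    """Schematic dual guidance: after one denoising step, nudge the
    trajectory toward higher predicted return (ascent) and smaller
    dynamics-consistency error (descent)."""
    traj = denoise_fn(traj)                  # one diffusion denoising step
    traj = traj + alpha * return_grad(traj)  # reward guidance
    traj = traj - beta * dyn_grad(traj)      # dynamics-consistency guidance
    return traj

# toy usage with placeholder components
traj = np.zeros((10, 3))            # horizon 10, state-action dim 3
denoise = lambda x: 0.9 * x + 0.05  # placeholder denoiser
r_grad = lambda x: np.ones_like(x)  # stand-in return gradient
d_grad = lambda x: x                # stand-in consistency-penalty gradient
out = guided_denoise_step(traj, denoise, r_grad, d_grad)
```

Iterating this step over the full reverse diffusion process yields trajectories that balance high return against staying consistent with the learned dynamics.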