Our Mission


The PKU Alignment and Interaction Research Lab (PAIR Lab) is dedicated to addressing key challenges in decision making, strategic interactions, and value alignment for artificial general intelligence (AGI). We specialize in reinforcement learning for intelligent decision making, multi-agent systems for complex strategic interactions, and alignment techniques for harmonizing AGI with human values and intentions. Our integrative approach aims to steer AGI development toward a future that is safe, beneficial, and aligned with human progress. Our research focus includes:

  • Reinforcement Learning: safe RL, multi-agent RL (MARL), meta-RL, offline RL, preference-based RL (PbRL)
  • Game Theory: solution concepts, game decomposition, meta-game analysis
  • Alignment: RLHF, multi-agent alignment, self-alignment, constitutional AI