PAIR Lab: PKU Alignment and Interaction Research Lab
LLM Test Time Preference Adaptation
Amulet: ReAlignment During Test Time for Personalized Preference Adaptation of LLMs
How to align large language models (LLMs) with user preferences from a static general dataset has been frequently studied. However, …
Zhaowei Zhang, Fengshuo Bai, Qizhi Chen, Chengdong Ma, Mingzhi Wang, Haoran Sun, Zilong Zheng, Yaodong Yang
PDF · Cite · Code