|
Wei-Lin Chen
I am a second-year CS PhD student at the University of Virginia and a part-time student researcher at Google.
I am fortunate to be advised by Yu Meng and am grateful to be supported by the UVA Provost's Fellowship and the UVA Computer Science Scholar Fellowship.
Nowadays, I think about
(1) how to measure LLMs' reasoning effort beyond superficial features such as token length, and
(2) how to ensure LLM-based evaluators succeed in domains that are not trivially verifiable, by grounding their behavior in insights drawn from verifiable tasks.
Google Scholar
/
LinkedIn
/
CV
/
X (Twitter)
/
Email
|
Selected Research
*equal contribution
|
Do LLM Evaluators Prefer Themselves for a Reason?
Wei-Lin Chen, Zhepei Wei, Xinyu Zhu, Shi Feng, Yu Meng
Preprint
arXiv /
thread
|
The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning
Xinyu Zhu, Mengzhou Xia, Zhepei Wei, Wei-Lin Chen, Danqi Chen, Yu Meng
NeurIPS 2025
arXiv /
thread
|
Evaluating Large Language Models as Expert Annotators
Yu-Min Tseng, Wei-Lin Chen, Chung-Chi Chen, Hsin-Hsi Chen
COLM 2025
arXiv /
thread
|
InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales
Zhepei Wei, Wei-Lin Chen, Yu Meng
ICLR 2025
arXiv /
thread
|
Two Tales of Persona in LLMs: A Survey of Role-Playing and Personalization
Yu-Min Tseng\(^*\), Yu-Chao Huang\(^*\), Teng-Yun Hsiao\(^*\), Wei-Lin Chen\(^*\), Chao-Wei Huang, Yu Meng, Yun-Nung Chen
EMNLP 2024 Findings
arXiv /
repo
|
Self-ICL: Zero-Shot In-Context Learning with Self-Generated Demonstrations
Wei-Lin Chen\(^*\), Cheng-Kuang Wu\(^*\), Yun-Nung Chen, Hsin-Hsi Chen
EMNLP 2023
arXiv /
poster
|