Xinye Wanyan

PhD Candidate, RMIT University

Large language models (LLMs) have demonstrated strong semantic comprehension and extensive external knowledge, and they have been incorporated into recommendation systems in multiple roles. However, existing bias evaluation pipelines designed for conventional recommendation systems are not fully applicable to recommendation systems built on LLMs (RecLLMs), and most bias mitigation methods are limited to a single intervention stage, making them inadequate for addressing the overall bias of complex RecLLMs. Xinye will introduce a comprehensive evaluation framework designed to assess the biases within RecLLMs and their constituent sub-modules (Wanyan et al., 2025). In addition, a calibrated synthetic benchmark dataset, built with LLMs, will be developed to support bias evaluation and mitigation experiments.

Xinye is a scholarship recipient of the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) and is supervised by Prof. Jeffrey Chan and Dr. Danula Hettiachchi.


References

  1. Xinye Wanyan, Danula Hettiachchi, Chenglong Ma, Ziqi Xu, and Jeffrey Chan. Temporal-Aware User Behaviour Simulation with Large Language Models for Recommender Systems. In Proceedings of the 34th ACM International Conference on Information and Knowledge Management (CIKM ’25), 2025. To appear.