
I am a computer scientist and researcher specializing in deep learning, information retrieval, search and ranking algorithms, recommendation systems, and responsible AI. I recently received my Ph.D. in Computer Science from Santa Clara University, where my research focused on enhancing search engine ranking through multi-task learning and ensuring fairness in AI systems. I also hold an M.Sc. in Web Science and Big Data Analytics from University College London and a B.Sc. in Computer Science from Coventry University, giving me a strong international academic foundation.
My research aims to improve the accuracy, efficiency, and fairness of search and ranking systems while addressing bias in large language models (LLMs). I have developed multi-task learning frameworks that enhance product ranking and significantly improve click-through rate prediction, with this work published at the ACM Web Conference (WWW 2022) and in ACM Transactions on Information Systems (TOIS).
I am also deeply committed to AI fairness and responsible AI, ensuring that large language models—the backbone of modern chatbots and virtual assistants—operate equitably across diverse user groups. My research on LLM fairness, including evaluating bias in Retrieval-Augmented Generation (RAG), LLM-based ranking models, LLM reasoning steps, and large vision-language models (LVLMs), has been featured in NAACL 2024, COLING 2025, and EMNLP 2025.
Beyond academia, I have gained industry experience as a Data Science Intern at Walmart Global Tech, where I optimized search and ranking models, and as a Visiting Researcher at NTT DOCOMO Innovations, where I developed deep learning applications for real-world challenges. Before embarking on my Ph.D., I co-founded and led a technology startup in Beijing, where I designed and deployed large-scale news recommendation and online advertising systems serving millions of users. This entrepreneurial experience gave me deep insight into applying AI at scale and further fueled my passion for advancing responsible AI research.
Feel free to connect with me on LinkedIn to discuss research, AI fairness, and more!
News
Our work on Evaluating Fairness in Large Vision-Language Models Across Diverse Demographic Attributes and Prompts was accepted by EMNLP 2025.
Our work on Does Reasoning Introduce Bias? A Study of Social Bias Evaluation and Mitigation in LLM Reasoning was accepted by EMNLP 2025.
I recently graduated from Santa Clara University. My dissertation, Neural Ranking in Sparse Data Environments, explored methods for improving search ranking when training data is sparse.
Our work on Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models was accepted by Knowledge-Based Systems (KBS).
Our work on Does RAG Introduce Unfairness in LLMs? Evaluating Fairness in Retrieval-Augmented Generation Systems was accepted by the 31st International Conference on Computational Linguistics (COLING 2025).
Our work on Meta Learning to Rank for Sparsely Supervised Queries was accepted by ACM Transactions on Information Systems (TOIS).
Our work on Passage-specific Prompt Tuning for Passage Reranking in Question Answering with Large Language Models was accepted by The Second Workshop on Generative Information Retrieval (Gen-IR 2024).
Our work on Do Large Language Models Rank Fairly? An Empirical Study on the Fairness of LLMs as Rankers was accepted as a long paper by the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2024).
Our work on A Multi-task Learning Framework for Product Ranking with BERT was accepted by the ACM Web Conference 2022 (WWW 2022).