TY - JOUR
AU - Joachims, Thorsten
AB - There is growing interest in natural language-based user profiles for recommender systems, which aim to enhance transparency and scrutability compared with embedding-based methods. Existing studies primarily generate these profiles using zero-shot inference from large language models (LLMs), but their quality remains insufficient, leading to suboptimal recommendation performance. In this paper, we introduce LangPTune, the first end-to-end training framework for optimizing LLM-generated user profiles. Our method significantly outperforms zero-shot approaches by explicitly training the LLM for the recommendation objective. Through extensive evaluations across diverse training configurations and benchmarks, we demonstrate that LangPTune not only surpasses zero-shot baselines but can also match the performance of state-of-the-art embedding-based methods. Finally, we investigate whether the training procedure preserves the interpretability of these profiles compared to zero-shot inference, through both GPT-4 simulations and crowdworker user studies. Implementation of LangPTune can be found at this https URL.
TI - End-to-end Training for Recommendation with Language-based User Profiles
JF - Computing Research Repository
DO - 10.48550/arXiv.2410.18870
DA - 2025-02-12
UR - https://www.deepdyve.com/lp/arxiv-cornell-university/end-to-end-training-for-recommendation-with-language-based-user-8yR0d0g2HY
VL - 2025
IS - 2410
DP - DeepDyve
ER -