A Survey on Large Language Model based Autonomous Agents (arXiv:2308.11432)
[55] Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, and Yong Li. S3: Social-network simulation system with large language model-empowered agents. arXiv preprint arXiv:2307.14984, 2023.
[56] Yingqiang Ge, Wenyue Hua, Jianchao Ji, Juntao Tan, Shuyuan Xu, and Yongfeng Zhang. OpenAGI: When LLM meets domain experts. arXiv preprint arXiv:2304.04370, 2023.
[57] Zorik Gekhman, Nadav Oved, Orgad Keller, Idan Szpektor, and Roi Reichart. On the robustness of dialogue history representation in conversational question answering: A comprehensive study and a new prompt-based method. Transactions of the Association for Computational Linguistics, 11:351–366, 2023.
[58] Maitrey Gramopadhye and Daniel Szafir. Generating executable action plans with environmentally-aware language models. arXiv preprint arXiv:2210.04964, 2022.
[59] Igor Grossmann, Matthew Feinberg, Dawn C Parker, Nicholas A Christakis, Philip E Tetlock, and William A Cunningham. AI and the transformation of social science research. Science, 380(6650):1108–1109, 2023.
[60] Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine.
Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. arXiv preprint arXiv:1801.01290, 2018.
[61] Sil Hamilton. Blind judgement: Agent-based Supreme Court modelling with GPT. arXiv preprint arXiv:2301.05327, 2023.
[62] Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, and Zhiting Hu.
Reasoning with language model is planning with world model. arXiv preprint arXiv:2305.14992, 2023.
[63] Zhuolun He, Haoyuan Wu, Xinyun Zhang, Xufeng Yao, Su Zheng, Haisheng Zheng, and Bei Yu. ChatEDA: A large language model powered autonomous agent for EDA, 2023.
[64] Sirui Hong, Xiawu Zheng, Jonathan Chen, Yuheng Cheng, Ceyao Zhang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, et al. MetaGPT: Meta programming for multi-agent collaborative framework. arXiv preprint arXiv:2308.00352, 2023.
[65] John J Horton. Large language models as simulated economic agents: What can we learn from homo silicus? Technical report, National Bureau of Economic Research, 2023.
[66] Bin Hu, Chenyang Zhao, Pu Zhang, Zihao Zhou, Yuanhang Yang, Zenglin Xu, and Bin Liu. Enabling intelligent interactions between an agent and an LLM: A reinforcement learning approach. arXiv preprint arXiv:2306.03604, 2023.
[67] Chenxu Hu, Jie Fu, Chenzhuang Du, Simian Luo, Junbo Zhao, and Hang Zhao. ChatDB: Augmenting LLMs with databases as their symbolic memory. arXiv preprint arXiv:2306.03901, 2023.
[68] Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, and Michael R Lyu. Emotionally numb or empathetic? Evaluating how LLMs feel using EmotionBench. arXiv preprint arXiv:2308.03656, 2023.
[69] Jie Huang and Kevin Chen-Chuan Chang.
Towards reasoning in large language models: A survey. arXiv preprint arXiv:2212.10403, 2022.
[70] Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In International Conference on Machine Learning, pages 9118–9147. PMLR, 2022.
[71] Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, et al.
Inner monologue: Embodied reasoning through planning with language models. arXiv preprint arXiv:2207.05608, 2022.
[72] Ziheng Huang, Sebastian Gutierrez, Hemanth Kamana, and Stephen MacNeil. Memory sandbox: Transparent and interactive memory management for conversational agents. arXiv preprint arXiv:2308.01542, 2023.
[73] Sajed Jalil, Suzzana Rafi, Thomas D LaToza, Kevin Moran, and Wing Lam. ChatGPT and software testing education:
Promises & perils. In 2023 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), pages 4130–4137. IEEE, 2023.
[74] Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12):1–
38, 2023.
[75] Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, and He Liang. CGMI: Configurable general multi-agent interaction framework, 2023.
[76] Oliver P John, Eileen M Donahue, and Robert L Kentle. Big Five Inventory. Journal of Personality and Social Psychology, 1991.
[77] John A Johnson. Measuring thirty facets of the five factor model with a 120-item public domain inventory: Development of the IPIP-NEO-120. Journal of Research in Personality, 51:78–89, 2014.
[78] Sungmin Kang, Juyeon Yoon, and Shin Yoo.
Large language models are few-shot testers: Exploring LLM-based general bug reproduction. In 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE), pages 2312–2323. IEEE, 2023.
[79] Yeonghun Kang and Jihan Kim. ChatMOF: An autonomous AI system for predicting and generating metal-organic frameworks. arXiv preprint arXiv:2308.01423, 2023.
[80] Ehud Karpas, Omri Abend, Yonatan Belinkov, Barak Lenz, Opher Lieber, Nir Ratner, Yoav Shoham, Hofit Bata, Yoav Levine, Kevin Leyton-Brown, et al. MRKL systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning. arXiv preprint arXiv:2205.00445, 2022.
[81] Geunwoo Kim, Pierre Baldi, and Stephen McAleer. Language models can solve computer tasks. arXiv preprint arXiv:2303.17491, 2023.
[82] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. Advances in Neural Information Processing Systems, 35:22199–
22213, 2022.
[83] Grgur Kovač, Rémy Portelas, Peter Ford Dominey, and Pierre-Yves Oudeyer. The SocialAI school: Insights from developmental psychology towards artificial socio-cultural agents. arXiv preprint arXiv:2307.07871, 2023.
[84] Ranjay Krishna, Donsuk Lee, Li Fei-Fei, and Michael S Bernstein. Socially situated artificial intelligence enables learning from human interaction. Proceedings of the National Academy of Sciences, 119(39):e2115730119, 2022.
[85] Mina Lee, Megha Srivastava, Amelia Hardy, John Thickstun, Esin Durmus, Ashwin Paranjape, Ines Gerard-Ursin, Xiang Lisa Li, Faisal Ladhak, Frieda Rong, et al. Evaluating human-language model interaction. arXiv preprint arXiv:2212.09746, 2022.
[86] Chao Li, Xing Su, Chao Fan, Haoying Han, Cong Xue, and Chunmo Zheng. Quantifying the impact of large language models on collective opinion dynamics. arXiv preprint arXiv:2308.03313, 2023.
[87] Cheng Li, Jindong Wang, Kaijie Zhu, Yixuan Zhang, Wenxin Hou, Jianxun Lian, and Xing Xie.
EmotionPrompt: Leveraging psychology for large language models enhancement via emotional stimulus. arXiv preprint arXiv:2307.11760, 2023.
[88] Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. CAMEL: Communicative agents for "mind" exploration of large scale language model society. arXiv preprint arXiv:2303.17760, 2023.
[89] Haonan Li, Yu Hao, Yizhuo Zhai, and Zhiyun Qian.
The hitchhiker's guide to program analysis: A journey with large language models. arXiv preprint arXiv:2308.00245, 2023.
[90] Minghao Li, Feifan Song, Bowen Yu, Haiyang Yu, Zhoujun Li, Fei Huang, and Yongbin Li. API-Bank: A benchmark for tool-augmented LLMs. arXiv preprint arXiv:2304.08244, 2023.
[91] Siyu Li, Jin Yang, and Kui Zhao. Are you in a masquerade? Exploring the behavior and impact of large language model driven social bots in online social networks. arXiv preprint arXiv:2307.10337, 2023.
[92] Xinnian Liang, Bing Wang, Hui Huang, Shuangzhi Wu, Peihao Wu, Lu Lu, Zejun Ma, and Zhoujun Li. Unleashing infinite-length input capacity for large-scale language models with self-controlled memory system. arXiv preprint arXiv:2304.13343, 2023.
[93] Yaobo Liang, Chenfei Wu, Ting Song, Wenshan Wu, Yan Xia, Yu Liu, Yang Ou, Shuai Lu, Lei Ji, Shaoguang Mao, et al. TaskMatrix.AI: Completing tasks by connecting foundation models with millions of APIs. arXiv preprint arXiv:2303.16434, 2023.
[94] Yuanzhi Liang, Linchao Zhu, and Yi Yang.
Tachikuma: Understanding complex interactions with multi-character and novel objects by large language models. arXiv preprint arXiv:2307.12573, 2023.
[95] Mark Liffiton, Brad Sheese, Jaromir Savelka, and Paul Denny. CodeHelp: Using large language models with guardrails for scalable support in programming classes. arXiv preprint arXiv:2308.06921, 2023.
[96] Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
[97] Bill Yuchen Lin, Yicheng Fu, Karina Yang, Prithviraj Ammanabrolu, Faeze Brahman, Shiyu Huang, Chandra Bhagavatula, Yejin Choi, and Xiang Ren.
SwiftSage: A generative agent with fast and slow thinking for complex interactive tasks. arXiv preprint arXiv:2305.17390, 2023.
[98] Jessy Lin, Nicholas Tomlin, Jacob Andreas, and Jason Eisner. Decision-oriented dialogue for human-AI collaboration. arXiv preprint arXiv:2305.20076, 2023.
[99] Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, and Qin Chen. AgentSims: An open-source sandbox for large language model evaluation. arXiv preprint arXiv:2308.04026, 2023.
[100] Bo Liu, Yuqian Jiang, Xiaohan Zhang, Qiang Liu, Shiqi Zhang, Joydeep Biswas, and Peter Stone. LLM+P: Empowering large language models with optimal planning proficiency. arXiv preprint arXiv:2304.11477, 2023.
[101] Hao Liu, Carmelo Sferrazza, and Pieter Abbeel. Chain of hindsight aligns language models with feedback. arXiv preprint arXiv:2302.02676, 2023.
[102] Ruibo Liu, Ruixin Yang, Chenyan Jia, Ge Zhang, Denny Zhou, Andrew M Dai, Diyi Yang, and Soroush Vosoughi. Training socially aligned language models in simulated human society. arXiv preprint arXiv:2305.16960, 2023.
[103] Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, et al.
AgentBench: Evaluating LLMs as agents. arXiv preprint arXiv:2308.03688, 2023.
[104] Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, et al. BOLAA: Benchmarking and orchestrating LLM-augmented autonomous agents. arXiv preprint arXiv:2308.05960, 2023.
[105] Zilin Ma, Yiyang Mei, and Zhaoyuan Su. Understanding the benefits and challenges of using large language model-based conversational agents for mental well-being support. arXiv preprint arXiv:2307.15810, 2023.
[106] Aman Madaan, Niket Tandon, Peter Clark, and Yiming Yang. Memory-assisted prompt editing to improve GPT-3 after deployment. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2833–2861, 2022.
[107] Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al.
Self-Refine: Iterative refinement with self-feedback. arXiv preprint arXiv:2303.17651, 2023.
[108] Zhao Mandi, Shreeya Jain, and Shuran Song. RoCo: Dialectic multi-robot collaboration with large language models. arXiv preprint arXiv:2307.04738, 2023.
[109] Jordan K Matelsky, Felipe Parodi, Tony Liu, Richard D Lange, and Konrad P Kording. A large language model-assisted education tool to provide feedback on open-ended responses. arXiv preprint arXiv:2308.02439, 2023.
[110] Nikhil Mehta, Milagro Teruel, Patricio Figueroa Sanz, Xin Deng, Ahmed Hassan Awadallah, and Julia Kiseleva. Improving grounded language understanding in a collaborative environment by interacting with agents through help feedback. arXiv preprint arXiv:2304.10750, 2023.
[111] Grégoire Mialon, Roberto Dessì, Maria Lomeli, Christoforos Nalmpantis, Ram Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, et al. Augmented language models: A survey. arXiv preprint arXiv:2302.07842, 2023.
[112] Ning Miao, Yee Whye Teh, and Tom Rainforth.
SelfCheck: Using LLMs to zero-shot check their own step-by-step reasoning. arXiv preprint arXiv:2308.00436, 2023.
[113] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[114] Ali Modarressi, Ayyoob Imani, Mohsen Fayyaz, and Hinrich Schütze. RET-LLM: Towards a general read-write memory for large language models. arXiv preprint arXiv:2305.14322, 2023.
[115] Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. WebGPT: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332, 2021.
[116] Nathalia Nascimento, Paulo Alencar, and Donald Cowan. Self-adaptive large language model (LLM)-based multiagent systems. arXiv preprint arXiv:2307.06187, 2023.
[117] Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, and Jun Deguchi. SimplyRetrieve: A private and lightweight retrieval-centric generative AI tool. arXiv preprint arXiv:2308.03983, 2023.
[118] Kolby Nottingham, Prithviraj Ammanabrolu, Alane Suhr, Yejin Choi, Hannaneh Hajishirzi, Sameer Singh, and Roy Fox. Do embodied agents dream of pixelated sheep? Embodied decision making using language guided world modelling. arXiv preprint arXiv:2301.12050, 2023.
[119] Oluwatosin Ogundare, Srinath Madasu, and Nathanial Wiggins. Industrial engineering with large language models: A case study of ChatGPT's performance on oil & gas problems. arXiv preprint arXiv:2304.14354, 2023.
[120] OpenAI. GPT-4 technical report, 2023.
[121] Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein.
Generative agents: Interactive simulacra of human behavior. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology (UIST '23), New York, NY, USA, 2023. Association for Computing Machinery.
[122] Joon Sung Park, Lindsay Popowski, Carrie Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. Social simulacra: Creating populated prototypes for social computing systems. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, pages 1–
18, 2022.
[123] Shishir G Patil, Tianjun Zhang, Xin Wang, and Joseph E Gonzalez. Gorilla: Large language model connected with massive APIs. arXiv preprint arXiv:2305.15334, 2023.
[124] Chen Qian, Xin Cong, Cheng Yang, Weize Chen, Yusheng Su, Juyuan Xu, Zhiyuan Liu, and Maosong Sun. Communicative agents for software development. arXiv preprint arXiv:2307.07924, 2023.
[125] Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Yufei Huang, Chaojun Xiao, Chi Han, et al. Tool learning with foundation models. arXiv preprint arXiv:2304.08354, 2023.
[126] Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, et al. ToolLLM: Facilitating large language models to master 16000+ real-world APIs. arXiv preprint arXiv:2307.16789, 2023.
[127] Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
[128] Shreyas Sundara Raman, Vanya Cohen, Eric Rosen, Ifrah Idrees, David Paulius, and Stefanie Tellex. Planning with large language models via corrective re-prompting. arXiv preprint arXiv:2211.09935, 2022.
[129] Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, Ian Reid, and Niko Suenderhauf. SayPlan: Grounding large language models using 3D scene graphs for scalable task planning. arXiv preprint arXiv:2307.06135, 2023.
[130] Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Xingyu Zeng, and Rui Zhao. TPTU: Task planning and tool usage of large language model-based AI agents. arXiv preprint arXiv:2308.03427, 2023.
[131] Mustafa Safdari, Greg Serapio-García, Clément Crepy, Stephen Fitz, Peter Romero, Luning Sun, Marwa Abdulhai, Aleksandra Faust, and Maja Matarić.
Personality traits in large language models. arXiv preprint arXiv:2307.00184, 2023.
[132] Swarnadeep Saha, Peter Hase, and Mohit Bansal. Can language models teach weaker agents? Teacher explanations improve students via theory of mind. arXiv preprint arXiv:2306.09299, 2023.
[133] Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom.
Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761, 2023.
[134] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
[135] Dale Schuurmans. Memory augmented large language models are computationally universal. arXiv preprint arXiv:2301.04589, 2023.
[136] Melanie Sclar, Sachin Kumar, Peter West, Alane Suhr, Yejin Choi, and Yulia Tsvetkov. Minding language models' (lack of) theory of mind: A plug-and-play multi-character belief tracker. arXiv preprint arXiv:2306.00924, 2023.
[137] Bilgehan Sel, Ahmad Al-Tawaha, Vanshaj Khattar, Lu Wang, Ruoxi Jia, and Ming Jin. Algorithm of thoughts: Enhancing exploration of ideas in large language models. arXiv preprint arXiv:2308.10379, 2023.
[138] Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. HuggingGPT: Solving AI tasks with ChatGPT and its friends in Hugging Face. arXiv preprint arXiv:2303.17580, 2023.
[139] Noah Shinn, Federico Cassano, Beck Labash, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao.
Reflexion: Language agents with verbal reinforcement learning. arXiv preprint arXiv:2303.11366, 2023.
[140] Yubo Shu, Hansu Gu, Peng Zhang, Haonan Zhang, Tun Lu, Dongsheng Li, and Ning Gu. RAH! RecSys-Assistant-Human: A human-central recommendation framework with large language models. arXiv preprint arXiv:2308.09904, 2023.
[141] Chan Hee Song, Jiaman Wu, Clayton Washington, Brian M Sadler, Wei-Lun Chao, and Yu Su. LLM-Planner: Few-shot grounded planning for embodied agents with large language models. arXiv preprint arXiv:2212.04088, 2022.
[142] Yifan Song, Weimin Xiong, Dawei Zhu, Wenhao Wu, Han Qian, Mingbo Song, Hailiang Huang, Cheng Li, Ke Wang, Rong Yao, Ye Tian, and Sujian Li. RestGPT: Connecting large language models with real-world RESTful APIs, 2023.
[143] Ruoxi Sun, Sercan O Arik, Hootan Nakhost, Hanjun Dai, Rajarishi Sinha, Pengcheng Yin, and Tomas Pfister.
SQL-PaLM: Improved large language model adaptation for text-to-SQL. arXiv preprint arXiv:2306.00739, 2023.
[144] Dídac Surís, Sachit Menon, and Carl Vondrick. ViperGPT: Visual inference via Python execution for reasoning. arXiv preprint arXiv:2303.08128, 2023.
[145] Melanie Swan, Takashi Kido, Eric Roland, and Renato P dos Santos. Math agents: Computational infrastructure, mathematical embedding, and genomics. arXiv preprint arXiv:2307.02502, 2023.
[146] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al.
LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
[147] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
[148] Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar.
Voyager: An open-ended embodied agent with large language models. arXiv preprint arXiv:2305.16291, 2023.
[149] Lei Wang. RecAgent. https://github.com/RUC-GSAI/YuLan-Rec, 2023.
[150] Lei Wang, Jingsen Zhang, Xu Chen, Yankai Lin, Ruihua Song, Wayne Xin Zhao, and Ji-Rong Wen. RecAgent: A novel simulation paradigm for recommender systems. arXiv preprint arXiv:2306.02552, 2023.
[151] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022.
[152] Yancheng Wang, Ziyan Jiang, Zheng Chen, Fan Yang, Yingxue Zhou, Eunah Cho, Xing Fan, Xiaojiang Huang, Yanbin Lu, and Yingzhen Yang. RecMind: Large language model powered agent for recommendation. arXiv preprint arXiv:2308.14296, 2023.
[153] Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, and Qun Liu. Aligning large language models with human: A survey. arXiv preprint arXiv:2307.12966, 2023.
[154] Zihao Wang, Shaofei Cai, Anji Liu, Xiaojian Ma, and Yitao Liang.
Describe, explain, plan and select: Interactive planning with large language models enables open-world multi-task agents. arXiv preprint arXiv:2302.01560, 2023.
[155] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.
[156] Ross Williams, Niyousha Hosseinichimeh, Aritra Majumdar, and Navid Ghaffarzadegan. Epidemic modeling with generative agents. arXiv preprint arXiv:2307.04986, 2023.
[157] Jimmy Wu, Rika Antonova, Adam Kan, Marion Lepert, Andy Zeng, Shuran Song, Jeannette Bohg, Szymon Rusinkiewicz, and Thomas Funkhouser. TidyBot: Personalized robot assistance with large language models. arXiv preprint arXiv:2305.05658, 2023.
[158] Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Shaokun Zhang, Erkang Zhu, Beibin Li, Li Jiang, Xiaoyun Zhang, and Chi Wang. AutoGen: Enabling next-gen LLM applications via multi-agent conversation framework. arXiv preprint arXiv:2308.08155, 2023.
[159] Yue Wu, So Yeon Min, Yonatan Bisk, Ruslan Salakhutdinov, Amos Azaria, Yuanzhi Li, Tom Mitchell, and Shrimai Prabhumoye. Plan, eliminate, and track – language models are good teachers for embodied agents. arXiv preprint arXiv:2305.02412, 2023.
[160] Zhenyu Wu, Ziwei Wang, Xiuwei Xu, Jiwen Lu, and Haibin Yan. Embodied task planning with large language models. arXiv preprint arXiv:2307.01848, 2023.
[161] Yuchen Xia, Manthan Shenoy, Nasser Jazdi, and Michael Weyrich. Towards autonomous system: Flexible modular production system enhanced with large language model agents. arXiv preprint arXiv:2304.14721, 2023.
[162] Jiannan Xiang, Tianhua Tao, Yi Gu, Tianmin Shu, Zirui Wang, Zichao Yang, and Zhiting Hu. Language models meet world models: Embodied experiences enhance language models. arXiv preprint arXiv:2305.10626, 2023.
[163] Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, and Dongkuan Xu. Gentopia: A collaborative platform for tool-augmented LLMs. arXiv preprint arXiv:2308.04030, 2023.
[164] Binfeng Xu, Zhiyuan Peng, Bowen Lei, Subhabrata Mukherjee, Yuchen Liu, and Dongkuan Xu. ReWOO: Decoupling reasoning from observations for efficient augmented language models. arXiv preprint arXiv:2305.18323, 2023.
[165] Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, and Xing Xie. Recommender AI agent: Integrating large language models for interactive recommendations. arXiv preprint arXiv:2308.16505, 2023.
[166] Jingfeng Yang, Hongye Jin, Ruixiang Tang, Xiaotian Han, Qizhang Feng, Haoming Jiang, Bing Yin, and Xia Hu. Harnessing the power of LLMs in practice: A survey on ChatGPT and beyond. arXiv preprint arXiv:2304.13712, 2023.
[167] Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Ehsan Azarnasab, Faisal Ahmed, Zicheng Liu, Ce Liu, Michael Zeng, and Lijuan Wang. MM-ReAct: Prompting ChatGPT for multimodal reasoning and action. arXiv preprint arXiv:2303.11381, 2023.
[168] Shunyu Yao, Howard Chen, John Yang, and Karthik Narasimhan.
WebShop: Towards scalable real-world web interaction with grounded language agents. Advances in Neural Information Processing Systems, 35:20744–20757, 2022.
[169] Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601, 2023.
[170] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao.
ReAct: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629, 2022.
[171] Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, et al. Retroformer: Retrospective large language agents with policy gradient optimization. arXiv preprint arXiv:2308.02151, 2023.
[172] Ceyao Zhang, Kaijie Yang, Siyi Hu, Zihao Wang, Guanghe Li, Yihang Sun, Cheng Zhang, Zhaowei Zhang, Anji Liu, Song-Chun Zhu, Xiaojun Chang, Junge Zhang, Feng Yin, Yitao Liang, and Yaodong Yang. ProAgent: Building proactive cooperative AI with large language models, 2023.
[173] Chenrui Zhang, Lin Liu, Jinpeng Wang, Chuyuan Wang, Xiao Sun, Hongyu Wang, and Mingchen Cai.
PREFER: Prompt ensemble learning via feedback-reflect-refine. arXiv preprint arXiv:2308.12033, 2023.
[174] Danyang Zhang, Lu Chen, Situo Zhang, Hongshen Xu, Zihan Zhao, and Kai Yu. Large language model is semi-parametric reinforcement learning agent. arXiv preprint arXiv:2306.07929, 2023.
[175] Danyang Zhang, Lu Chen, Zihan Zhao, Ruisheng Cao, and Kai Yu. Mobile-Env: An evaluation platform and benchmark for interactive agents in LLM era. arXiv preprint arXiv:2305.08144, 2023.
[176] Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B Tenenbaum, Tianmin Shu, and Chuang Gan. Building cooperative embodied agents modularly with large language models. arXiv preprint arXiv:2307.02485, 2023.
[177] Andrew Zhao, Daniel Huang, Quentin Xu, Matthieu Lin, Yong-Jin Liu, and Gao Huang.
ExpeL: LLM agents are experiential learners, 2023.
[178] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. A survey of large language models. arXiv preprint arXiv:2303.18223, 2023.
[179] Wanjun Zhong, Lianghong Guo, Qiqi Gao, and Yanlin Wang. MemoryBank: Enhancing large language models with long-term memory. arXiv preprint arXiv:2305.10250, 2023.
[180] Shuyan Zhou, Frank F Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Yonatan Bisk, Daniel Fried, Uri Alon, et al.
Webarena: A realistic web environment for building autonomous agents. arXiv preprint arXiv:2307.13854, 2023. [181] Wei Zhou, Xiangyu Peng, and Mark Riedl. Dialogue shaping: Empowering agents through npc interaction. arXiv preprint arXiv:2307.15833, 2023. [182] Xuanhe Zhou, Guoliang Li, and Zhiyuan Liu. Llm as dba. arXiv preprint arXiv:2308.05481, 2023. [183] Andrew Zhu, Lara J Martin, Andrew Head, and Chris Callison-Burch.
Calypso: Llms as dungeon masters' assistants. arXiv preprint arXiv:2308.07540, 2023. [184] Xizhou Zhu, Yuntao Chen, Hao Tian, Chenxin Tao, Weijie Su, Chenyu Yang, Gao Huang, Bin Li, Lewei Lu, Xiaogang Wang, et al. Ghost in the minecraft: Generally capable agents for open-world environments via large language models with text-based knowledge and memory. arXiv preprint arXiv:2305.17144, 2023.
[185] Mingchen Zhuge, Haozhe Liu, Francesco Faccio, Dylan R Ashley, Róbert Csordás, Anand Gopalakrishnan, Abdullah Hamdi, Hasan Abed Al Kader Hammoud, Vincent Herrmann, Kazuki Irie, et al. Mindstorms in natural language-based societies of mind. arXiv preprint arXiv:2305.17066, 2023. [186] Terry Yue Zhuo, Zhuang Li, Yujin Huang, Yuan-Fang Li, Weiqing Wang, Gholamreza Haffari, and Fatemeh Shiri. On robustness of prompt-based semantic parsing with large pre-trained language model: An empirical study on codex. arXiv preprint arXiv:2301.12868, 2023. [187] Caleb Ziems, William Held, Omar Shaikh, Jiaao Chen, Zhehao Zhang, and Diyi Yang. Can large language models transform computational social science? arXiv preprint arXiv:2305.03514, 2023.
# Leveraging Large Language Models for Pre-trained Recommender Systems

Zhixuan Chu*1, Hongyan Hao*1, Xin Ouyang1, Simeng Wang1, Yan Wang1, Yue Shen1, Jinjie Gu1, Qing Cui1, Longfei Li1, Siqiao Xue1, James Y Zhang1, Sheng Li2

1Ant Group 2University of Virginia

{chuzhixuan.czx, hongyanhao.hhy, xin.oyx, simeng.wsm, luli.wy, zhanying, jinjie.gujj, cuiqing.cq, longyao.llf, siqiao.xsq, james.z}@antgroup.com, [email protected]

# Abstract
Recent advancements in recommendation systems have shifted towards more comprehensive and personalized recommendations by utilizing large language models (LLM). However, integrating the commonsense knowledge and reasoning abilities of LLMs into recommendation systems remains a challenging problem. In this paper, we propose RecSysLLM, a novel pre-trained recommendation model based on LLMs. RecSysLLM retains LLM reasoning and knowledge while integrating recommendation domain knowledge through unique designs of data, training, and inference. This allows RecSysLLM to leverage LLMs' capabilities for recommendation tasks in an efficient, unified framework. We demonstrate the effectiveness of RecSysLLM on benchmarks and real-world scenarios. RecSysLLM provides a promising approach to developing unified recommendation systems by fully exploiting the power of pre-trained language models.

# Introduction

The realm of recommendation has gained considerable attention in recent years due to its ability to drive business growth and enhance user engagement. Recent advancements in recommender systems have shifted towards incorporating diverse information and catering to a broader range of application scenarios, rather than focusing on task-specific architectures. This shift has been driven by the need for more comprehensive and personalized recommendations, as well as the availability of new data sources and knowledge (Geng et al. 2022; Chu et al. 2022; Hui et al. 2022; Sheu et al. 2021; Li and Zhao 2021; Jiang et al. 2022; Xue et al. 2021). In addition, with the advent of the Large Language Model (LLM) (Radford et al. 2019; Brown et al. 2020; Ouyang et al. 2022), we have witnessed an unprecedented surge in the capabilities of natural language processing. The power of LLM lies in its ability to understand and generate human-like language. LLM has also enabled the extraction of implicit knowledge from text data (Gu et al. 2023; Yoneda et al. 2023; Zhao et al. 2023).
This newfound capability of LLM has opened up exciting avenues for the integration of semantic information into recommender systems and provides a wealth of insights into user preferences and behaviors (Shi et al. 2023; Zhao, Tan, and Mei 2022).
As a result, incorporating LLM into recommender systems has become a crucial step toward providing a powerful and comprehensive paradigm for recommendation tasks. In the following, we will discuss the new generation of recommendation model paradigms from two directions, i.e., the unified pre-trained recommendation model and the combination of LLM and recommendation model.

On the one hand, training a pre-trained recommendation model can help overcome the limitations of existing recommendation approaches that require designing task-specific architectures and training objectives. Traditional recommendation methods have focused on a single task, such as personalized product recommendations, contextual advertising, customer segmentation, and so on, making them less adaptable to new tasks and limiting their ability to generalize to new domains. By training a pre-trained recommendation model, we can leverage the power of pre-trained models to learn generalizable representations of user behavior and product characteristics (Tsai et al. 2023; Zhao, Tan, and Mei 2022) that can be applied to a variety of recommendation tasks. Overall, a pre-trained recommendation model provides a flexible and scalable solution that can be adapted to a variety of recommendation tasks. Since recommendation tasks usually share a common user-item pool, features, behavioral sequences, and other contextual information, we believe it is promising to merge even more recommendation tasks into a unified framework so that they can implicitly transfer knowledge to benefit each other and enable generalization to other unseen tasks (Xie et al. 2022).

On the other hand, integrating LLMs into recommendation systems has several significant advantages. These advantages are linked to the LLM's capabilities in thinking, reasoning, and discovering implicit relationships within textual data, grounded in its rich background knowledge and logical chains.
(1) By leveraging the semantic information in natural language data, LLMs can help the recommendation system understand and infer the relationships between user features and behavioral sequences, and among entities within behavioral sequences. This allows the recommendation system to understand the user's needs and preferences in a more comprehensive way. (2) Another benefit of integrating LLMs into recommendation systems is the ability to leverage the implicit knowledge that is hidden in
*These authors contributed equally.

the models. LLMs are trained on vast amounts of textual data and can help to understand the relationships between different concepts and ideas. By incorporating LLMs into recommendation systems, this implicit knowledge can be used to generate more divergent and logical recommendations. This can lead to more creative and unexpected recommendations that the user may not have considered otherwise. (3) By leveraging the natural language processing capabilities of LLMs, recommendation tasks that previously required separate specialized systems can now be integrated into a unified framework. The pretrained knowledge and few-shot learning abilities of LLMs allow recommendation models to be rapidly adapted to new domains with limited data. Overall, the natural language processing power and versatility of LLMs can help merge more recommendation tasks into a unified framework. Furthermore, a comprehensive survey on recommendations and LLMs is provided in the Appendix. This survey covers the motivation behind them, current development, and challenges.

However, constructing a robust and integrated recommendation system that fully utilizes large language models' immense knowledge and reasoning capacities poses several key challenges. Directly training a pre-trained recommendation model from scratch is not only a waste of time and data collection efforts but also lacks the general common sense and reasoning capabilities that underpin modern large language models. Meanwhile, directly fine-tuning a pre-trained LLM model on recommendation data also has drawbacks. Recommendation data has distinct characteristics - such as fixed entities and sequential user behaviors - that differ from the raw text corpora used to train language models. As such, fine-tuning may erase much of the capabilities specific to recommendation tasks.
Therefore, we propose a novel pre-trained recommendation paradigm (RecSysLLM) built on a pre-trained large language model through unique designs for recommendation in three phases, i.e., the data phase, training phase, and inference phase. Our model retains the reasoning ability and rich knowledge contained in large language models while integrating recommendation-specific knowledge. It directly inherits the parameters and framework of the original large language model, but also designs and extends several mechanisms in the data phase (textualization and sampling), training phase (mask, position, and ordering), and inference phase (dynamic position infilling).
These modifications do not discard the tokenization, parameters, structure, or previously learned knowledge in the LLM. On this basis, recommendation data is used to fine-tune it. The significant advantage of this pre-trained recommendation model is that it can utilize the reasoning capabilities and rich knowledge of large language models while incorporating domain-specific knowledge of the recommendation system through parameter-efficient fine-tuning on user profiles and behavioral sequence data. Another crucial benefit of this model is that it can be easily adapted to different downstream recommendation sub-tasks. We evaluate the proposed model on extensive benchmark datasets and real-world scenarios. The experimental results demonstrate its effectiveness in improving the quality of recommendations. Overall, our proposed pre-trained recommendation model provides a promising approach for building recommendation systems that are efficient, effective, and unified.

# RecSysLLM Pretraining Mechanism

To fully take advantage of LLM and domain knowledge in recommendation tasks, we need to modify and fine-tune the existing LLM to get a pre-trained recommendation model. However, conventional large language models are trained on general knowledge and coherent corpora, and the framework of the model is not designed for behavioral sequence data and recommendation tasks. To address these two points, we make modifications in three phases, i.e., the data, training, and inference phases, to transform a conventional pre-trained language model into a pre-trained recommendation model. The whole framework is illustrated in Figure 1. This pre-trained recommendation model has been employed in real-world applications in Chinese scenarios, so we take the GLM (Du et al. 2021), which is bilingual in Chinese and English, as an example to introduce the RecSysLLM pretraining mechanism.
Our model can also be adapted to other large language models with minor modifications.
# Data Phase

In the data phase, textualizing tabular data is often the easiest and most straightforward approach for implementing large language models. For the pre-training of RecSysLLM, we first textualize conventional tabular data, such as user features stored in a table with rows and columns, into text. Since large language models are originally trained on textual data, text-based features can be easily combined with text-based behavioral sequences and other text information, which helps our model better capture the relationship between features and behavioral sequences. In addition, textualizing tabular data allows for greater flexibility in how the data are used in the following tasks.

Compared with ordinary language texts, the training texts in the recommendation system should take into account the interests and preferences of users from different periods (Yu et al. 2019). Long-term preferences are usually stable and reflect the general preferences of a user. These preferences do not change frequently over time, but they lack timeliness and may not reflect current interests. On the other hand, short-term preferences tend to change frequently over time and are more reflective of a user's current interests. We aim to use different periods of preferences to provide accurate and relevant recommendations to users, which can balance the user's general interests with their current needs. Therefore, we sample behavioral sequences from long-term preferences (10%), medium-term preferences (30%), and short-term preferences (60%). Long-term preferences capture the user's preferences that have remained consistent for an extended period of time, typically spanning several months or years. Medium-term preferences capture the user's preferences that have developed and changed over a shorter period of time, typically spanning several weeks or months.
Short-term preferences can improve recommendation accuracy by providing the system with the user's most recent preferences, spanning several days or hours.
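A minimal sketch of this data phase, with invented field names and fixed one-week/one-month/one-year cutoffs as assumptions (the paper does not give concrete window boundaries):

```python
import random

DAY = 86400  # seconds per day


def textualize_user(row):
    """Turn one tabular user-feature row (a dict) into plain text.
    The field names are illustrative, not the paper's actual schema."""
    return ", ".join(f"{k}: {v}" for k, v in row.items())


def sample_behavior_window(events, now, weights=(0.1, 0.3, 0.6)):
    """Sample a behavioral sequence from the long-, medium-, or short-term
    window with the 10%/30%/60% split described above. `events` is a list
    of (timestamp, item_text) pairs; the cutoffs are assumptions."""
    windows = [
        [e for e in events if e[0] >= now - 365 * DAY],  # long-term
        [e for e in events if e[0] >= now - 30 * DAY],   # medium-term
        [e for e in events if e[0] >= now - 7 * DAY],    # short-term
    ]
    chosen = random.choices(windows, weights=weights, k=1)[0]
    return " -> ".join(item for _, item in chosen)
```

The textualized features and the sampled behavior text can then simply be concatenated into one training string.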
Figure 1: This is the framework of RecSysLLM based on a pre-trained generative language model (GLM). To transform the GLM into a specialized model for recommendation systems, several modifications are made while preserving the core knowledge and capabilities of the original language model architecture, such as the new mask mechanism, span order, positional encoding, dynamic position mechanism, and so on.

# Training Phase

To be consistent with the architecture of GLM, our model is still trained by optimizing an autoregressive blank infilling objective based on an input text x = [x1, ..., xn]. Different from the general language text in GLM, our input text is composed of user features and behavioral sequences. Although textualized user features and behavioral sequences are also composed of multiple tokens, they often represent a complete meaning as a whole. If they are split into different parts, like regular text, they will lose their unique meaning.
In addition, the LLM's power comes from the way it tokenizes and processes text. It has been trained on a vast amount of data and has learned to recognize patterns and relationships between tokens, enabling it to identify entities accurately and extract information. If we were to create a new tokenization method, we would lose the LLM's power. Therefore, to maintain the LLM's power and supplement the new knowledge in the recommendation data, it is best to leverage the existing tokenization and enhance it with additional information and capabilities rather than create a new tokenization. In the following, we name the attributes in user features and the items in the behavioral sequences as entities, which means that they are complete units with fixed meanings.
Therefore, as shown in the "Entities" part of Figure 1, our data are composed of plain language text and entities, where (x1, x2, and x3) have merged to form e1 and (x6 and x7) to form e2. x4 and x5 are separate tokens.

Mask Mechanism. To inject the new knowledge of recommendation tasks based on the original LLM, we follow the principles of the LLM and design a new mask mechanism and position strategies.
Similar to the GLM (Du et al. 2021), multiple text spans {s1, ..., sm} are sampled, where each span si corresponds to a series of consecutive tokens [si,1, ..., si,li] in x. Each span is replaced with a single [MASK] token. The remaining text and [MASK]s form a corrupted text xcorrupt. In the GLM, since there is no notion of an entity, tokens can be randomly sampled into spans. However, in our model, the multiple consecutive tokens composing an entity should not be split into different parts. In other words, the tokens of an entity are treated as a whole. The [MASK] mechanism will not break complete entities, which highlights the whole structure of entities and helps to capture the interrelationship between entities. For example, as shown in the "Masks" part of Figure 1, x1, x2, and x3 composing e1 are blocked as a whole, and the single token x5 is also blocked. Therefore, we form the xcorrupt with [M], x4, [M], x6, and x7 in the
"Division" part of Figure 1.

To cover different language processing tasks, we adopt the multi-task pretraining setup (Du et al. 2021) with entity-level [M], sentence-level [sM], and document-level [gM] masks. Specifically, entity-level masking refers to randomly blanking out continuous spans of tokens from the input text, following the idea of autoencoding, which captures the interdependencies between entities. Sentence-level masking restricts the masked spans to full sentences. Document-level masking samples a single span whose length is drawn from a uniform distribution over 50%-100% of the original length; this objective aims at long text generation.

Span Order. We implement the autoregressive blank infilling objective with the following techniques. The input x is divided into two parts: one part is the corrupted text xcorrupt, and the other consists of the masked spans. Our model automatically learns a bidirectional encoder for the first part and a unidirectional decoder for the second part in a unified model. The model predicts the missing tokens in the spans from the corrupted text in an autoregressive manner, which means that when predicting the missing tokens in a span, the model has access to the corrupted text and the previously predicted spans. Instead of randomly permuting the order of the spans as in the original GLM (Du et al. 2021), we keep all spans in chronological order to preserve the interrelationship among different entities. Formally, we define the pretraining objective over a length-m index sequence [1, 2, ..., m] as

sum_{i=1}^{m} log p(s_i | xcorrupt, s_1, ..., s_{i-1}; θ)    (1)

Positional Encoding. To enable autoregressive generation, each span is padded with special tokens [START] and [END], for input and output, respectively.
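A minimal sketch of the entity-level corruption described above, assuming entities are given as (start, end) token-index spans and using a symbolic [M] token; the helper names and the per-unit masking probability are our own:

```python
import random


def entity_level_corrupt(tokens, entity_spans, mask_prob=0.3, seed=None):
    """Blank out whole units: an entity span is masked as one block and is
    never split, mirroring the entity-level [M] objective. `entity_spans`
    is a list of (start, end) index pairs; lone tokens are units of length
    one. Returns (corrupted, masked_spans)."""
    rng = random.Random(seed)
    # Build the list of indivisible units: entities plus single tokens.
    units, i, spans = [], 0, sorted(entity_spans)
    for s, e in spans:
        units += [(j, j + 1) for j in range(i, s)]
        units.append((s, e))
        i = e
    units += [(j, j + 1) for j in range(i, len(tokens))]

    corrupted, masked = [], []
    for s, e in units:
        if rng.random() < mask_prob:
            corrupted.append("[M]")       # one [M] replaces the whole unit
            masked.append(tokens[s:e])
        else:
            corrupted.extend(tokens[s:e])
    return corrupted, masked
```

For the Figure 1 example (entities at indices 0-2 and 5-6), a masked unit is always a complete entity or a single free token, never a fragment of an entity.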
To be consistent with the original LLM, we cannot arbitrarily modify, add, or reduce the original positional strategies. Therefore, we extend the 2D positional encodings (Du et al. 2021) based on entities. Specifically, each token is encoded with two positional ids, i.e., an inter-position id and an intra-position id.

The inter-position id represents the position in the corrupted text xcorrupt. For the masked spans, it is the position of the corresponding [MASK] token. For the intra-position id, we follow its essential meaning in the original LLM, which still refers to the position within a unit, but instead of the scope of the whole span, we extend it to a finer granularity. For entities, it represents the intra-relationship among entity tokens. As shown in Figure 1, for separate tokens (not in entities) in the encoder part ([M], x4, [M]), the intra-position ids are 0. For consecutive tokens in entities (x6 and x7), they are numbered in chronological order. For tokens in the autoregressive blank infilling part, they range from 1 to the length of the entity including [S], such as (entity: [S], x1, x2, x3 → 1, 2, 3, 4) and (independent token: [S], x5 → 1, 2). The two positional ids are projected into two vectors via learnable embedding tables, which are both added to the input token embeddings.

Figure 2: This is the dynamic position mechanism.
When one token is generated, it will be judged as part of an entity or not. If it and the previous token belong to one entity, the intra-position id will continue to grow. Otherwise, it will start at 1 again.

# Inference Phase

Because our pre-trained model is designed to fit different downstream tasks, the length of the generated text is unknown beforehand and must be flexible for the different tasks. Further, due to the existence of entities, the intra-position ids represent the relative position within the entity.
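As background for the dynamic mechanism, the 2D positional scheme described in the Training Phase can be sketched as follows (a simplified version that ignores [START]/[END] bookkeeping; function names are ours):

```python
def encoder_positions(corrupted, entity_spans):
    """Position ids for the bidirectional (corrupted) part. `entity_spans`
    lists (start, end) ranges of entity tokens inside `corrupted`; [M] and
    other lone tokens get intra id 0, entity tokens count 1, 2, ..."""
    inter = list(range(1, len(corrupted) + 1))
    intra = [0] * len(corrupted)
    for s, e in entity_spans:
        for k, j in enumerate(range(s, e), start=1):
            intra[j] = k
    return inter, intra


def span_positions(mask_inter_id, span_len):
    """Position ids for one autoregressive span: every token (including the
    leading [S]) shares the inter id of its [MASK]; intra ids run 1..len+1."""
    inter = [mask_inter_id] * (span_len + 1)
    intra = list(range(1, span_len + 2))
    return inter, intra
```

For the corrupted text [M], x4, [M], x6, x7 of Figure 1 (entity x6 x7 at indices 3-4), encoder_positions gives inter ids [1, 2, 3, 4, 5] and intra ids [0, 0, 0, 1, 2], and span_positions(1, 3) reproduces ([1, 1, 1, 1], [1, 2, 3, 4]) for the masked entity [S], x1, x2, x3.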
As shown in the "Inference Phase" part of Figure 1, we cannot specify the intra-position ids in advance during autoregressive blank infilling. Hence, we designed a dynamic position mechanism for the mask and position modifications made during the inference phase. It conducts an autoregressive judgment to determine and fill in the intra-position ids one by one as each token is generated in the autoregressive generation procedure. Specifically, we establish an entity pool beforehand, which stores all the tokens of the entities that exist in our recommendation task. When one token is generated, it is judged as part of an entity or not. We utilize a trie (Bodon and Rónyai 2003), a tree data structure used for locating specific keys within a set, to check whether the generated token and the previous token belong to the same entity. If they belong to one entity, the intra-position id will continue to grow. Otherwise, it will start at 1 again.
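A sketch of the trie-based autoregressive judgment, assuming the entity pool is given as lists of tokens; class and method names are our own:

```python
class EntityTrie:
    """Minimal trie over entity token sequences (illustrative). Used at
    decoding time to judge whether a newly generated token extends the
    entity begun by the previous tokens, per the dynamic position rule."""

    def __init__(self, entities):
        self.root = {}
        for ent in entities:  # each entity is a list of tokens
            node = self.root
            for tok in ent:
                node = node.setdefault(tok, {})

    def assign_intra_ids(self, generated):
        """Intra-position ids: keep growing while consecutive tokens follow
        one entity path in the trie; otherwise restart at 1."""
        ids, node, depth = [], self.root, 0
        for tok in generated:
            if depth > 0 and tok in node:      # extends the current entity
                node, depth = node[tok], depth + 1
            elif tok in self.root:             # begins a new entity
                node, depth = self.root[tok], 1
            else:                              # plain token outside any entity
                node, depth = self.root, 0
            ids.append(depth if depth > 0 else 1)
        return ids
```

For example, EntityTrie([["Watch", "Star", "Wars"]]).assign_intra_ids(["ok", "Watch", "Star", "Wars"]) returns [1, 1, 2, 3]: the id grows only while the generated tokens stay on one entity path.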
The detailed procedure is illustrated in Figure 2.

# Experiments

Experimental Setup

Datasets. We evaluate our method on three real-world e-commerce datasets from Amazon.com, spanning the categories of Sports & Outdoors, Beauty, and Toys & Games. The datasets contain user ratings and reviews from 2019, along with transaction records between January 1 and December 31 (Zhou et al. 2020; Xue et al. 2022, 2023). Key statistics of the resulting datasets are provided in Table 1.

Metrics. Following the experiments in (Geng et al. 2022), we cover five different task families (rating, sequential recommendation, explanation, review, and direct recommendation) to facilitate multitask pretraining for recommendation. For rating prediction, we adopt Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) as evaluation metrics. For the sequential recommendation and direct recommendation tasks, we employ top-k Hit Ratio (HR@k) and Normalized Discounted Cumulative Gain (NDCG@k) to evaluate the performance and report HR@1, 5, 10 and NDCG@5, 10. For explanation generation and review summarization, we evaluate different methods with BLEU-4, ROUGE-1, ROUGE-2, and ROUGE-L. Lower values of RMSE and MAE indicate better performance, while higher values are preferred for all other metrics. In all result tables, bold numbers represent the best performance, while underlined numbers refer to the second-best performance.
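For reference, HR@k and NDCG@k for a single ground-truth item per user can be computed as follows (a standard formulation, not code from the paper):

```python
import math


def hit_ratio_at_k(ranked, target, k):
    """HR@k: 1 if the ground-truth item appears in the top-k ranking."""
    return 1.0 if target in ranked[:k] else 0.0


def ndcg_at_k(ranked, target, k):
    """NDCG@k with a single ground-truth item: the ideal DCG is 1, so the
    score is 1/log2(rank+1) if the item is within the top k, else 0."""
    for rank, item in enumerate(ranked[:k], start=1):
        if item == target:
            return 1.0 / math.log2(rank + 1)
    return 0.0
```

Both scores are then averaged over all test users to produce the reported HR@k and NDCG@k values.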
Baselines for Multiple Tasks. To demonstrate competence on a wide range of recommendation-related tasks, we adopt the same representative approaches as (Geng et al. 2022) for the different tasks: Rating Prediction (MF (Koren, Bell, and Volinsky 2009) and MLP (Cheng et al. 2016)), Direct Recommendation (BPR-MF (Rendle et al. 2009), BPR-MLP (Cheng et al. 2016), and SimpleX (Mao et al. 2021)), Sequential Recommendation (Caser (Tang and Wang 2018), HGN (Ma, Kang, and Liu 2019), GRU4Rec (Hidasi et al. 2016), BERT4Rec (Sun et al. 2019), FDSA (Zhang et al. 2019), SASRec (Kang and McAuley 2018), and S3-Rec (Zhou et al. 2020)), Explanation Generation (Attn2Seq (Dong et al. 2017), NRT (Li et al. 2017), PETER (Li, Zhang, and Chen 2021), and PETER+), and Review Summarization (T0 (Sanh et al. 2022) and GPT-2 (Radford et al. 2019)). The detailed baselines are provided in the Appendix.

Implementation. To facilitate multitask prompt-based pretraining for recommendation, Geng et al. (2022) created a collection of personalized prompt templates. The collection covers five different task families: rating, sequential recommendation, explanation, review, and direct recommendation.
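A hypothetical illustration of how such personalized templates are turned into training pairs; the template wording below is invented for illustration, not taken from the released collection:

```python
# Hypothetical P5-style templates; the real collection is given in Geng et al. (2022).
TEMPLATES = {
    "rating": "What star rating do you think {user} will give item {item}?",
    "sequential": "Given the purchase history of {user}: {history}, predict the next item.",
    "direct": "Should we recommend item {item} to {user}?",
}


def build_example(task, target, **fields):
    """Fill one personalized prompt template into a (source, target)
    training pair for multitask prompted pretraining."""
    return TEMPLATES[task].format(**fields), target


src, tgt = build_example(
    "sequential",
    target="Village Board Game",
    user="user_1767",
    history="Gloom, Cards Against Humanity",
)
```

Here src becomes "Given the purchase history of user_1767: Gloom, Cards Against Humanity, predict the next item." and tgt is the next-item label; holding out some templates per family yields the zero-shot evaluation split.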
Table 1: Basic statistics of the experimental datasets.

Dataset        Sports    Beauty    Toys
#Users         35,598    22,363    19,412
#Items         18,357    12,101    11,924
#Reviews       296,337   198,502   167,597
#Sparsity (%)  0.0453    0.0734    0.0724

The prompts include personalized fields for users and items to help the model discover user-item preferences. For rating prediction, prompts ask to predict a user's rating or preference for an item. For sequential recommendation, prompts ask to predict the next item a user will interact with. For explanation, prompts ask to generate text explaining a user's preferences. For review, prompts summarize or predict ratings from reviews. For direct recommendation, prompts ask whether to recommend an item to a user. The complete collection of personalized prompts with examples is provided in the Appendix of (Geng et al. 2022). These prompts enable the building of diverse training examples from raw data for multitask pretraining. We pretrain our RecSysLLM with diverse training examples with different prompt templates from all five task families to verify its multitask learning ability. Besides, we adopt a part of the prompts in each task family for zero-shot evaluation, while all remaining prompts are utilized for multitask prompted pretraining. As a result, we are able to not only compare the performance across various recommendation tasks but also evaluate the zero-shot generalization capability on unseen prompts.

Our RecSysLLM model for these English language tasks leverages the powerful GLM-10B for English (Du et al. 2021) model as a foundation. GLM is a General Language Model pretrained with an autoregressive blank-filling objective and can be finetuned on various natural language understanding and generation tasks. Our approach builds on this pre-trained GLM-10B foundation by utilizing a parameter-efficient fine-tuning method called LoRA (Low-Rank Adaptation) (Hu et al. 2021) to adapt the model to our specific recommendation tasks.
2308.10837#15
2308.10837#17
2308.10837
[ "1810.04805" ]
2308.10837#17
Leveraging Large Language Models for Pre-trained Recommender Systems
s broad lan- guage knowledge while calibrating it to our RecSysLLM objectives. We inject trainable rank decomposition matri- ces into each query key value, dense, dense h to 4h and dense 4h to h layer of Transformer architecture in GLM- 10B. We pretrain our RecSysLLM for eight epochs with AdamW optimization (Loshchilov and Hutter 2017) on four NVIDIA RTX A100 GPUs. In order to achieve efficient use of memory and distributed training, we use the DeepSpeed (Rasley et al. 2020) module. The batch size is set to 32 per GPU. We set the peak learning rate as 1 Ã 10â 5 and use a warmup strategy to adjust the learning rate. In addition, we set the maximum length of input tokens to 1024. # Performance. We pretrain our RecSysLLM on a diverse set of training ex- amples utilizing different prompt templates across all five Table 2: Performance on rating prediction. The shadow refers to the test on unseen prompts in a zero-shot manner. Methods Sports RMSE MAE Beauty RMSE MAE Toys RMSE MAE 1.0234 1.1277 1.0357 RecSysLLM 1.0410 1.0292 RecSysLLM 1.0278 MF MLP P5 P5 0.7935 0.7626 0.6813 0.7012 0.6864 0.6631 1.1973 1.3078 1.2843 1.2721 1.2870 1.2671 0.9461 0.9597 0.8534 0.8431 0.8531 0.8235 1.0123 1.1215 1.0544 1.0246 1.0245 1.0112 0.7984 0.8097 0.7177 0.7012 0.6931 0.6014 task families. This is to thoroughly verify its multitask learn- ing capabilities. The results in Tables 2-7 demonstrate that for tasks with seen prompt templates, our model reaches the same conclusions as the P5 model and achieves compara- ble or superior performance.
2308.10837#16
2308.10837#18
2308.10837
[ "1810.04805" ]
2308.10837#18
Leveraging Large Language Models for Pre-trained Recommender Systems
However, we were pleasantly surprised to discover that for unseen prompt templates in a zero-shot manner, our model significantly surpasses P5. (1) From Table 2, for rating prediction, our RecSysLLM gets similar performance on prompt in the train data set, but it has better RMSE and MAE on all three datasets compared with P5 on zero-shot setting. It reflects that our RecSysLLM inherits the semantic understanding capacity of LLM on un- seen prompts, which meets our expectations for the LLM. (2) In Table 4, for the sequential recommendation, our Rec- SysLLM surpasses P5 on Beauty and Toys. It gets better per- formance than P5 on unseen prompts in a zero-shot manner. The results show that our RecSysLLM gains inter- and intra- entity knowledge and make more reasonable predictions. (3) As shown in Table 5, our RecSysLLM demonstrates supe- rior performance on the task of explanation generation, both with and without feature-based hints. The large improve- ments in natural language processing abilities of LLMs un- derlie this strong performance. Moreover, the considerable increase in scores when hints are provided highlights the critical role prompt engineering plays in eliciting the full capabilities of large language models. Through prompt de- sign and the generative power of LLMs, our system achieves state-of-the-art results on this challenging task. (4) The re- view summarization results further demonstrate the superi- ority of our RecSysLLM, as shown in Table 6. Despite hav- ing fewer parameters than T0 (7 billion vs 11 billion), our model attains higher performance across all evaluation met- rics. These gains over strong baselines like T0 underscore the efficiency and effectiveness of our approach. The capa- bility to produce high-quality summaries with fewer param- eters highlights the strength of our method, delivering strong performance without the need for extremely large models. 
(5) For the task of direct recommendation, we evaluate on open-ended question prompts to test the ability of generative recommendation. The results are illustrated in Table 7. Our RecSysLLM outperforms P5 on most evaluation metrics for this task.
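As a concrete reference for the ranking metrics reported in these tables, here is a minimal sketch of HR@K and NDCG@K for a single ground-truth item (the helper names and toy data are illustrative, not code from the paper):

```python
import math

def hit_rate_at_k(ranked_items, true_item, k):
    """HR@K: 1 if the ground-truth item appears in the top-k ranking, else 0."""
    return 1.0 if true_item in ranked_items[:k] else 0.0

def ndcg_at_k(ranked_items, true_item, k):
    """NDCG@K with a single relevant item: 1/log2(rank+1) if ranked in the top k."""
    if true_item in ranked_items[:k]:
        rank = ranked_items.index(true_item) + 1  # 1-based rank
        return 1.0 / math.log2(rank + 1)
    return 0.0

# Toy example: the model ranks item ids; the ground-truth next item is 7.
ranking = [3, 7, 1, 9, 5]
print(hit_rate_at_k(ranking, 7, 5))           # 1.0
print(round(ndcg_at_k(ranking, 7, 5), 4))     # 0.6309 (= 1/log2(3), rank 2)
```

Per-user scores are averaged over the test set; NDCG additionally rewards placing the ground-truth item higher in the list, which is why it can separate methods with identical HR@K.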
The SimpleX model is a strong collaborative filtering baseline, but RecSysLLM achieves better top-1 item ranking compared to SimpleX. To further analyze the performance gap between the P5 model and our proposed method, we conducted an in-depth examination of the training data.

Table 3: The training sequences in the Amazon Toys dataset for P5 and our RecSysLLM model.

| Sequence | P5 | RecSysLLM |
|---|---|---|
| 1 | 1, 2, 3, 4, 5, 6, 7 | Hasbro Electronic Catch Phrase, Gloom, Cards Against Humanity, Carcassonne Basic Game, Asmodee 7 Wonders Wonder Pack, Village Board Game, Rory's Story Cubes - Voyages |
| 2 | 8, 9, 10, 11, 12 | Megabloks CAT 3in1 Ride On Truck, Fisher-Price Jake and The Never Land Pirates - Jake's Musical Pirate Ship Bucky, VTech KidiBeats Drum Set, Playskool Heroes Transformers Rescue Bots Blades the Copter-Bot Figure, LeapFrog LeapPad2 Power Learning Tablet |
| 1767 | 692, 5235, 5765, 709, 7162 | Badger Basket White Doll Crib With Cabinet Bedding And Mobile - Pink/White, Badger Basket Doll High Chair With Plate Bib And Spoon - Pink/White, Fisher-Price Brilliant Basics Lil Snoopy (Colors May Vary), LeapFrog Shapes and Sharing Picnic Basket, JC Toys 20" La Baby Doll |
| 17788 | 10092, 9958, 8925, 2881, 2706 | Webkinz Velvety Elephant, Webkinz Love Frog Limited Edition Release, The Walking Dead TV Board Game, Zombie Survival Playing Cards, McFarlane Toys The Walking Dead Comic Series 2 Penny The Governors Daughter Action Figure |

Table 3 illustrates that in the P5 model, the items are simply represented by numeric
Table 4: Performance on the sequential recommendation. Rows marked (zero-shot) are tested on unseen prompts in a zero-shot manner. The first four metric columns are Sports, the next four Beauty, and the last four Toys.

| Methods | HR@5 | NDCG@5 | HR@10 | NDCG@10 | HR@5 | NDCG@5 | HR@10 | NDCG@10 | HR@5 | NDCG@5 | HR@10 | NDCG@10 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Caser | 0.0116 | 0.0072 | 0.0194 | 0.0097 | 0.0205 | 0.0131 | 0.0347 | 0.0176 | 0.0166 | 0.0107 | 0.0270 | 0.0141 |
| HGN | 0.0189 | 0.0120 | 0.0313 | 0.0159 | 0.0325 | 0.0206 | 0.0512 | 0.0266 | 0.0321 | 0.0221 | 0.0497 | 0.0277 |
| GRU4Rec | 0.0129 | 0.0086 | 0.0204 | 0.0110 | 0.0164 | 0.0099 | 0.0283 | 0.0137 | 0.0097 | 0.0059 | 0.0176 | 0.0084 |
| BERT4Rec | 0.0115 | 0.0075 | 0.0191 | 0.0099 | 0.0203 | 0.0124 | 0.0347 | 0.0170 | 0.0116 | 0.0071 | 0.0203 | 0.0099 |
| FDSA | 0.0182 | 0.0122 | 0.0288 | 0.0156 | 0.0267 | 0.0163 | 0.0407 | 0.0208 | 0.0228 | 0.0140 | 0.0381 | 0.0189 |
| SASRec | 0.0233 | 0.0154 | 0.0350 | 0.0192 | 0.0387 | 0.0249 | 0.0605 | 0.0318 | 0.0463 | 0.0306 | 0.0675 | 0.0374 |
| S3-Rec | 0.0251 | 0.0161 | 0.0385 | 0.0204 | 0.0387 | 0.0244 | 0.0647 | 0.0327 | 0.0443 | 0.0294 | 0.0700 | 0.0376 |
| P5 | 0.0364 | 0.0296 | 0.0431 | 0.0318 | 0.0508 | 0.0379 | 0.0664 | 0.0429 | 0.0608 | 0.0507 | 0.0688 | 0.0534 |
| RecSysLLM | 0.0360 | 0.0291 | 0.0417 | 0.0302 | 0.0508 | 0.0381 | 0.0667 | 0.0446 | 0.0676 | 0.0583 | 0.0712 | 0.0596 |
| P5 (zero-shot) | 0.0387 | 0.0312 | 0.0460 | 0.0336 | 0.0493 | 0.0367 | 0.0645 | 0.0416 | 0.0587 | 0.0486 | 0.0675 | 0.0536 |
| RecSysLLM (zero-shot) | 0.0392 | 0.0330 | 0.0512 | 0.0375 | 0.0501 | 0.0361 | 0.0650 | 0.0407 | 0.0630 | 0.0523 | 0.0691 | 0.0540 |
Table 5: Performance on explanation generation (%). Rows marked (zero-shot) are tested on unseen prompts in a zero-shot manner. In each table, the first four metric columns are Sports, the next four Beauty, and the last four Toys.

Without hints:

| Methods | BLEU4 | ROUGE1 | ROUGE2 | ROUGEL | BLEU4 | ROUGE1 | ROUGE2 | ROUGEL | BLEU4 | ROUGE1 | ROUGE2 | ROUGEL |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Attn2Seq | 0.5305 | 12.2800 | 1.2107 | 9.1312 | 0.7889 | 12.6590 | 1.6820 | 9.7481 | 1.6238 | 13.2245 | 2.9942 | 10.7398 |
| NRT | 0.4793 | 11.0723 | 1.1304 | 7.6674 | 0.8295 | 12.7815 | 1.8543 | 9.9477 | 1.9084 | 13.5231 | 3.6708 | 11.1867 |
| PETER | 0.7112 | 12.8944 | 1.3283 | 9.8635 | 1.1541 | 14.8497 | 2.1413 | 11.4143 | 1.9861 | 14.2716 | 3.6718 | 11.7010 |
| P5 | 1.0407 | 14.1589 | 2.1220 | 10.6096 | 0.9742 | 16.4530 | 1.8858 | 11.8765 | 2.3185 | 15.3474 | 3.7209 | 12.1312 |
| RecSysLLM | 1.2673 | 16.7132 | 2.8980 | 13.0104 | 1.5230 | 19.0032 | 3.0422 | 14.7471 | 2.9923 | 16.7823 | 4.8372 | 15.0231 |

With hints:

| Methods | BLEU4 | ROUGE1 | ROUGE2 | ROUGEL | BLEU4 | ROUGE1 | ROUGE2 | ROUGEL | BLEU4 | ROUGE1 | ROUGE2 | ROUGEL |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| PETER+ | 2.4627 | 24.1181 | 5.1937 | 18.4105 | 3.2606 | 25.5541 | 5.9668 | 19.7168 | 4.7919 | 28.3083 | 9.4520 | 22.7017 |
| P5 | 1.4689 | 23.5476 | 5.3926 | 17.5852 | 1.8765 | 25.1183 | 6.0764 | 19.4488 | 3.8933 | 27.9916 | 9.5896 | 22.2178 |
| RecSysLLM | 3.7232 | 30.1129 | 5.0232 | 20.0020 | 4.8232 | 26.9832 | 6.2382 | 21.4842 | 5.9323 | 29.3232 | 9.4234 | 23.9843 |
| P5 (zero-shot) | 1.4303 | 23.3810 | 5.3239 | 17.4913 | 1.9031 | 25.1763 | 6.1980 | 19.5188 | 3.5861 | 28.1369 | 9.7562 | 22.3056 |
| RecSysLLM (zero-shot) | 3.9842 | 30.2913 | 5.8923 | 20.3821 | 5.0021 | 27.3854 | 6.7281 | 22.7439 | 6.2912 | 30.2948 | 10.0329 | 24.9932 |
Table 6: Performance on review summarization (%). The shaded rows in the original refer to the test on unseen prompts in a zero-shot manner. The first four metric columns are Sports, the next four Beauty, and the last four Toys.

| Methods | BLEU2 | ROUGE1 | ROUGE2 | ROUGEL | BLEU2 | ROUGE1 | ROUGE2 | ROUGEL | BLEU2 | ROUGE1 | ROUGE2 | ROUGEL |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| T0 | 2.1581 | 2.2695 | 0.5694 | 1.6221 | 1.2871 | 1.2750 | 0.3904 | 0.9592 | 2.2296 | 2.4671 | 0.6482 | 1.8424 |
| GPT-2 | 0.7779 | 4.4534 | 1.0033 | 1.9236 | 0.5879 | 3.3844 | 0.6756 | 1.3956 | 0.6221 | 3.7149 | 0.6629 | 1.4813 |
| P5 | 2.6910 | 12.0314 | 3.2921 | 10.7274 | 1.9325 | 8.2909 | 1.4321 | 7.4000 | 1.7833 | 8.7222 | 1.3210 | 7.6134 |
| RecSysLLM | 4.2823 | 14.8343 | 4.3984 | 12.4833 | 3.3821 | 9.8103 | 2.8543 | 10.4003 | 4.0320 | 12.2932 | 3.2943 | 10.4092 |

Table 7: Performance on direct recommendation. Rows marked (zero-shot) are tested on unseen prompts in a zero-shot manner.
(Columns: Sports, Beauty, and Toys, each reporting HR@1, HR@5, NDCG@5, HR@10, NDCG@10; rows marked (zero-shot) are tested on unseen prompts.)

| Methods | HR@1 | HR@5 | NDCG@5 | HR@10 | NDCG@10 | HR@1 | HR@5 | NDCG@5 | HR@10 | NDCG@10 | HR@1 | HR@5 | NDCG@5 | HR@10 | NDCG@10 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| BPR-MF | 0.0314 | 0.1404 | 0.0848 | 0.2563 | 0.1220 | 0.0311 | 0.1426 | 0.0857 | 0.2573 | 0.1224 | 0.0233 | 0.1066 | 0.0641 | 0.2003 | 0.0940 |
| BPR-MLP | 0.0351 | 0.1520 | 0.0927 | 0.2671 | 0.1296 | 0.0317 | 0.1392 | 0.0848 | 0.2542 | 0.1215 | 0.0252 | 0.1142 | 0.0688 | 0.2077 | 0.0988 |
| SimpleX | 0.0331 | 0.2362 | 0.1505 | 0.3290 | 0.1800 | 0.0325 | 0.2247 | 0.1441 | 0.3090 | 0.1711 | 0.0268 | 0.1958 | 0.1244 | 0.2662 | 0.1469 |
| P5 | 0.0641 | 0.1794 | 0.1229 | 0.2598 | 0.1488 | 0.0588 | 0.1573 | 0.1089 | 0.2325 | 0.1330 | 0.0386 | 0.1122 | 0.0756 | 0.1807 | 0.0975 |
| RecSysLLM | 0.0654 | 0.2008 | 0.1438 | 0.2984 | 0.1692 | 0.0618 | 0.1612 | 0.1110 | 0.2209 | 0.1302 | 0.0370 | 0.1301 | 0.0808 | 0.1902 | 0.0998 |
| P5 (zero-shot) | 0.0726 | 0.1955 | 0.1355 | 0.2802 | 0.1627 | 0.0608 | 0.1564 | 0.1096 | 0.2300 | 0.1332 | 0.0389 | 0.1147 | 0.0767 | 0.1863 | 0.0997 |
| RecSysLLM (zero-shot) | 0.0892 | 0.2029 | 0.1502 | 0.3001 | 0.1703 | 0.6072 | 0.1502 | 0.1097 | 0.2317 | 0.1302 | 0.0327 | 0.1423 | 0.0825 | 0.1926 | 0.1028 |
IDs based on their order of occurrence in the dataset. This type of simplistic representation cannot capture semantic information about the items. In contrast, our RecSysLLM model represents all items as text strings. The textual representation enables our large language model to understand and capture nuanced interrelationships between items much more effectively. We believe this is the primary reason why our model outperformed P5 across most cases. The textual representation in our model empowers it to ingest semantic details and identify meaningful connections that cannot be derived from IDs alone.

# Applications in real-world dataset

Dataset. The data used in this work was collected from Alipay, a mobile payment platform in China. We extracted user behavior logs, including bills, search queries, and page visits for several recommendation tasks. Each user sequence consists of the user's 500 most recent interactions, spanning over one year of history for some users. The user sequences are used to model evolving user interests and capture both long- and short-term preferences. The training set contains 200,000 sequences, and the test set contains 10,000 sequences. The large-scale real-world dataset enables the modeling of complex user behavior and preferences for various recommendation tasks. The hierarchical categories and sequential interactions provide rich signals for understanding user interests.

Implementation Details. Our RecSysLLM model for Chinese language tasks leverages the powerful ChatGLM-6B (Du et al. 2021) model as a foundation. ChatGLM-6B is an open-source bilingual language model with 6.2 billion parameters, trained on a trillion-token corpus comprised primarily of Chinese text with some English. The model architecture is based on the General Language Model (GLM) framework. Similarly, our approach builds on this pre-trained ChatGLM-6B foundation by utilizing LoRA to adapt the model to our specific recommender system tasks.
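The LoRA idea used for this adaptation can be sketched in miniature. This is an illustrative toy with hypothetical small dimensions, not the actual ChatGLM-6B weights: a frozen weight matrix W is adapted as W + (alpha/r)·BA, so only the two low-rank factors are trained.

```python
import random

d, r, alpha = 64, 8, 16                                      # toy sizes; r << d
W = [[random.random() for _ in range(d)] for _ in range(d)]  # frozen pretrained weight
A = [[0.01 for _ in range(d)] for _ in range(r)]             # trainable, r x d
B = [[0.0 for _ in range(r)] for _ in range(d)]              # trainable, d x r (zero init)

def matmul(X, Y):
    """Plain nested-list matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y))) for j in range(len(Y[0]))]
            for i in range(len(X))]

# Effective weight: W + (alpha/r) * B @ A. With B initialized to zero,
# the adapted model starts out identical to the pretrained one.
delta = matmul(B, A)
W_eff = [[W[i][j] + (alpha / r) * delta[i][j] for j in range(d)] for i in range(d)]

full_params = d * d            # parameters if W were fine-tuned directly
lora_params = r * d + d * r    # trainable parameters under LoRA
print(full_params, lora_params)  # 4096 1024
```

The parameter saving (here 4x, and far larger at d in the thousands) is what makes adapting a 6B-parameter base model to recommendation tasks tractable; the rank r (8 in our setup) trades capacity against cost.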
We set the LoRA rank to 8, a suitable coefficient chosen via an ablation study. Sequential Recommendation. Task Description. In this section, we conduct two sequential recommendation tasks to evaluate the performance of our model, i.e., next-item prediction and candidate recommendation. For next-item prediction, the model directly predicts the next item a user will interact with based on their historical interactions and profiles. For candidate recommendation, given a user's
interaction history, profiles, and a list of candidate items where only one is positive, the model chooses the correct next item. We have benchmarked our model on the Amazon Sports, Beauty, and Toys datasets and demonstrated superior recommendation capabilities compared to other baseline recommender systems. Here, we compare our RecSysLLM to the powerful generative models ChatGPT and the recently announced GPT-4. We also compare our method against a basic fine-tuning approach of ChatGLM on our recommendation tasks. This allows us to analyze the improvements gained by our specialized techniques tailored for LLM-based recommender systems. By evaluating against a simple fine-tuning baseline, we can quantify the benefits of our proposed approach and demonstrate that our architectural choices and training methodology confer meaningful advantages on recommendation performance compared to just fine-tuning a large language model out-of-the-box. Next Item Prediction. The results in Table 8 demonstrate that for next-item prediction, our RecSysLLM achieves performance on par with ChatGPT, with both significantly outperforming the naive ChatGLM fine-tuning and GPT-4.
This is a surprising result, as we expected the larger GPT-4 model to achieve superior performance compared to ChatGPT on this recommendation task due to its greater parameter size and pretraining scale. However, GPT-4 did not exhibit particularly strong results and was not decisively superior to ChatGPT. There are several potential explanations for why GPT-4 underperformed expectations on next-item prediction. First, the dataset and evaluation methodology used for this task may not have fully exercised GPT-4's strengths in areas like few-shot learning and knowledge recall. Second, GPT-4's more powerful generative capabilities may have caused it to diverge too far from the tight distributions of the recommendation data. There could be a mismatch between GPT-4's broad natural language generation skills and the specialized prediction required by the recommender system task. In summary, our specialized RecSysLLM demonstrates that simply utilizing a larger pre-trained language model is not the only path to improved recommendation performance. The model architecture and pretraining objectives also play a vital role. By designing a model specifically for recommendation, focusing the pretraining on recommendation data, and tightly bounding the final fine-tuning, our RecSysLLM is able to match or exceed the performance of even much larger general language models like GPT-4 for next-item prediction. These results highlight the importance of specialized model design in addition to scale for advancing recommendation systems. Candidate Recommendation. For candidate recommendation in Table 9, our RecSysLLM consistently outperforms both ChatGPT and the naive ChatGLM fine-tuning across metrics. This demonstrates the effectiveness of our specialized approach for this task. In contrast to the next-item results, GPT-4 this time achieves the overall best performance on candidate recommendation.
In candidate recommendation, given a user's interaction history, profile, and a list of candidate items where only one is the ground-truth next interaction, the model must choose the correct item from the candidates. With a constrained set of options provided, GPT-4 is able to give full play to its powerful reasoning and deduction capabilities. The limited choice set prevents GPT-4's generative tendencies from leading it astray.
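The difference between the two task formats can be made concrete with hypothetical prompt templates (illustrative only, not the exact prompts used in our experiments):

```python
def next_item_prompt(history):
    """Open-ended generation: the model must produce the next item freely."""
    items = ", ".join(history)
    return (f"A user has interacted with the following items in order: {items}. "
            f"Predict the next item the user will interact with.")

def candidate_prompt(history, candidates):
    """Constrained choice: the model picks one item from a fixed candidate list."""
    items = ", ".join(history)
    opts = "; ".join(f"({i + 1}) {c}" for i, c in enumerate(candidates))
    return (f"A user has interacted with the following items in order: {items}. "
            f"Which of these candidates will the user choose next? {opts}")

history = ["Gloom", "Cards Against Humanity", "Carcassonne Basic Game"]
candidates = ["Village Board Game", "VTech KidiBeats Drum Set",
              "Zombie Survival Playing Cards"]
print(next_item_prompt(history))
print(candidate_prompt(history, candidates))
```

The open-ended template leaves the output space unbounded, which is where a strongly generative model can drift off-distribution; the candidate template restricts the answer to an enumerated set, which is the setting where GPT-4's reasoning pays off in Table 9.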
As a result, GPT-4 is able to leverage its scale and pretraining to achieve the best overall performance on candidate recommendation. In summary, by providing GPT-4 with a focused set of candidates, we can elicit its strengths in logical reasoning while avoiding over-generation. This allows GPT-4 to achieve state-of-the-art results on candidate recommendation, showcasing the benefits of its scale and pretraining. Our specialized RecSysLLM still exceeds the general language models on this task, demonstrating the value of recommendation-specific modeling. But these results highlight how large generative LMs like GPT-4 can excel given the right setup.

# Conclusion

The focus of this paper is to design a novel paradigm of pretraining recommendation models based on large language models. We introduce a novel mask mechanism, span order, and positional encoding to inject inter- and intra-entity
Table 8: Performance on next item recommendation.

| Methods | HR@5 | NDCG@5 | HR@10 | NDCG@10 |
|---|---|---|---|---|
| ChatGPT | 0.4326 | 0.3208 | 0.5110 | 0.3465 |
| GPT-4 | 0.3846 | 0.2890 | 0.4674 | 0.3159 |
| ChatGLM+SFT | 0.2654 | 0.2091 | 0.3729 | 0.2513 |
| RecSysLLM | 0.3805 | 0.3072 | 0.4756 | 0.4091 |

Table 9: Performance on candidate recommendation task.

| Methods | HR@1 | HR@5 | NDCG@5 | HR@10 | NDCG@10 |
|---|---|---|---|---|---|
| ChatGPT | 0.3786 | 0.5550 | 0.4715 | 0.6424 | 0.5001 |
| GPT-4 | 0.7079 | 0.8154 | 0.7671 | 0.8560 | 0.7804 |
| ChatGLM+SFT | 0.2984 | 0.7012 | 0.6826 | 0.7621 | 0.7038 |
| RecSysLLM | 0.4965 | 0.7435 | 0.7032 | 0.7728 | 0.7237 |
knowledge into the LLM. Although our method follows the architecture of generative language models (GLM) to some extent, the core ideas of special designs for entities in recommendation tasks can be extended to other large language models. The experiments conducted on public and industrial datasets demonstrate the effectiveness and potential of our proposed model on recommendation systems and related applications. The results show improvements over strong baselines, indicating that encoding entity relationships during pretraining can meaningfully improve downstream performance. While we validate our approach on a select set of datasets, further experiments on a wider range of tasks would better reveal the strengths and limitations of the method. In particular, evaluating the approach across a more diverse set of domains could shed light on how robust the learned representations are. Additionally, from the perspective of causal inference (Yao et al. 2021; Chu et al. 2023), there are likely further improvements to be made in terms of how semantic connections between entities are captured and injected into the model.

# References

Andreas, J. 2022. Language models as agent models. arXiv preprint arXiv:2212.01681.
Bao, K.; Zhang, J.; Zhang, Y.; Wang, W.; Feng, F.; and He, X. 2023.
Tallrec: An effective and efficient tuning framework to align large language model with recommendation. arXiv preprint arXiv:2305.00447.
Bodon, F.; and Rónyai, L. 2003. Trie: an alternative data structure for data mining algorithms. Mathematical and Computer Modelling, 38(7-9): 739–751.
Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. 2020.
Language models are few-shot learners. Advances in neural information processing systems, 33: 1877–1901.
Chen, Z. 2023. PALR: Personalization Aware LLMs for Recommendation. arXiv preprint arXiv:2305.07622.
Cheng, H.-T.; Koc, L.; Harmsen, J.; Shaked, T.; Chandra, T.; Aradhye, H.; Anderson, G.; Corrado, G.; Chai, W.; Ispir, M.; et al. 2016. Wide & deep learning for recommender systems. In Proceedings of the 1st workshop on deep learning for recommender systems, 7–
10.
Cho, K.; van Merrienboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; and Bengio, Y. 2014. Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 1724–1734.
Chu, Z.; Ding, H.; Zeng, G.; Huang, Y.; Yan, T.; Kang, Y.; and Li, S. 2022.
Hierarchical capsule prediction network for marketing campaigns effect. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 3043–3051.
Chu, Z.; Huang, J.; Li, R.; Chu, W.; and Li, S. 2023. Causal effect estimation: Recent advances, challenges, and opportunities. arXiv preprint arXiv:2302.00848.
Dai, S.; Shao, N.; Zhao, H.; Yu, W.; Si, Z.; Xu, C.; Sun, Z.; Zhang, X.; and Xu, J. 2023. Uncovering ChatGPT's Capabilities in Recommender Systems. arXiv preprint arXiv:2305.02182.
Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018.
Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Dong, L.; Huang, S.; Wei, F.; Lapata, M.; Zhou, M.; and Xu, K. 2017. Learning to generate product reviews from attributes. In EACL.
Du, Z.; Qian, Y.; Liu, X.; Ding, M.; Qiu, J.; Yang, Z.; and Tang, J. 2021. Glm: General language model pretraining with autoregressive blank infilling. arXiv preprint arXiv:2103.10360.
Friedman, L.; Ahuja, S.; Allen, D.; Tan, T.; Sidahmed, H.; Long, C.; Xie, J.; Schubiner, G.; Patel, A.; Lara, H.; et al. 2023. Leveraging Large Language Models in Conversational Recommender Systems. arXiv preprint arXiv:2305.07961.
Gao, Y.; Sheng, T.; Xiang, Y.; Xiong, Y.; Wang, H.; and Zhang, J. 2023. Chat-rec: Towards interactive and explainable llms-augmented recommender system. arXiv preprint arXiv:2303.14524.
Geng, S.; Liu, S.; Fu, Z.; Ge, Y.; and Zhang, Y. 2022. Recommendation as language processing (rlp): A unified pretrain, personalized prompt & predict paradigm (p5). In Proceedings of the 16th ACM Conference on Recommender Systems, 299–315.
Gu, J.; Zhao, H.; Xu, H.; Nie, L.; Mei, H.; and Yin, W. 2023.
Robustness of Learning from Task Instructions. In Findings of ACL. Hidasi, B.; Karatzoglou, A.; Baltrunas, L.; and Tikk, D. 2015. Session-based recommendations with recurrent neural networks. arXiv preprint arXiv:1511.06939. Hidasi, B.; Karatzoglou, A.; Baltrunas, L.; and Tikk, D. 2016. Session-based Recommendations with Recurrent Neural Networks. In ICLR. Hou, Y.; Zhang, J.; Lin, Z.; Lu, H.; Xie, R.; McAuley, J.; and Zhao, W. X. 2023. Large language models are zero-shot rankers for recommender systems. arXiv preprint arXiv:2305.08845. Hu, E. J.; Shen, Y.; Wallis, P.; Allen-Zhu, Z.; Li, Y.; Wang, S.; Wang, L.; and Chen, W. 2021.
Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685.
Hui, B.; Zhang, L.; Zhou, X.; Wen, X.; and Nian, Y. 2022. Personalized recommendation system based on knowledge embedding and historical behavior. Applied Intelligence, 1–13.
Jiang, C.; Xue, S.; Zhang, J.; Liu, L.; Zhu, Z.; and Hao, H. 2022. Learning Large-scale Universal User Representation with Sparse Mixture of Experts.
Kang, W.-C.; and McAuley, J. 2018. Self-attentive sequential recommendation. In 2018 IEEE international conference on data mining (ICDM), 197–206. IEEE.
Kang, W.-C.; Ni, J.; Mehta, N.; Sathiamoorthy, M.; Hong, L.; Chi, E.; and Cheng, D. Z. 2023.
Do LLMs Understand User Preferences? Evaluating LLMs On User Rating Prediction. arXiv preprint arXiv:2305.06474.
Koren, Y.; Bell, R.; and Volinsky, C. 2009. Matrix factorization techniques for recommender systems. Computer, 42(8): 30–37.
Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25.
Li, L.; Zhang, Y.; and Chen, L. 2021. Personalized Transformer for Explainable Recommendation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 4947–4957.
Li, P.; Wang, Z.; Ren, Z.; Bing, L.; and Lam, W. 2017.
Neural rating regression with abstractive tips generation for recommendation. In Proceedings of the 40th International ACM SIGIR conference on Research and Development in Information Retrieval, 345–354.
Li, S.; and Zhao, H. 2021. A survey on representation learning for user modeling. In Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial Intelligence, 4997–5003.
Lin, J.; Dai, X.; Xi, Y.; Liu, W.; Chen, B.; Li, X.; Zhu, C.; Guo, H.; Yu, Y.; Tang, R.; et al. 2023.
How Can Recommender Systems Benefit from Large Language Models: A Survey. arXiv preprint arXiv:2306.05817.
Liu, J.; Liu, C.; Lv, R.; Zhou, K.; and Zhang, Y. 2023a. Is chatgpt a good recommender? a preliminary study. arXiv preprint arXiv:2304.10149.
Liu, Q.; Chen, N.; Sakai, T.; and Wu, X.-M. 2023b. A First Look at LLM-Powered Generative News Recommendation. arXiv preprint arXiv:2305.06566.
Loshchilov, I.; and Hutter, F. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101.
Ma, C.; Kang, P.; and Liu, X. 2019. Hierarchical gating networks for sequential recommendation. In Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining, 825–833.
Mao, K.; Zhu, J.; Wang, J.; Dai, Q.; Dong, Z.; Xiao, X.; and He, X. 2021. SimpleX:
A Simple and Strong Baseline for Collaborative Filtering. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 1243–1252.
Muhamed, A.; Keivanloo, I.; Perera, S.; Mracek, J.; Xu, Y.; Cui, Q.; Rajagopalan, S.; Zeng, B.; and Chilimbi, T. 2021. CTR-BERT: Cost-effective knowledge distillation for billion-parameter teacher models. In NeurIPS Efficient Natural Language and Speech Processing Workshop.
Ouyang, L.; Wu, J.; Jiang, X.; Almeida, D.; Wainwright, C.; Mishkin, P.; Zhang, C.; Agarwal, S.; Slama, K.; Ray, A.; et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35: 27730–
27744.
Qiu, Z.; Wu, X.; Gao, J.; and Fan, W. 2021. U-BERT: Pretraining user representations for improved recommendation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 4320–4327.
Radford, A.; Narasimhan, K.; Salimans, T.; Sutskever, I.; et al. 2018. Improving language understanding by generative pre-training.
Radford, A.; Wu, J.; Child, R.; Luan, D.; Amodei, D.; Sutskever, I.; et al. 2019. Language models are unsupervised multitask learners. OpenAI blog.
Raffel, C.; Shazeer, N.; Roberts, A.; Lee, K.; Narang, S.; Matena, M.; Zhou, Y.; Li, W.; and Liu, P. J. 2020.
Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1): 5485–5551.
Rasley, J.; Rajbhandari, S.; Ruwase, O.; and He, Y. 2020. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 3505–3506.
Rendle, S.; Freudenthaler, C.; Gantner, Z.; and Schmidt-Thieme, L. 2009. BPR: Bayesian Personalized Ranking from Implicit Feedback. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, UAI '
09, 452–461. Arlington, Virginia, USA: AUAI Press. ISBN 9780974903958.
Sanh, V.; Webson, A.; Raffel, C.; Bach, S.; Sutawika, L.; Alyafeai, Z.; Chaffin, A.; Stiegler, A.; Raja, A.; Dey, M.; Bari, M. S.; Xu, C.; Thakker, U.; Sharma, S. S.; Szczechla, E.; Kim, T.; Chhablani, G.; Nayak, N.; Datta, D.; Chang, J.; Jiang, M. T.-J.; Wang, H.; Manica, M.; Shen, S.; Yong, Z. X.; Pandey, H.; Bawden, R.; Wang, T.; Neeraj, T.; Rozen, J.; Sharma, A.; Santilli, A.; Fevry, T.; Fries, J. A.; Teehan, R.; Scao, T. L.; Biderman, S.; Gao, L.; Wolf, T.; and Rush, A. M. 2022. Multitask Prompted Training Enables Zero-Shot Task Generalization. In International Conference on Learning Representations.
Schuster, M.; and Paliwal, K. K. 1997.
Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11): 2673–2681.
Sheu, H.-S.; Chu, Z.; Qi, D.; and Li, S. 2021. Knowledge-guided article embedding refinement for session-based news recommendation. IEEE Transactions on Neural Networks and Learning Systems, 33(12): 7921–7927.
Shi, X.; Xue, S.; Wang, K.; Zhou, F.; Zhang, J. Y.; Zhou, J.; Tan, C.; and Mei, H. 2023.
Language Models Can Improve Event Prediction by Few-Shot Abductive Reasoning. arXiv preprint arXiv:2305.16646.
Sun, F.; Liu, J.; Wu, J.; Pei, C.; Lin, X.; Ou, W.; and Jiang, P. 2019. BERT4Rec: Sequential recommendation with bidirectional encoder representations from transformer. In Proceedings of the 28th ACM international conference on information and knowledge management, 1441–
1450.
Tang, J.; and Wang, K. 2018. Personalized top-n sequential recommendation via convolutional sequence embedding. In Proceedings of the eleventh ACM international conference on web search and data mining, 565–573.
Tsai, C. F.; Zhou, X.; Liu, S. S.; Li, J.; Yu, M.; and Mei, H. 2023. Can Large Language Models Play Text Games Well? Current State-of-the-Art and Open Questions. arXiv preprint.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017.
Attention is all you need. Advances in neural information processing systems, 30.
Wang, W.; Lin, X.; Feng, F.; He, X.; and Chua, T.-S. 2023. Generative recommendation: Towards next-generation recommender paradigm. arXiv preprint arXiv:2304.03516.
Wang, X.; Zhou, K.; Wen, J.-R.; and Zhao, W. X. 2022.
Towards unified conversational recommender systems via knowledge-enhanced prompt learning. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 1929–1937.
Wu, C.; Wu, F.; Qi, T.; and Huang, Y. 2021. Empowering news recommendation with pre-trained language models. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 1652–
1656.
Wu, L.; Zheng, Z.; Qiu, Z.; Wang, H.; Gu, H.; Shen, T.; Qin, C.; Zhu, C.; Zhu, H.; Liu, Q.; et al. 2023. A Survey on Large Language Models for Recommendation. arXiv preprint arXiv:2305.19860.
Xiao, S.; Liu, Z.; Shao, Y.; Di, T.; Middha, B.; Wu, F.; and Xie, X. 2022. Training large-scale news recommenders with pretrained language models in the loop. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 4215–
4225.
Xie, S.; Qiu, J.; Pasad, A.; Du, L.; Qu, Q.; and Mei, H. 2022. Hidden State Variability of Pretrained Language Models Can Guide Computation Reduction for Transfer Learning. In Findings of EMNLP.
Xue, S.; Shi, X.; Chu, Z.; Wang, Y.; Zhou, F.; Hao, H.; Jiang, C.; Pan, C.; Xu, Y.; Zhang, J. Y.; Wen, Q.; Zhou, J.; and Mei, H. 2023.
EasyTPP: Towards Open Benchmarking the Temporal Point Processes.
Xue, S.; Shi, X.; Hao, H.; Ma, L.; Zhang, J.; Wang, S.; and Wang, S. 2021. A Graph Regularized Point Process Model For Event Propagation Sequence. In 2021 International Joint Conference on Neural Networks (IJCNN), 1–7.
Xue, S.; Shi, X.; Zhang, Y. J.; and Mei, H. 2022. HYPRO: A Hybridly Normalized Probabilistic Model for Long-Horizon Prediction of Event Sequences. In Advances in Neural Information Processing Systems.
Yao, L.; Chu, Z.; Li, S.; Li, Y.; Gao, J.; and Zhang, A. 2021. A survey on causal inference. ACM Transactions on Knowledge Discovery from Data (TKDD), 15(5): 1–46.
Yao, S.; Tan, J.; Chen, X.; Zhang, J.; Zeng, X.; and Yang, K. 2022. ReprBERT: Distilling BERT to an Efficient Representation-Based Relevance Model for E-Commerce. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 4363–4371.
Yoneda, T.; Fang, J.; Li, P.; Zhang, H.; Jiang, T.; Lin, S.; Picker, B.; Yunis, D.; Mei, H.; and Walter, M. R. 2023.
Statler: State-Maintaining Language Models for Embodied Reasoning. arXiv preprint.
Yu, Z.; Lian, J.; Mahmoody, A.; Liu, G.; and Xie, X. 2019. Adaptive User Modeling with Long and Short-Term Preferences for Personalized Recommendation. In IJCAI, 4213–4219.
Zhang, J.; Xie, R.; Hou, Y.; Zhao, W. X.; Lin, L.; and Wen, J.-R. 2023.
Recommendation as instruction following: A large language model empowered recommendation approach. arXiv preprint arXiv:2305.07001.
Zhang, T.; Zhao, P.; Liu, Y.; Sheng, V. S.; Xu, J.; Wang, D.; Liu, G.; and Zhou, X. 2019. Feature-level Deeper Self-Attention Network for Sequential Recommendation. In IJCAI, 4320–4326.
Zhao, H.; Tan, H.; and Mei, H. 2022. Tiny-Attention Adapter: