How to use with vLLM
Install vLLM from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Symbol-LLM/Symbol-LLM-8B-Instruct"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "Symbol-LLM/Symbol-LLM-8B-Instruct",
		"messages": [
			{
				"role": "user",
				"content": "What is the capital of France?"
			}
		]
	}'
```
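The same request can be made from Python. A minimal sketch using only the standard library, assuming the vLLM server started above is listening on `localhost:8000` (the `build_chat_request` and `chat` helpers are illustrative names, not part of vLLM):

```python
import json
import urllib.request


def build_chat_request(model: str, user_message: str) -> bytes:
    """Build the JSON body for an OpenAI-compatible /v1/chat/completions call."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return json.dumps(payload).encode("utf-8")


def chat(base_url: str, model: str, user_message: str) -> str:
    """POST a chat completion request to the server and return the reply text."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=build_chat_request(model, user_message),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-compatible responses carry the text under choices[0].message.content.
    return body["choices"][0]["message"]["content"]


# Example (requires the vLLM server above to be running):
# reply = chat("http://localhost:8000",
#              "Symbol-LLM/Symbol-LLM-8B-Instruct",
#              "What is the capital of France?")
```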
Use Docker
```shell
docker model run hf.co/Symbol-LLM/Symbol-LLM-8B-Instruct
```
Quick Links

Symbol-LLM: Towards Foundational Symbol-centric Interface for Large Language Models

Paper Link: https://arxiv.org/abs/2311.09278

Project Page: https://xufangzhi.github.io/symbol-llm-page/

πŸ”₯ News

  • πŸ”₯πŸ”₯πŸ”₯ [2024/07/23] We release the Symbol-LLM-8B-Instruct model! Try it!

  • πŸ”₯πŸ”₯πŸ”₯ Symbol-LLM is accepted by ACL 2024! See you in Thailand!

  • πŸ”₯πŸ”₯πŸ”₯ We have made the Symbol-LLM series models (7B / 13B) public.

Citation

If you find this work helpful, please cite the paper:

```bibtex
@article{xu2023symbol,
  title={Symbol-LLM: Towards Foundational Symbol-centric Interface For Large Language Models},
  author={Xu, Fangzhi and Wu, Zhiyong and Sun, Qiushi and Ren, Siyu and Yuan, Fei and Yuan, Shuai and Lin, Qika and Qiao, Yu and Liu, Jun},
  journal={arXiv preprint arXiv:2311.09278},
  year={2023}
}
```