# qwen-7b-chat / requirements.txt (Hilda Cran, May)
# Duplicated from mikeee/qwen-7b-chat, commit fce4951
transformers==4.31.0
accelerate
tiktoken
einops
# flash-attention (optional; install from source):
#   git clone -b v1.0.8 https://github.com/Dao-AILab/flash-attention
#   cd flash-attention && pip install .
#   pip install csrc/layer_norm
#   pip install csrc/rotary
torch  # 2.0.1
safetensors
bitsandbytes
transformers_stream_generator
scipy
loguru
about-time