Instructions for using debisoft/Qwen3-8B-thinking-function_calling-quant-V0 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use debisoft/Qwen3-8B-thinking-function_calling-quant-V0 with Transformers:

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "debisoft/Qwen3-8B-thinking-function_calling-quant-V0",
    dtype="auto",
)
```

- Notebooks
- Google Colab
- Kaggle
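Since this checkpoint is tuned for function calling, a request typically carries a tool schema alongside the chat messages, and the model replies with a structured tool call that the caller parses and executes. A minimal sketch of the surrounding plumbing, assuming an OpenAI-style tool schema and a Qwen-style `<tool_call>` wrapper (the tool name, schema, and sample completion below are illustrative assumptions, not taken from this page):

```python
import json
import re

# Hypothetical tool schema in the OpenAI/Qwen style (an assumption,
# not taken from this model card).
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# These messages, together with `tools`, would be passed to the
# tokenizer's chat template before generation.
messages = [{"role": "user", "content": "What's the weather in Paris?"}]

# Example completion the model might emit; Qwen-family models wrap
# tool calls in <tool_call> ... </tool_call> tags (assumed here).
completion = (
    '<tool_call>\n'
    '{"name": "get_weather", "arguments": {"city": "Paris"}}\n'
    '</tool_call>'
)

def parse_tool_call(text: str):
    """Extract the JSON payload from a <tool_call> block, if present."""
    m = re.search(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", text, re.DOTALL)
    return json.loads(m.group(1)) if m else None

call = parse_tool_call(completion)
print(call["name"], call["arguments"])  # get_weather {'city': 'Paris'}
```

The caller would then dispatch `call["name"]` with `call["arguments"]`, append the result as a tool message, and generate again to get the final answer.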
- Xet hash: 9a997c15c541270d7ca68a03c0f248e98711e274cb72dc56bd5357af56e3ca05
- Size of remote file: 11.4 MB
- SHA256: 69be6eac895955c02db8c4908c27468037e69a15b32adf233ed901e304c8bc29
Xet efficiently stores large files inside Git by splitting them into unique chunks, which accelerates uploads and downloads.