
Qwen2.5-Vex-Python

Qwen2.5-Vex-Python is a fine-tuned version of the Qwen2.5-Coder-7B-Instruct model, optimized for enhanced performance in Python code generation and understanding.

Model Overview

  • Base Model: Qwen2.5-Coder-7B-Instruct
  • Parameter Count: 7.62 billion
  • Quantization: 8-bit (Q8_0, GGUF format)
  • Architecture: Qwen2
  • License: Apache-2.0

The Qwen2.5 series, developed by Alibaba Cloud's Qwen team, is a collection of large language models designed for various tasks, including code generation. The 7B variant strikes a balance between performance and resource requirements, making it suitable for a wide range of applications.

Usage

To use Qwen2.5-Vex-Python for Python code generation or understanding, load the model with the Hugging Face Transformers library. Ensure the necessary dependencies are installed and that your environment supports 8-bit quantized models.

For detailed instructions on loading and using Qwen2.5 models, refer to the Qwen Quickstart Guide.
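As a minimal sketch of the steps above (assuming the repository id is pahaadi/Qwen2.5-Vex-Python and that transformers, accelerate, and bitsandbytes are installed; the GGUF file can alternatively be run with llama.cpp), the snippet below shows how a generation call could look. The build_chat_prompt helper is a hypothetical illustration of Qwen's ChatML format; with a real tokenizer, tokenizer.apply_chat_template does this for you. generate_python is defined but not called here, since it downloads several GB of weights.

```python
def build_chat_prompt(messages):
    """Illustrative helper: format {role, content} dicts in Qwen's ChatML style.
    With a real tokenizer, prefer tokenizer.apply_chat_template(...)."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    parts.append("<|im_start|>assistant\n")  # cue the model to produce the assistant turn
    return "\n".join(parts)


def generate_python(user_request, model_id="pahaadi/Qwen2.5-Vex-Python"):
    """Sketch of an 8-bit generation call; not executed here because it
    downloads the full model weights."""
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",
        quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    )
    inputs = tokenizer.apply_chat_template(
        [{"role": "user", "content": user_request}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)


if __name__ == "__main__":
    print(build_chat_prompt([{"role": "user", "content": "Reverse a string."}]))
```

The repo id and generation parameters are assumptions for illustration; adjust them to your setup.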

License

This model is licensed under the Apache-2.0 License. You are free to use, modify, and distribute this model, provided that you comply with the terms of the license.


base_model: unsloth/qwen2.5-coder-7b-instruct-bnb-4bit
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - qwen2
  - trl
  - sft
license: apache-2.0
language:
  - en

Uploaded model

  • Developed by: pahaadi
  • License: apache-2.0
  • Finetuned from model: unsloth/qwen2.5-coder-7b-instruct-bnb-4bit

This qwen2 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
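For reference, a fine-tune along these lines would typically look like the sketch below, combining Unsloth's FastLanguageModel with TRL's SFTTrainer. The dataset name, LoRA settings, and hyperparameters are placeholders, not the values actually used for this model, and to_chatml is a hypothetical formatter for a prompt/completion dataset. The train function is defined but not run, since it requires a GPU and downloads the base model.

```python
def to_chatml(example):
    """Hypothetical formatter: turn a {prompt, completion} record into a
    single ChatML training string (field names are placeholders)."""
    return (
        f"<|im_start|>user\n{example['prompt']}<|im_end|>\n"
        f"<|im_start|>assistant\n{example['completion']}<|im_end|>"
    )


def train():
    """Sketch of an Unsloth + TRL SFT run; all values are illustrative."""
    from unsloth import FastLanguageModel
    from trl import SFTConfig, SFTTrainer
    from datasets import load_dataset

    # Load the 4-bit base model the card lists as the starting point.
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="unsloth/qwen2.5-coder-7b-instruct-bnb-4bit",
        max_seq_length=2048,
        load_in_4bit=True,
    )
    model = FastLanguageModel.get_peft_model(model, r=16, lora_alpha=16)

    dataset = load_dataset("your-python-dataset", split="train")  # placeholder name
    dataset = dataset.map(lambda ex: {"text": to_chatml(ex)})

    trainer = SFTTrainer(
        model=model,
        train_dataset=dataset,
        args=SFTConfig(
            per_device_train_batch_size=2,
            gradient_accumulation_steps=4,
            max_steps=500,
            learning_rate=2e-4,
            output_dir="outputs",
        ),
    )
    trainer.train()
```

Unsloth patches the model for faster training and lower memory use, which is where the "2x faster" claim comes from; exact argument names may vary between TRL versions.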
