Qwen2.5-Coder-14B Houdini VEX Functions

This repository hosts a fine-tuned version of the Qwen2.5-Coder-14B model, optimized specifically for generating Houdini VEX functions. The model was fine-tuned on a dataset of Houdini VEX functions and is designed to assist developers and technical artists working in Houdini.

Model Details

  • Base Model: Qwen2.5-Coder-14B
  • Fine-Tuning: Fine-tuned on Houdini VEX function data
  • Architecture: qwen2
  • Model Size: 14.8B parameters
  • Quantization: 8-bit (Q8_0), distributed in GGUF format
  • License: Apache-2.0

Features

  • Houdini VEX Expertise: Specially adapted to generate Houdini VEX code.
  • Procedural Workflow: Ideal for procedural geometry, effects, and other Houdini-specific tasks.
  • Efficient Performance: Utilizes 8-bit quantization for faster inference while maintaining quality.

Installation

To use this model, ensure you have the required dependencies installed. You can install the necessary Python packages using pip:

pip install transformers torch

Then, load the model in your Python script as follows:

from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "pahaadi/Qwen2.5-Coder-14B-houdini_vex_functions"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

# Example usage: Generate Houdini VEX code
prompt = "Write a Houdini VEX function that creates procedural geometry."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
generated_code = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_code)
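
Because the published weights are an 8-bit GGUF quantization, you may prefer to run the model with a GGUF-aware runtime instead of transformers. The sketch below assumes the llama-cpp-python package (pip install llama-cpp-python) and that the repository contains a Q8_0 .gguf file; the filename pattern is an assumption, so check the repository for the exact file name.

from llama_cpp import Llama

# Download and load the GGUF weights directly from the Hugging Face Hub.
# The filename glob below is an assumption; adjust it to the actual file in the repo.
llm = Llama.from_pretrained(
    repo_id="pahaadi/Qwen2.5-Coder-14B-houdini_vex_functions",
    filename="*Q8_0.gguf",
    n_ctx=4096,
)

# Generate VEX code from a natural-language prompt.
output = llm(
    "Write a Houdini VEX function that creates procedural geometry.",
    max_tokens=256,
)
print(output["choices"][0]["text"])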

Usage

This model is tailored for tasks such as:

  • Generating Houdini VEX functions.
  • Assisting with procedural generation tasks in Houdini.
  • Accelerating coding workflows in Houdini-based projects.

Feel free to integrate the model into your Houdini pipeline to enhance your creative coding process.
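
As one example of pipeline integration, the sketch below writes generated VEX into an Attribute Wrangle node using Houdini's hou Python module. It is a minimal sketch that assumes the code runs inside a Houdini session; the node path /obj/geo1/attribwrangle1 and the VEX snippet are hypothetical placeholders.

import hou

# VEX produced by the model (placeholder snippet shown here).
generated_code = """
// Push points outward along their normals
@P += @N * chf("amount");
"""

# Assumes an existing Attribute Wrangle SOP at this (hypothetical) path.
wrangle = hou.node("/obj/geo1/attribwrangle1")
wrangle.parm("snippet").set(generated_code)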

Fine-Tuning and Contributions

If you are interested in further fine-tuning this model or adapting it for other Houdini-related tasks, contributions and suggestions are welcome. Please follow the guidelines provided in the Hugging Face documentation for model fine-tuning and deployment.
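
As a starting point, the sketch below outlines parameter-efficient fine-tuning with LoRA via the peft library. The adapter hyperparameters are placeholders, not the settings used to produce this model, and the training loop and dataset preparation are left out.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "pahaadi/Qwen2.5-Coder-14B-houdini_vex_functions"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, trust_remote_code=True)

# Attach low-rank adapters to the attention projections (hyperparameters are placeholders).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# From here, train on your own prompt/VEX pairs with transformers.Trainer or
# trl.SFTTrainer; the data pipeline is intentionally omitted from this sketch.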

License

This model is released under the Apache-2.0 License.

Citation

If you use this model in your research or projects, please consider citing it as follows:

@misc{pahaadi2025qwen2.5,
  author = {pahaadi},
  title = {Qwen2.5-Coder-14B Houdini VEX Functions},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/pahaadi/Qwen2.5-Coder-14B-houdini_vex_functions}
}

Acknowledgments

Special thanks to the contributors and the Hugging Face community for their continuous support and for providing an open platform for sharing and developing innovative machine learning models.

