Instructions to use prithivMLmods/Math-IIO-7B-Instruct with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use prithivMLmods/Math-IIO-7B-Instruct with Transformers:
Use a pipeline as a high-level helper:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="prithivMLmods/Math-IIO-7B-Instruct")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

Or load the model directly:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Math-IIO-7B-Instruct")
model = AutoModelForCausalLM.from_pretrained("prithivMLmods/Math-IIO-7B-Instruct")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use prithivMLmods/Math-IIO-7B-Instruct with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "prithivMLmods/Math-IIO-7B-Instruct"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "prithivMLmods/Math-IIO-7B-Instruct",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Or use Docker:

```shell
docker model run hf.co/prithivMLmods/Math-IIO-7B-Instruct
```
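The curl request above can also be built and sent from Python using only the standard library. A minimal sketch, assuming the server from the previous step is running on `localhost:8000`; the `ask` helper is hypothetical, not part of vLLM:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "prithivMLmods/Math-IIO-7B-Instruct"

def build_chat_request(prompt: str) -> dict:
    # Payload shape for the OpenAI-compatible /v1/chat/completions endpoint.
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(prompt: str) -> str:
    # POST the JSON payload and return the first choice's message content.
    req = urllib.request.Request(
        BASE_URL,
        data=json.dumps(build_chat_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Call `ask("What is the capital of France?")` once the server is up; the same payload shape works against the SGLang server below by changing `BASE_URL` to port 30000.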
- SGLang
How to use prithivMLmods/Math-IIO-7B-Instruct with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "prithivMLmods/Math-IIO-7B-Instruct" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "prithivMLmods/Math-IIO-7B-Instruct",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Or use the Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "prithivMLmods/Math-IIO-7B-Instruct" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "prithivMLmods/Math-IIO-7B-Instruct",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

- Docker Model Runner
How to use prithivMLmods/Math-IIO-7B-Instruct with Docker Model Runner:
```shell
docker model run hf.co/prithivMLmods/Math-IIO-7B-Instruct
```
Math IIO 7B Instruct
The Math IIO 7B Instruct is a fine-tuned language model based on the robust Qwen2.5-7B-Instruct architecture. This model has been specifically trained to excel in single-shot mathematical reasoning and instruction-based tasks, making it a reliable choice for educational, analytical, and problem-solving applications.
Key Features:
- Math-Optimized Capabilities: Designed to handle complex mathematical problems, step-by-step calculations, and reasoning tasks.
- Instruction-Tuned: Fine-tuned for better adherence to structured queries and task-oriented prompts, enabling clear and concise outputs.
- Large Vocabulary: Equipped with an extensive tokenizer configuration and custom added tokens to ensure precise mathematical notation support.
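The instruction tuning works best with a clearly structured chat prompt. A minimal sketch of building one for a math query; `make_math_messages` and its system prompt are illustrative assumptions, not part of the model card:

```python
def make_math_messages(question: str) -> list[dict]:
    # Hypothetical helper: wraps a math question in the chat format expected
    # by tokenizer.apply_chat_template / the text-generation pipeline.
    return [
        {
            "role": "system",
            "content": "You are a careful math assistant. Show each step, "
                       "then state the final answer on its own line.",
        },
        {"role": "user", "content": question},
    ]

messages = make_math_messages("Solve for x: 3x + 7 = 22.")
```

The resulting `messages` list can be passed directly to the pipeline or `apply_chat_template` calls shown in the Transformers section above.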
Single Shot Answers
Math-IIO File Structure

| File Name | Size | Description | Upload Status |
|---|---|---|---|
| .gitattributes | 1.57 kB | Git attributes configuration file | Uploaded |
| README.md | 263 Bytes | README file with minimal details | Updated |
| added_tokens.json | 657 Bytes | Custom added tokens for tokenizer | Uploaded |
| config.json | 861 Bytes | Model configuration file | Uploaded |
| generation_config.json | 281 Bytes | Configuration for text generation settings | Uploaded |
| merges.txt | 1.82 MB | Merge rules for byte pair encoding tokenizer | Uploaded |
| pytorch_model-00001-of-00004.bin | 4.88 GB | First part of model weights (PyTorch) | Uploaded (LFS) |
| pytorch_model-00002-of-00004.bin | 4.93 GB | Second part of model weights (PyTorch) | Uploaded (LFS) |
| pytorch_model-00003-of-00004.bin | 4.33 GB | Third part of model weights (PyTorch) | Uploaded (LFS) |
| pytorch_model-00004-of-00004.bin | 1.09 GB | Fourth part of model weights (PyTorch) | Uploaded (LFS) |
| pytorch_model.bin.index.json | 28.1 kB | Index JSON file for model weights | Uploaded |
| special_tokens_map.json | 644 Bytes | Map of special tokens used by the tokenizer | Uploaded |
| tokenizer.json | 11.4 MB | Tokenizer settings and vocab | Uploaded (LFS) |
| tokenizer_config.json | 7.73 kB | Configuration for tokenizer | Uploaded |
| vocab.json | 2.78 MB | Vocabulary for tokenizer | Uploaded |
| Model Type | Size | Context Length | Link |
|---|---|---|---|
| GGUF | 7B | - | 🤗 Math-IIO-7B-Instruct-GGUF |
Training Details:
- Base Model: Qwen/Qwen2.5-7B-Instruct
- Dataset: Trained on Math-IIO-68K-Mini, a curated dataset with 68.8k high-quality examples focusing on mathematical instructions, equations, and logic-based queries.
Capabilities:
- Problem-Solving: Solves mathematical problems ranging from basic arithmetic to advanced calculus and linear algebra.
- Educational Use: Explains solutions step-by-step, making it a valuable teaching assistant.
- Analysis & Reasoning: Handles logical reasoning tasks and computational queries effectively.
How to Use:
- Download all model files, ensuring the PyTorch weights and tokenizer configurations are included.
- Load the model in your Python environment using frameworks like PyTorch or Hugging Face Transformers.
- Use the provided configurations (`config.json` and `generation_config.json`) for optimal inference.
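The repository's `generation_config.json` holds the default generation settings. A sketch of reading such a file and forwarding its fields to generation; the field names and values below are illustrative placeholders, not the repository's actual contents:

```python
import json
from pathlib import Path

# Illustrative placeholder contents; the real generation_config.json
# shipped with the model may define different fields and values.
Path("generation_config.json").write_text(json.dumps({
    "max_new_tokens": 512,
    "temperature": 0.7,
    "top_p": 0.9,
}))

gen_cfg = json.loads(Path("generation_config.json").read_text())

# These keyword arguments would then be forwarded to generation, e.g.:
# outputs = model.generate(**inputs, **gen_cfg)
print(sorted(gen_cfg))
```

In practice, `AutoModelForCausalLM.from_pretrained` loads the repository's generation config automatically, so this manual step is only needed when overriding defaults.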