---
language:
  - ar
  - en
  - de
  - fr
  - pt
  - pl
metrics:
  - accuracy
base_model:
  - microsoft/Phi-3-mini-4k-instruct
library_name: transformers
tags:
  - code
---

# M3-V2: A State-of-the-Art Commercial Language Model


M3-V2 is a state-of-the-art causal language model featuring a proprietary architecture that enables advanced reasoning and self-correction. This model is not open source and is available for commercial licensing.

The model achieves a 98.17% Pass@1 score on the HumanEval benchmark, placing it at the cutting edge of AI code generation and making it one of the most powerful code generation engines available today.
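For context, Pass@1 is the fraction of HumanEval problems solved by a single sampled completion. The unbiased pass@k estimator introduced with the HumanEval benchmark can be sketched as follows (a reference implementation for readers, not part of this model's evaluation harness):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: n samples per problem, c of them
    correct, k completions drawn. Returns the probability that at
    least one of the k draws is correct."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# With a single draw, pass@1 reduces to the fraction of correct samples:
print(pass_at_k(10, 9, 1))  # 0.9
```

Averaging this quantity over all 164 HumanEval problems yields the reported Pass@1 score.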


## Benchmark Performance

The benchmark results below show performance that significantly surpasses publicly available models.

*HumanEval benchmark comparison chart*

### Performance Comparison

| Model | HumanEval Pass@1 | Note |
|---|---|---|
| moelanoby/phi3-M3-V2 (this model) | 98.17% | Commercial license |
| GPT-4.5 / "Orion" | ~96.00% | Projected (late 2025) |
| Gemini 2.5 Pro | ~95.00% | Projected (late 2025) |
| Claude 4 | ~94.00% | Projected (late 2025) |

## License and Terms of Use

This model is proprietary and is governed by the following custom terms. By accessing or using this model, you agree to be bound by these rules.

1. **Architecture Non-Derivability:** The underlying code and architectural design, including the `architecture.py` file, are proprietary trade secrets. You are strictly prohibited from reverse-engineering, copying, or integrating this architecture or its components into any other model or software.

2. **Commercial License Required:** Access to and use of this model require a paid commercial license. Unauthorized use, distribution, or access is strictly forbidden and will be subject to legal action.

3. **Ethical Use and Finetuning Restriction:** You may not finetune, train, or adapt this model on any dataset intended to remove ethical safeguards, promote illegal acts, or generate uncensored content. The model must be used in accordance with safety and ethical best practices.


## How to Get Access

This model is available for commercial use via a paid license.

To purchase a license and gain access to the model, please contact our licensing team:

- **Email:** [email protected]
- **Website:** [Link to your pricing or contact page]

You will be provided with access credentials and usage instructions upon completion of the licensing agreement.


## Technical Usage (For Licensed Users)

> **Note:** The following instructions are for licensed users only. Running this code without a valid commercial license is a violation of the terms of use.

### Installation

First, ensure you have the necessary libraries installed:

```bash
pip install torch transformers accelerate
```

### Python Implementation

After gaining access, you can integrate the model into your application. You must pass `trust_remote_code=True` so that the proprietary architecture loads correctly.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Use the private model ID and token provided with your license
MODEL_ID = "moelanoby/phi3-M3-V2"
# AUTH_TOKEN = "YOUR_HF_ACCESS_TOKEN_HERE"  # Required for private models

print("Loading tokenizer and model...")
tokenizer = AutoTokenizer.from_pretrained(
    MODEL_ID,
    trust_remote_code=True,
    # token=AUTH_TOKEN
)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    # token=AUTH_TOKEN
)
print("Model loaded successfully.")

# --- Controlling the model's proprietary reasoning feature ---
# This feature is a key part of your license.
# Default is 1 pass.
try:
    target_layer_path = "model.layers.15.mlp.gate_up_proj"
    custom_layer = model
    for part in target_layer_path.split('.'):
        custom_layer = getattr(custom_layer, part)

    custom_layer.num_correction_passes = 3
    print(f"✅ Number of reasoning passes set to: {custom_layer.num_correction_passes}")
except AttributeError:
    print("⚠️ Could not access the custom layer. The model will run with its default settings.")

# (Example generation code would follow here)
```
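The layer lookup above is a plain dotted-attribute walk. The same pattern can be sketched as a small, self-contained helper, here exercised on stand-in objects rather than the proprietary layer (all names below are illustrative):

```python
from functools import reduce
from types import SimpleNamespace

def get_nested_attr(obj, path: str):
    """Resolve a dotted attribute path such as 'mlp.gate_up_proj'."""
    return reduce(getattr, path.split("."), obj)

# Stand-in hierarchy mimicking the model tree (illustrative only).
layer = SimpleNamespace(num_correction_passes=1)
root = SimpleNamespace(mlp=SimpleNamespace(gate_up_proj=layer))

get_nested_attr(root, "mlp.gate_up_proj").num_correction_passes = 3
print(layer.num_correction_passes)  # 3
```

Note that numeric path components such as `layers.15` resolve on the real model because PyTorch's `ModuleList` exposes its children as string-keyed attributes; a plain Python list would require index-based access instead.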

## Acknowledgements

- This model builds on the Phi-3 architecture developed by Microsoft.
- Benchmark results were obtained using the HumanEval dataset from OpenAI.