---
pipeline_tag: text-generation
inference:
  parameters:
    temperature: 0.01
extra_gated_prompt: "Purchase access to this repo [HERE](https://buy.stripe.com/5kA3cYcWhci73ks7tt)"
tags:
- facebook
- meta
- mistral
- pytorch
- llama
- llama-2
- gguf
- function-calling
- function calling
---

# Function Calling Fine-tuned Mistral Instruct

Purchase access to this model [here](https://buy.stripe.com/5kA3cYcWhci73ks7tt).

This model is fine-tuned for function calling.

- The function metadata format is the same as that used by OpenAI.
- The model is suitable for commercial use.
- A GGUF version is available on the `gguf` branch.

Check out other fine-tuned function calling models [here](https://trelis.com/function-calling/).

## Quick Server Setup

A Runpod one-click template is available [here](https://runpod.io/gsc?template=lcrj267zgp&ref=jmfkcdio). You must add a Hugging Face Hub access token (`HUGGING_FACE_HUB_TOKEN`) to the environment variables, as this is a gated model.

Runpod affiliate [link](https://runpod.io?ref=jmfkcdio) (helps support the Trelis channel).

## Inference Scripts

See below for the sample prompt format.

Complete inference scripts are available for purchase [here](https://trelis.com/enterprise-server-api-and-inference-guide/):

- Easily format prompts using `tokenizer.apply_chat_template` (starting from OpenAI-formatted function metadata and a list of messages).
- Automate the catching, handling and chaining of function calls (a minimal sketch of such a loop follows below).

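As a rough illustration of the call-handling loop, here is a minimal sketch. This is not the purchased scripts; `call_model` and the `functions` registry are hypothetical placeholders for your own generation routine and local function implementations.

```
import json

# Hypothetical registry mapping function names to local implementations.
functions = {
    "get_current_weather": lambda city, format="celsius": {"temperature": "15 C", "condition": "Cloudy"},
}

def run_with_functions(messages, call_model):
    # call_model is a placeholder for your own generate-and-decode routine.
    response = call_model(messages)
    try:
        call = json.loads(response)  # the model emits function calls as JSON
    except json.JSONDecodeError:
        return response  # plain-text answer; nothing to handle
    messages.append({"role": "function_call", "content": response})
    result = functions[call["name"]](**call["arguments"])
    messages.append({"role": "function_response", "content": json.dumps(result)})
    return call_model(messages)  # the model now answers using the function response
```
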
## Prompt Format

```
B_FUNC, E_FUNC = "You have access to the following functions. Use them if required:\n\n", "\n\n"
B_INST, E_INST = "[INST] ", " [/INST]"  # Llama / Mistral style

prompt = f"{B_INST}{B_FUNC}{functionList.strip()}{E_FUNC}{user_prompt.strip()}{E_INST}\n\n"
```

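For illustration, `functionList` can be the function metadata serialized as JSON. This is a minimal sketch continuing the snippet above; the metadata mirrors the example further down this card.

```
import json

# OpenAI-format function metadata (see FUNCTION_METADATA below for the full example).
function_metadata = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "This function gets the current weather in a given city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "The city, e.g., San Francisco"}
            },
            "required": ["city"]
        }
    }
}]

functionList = json.dumps(function_metadata, indent=4)
user_prompt = "What is the current weather in London?"
prompt = f"{B_INST}{B_FUNC}{functionList.strip()}{E_FUNC}{user_prompt.strip()}{E_INST}\n\n"
```
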
### Using tokenizer.apply_chat_template

For easier application of the prompt, set up as follows.

Set up `messages`:

```
[
    {
        "role": "function_metadata",
        "content": "FUNCTION_METADATA"
    },
    {
        "role": "user",
        "content": "What is the current weather in London?"
    },
    {
        "role": "function_call",
        "content": "{\n \"name\": \"get_current_weather\",\n \"arguments\": {\n \"city\": \"London\"\n }\n}"
    },
    {
        "role": "function_response",
        "content": "{\n \"temperature\": \"15 C\",\n \"condition\": \"Cloudy\"\n}"
    },
    {
        "role": "assistant",
        "content": "The current weather in London is Cloudy with a temperature of 15 Celsius"
    }
]
```

with `FUNCTION_METADATA` as:

```
[
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "This function gets the current weather in a given city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "The city, e.g., San Francisco"
                    },
                    "format": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "The temperature unit to use."
                    }
                },
                "required": ["city"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "get_clothes",
            "description": "This function provides a suggestion of clothes to wear based on the current weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "temperature": {
                        "type": "string",
                        "description": "The temperature, e.g., 15 C or 59 F"
                    },
                    "condition": {
                        "type": "string",
                        "description": "The weather condition, e.g., 'Cloudy', 'Sunny', 'Rainy'"
                    }
                },
                "required": ["temperature", "condition"]
            }
        }
    }
]
```

and then apply the chat template to get a formatted prompt:

```
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('Trelis/Mistral-7B-Instruct-v0.1-function-calling-v3', trust_remote_code=True)

prompt = tokenizer.apply_chat_template(messages, tokenize=False)
```

Since this is a gated model, you first need to run:

```
pip install huggingface_hub
huggingface-cli login
```

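Putting the pieces together, here is a minimal end-to-end generation sketch, assuming `messages` is set up as above; the generation settings are illustrative, not prescribed by this card.

```
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = 'Trelis/Mistral-7B-Instruct-v0.1-function-calling-v3'
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# messages as set up above, ending with the user turn
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

# Greedy decoding keeps the JSON function-call output deterministic.
generated_ids = model.generate(input_ids, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(generated_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```
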
### Manual Prompt

```
[INST] You have access to the following functions. Use them if required:

[
    {
        "type": "function",
        "function": {
            "name": "get_big_stocks",
            "description": "Get the names of the largest N stocks by market cap",
            "parameters": {
                "type": "object",
                "properties": {
                    "number": {
                        "type": "integer",
                        "description": "The number of largest stocks to get the names of, e.g. 25"
                    },
                    "region": {
                        "type": "string",
                        "description": "The region to consider, can be \"US\" or \"World\"."
                    }
                },
                "required": ["number"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "get_stock_price",
            "description": "Get the stock price of an array of stocks",
            "parameters": {
                "type": "object",
                "properties": {
                    "names": {
                        "type": "array",
                        "items": {
                            "type": "string"
                        },
                        "description": "An array of stocks"
                    }
                },
                "required": ["names"]
            }
        }
    }
]

[INST] Get the names of the five largest stocks in the US by market cap [/INST]

{
    "name": "get_big_stocks",
    "arguments": {
        "number": 5,
        "region": "US"
    }
}</s>
```

# Dataset

See [Trelis/function_calling_v3](https://huggingface.co/datasets/Trelis/function_calling_v3).

# License

This model may be used commercially for inference, or for further fine-tuning and inference. Users may not re-publish or re-sell this model in the same or derivative form (including fine-tunes).

~~~

The original repo card follows below.

~~~

# Model Card for Mistral-7B-Instruct-v0.1

The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) generative text model, fine-tuned using a variety of publicly available conversation datasets.

For full details of this model, please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/).

## Instruction format

In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence id; subsequent instructions should not. The assistant generation will be ended by the end-of-sentence token id.

E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```

This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
    {"role": "user", "content": "Do you have mayonnaise recipes?"}
]

encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```

## Model Architecture

This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:

- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer

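For reference, the first two choices can be inspected from the model config. This is a quick sketch; the field names below are those used by the `transformers` `MistralConfig`.

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
print(config.sliding_window)       # sliding-window attention width
print(config.num_attention_heads)  # number of query heads
print(config.num_key_value_heads)  # fewer KV heads than query heads => grouped-query attention
```
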
## Troubleshooting

- If you see the following error:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/transformers/models/auto/configuration_auto.py", line 723, in __getitem__
    raise KeyError(key)
KeyError: 'mistral'
```

Installing transformers from source should solve the issue:

```
pip install git+https://github.com/huggingface/transformers
```

This should not be required after transformers-v4.33.4.

## Limitations

The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.

## The Mistral AI Team

Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.