Dataset schema (each record below carries these ten fields; ranges are the observed min/max):

| Column | Type | Min | Max |
|:--|:--|:--|:--|
| modelId | stringlengths | 5 | 139 |
| author | stringlengths | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-24 00:43:13 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | stringclasses | 573 values | |
| tags | listlengths | 1 | 4.05k |
| pipeline_tag | stringclasses | 55 values | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-24 00:37:34 |
| card | stringlengths | 11 | 1.01M |
salforis/lora-paraphrase-vistral-mix
salforis
2024-05-21T06:55:35Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-21T05:37:52Z
--- license: apache-2.0 ---
ShleeSSU/Scoring_Korean_Narrative_Sentences
ShleeSSU
2024-05-21T06:54:34Z
180
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-21T06:53:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
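The card above leaves its "How to Get Started" section empty. Given the record's tags (roberta, text-classification), a minimal sketch using the standard transformers pipeline might look like the following; the Korean input sentence and the meaning of the output labels are assumptions, not documented behavior.

```python
# Hypothetical usage sketch for a RoBERTa sequence-classification checkpoint.
from transformers import pipeline

scorer = pipeline(
    "text-classification",
    model="ShleeSSU/Scoring_Korean_Narrative_Sentences",
)

# Illustrative Korean narrative sentence; the label set is not documented.
print(scorer("오늘은 날씨가 좋아서 공원에 갔다."))
```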
roosterben/llama3_4bitlora_model
roosterben
2024-05-21T06:52:53Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-21T06:52:32Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** roosterben - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
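The Unsloth/TRL workflow described above also covers inference. A minimal sketch for loading this 4-bit LoRA checkpoint with Unsloth follows; the sequence length and prompt are illustrative assumptions.

```python
# Sketch: load the 4-bit checkpoint with Unsloth and generate.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="roosterben/llama3_4bitlora_model",
    max_seq_length=2048,   # assumption; pick to fit your use case
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's fast inference path

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```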
Dhahlan2000/Translation-GPT-v2
Dhahlan2000
2024-05-21T06:40:05Z
67
0
transformers
[ "transformers", "tf", "mt5", "text2text-generation", "generated_from_keras_callback", "base_model:Dhahlan2000/Translation-GPT", "base_model:finetune:Dhahlan2000/Translation-GPT", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-21T06:38:35Z
--- license: apache-2.0 tags: - generated_from_keras_callback base_model: Dhahlan2000/Translation-GPT model-index: - name: Translation-GPT-v2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Translation-GPT-v2 This model is a fine-tuned version of [Dhahlan2000/Translation-GPT](https://huggingface.co/Dhahlan2000/Translation-GPT) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.8260 - Validation Loss: 3.0893 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 4.3125 | 3.4116 | 0 | | 3.8260 | 3.0893 | 1 | ### Framework versions - Transformers 4.40.2 - TensorFlow 2.15.0 - Datasets 2.17.0 - Tokenizers 0.19.1
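Since the card documents a TensorFlow (Keras) fine-tune of an mT5 base, inference would go through the TF seq2seq classes; a minimal sketch, with the source sentence as an illustrative placeholder:

```python
# Sketch: text2text generation with the TF checkpoint.
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

repo = "Dhahlan2000/Translation-GPT-v2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSeq2SeqLM.from_pretrained(repo)

inputs = tokenizer("How are you today?", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```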
moriire/Qwen0.5-healthcare
moriire
2024-05-21T06:36:21Z
0
0
transformers
[ "transformers", "safetensors", "text-generation", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
text-generation
2024-04-10T13:46:33Z
--- library_name: transformers pipeline_tag: text-generation --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
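The "How to Get Started" section above is empty; for a transformers text-generation checkpoint like this one, a minimal sketch would be the usual causal-LM loop (prompt and generation settings are illustrative assumptions):

```python
# Sketch: plain causal-LM inference with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "moriire/Qwen0.5-healthcare"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("What are common symptoms of dehydration?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```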
denru/Yi-1.5-34B-Chat-16Kx2-4_65bpw-h8-exl2-pippa
denru
2024-05-21T06:35:33Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:01-ai/Yi-1.5-34B-Chat-16K", "base_model:quantized:01-ai/Yi-1.5-34B-Chat-16K", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-05-21T06:30:48Z
--- base_model: - 01-ai/Yi-1.5-34B-Chat-16K library_name: transformers tags: - mergekit - merge --- # merged_model This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [01-ai/Yi-1.5-34B-Chat-16K](https://huggingface.co/01-ai/Yi-1.5-34B-Chat-16K) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: 01-ai/Yi-1.5-34B-Chat-16K layer_range: [0, 12] - sources: - model: 01-ai/Yi-1.5-34B-Chat-16K layer_range: [6, 18] - sources: - model: 01-ai/Yi-1.5-34B-Chat-16K layer_range: [12, 24] - sources: - model: 01-ai/Yi-1.5-34B-Chat-16K layer_range: [18, 30] - sources: - model: 01-ai/Yi-1.5-34B-Chat-16K layer_range: [24, 36] - sources: - model: 01-ai/Yi-1.5-34B-Chat-16K layer_range: [30, 42] - sources: - model: 01-ai/Yi-1.5-34B-Chat-16K layer_range: [36, 48] - sources: - model: 01-ai/Yi-1.5-34B-Chat-16K layer_range: [42, 54] - sources: - model: 01-ai/Yi-1.5-34B-Chat-16K layer_range: [48, 60] merge_method: passthrough dtype: float16 ```
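The YAML above is a complete mergekit configuration, so the merge can in principle be reproduced locally. A minimal sketch using mergekit's Python entry point (assuming `pip install mergekit`; the config file name and output path are illustrative):

```python
# Sketch: run the passthrough merge described by the card's YAML config.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml") as f:  # the YAML configuration shown above
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    config,
    out_path="./merged_model",
    options=MergeOptions(copy_tokenizer=True),  # also copy the base tokenizer
)
```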
SeHwanJoo/my-awesome-model
SeHwanJoo
2024-05-21T06:27:16Z
34
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-21T06:26:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardLuo/Shotluck-Holmes-3.1
RichardLuo
2024-05-21T06:24:48Z
23
2
transformers
[ "transformers", "safetensors", "tiny_llava_phi", "text-generation", "custom_code", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-21T06:17:41Z
--- license: apache-2.0 ---
Zoyd/TIGER-Lab_MAmmoTH2-8B-6_0bpw_exl2
Zoyd
2024-05-21T06:16:49Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "arxiv:2405.03548", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "6-bit", "exl2", "region:us" ]
text-generation
2024-05-21T05:58:57Z
--- license: mit language: - en --- **Exllamav2** quant (**exl2** / **6.0 bpw**) made with ExLlamaV2 v0.0.21 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-2_2bpw_exl2)**</center> | <center>3250 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-2_5bpw_exl2)**</center> | <center>3479 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-3_0bpw_exl2)**</center> | <center>3893 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-3_5bpw_exl2)**</center> | <center>4311 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-3_75bpw_exl2)**</center> | <center>4519 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-4_0bpw_exl2)**</center> | <center>4726 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-4_25bpw_exl2)**</center> | <center>4935 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-5_0bpw_exl2)**</center> | <center>5558 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-6_0bpw_exl2)**</center> | <center>6495 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-6_5bpw_exl2)**</center> | <center>6912 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-8_0bpw_exl2)**</center> | <center>8106 MB</center> | <center>8</center> | # 🦣 MAmmoTH2: Scaling Instructions from the Web Project Page: [https://tiger-ai-lab.github.io/MAmmoTH2/](https://tiger-ai-lab.github.io/MAmmoTH2/) Paper: [https://arxiv.org/pdf/2405.03548](https://arxiv.org/pdf/2405.03548) Code: [https://github.com/TIGER-AI-Lab/MAmmoTH2](https://github.com/TIGER-AI-Lab/MAmmoTH2) ## Introduction Introducing 🦣 MAmmoTH2, a game-changer in improving the reasoning abilities of large language models (LLMs) through innovative instruction tuning. By efficiently harvesting 10 million instruction-response pairs from the pre-training web corpus, we've developed MAmmoTH2 models that significantly boost performance on reasoning benchmarks. For instance, MAmmoTH2-7B (Mistral) sees its performance soar from 11% to 34% on MATH and from 36% to 67% on GSM8K, all without training on any domain-specific data. Further training on public instruction tuning datasets yields MAmmoTH2-Plus, setting new standards in reasoning and chatbot benchmarks. Our work presents a cost-effective approach to acquiring large-scale, high-quality instruction data, offering a fresh perspective on enhancing LLM reasoning abilities. 
| | **Base Model** | **MAmmoTH2** | **MAmmoTH2-Plus** | |:-----|:---------------------|:-------------------------------------------------------------------|:------------------------------------------------------------------| | 7B | Mistral | 🦣 [MAmmoTH2-7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B) | 🦣 [MAmmoTH2-7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B-Plus) | | 8B | Llama-3 | 🦣 [MAmmoTH2-8B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B) | 🦣 [MAmmoTH2-8B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B-Plus) | | 8x7B | Mixtral | 🦣 [MAmmoTH2-8x7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B) | 🦣 [MAmmoTH2-8x7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B-Plus) | ## Training Data Please refer to https://huggingface.co/datasets/TIGER-Lab/WebInstructSub for more details. ![Project Framework](webinstruct.png) ## Training Procedure The models are fine-tuned with the WEBINSTRUCT dataset using the original Llama-3, Mistral, and Mixtral models as base models. The training procedure varies for different models based on their sizes. Check out our paper for more details. ## Evaluation The models are evaluated using open-ended and multiple-choice math problems from several datasets. Here are the results: | **Model** | **TheoremQA** | **MATH** | **GSM8K** | **GPQA** | **MMLU-ST** | **BBH** | **ARC-C** | **Avg** | |:---------------------------------------|:--------------|:---------|:----------|:---------|:------------|:--------|:----------|:--------| | **MAmmoTH2-7B** (Updated) | 29.0 | 36.7 | 68.4 | 32.4 | 62.4 | 58.6 | 81.7 | 52.7 | | **MAmmoTH2-8B** (Updated) | 30.3 | 35.8 | 70.4 | 35.2 | 64.2 | 62.1 | 82.2 | 54.3 | | **MAmmoTH2-8x7B** | 32.2 | 39.0 | 75.4 | 36.8 | 67.4 | 71.1 | 87.5 | 58.9 | | **MAmmoTH2-7B-Plus** (Updated) | 31.2 | 46.0 | 84.6 | 33.8 | 63.8 | 63.3 | 84.4 | 58.1 | | **MAmmoTH2-8B-Plus** (Updated) | 31.5 | 43.0 | 85.2 | 35.8 | 66.7 | 69.7 | 84.3 | 59.4 | | **MAmmoTH2-8x7B-Plus** | 34.1 | 47.0 | 86.4 | 37.8 | 72.4 | 74.1 | 88.4 | 62.9 | To reproduce our results, please refer to https://github.com/TIGER-AI-Lab/MAmmoTH2/tree/main/math_eval. ## Usage You can use the models through Hugging Face's Transformers library. Use the pipeline function to create a text-generation pipeline with the model of your choice, then feed in a math problem to get the solution. Check our GitHub repo for more advanced use: https://github.com/TIGER-AI-Lab/MAmmoTH2 ## Limitations We've tried our best to build math generalist models. However, we acknowledge that the models' performance may vary based on the complexity and specifics of the math problem. Still, not all mathematical fields can be covered comprehensively. ## Citation If you use the models, data, or code from this project, please cite the original paper: ``` @article{yue2024mammoth2, title={MAmmoTH2: Scaling Instructions from the Web}, author={Yue, Xiang and Zheng, Tuney and Zhang, Ge and Chen, Wenhu}, journal={arXiv preprint arXiv:2405.03548}, year={2024} } ```
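The Usage note above describes a pipeline-based workflow. A minimal sketch against the original TIGER-Lab checkpoint (this record itself holds an EXL2 quant, which needs ExLlamaV2-compatible loaders instead); the prompt and token budget are illustrative:

```python
# Sketch: text-generation pipeline fed a math problem, per the card's Usage note.
from transformers import pipeline

generator = pipeline("text-generation", model="TIGER-Lab/MAmmoTH2-8B")
result = generator(
    "What is the sum of the first 50 positive integers?",
    max_new_tokens=256,
)
print(result[0]["generated_text"])
```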
Zoyd/TIGER-Lab_MAmmoTH2-8B-3_75bpw_exl2
Zoyd
2024-05-21T06:16:46Z
3
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "arxiv:2405.03548", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-05-21T05:23:54Z
--- license: mit language: - en --- **Exllamav2** quant (**exl2** / **3.75 bpw**) made with ExLlamaV2 v0.0.21 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-2_2bpw_exl2)**</center> | <center>3250 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-2_5bpw_exl2)**</center> | <center>3479 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-3_0bpw_exl2)**</center> | <center>3893 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-3_5bpw_exl2)**</center> | <center>4311 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-3_75bpw_exl2)**</center> | <center>4519 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-4_0bpw_exl2)**</center> | <center>4726 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-4_25bpw_exl2)**</center> | <center>4935 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-5_0bpw_exl2)**</center> | <center>5558 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-6_0bpw_exl2)**</center> | <center>6495 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-6_5bpw_exl2)**</center> | <center>6912 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-8_0bpw_exl2)**</center> | <center>8106 MB</center> | <center>8</center> | # 🦣 MAmmoTH2: Scaling Instructions from the Web Project Page: [https://tiger-ai-lab.github.io/MAmmoTH2/](https://tiger-ai-lab.github.io/MAmmoTH2/) Paper: [https://arxiv.org/pdf/2405.03548](https://arxiv.org/pdf/2405.03548) Code: [https://github.com/TIGER-AI-Lab/MAmmoTH2](https://github.com/TIGER-AI-Lab/MAmmoTH2) ## Introduction Introducing 🦣 MAmmoTH2, a game-changer in improving the reasoning abilities of large language models (LLMs) through innovative instruction tuning. By efficiently harvesting 10 million instruction-response pairs from the pre-training web corpus, we've developed MAmmoTH2 models that significantly boost performance on reasoning benchmarks. For instance, MAmmoTH2-7B (Mistral) sees its performance soar from 11% to 34% on MATH and from 36% to 67% on GSM8K, all without training on any domain-specific data. Further training on public instruction tuning datasets yields MAmmoTH2-Plus, setting new standards in reasoning and chatbot benchmarks. Our work presents a cost-effective approach to acquiring large-scale, high-quality instruction data, offering a fresh perspective on enhancing LLM reasoning abilities. 
| | **Base Model** | **MAmmoTH2** | **MAmmoTH2-Plus** | |:-----|:---------------------|:-------------------------------------------------------------------|:------------------------------------------------------------------| | 7B | Mistral | 🦣 [MAmmoTH2-7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B) | 🦣 [MAmmoTH2-7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B-Plus) | | 8B | Llama-3 | 🦣 [MAmmoTH2-8B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B) | 🦣 [MAmmoTH2-8B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B-Plus) | | 8x7B | Mixtral | 🦣 [MAmmoTH2-8x7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B) | 🦣 [MAmmoTH2-8x7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B-Plus) | ## Training Data Please refer to https://huggingface.co/datasets/TIGER-Lab/WebInstructSub for more details. ![Project Framework](webinstruct.png) ## Training Procedure The models are fine-tuned with the WEBINSTRUCT dataset using the original Llama-3, Mistral, and Mixtral models as base models. The training procedure varies for different models based on their sizes. Check out our paper for more details. ## Evaluation The models are evaluated using open-ended and multiple-choice math problems from several datasets. Here are the results: | **Model** | **TheoremQA** | **MATH** | **GSM8K** | **GPQA** | **MMLU-ST** | **BBH** | **ARC-C** | **Avg** | |:---------------------------------------|:--------------|:---------|:----------|:---------|:------------|:--------|:----------|:--------| | **MAmmoTH2-7B** (Updated) | 29.0 | 36.7 | 68.4 | 32.4 | 62.4 | 58.6 | 81.7 | 52.7 | | **MAmmoTH2-8B** (Updated) | 30.3 | 35.8 | 70.4 | 35.2 | 64.2 | 62.1 | 82.2 | 54.3 | | **MAmmoTH2-8x7B** | 32.2 | 39.0 | 75.4 | 36.8 | 67.4 | 71.1 | 87.5 | 58.9 | | **MAmmoTH2-7B-Plus** (Updated) | 31.2 | 46.0 | 84.6 | 33.8 | 63.8 | 63.3 | 84.4 | 58.1 | | **MAmmoTH2-8B-Plus** (Updated) | 31.5 | 43.0 | 85.2 | 35.8 | 66.7 | 69.7 | 84.3 | 59.4 | | **MAmmoTH2-8x7B-Plus** | 34.1 | 47.0 | 86.4 | 37.8 | 72.4 | 74.1 | 88.4 | 62.9 | To reproduce our results, please refer to https://github.com/TIGER-AI-Lab/MAmmoTH2/tree/main/math_eval. ## Usage You can use the models through Hugging Face's Transformers library. Use the pipeline function to create a text-generation pipeline with the model of your choice, then feed in a math problem to get the solution. Check our GitHub repo for more advanced use: https://github.com/TIGER-AI-Lab/MAmmoTH2 ## Limitations We've tried our best to build math generalist models. However, we acknowledge that the models' performance may vary based on the complexity and specifics of the math problem. Still, not all mathematical fields can be covered comprehensively. ## Citation If you use the models, data, or code from this project, please cite the original paper: ``` @article{yue2024mammoth2, title={MAmmoTH2: Scaling Instructions from the Web}, author={Yue, Xiang and Zheng, Tuney and Zhang, Ge and Chen, Wenhu}, journal={arXiv preprint arXiv:2405.03548}, year={2024} } ```
Zoyd/TIGER-Lab_MAmmoTH2-8B-4_0bpw_exl2
Zoyd
2024-05-21T06:16:46Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "arxiv:2405.03548", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "exl2", "region:us" ]
text-generation
2024-05-21T05:32:39Z
--- license: mit language: - en --- **Exllamav2** quant (**exl2** / **4.0 bpw**) made with ExLlamaV2 v0.0.21 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-2_2bpw_exl2)**</center> | <center>3250 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-2_5bpw_exl2)**</center> | <center>3479 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-3_0bpw_exl2)**</center> | <center>3893 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-3_5bpw_exl2)**</center> | <center>4311 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-3_75bpw_exl2)**</center> | <center>4519 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-4_0bpw_exl2)**</center> | <center>4726 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-4_25bpw_exl2)**</center> | <center>4935 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-5_0bpw_exl2)**</center> | <center>5558 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-6_0bpw_exl2)**</center> | <center>6495 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-6_5bpw_exl2)**</center> | <center>6912 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-8_0bpw_exl2)**</center> | <center>8106 MB</center> | <center>8</center> | # 🦣 MAmmoTH2: Scaling Instructions from the Web Project Page: [https://tiger-ai-lab.github.io/MAmmoTH2/](https://tiger-ai-lab.github.io/MAmmoTH2/) Paper: [https://arxiv.org/pdf/2405.03548](https://arxiv.org/pdf/2405.03548) Code: [https://github.com/TIGER-AI-Lab/MAmmoTH2](https://github.com/TIGER-AI-Lab/MAmmoTH2) ## Introduction Introducing 🦣 MAmmoTH2, a game-changer in improving the reasoning abilities of large language models (LLMs) through innovative instruction tuning. By efficiently harvesting 10 million instruction-response pairs from the pre-training web corpus, we've developed MAmmoTH2 models that significantly boost performance on reasoning benchmarks. For instance, MAmmoTH2-7B (Mistral) sees its performance soar from 11% to 34% on MATH and from 36% to 67% on GSM8K, all without training on any domain-specific data. Further training on public instruction tuning datasets yields MAmmoTH2-Plus, setting new standards in reasoning and chatbot benchmarks. Our work presents a cost-effective approach to acquiring large-scale, high-quality instruction data, offering a fresh perspective on enhancing LLM reasoning abilities. 
| | **Base Model** | **MAmmoTH2** | **MAmmoTH2-Plus** | |:-----|:---------------------|:-------------------------------------------------------------------|:------------------------------------------------------------------| | 7B | Mistral | 🦣 [MAmmoTH2-7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B) | 🦣 [MAmmoTH2-7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B-Plus) | | 8B | Llama-3 | 🦣 [MAmmoTH2-8B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B) | 🦣 [MAmmoTH2-8B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B-Plus) | | 8x7B | Mixtral | 🦣 [MAmmoTH2-8x7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B) | 🦣 [MAmmoTH2-8x7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B-Plus) | ## Training Data Please refer to https://huggingface.co/datasets/TIGER-Lab/WebInstructSub for more details. ![Project Framework](webinstruct.png) ## Training Procedure The models are fine-tuned with the WEBINSTRUCT dataset using the original Llama-3, Mistral, and Mixtral models as base models. The training procedure varies for different models based on their sizes. Check out our paper for more details. ## Evaluation The models are evaluated using open-ended and multiple-choice math problems from several datasets. Here are the results: | **Model** | **TheoremQA** | **MATH** | **GSM8K** | **GPQA** | **MMLU-ST** | **BBH** | **ARC-C** | **Avg** | |:---------------------------------------|:--------------|:---------|:----------|:---------|:------------|:--------|:----------|:--------| | **MAmmoTH2-7B** (Updated) | 29.0 | 36.7 | 68.4 | 32.4 | 62.4 | 58.6 | 81.7 | 52.7 | | **MAmmoTH2-8B** (Updated) | 30.3 | 35.8 | 70.4 | 35.2 | 64.2 | 62.1 | 82.2 | 54.3 | | **MAmmoTH2-8x7B** | 32.2 | 39.0 | 75.4 | 36.8 | 67.4 | 71.1 | 87.5 | 58.9 | | **MAmmoTH2-7B-Plus** (Updated) | 31.2 | 46.0 | 84.6 | 33.8 | 63.8 | 63.3 | 84.4 | 58.1 | | **MAmmoTH2-8B-Plus** (Updated) | 31.5 | 43.0 | 85.2 | 35.8 | 66.7 | 69.7 | 84.3 | 59.4 | | **MAmmoTH2-8x7B-Plus** | 34.1 | 47.0 | 86.4 | 37.8 | 72.4 | 74.1 | 88.4 | 62.9 | To reproduce our results, please refer to https://github.com/TIGER-AI-Lab/MAmmoTH2/tree/main/math_eval. ## Usage You can use the models through Hugging Face's Transformers library. Use the pipeline function to create a text-generation pipeline with the model of your choice, then feed in a math problem to get the solution. Check our GitHub repo for more advanced use: https://github.com/TIGER-AI-Lab/MAmmoTH2 ## Limitations We've tried our best to build math generalist models. However, we acknowledge that the models' performance may vary based on the complexity and specifics of the math problem. Still, not all mathematical fields can be covered comprehensively. ## Citation If you use the models, data, or code from this project, please cite the original paper: ``` @article{yue2024mammoth2, title={MAmmoTH2: Scaling Instructions from the Web}, author={Yue, Xiang and Zheng, Tuney and Zhang, Ge and Chen, Wenhu}, journal={arXiv preprint arXiv:2405.03548}, year={2024} } ```
Zoyd/TIGER-Lab_MAmmoTH2-8B-3_0bpw_exl2
Zoyd
2024-05-21T06:16:45Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "arxiv:2405.03548", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "3-bit", "exl2", "region:us" ]
text-generation
2024-05-21T05:06:36Z
--- license: mit language: - en --- **Exllamav2** quant (**exl2** / **3.0 bpw**) made with ExLlamaV2 v0.0.21 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-2_2bpw_exl2)**</center> | <center>3250 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-2_5bpw_exl2)**</center> | <center>3479 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-3_0bpw_exl2)**</center> | <center>3893 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-3_5bpw_exl2)**</center> | <center>4311 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-3_75bpw_exl2)**</center> | <center>4519 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-4_0bpw_exl2)**</center> | <center>4726 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-4_25bpw_exl2)**</center> | <center>4935 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-5_0bpw_exl2)**</center> | <center>5558 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-6_0bpw_exl2)**</center> | <center>6495 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-6_5bpw_exl2)**</center> | <center>6912 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-8B-8_0bpw_exl2)**</center> | <center>8106 MB</center> | <center>8</center> | # 🦣 MAmmoTH2: Scaling Instructions from the Web Project Page: [https://tiger-ai-lab.github.io/MAmmoTH2/](https://tiger-ai-lab.github.io/MAmmoTH2/) Paper: [https://arxiv.org/pdf/2405.03548](https://arxiv.org/pdf/2405.03548) Code: [https://github.com/TIGER-AI-Lab/MAmmoTH2](https://github.com/TIGER-AI-Lab/MAmmoTH2) ## Introduction Introducing 🦣 MAmmoTH2, a game-changer in improving the reasoning abilities of large language models (LLMs) through innovative instruction tuning. By efficiently harvesting 10 million instruction-response pairs from the pre-training web corpus, we've developed MAmmoTH2 models that significantly boost performance on reasoning benchmarks. For instance, MAmmoTH2-7B (Mistral) sees its performance soar from 11% to 34% on MATH and from 36% to 67% on GSM8K, all without training on any domain-specific data. Further training on public instruction tuning datasets yields MAmmoTH2-Plus, setting new standards in reasoning and chatbot benchmarks. Our work presents a cost-effective approach to acquiring large-scale, high-quality instruction data, offering a fresh perspective on enhancing LLM reasoning abilities. 
| | **Base Model** | **MAmmoTH2** | **MAmmoTH2-Plus** | |:-----|:---------------------|:-------------------------------------------------------------------|:------------------------------------------------------------------| | 7B | Mistral | 🦣 [MAmmoTH2-7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B) | 🦣 [MAmmoTH2-7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B-Plus) | | 8B | Llama-3 | 🦣 [MAmmoTH2-8B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B) | 🦣 [MAmmoTH2-8B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B-Plus) | | 8x7B | Mixtral | 🦣 [MAmmoTH2-8x7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B) | 🦣 [MAmmoTH2-8x7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B-Plus) | ## Training Data Please refer to https://huggingface.co/datasets/TIGER-Lab/WebInstructSub for more details. ![Project Framework](webinstruct.png) ## Training Procedure The models are fine-tuned with the WEBINSTRUCT dataset using the original Llama-3, Mistral, and Mixtral models as base models. The training procedure varies for different models based on their sizes. Check out our paper for more details. ## Evaluation The models are evaluated using open-ended and multiple-choice math problems from several datasets. Here are the results: | **Model** | **TheoremQA** | **MATH** | **GSM8K** | **GPQA** | **MMLU-ST** | **BBH** | **ARC-C** | **Avg** | |:---------------------------------------|:--------------|:---------|:----------|:---------|:------------|:--------|:----------|:--------| | **MAmmoTH2-7B** (Updated) | 29.0 | 36.7 | 68.4 | 32.4 | 62.4 | 58.6 | 81.7 | 52.7 | | **MAmmoTH2-8B** (Updated) | 30.3 | 35.8 | 70.4 | 35.2 | 64.2 | 62.1 | 82.2 | 54.3 | | **MAmmoTH2-8x7B** | 32.2 | 39.0 | 75.4 | 36.8 | 67.4 | 71.1 | 87.5 | 58.9 | | **MAmmoTH2-7B-Plus** (Updated) | 31.2 | 46.0 | 84.6 | 33.8 | 63.8 | 63.3 | 84.4 | 58.1 | | **MAmmoTH2-8B-Plus** (Updated) | 31.5 | 43.0 | 85.2 | 35.8 | 66.7 | 69.7 | 84.3 | 59.4 | | **MAmmoTH2-8x7B-Plus** | 34.1 | 47.0 | 86.4 | 37.8 | 72.4 | 74.1 | 88.4 | 62.9 | To reproduce our results, please refer to https://github.com/TIGER-AI-Lab/MAmmoTH2/tree/main/math_eval. ## Usage You can use the models through Hugging Face's Transformers library. Use the pipeline function to create a text-generation pipeline with the model of your choice, then feed in a math problem to get the solution. Check our GitHub repo for more advanced use: https://github.com/TIGER-AI-Lab/MAmmoTH2 ## Limitations We've tried our best to build math generalist models. However, we acknowledge that the models' performance may vary based on the complexity and specifics of the math problem. Still, not all mathematical fields can be covered comprehensively. ## Citation If you use the models, data, or code from this project, please cite the original paper: ``` @article{yue2024mammoth2, title={MAmmoTH2: Scaling Instructions from the Web}, author={Yue, Xiang and Zheng, Tuney and Zhang, Ge and Chen, Wenhu}, journal={arXiv preprint arXiv:2405.03548}, year={2024} } ```
se0ngjun/kisa-fine-tuned4
se0ngjun
2024-05-21T06:10:29Z
76
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-21T06:02:56Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
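The record's tags (4-bit, bitsandbytes) indicate the checkpoint is meant to load quantized. A minimal sketch with a standard BitsAndBytesConfig; the compute dtype and device map are common defaults, not values documented by the card:

```python
# Sketch: 4-bit loading with bitsandbytes via transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo = "se0ngjun/kisa-fine-tuned4"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # assumption
)
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    quantization_config=bnb_config,
    device_map="auto",
)
```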
Transduce/dilana
Transduce
2024-05-21T06:09:12Z
0
0
null
[ "license:other", "region:us" ]
null
2024-05-20T07:41:27Z
--- license: other license_name: test license_link: LICENSE ---
JayKim83/kisa-fine-tuned4
JayKim83
2024-05-21T06:07:25Z
76
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-21T06:01:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jeiku/Nous-Capybara-3B-V1.9-Q4_K_M-GGUF
jeiku
2024-05-21T06:04:26Z
11
0
null
[ "gguf", "sft", "StableLM", "llama-cpp", "gguf-my-repo", "eng", "dataset:LDJnr/Capybara", "dataset:LDJnr/LessWrong-Amplify-Instruct", "dataset:LDJnr/Pure-Dove", "dataset:LDJnr/Verified-Camel", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-05-21T06:04:21Z
--- language: - eng license: - mit tags: - sft - StableLM - llama-cpp - gguf-my-repo datasets: - LDJnr/Capybara - LDJnr/LessWrong-Amplify-Instruct - LDJnr/Pure-Dove - LDJnr/Verified-Camel --- # jeiku/Nous-Capybara-3B-V1.9-Q4_K_M-GGUF This model was converted to GGUF format from [`NousResearch/Nous-Capybara-3B-V1.9`](https://huggingface.co/NousResearch/Nous-Capybara-3B-V1.9) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/NousResearch/Nous-Capybara-3B-V1.9) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew: ```bash brew install ggerganov/ggerganov/llama.cpp ``` Then invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo jeiku/Nous-Capybara-3B-V1.9-Q4_K_M-GGUF --model nous-capybara-3b-v1.9.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo jeiku/Nous-Capybara-3B-V1.9-Q4_K_M-GGUF --model nous-capybara-3b-v1.9.Q4_K_M.gguf -c 2048 ``` Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo: ```bash git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m nous-capybara-3b-v1.9.Q4_K_M.gguf -n 128 ```
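If you prefer Python, the same GGUF file can also be loaded through the llama-cpp-python bindings. A minimal sketch, assuming the bindings are installed (`pip install llama-cpp-python`) and the GGUF file has already been downloaded locally; the path and settings are illustrative, not part of this repo:

```python
# Minimal llama-cpp-python sketch: load the quantized GGUF and generate text.
from llama_cpp import Llama

llm = Llama(
    model_path="nous-capybara-3b-v1.9.Q4_K_M.gguf",  # local path to the downloaded file
    n_ctx=2048,  # context window, matching the server example above
)
out = llm("The meaning to life and the universe is", max_tokens=128)
print(out["choices"][0]["text"])
```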
bartowski/Llama-3-Hercules-5.0-8B-GGUF
bartowski
2024-05-21T06:03:36Z
236
6
transformers
[ "transformers", "gguf", "text-generation", "dataset:Locutusque/hercules-v5.0", "license:llama3", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-05-21T05:42:43Z
--- library_name: transformers license: llama3 datasets: - Locutusque/hercules-v5.0 quantized_by: bartowski pipeline_tag: text-generation --- ## Llamacpp imatrix Quantizations of Llama-3-Hercules-5.0-8B Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2940">b2940</a> for quantization. Original model: https://huggingface.co/Locutusque/Llama-3-Hercules-5.0-8B All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/b6ac44691e994344625687afe3263b3a) ## Prompt format ``` <|im_start|>system {system_prompt}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [Llama-3-Hercules-5.0-8B-Q8_0.gguf](https://huggingface.co/bartowski/Llama-3-Hercules-5.0-8B-GGUF/blob/main/Llama-3-Hercules-5.0-8B-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. | | [Llama-3-Hercules-5.0-8B-Q6_K.gguf](https://huggingface.co/bartowski/Llama-3-Hercules-5.0-8B-GGUF/blob/main/Llama-3-Hercules-5.0-8B-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. | | [Llama-3-Hercules-5.0-8B-Q5_K_M.gguf](https://huggingface.co/bartowski/Llama-3-Hercules-5.0-8B-GGUF/blob/main/Llama-3-Hercules-5.0-8B-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. | | [Llama-3-Hercules-5.0-8B-Q5_K_S.gguf](https://huggingface.co/bartowski/Llama-3-Hercules-5.0-8B-GGUF/blob/main/Llama-3-Hercules-5.0-8B-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. | | [Llama-3-Hercules-5.0-8B-Q4_K_M.gguf](https://huggingface.co/bartowski/Llama-3-Hercules-5.0-8B-GGUF/blob/main/Llama-3-Hercules-5.0-8B-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [Llama-3-Hercules-5.0-8B-Q4_K_S.gguf](https://huggingface.co/bartowski/Llama-3-Hercules-5.0-8B-GGUF/blob/main/Llama-3-Hercules-5.0-8B-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. | | [Llama-3-Hercules-5.0-8B-IQ4_NL.gguf](https://huggingface.co/bartowski/Llama-3-Hercules-5.0-8B-GGUF/blob/main/Llama-3-Hercules-5.0-8B-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. | | [Llama-3-Hercules-5.0-8B-IQ4_XS.gguf](https://huggingface.co/bartowski/Llama-3-Hercules-5.0-8B-GGUF/blob/main/Llama-3-Hercules-5.0-8B-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [Llama-3-Hercules-5.0-8B-Q3_K_L.gguf](https://huggingface.co/bartowski/Llama-3-Hercules-5.0-8B-GGUF/blob/main/Llama-3-Hercules-5.0-8B-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. | | [Llama-3-Hercules-5.0-8B-Q3_K_M.gguf](https://huggingface.co/bartowski/Llama-3-Hercules-5.0-8B-GGUF/blob/main/Llama-3-Hercules-5.0-8B-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. | | [Llama-3-Hercules-5.0-8B-IQ3_M.gguf](https://huggingface.co/bartowski/Llama-3-Hercules-5.0-8B-GGUF/blob/main/Llama-3-Hercules-5.0-8B-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. 
| | [Llama-3-Hercules-5.0-8B-IQ3_S.gguf](https://huggingface.co/bartowski/Llama-3-Hercules-5.0-8B-GGUF/blob/main/Llama-3-Hercules-5.0-8B-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. | | [Llama-3-Hercules-5.0-8B-Q3_K_S.gguf](https://huggingface.co/bartowski/Llama-3-Hercules-5.0-8B-GGUF/blob/main/Llama-3-Hercules-5.0-8B-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. | | [Llama-3-Hercules-5.0-8B-IQ3_XS.gguf](https://huggingface.co/bartowski/Llama-3-Hercules-5.0-8B-GGUF/blob/main/Llama-3-Hercules-5.0-8B-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [Llama-3-Hercules-5.0-8B-IQ3_XXS.gguf](https://huggingface.co/bartowski/Llama-3-Hercules-5.0-8B-GGUF/blob/main/Llama-3-Hercules-5.0-8B-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [Llama-3-Hercules-5.0-8B-Q2_K.gguf](https://huggingface.co/bartowski/Llama-3-Hercules-5.0-8B-GGUF/blob/main/Llama-3-Hercules-5.0-8B-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. | | [Llama-3-Hercules-5.0-8B-IQ2_M.gguf](https://huggingface.co/bartowski/Llama-3-Hercules-5.0-8B-GGUF/blob/main/Llama-3-Hercules-5.0-8B-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [Llama-3-Hercules-5.0-8B-IQ2_S.gguf](https://huggingface.co/bartowski/Llama-3-Hercules-5.0-8B-GGUF/blob/main/Llama-3-Hercules-5.0-8B-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. | | [Llama-3-Hercules-5.0-8B-IQ2_XS.gguf](https://huggingface.co/bartowski/Llama-3-Hercules-5.0-8B-GGUF/blob/main/Llama-3-Hercules-5.0-8B-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. | | [Llama-3-Hercules-5.0-8B-IQ2_XXS.gguf](https://huggingface.co/bartowski/Llama-3-Hercules-5.0-8B-GGUF/blob/main/Llama-3-Hercules-5.0-8B-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. | | [Llama-3-Hercules-5.0-8B-IQ1_M.gguf](https://huggingface.co/bartowski/Llama-3-Hercules-5.0-8B-GGUF/blob/main/Llama-3-Hercules-5.0-8B-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. | | [Llama-3-Hercules-5.0-8B-IQ1_S.gguf](https://huggingface.co/bartowski/Llama-3-Hercules-5.0-8B-GGUF/blob/main/Llama-3-Hercules-5.0-8B-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. | ## Downloading using huggingface-cli First, make sure you have huggingface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/Llama-3-Hercules-5.0-8B-GGUF --include "Llama-3-Hercules-5.0-8B-Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/Llama-3-Hercules-5.0-8B-GGUF --include "Llama-3-Hercules-5.0-8B-Q8_0.gguf/*" --local-dir Llama-3-Hercules-5.0-8B-Q8_0 --local-dir-use-symlinks False ``` You can either specify a new local-dir (Llama-3-Hercules-5.0-8B-Q8_0) or download them all in place (./). ## Which file should I choose? A great write-up with charts comparing the performance of the various quants is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9). The first thing to figure out is how big a model you can run. 
To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing in your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in the format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4 and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in the format IQX_X, like IQ3_M. They are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so speed vs. performance is a tradeoff you'll have to decide on. The I-quants are *not* compatible with Vulkan, which also supports AMD, so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
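To make the sizing rule above concrete, here is a small illustrative Python sketch that picks the largest quant fitting a given VRAM budget; the file sizes are copied from the table above, and the 2GB headroom is this card's heuristic, not a hard requirement:

```python
# Pick the largest quant from the table above that leaves ~2 GB of VRAM headroom.
QUANT_SIZES_GB = {
    "Q8_0": 8.54, "Q6_K": 6.59, "Q5_K_M": 5.73, "Q5_K_S": 5.59,
    "Q4_K_M": 4.92, "Q4_K_S": 4.69, "IQ4_XS": 4.44, "Q3_K_L": 4.32,
    "IQ3_M": 3.78, "Q2_K": 3.17,
}

def pick_quant(vram_gb: float, headroom_gb: float = 2.0) -> str:
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= vram_gb - headroom_gb}
    if not fitting:
        raise ValueError("Nothing fits; consider offloading part of the model to CPU.")
    return max(fitting, key=fitting.get)  # biggest file that still fits

print(pick_quant(8.0))   # an 8 GB card  -> 'Q5_K_M'
print(pick_quant(12.0))  # a 12 GB card -> 'Q8_0'
```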
nerottt/lc_0.3
nerottt
2024-05-21T06:02:31Z
76
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-21T06:01:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RomBor/ppo-PyramidsRND
RomBor
2024-05-21T06:02:27Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2024-05-21T06:02:24Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the official ML-Agents environments, go to https://huggingface.co/unity 2. Find your model_id: RomBor/ppo-PyramidsRND 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
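The checkpoint files can also be fetched programmatically before resuming training or watching locally; a sketch using huggingface_hub directly rather than the ML-Agents tooling (the destination folder name is arbitrary):

```python
# Download the trained ML-Agents checkpoint (config, .onnx, run logs) locally.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="RomBor/ppo-PyramidsRND",
    local_dir="./ppo-PyramidsRND",  # arbitrary destination folder
)
print("Checkpoint files downloaded to", local_dir)
```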
nerottt/lc_0.2
nerottt
2024-05-21T06:00:14Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-21T06:00:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
damgomz/ft_bs64_lr7
damgomz
2024-05-21T05:59:11Z
118
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "fill-mask", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-05-20T20:30:08Z
--- language: en tags: - fill-mask kwargs: timestamp: '2024-05-21T07:18:18' project_name: ft_bs64_lr7_emissions_tracker run_id: 1145bafe-8e78-49ff-af17-c3e010606fad duration: 33325.59226679802 emissions: 0.0218008981080728 emissions_rate: 6.541788645056703e-07 cpu_power: 42.5 gpu_power: 0.0 ram_power: 7.5 cpu_energy: 0.3934264249450631 gpu_energy: 0 ram_energy: 0.0694278267284233 energy_consumed: 0.4628542516734863 country_name: Switzerland country_iso_code: CHE region: .nan cloud_provider: .nan cloud_region: .nan os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34 python_version: 3.10.4 codecarbon_version: 2.3.4 cpu_count: 3 cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz gpu_count: .nan gpu_model: .nan longitude: .nan latitude: .nan ram_total_size: 20 tracking_mode: machine on_cloud: N pue: 1.0 --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 33325.59226679802 | | Emissions (Co2eq in kg) | 0.0218008981080728 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 7.5 | | CPU energy (kWh) | 0.3934264249450631 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0694278267284233 | | Consumed energy (kWh) | 0.4628542516734863 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 3 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.06415176511358618 | | Emissions (Co2eq in kg) | 0.013052523637829223 | ## Note 20 May 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | damgomz/ThunBERT_bs32_lr5 | | model_name | ft_bs64_lr7 | | sequence_length | 400 | | num_epoch | 15 | | learning_rate | 5e-07 | | batch_size | 64 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 81450 | ## Training and Testing steps Epoch | Train Loss | Test Loss | Accuracy | Recall ---|---|---|---|--- | 0 | 0.704930 | 0.674754 | 0.572165 | 0.265337 | | 1 | 0.665731 | 0.636024 | 0.677467 | 0.766871 | | 2 | 0.608330 | 0.578532 | 0.709131 | 0.889571 | | 3 | 0.554262 | 0.535725 | 0.730486 | 0.898773 | | 4 | 0.514408 | 0.501135 | 0.752577 | 0.888037 | | 5 | 0.478413 | 0.467166 | 0.776878 | 0.904908 | | 6 | 0.441089 | 0.433510 | 0.804124 | 0.884969 | | 7 | 0.413271 | 0.413341 | 0.818851 | 0.861963 | | 8 | 0.390756 | 0.404188 | 0.820324 | 0.920245 | | 9 | 0.374568 | 0.391133 | 0.818851 | 0.877301 | | 10 | 0.360722 | 0.384345 | 0.826215 | 0.883436 | | 11 | 0.352599 | 0.380412 | 0.830633 | 0.878834 | | 12 | 0.338886 | 0.378745 | 0.830633 | 0.889571 | | 13 | 0.330394 | 0.376916 | 0.834315 | 0.868098 | | 14 | 0.316725 | 0.380716 | 0.832106 | 0.845092 |
MSParkDev/SingSeqBERT-UCIRetail
MSParkDev
2024-05-21T05:57:18Z
5
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-multilingual-cased", "base_model:finetune:google-bert/bert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-15T14:04:29Z
--- license: apache-2.0 base_model: google-bert/bert-base-multilingual-cased tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: SingSeqBERT-UCIRetail results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SingSeqBERT-UCIRetail This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4882 - Accuracy: 0.7685 - F1: 0.7672 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 456 | 0.5482 | 0.7479 | 0.7425 | | 0.6336 | 2.0 | 912 | 0.5108 | 0.7570 | 0.7569 | | 0.5437 | 3.0 | 1368 | 0.4882 | 0.7685 | 0.7672 | | 0.4872 | 4.0 | 1824 | 0.5918 | 0.7825 | 0.7825 | | 0.4329 | 5.0 | 2280 | 0.6156 | 0.7652 | 0.7652 | | 0.3957 | 6.0 | 2736 | 0.6598 | 0.7685 | 0.7683 | | 0.3439 | 7.0 | 3192 | 0.7881 | 0.7768 | 0.7756 | | 0.3068 | 8.0 | 3648 | 0.9189 | 0.7545 | 0.7536 | | 0.2635 | 9.0 | 4104 | 1.0319 | 0.7619 | 0.7619 | | 0.2305 | 10.0 | 4560 | 1.0976 | 0.7586 | 0.7586 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.0.0 - Datasets 2.14.5 - Tokenizers 0.14.1
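The auto-generated card above omits an inference example; a minimal sketch with the transformers pipeline API (the input string is an arbitrary retail-style description, and the label names come from whatever the checkpoint's config defines, which this card does not document):

```python
# Minimal inference sketch for the fine-tuned sequence classifier.
from transformers import pipeline

clf = pipeline("text-classification", model="MSParkDev/SingSeqBERT-UCIRetail")
print(clf("WHITE HANGING HEART T-LIGHT HOLDER"))  # -> [{'label': ..., 'score': ...}]
```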
JeongKyu/my_awesome_billsum_model
JeongKyu
2024-05-21T05:47:45Z
106
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-21T05:42:51Z
--- license: apache-2.0 base_model: t5-small tags: - generated_from_trainer metrics: - rouge model-index: - name: my_awesome_billsum_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_billsum_model This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.5862 - Rouge1: 0.1331 - Rouge2: 0.0416 - Rougel: 0.1104 - Rougelsum: 0.1104 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 62 | 2.8752 | 0.1183 | 0.0316 | 0.0998 | 0.0998 | 19.0 | | No log | 2.0 | 124 | 2.6656 | 0.127 | 0.0382 | 0.1058 | 0.1058 | 19.0 | | No log | 3.0 | 186 | 2.6039 | 0.1309 | 0.0429 | 0.1094 | 0.1094 | 19.0 | | No log | 4.0 | 248 | 2.5862 | 0.1331 | 0.0416 | 0.1104 | 0.1104 | 19.0 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
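As with the other auto-generated cards, no inference snippet is shown; here is a minimal sketch for this T5 summarizer. The input is a placeholder, and the `summarize: ` prefix is the usual T5 convention rather than something this card documents; the same pattern applies to the other `my_awesome_billsum_model` checkpoints below.

```python
# Minimal summarization sketch for the fine-tuned t5-small checkpoint.
from transformers import pipeline

text = "summarize: The people of the State of California do enact as follows: ..."
summarizer = pipeline("summarization", model="JeongKyu/my_awesome_billsum_model")
print(summarizer(text, max_length=60)[0]["summary_text"])
```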
DeepBrainChainAI/superImageAI
DeepBrainChainAI
2024-05-21T05:46:44Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-21T05:32:20Z
--- license: apache-2.0 ---
wendy41/llama2-koen-ft-v2
wendy41
2024-05-21T05:44:25Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-21T05:44:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
taddeusb90/finbro-v0.1.0-dolphin-2.9-llama-3-8B-instruct-1m
taddeusb90
2024-05-21T05:43:35Z
3
0
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "finance", "conversational", "en", "dataset:taddeusb90/finbro-v0.1.0", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-10T12:38:00Z
--- license: llama3 datasets: - taddeusb90/finbro-v0.1.0 language: - en library_name: transformers tags: - finance --- Finbro v0.1.0 Dolphin 2.9 Llama 3 8B model with a 1M-token context window ====================== Model Description ----------------- The Finbro Dolphin 2.9 Llama 3 8B model is a language model optimized for financial applications. The model is uncensored and aims to enhance financial analysis, automate data extraction, and improve financial literacy across various user expertise levels; it is also trained for obedience. It utilizes a massive 1M-token context window. This is just a sneak peek at what's coming; future releases will follow periodically, consistently improving performance. ![FinBro](https://huggingface.co/taddeusb90/finbro-v0.1.0-dolphin-2.9-llama-3-8B-instruct-131k/resolve/main/1539868156729340231_3171889935_10-05-2024-05-08-05.jpeg) Training: ----------------- The model is still training; I will share new incremental releases while it improves so you have time to play around with it. ![Loss](https://huggingface.co/taddeusb90/finbro-v0.1.0-dolphin-2.9-llama-3-8B-instruct-1m/resolve/main/W%26B%20Chart%2020_05_2024%2C%2017_35_24.png) ![Evaluation Loss](https://huggingface.co/taddeusb90/finbro-v0.1.0-dolphin-2.9-llama-3-8B-instruct-131k/resolve/main/W%26B%20Chart%2020_05_2024%2C%2017_35_39.png) What's Next? ----------- * **Extended Capability:** Continue training the 8B model, as it has not yet converged and only scratches the surface, then transition to a 70B model for deeper insights and broader financial applications. * **Dataset Expansion:** Continuous enhancement by integrating more diverse and comprehensive real and synthetic financial data. * **Advanced Financial Analysis:** Future versions will support complex financial decision-making processes by interpreting and analyzing financial data within agentive workflows. * **Incremental Improvements:** Regular updates are made to increase the model's efficiency and accuracy and extend its capabilities in financial tasks. Model Applications ------------------ * **Information Extraction:** Automates the process of extracting valuable data from unstructured financial documents. * **Financial Literacy:** Provides explanations of financial documents at various levels, making financial knowledge more accessible. How to Use ---------- Here is how to load and use the model in your Python projects: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "taddeusb90/finbro-v0.1.0-dolphin-2.9-llama-3-8B-instruct-1m" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) text = "Your financial query here" inputs = tokenizer(text, return_tensors="pt") outputs = model.generate(inputs['input_ids']) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` Training Data ------------- The Finbro Llama 3 8B model was trained on the Finbro Dataset, an extensive compilation of over 300,000 entries sourced from Investopedia and Sujet Finance. This dataset includes structured Q&A pairs, financial reports, and a variety of financial tasks pooled from multiple datasets. 
The dataset can be found [here](https://huggingface.co/datasets/taddeusb90/finbro-v0.1.0). This dataset will be extended to contain real and synthetic data on a wide range of financial tasks, such as: - Investment valuation - Value investing - Security analysis - Derivatives - Asset and portfolio management - Financial information extraction - Quantitative finance - Econometrics - Applied computer science in finance and much more. Notice -------- You are advised to implement your own alignment layer and guard rails before exposing the model as a service or using it in production. It will be highly compliant with any requests, even unethical ones. Please read Eric Hartford's blog post about uncensored models: https://erichartford.com/uncensored-models You are responsible for any content you create using this model. Please exercise caution and use it at your own risk; I assume no responsibility for any losses incurred through its use. Licensing --------- This model is released under the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/blob/main/LICENSE). Citation -------- If you use this model in your research, please cite it as follows: ```bibtex @misc{ finbro-v0.1.0-dolphin-2.9-llama-3-8B-instruct-1m, author = {Taddeus Buica}, title = {Finbro Dolphin 2.9 Llama 3 8B Model for Financial Analysis}, year = {2024}, journal = {Hugging Face repository}, howpublished = {\url{https://huggingface.co/taddeusb90/finbro-v0.1.0-dolphin-2.9-llama-3-8B-instruct-1m}} } ``` Special thanks to the folks from AI@Meta and Cognitive Computations for powering this project with their awesome models. Contact -------- If you would like to connect, share ideas or feedback, help support bigger models, or develop your own custom finance model on your private dataset, let's talk on [LinkedIn](https://www.linkedin.com/in/taddeus-buica-1009a965/). References -------- [[1](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)] Llama 3 Model Card by AI@Meta, 2024 [[2](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b)] Dolphin 2.9 by Cognitive Computations, 2024 [[3](https://huggingface.co/datasets/sujet-ai/Sujet-Finance-Instruct-177k)] Sujet Finance Dataset [[4](https://huggingface.co/datasets/FinLang/investopedia-instruction-tuning-dataset)] Dataset Card for investopedia-instruction-tuning
Dhahlan2000/Translation-GPT
Dhahlan2000
2024-05-21T05:38:15Z
60
0
transformers
[ "transformers", "tf", "mt5", "text2text-generation", "generated_from_keras_callback", "base_model:google/mt5-small", "base_model:finetune:google/mt5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-21T05:35:33Z
--- license: apache-2.0 tags: - generated_from_keras_callback base_model: google/mt5-small model-index: - name: Translation-GPT results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Translation-GPT This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 5.2450 - Validation Loss: 3.8117 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 8.9467 | 4.5859 | 0 | | 5.2450 | 3.8117 | 1 | ### Framework versions - Transformers 4.40.2 - TensorFlow 2.15.0 - Datasets 2.17.0 - Tokenizers 0.19.1
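The card does not document the language pair or expected prompt format; a minimal TensorFlow inference sketch under those caveats (the input string and `translate: ` prefix are placeholders):

```python
# Minimal TF inference sketch for the fine-tuned mt5-small checkpoint.
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Dhahlan2000/Translation-GPT")
model = TFAutoModelForSeq2SeqLM.from_pretrained("Dhahlan2000/Translation-GPT")

inputs = tokenizer("translate: Hello, how are you?", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```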
WooHaru/my_awesome_billsum_model
WooHaru
2024-05-21T05:36:27Z
105
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-21T05:30:03Z
--- license: apache-2.0 base_model: t5-small tags: - generated_from_trainer metrics: - rouge model-index: - name: my_awesome_billsum_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_billsum_model This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.5382 - Rouge1: 0.1349 - Rouge2: 0.0451 - Rougel: 0.1128 - Rougelsum: 0.1127 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 62 | 2.8350 | 0.1266 | 0.0357 | 0.1067 | 0.1068 | 19.0 | | No log | 2.0 | 124 | 2.6190 | 0.1356 | 0.0464 | 0.1148 | 0.1148 | 19.0 | | No log | 3.0 | 186 | 2.5561 | 0.136 | 0.0436 | 0.1129 | 0.1129 | 19.0 | | No log | 4.0 | 248 | 2.5382 | 0.1349 | 0.0451 | 0.1128 | 0.1127 | 19.0 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
ekkkee/my_awesome_billsum_model
ekkkee
2024-05-21T05:35:23Z
105
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-21T05:29:53Z
--- license: apache-2.0 base_model: t5-small tags: - generated_from_trainer metrics: - rouge model-index: - name: my_awesome_billsum_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_billsum_model This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.5449 - Rouge1: 0.145 - Rouge2: 0.0509 - Rougel: 0.1173 - Rougelsum: 0.1171 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 62 | 2.8333 | 0.1273 | 0.037 | 0.105 | 0.1053 | 19.0 | | No log | 2.0 | 124 | 2.6231 | 0.1377 | 0.0474 | 0.1125 | 0.1122 | 19.0 | | No log | 3.0 | 186 | 2.5621 | 0.1433 | 0.0501 | 0.1162 | 0.1159 | 19.0 | | No log | 4.0 | 248 | 2.5449 | 0.145 | 0.0509 | 0.1173 | 0.1171 | 19.0 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
zpdlsprtm/my_awesome_billsum_model
zpdlsprtm
2024-05-21T05:32:52Z
105
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-21T05:27:47Z
--- license: apache-2.0 base_model: t5-small tags: - generated_from_trainer metrics: - rouge model-index: - name: my_awesome_billsum_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_billsum_model This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.5093 - Rouge1: 0.1421 - Rouge2: 0.049 - Rougel: 0.1164 - Rougelsum: 0.1163 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 62 | 2.8023 | 0.124 | 0.0327 | 0.1044 | 0.1044 | 19.0 | | No log | 2.0 | 124 | 2.5922 | 0.1325 | 0.0397 | 0.1085 | 0.1088 | 19.0 | | No log | 3.0 | 186 | 2.5274 | 0.1398 | 0.0473 | 0.1152 | 0.1153 | 19.0 | | No log | 4.0 | 248 | 2.5093 | 0.1421 | 0.049 | 0.1164 | 0.1163 | 19.0 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
wifibaby4u/Guru-Llama-3-8B-Chat
wifibaby4u
2024-05-21T05:30:50Z
8
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "meta", "llama-3", "llama3中文指令模型", "conversational", "en", "zh", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-20T06:31:04Z
--- language: - en - zh pipeline_tag: text-generation tags: - meta - llama-3 - llama3中文指令模型 license: llama3 --- # Llama3 Chinese Instruction Model ## Project Overview This project uses `LLaMA-Factory` to fine-tune the [Guru-Llama-3-8B](https://modelscope.cn/models/wifibaby4u/Guru-Llama-3-8B) model. ## Models - Chat models | Name | Download | | -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | Guru-Llama-3-8B-Chat | • [🤗 Hugging Face](https://huggingface.co/wifibaby4u/Guru-Llama-3-8B-Chat) • [🤖 ModelScope](https://modelscope.cn/models/wifibaby4u/Guru-Llama-3-8B-Chat) | - Base models | Name | Download | | --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | Guru-Llama-3-8B | • [🤗 Hugging Face](https://huggingface.co/wifibaby4u/Guru-Llama-3-8B) • [🤖 ModelScope](https://modelscope.cn/models/wifibaby4u/Guru-Llama-3-8B) | ## Evaluation ### CMMLU | Name | Average | STEM | Social Sciences | Humanities | Other | |-------|---------|------|-----------------|------------|-------| | Five-shot | 49.65 | 42.83 | 50.99 | 52.87 | 51.13 | | Zero-shot | 43.51 | 37.57 | 44.91 | 45.64 | 45.09 | ## Training Datasets - alpaca_gpt4_en - alpaca_gpt4_zh - ruozhiba_gpt4o ## Usage Guide ### Environment Setup Make sure the following software is installed on your machine: - Python 3.8+ - PyTorch 1.8+ ### Installation First, install the required dependency: ```bash pip install modelscope ``` ### Model Download Use the following code to download the model: ```python from modelscope import snapshot_download model_dir = snapshot_download('wifibaby4u/Guru-Llama-3-8B-Chat') ``` ## Contributing We welcome contributions from community developers! If you are interested in helping develop this project or have any suggestions, please reach out via an Issue or Pull Request.
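After downloading, the checkpoint can be run with the standard transformers chat workflow; a sketch, assuming the repository ships a Llama-3 chat template (verify before relying on it). It uses the Hugging Face repo id directly, but the `model_dir` returned by `snapshot_download` above works the same way:

```python
# Sketch: chat with the downloaded checkpoint via transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "wifibaby4u/Guru-Llama-3-8B-Chat"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [{"role": "user", "content": "Hello, please introduce yourself."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```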
bartowski/Llama-3-Hercules-5.0-8B-exl2
bartowski
2024-05-21T05:29:26Z
2
0
transformers
[ "transformers", "text-generation", "dataset:Locutusque/hercules-v5.0", "license:llama3", "endpoints_compatible", "region:us" ]
text-generation
2024-05-21T05:29:25Z
--- library_name: transformers license: llama3 datasets: - Locutusque/hercules-v5.0 quantized_by: bartowski pipeline_tag: text-generation --- ## Exllama v2 Quantizations of Llama-3-Hercules-5.0-8B Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.21">turboderp's ExLlamaV2 v0.0.21</a> for quantization. <b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below)</b> Each branch contains a different bits-per-weight level, while the main branch contains only the measurement.json for further conversions. Original model: https://huggingface.co/Locutusque/Llama-3-Hercules-5.0-8B ## Prompt format ``` <|im_start|>system {system_prompt}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## Available sizes | Branch | Bits | lm_head bits | VRAM (4k) | VRAM (8k) | VRAM (16k) | VRAM (32k) | Description | | ----- | ---- | ------- | ------ | ------ | ------ | ------ | ------------ | | [8_0](https://huggingface.co/bartowski/Llama-3-Hercules-5.0-8B-exl2/tree/8_0) | 8.0 | 8.0 | 10.1 GB | 10.5 GB | 11.5 GB | 13.6 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. | | [6_5](https://huggingface.co/bartowski/Llama-3-Hercules-5.0-8B-exl2/tree/6_5) | 6.5 | 8.0 | 8.9 GB | 9.3 GB | 10.3 GB | 12.4 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. | | [5_0](https://huggingface.co/bartowski/Llama-3-Hercules-5.0-8B-exl2/tree/5_0) | 5.0 | 6.0 | 7.7 GB | 8.1 GB | 9.1 GB | 11.2 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. | | [4_25](https://huggingface.co/bartowski/Llama-3-Hercules-5.0-8B-exl2/tree/4_25) | 4.25 | 6.0 | 7.0 GB | 7.4 GB | 8.4 GB | 10.5 GB | GPTQ equivalent bits per weight, slightly higher quality. | | [3_5](https://huggingface.co/bartowski/Llama-3-Hercules-5.0-8B-exl2/tree/3_5) | 3.5 | 6.0 | 6.4 GB | 6.8 GB | 7.8 GB | 9.9 GB | Lower quality, only use if you have to. | ## Download instructions With git: ```shell git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Llama-3-Hercules-5.0-8B-exl2 Llama-3-Hercules-5.0-8B-exl2-6_5 ``` With the huggingface hub CLI (credit to TheBloke for instructions): ```shell pip3 install huggingface-hub ``` To download a specific branch, use the `--revision` parameter. For example, to download the 6.5 bpw branch: Linux: ```shell huggingface-cli download bartowski/Llama-3-Hercules-5.0-8B-exl2 --revision 6_5 --local-dir Llama-3-Hercules-5.0-8B-exl2-6_5 --local-dir-use-symlinks False ``` Windows (which apparently doesn't like _ in folders sometimes?): ```shell huggingface-cli download bartowski/Llama-3-Hercules-5.0-8B-exl2 --revision 6_5 --local-dir Llama-3-Hercules-5.0-8B-exl2-6.5 --local-dir-use-symlinks False ``` Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
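For loading one of the downloaded branches in Python, the ExLlamaV2 examples of this era follow roughly the pattern below; this is a sketch based on those examples, not part of this card, so check it against the exllamav2 version you actually install:

```python
# Sketch: load an exl2 branch (downloaded to the folder below) and generate.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "Llama-3-Hercules-5.0-8B-exl2-6_5"  # folder from the download step
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()  # default sampling settings
prompt = "<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n"  # ChatML, per the card
print(generator.generate_simple(prompt, settings, 128))
```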
ttokky/my_awesome_billsum_model
ttokky
2024-05-21T05:28:16Z
105
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-21T05:21:09Z
--- license: apache-2.0 base_model: t5-small tags: - generated_from_trainer metrics: - rouge model-index: - name: my_awesome_billsum_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_billsum_model This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4992 - Rouge1: 0.144 - Rouge2: 0.0527 - Rougel: 0.1181 - Rougelsum: 0.1181 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 62 | 2.7901 | 0.128 | 0.0345 | 0.1078 | 0.1077 | 19.0 | | No log | 2.0 | 124 | 2.5764 | 0.1374 | 0.0451 | 0.1137 | 0.1135 | 19.0 | | No log | 3.0 | 186 | 2.5156 | 0.1437 | 0.0519 | 0.1182 | 0.118 | 19.0 | | No log | 4.0 | 248 | 2.4992 | 0.144 | 0.0527 | 0.1181 | 0.1181 | 19.0 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
yhjeong81/my_awesome_billsum_model
yhjeong81
2024-05-21T05:26:55Z
106
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-21T05:21:49Z
--- license: apache-2.0 base_model: t5-small tags: - generated_from_trainer metrics: - rouge model-index: - name: my_awesome_billsum_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_billsum_model This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.5867 - Rouge1: 0.1413 - Rouge2: 0.0517 - Rougel: 0.1168 - Rougelsum: 0.1168 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 62 | 2.8742 | 0.1248 | 0.0359 | 0.1039 | 0.1039 | 19.0 | | No log | 2.0 | 124 | 2.6692 | 0.133 | 0.0454 | 0.1118 | 0.1118 | 19.0 | | No log | 3.0 | 186 | 2.6035 | 0.1369 | 0.0486 | 0.1138 | 0.1138 | 19.0 | | No log | 4.0 | 248 | 2.5867 | 0.1413 | 0.0517 | 0.1168 | 0.1168 | 19.0 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
seonhwa/my_awesome_billsum_model
seonhwa
2024-05-21T05:25:56Z
105
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-21T05:20:45Z
--- license: apache-2.0 base_model: t5-small tags: - generated_from_trainer metrics: - rouge model-index: - name: my_awesome_billsum_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_billsum_model This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.5800 - Rouge1: 0.1423 - Rouge2: 0.0542 - Rougel: 0.1176 - Rougelsum: 0.1176 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 62 | 2.8711 | 0.1265 | 0.0361 | 0.1061 | 0.1061 | 19.0 | | No log | 2.0 | 124 | 2.6608 | 0.1367 | 0.0476 | 0.1134 | 0.1133 | 19.0 | | No log | 3.0 | 186 | 2.5972 | 0.1407 | 0.0519 | 0.1159 | 0.1161 | 19.0 | | No log | 4.0 | 248 | 2.5800 | 0.1423 | 0.0542 | 0.1176 | 0.1176 | 19.0 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
sidovic/flan-t5-base-mimic-med-reports
sidovic
2024-05-21T05:25:51Z
114
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-base", "base_model:finetune:google/flan-t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-20T20:21:38Z
--- license: apache-2.0 base_model: google/flan-t5-base tags: - generated_from_trainer metrics: - rouge model-index: - name: flan-t5-base-mimic-med-reports results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-base-mimic-med-reports This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2032 - Rouge1: 52.8742 - Rouge2: 42.4294 - Rougel: 51.1178 - Rougelsum: 51.8773 - Meteor: 47.6053 - Bleu4: 14.2811 - Bleu-p1: 61.1865 - Bleu-p2: 43.5135 - Bleu-p3: 33.9223 - Bleu-p4: 25.8304 - Gen Len: 13.4702 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Meteor | Bleu4 | Bleu-p1 | Bleu-p2 | Bleu-p3 | Bleu-p4 | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:| | 0.2627 | 1.0 | 10497 | 0.2235 | 51.0687 | 39.674 | 49.6154 | 50.1361 | 45.6438 | 13.1526 | 60.4125 | 40.8134 | 31.1521 | 23.6299 | 13.2415 | | 0.2376 | 2.0 | 20994 | 0.2102 | 51.5603 | 40.8339 | 49.8247 | 50.5212 | 46.2244 | 13.1941 | 60.8733 | 42.1622 | 32.8697 | 24.9374 | 13.1225 | | 0.23 | 3.0 | 31491 | 0.2051 | 52.5731 | 41.7381 | 50.8502 | 51.6767 | 47.2270 | 14.0337 | 60.9681 | 42.8231 | 33.1248 | 25.1256 | 13.4702 | | 0.2288 | 4.0 | 41988 | 0.2032 | 52.8742 | 42.4294 | 51.1178 | 51.8773 | 47.6053 | 14.2811 | 61.1865 | 43.5135 | 33.9223 | 25.8304 | 13.4702 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
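## Example usage

To generate a summary from a report with this checkpoint, a direct seq2seq call should work. This is a minimal sketch; the example input is invented for illustration, since the expected report format is not documented.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "sidovic/flan-t5-base-mimic-med-reports"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical input; adapt to the report format the model was trained on.
report = "FINDINGS: The lungs are clear. No pleural effusion or pneumothorax. Heart size is normal."
inputs = tokenizer(report, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```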
MaziyarPanahi/NeuralsynthesisT3qm7-7B-GGUF
MaziyarPanahi
2024-05-21T05:23:52Z
54
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "merge", "mergekit", "lazymergekit", "automerger", "base_model:Kukedlc/NeuralSynthesis-7b-v0.4-slerp", "base_model:nlpguy/T3QM7", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:automerger/NeuralsynthesisT3qm7-7B", "base_model:quantized:automerger/NeuralsynthesisT3qm7-7B" ]
text-generation
2024-05-21T04:54:50Z
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- merge
- mergekit
- lazymergekit
- automerger
- base_model:Kukedlc/NeuralSynthesis-7b-v0.4-slerp
- base_model:nlpguy/T3QM7
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: NeuralsynthesisT3qm7-7B-GGUF
base_model: automerger/NeuralsynthesisT3qm7-7B
inference: false
model_creator: automerger
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---

# [MaziyarPanahi/NeuralsynthesisT3qm7-7B-GGUF](https://huggingface.co/MaziyarPanahi/NeuralsynthesisT3qm7-7B-GGUF)

- Model creator: [automerger](https://huggingface.co/automerger)
- Original model: [automerger/NeuralsynthesisT3qm7-7B](https://huggingface.co/automerger/NeuralsynthesisT3qm7-7B)

## Description

[MaziyarPanahi/NeuralsynthesisT3qm7-7B-GGUF](https://huggingface.co/MaziyarPanahi/NeuralsynthesisT3qm7-7B-GGUF) contains GGUF format model files for [automerger/NeuralsynthesisT3qm7-7B](https://huggingface.co/automerger/NeuralsynthesisT3qm7-7B).

### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.

## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
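## Example: llama-cpp-python

As a concrete example of the llama-cpp-python route listed above, loading one of the GGUF files looks roughly like this. The quant filename below is hypothetical; check the repo's file list for the exact name.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./NeuralsynthesisT3qm7-7B.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers if built with GPU support; use 0 for CPU-only
)

out = llm("Q: What does a model merge combine?\nA:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```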
CluelessNovice/task_demo_metadata
CluelessNovice
2024-05-21T05:22:06Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:westlake-repl/SaProt_35M_AF2", "base_model:adapter:westlake-repl/SaProt_35M_AF2", "region:us" ]
null
2024-05-21T05:22:03Z
---
library_name: peft
base_model: westlake-repl/SaProt_35M_AF2
---

# Model Card for Model ID

This model is used for a demo task.<br><br>
The numeric label means:<br>
0: <br>

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
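## Loading sketch

Given the front matter, loading the adapter should follow the standard PEFT pattern. This is a sketch only: it assumes the demo task is sequence classification on top of the SaProt base, which the card does not state, so the Auto class and label count may need adjusting.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "westlake-repl/SaProt_35M_AF2"
# Assumption: a 2-label classification head; change num_labels to match the task.
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base, "CluelessNovice/task_demo_metadata")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```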
barazard/my_awesome_billsum_model
barazard
2024-05-21T05:20:14Z
105
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-21T05:15:15Z
--- license: apache-2.0 base_model: t5-small tags: - generated_from_trainer metrics: - rouge model-index: - name: my_awesome_billsum_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_billsum_model This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.5191 - Rouge1: 0.1475 - Rouge2: 0.0544 - Rougel: 0.1219 - Rougelsum: 0.1221 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 62 | 2.8191 | 0.1283 | 0.0403 | 0.1085 | 0.1085 | 19.0 | | No log | 2.0 | 124 | 2.5989 | 0.1404 | 0.0492 | 0.1175 | 0.1178 | 19.0 | | No log | 3.0 | 186 | 2.5364 | 0.1483 | 0.0554 | 0.123 | 0.1231 | 19.0 | | No log | 4.0 | 248 | 2.5191 | 0.1475 | 0.0544 | 0.1219 | 0.1221 | 19.0 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
jurieyel/Llama3-sqlcoder-8b-4bit-GGUF-q4_K_M
jurieyel
2024-05-21T05:17:59Z
17
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:defog/llama-3-sqlcoder-8b", "base_model:quantized:defog/llama-3-sqlcoder-8b", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-21T05:08:35Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf base_model: defog/llama-3-sqlcoder-8b --- # Uploaded model - **Developed by:** jurieyel - **License:** apache-2.0 - **Finetuned from model :** defog/llama-3-sqlcoder-8b This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
issaccyj/lora-sdxl-cat1
issaccyj
2024-05-21T05:08:54Z
4
1
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-21T04:54:55Z
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'a sbu cat in a bucket'
  output:
    url: "image_0.png"
- text: 'a sbu cat in a bucket'
  output:
    url: "image_1.png"
- text: 'a sbu cat in a bucket'
  output:
    url: "image_2.png"
- text: 'a sbu cat in a bucket'
  output:
    url: "image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a sbu cat
license: openrail++
---

# SDXL LoRA DreamBooth - issaccyj/lora-sdxl-cat1

<Gallery />

## Model description

These are issaccyj/lora-sdxl-cat1 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.

The weights were trained using [DreamBooth](https://dreambooth.github.io/).

LoRA for the text encoder was enabled: False.

Special VAE used for training: None.

## Trigger words

You should use `a sbu cat` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](https://huggingface.co/issaccyj/lora-sdxl-cat1/tree/main) them in the Files & versions tab.
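## Example usage (Diffusers)

For inference, loading the SDXL base pipeline and attaching these LoRA weights should look like this. A minimal sketch, assuming a CUDA GPU and fp16 weights.

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("issaccyj/lora-sdxl-cat1")

# "a sbu cat" is the trigger phrase from the card above.
image = pipe("a sbu cat in a bucket", num_inference_steps=30).images[0]
image.save("sbu_cat.png")
```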
lewy666/results
lewy666
2024-05-21T05:07:02Z
181
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-21T05:06:35Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
wahid028/llama3-FT-alpaca-unsloth
wahid028
2024-05-21T05:04:58Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-21T04:41:04Z
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
taddeusb90/finbro-v0.1.0-llama-3-8B-instruct-1m
taddeusb90
2024-05-21T05:04:10Z
6
1
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "finance", "conversational", "en", "dataset:taddeusb90/finbro-v0.1.0", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-06T11:28:29Z
---
license: llama3
datasets:
- taddeusb90/finbro-v0.1.0
language:
- en
library_name: transformers
tags:
- finance
---

Finbro v0.1.0 Llama 3 8B model with a 1-million-token context window
======================

Model Description
-----------------

The Finbro Llama 3 8B model is a language model optimized for financial applications. It aims to enhance financial analysis, automate data extraction, and improve financial literacy across various levels of user expertise, and it uses a massive 1-million-token context window. This is just a sneak peek at what's coming: future releases will follow periodically, each improving on the last.

![FinBro](https://huggingface.co/taddeusb90/finbro-v0.1.0-llama-3-8B-instruct-1m-POSE/resolve/main/437210082_369067905507560_2052449041654631065_n.png)

Training
-----------------

The model is still training; I will share new incremental releases while it improves so you have time to play around with it.

![Loss](https://huggingface.co/taddeusb90/finbro-v0.1.0-llama-3-8B-instruct-1m-POSE/resolve/main/W%26B%20Chart%2006_05_2024%2C%2015_57_42.png)
![Evaluation Loss](https://huggingface.co/taddeusb90/finbro-v0.1.0-llama-3-8B-instruct-1m-POSE/resolve/main/W%26B%20Chart%2006_05_2024%2C%2015_58_01.png)

What's Next?
-----------

* **Extended Capability:** Continue training the 8B model, which has not yet converged (I have only scratched the surface here), then scale up to a 70B model for deeper insights and broader financial applications.
* **Dataset Expansion:** Continuous enhancement by integrating more diverse and comprehensive real and synthetic financial data.
* **Advanced Financial Analysis:** Future versions will support complex financial decision-making processes by interpreting and analyzing financial data within agentive workflows.
* **Incremental Improvements:** Regular updates to increase the model's efficiency and accuracy and extend its capabilities in financial tasks.

Model Applications
------------------

* **Information Extraction:** Automates the extraction of valuable data from unstructured financial documents.
* **Financial Literacy:** Provides explanations of financial documents at various levels, making financial knowledge more accessible.

How to Use
----------

Here is how to load and use the model in your Python projects:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "taddeusb90/finbro-v0.1.0-llama-3-8B-instruct-1m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

text = "Your financial query here"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(inputs['input_ids'])
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Training Data
-------------

The Finbro Llama 3 8B model was trained on the Finbro dataset, an extensive compilation of over 300,000 entries sourced from Investopedia and Sujet Finance. The dataset includes structured Q&A pairs, financial reports, and a variety of financial tasks pooled from multiple datasets.

The dataset can be found [here](https://huggingface.co/datasets/taddeusb90/finbro-v0.1.0).

This dataset will be extended to contain real and synthetic data on a wide range of financial tasks such as:
- Investment valuation
- Value investing
- Security analysis
- Derivatives
- Asset and portfolio management
- Financial information extraction
- Quantitative finance
- Econometrics
- Applied computer science in finance

and much more.

Notice
--------

You are advised to implement your own alignment layer and guard rails before exposing the model as a service or using it in production. Please exercise caution and use it at your own risk. I assume no responsibility for any losses incurred through its use.

Licensing
---------

This model is released under the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/blob/main/LICENSE).

Citation
--------

If you use this model in your research, please cite it as follows:

```bibtex
@misc{
 finbro_v0.1.0-llama-3-8B-1m,
 author = {Taddeus Buica},
 title = {Finbro Llama 3 8B Model for Financial Analysis},
 year = {2024},
 journal = {Hugging Face repository},
 howpublished = {\url{https://huggingface.co/taddeusb90/finbro-v0.1.0-llama-3-8B-instruct-1m}}
}
```

Special thanks to the folks from AI@Meta for powering this project with their awesome models.

Contact
--------

If you would like to connect, share ideas or feedback, help support bigger models, or develop your own custom finance model on your private dataset, let's talk on [LinkedIn](https://www.linkedin.com/in/taddeus-buica-1009a965/).

References
--------

[[1](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)] Llama 3 Model Card by AI@Meta, 2024
[[2](https://huggingface.co/datasets/sujet-ai/Sujet-Finance-Instruct-177k)] Sujet Finance Dataset
[[3](https://huggingface.co/datasets/FinLang/investopedia-instruction-tuning-dataset)] Dataset Card for investopedia-instruction-tuning
cminja/lora_adapters_llama-3-8b-bnb-4bit
cminja
2024-05-21T05:04:03Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:llama3", "endpoints_compatible", "region:us" ]
null
2024-05-20T14:40:36Z
---
language:
- en
license: llama3
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---

# This repo doesn't contain the full model, ONLY the LoRA adapter for the base model

- **Developed by:** cminja
- **License:** llama3 community license
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit, **on a custom dataset**
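Since only the adapter is published here, inference requires attaching it to the 4-bit base model first. A minimal sketch, assuming the standard PEFT loading path and an installed `bitsandbytes`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/llama-3-8b-bnb-4bit"
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "cminja/lora_adapters_llama-3-8b-bnb-4bit")
tokenizer = AutoTokenizer.from_pretrained(base_id)

inputs = tokenizer("Hello, ", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```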
nadellaroshni/reformer_model
nadellaroshni
2024-05-21T05:03:27Z
91
0
transformers
[ "transformers", "safetensors", "reformer", "text-classification", "generated_from_trainer", "base_model:google/reformer-crime-and-punishment", "base_model:finetune:google/reformer-crime-and-punishment", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-21T03:51:42Z
--- base_model: google/reformer-crime-and-punishment tags: - generated_from_trainer metrics: - accuracy model-index: - name: reformer_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # reformer_model This model is a fine-tuned version of [google/reformer-crime-and-punishment](https://huggingface.co/google/reformer-crime-and-punishment) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6693 - Accuracy: 0.561 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6841 | 1.0 | 625 | 0.6725 | 0.559 | | 0.6789 | 2.0 | 1250 | 0.6693 | 0.561 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.3.0+cpu - Datasets 2.19.1 - Tokenizers 0.19.1
damgomz/ft_bs16_1lr6_base_x8
damgomz
2024-05-21T05:01:49Z
107
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "fill-mask", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-05-20T21:03:13Z
--- language: en tags: - fill-mask kwargs: timestamp: '2024-05-21T06:49:36' project_name: ft_bs16_1lr6_base_x8_emissions_tracker run_id: e8f58681-b571-4e35-a345-b5c77f7b4a7e duration: 29334.1406750679 emissions: 0.0191897816027374 emissions_rate: 6.541790951131397e-07 cpu_power: 42.5 gpu_power: 0.0 ram_power: 7.5 cpu_energy: 0.3463053195810985 gpu_energy: 0 ram_energy: 0.0611123913536468 energy_consumed: 0.4074177109347459 country_name: Switzerland country_iso_code: CHE region: .nan cloud_provider: .nan cloud_region: .nan os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34 python_version: 3.10.4 codecarbon_version: 2.3.4 cpu_count: 3 cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz gpu_count: .nan gpu_model: .nan longitude: .nan latitude: .nan ram_total_size: 20 tracking_mode: machine on_cloud: N pue: 1.0 --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 29334.1406750679 | | Emissions (Co2eq in kg) | 0.0191897816027374 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 7.5 | | CPU energy (kWh) | 0.3463053195810985 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0611123913536468 | | Consumed energy (kWh) | 0.4074177109347459 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 3 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.05646822079950571 | | Emissions (Co2eq in kg) | 0.011489205097734928 | ## Note 20 May 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_bs16_1lr6_base_x8 | | sequence_length | 400 | | num_epoch | 20 | | learning_rate | 1e-06 | | batch_size | 16 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 108600 | ## Training and Testing steps Epoch | Train Loss | Test Loss | Accuracy | Recall ---|---|---|---|--- | 0 | 0.523328 | 0.476205 | 0.776141 | 0.906442 | | 1 | 0.431474 | 0.413639 | 0.813697 | 0.866564 | | 2 | 0.384488 | 0.402209 | 0.823270 | 0.897239 | | 3 | 0.353091 | 0.387237 | 0.822533 | 0.800613 | | 4 | 0.328723 | 0.390632 | 0.836524 | 0.918712 | | 5 | 0.314824 | 0.373720 | 0.835052 | 0.848160 | | 6 | 0.299005 | 0.389435 | 0.810751 | 0.750000 | | 7 | 0.289835 | 0.386018 | 0.835052 | 0.860429 | | 8 | 0.273817 | 0.388888 | 0.829897 | 0.814417 | | 9 | 0.257712 | 0.386943 | 0.837997 | 0.871166 | | 10 | 0.236881 | 0.410112 | 0.832842 | 0.855828 | | 11 | 0.218910 | 0.429738 | 0.820324 | 0.837423 | | 12 | 0.207044 | 0.461636 | 0.832106 | 0.891104 | | 13 | 0.192752 | 0.454077 | 0.817378 | 0.828221 | | 14 | 0.167404 | 0.477347 | 0.802651 | 0.754601 | | 15 | 0.146702 | 0.511787 | 0.810751 | 0.875767 | | 16 | 0.134885 | 0.540342 | 0.814433 | 0.858896 | | 17 | 0.118554 | 0.552969 | 0.807069 | 0.802147 | | 18 | 0.105443 | 0.596917 | 0.805596 | 0.803681 | | 19 | 0.085186 | 0.638636 | 0.796024 | 0.757669 |
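## Sanity-checking the energy figures

The energy figures follow from a constant power draw over the run duration, so the reported numbers can be reproduced directly. A quick check on the CPU figure (the same relation holds for the 7.5 W RAM figure, up to CodeCarbon's sampling granularity):

```python
duration_s = 29334.1406750679  # from the table above
cpu_power_w = 42.5

cpu_energy_kwh = cpu_power_w * duration_s / 3.6e6  # W * s -> kWh
print(cpu_energy_kwh)  # ~0.34631, matching the reported CPU energy of 0.3463 kWh
```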
rhaymison/portuguese-tom-cat-13b
rhaymison
2024-05-21T05:01:47Z
10
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "portugues", "portuguese", "QA", "instruct", "phi", "conversational", "pt", "dataset:rhaymison/superset", "base_model:meta-llama/Llama-2-13b", "base_model:finetune:meta-llama/Llama-2-13b", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-19T19:08:17Z
--- language: - pt license: apache-2.0 library_name: transformers tags: - portugues - portuguese - QA - instruct - phi base_model: meta-llama/Llama-2-13b datasets: - rhaymison/superset pipeline_tag: text-generation model-index: - name: portuguese-tom-cat-13b results: - task: type: text-generation name: Text Generation dataset: name: ENEM Challenge (No Images) type: eduagarcia/enem_challenge split: train args: num_few_shot: 3 metrics: - type: acc value: 42.76 name: accuracy source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/portuguese-tom-cat-13b name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BLUEX (No Images) type: eduagarcia-temp/BLUEX_without_images split: train args: num_few_shot: 3 metrics: - type: acc value: 45.62 name: accuracy source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/portuguese-tom-cat-13b name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: OAB Exams type: eduagarcia/oab_exams split: train args: num_few_shot: 3 metrics: - type: acc value: 39.09 name: accuracy source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/portuguese-tom-cat-13b name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Assin2 RTE type: assin2 split: test args: num_few_shot: 15 metrics: - type: f1_macro value: 77.41 name: f1-macro source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/portuguese-tom-cat-13b name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Assin2 STS type: eduagarcia/portuguese_benchmark split: test args: num_few_shot: 15 metrics: - type: pearson value: 58.44 name: pearson source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/portuguese-tom-cat-13b name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: FaQuAD NLI type: ruanchaves/faquad-nli split: test args: num_few_shot: 15 metrics: - type: f1_macro value: 68.14 name: f1-macro source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/portuguese-tom-cat-13b name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HateBR Binary type: ruanchaves/hatebr split: test args: num_few_shot: 25 metrics: - type: f1_macro value: 84.13 name: f1-macro source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/portuguese-tom-cat-13b name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: PT Hate Speech Binary type: hate_speech_portuguese split: test args: num_few_shot: 25 metrics: - type: f1_macro value: 56.27 name: f1-macro source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/portuguese-tom-cat-13b name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: tweetSentBR type: eduagarcia/tweetsentbr_fewshot split: test args: num_few_shot: 25 metrics: - type: f1_macro value: 48.86 name: f1-macro source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/portuguese-tom-cat-13b name: Open Portuguese LLM Leaderboard --- # portuguese-tom-cat-13b <p align="center"> <img 
src="https://raw.githubusercontent.com/rhaymisonbetini/huggphotos/main/13b.webp" width="50%" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
</p>

This model was trained on a superset of 300,000 instructions in Portuguese and helps fill the gap in Portuguese-language models. It was tuned from Llama-2-13b.

# How to use

### FULL MODEL : A100
### HALF MODEL: L4
### 8bit or 4bit : T4 or V100

You can use the model anywhere from full precision down to 4-bit quantization; both approaches are shown below. Remember that verbs matter in your prompt: tell the model how to act or behave so you can guide its response. Details like these help the model perform much better.

```python
!pip install -q -U transformers
!pip install -q -U accelerate
!pip install -q -U bitsandbytes

from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model = AutoModelForCausalLM.from_pretrained("rhaymison/portuguese-tom-cat-13b", device_map={"": 0})
tokenizer = AutoTokenizer.from_pretrained("rhaymison/portuguese-tom-cat-13b")
model.eval()
```

You can use it with a Pipeline.

```python
from transformers import pipeline

pipe = pipeline("text-generation",
                model=model,
                tokenizer=tokenizer,
                do_sample=True,
                max_new_tokens=512,
                num_beams=2,
                temperature=0.3,
                top_k=50,
                top_p=0.95,
                early_stopping=True,
                pad_token_id=tokenizer.eos_token_id)

def format_question(input: str) -> str:
    base_instruction = """Abaixo está uma instrução que descreve uma tarefa, juntamente com uma entrada que fornece mais contexto. Escreva uma resposta que complete adequadamente o pedido."""
    _input = f"""<s>[INST] <<SYS>>
{base_instruction}
<</SYS>>
{input} [/INST] """
    return _input.strip()

prompt = "Me explique sobre os romanos"
pipe(format_question(prompt))
```

```text
Os romanos foram um povo que viveu na Itália antiga, entre o século VIII a.C. e o século V d.C. Eles eram conhecidos por sua habilidade em construir estradas, edifícios e aquedutos, e também por suas conquistas militares. O Império Romano, que durou de 27 a.C. a 476 d.C., foi o maior império da história, abrangendo uma área que ia da Grécia até a Inglaterra. Os romanos também desenvolveram um sistema de leis e instituições políticas que influenciaram profundamente a cultura ocidental.
```

If you run into a memory problem such as "CUDA out of memory", use 4-bit or 8-bit quantization. The full-precision model needs an A100 in Colab; with 4-bit or 8-bit quantization, a T4 or L4 already solves the problem.
# 4bits example

```python
from transformers import BitsAndBytesConfig
import torch

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    "rhaymison/portuguese-tom-cat-13b",
    quantization_config=bnb_config,
    device_map={"": 0},
)
```

# Open Portuguese LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/rhaymison/portuguese-tom-cat-13b) and on the [🚀 Open Portuguese LLM Leaderboard](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard)

| Metric | Value |
|--------------------------|---------|
|Average |**57.86**|
|ENEM Challenge (No Images)| 42.76|
|BLUEX (No Images) | 45.62|
|OAB Exams | 39.09|
|Assin2 RTE | 77.41|
|Assin2 STS | 58.44|
|FaQuAD NLI | 68.14|
|HateBR Binary | 84.13|
|PT Hate Speech Binary | 56.27|
|tweetSentBR | 48.86|

### Comments

Any ideas, help, or reports are always welcome.

email: [email protected]

<div style="display:flex; flex-direction:row; justify-content:left">
<a href="https://www.linkedin.com/in/rhaymison-cristian-betini-2b3016175/" target="_blank">
<img src="https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white">
</a>
<a href="https://github.com/rhaymisonbetini" target="_blank">
<img src="https://img.shields.io/badge/GitHub-100000?style=for-the-badge&logo=github&logoColor=white">
</a>
</div>
linzw/PASTED-Lexical
linzw
2024-05-21T04:58:58Z
96
0
transformers
[ "transformers", "safetensors", "longformer", "token-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-05-20T14:36:19Z
--- license: apache-2.0 ---
damgomz/ft_bs32_1lr6_base_x8
damgomz
2024-05-21T04:55:30Z
106
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "fill-mask", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-05-20T21:05:10Z
--- language: en tags: - fill-mask kwargs: timestamp: '2024-05-21T06:55:27' project_name: ft_bs32_1lr6_base_x8_emissions_tracker run_id: 1ff5aee1-6c8d-4e7a-951c-dc91ed582d85 duration: 29669.0817964077 emissions: 0.0194088841146787 emissions_rate: 6.541787928546118e-07 cpu_power: 42.5 gpu_power: 0.0 ram_power: 7.5 cpu_energy: 0.3502593684590531 gpu_energy: 0 ram_energy: 0.0618101017152269 energy_consumed: 0.4120694701742792 country_name: Switzerland country_iso_code: CHE region: .nan cloud_provider: .nan cloud_region: .nan os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34 python_version: 3.10.4 codecarbon_version: 2.3.4 cpu_count: 3 cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz gpu_count: .nan gpu_model: .nan longitude: .nan latitude: .nan ram_total_size: 20 tracking_mode: machine on_cloud: N pue: 1.0 --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 29669.0817964077 | | Emissions (Co2eq in kg) | 0.0194088841146787 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 7.5 | | CPU energy (kWh) | 0.3502593684590531 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0618101017152269 | | Consumed energy (kWh) | 0.4120694701742792 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 3 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.05711298245808482 | | Emissions (Co2eq in kg) | 0.011620390370259682 | ## Note 20 May 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_bs32_1lr6_base_x8 | | sequence_length | 400 | | num_epoch | 20 | | learning_rate | 1e-06 | | batch_size | 32 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 108600 | ## Training and Testing steps Epoch | Train Loss | Test Loss | Accuracy | Recall ---|---|---|---|--- | 0 | 0.563129 | 0.502131 | 0.748159 | 0.777607 | | 1 | 0.467373 | 0.454147 | 0.782769 | 0.842025 | | 2 | 0.418498 | 0.435779 | 0.794551 | 0.907975 | | 3 | 0.379170 | 0.403679 | 0.811487 | 0.895706 | | 4 | 0.358712 | 0.382256 | 0.827688 | 0.858896 | | 5 | 0.340088 | 0.380777 | 0.834315 | 0.880368 | | 6 | 0.326862 | 0.395078 | 0.823270 | 0.897239 | | 7 | 0.314514 | 0.419026 | 0.816642 | 0.929448 | | 8 | 0.302010 | 0.378412 | 0.832842 | 0.834356 | | 9 | 0.293725 | 0.385449 | 0.824006 | 0.797546 | | 10 | 0.286153 | 0.380928 | 0.835052 | 0.874233 | | 11 | 0.267783 | 0.388242 | 0.836524 | 0.877301 | | 12 | 0.255809 | 0.398119 | 0.830633 | 0.831288 | | 13 | 0.245926 | 0.413752 | 0.819588 | 0.797546 | | 14 | 0.236472 | 0.416892 | 0.815906 | 0.794479 | | 15 | 0.223494 | 0.431361 | 0.830633 | 0.872699 | | 16 | 0.207387 | 0.438017 | 0.815169 | 0.808282 | | 17 | 0.198799 | 0.445411 | 0.819588 | 0.819018 | | 18 | 0.182488 | 0.460939 | 0.821060 | 0.837423 | | 19 | 0.168675 | 0.513154 | 0.817378 | 0.900307 |
damgomz/ft_bs32_lr7_base_x8
damgomz
2024-05-21T04:51:30Z
106
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "fill-mask", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-05-20T21:06:02Z
--- language: en tags: - fill-mask kwargs: timestamp: '2024-05-21T06:51:24' project_name: ft_bs32_lr7_base_x8_emissions_tracker run_id: d86ab92c-42c5-46ae-a58f-cb705b0a7a8b duration: 29443.913482666016 emissions: 0.0192615910805578 emissions_rate: 6.54179040836314e-07 cpu_power: 42.5 gpu_power: 0.0 ram_power: 7.5 cpu_energy: 0.3476012287669722 gpu_energy: 0 ram_energy: 0.0613410671621561 energy_consumed: 0.4089422959291282 country_name: Switzerland country_iso_code: CHE region: .nan cloud_provider: .nan cloud_region: .nan os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34 python_version: 3.10.4 codecarbon_version: 2.3.4 cpu_count: 3 cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz gpu_count: .nan gpu_model: .nan longitude: .nan latitude: .nan ram_total_size: 20 tracking_mode: machine on_cloud: N pue: 1.0 --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 29443.913482666016 | | Emissions (Co2eq in kg) | 0.0192615910805578 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 7.5 | | CPU energy (kWh) | 0.3476012287669722 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0613410671621561 | | Consumed energy (kWh) | 0.4089422959291282 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 3 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.056679533454132076 | | Emissions (Co2eq in kg) | 0.011532199447377522 | ## Note 20 May 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_bs32_lr7_base_x8 | | sequence_length | 400 | | num_epoch | 20 | | learning_rate | 5e-07 | | batch_size | 32 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 108600 | ## Training and Testing steps Epoch | Train Loss | Test Loss | Accuracy | Recall ---|---|---|---|--- | 0 | 0.599385 | 0.533520 | 0.732695 | 0.743865 | | 1 | 0.497255 | 0.495337 | 0.756996 | 0.874233 | | 2 | 0.456973 | 0.457591 | 0.777614 | 0.812883 | | 3 | 0.428078 | 0.435462 | 0.792342 | 0.811350 | | 4 | 0.405985 | 0.418146 | 0.806333 | 0.865031 | | 5 | 0.386763 | 0.402823 | 0.818851 | 0.852761 | | 6 | 0.370968 | 0.398841 | 0.818115 | 0.819018 | | 7 | 0.361504 | 0.389461 | 0.822533 | 0.865031 | | 8 | 0.348315 | 0.386434 | 0.828424 | 0.881902 | | 9 | 0.339924 | 0.381690 | 0.829897 | 0.820552 | | 10 | 0.333508 | 0.379336 | 0.829161 | 0.869632 | | 11 | 0.327714 | 0.375907 | 0.831370 | 0.860429 | | 12 | 0.319972 | 0.372091 | 0.835052 | 0.861963 | | 13 | 0.311965 | 0.373268 | 0.833579 | 0.829755 | | 14 | 0.307354 | 0.374971 | 0.834315 | 0.835890 | | 15 | 0.303944 | 0.373268 | 0.835052 | 0.874233 | | 16 | 0.297742 | 0.387149 | 0.831370 | 0.906442 | | 17 | 0.288179 | 0.376481 | 0.837997 | 0.878834 | | 18 | 0.284836 | 0.380563 | 0.834315 | 0.892638 | | 19 | 0.279182 | 0.376233 | 0.835788 | 0.843558 |
sidddddddddddd/llama-3-8b-Instruct-bnb-4bit-aiaustin-demo
sidddddddddddd
2024-05-21T04:49:33Z
7
0
transformers
[ "transformers", "pytorch", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-20T09:58:47Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf base_model: unsloth/llama-3-8b-Instruct-bnb-4bit --- # Uploaded model - **Developed by:** sidddddddddddd - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
PhillipGuo/hp-lat-llama-genericized_diff_hp_indices-epsilon0.1-pgd_layer10-def_layer0-harmless-2
PhillipGuo
2024-05-21T04:38:47Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-21T04:38:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RomBor/ppo-SnowballTarget
RomBor
2024-05-21T04:38:00Z
1
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2024-05-21T04:37:56Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: RomBor/ppo-SnowballTarget 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-8_0bpw_exl2
Zoyd
2024-05-21T04:35:51Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "arxiv:2405.03548", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "exl2", "region:us" ]
text-generation
2024-05-21T04:29:28Z
--- license: mit language: - en --- **Exllamav2** quant (**exl2** / **8.0 bpw**) made with ExLlamaV2 v0.0.21 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-2_2bpw_exl2)**</center> | <center>2200 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-2_5bpw_exl2)**</center> | <center>2429 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-3_0bpw_exl2)**</center> | <center>2839 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-3_5bpw_exl2)**</center> | <center>3261 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-3_75bpw_exl2)**</center> | <center>3469 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-4_0bpw_exl2)**</center> | <center>3675 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-4_25bpw_exl2)**</center> | <center>3883 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-5_0bpw_exl2)**</center> | <center>4504 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-6_0bpw_exl2)**</center> | <center>5359 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-6_5bpw_exl2)**</center> | <center>5778 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-8_0bpw_exl2)**</center> | <center>6851 MB</center> | <center>8</center> | # 🦣 MAmmoTH2: Scaling Instructions from the Web Project Page: [https://tiger-ai-lab.github.io/MAmmoTH2/](https://tiger-ai-lab.github.io/MAmmoTH2/) Paper: [https://arxiv.org/pdf/2405.03548](https://arxiv.org/pdf/2405.03548) Code: [https://github.com/TIGER-AI-Lab/MAmmoTH2](https://github.com/TIGER-AI-Lab/MAmmoTH2) ## Introduction Introducing 🦣 MAmmoTH2, a game-changer in improving the reasoning abilities of large language models (LLMs) through innovative instruction tuning. By efficiently harvesting 10 million instruction-response pairs from the pre-training web corpus, we've developed MAmmoTH2 models that significantly boost performance on reasoning benchmarks. For instance, MAmmoTH2-7B (Mistral) sees its performance soar from 11% to 34% on MATH and from 36% to 67% on GSM8K, all without training on any domain-specific data. Further training on public instruction tuning datasets yields MAmmoTH2-Plus, setting new standards in reasoning and chatbot benchmarks. Our work presents a cost-effective approach to acquiring large-scale, high-quality instruction data, offering a fresh perspective on enhancing LLM reasoning abilities. 
| | **Base Model** | **MAmmoTH2** | **MAmmoTH2-Plus** | |:-----|:---------------------|:-------------------------------------------------------------------|:------------------------------------------------------------------| | 7B | Mistral | 🦣 [MAmmoTH2-7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B) | 🦣 [MAmmoTH2-7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B-Plus) | | 8B | Llama-3 | 🦣 [MAmmoTH2-8B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B) | 🦣 [MAmmoTH2-8B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B-Plus) | | 8x7B | Mixtral | 🦣 [MAmmoTH2-8x7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B) | 🦣 [MAmmoTH2-8x7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B-Plus) | ## Training Data Please refer to https://huggingface.co/datasets/TIGER-Lab/WebInstructSub for more details. ![Project Framework](webinstruct.png) ## Training Procedure The models are fine-tuned on the WEBINSTRUCT dataset using the original Llama-3, Mistral, and Mixtral models as base models. The training procedure varies for different models based on their sizes. Check out our paper for more details. ## Evaluation The models are evaluated using open-ended and multiple-choice math problems from several datasets. Here are the results: | **Model** | **TheoremQA** | **MATH** | **GSM8K** | **GPQA** | **MMLU-ST** | **BBH** | **ARC-C** | **Avg** | |:---------------------------------------|:--------------|:---------|:----------|:---------|:------------|:--------|:----------|:--------| | **MAmmoTH2-7B** (Updated) | 29.0 | 36.7 | 68.4 | 32.4 | 62.4 | 58.6 | 81.7 | 52.7 | | **MAmmoTH2-8B** (Updated) | 30.3 | 35.8 | 70.4 | 35.2 | 64.2 | 62.1 | 82.2 | 54.3 | | **MAmmoTH2-8x7B** | 32.2 | 39.0 | 75.4 | 36.8 | 67.4 | 71.1 | 87.5 | 58.9 | | **MAmmoTH2-7B-Plus** (Updated) | 31.2 | 46.0 | 84.6 | 33.8 | 63.8 | 63.3 | 84.4 | 58.1 | | **MAmmoTH2-8B-Plus** (Updated) | 31.5 | 43.0 | 85.2 | 35.8 | 66.7 | 69.7 | 84.3 | 59.4 | | **MAmmoTH2-8x7B-Plus** | 34.1 | 47.0 | 86.4 | 37.8 | 72.4 | 74.1 | 88.4 | 62.9 | To reproduce our results, please refer to https://github.com/TIGER-AI-Lab/MAmmoTH2/tree/main/math_eval. ## Usage You can use the models through Hugging Face's Transformers library. Use the pipeline function to create a text-generation pipeline with the model of your choice, then feed in a math problem to get the solution. Check our GitHub repo for more advanced usage: https://github.com/TIGER-AI-Lab/MAmmoTH2 ## Limitations We've tried our best to build math generalist models. However, we acknowledge that the models' performance may vary with the complexity and specifics of a given math problem, and not all mathematical fields can be covered comprehensively. ## Citation If you use the models, data, or code from this project, please cite the original paper: ``` @article{yue2024mammoth2, title={MAmmoTH2: Scaling Instructions from the Web}, author={Yue, Xiang and Zheng, Tuney and Zhang, Ge and Chen, Wenhu}, journal={arXiv preprint arXiv:2405.03548}, year={2024} } ```
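The Usage section above describes pipeline-based inference only in prose; a minimal sketch of that flow follows. It points at the original TIGER-Lab/MAmmoTH2-7B-Plus checkpoint, since the EXL2-quantized weights in this repo are not loaded by plain Transformers; the prompt and generation length are illustrative.

```python
from transformers import pipeline

# Text-generation pipeline over the original (unquantized) checkpoint,
# as the card's Usage section describes.
pipe = pipeline(
    "text-generation",
    model="TIGER-Lab/MAmmoTH2-7B-Plus",
    device_map="auto",
)

# Feed in a math problem and print the completion.
out = pipe("What is the sum of the first 50 positive odd integers?", max_new_tokens=256)
print(out[0]["generated_text"])
```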
damgomz/ft_bs64_lr7_base_x8
damgomz
2024-05-21T04:35:45Z
108
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "fill-mask", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-05-20T20:59:35Z
--- language: en tags: - fill-mask kwargs: timestamp: '2024-05-21T06:35:39' project_name: ft_bs64_lr7_base_x8_emissions_tracker run_id: 0945d8ee-6ebc-49db-aee6-bd90d1f4b2bb duration: 28945.63649892807 emissions: 0.0189356286432173 emissions_rate: 6.541790381399498e-07 cpu_power: 42.5 gpu_power: 0.0 ram_power: 7.5 cpu_energy: 0.341718781052364 gpu_energy: 0 ram_energy: 0.0603030155807732 energy_consumed: 0.4020217966331371 country_name: Switzerland country_iso_code: CHE region: .nan cloud_provider: .nan cloud_region: .nan os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34 python_version: 3.10.4 codecarbon_version: 2.3.4 cpu_count: 3 cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz gpu_count: .nan gpu_model: .nan longitude: .nan latitude: .nan ram_total_size: 20 tracking_mode: machine on_cloud: N pue: 1.0 --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 28945.63649892807 | | Emissions (Co2eq in kg) | 0.0189356286432173 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 7.5 | | CPU energy (kWh) | 0.341718781052364 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0603030155807732 | | Consumed energy (kWh) | 0.4020217966331371 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 3 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.05572035026043654 | | Emissions (Co2eq in kg) | 0.011337040962080162 | ## Note 20 May 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_bs64_lr7_base_x8 | | sequence_length | 400 | | num_epoch | 20 | | learning_rate | 5e-07 | | batch_size | 64 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 108600 | ## Training and Testing steps Epoch | Train Loss | Test Loss | Accuracy | Recall ---|---|---|---|--- | 0 | 0.605284 | 0.559398 | 0.727541 | 0.823620 | | 1 | 0.521418 | 0.518530 | 0.740795 | 0.834356 | | 2 | 0.487186 | 0.492726 | 0.758468 | 0.835890 | | 3 | 0.459882 | 0.469925 | 0.776141 | 0.832822 | | 4 | 0.436708 | 0.451493 | 0.786451 | 0.848160 | | 5 | 0.414721 | 0.432541 | 0.798233 | 0.815951 | | 6 | 0.395287 | 0.419788 | 0.806333 | 0.819018 | | 7 | 0.381209 | 0.413589 | 0.805596 | 0.868098 | | 8 | 0.371399 | 0.402106 | 0.821060 | 0.858896 | | 9 | 0.362982 | 0.403256 | 0.814433 | 0.878834 | | 10 | 0.353726 | 0.393290 | 0.826215 | 0.848160 | | 11 | 0.346824 | 0.389223 | 0.826215 | 0.852761 | | 12 | 0.341413 | 0.385427 | 0.829161 | 0.846626 | | 13 | 0.339145 | 0.385045 | 0.830633 | 0.835890 | | 14 | 0.329240 | 0.386728 | 0.826215 | 0.874233 | | 15 | 0.325913 | 0.383079 | 0.834315 | 0.825153 | | 16 | 0.319987 | 0.381838 | 0.831370 | 0.840491 | | 17 | 0.314436 | 0.383904 | 0.834315 | 0.871166 | | 18 | 0.309196 | 0.386049 | 0.833579 | 0.881902 | | 19 | 0.309234 | 0.411747 | 0.818115 | 0.923313 |
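For context on how the emissions figures in cards like the one above are typically produced, here is a minimal CodeCarbon sketch; the project name is taken from the card's metadata, while the training function is a placeholder for the actual fine-tuning loop.

```python
from codecarbon import EmissionsTracker

def train():
    # Placeholder for the fine-tuning loop this card reports on.
    pass

tracker = EmissionsTracker(project_name="ft_bs64_lr7_base_x8_emissions_tracker")
tracker.start()
try:
    train()
finally:
    emissions_kg = tracker.stop()  # estimated CO2eq in kg

print(f"Estimated emissions: {emissions_kg} kg CO2eq")
```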
HariprasathSB/whisper-tamil-vulnerablee
HariprasathSB
2024-05-21T04:33:57Z
14
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:HariprasathSB/whisper-tamil-vulnerable", "base_model:finetune:HariprasathSB/whisper-tamil-vulnerable", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-20T20:21:03Z
--- license: apache-2.0 base_model: HariprasathSB/whisper-tamil-vulnerable tags: - generated_from_trainer metrics: - wer model-index: - name: whisper-tamil-vulnerablee results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tamil-vulnerablee This model is a fine-tuned version of [HariprasathSB/whisper-tamil-vulnerable](https://huggingface.co/HariprasathSB/whisper-tamil-vulnerable) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 1.1757 - Wer: 76.6682 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - training_steps: 1000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:-------:| | 0.0216 | 1.7544 | 200 | 1.0816 | 78.3139 | | 0.0191 | 3.5088 | 400 | 1.0777 | 79.3327 | | 0.0069 | 5.2632 | 600 | 1.1236 | 77.1048 | | 0.003 | 7.0175 | 800 | 1.1772 | 78.3699 | | 0.0004 | 8.7719 | 1000 | 1.1757 | 76.6682 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
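The card above leaves usage unspecified; below is a minimal inference sketch, assuming the checkpoint loads through the standard Transformers ASR pipeline ("sample.wav" is a placeholder audio file).

```python
from transformers import pipeline

# Automatic speech recognition with the fine-tuned Whisper checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="HariprasathSB/whisper-tamil-vulnerablee",
)

result = asr("sample.wav")  # path to a local audio file
print(result["text"])
```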
damgomz/ft_bs16_lr7_mlm
damgomz
2024-05-21T04:31:29Z
107
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "fill-mask", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-05-20T20:20:56Z
--- language: en tags: - fill-mask kwargs: timestamp: '2024-05-21T06:31:25' project_name: ft_bs16_lr7_mlm_emissions_tracker run_id: b6614c15-0b17-42cf-a4e3-7b88ff581e67 duration: 31470.129777431488 emissions: 0.0190430699900628 emissions_rate: 6.051157120972355e-07 cpu_power: 42.5 gpu_power: 0.0 ram_power: 3.75 cpu_energy: 0.3715217473053264 gpu_energy: 0 ram_energy: 0.0327811335265635 energy_consumed: 0.4043028808318904 country_name: Switzerland country_iso_code: CHE region: .nan cloud_provider: .nan cloud_region: .nan os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34 python_version: 3.10.4 codecarbon_version: 2.3.4 cpu_count: 2 cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz gpu_count: .nan gpu_model: .nan longitude: .nan latitude: .nan ram_total_size: 10 tracking_mode: machine on_cloud: N pue: 1.0 --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 31470.129777431488 | | Emissions (Co2eq in kg) | 0.0190430699900628 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 0.3715217473053264 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0327811335265635 | | Consumed energy (kWh) | 0.4043028808318904 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.06057999982155562 | | Emissions (Co2eq in kg) | 0.012325800829494 | ## Note 20 May 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | damgomz/ThunBERT_bs16_lr5_MLM | | model_name | ft_bs16_lr7_mlm | | sequence_length | 400 | | num_epoch | 15 | | learning_rate | 5e-07 | | batch_size | 16 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 81450 | ## Training and Testing steps Epoch | Train Loss | Test Loss | Accuracy | Recall ---|---|---|---|--- | 0 | 0.628769 | 0.554894 | 0.730486 | 0.842025 | | 1 | 0.510461 | 0.486829 | 0.763623 | 0.797546 | | 2 | 0.449970 | 0.445788 | 0.786451 | 0.888037 | | 3 | 0.410732 | 0.416862 | 0.807806 | 0.884969 | | 4 | 0.380523 | 0.396044 | 0.812960 | 0.872699 | | 5 | 0.359862 | 0.388476 | 0.820324 | 0.909509 | | 6 | 0.342461 | 0.369396 | 0.834315 | 0.874233 | | 7 | 0.330469 | 0.362060 | 0.840943 | 0.861963 | | 8 | 0.319533 | 0.359950 | 0.840943 | 0.889571 | | 9 | 0.310329 | 0.358102 | 0.843888 | 0.892638 | | 10 | 0.300148 | 0.363338 | 0.840206 | 0.904908 | | 11 | 0.291830 | 0.362882 | 0.830633 | 0.791411 | | 12 | 0.285529 | 0.354668 | 0.840206 | 0.849693 | | 13 | 0.277152 | 0.358292 | 0.837261 | 0.823620 | | 14 | 0.264916 | 0.364439 | 0.844624 | 0.897239 |
phuccodelo/violence_finetune-1111
phuccodelo
2024-05-21T04:30:25Z
61
0
transformers
[ "transformers", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base", "base_model:finetune:MCG-NJU/videomae-base", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2024-05-21T03:00:17Z
--- license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-base tags: - generated_from_trainer model-index: - name: violence_finetune-1111 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # violence_finetune-1111 This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.7299 - eval_accuracy: 0.5 - eval_runtime: 317.7414 - eval_samples_per_second: 0.434 - eval_steps_per_second: 0.217 - epoch: 0.03 - step: 8 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 306 ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
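As with the card above, usage is unspecified; here is a minimal sketch assuming the checkpoint works with the Transformers video-classification pipeline (a video backend such as decord or av must be installed; "clip.mp4" is a placeholder file).

```python
from transformers import pipeline

# Video classification with the fine-tuned VideoMAE checkpoint.
clf = pipeline(
    "video-classification",
    model="phuccodelo/violence_finetune-1111",
)

# Returns the top predicted labels with scores for the clip.
print(clf("clip.mp4"))
```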
Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-3_5bpw_exl2
Zoyd
2024-05-21T04:29:43Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "arxiv:2405.03548", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-05-21T03:36:07Z
--- license: mit language: - en --- **Exllamav2** quant (**exl2** / **3.5 bpw**) made with ExLlamaV2 v0.0.21 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-2_2bpw_exl2)**</center> | <center>2200 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-2_5bpw_exl2)**</center> | <center>2429 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-3_0bpw_exl2)**</center> | <center>2839 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-3_5bpw_exl2)**</center> | <center>3261 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-3_75bpw_exl2)**</center> | <center>3469 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-4_0bpw_exl2)**</center> | <center>3675 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-4_25bpw_exl2)**</center> | <center>3883 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-5_0bpw_exl2)**</center> | <center>4504 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-6_0bpw_exl2)**</center> | <center>5359 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-6_5bpw_exl2)**</center> | <center>5778 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-8_0bpw_exl2)**</center> | <center>6851 MB</center> | <center>8</center> | # 🦣 MAmmoTH2: Scaling Instructions from the Web Project Page: [https://tiger-ai-lab.github.io/MAmmoTH2/](https://tiger-ai-lab.github.io/MAmmoTH2/) Paper: [https://arxiv.org/pdf/2405.03548](https://arxiv.org/pdf/2405.03548) Code: [https://github.com/TIGER-AI-Lab/MAmmoTH2](https://github.com/TIGER-AI-Lab/MAmmoTH2) ## Introduction Introducing 🦣 MAmmoTH2, a game-changer in improving the reasoning abilities of large language models (LLMs) through innovative instruction tuning. By efficiently harvesting 10 million instruction-response pairs from the pre-training web corpus, we've developed MAmmoTH2 models that significantly boost performance on reasoning benchmarks. For instance, MAmmoTH2-7B (Mistral) sees its performance soar from 11% to 34% on MATH and from 36% to 67% on GSM8K, all without training on any domain-specific data. Further training on public instruction tuning datasets yields MAmmoTH2-Plus, setting new standards in reasoning and chatbot benchmarks. Our work presents a cost-effective approach to acquiring large-scale, high-quality instruction data, offering a fresh perspective on enhancing LLM reasoning abilities. 
| | **Base Model** | **MAmmoTH2** | **MAmmoTH2-Plus** | |:-----|:---------------------|:-------------------------------------------------------------------|:------------------------------------------------------------------| | 7B | Mistral | 🦣 [MAmmoTH2-7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B) | 🦣 [MAmmoTH2-7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B-Plus) | | 8B | Llama-3 | 🦣 [MAmmoTH2-8B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B) | 🦣 [MAmmoTH2-8B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B-Plus) | | 8x7B | Mixtral | 🦣 [MAmmoTH2-8x7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B) | 🦣 [MAmmoTH2-8x7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B-Plus) | ## Training Data Please refer to https://huggingface.co/datasets/TIGER-Lab/WebInstructSub for more details. ![Project Framework](webinstruct.png) ## Training Procedure The models are fine-tuned on the WEBINSTRUCT dataset using the original Llama-3, Mistral, and Mixtral models as base models. The training procedure varies for different models based on their sizes. Check out our paper for more details. ## Evaluation The models are evaluated using open-ended and multiple-choice math problems from several datasets. Here are the results: | **Model** | **TheoremQA** | **MATH** | **GSM8K** | **GPQA** | **MMLU-ST** | **BBH** | **ARC-C** | **Avg** | |:---------------------------------------|:--------------|:---------|:----------|:---------|:------------|:--------|:----------|:--------| | **MAmmoTH2-7B** (Updated) | 29.0 | 36.7 | 68.4 | 32.4 | 62.4 | 58.6 | 81.7 | 52.7 | | **MAmmoTH2-8B** (Updated) | 30.3 | 35.8 | 70.4 | 35.2 | 64.2 | 62.1 | 82.2 | 54.3 | | **MAmmoTH2-8x7B** | 32.2 | 39.0 | 75.4 | 36.8 | 67.4 | 71.1 | 87.5 | 58.9 | | **MAmmoTH2-7B-Plus** (Updated) | 31.2 | 46.0 | 84.6 | 33.8 | 63.8 | 63.3 | 84.4 | 58.1 | | **MAmmoTH2-8B-Plus** (Updated) | 31.5 | 43.0 | 85.2 | 35.8 | 66.7 | 69.7 | 84.3 | 59.4 | | **MAmmoTH2-8x7B-Plus** | 34.1 | 47.0 | 86.4 | 37.8 | 72.4 | 74.1 | 88.4 | 62.9 | To reproduce our results, please refer to https://github.com/TIGER-AI-Lab/MAmmoTH2/tree/main/math_eval. ## Usage You can use the models through Hugging Face's Transformers library. Use the pipeline function to create a text-generation pipeline with the model of your choice, then feed in a math problem to get the solution. Check our GitHub repo for more advanced usage: https://github.com/TIGER-AI-Lab/MAmmoTH2 ## Limitations We've tried our best to build math generalist models. However, we acknowledge that the models' performance may vary with the complexity and specifics of a given math problem, and not all mathematical fields can be covered comprehensively. ## Citation If you use the models, data, or code from this project, please cite the original paper: ``` @article{yue2024mammoth2, title={MAmmoTH2: Scaling Instructions from the Web}, author={Yue, Xiang and Zheng, Tuney and Zhang, Ge and Chen, Wenhu}, journal={arXiv preprint arXiv:2405.03548}, year={2024} } ```
Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-4_0bpw_exl2
Zoyd
2024-05-21T04:29:43Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "arxiv:2405.03548", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "exl2", "region:us" ]
text-generation
2024-05-21T03:51:14Z
--- license: mit language: - en --- **Exllamav2** quant (**exl2** / **4.0 bpw**) made with ExLlamaV2 v0.0.21 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-2_2bpw_exl2)**</center> | <center>2200 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-2_5bpw_exl2)**</center> | <center>2429 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-3_0bpw_exl2)**</center> | <center>2839 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-3_5bpw_exl2)**</center> | <center>3261 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-3_75bpw_exl2)**</center> | <center>3469 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-4_0bpw_exl2)**</center> | <center>3675 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-4_25bpw_exl2)**</center> | <center>3883 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-5_0bpw_exl2)**</center> | <center>4504 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-6_0bpw_exl2)**</center> | <center>5359 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-6_5bpw_exl2)**</center> | <center>5778 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-8_0bpw_exl2)**</center> | <center>6851 MB</center> | <center>8</center> | # 🦣 MAmmoTH2: Scaling Instructions from the Web Project Page: [https://tiger-ai-lab.github.io/MAmmoTH2/](https://tiger-ai-lab.github.io/MAmmoTH2/) Paper: [https://arxiv.org/pdf/2405.03548](https://arxiv.org/pdf/2405.03548) Code: [https://github.com/TIGER-AI-Lab/MAmmoTH2](https://github.com/TIGER-AI-Lab/MAmmoTH2) ## Introduction Introducing 🦣 MAmmoTH2, a game-changer in improving the reasoning abilities of large language models (LLMs) through innovative instruction tuning. By efficiently harvesting 10 million instruction-response pairs from the pre-training web corpus, we've developed MAmmoTH2 models that significantly boost performance on reasoning benchmarks. For instance, MAmmoTH2-7B (Mistral) sees its performance soar from 11% to 34% on MATH and from 36% to 67% on GSM8K, all without training on any domain-specific data. Further training on public instruction tuning datasets yields MAmmoTH2-Plus, setting new standards in reasoning and chatbot benchmarks. Our work presents a cost-effective approach to acquiring large-scale, high-quality instruction data, offering a fresh perspective on enhancing LLM reasoning abilities. 
| | **Base Model** | **MAmmoTH2** | **MAmmoTH2-Plus** | |:-----|:---------------------|:-------------------------------------------------------------------|:------------------------------------------------------------------| | 7B | Mistral | 🦣 [MAmmoTH2-7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B) | 🦣 [MAmmoTH2-7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B-Plus) | | 8B | Llama-3 | 🦣 [MAmmoTH2-8B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B) | 🦣 [MAmmoTH2-8B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B-Plus) | | 8x7B | Mixtral | 🦣 [MAmmoTH2-8x7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B) | 🦣 [MAmmoTH2-8x7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B-Plus) | ## Training Data Please refer to https://huggingface.co/datasets/TIGER-Lab/WebInstructSub for more details. ![Project Framework](webinstruct.png) ## Training Procedure The models are fine-tuned on the WEBINSTRUCT dataset using the original Llama-3, Mistral, and Mixtral models as base models. The training procedure varies for different models based on their sizes. Check out our paper for more details. ## Evaluation The models are evaluated using open-ended and multiple-choice math problems from several datasets. Here are the results: | **Model** | **TheoremQA** | **MATH** | **GSM8K** | **GPQA** | **MMLU-ST** | **BBH** | **ARC-C** | **Avg** | |:---------------------------------------|:--------------|:---------|:----------|:---------|:------------|:--------|:----------|:--------| | **MAmmoTH2-7B** (Updated) | 29.0 | 36.7 | 68.4 | 32.4 | 62.4 | 58.6 | 81.7 | 52.7 | | **MAmmoTH2-8B** (Updated) | 30.3 | 35.8 | 70.4 | 35.2 | 64.2 | 62.1 | 82.2 | 54.3 | | **MAmmoTH2-8x7B** | 32.2 | 39.0 | 75.4 | 36.8 | 67.4 | 71.1 | 87.5 | 58.9 | | **MAmmoTH2-7B-Plus** (Updated) | 31.2 | 46.0 | 84.6 | 33.8 | 63.8 | 63.3 | 84.4 | 58.1 | | **MAmmoTH2-8B-Plus** (Updated) | 31.5 | 43.0 | 85.2 | 35.8 | 66.7 | 69.7 | 84.3 | 59.4 | | **MAmmoTH2-8x7B-Plus** | 34.1 | 47.0 | 86.4 | 37.8 | 72.4 | 74.1 | 88.4 | 62.9 | To reproduce our results, please refer to https://github.com/TIGER-AI-Lab/MAmmoTH2/tree/main/math_eval. ## Usage You can use the models through Hugging Face's Transformers library. Use the pipeline function to create a text-generation pipeline with the model of your choice, then feed in a math problem to get the solution. Check our GitHub repo for more advanced usage: https://github.com/TIGER-AI-Lab/MAmmoTH2 ## Limitations We've tried our best to build math generalist models. However, we acknowledge that the models' performance may vary with the complexity and specifics of a given math problem, and not all mathematical fields can be covered comprehensively. ## Citation If you use the models, data, or code from this project, please cite the original paper: ``` @article{yue2024mammoth2, title={MAmmoTH2: Scaling Instructions from the Web}, author={Yue, Xiang and Zheng, Tuney and Zhang, Ge and Chen, Wenhu}, journal={arXiv preprint arXiv:2405.03548}, year={2024} } ```
Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-3_0bpw_exl2
Zoyd
2024-05-21T04:29:42Z
3
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "arxiv:2405.03548", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "3-bit", "exl2", "region:us" ]
text-generation
2024-05-21T03:28:33Z
--- license: mit language: - en --- **Exllamav2** quant (**exl2** / **3.0 bpw**) made with ExLlamaV2 v0.0.21 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-2_2bpw_exl2)**</center> | <center>2200 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-2_5bpw_exl2)**</center> | <center>2429 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-3_0bpw_exl2)**</center> | <center>2839 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-3_5bpw_exl2)**</center> | <center>3261 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-3_75bpw_exl2)**</center> | <center>3469 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-4_0bpw_exl2)**</center> | <center>3675 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-4_25bpw_exl2)**</center> | <center>3883 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-5_0bpw_exl2)**</center> | <center>4504 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-6_0bpw_exl2)**</center> | <center>5359 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-6_5bpw_exl2)**</center> | <center>5778 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-8_0bpw_exl2)**</center> | <center>6851 MB</center> | <center>8</center> | # 🦣 MAmmoTH2: Scaling Instructions from the Web Project Page: [https://tiger-ai-lab.github.io/MAmmoTH2/](https://tiger-ai-lab.github.io/MAmmoTH2/) Paper: [https://arxiv.org/pdf/2405.03548](https://arxiv.org/pdf/2405.03548) Code: [https://github.com/TIGER-AI-Lab/MAmmoTH2](https://github.com/TIGER-AI-Lab/MAmmoTH2) ## Introduction Introducing 🦣 MAmmoTH2, a game-changer in improving the reasoning abilities of large language models (LLMs) through innovative instruction tuning. By efficiently harvesting 10 million instruction-response pairs from the pre-training web corpus, we've developed MAmmoTH2 models that significantly boost performance on reasoning benchmarks. For instance, MAmmoTH2-7B (Mistral) sees its performance soar from 11% to 34% on MATH and from 36% to 67% on GSM8K, all without training on any domain-specific data. Further training on public instruction tuning datasets yields MAmmoTH2-Plus, setting new standards in reasoning and chatbot benchmarks. Our work presents a cost-effective approach to acquiring large-scale, high-quality instruction data, offering a fresh perspective on enhancing LLM reasoning abilities. 
| | **Base Model** | **MAmmoTH2** | **MAmmoTH2-Plus** | |:-----|:---------------------|:-------------------------------------------------------------------|:------------------------------------------------------------------| | 7B | Mistral | 🦣 [MAmmoTH2-7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B) | 🦣 [MAmmoTH2-7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B-Plus) | | 8B | Llama-3 | 🦣 [MAmmoTH2-8B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B) | 🦣 [MAmmoTH2-8B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B-Plus) | | 8x7B | Mixtral | 🦣 [MAmmoTH2-8x7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B) | 🦣 [MAmmoTH2-8x7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B-Plus) | ## Training Data Please refer to https://huggingface.co/datasets/TIGER-Lab/WebInstructSub for more details. ![Project Framework](webinstruct.png) ## Training Procedure The models are fine-tuned on the WEBINSTRUCT dataset using the original Llama-3, Mistral, and Mixtral models as base models. The training procedure varies for different models based on their sizes. Check out our paper for more details. ## Evaluation The models are evaluated using open-ended and multiple-choice math problems from several datasets. Here are the results: | **Model** | **TheoremQA** | **MATH** | **GSM8K** | **GPQA** | **MMLU-ST** | **BBH** | **ARC-C** | **Avg** | |:---------------------------------------|:--------------|:---------|:----------|:---------|:------------|:--------|:----------|:--------| | **MAmmoTH2-7B** (Updated) | 29.0 | 36.7 | 68.4 | 32.4 | 62.4 | 58.6 | 81.7 | 52.7 | | **MAmmoTH2-8B** (Updated) | 30.3 | 35.8 | 70.4 | 35.2 | 64.2 | 62.1 | 82.2 | 54.3 | | **MAmmoTH2-8x7B** | 32.2 | 39.0 | 75.4 | 36.8 | 67.4 | 71.1 | 87.5 | 58.9 | | **MAmmoTH2-7B-Plus** (Updated) | 31.2 | 46.0 | 84.6 | 33.8 | 63.8 | 63.3 | 84.4 | 58.1 | | **MAmmoTH2-8B-Plus** (Updated) | 31.5 | 43.0 | 85.2 | 35.8 | 66.7 | 69.7 | 84.3 | 59.4 | | **MAmmoTH2-8x7B-Plus** | 34.1 | 47.0 | 86.4 | 37.8 | 72.4 | 74.1 | 88.4 | 62.9 | To reproduce our results, please refer to https://github.com/TIGER-AI-Lab/MAmmoTH2/tree/main/math_eval. ## Usage You can use the models through Hugging Face's Transformers library. Use the pipeline function to create a text-generation pipeline with the model of your choice, then feed in a math problem to get the solution. Check our GitHub repo for more advanced usage: https://github.com/TIGER-AI-Lab/MAmmoTH2 ## Limitations We've tried our best to build math generalist models. However, we acknowledge that the models' performance may vary with the complexity and specifics of a given math problem, and not all mathematical fields can be covered comprehensively. ## Citation If you use the models, data, or code from this project, please cite the original paper: ``` @article{yue2024mammoth2, title={MAmmoTH2: Scaling Instructions from the Web}, author={Yue, Xiang and Zheng, Tuney and Zhang, Ge and Chen, Wenhu}, journal={arXiv preprint arXiv:2405.03548}, year={2024} } ```
Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-2_5bpw_exl2
Zoyd
2024-05-21T04:29:42Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "arxiv:2405.03548", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-05-21T03:21:03Z
--- license: mit language: - en --- **Exllamav2** quant (**exl2** / **2.5 bpw**) made with ExLlamaV2 v0.0.21 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-2_2bpw_exl2)**</center> | <center>2200 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-2_5bpw_exl2)**</center> | <center>2429 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-3_0bpw_exl2)**</center> | <center>2839 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-3_5bpw_exl2)**</center> | <center>3261 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-3_75bpw_exl2)**</center> | <center>3469 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-4_0bpw_exl2)**</center> | <center>3675 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-4_25bpw_exl2)**</center> | <center>3883 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-5_0bpw_exl2)**</center> | <center>4504 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-6_0bpw_exl2)**</center> | <center>5359 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-6_5bpw_exl2)**</center> | <center>5778 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-8_0bpw_exl2)**</center> | <center>6851 MB</center> | <center>8</center> | # 🦣 MAmmoTH2: Scaling Instructions from the Web Project Page: [https://tiger-ai-lab.github.io/MAmmoTH2/](https://tiger-ai-lab.github.io/MAmmoTH2/) Paper: [https://arxiv.org/pdf/2405.03548](https://arxiv.org/pdf/2405.03548) Code: [https://github.com/TIGER-AI-Lab/MAmmoTH2](https://github.com/TIGER-AI-Lab/MAmmoTH2) ## Introduction Introducing 🦣 MAmmoTH2, a game-changer in improving the reasoning abilities of large language models (LLMs) through innovative instruction tuning. By efficiently harvesting 10 million instruction-response pairs from the pre-training web corpus, we've developed MAmmoTH2 models that significantly boost performance on reasoning benchmarks. For instance, MAmmoTH2-7B (Mistral) sees its performance soar from 11% to 34% on MATH and from 36% to 67% on GSM8K, all without training on any domain-specific data. Further training on public instruction tuning datasets yields MAmmoTH2-Plus, setting new standards in reasoning and chatbot benchmarks. Our work presents a cost-effective approach to acquiring large-scale, high-quality instruction data, offering a fresh perspective on enhancing LLM reasoning abilities. 
| | **Base Model** | **MAmmoTH2** | **MAmmoTH2-Plus** | |:-----|:---------------------|:-------------------------------------------------------------------|:------------------------------------------------------------------| | 7B | Mistral | 🦣 [MAmmoTH2-7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B) | 🦣 [MAmmoTH2-7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B-Plus) | | 8B | Llama-3 | 🦣 [MAmmoTH2-8B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B) | 🦣 [MAmmoTH2-8B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B-Plus) | | 8x7B | Mixtral | 🦣 [MAmmoTH2-8x7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B) | 🦣 [MAmmoTH2-8x7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B-Plus) | ## Training Data Please refer to https://huggingface.co/datasets/TIGER-Lab/WebInstructSub for more details. ![Project Framework](webinstruct.png) ## Training Procedure The models are fine-tuned on the WEBINSTRUCT dataset using the original Llama-3, Mistral, and Mixtral models as base models. The training procedure varies for different models based on their sizes. Check out our paper for more details. ## Evaluation The models are evaluated using open-ended and multiple-choice math problems from several datasets. Here are the results: | **Model** | **TheoremQA** | **MATH** | **GSM8K** | **GPQA** | **MMLU-ST** | **BBH** | **ARC-C** | **Avg** | |:---------------------------------------|:--------------|:---------|:----------|:---------|:------------|:--------|:----------|:--------| | **MAmmoTH2-7B** (Updated) | 29.0 | 36.7 | 68.4 | 32.4 | 62.4 | 58.6 | 81.7 | 52.7 | | **MAmmoTH2-8B** (Updated) | 30.3 | 35.8 | 70.4 | 35.2 | 64.2 | 62.1 | 82.2 | 54.3 | | **MAmmoTH2-8x7B** | 32.2 | 39.0 | 75.4 | 36.8 | 67.4 | 71.1 | 87.5 | 58.9 | | **MAmmoTH2-7B-Plus** (Updated) | 31.2 | 46.0 | 84.6 | 33.8 | 63.8 | 63.3 | 84.4 | 58.1 | | **MAmmoTH2-8B-Plus** (Updated) | 31.5 | 43.0 | 85.2 | 35.8 | 66.7 | 69.7 | 84.3 | 59.4 | | **MAmmoTH2-8x7B-Plus** | 34.1 | 47.0 | 86.4 | 37.8 | 72.4 | 74.1 | 88.4 | 62.9 | To reproduce our results, please refer to https://github.com/TIGER-AI-Lab/MAmmoTH2/tree/main/math_eval. ## Usage You can use the models through Hugging Face's Transformers library. Use the pipeline function to create a text-generation pipeline with the model of your choice, then feed in a math problem to get the solution. Check our GitHub repo for more advanced usage: https://github.com/TIGER-AI-Lab/MAmmoTH2 ## Limitations We've tried our best to build math generalist models. However, we acknowledge that the models' performance may vary with the complexity and specifics of a given math problem, and not all mathematical fields can be covered comprehensively. ## Citation If you use the models, data, or code from this project, please cite the original paper: ``` @article{yue2024mammoth2, title={MAmmoTH2: Scaling Instructions from the Web}, author={Yue, Xiang and Zheng, Tuney and Zhang, Ge and Chen, Wenhu}, journal={arXiv preprint arXiv:2405.03548}, year={2024} } ```
Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-2_2bpw_exl2
Zoyd
2024-05-21T04:29:42Z
6
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "arxiv:2405.03548", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-05-21T03:13:36Z
--- license: mit language: - en --- **Exllamav2** quant (**exl2** / **2.2 bpw**) made with ExLlamaV2 v0.0.21 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-2_2bpw_exl2)**</center> | <center>2200 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-2_5bpw_exl2)**</center> | <center>2429 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-3_0bpw_exl2)**</center> | <center>2839 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-3_5bpw_exl2)**</center> | <center>3261 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-3_75bpw_exl2)**</center> | <center>3469 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-4_0bpw_exl2)**</center> | <center>3675 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-4_25bpw_exl2)**</center> | <center>3883 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-5_0bpw_exl2)**</center> | <center>4504 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-6_0bpw_exl2)**</center> | <center>5359 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-6_5bpw_exl2)**</center> | <center>5778 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-Plus-8_0bpw_exl2)**</center> | <center>6851 MB</center> | <center>8</center> | # 🦣 MAmmoTH2: Scaling Instructions from the Web Project Page: [https://tiger-ai-lab.github.io/MAmmoTH2/](https://tiger-ai-lab.github.io/MAmmoTH2/) Paper: [https://arxiv.org/pdf/2405.03548](https://arxiv.org/pdf/2405.03548) Code: [https://github.com/TIGER-AI-Lab/MAmmoTH2](https://github.com/TIGER-AI-Lab/MAmmoTH2) ## Introduction Introducing 🦣 MAmmoTH2, a game-changer in improving the reasoning abilities of large language models (LLMs) through innovative instruction tuning. By efficiently harvesting 10 million instruction-response pairs from the pre-training web corpus, we've developed MAmmoTH2 models that significantly boost performance on reasoning benchmarks. For instance, MAmmoTH2-7B (Mistral) sees its performance soar from 11% to 34% on MATH and from 36% to 67% on GSM8K, all without training on any domain-specific data. Further training on public instruction tuning datasets yields MAmmoTH2-Plus, setting new standards in reasoning and chatbot benchmarks. Our work presents a cost-effective approach to acquiring large-scale, high-quality instruction data, offering a fresh perspective on enhancing LLM reasoning abilities. 
| | **Base Model** | **MAmmoTH2** | **MAmmoTH2-Plus** | |:-----|:---------------------|:-------------------------------------------------------------------|:------------------------------------------------------------------| | 7B | Mistral | 🦣 [MAmmoTH2-7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B) | 🦣 [MAmmoTH2-7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B-Plus) | | 8B | Llama-3 | 🦣 [MAmmoTH2-8B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B) | 🦣 [MAmmoTH2-8B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B-Plus) | | 8x7B | Mixtral | 🦣 [MAmmoTH2-8x7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B) | 🦣 [MAmmoTH2-8x7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B-Plus) | ## Training Data Please refer to https://huggingface.co/datasets/TIGER-Lab/WebInstructSub for more details. ![Project Framework](webinstruct.png) ## Training Procedure The models are fine-tuned on the WEBINSTRUCT dataset using the original Llama-3, Mistral, and Mixtral models as base models. The training procedure varies for different models based on their sizes. Check out our paper for more details. ## Evaluation The models are evaluated using open-ended and multiple-choice math problems from several datasets. Here are the results: | **Model** | **TheoremQA** | **MATH** | **GSM8K** | **GPQA** | **MMLU-ST** | **BBH** | **ARC-C** | **Avg** | |:---------------------------------------|:--------------|:---------|:----------|:---------|:------------|:--------|:----------|:--------| | **MAmmoTH2-7B** (Updated) | 29.0 | 36.7 | 68.4 | 32.4 | 62.4 | 58.6 | 81.7 | 52.7 | | **MAmmoTH2-8B** (Updated) | 30.3 | 35.8 | 70.4 | 35.2 | 64.2 | 62.1 | 82.2 | 54.3 | | **MAmmoTH2-8x7B** | 32.2 | 39.0 | 75.4 | 36.8 | 67.4 | 71.1 | 87.5 | 58.9 | | **MAmmoTH2-7B-Plus** (Updated) | 31.2 | 46.0 | 84.6 | 33.8 | 63.8 | 63.3 | 84.4 | 58.1 | | **MAmmoTH2-8B-Plus** (Updated) | 31.5 | 43.0 | 85.2 | 35.8 | 66.7 | 69.7 | 84.3 | 59.4 | | **MAmmoTH2-8x7B-Plus** | 34.1 | 47.0 | 86.4 | 37.8 | 72.4 | 74.1 | 88.4 | 62.9 | To reproduce our results, please refer to https://github.com/TIGER-AI-Lab/MAmmoTH2/tree/main/math_eval. ## Usage You can use the models through Hugging Face's Transformers library. Use the pipeline function to create a text-generation pipeline with the model of your choice, then feed in a math problem to get the solution. Check our GitHub repo for more advanced usage: https://github.com/TIGER-AI-Lab/MAmmoTH2 ## Limitations We've tried our best to build math generalist models. However, we acknowledge that the models' performance may vary with the complexity and specifics of a given math problem, and not all mathematical fields can be covered comprehensively. ## Citation If you use the models, data, or code from this project, please cite the original paper: ``` @article{yue2024mammoth2, title={MAmmoTH2: Scaling Instructions from the Web}, author={Yue, Xiang and Zheng, Tuney and Zhang, Ge and Chen, Wenhu}, journal={arXiv preprint arXiv:2405.03548}, year={2024} } ```
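Since the Zoyd repos above ship EXL2-quantized weights, inference normally goes through the exllamav2 library rather than plain Transformers. Below is a loading sketch against the v0.0.21-era API, assuming the repo has been downloaded to a local directory; the path, prompt, and sampler settings are illustrative.

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Point the config at a local snapshot of one of the EXL2 repos.
config = ExLlamaV2Config()
config.model_dir = "./TIGER-Lab_MAmmoTH2-7B-Plus-2_2bpw_exl2"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7

print(generator.generate_simple("What is the sum of the first 50 positive odd integers?", settings, 128))
```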
sangmini/Llama-3-Ko-8B-Instruct
sangmini
2024-05-21T04:28:38Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-21T04:23:33Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
muthu0101/q-FrozenLake-v1-4x4-noSlippery
muthu0101
2024-05-21T04:20:52Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-05-21T04:20:48Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
# load_from_hub is the helper from the Hugging Face Deep RL course utilities;
# it downloads and unpickles the model dictionary from the Hub.
model = load_from_hub(repo_id="muthu0101/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
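For completeness, here is a minimal evaluation sketch to go with the snippet above. It assumes the pickled dictionary follows the Hugging Face Deep RL course convention, i.e. that it stores the Q-table under a `qtable` key alongside `env_id`; that key name, the episode count, and the Gymnasium-style API calls are assumptions to adjust for your setup.

```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

# Download and unpickle the model dictionary from the Hub.
path = hf_hub_download(repo_id="muthu0101/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

# Recreate the training environment (this agent was trained with is_slippery=False).
env = gym.make(model["env_id"], is_slippery=False)
qtable = model["qtable"]  # assumed key, per the Deep RL course convention

# Roll out the greedy policy for a few episodes and report the mean reward.
episode_rewards = []
for _ in range(10):
    state, _ = env.reset()
    done, total_reward = False, 0.0
    while not done:
        action = int(np.argmax(qtable[state]))  # greedy action for this state
        state, reward, terminated, truncated, _ = env.step(action)
        total_reward += reward
        done = terminated or truncated
    episode_rewards.append(total_reward)

print(f"mean reward over 10 episodes: {np.mean(episode_rewards):.2f}")
```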
ekorman-strive/bge-large-en-v1.5
ekorman-strive
2024-05-21T04:15:32Z
11
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "mteb", "en", "arxiv:2401.03462", "arxiv:2312.15503", "arxiv:2311.13534", "arxiv:2310.07554", "arxiv:2309.07597", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-20T23:31:23Z
--- tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers - mteb model-index: - name: bge-large-en-v1.5 results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 75.8507462686567 - type: ap value: 38.566457320228245 - type: f1 value: 69.69386648043475 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 92.416675 - type: ap value: 89.1928861155922 - type: f1 value: 92.39477019574215 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 48.175999999999995 - type: f1 value: 47.80712792870253 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 40.184999999999995 - type: map_at_10 value: 55.654 - type: map_at_100 value: 56.25 - type: map_at_1000 value: 56.255 - type: map_at_3 value: 51.742999999999995 - type: map_at_5 value: 54.129000000000005 - type: mrr_at_1 value: 40.967 - type: mrr_at_10 value: 55.96 - type: mrr_at_100 value: 56.54900000000001 - type: mrr_at_1000 value: 56.554 - type: mrr_at_3 value: 51.980000000000004 - type: mrr_at_5 value: 54.44 - type: ndcg_at_1 value: 40.184999999999995 - type: ndcg_at_10 value: 63.542 - type: ndcg_at_100 value: 65.96499999999999 - type: ndcg_at_1000 value: 66.08699999999999 - type: ndcg_at_3 value: 55.582 - type: ndcg_at_5 value: 59.855000000000004 - type: precision_at_1 value: 40.184999999999995 - type: precision_at_10 value: 8.841000000000001 - type: precision_at_100 value: 0.987 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 22.238 - type: precision_at_5 value: 15.405 - type: recall_at_1 value: 40.184999999999995 - type: recall_at_10 value: 88.407 - type: recall_at_100 value: 98.72 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 66.714 - type: recall_at_5 value: 77.027 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 48.567077926750066 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 43.19453389182364 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 64.46555939623092 - type: mrr value: 77.82361605768807 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 84.9554128814735 - type: cos_sim_spearman value: 84.65373612172036 - type: euclidean_pearson value: 83.2905059954138 - type: euclidean_spearman value: 84.52240782811128 - type: manhattan_pearson value: 82.99533802997436 - type: manhattan_spearman value: 84.20673798475734 - task: type: Classification dataset: type: mteb/banking77 
name: MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 87.78896103896103 - type: f1 value: 87.77189310964883 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 39.714538337650495 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 36.90108349284447 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 32.795 - type: map_at_10 value: 43.669000000000004 - type: map_at_100 value: 45.151 - type: map_at_1000 value: 45.278 - type: map_at_3 value: 40.006 - type: map_at_5 value: 42.059999999999995 - type: mrr_at_1 value: 39.771 - type: mrr_at_10 value: 49.826 - type: mrr_at_100 value: 50.504000000000005 - type: mrr_at_1000 value: 50.549 - type: mrr_at_3 value: 47.115 - type: mrr_at_5 value: 48.832 - type: ndcg_at_1 value: 39.771 - type: ndcg_at_10 value: 50.217999999999996 - type: ndcg_at_100 value: 55.454 - type: ndcg_at_1000 value: 57.37 - type: ndcg_at_3 value: 44.885000000000005 - type: ndcg_at_5 value: 47.419 - type: precision_at_1 value: 39.771 - type: precision_at_10 value: 9.642000000000001 - type: precision_at_100 value: 1.538 - type: precision_at_1000 value: 0.198 - type: precision_at_3 value: 21.268 - type: precision_at_5 value: 15.536 - type: recall_at_1 value: 32.795 - type: recall_at_10 value: 62.580999999999996 - type: recall_at_100 value: 84.438 - type: recall_at_1000 value: 96.492 - type: recall_at_3 value: 47.071000000000005 - type: recall_at_5 value: 54.079 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 32.671 - type: map_at_10 value: 43.334 - type: map_at_100 value: 44.566 - type: map_at_1000 value: 44.702999999999996 - type: map_at_3 value: 40.343 - type: map_at_5 value: 41.983 - type: mrr_at_1 value: 40.764 - type: mrr_at_10 value: 49.382 - type: mrr_at_100 value: 49.988 - type: mrr_at_1000 value: 50.03300000000001 - type: mrr_at_3 value: 47.293 - type: mrr_at_5 value: 48.51 - type: ndcg_at_1 value: 40.764 - type: ndcg_at_10 value: 49.039 - type: ndcg_at_100 value: 53.259 - type: ndcg_at_1000 value: 55.253 - type: ndcg_at_3 value: 45.091 - type: ndcg_at_5 value: 46.839999999999996 - type: precision_at_1 value: 40.764 - type: precision_at_10 value: 9.191 - type: precision_at_100 value: 1.476 - type: precision_at_1000 value: 0.19499999999999998 - type: precision_at_3 value: 21.72 - type: precision_at_5 value: 15.299 - type: recall_at_1 value: 32.671 - type: recall_at_10 value: 58.816 - type: recall_at_100 value: 76.654 - type: recall_at_1000 value: 89.05999999999999 - type: recall_at_3 value: 46.743 - type: recall_at_5 value: 51.783 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 40.328 - type: map_at_10 value: 53.32599999999999 - type: map_at_100 value: 54.37499999999999 - type: map_at_1000 value: 54.429 - type: map_at_3 value: 49.902 - type: map_at_5 value: 52.002 - type: mrr_at_1 value: 46.332 - type: mrr_at_10 
value: 56.858 - type: mrr_at_100 value: 57.522 - type: mrr_at_1000 value: 57.54899999999999 - type: mrr_at_3 value: 54.472 - type: mrr_at_5 value: 55.996 - type: ndcg_at_1 value: 46.332 - type: ndcg_at_10 value: 59.313 - type: ndcg_at_100 value: 63.266999999999996 - type: ndcg_at_1000 value: 64.36 - type: ndcg_at_3 value: 53.815000000000005 - type: ndcg_at_5 value: 56.814 - type: precision_at_1 value: 46.332 - type: precision_at_10 value: 9.53 - type: precision_at_100 value: 1.238 - type: precision_at_1000 value: 0.13699999999999998 - type: precision_at_3 value: 24.054000000000002 - type: precision_at_5 value: 16.589000000000002 - type: recall_at_1 value: 40.328 - type: recall_at_10 value: 73.421 - type: recall_at_100 value: 90.059 - type: recall_at_1000 value: 97.81 - type: recall_at_3 value: 59.009 - type: recall_at_5 value: 66.352 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 27.424 - type: map_at_10 value: 36.332 - type: map_at_100 value: 37.347 - type: map_at_1000 value: 37.422 - type: map_at_3 value: 33.743 - type: map_at_5 value: 35.176 - type: mrr_at_1 value: 29.153000000000002 - type: mrr_at_10 value: 38.233 - type: mrr_at_100 value: 39.109 - type: mrr_at_1000 value: 39.164 - type: mrr_at_3 value: 35.876000000000005 - type: mrr_at_5 value: 37.169000000000004 - type: ndcg_at_1 value: 29.153000000000002 - type: ndcg_at_10 value: 41.439 - type: ndcg_at_100 value: 46.42 - type: ndcg_at_1000 value: 48.242000000000004 - type: ndcg_at_3 value: 36.362 - type: ndcg_at_5 value: 38.743 - type: precision_at_1 value: 29.153000000000002 - type: precision_at_10 value: 6.315999999999999 - type: precision_at_100 value: 0.927 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 15.443000000000001 - type: precision_at_5 value: 10.644 - type: recall_at_1 value: 27.424 - type: recall_at_10 value: 55.364000000000004 - type: recall_at_100 value: 78.211 - type: recall_at_1000 value: 91.74600000000001 - type: recall_at_3 value: 41.379 - type: recall_at_5 value: 47.14 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 19.601 - type: map_at_10 value: 27.826 - type: map_at_100 value: 29.017 - type: map_at_1000 value: 29.137 - type: map_at_3 value: 25.125999999999998 - type: map_at_5 value: 26.765 - type: mrr_at_1 value: 24.005000000000003 - type: mrr_at_10 value: 32.716 - type: mrr_at_100 value: 33.631 - type: mrr_at_1000 value: 33.694 - type: mrr_at_3 value: 29.934 - type: mrr_at_5 value: 31.630999999999997 - type: ndcg_at_1 value: 24.005000000000003 - type: ndcg_at_10 value: 33.158 - type: ndcg_at_100 value: 38.739000000000004 - type: ndcg_at_1000 value: 41.495 - type: ndcg_at_3 value: 28.185 - type: ndcg_at_5 value: 30.796 - type: precision_at_1 value: 24.005000000000003 - type: precision_at_10 value: 5.908 - type: precision_at_100 value: 1.005 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 13.391 - type: precision_at_5 value: 9.876 - type: recall_at_1 value: 19.601 - type: recall_at_10 value: 44.746 - type: recall_at_100 value: 68.82300000000001 - type: recall_at_1000 value: 88.215 - type: recall_at_3 value: 31.239 - type: recall_at_5 value: 37.695 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: None metrics: - 
type: map_at_1 value: 30.130000000000003 - type: map_at_10 value: 40.96 - type: map_at_100 value: 42.282 - type: map_at_1000 value: 42.392 - type: map_at_3 value: 37.889 - type: map_at_5 value: 39.661 - type: mrr_at_1 value: 36.958999999999996 - type: mrr_at_10 value: 46.835 - type: mrr_at_100 value: 47.644 - type: mrr_at_1000 value: 47.688 - type: mrr_at_3 value: 44.562000000000005 - type: mrr_at_5 value: 45.938 - type: ndcg_at_1 value: 36.958999999999996 - type: ndcg_at_10 value: 47.06 - type: ndcg_at_100 value: 52.345 - type: ndcg_at_1000 value: 54.35 - type: ndcg_at_3 value: 42.301 - type: ndcg_at_5 value: 44.635999999999996 - type: precision_at_1 value: 36.958999999999996 - type: precision_at_10 value: 8.479000000000001 - type: precision_at_100 value: 1.284 - type: precision_at_1000 value: 0.163 - type: precision_at_3 value: 20.244 - type: precision_at_5 value: 14.224999999999998 - type: recall_at_1 value: 30.130000000000003 - type: recall_at_10 value: 59.27 - type: recall_at_100 value: 81.195 - type: recall_at_1000 value: 94.21199999999999 - type: recall_at_3 value: 45.885 - type: recall_at_5 value: 52.016 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 26.169999999999998 - type: map_at_10 value: 36.451 - type: map_at_100 value: 37.791000000000004 - type: map_at_1000 value: 37.897 - type: map_at_3 value: 33.109 - type: map_at_5 value: 34.937000000000005 - type: mrr_at_1 value: 32.877 - type: mrr_at_10 value: 42.368 - type: mrr_at_100 value: 43.201 - type: mrr_at_1000 value: 43.259 - type: mrr_at_3 value: 39.763999999999996 - type: mrr_at_5 value: 41.260000000000005 - type: ndcg_at_1 value: 32.877 - type: ndcg_at_10 value: 42.659000000000006 - type: ndcg_at_100 value: 48.161 - type: ndcg_at_1000 value: 50.345 - type: ndcg_at_3 value: 37.302 - type: ndcg_at_5 value: 39.722 - type: precision_at_1 value: 32.877 - type: precision_at_10 value: 7.9 - type: precision_at_100 value: 1.236 - type: precision_at_1000 value: 0.158 - type: precision_at_3 value: 17.846 - type: precision_at_5 value: 12.9 - type: recall_at_1 value: 26.169999999999998 - type: recall_at_10 value: 55.35 - type: recall_at_100 value: 78.755 - type: recall_at_1000 value: 93.518 - type: recall_at_3 value: 40.176 - type: recall_at_5 value: 46.589000000000006 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 27.15516666666667 - type: map_at_10 value: 36.65741666666667 - type: map_at_100 value: 37.84991666666666 - type: map_at_1000 value: 37.96316666666667 - type: map_at_3 value: 33.74974999999999 - type: map_at_5 value: 35.3765 - type: mrr_at_1 value: 32.08233333333334 - type: mrr_at_10 value: 41.033833333333334 - type: mrr_at_100 value: 41.84524999999999 - type: mrr_at_1000 value: 41.89983333333333 - type: mrr_at_3 value: 38.62008333333333 - type: mrr_at_5 value: 40.03441666666666 - type: ndcg_at_1 value: 32.08233333333334 - type: ndcg_at_10 value: 42.229 - type: ndcg_at_100 value: 47.26716666666667 - type: ndcg_at_1000 value: 49.43466666666667 - type: ndcg_at_3 value: 37.36408333333333 - type: ndcg_at_5 value: 39.6715 - type: precision_at_1 value: 32.08233333333334 - type: precision_at_10 value: 7.382583333333334 - type: precision_at_100 value: 1.16625 - type: precision_at_1000 value: 0.15408333333333332 - type: precision_at_3 value: 17.218 - type: precision_at_5 value: 12.21875 - type: 
recall_at_1 value: 27.15516666666667 - type: recall_at_10 value: 54.36683333333333 - type: recall_at_100 value: 76.37183333333333 - type: recall_at_1000 value: 91.26183333333333 - type: recall_at_3 value: 40.769916666666674 - type: recall_at_5 value: 46.702333333333335 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 25.749 - type: map_at_10 value: 33.001999999999995 - type: map_at_100 value: 33.891 - type: map_at_1000 value: 33.993 - type: map_at_3 value: 30.703999999999997 - type: map_at_5 value: 31.959 - type: mrr_at_1 value: 28.834 - type: mrr_at_10 value: 35.955 - type: mrr_at_100 value: 36.709 - type: mrr_at_1000 value: 36.779 - type: mrr_at_3 value: 33.947 - type: mrr_at_5 value: 35.089 - type: ndcg_at_1 value: 28.834 - type: ndcg_at_10 value: 37.329 - type: ndcg_at_100 value: 41.79 - type: ndcg_at_1000 value: 44.169000000000004 - type: ndcg_at_3 value: 33.184999999999995 - type: ndcg_at_5 value: 35.107 - type: precision_at_1 value: 28.834 - type: precision_at_10 value: 5.7669999999999995 - type: precision_at_100 value: 0.876 - type: precision_at_1000 value: 0.11399999999999999 - type: precision_at_3 value: 14.213000000000001 - type: precision_at_5 value: 9.754999999999999 - type: recall_at_1 value: 25.749 - type: recall_at_10 value: 47.791 - type: recall_at_100 value: 68.255 - type: recall_at_1000 value: 85.749 - type: recall_at_3 value: 36.199 - type: recall_at_5 value: 41.071999999999996 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 17.777 - type: map_at_10 value: 25.201 - type: map_at_100 value: 26.423999999999996 - type: map_at_1000 value: 26.544 - type: map_at_3 value: 22.869 - type: map_at_5 value: 24.023 - type: mrr_at_1 value: 21.473 - type: mrr_at_10 value: 29.12 - type: mrr_at_100 value: 30.144 - type: mrr_at_1000 value: 30.215999999999998 - type: mrr_at_3 value: 26.933 - type: mrr_at_5 value: 28.051 - type: ndcg_at_1 value: 21.473 - type: ndcg_at_10 value: 30.003 - type: ndcg_at_100 value: 35.766 - type: ndcg_at_1000 value: 38.501000000000005 - type: ndcg_at_3 value: 25.773000000000003 - type: ndcg_at_5 value: 27.462999999999997 - type: precision_at_1 value: 21.473 - type: precision_at_10 value: 5.482 - type: precision_at_100 value: 0.975 - type: precision_at_1000 value: 0.13799999999999998 - type: precision_at_3 value: 12.205 - type: precision_at_5 value: 8.692 - type: recall_at_1 value: 17.777 - type: recall_at_10 value: 40.582 - type: recall_at_100 value: 66.305 - type: recall_at_1000 value: 85.636 - type: recall_at_3 value: 28.687 - type: recall_at_5 value: 33.089 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 26.677 - type: map_at_10 value: 36.309000000000005 - type: map_at_100 value: 37.403999999999996 - type: map_at_1000 value: 37.496 - type: map_at_3 value: 33.382 - type: map_at_5 value: 34.98 - type: mrr_at_1 value: 31.343 - type: mrr_at_10 value: 40.549 - type: mrr_at_100 value: 41.342 - type: mrr_at_1000 value: 41.397 - type: mrr_at_3 value: 38.029 - type: mrr_at_5 value: 39.451 - type: ndcg_at_1 value: 31.343 - type: ndcg_at_10 value: 42.1 - type: ndcg_at_100 value: 47.089999999999996 - type: ndcg_at_1000 value: 49.222 - type: ndcg_at_3 value: 36.836999999999996 - type: ndcg_at_5 value: 39.21 - type: 
precision_at_1 value: 31.343 - type: precision_at_10 value: 7.164 - type: precision_at_100 value: 1.0959999999999999 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 16.915 - type: precision_at_5 value: 11.940000000000001 - type: recall_at_1 value: 26.677 - type: recall_at_10 value: 55.54599999999999 - type: recall_at_100 value: 77.094 - type: recall_at_1000 value: 92.01 - type: recall_at_3 value: 41.191 - type: recall_at_5 value: 47.006 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.501 - type: map_at_10 value: 33.102 - type: map_at_100 value: 34.676 - type: map_at_1000 value: 34.888000000000005 - type: map_at_3 value: 29.944 - type: map_at_5 value: 31.613999999999997 - type: mrr_at_1 value: 29.447000000000003 - type: mrr_at_10 value: 37.996 - type: mrr_at_100 value: 38.946 - type: mrr_at_1000 value: 38.995000000000005 - type: mrr_at_3 value: 35.079 - type: mrr_at_5 value: 36.69 - type: ndcg_at_1 value: 29.447000000000003 - type: ndcg_at_10 value: 39.232 - type: ndcg_at_100 value: 45.247 - type: ndcg_at_1000 value: 47.613 - type: ndcg_at_3 value: 33.922999999999995 - type: ndcg_at_5 value: 36.284 - type: precision_at_1 value: 29.447000000000003 - type: precision_at_10 value: 7.648000000000001 - type: precision_at_100 value: 1.516 - type: precision_at_1000 value: 0.23900000000000002 - type: precision_at_3 value: 16.008 - type: precision_at_5 value: 11.779 - type: recall_at_1 value: 24.501 - type: recall_at_10 value: 51.18899999999999 - type: recall_at_100 value: 78.437 - type: recall_at_1000 value: 92.842 - type: recall_at_3 value: 35.808 - type: recall_at_5 value: 42.197 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 22.039 - type: map_at_10 value: 30.377 - type: map_at_100 value: 31.275 - type: map_at_1000 value: 31.379 - type: map_at_3 value: 27.98 - type: map_at_5 value: 29.358 - type: mrr_at_1 value: 24.03 - type: mrr_at_10 value: 32.568000000000005 - type: mrr_at_100 value: 33.403 - type: mrr_at_1000 value: 33.475 - type: mrr_at_3 value: 30.436999999999998 - type: mrr_at_5 value: 31.796000000000003 - type: ndcg_at_1 value: 24.03 - type: ndcg_at_10 value: 35.198 - type: ndcg_at_100 value: 39.668 - type: ndcg_at_1000 value: 42.296 - type: ndcg_at_3 value: 30.709999999999997 - type: ndcg_at_5 value: 33.024 - type: precision_at_1 value: 24.03 - type: precision_at_10 value: 5.564 - type: precision_at_100 value: 0.828 - type: precision_at_1000 value: 0.117 - type: precision_at_3 value: 13.309000000000001 - type: precision_at_5 value: 9.39 - type: recall_at_1 value: 22.039 - type: recall_at_10 value: 47.746 - type: recall_at_100 value: 68.23599999999999 - type: recall_at_1000 value: 87.852 - type: recall_at_3 value: 35.852000000000004 - type: recall_at_5 value: 41.410000000000004 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 15.692999999999998 - type: map_at_10 value: 26.903 - type: map_at_100 value: 28.987000000000002 - type: map_at_1000 value: 29.176999999999996 - type: map_at_3 value: 22.137 - type: map_at_5 value: 24.758 - type: mrr_at_1 value: 35.57 - type: mrr_at_10 value: 47.821999999999996 - type: mrr_at_100 value: 48.608000000000004 - type: mrr_at_1000 value: 48.638999999999996 - type: 
mrr_at_3 value: 44.452000000000005 - type: mrr_at_5 value: 46.546 - type: ndcg_at_1 value: 35.57 - type: ndcg_at_10 value: 36.567 - type: ndcg_at_100 value: 44.085 - type: ndcg_at_1000 value: 47.24 - type: ndcg_at_3 value: 29.964000000000002 - type: ndcg_at_5 value: 32.511 - type: precision_at_1 value: 35.57 - type: precision_at_10 value: 11.485 - type: precision_at_100 value: 1.9619999999999997 - type: precision_at_1000 value: 0.256 - type: precision_at_3 value: 22.237000000000002 - type: precision_at_5 value: 17.471999999999998 - type: recall_at_1 value: 15.692999999999998 - type: recall_at_10 value: 43.056 - type: recall_at_100 value: 68.628 - type: recall_at_1000 value: 86.075 - type: recall_at_3 value: 26.918999999999997 - type: recall_at_5 value: 34.14 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 9.53 - type: map_at_10 value: 20.951 - type: map_at_100 value: 30.136000000000003 - type: map_at_1000 value: 31.801000000000002 - type: map_at_3 value: 15.021 - type: map_at_5 value: 17.471999999999998 - type: mrr_at_1 value: 71.0 - type: mrr_at_10 value: 79.176 - type: mrr_at_100 value: 79.418 - type: mrr_at_1000 value: 79.426 - type: mrr_at_3 value: 78.125 - type: mrr_at_5 value: 78.61200000000001 - type: ndcg_at_1 value: 58.5 - type: ndcg_at_10 value: 44.106 - type: ndcg_at_100 value: 49.268 - type: ndcg_at_1000 value: 56.711999999999996 - type: ndcg_at_3 value: 48.934 - type: ndcg_at_5 value: 45.826 - type: precision_at_1 value: 71.0 - type: precision_at_10 value: 35.0 - type: precision_at_100 value: 11.360000000000001 - type: precision_at_1000 value: 2.046 - type: precision_at_3 value: 52.833 - type: precision_at_5 value: 44.15 - type: recall_at_1 value: 9.53 - type: recall_at_10 value: 26.811 - type: recall_at_100 value: 55.916999999999994 - type: recall_at_1000 value: 79.973 - type: recall_at_3 value: 16.413 - type: recall_at_5 value: 19.980999999999998 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 51.519999999999996 - type: f1 value: 46.36601294761231 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 74.413 - type: map_at_10 value: 83.414 - type: map_at_100 value: 83.621 - type: map_at_1000 value: 83.635 - type: map_at_3 value: 82.337 - type: map_at_5 value: 83.039 - type: mrr_at_1 value: 80.19800000000001 - type: mrr_at_10 value: 87.715 - type: mrr_at_100 value: 87.778 - type: mrr_at_1000 value: 87.779 - type: mrr_at_3 value: 87.106 - type: mrr_at_5 value: 87.555 - type: ndcg_at_1 value: 80.19800000000001 - type: ndcg_at_10 value: 87.182 - type: ndcg_at_100 value: 87.90299999999999 - type: ndcg_at_1000 value: 88.143 - type: ndcg_at_3 value: 85.60600000000001 - type: ndcg_at_5 value: 86.541 - type: precision_at_1 value: 80.19800000000001 - type: precision_at_10 value: 10.531 - type: precision_at_100 value: 1.113 - type: precision_at_1000 value: 0.11499999999999999 - type: precision_at_3 value: 32.933 - type: precision_at_5 value: 20.429 - type: recall_at_1 value: 74.413 - type: recall_at_10 value: 94.363 - type: recall_at_100 value: 97.165 - type: recall_at_1000 value: 98.668 - type: recall_at_3 value: 90.108 - type: recall_at_5 value: 92.52 - task: type: Retrieval dataset: type: fiqa name: MTEB FiQA2018 config: default split: test revision: 
None metrics: - type: map_at_1 value: 22.701 - type: map_at_10 value: 37.122 - type: map_at_100 value: 39.178000000000004 - type: map_at_1000 value: 39.326 - type: map_at_3 value: 32.971000000000004 - type: map_at_5 value: 35.332 - type: mrr_at_1 value: 44.753 - type: mrr_at_10 value: 53.452 - type: mrr_at_100 value: 54.198 - type: mrr_at_1000 value: 54.225 - type: mrr_at_3 value: 50.952 - type: mrr_at_5 value: 52.464 - type: ndcg_at_1 value: 44.753 - type: ndcg_at_10 value: 45.021 - type: ndcg_at_100 value: 52.028 - type: ndcg_at_1000 value: 54.596000000000004 - type: ndcg_at_3 value: 41.622 - type: ndcg_at_5 value: 42.736000000000004 - type: precision_at_1 value: 44.753 - type: precision_at_10 value: 12.284 - type: precision_at_100 value: 1.955 - type: precision_at_1000 value: 0.243 - type: precision_at_3 value: 27.828999999999997 - type: precision_at_5 value: 20.061999999999998 - type: recall_at_1 value: 22.701 - type: recall_at_10 value: 51.432 - type: recall_at_100 value: 77.009 - type: recall_at_1000 value: 92.511 - type: recall_at_3 value: 37.919000000000004 - type: recall_at_5 value: 44.131 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 40.189 - type: map_at_10 value: 66.24600000000001 - type: map_at_100 value: 67.098 - type: map_at_1000 value: 67.149 - type: map_at_3 value: 62.684 - type: map_at_5 value: 64.974 - type: mrr_at_1 value: 80.378 - type: mrr_at_10 value: 86.127 - type: mrr_at_100 value: 86.29299999999999 - type: mrr_at_1000 value: 86.297 - type: mrr_at_3 value: 85.31400000000001 - type: mrr_at_5 value: 85.858 - type: ndcg_at_1 value: 80.378 - type: ndcg_at_10 value: 74.101 - type: ndcg_at_100 value: 76.993 - type: ndcg_at_1000 value: 77.948 - type: ndcg_at_3 value: 69.232 - type: ndcg_at_5 value: 72.04599999999999 - type: precision_at_1 value: 80.378 - type: precision_at_10 value: 15.595999999999998 - type: precision_at_100 value: 1.7840000000000003 - type: precision_at_1000 value: 0.191 - type: precision_at_3 value: 44.884 - type: precision_at_5 value: 29.145 - type: recall_at_1 value: 40.189 - type: recall_at_10 value: 77.981 - type: recall_at_100 value: 89.21 - type: recall_at_1000 value: 95.48299999999999 - type: recall_at_3 value: 67.326 - type: recall_at_5 value: 72.863 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 92.84599999999999 - type: ap value: 89.4710787567357 - type: f1 value: 92.83752676932258 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 23.132 - type: map_at_10 value: 35.543 - type: map_at_100 value: 36.702 - type: map_at_1000 value: 36.748999999999995 - type: map_at_3 value: 31.737 - type: map_at_5 value: 33.927 - type: mrr_at_1 value: 23.782 - type: mrr_at_10 value: 36.204 - type: mrr_at_100 value: 37.29 - type: mrr_at_1000 value: 37.330999999999996 - type: mrr_at_3 value: 32.458999999999996 - type: mrr_at_5 value: 34.631 - type: ndcg_at_1 value: 23.782 - type: ndcg_at_10 value: 42.492999999999995 - type: ndcg_at_100 value: 47.985 - type: ndcg_at_1000 value: 49.141 - type: ndcg_at_3 value: 34.748000000000005 - type: ndcg_at_5 value: 38.651 - type: precision_at_1 value: 23.782 - type: precision_at_10 value: 6.665 - type: precision_at_100 value: 0.941 - type: precision_at_1000 value: 0.104 - type: precision_at_3 
value: 14.776 - type: precision_at_5 value: 10.84 - type: recall_at_1 value: 23.132 - type: recall_at_10 value: 63.794 - type: recall_at_100 value: 89.027 - type: recall_at_1000 value: 97.807 - type: recall_at_3 value: 42.765 - type: recall_at_5 value: 52.11 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 94.59188326493388 - type: f1 value: 94.3842594786827 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 79.49384404924761 - type: f1 value: 59.7580539534629 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 77.56220578345663 - type: f1 value: 75.27228165561478 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 80.53463349024884 - type: f1 value: 80.4893958236536 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 32.56100273484962 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 31.470380028839607 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 32.06102792457849 - type: mrr value: 33.30709199672238 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 6.776999999999999 - type: map_at_10 value: 14.924000000000001 - type: map_at_100 value: 18.955 - type: map_at_1000 value: 20.538999999999998 - type: map_at_3 value: 10.982 - type: map_at_5 value: 12.679000000000002 - type: mrr_at_1 value: 47.988 - type: mrr_at_10 value: 57.232000000000006 - type: mrr_at_100 value: 57.818999999999996 - type: mrr_at_1000 value: 57.847 - type: mrr_at_3 value: 54.901999999999994 - type: mrr_at_5 value: 56.481 - type: ndcg_at_1 value: 46.594 - type: ndcg_at_10 value: 38.129000000000005 - type: ndcg_at_100 value: 35.54 - type: ndcg_at_1000 value: 44.172 - type: ndcg_at_3 value: 43.025999999999996 - type: ndcg_at_5 value: 41.052 - type: precision_at_1 value: 47.988 - type: precision_at_10 value: 28.111000000000004 - type: precision_at_100 value: 8.929 - type: precision_at_1000 value: 2.185 - type: precision_at_3 value: 40.144000000000005 - type: precision_at_5 value: 35.232 - type: recall_at_1 value: 6.776999999999999 - type: recall_at_10 value: 19.289 - type: recall_at_100 value: 36.359 - type: recall_at_1000 value: 67.54 - type: recall_at_3 value: 11.869 - type: recall_at_5 value: 14.999 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 31.108000000000004 - type: map_at_10 value: 47.126000000000005 - type: map_at_100 value: 48.171 
- type: map_at_1000 value: 48.199 - type: map_at_3 value: 42.734 - type: map_at_5 value: 45.362 - type: mrr_at_1 value: 34.936 - type: mrr_at_10 value: 49.571 - type: mrr_at_100 value: 50.345 - type: mrr_at_1000 value: 50.363 - type: mrr_at_3 value: 45.959 - type: mrr_at_5 value: 48.165 - type: ndcg_at_1 value: 34.936 - type: ndcg_at_10 value: 55.028999999999996 - type: ndcg_at_100 value: 59.244 - type: ndcg_at_1000 value: 59.861 - type: ndcg_at_3 value: 46.872 - type: ndcg_at_5 value: 51.217999999999996 - type: precision_at_1 value: 34.936 - type: precision_at_10 value: 9.099 - type: precision_at_100 value: 1.145 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 21.456 - type: precision_at_5 value: 15.411 - type: recall_at_1 value: 31.108000000000004 - type: recall_at_10 value: 76.53999999999999 - type: recall_at_100 value: 94.39 - type: recall_at_1000 value: 98.947 - type: recall_at_3 value: 55.572 - type: recall_at_5 value: 65.525 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 71.56400000000001 - type: map_at_10 value: 85.482 - type: map_at_100 value: 86.114 - type: map_at_1000 value: 86.13 - type: map_at_3 value: 82.607 - type: map_at_5 value: 84.405 - type: mrr_at_1 value: 82.42 - type: mrr_at_10 value: 88.304 - type: mrr_at_100 value: 88.399 - type: mrr_at_1000 value: 88.399 - type: mrr_at_3 value: 87.37 - type: mrr_at_5 value: 88.024 - type: ndcg_at_1 value: 82.45 - type: ndcg_at_10 value: 89.06500000000001 - type: ndcg_at_100 value: 90.232 - type: ndcg_at_1000 value: 90.305 - type: ndcg_at_3 value: 86.375 - type: ndcg_at_5 value: 87.85300000000001 - type: precision_at_1 value: 82.45 - type: precision_at_10 value: 13.486999999999998 - type: precision_at_100 value: 1.534 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.813 - type: precision_at_5 value: 24.773999999999997 - type: recall_at_1 value: 71.56400000000001 - type: recall_at_10 value: 95.812 - type: recall_at_100 value: 99.7 - type: recall_at_1000 value: 99.979 - type: recall_at_3 value: 87.966 - type: recall_at_5 value: 92.268 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 57.241876648614145 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 64.66212576446223 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 5.308 - type: map_at_10 value: 13.803 - type: map_at_100 value: 16.176 - type: map_at_1000 value: 16.561 - type: map_at_3 value: 9.761000000000001 - type: map_at_5 value: 11.802 - type: mrr_at_1 value: 26.200000000000003 - type: mrr_at_10 value: 37.621 - type: mrr_at_100 value: 38.767 - type: mrr_at_1000 value: 38.815 - type: mrr_at_3 value: 34.117 - type: mrr_at_5 value: 36.107 - type: ndcg_at_1 value: 26.200000000000003 - type: ndcg_at_10 value: 22.64 - type: ndcg_at_100 value: 31.567 - type: ndcg_at_1000 value: 37.623 - type: ndcg_at_3 value: 21.435000000000002 - type: ndcg_at_5 value: 18.87 - type: precision_at_1 value: 26.200000000000003 - type: precision_at_10 value: 11.74 - type: precision_at_100 value: 2.465 - type: precision_at_1000 value: 0.391 - type: precision_at_3 
value: 20.033 - type: precision_at_5 value: 16.64 - type: recall_at_1 value: 5.308 - type: recall_at_10 value: 23.794999999999998 - type: recall_at_100 value: 50.015 - type: recall_at_1000 value: 79.283 - type: recall_at_3 value: 12.178 - type: recall_at_5 value: 16.882 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 84.93231134675553 - type: cos_sim_spearman value: 81.68319292603205 - type: euclidean_pearson value: 81.8396814380367 - type: euclidean_spearman value: 81.24641903349945 - type: manhattan_pearson value: 81.84698799204274 - type: manhattan_spearman value: 81.24269997904105 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 86.73241671587446 - type: cos_sim_spearman value: 79.05091082971826 - type: euclidean_pearson value: 83.91146869578044 - type: euclidean_spearman value: 79.87978465370936 - type: manhattan_pearson value: 83.90888338917678 - type: manhattan_spearman value: 79.87482848584241 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 85.14970731146177 - type: cos_sim_spearman value: 86.37363490084627 - type: euclidean_pearson value: 83.02154218530433 - type: euclidean_spearman value: 83.80258761957367 - type: manhattan_pearson value: 83.01664495119347 - type: manhattan_spearman value: 83.77567458007952 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 83.40474139886784 - type: cos_sim_spearman value: 82.77768789165984 - type: euclidean_pearson value: 80.7065877443695 - type: euclidean_spearman value: 81.375940662505 - type: manhattan_pearson value: 80.6507552270278 - type: manhattan_spearman value: 81.32782179098741 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 87.08585968722274 - type: cos_sim_spearman value: 88.03110031451399 - type: euclidean_pearson value: 85.74012019602384 - type: euclidean_spearman value: 86.13592849438209 - type: manhattan_pearson value: 85.74404842369206 - type: manhattan_spearman value: 86.14492318960154 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 84.95069052788875 - type: cos_sim_spearman value: 86.4867991595147 - type: euclidean_pearson value: 84.31013325754635 - type: euclidean_spearman value: 85.01529258006482 - type: manhattan_pearson value: 84.26995570085374 - type: manhattan_spearman value: 84.96982104986162 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.54617647971897 - type: cos_sim_spearman value: 87.49834181751034 - type: euclidean_pearson value: 86.01015322577122 - type: euclidean_spearman value: 84.63362652063199 - type: manhattan_pearson value: 86.13807574475706 - type: manhattan_spearman value: 84.7772370721132 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts 
name: MTEB STS22 (en) config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 67.20047755786615 - type: cos_sim_spearman value: 67.05324077987636 - type: euclidean_pearson value: 66.91930642976601 - type: euclidean_spearman value: 65.21491856099105 - type: manhattan_pearson value: 66.78756851976624 - type: manhattan_spearman value: 65.12356257740728 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 86.19852871539686 - type: cos_sim_spearman value: 87.5161895296395 - type: euclidean_pearson value: 84.59848645207485 - type: euclidean_spearman value: 85.26427328757919 - type: manhattan_pearson value: 84.59747366996524 - type: manhattan_spearman value: 85.24045855146915 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 87.63320317811032 - type: mrr value: 96.26242947321379 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 60.928000000000004 - type: map_at_10 value: 70.112 - type: map_at_100 value: 70.59299999999999 - type: map_at_1000 value: 70.623 - type: map_at_3 value: 66.846 - type: map_at_5 value: 68.447 - type: mrr_at_1 value: 64.0 - type: mrr_at_10 value: 71.212 - type: mrr_at_100 value: 71.616 - type: mrr_at_1000 value: 71.64500000000001 - type: mrr_at_3 value: 68.77799999999999 - type: mrr_at_5 value: 70.094 - type: ndcg_at_1 value: 64.0 - type: ndcg_at_10 value: 74.607 - type: ndcg_at_100 value: 76.416 - type: ndcg_at_1000 value: 77.102 - type: ndcg_at_3 value: 69.126 - type: ndcg_at_5 value: 71.41300000000001 - type: precision_at_1 value: 64.0 - type: precision_at_10 value: 9.933 - type: precision_at_100 value: 1.077 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 26.556 - type: precision_at_5 value: 17.467 - type: recall_at_1 value: 60.928000000000004 - type: recall_at_10 value: 87.322 - type: recall_at_100 value: 94.833 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 72.628 - type: recall_at_5 value: 78.428 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.86237623762376 - type: cos_sim_ap value: 96.72586477206649 - type: cos_sim_f1 value: 93.01858362631845 - type: cos_sim_precision value: 93.4409687184662 - type: cos_sim_recall value: 92.60000000000001 - type: dot_accuracy value: 99.78019801980199 - type: dot_ap value: 93.72748205246228 - type: dot_f1 value: 89.04109589041096 - type: dot_precision value: 87.16475095785441 - type: dot_recall value: 91.0 - type: euclidean_accuracy value: 99.85445544554456 - type: euclidean_ap value: 96.6661459876145 - type: euclidean_f1 value: 92.58337481333997 - type: euclidean_precision value: 92.17046580773042 - type: euclidean_recall value: 93.0 - type: manhattan_accuracy value: 99.85445544554456 - type: manhattan_ap value: 96.6883549244056 - type: manhattan_f1 value: 92.57598405580468 - type: manhattan_precision value: 92.25422045680239 - type: manhattan_recall value: 92.9 - type: max_accuracy value: 99.86237623762376 - type: max_ap value: 96.72586477206649 - 
type: max_f1 value: 93.01858362631845 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 66.39930057069995 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 34.96398659903402 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 55.946944700355395 - type: mrr value: 56.97151398438164 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.541657650692905 - type: cos_sim_spearman value: 31.605804192286303 - type: dot_pearson value: 28.26905996736398 - type: dot_spearman value: 27.864801765851187 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.22599999999999998 - type: map_at_10 value: 1.8870000000000002 - type: map_at_100 value: 9.78 - type: map_at_1000 value: 22.514 - type: map_at_3 value: 0.6669999999999999 - type: map_at_5 value: 1.077 - type: mrr_at_1 value: 82.0 - type: mrr_at_10 value: 89.86699999999999 - type: mrr_at_100 value: 89.86699999999999 - type: mrr_at_1000 value: 89.86699999999999 - type: mrr_at_3 value: 89.667 - type: mrr_at_5 value: 89.667 - type: ndcg_at_1 value: 79.0 - type: ndcg_at_10 value: 74.818 - type: ndcg_at_100 value: 53.715999999999994 - type: ndcg_at_1000 value: 47.082 - type: ndcg_at_3 value: 82.134 - type: ndcg_at_5 value: 79.81899999999999 - type: precision_at_1 value: 82.0 - type: precision_at_10 value: 78.0 - type: precision_at_100 value: 54.48 - type: precision_at_1000 value: 20.518 - type: precision_at_3 value: 87.333 - type: precision_at_5 value: 85.2 - type: recall_at_1 value: 0.22599999999999998 - type: recall_at_10 value: 2.072 - type: recall_at_100 value: 13.013 - type: recall_at_1000 value: 43.462 - type: recall_at_3 value: 0.695 - type: recall_at_5 value: 1.139 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.328 - type: map_at_10 value: 9.795 - type: map_at_100 value: 15.801000000000002 - type: map_at_1000 value: 17.23 - type: map_at_3 value: 4.734 - type: map_at_5 value: 6.644 - type: mrr_at_1 value: 30.612000000000002 - type: mrr_at_10 value: 46.902 - type: mrr_at_100 value: 47.495 - type: mrr_at_1000 value: 47.495 - type: mrr_at_3 value: 41.156 - type: mrr_at_5 value: 44.218 - type: ndcg_at_1 value: 28.571 - type: ndcg_at_10 value: 24.806 - type: ndcg_at_100 value: 36.419000000000004 - type: ndcg_at_1000 value: 47.272999999999996 - type: ndcg_at_3 value: 25.666 - type: ndcg_at_5 value: 25.448999999999998 - type: precision_at_1 value: 30.612000000000002 - type: precision_at_10 value: 23.061 - type: precision_at_100 value: 7.714 - type: precision_at_1000 value: 1.484 - type: precision_at_3 value: 26.531 - type: precision_at_5 value: 26.122 - type: recall_at_1 value: 2.328 - type: recall_at_10 value: 16.524 - type: recall_at_100 value: 47.179 - type: recall_at_1000 value: 81.22200000000001 - type: 
recall_at_3 value: 5.745 - type: recall_at_5 value: 9.339 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 70.9142 - type: ap value: 14.335574772555415 - type: f1 value: 54.62839595194111 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 59.94340690435768 - type: f1 value: 60.286487936731916 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 51.26597708987974 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 87.48882398521786 - type: cos_sim_ap value: 79.04326607602204 - type: cos_sim_f1 value: 71.64566826860633 - type: cos_sim_precision value: 70.55512918905092 - type: cos_sim_recall value: 72.77044854881267 - type: dot_accuracy value: 84.19264469213805 - type: dot_ap value: 67.96360043562528 - type: dot_f1 value: 64.06418393006827 - type: dot_precision value: 58.64941898706424 - type: dot_recall value: 70.58047493403694 - type: euclidean_accuracy value: 87.45902127913214 - type: euclidean_ap value: 78.9742237648272 - type: euclidean_f1 value: 71.5553235908142 - type: euclidean_precision value: 70.77955601445535 - type: euclidean_recall value: 72.34828496042216 - type: manhattan_accuracy value: 87.41729749061214 - type: manhattan_ap value: 78.90073137580596 - type: manhattan_f1 value: 71.3942611553533 - type: manhattan_precision value: 68.52705653967483 - type: manhattan_recall value: 74.51187335092348 - type: max_accuracy value: 87.48882398521786 - type: max_ap value: 79.04326607602204 - type: max_f1 value: 71.64566826860633 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.68125897465751 - type: cos_sim_ap value: 85.6003454431979 - type: cos_sim_f1 value: 77.6957163958641 - type: cos_sim_precision value: 73.0110366307807 - type: cos_sim_recall value: 83.02279026793964 - type: dot_accuracy value: 87.7672992587418 - type: dot_ap value: 82.4971301112899 - type: dot_f1 value: 75.90528233151184 - type: dot_precision value: 72.0370626469368 - type: dot_recall value: 80.21250384970742 - type: euclidean_accuracy value: 88.4503434625684 - type: euclidean_ap value: 84.91949884748384 - type: euclidean_f1 value: 76.92365018444684 - type: euclidean_precision value: 74.53245721712759 - type: euclidean_recall value: 79.47336002463813 - type: manhattan_accuracy value: 88.47556952691427 - type: manhattan_ap value: 84.8963689101517 - type: manhattan_f1 value: 76.85901249256395 - type: manhattan_precision value: 74.31693989071039 - type: manhattan_recall value: 79.58115183246073 - type: max_accuracy value: 88.68125897465751 - type: max_ap value: 85.6003454431979 - type: max_f1 value: 77.6957163958641 license: mit language: - en --- <h1 align="center">FlagEmbedding</h1> <h4 align="center"> <p> <a 
href=#model-list>Model List</a> |
<a href=#frequently-asked-questions>FAQ</a> |
<a href=#usage>Usage</a> |
<a href="#evaluation">Evaluation</a> |
<a href="#train">Train</a> |
<a href="#contact">Contact</a> |
<a href="#citation">Citation</a> |
<a href="#license">License</a>
<p>
</h4>

For more details, please refer to our GitHub repository: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding).

If you are looking for a model that supports more languages, longer texts, and other retrieval methods, you can try [bge-m3](https://huggingface.co/BAAI/bge-m3).

[English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md)

FlagEmbedding focuses on retrieval-augmented LLMs and currently consists of the following projects:

- **Long-Context LLM**: [Activation Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon)
- **Fine-tuning of LM**: [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail)
- **Dense Retrieval**: [BGE-M3](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3), [LLM Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), [BGE Embedding](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/baai_general_embedding)
- **Reranker Model**: [BGE Reranker](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker)
- **Benchmark**: [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB)

## News

- 1/30/2024: Release **BGE-M3**, a new member of the BGE model series! M3 stands for **M**ulti-Linguality (100+ languages), **M**ulti-Granularity (input length up to 8192), and **M**ulti-Functionality (unification of dense, lexical, and multi-vector/ColBERT retrieval). It is the first embedding model that supports all three retrieval methods, achieving new SOTA on multi-lingual (MIRACL) and cross-lingual (MKQA) benchmarks. [Technical Report](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/BGE_M3/BGE_M3.pdf) and [Code](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3). :fire:
- 1/9/2024: Release [Activation-Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon), an effective, efficient, compatible, and low-cost (in training) method to extend the context length of LLMs. [Technical Report](https://arxiv.org/abs/2401.03462) :fire:
- 12/24/2023: Release **LLaRA**, a LLaMA-7B-based dense retriever that achieves state-of-the-art performance on MS MARCO and BEIR. The model and code will be open-sourced; please stay tuned. [Technical Report](https://arxiv.org/abs/2312.15503) :fire:
- 11/23/2023: Release [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail), a method to maintain general capabilities during fine-tuning by merging multiple language models. [Technical Report](https://arxiv.org/abs/2311.13534) :fire:
- 10/12/2023: Release [LLM-Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), a unified embedding model to support diverse retrieval-augmentation needs for LLMs. [Technical Report](https://arxiv.org/pdf/2310.07554.pdf)
- 09/15/2023: The [technical report](https://arxiv.org/pdf/2309.07597.pdf) and [massive training data](https://data.baai.ac.cn/details/BAAI-MTP) of BGE have been released.
- 09/12/2023: New models:
  - **New reranker model**: release cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than the embedding models.
    We recommend using or fine-tuning them to re-rank the top-k documents returned by embedding models.
  - **Updated embedding model**: release the `bge-*-v1.5` embedding models to alleviate the issue of the similarity distribution and to enhance retrieval ability without an instruction.

<details>
<summary>More</summary>
<!-- ### More -->

- 09/07/2023: Update the [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): add a script to mine hard negatives and support adding an instruction during fine-tuning.
- 08/09/2023: BGE models are integrated into **LangChain**; you can use them like [this](#using-langchain). The C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard).
- 08/05/2023: Release base-scale and small-scale models, the **best performance among models of the same size** 🤗
- 08/02/2023: Release the `bge-large-*` (short for BAAI General Embedding) models, which **rank 1st on the MTEB and C-MTEB benchmarks!** :tada: :tada:
- 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test datasets.

</details>

## Model List

`bge` is short for `BAAI general embedding`.

| Model | Language | Inference / Fine-tune | Description | Query instruction for retrieval [1] |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|
| [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) | Multilingual | [Inference](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3#usage) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3) | Multi-Functionality (dense retrieval, sparse retrieval, multi-vector/ColBERT), Multi-Linguality, and Multi-Granularity (8192 tokens) | |
| [BAAI/llm-embedder](https://huggingface.co/BAAI/llm-embedder) | English | [Inference](./FlagEmbedding/llm_embedder/README.md) [Fine-tune](./FlagEmbedding/llm_embedder/README.md) | a unified embedding model to support diverse retrieval-augmentation needs for LLMs | See [README](./FlagEmbedding/llm_embedder/README.md) |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model that is more accurate but less efficient [2] | |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model that is more accurate but less efficient [2] | |
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with a more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with a more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with a more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with a more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with a more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with a more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** on the [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model with ability similar to `bge-large-en` | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model with competitive performance | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** on the [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model with ability similar to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model with competitive performance | `为这个句子生成表示以用于检索相关文章:` |

[1\]: If you need to search for passages relevant to a query, we suggest adding the instruction to the query; in other cases, no instruction is needed: just use the original query directly. In all cases, **no instruction** needs to be added to passages.

[2\]: Unlike an embedding model, a reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding. To balance accuracy and time cost, a cross-encoder is widely used to re-rank the top-k documents retrieved by other, simpler models.
For example, use the bge embedding model to retrieve the top 100 relevant documents, and then use the bge reranker to re-rank those 100 documents and obtain the final top-3 results.

All models have been uploaded to the Hugging Face Hub, and you can see them at https://huggingface.co/BAAI. If you cannot open the Hugging Face Hub, you can also download the models at https://model.baai.ac.cn/models .

## Frequently asked questions

<details>
<summary>1. How to fine-tune the bge embedding model?</summary>

<!-- ### How to fine-tune bge embedding model? -->

Follow this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) to prepare data and fine-tune your model. Some suggestions:

- Mine hard negatives following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune#hard-negatives), which can improve retrieval performance.
- If you pre-train bge on your data, the pre-trained model cannot be used to calculate similarity directly; it must be fine-tuned with contrastive learning before computing similarity.
- If the accuracy of the fine-tuned model is still not high, we recommend using/fine-tuning the cross-encoder model (bge-reranker) to re-rank the top-k results. Hard negatives are also needed to fine-tune the reranker.

</details>

<details>
<summary>2. The similarity score between two dissimilar sentences is higher than 0.5</summary>

<!-- ### The similarity score between two dissimilar sentences is higher than 0.5 -->

**We suggest using bge v1.5, which alleviates the issue of the similarity distribution.**

Since we fine-tune the models with contrastive learning at a temperature of 0.01, the similarity distribution of the current BGE models lies roughly in the interval \[0.6, 1\]. So a similarity score greater than 0.5 does not indicate that the two sentences are similar.

For downstream tasks such as passage retrieval or semantic similarity, **what matters is the relative order of the scores, not the absolute value.** If you need to filter similar sentences based on a similarity threshold, please select an appropriate threshold based on the similarity distribution on your data (such as 0.8, 0.85, or even 0.9).

</details>

<details>
<summary>3. When does the query instruction need to be used?</summary>

<!-- ### When does the query instruction need to be used -->

For `bge-*-v1.5`, we improved retrieval ability when no instruction is used. Omitting the instruction causes only a slight degradation in retrieval performance compared with using it, so for convenience you can generate embeddings without the instruction in all cases.

For a retrieval task that uses short queries to find long related documents, it is recommended to add the instruction to these short queries. **The best way to decide whether to add instructions to queries is to choose the setting that achieves better performance on your task.** In all cases, the documents/passages do not need the instruction.

</details>

## Usage

### Usage for Embedding Model

Here are some examples of using the `bge` models with [FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [Langchain](#using-langchain), or [Huggingface Transformers](#using-huggingface-transformers).

#### Using FlagEmbedding

```
pip install -U FlagEmbedding
```

If this doesn't work for you, see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for more ways to install FlagEmbedding.
```python
from FlagEmbedding import FlagModel

sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = FlagModel('BAAI/bge-large-zh-v1.5',
                  query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:",
                  use_fp16=True)  # Setting use_fp16 to True speeds up computation with a slight performance degradation
embeddings_1 = model.encode(sentences_1)
embeddings_2 = model.encode(sentences_2)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)

# For an s2p (short query to long passage) retrieval task, we suggest using encode_queries(),
# which automatically adds the instruction to each query.
# The corpus in a retrieval task can still use encode() or encode_corpus(), since passages don't need the instruction.
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
q_embeddings = model.encode_queries(queries)
p_embeddings = model.encode(passages)
scores = q_embeddings @ p_embeddings.T
```

For the value of the argument `query_instruction_for_retrieval`, see the [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list).

By default, FlagModel uses all available GPUs when encoding. Please set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs. You can also set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable.

#### Using Sentence-Transformers

You can also use the `bge` models with [sentence-transformers](https://www.SBERT.net):

```
pip install -U sentence-transformers
```

```python
from sentence_transformers import SentenceTransformer

sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
embeddings_1 = model.encode(sentences_1, normalize_embeddings=True)
embeddings_2 = model.encode(sentences_2, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```

For an s2p (short query to long passage) retrieval task, each short query should start with an instruction (see the [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list) for instructions). The instruction is not needed for passages.

```python
from sentence_transformers import SentenceTransformer

queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
instruction = "为这个句子生成表示以用于检索相关文章:"

model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
q_embeddings = model.encode([instruction + q for q in queries], normalize_embeddings=True)
p_embeddings = model.encode(passages, normalize_embeddings=True)
scores = q_embeddings @ p_embeddings.T
```

#### Using Langchain

You can use `bge` in langchain like this:

```python
from langchain.embeddings import HuggingFaceBgeEmbeddings

model_name = "BAAI/bge-large-en-v1.5"
model_kwargs = {'device': 'cuda'}
encode_kwargs = {'normalize_embeddings': True}  # set True to compute cosine similarity
model = HuggingFaceBgeEmbeddings(
    model_name=model_name,
    model_kwargs=model_kwargs,
    encode_kwargs=encode_kwargs,
    query_instruction="为这个句子生成表示以用于检索相关文章:"
)
```

#### Using HuggingFace Transformers

With the transformers package, you can use the model like this: first, pass your input through the transformer model, then select the last hidden state of the first token (i.e., [CLS]) as the sentence embedding.
```python
from transformers import AutoTokenizer, AutoModel
import torch

# Sentences we want sentence embeddings for
sentences = ["样例数据-1", "样例数据-2"]

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh-v1.5')
model = AutoModel.from_pretrained('BAAI/bge-large-zh-v1.5')
model.eval()

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# For an s2p (short query to long passage) retrieval task, add an instruction to each query (but not to passages):
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
    # Perform pooling. In this case, cls pooling.
    sentence_embeddings = model_output[0][:, 0]
# normalize embeddings
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:", sentence_embeddings)
```

#### Usage of the ONNX files

```python
from optimum.onnxruntime import ORTModelForFeatureExtraction  # type: ignore
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-en-v1.5')
model = AutoModel.from_pretrained('BAAI/bge-large-en-v1.5', revision="refs/pr/13")
model_ort = ORTModelForFeatureExtraction.from_pretrained('BAAI/bge-large-en-v1.5', revision="refs/pr/13", file_name="onnx/model.onnx")

# Sentences we want sentence embeddings for
sentences = ["样例数据-1", "样例数据-2"]

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# For an s2p (short query to long passage) retrieval task, add an instruction to each query (but not to passages):
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')

model_output_ort = model_ort(**encoded_input)
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# model_output and model_output_ort are identical
```

It's also possible to deploy the ONNX files with the [infinity_emb](https://github.com/michaelfeil/infinity) pip package.

```python
import asyncio
from infinity_emb import AsyncEmbeddingEngine, EngineArgs

sentences = ["Embed this is sentence via Infinity.", "Paris is in France."]
engine = AsyncEmbeddingEngine.from_args(
    EngineArgs(model_name_or_path="BAAI/bge-large-en-v1.5",
               device="cpu",
               engine="optimum"  # or engine="torch"
              ))

async def main():
    async with engine:
        embeddings, usage = await engine.embed(sentences=sentences)
asyncio.run(main())
```

### Usage for Reranker

Different from an embedding model, a reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding. You can get a relevance score by feeding a query and a passage to the reranker. The reranker is optimized with a cross-entropy loss, so the relevance score is not bounded to a specific range.
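If a bounded score is more convenient for thresholding, a common trick (not part of the library itself) is to map the raw logit through a sigmoid; a minimal sketch:

```python
import math

def sigmoid_normalize(raw_score: float) -> float:
    # squash the unbounded reranker logit into (0, 1)
    return 1.0 / (1.0 + math.exp(-raw_score))

print(sigmoid_normalize(2.37))  # ≈ 0.914
```

As with the embedding models, what matters for re-ranking is the relative order of the scores, so this is purely a convenience for choosing thresholds.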
#### Using FlagEmbedding

```
pip install -U FlagEmbedding
```

Get relevance scores (higher scores indicate more relevance):

```python
from FlagEmbedding import FlagReranker

reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True)  # Setting use_fp16 to True speeds up computation with a slight performance degradation

score = reranker.compute_score(['query', 'passage'])
print(score)

scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']])
print(scores)
```

#### Using Huggingface transformers

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large')
model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large')
model.eval()

pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]
with torch.no_grad():
    inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
    scores = model(**inputs, return_dict=True).logits.view(-1, ).float()
    print(scores)
```

## Evaluation

`baai-general-embedding` models achieve **state-of-the-art performance on both the MTEB and C-MTEB leaderboards!**
For more details and evaluation tools, see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md).

- **MTEB**:

| Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) | Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | **64.23** | **54.29** | 46.08 | 87.12 | 60.03 | 83.11 | 31.61 | 75.97 |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | 63.55 | 53.25 | 45.77 | 86.55 | 58.86 | 82.4 | 31.07 | 75.53 |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | 62.17 | 51.68 | 43.82 | 84.92 | 58.36 | 81.59 | 30.12 | 74.14 |
| [bge-large-en](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | 63.98 | 53.9 | 46.98 | 85.8 | 59.48 | 81.56 | 32.06 | 76.21 |
| [bge-base-en](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 |
| [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 |
| [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024 | 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 |
| [bge-small-en](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 |
| [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 |
| [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 |
| [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-small-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 |
| [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 |
| [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 |
| [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 |

- **C-MTEB**:
We created the benchmark C-MTEB for Chinese text embedding, which consists of 31 datasets across 6 tasks. Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction.

| Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| [**BAAI/bge-large-zh-v1.5**](https://huggingface.co/BAAI/bge-large-zh-v1.5) | 1024 | **64.53** | 70.46 | 56.25 | 81.6 | 69.13 | 65.84 | 48.99 |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | 768 | 63.13 | 69.49 | 53.72 | 79.75 | 68.07 | 65.39 | 47.53 |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | 512 | 57.82 | 61.77 | 49.11 | 70.41 | 63.96 | 60.92 | 44.18 |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | 1024 | 64.20 | 71.53 | 54.98 | 78.94 | 68.32 | 65.11 | 48.39 |
| [bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 53 | 76.77 | 68.58 | 64.91 | 50.01 |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 54.12 | 77.5 | 67.07 | 64.91 | 47.63 |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 58.79 | 63.66 | 48.44 | 69.89 | 67.34 | 56.00 | 48.23 |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 49.45 | 70.35 | 63.64 | 61.48 | 45.09 |
| [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 | 56.91 | 50.47 | 63.99 | 67.52 | 59.34 | 47.68 |
| [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 | 54.75 | 50.42 | 64.3 | 68.2 | 59.66 | 48.88 |
| [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 55.48 | 61.63 | 46.49 | 67.07 | 65.35 | 54.35 | 40.68 |
| [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 55.38 | 59.95 | 45.27 | 66.45 | 65.85 | 53.86 | 45.26 |
| [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 43.35 | 69.56 | 64.31 | 54.28 | 45.68 |
| [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 42.78 | 66.62 | 61 | 49.25 | 44.39 |
| [text2vec-base](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 43.41 | 67.41 | 62.19 | 49.45 | 37.66 |
| [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 44.97 | 70.86 | 60.66 | 49.16 | 30.02 |

- **Reranking**:
See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for the evaluation script.

| Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| text2vec-base-multilingual | 64.66 | 62.94 | 62.51 | 14.37 | 48.46 | 48.6 | 50.26 |
| multilingual-e5-small | 65.62 | 60.94 | 56.41 | 29.91 | 67.26 | 66.54 | 57.78 |
| multilingual-e5-large | 64.55 | 61.61 | 54.28 | 28.6 | 67.42 | 67.92 | 57.4 |
| multilingual-e5-base | 64.21 | 62.13 | 54.68 | 29.5 | 66.23 | 66.98 | 57.29 |
| m3e-base | 66.03 | 62.74 | 56.07 | 17.51 | 77.05 | 76.76 | 59.36 |
| m3e-large | 66.13 | 62.72 | 56.1 | 16.46 | 77.76 | 78.27 | 59.57 |
| bge-base-zh-v1.5 | 66.49 | 63.25 | 57.02 | 29.74 | 80.47 | 84.88 | 63.64 |
| bge-large-zh-v1.5 | 65.74 | 63.39 | 57.03 | 28.74 | 83.45 | 85.44 | 63.97 |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 67.28 | 63.95 | 60.45 | 35.46 | 81.26 | 84.1 | 65.42 |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 67.6 | 64.03 | 61.44 | 37.16 | 82.15 | 84.18 | 66.09 |

\* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks.

## Train

### BAAI Embedding

We pre-train the models using [RetroMAE](https://github.com/staoxiao/RetroMAE) and train them on large-scale paired data using contrastive learning.
**You can fine-tune the embedding model on your data following our [examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).**
We also provide a [pre-train example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain).
Note that the goal of pre-training is to reconstruct the text; the pre-trained model cannot be used for similarity calculation directly and needs to be fine-tuned first.
For more training details for bge, see [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md).

### BGE Reranker

A cross-encoder performs full attention over the input pair, which is more accurate than an embedding model (i.e., a bi-encoder) but more time-consuming. Therefore, it can be used to re-rank the top-k documents returned by an embedding model.
We train the cross-encoder on multilingual paired data. The data format is the same as for the embedding model, so you can fine-tune it easily following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker).
For more details, please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker).

## Contact

If you have any questions or suggestions related to this project, feel free to open an issue or pull request.
You can also email Shitao Xiao ([email protected]) and Zheng Liu ([email protected]).

## Citation

If you find this repository useful, please consider giving it a star :star: and a citation.

```
@misc{bge_embedding,
      title={C-Pack: Packaged Resources To Advance General Chinese Embedding},
      author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff},
      year={2023},
      eprint={2309.07597},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## License

FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
sangdeptraivcl/videomae-large-finetuned-ucf101-subset
sangdeptraivcl
2024-05-21T04:14:45Z
63
0
transformers
[ "transformers", "tensorboard", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-large", "base_model:finetune:MCG-NJU/videomae-large", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2024-05-20T16:08:19Z
--- license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-large tags: - generated_from_trainer metrics: - accuracy model-index: - name: videomae-large-finetuned-ucf101-subset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-large-finetuned-ucf101-subset This model is a fine-tuned version of [MCG-NJU/videomae-large](https://huggingface.co/MCG-NJU/videomae-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9542 - Accuracy: 0.6105 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 766 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.4296 | 0.34 | 260 | 1.1089 | 0.6105 | | 1.1275 | 1.18 | 520 | 0.9542 | 0.6105 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
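The hyperparameters above map directly onto `transformers.TrainingArguments`; a minimal sketch for readers who want to reproduce the setup (the `output_dir` name is an assumption):

```python
from transformers import TrainingArguments

# a sketch matching the hyperparameters listed above; output_dir is hypothetical
training_args = TrainingArguments(
    output_dir="videomae-large-finetuned-ucf101-subset",
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=766,
    fp16=True,  # Native AMP mixed precision
)
```

Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the optimizer default, so it needs no explicit argument.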
saaduddinM/Llama8B_test
saaduddinM
2024-05-21T04:12:19Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-21T04:12:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
SpeshulK/distilhubert-finetuned-gtzan
SpeshulK
2024-05-21T04:11:38Z
159
0
transformers
[ "transformers", "tensorboard", "safetensors", "hubert", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "base_model:ntu-spml/distilhubert", "base_model:finetune:ntu-spml/distilhubert", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
2024-05-21T02:33:36Z
--- license: apache-2.0 base_model: ntu-spml/distilhubert tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: distilhubert-finetuned-gtzan results: - task: name: Audio Classification type: audio-classification dataset: name: GTZAN type: marsyas/gtzan config: all split: train args: all metrics: - name: Accuracy type: accuracy value: 0.82 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilhubert-finetuned-gtzan This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.6837 - Accuracy: 0.82 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.9472 | 1.0 | 113 | 1.8615 | 0.53 | | 1.1807 | 2.0 | 226 | 1.2908 | 0.61 | | 1.0092 | 3.0 | 339 | 0.9620 | 0.74 | | 0.6427 | 4.0 | 452 | 0.8441 | 0.76 | | 0.5151 | 5.0 | 565 | 0.6833 | 0.8 | | 0.3319 | 6.0 | 678 | 0.6107 | 0.82 | | 0.2511 | 7.0 | 791 | 0.5891 | 0.84 | | 0.1406 | 8.0 | 904 | 0.7047 | 0.8 | | 0.1741 | 9.0 | 1017 | 0.6508 | 0.81 | | 0.0986 | 10.0 | 1130 | 0.6837 | 0.82 | ### Framework versions - Transformers 4.42.0.dev0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
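For quick inference, a minimal sketch using the `transformers` pipeline; the audio path is a placeholder:

```python
from transformers import pipeline

# load the fine-tuned genre classifier
classifier = pipeline("audio-classification", model="SpeshulK/distilhubert-finetuned-gtzan")

# "song.wav" is a hypothetical local file
predictions = classifier("song.wav")
print(predictions[0])  # top predicted genre with its score
```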
baek26/all_4293_bart-all_rl
baek26
2024-05-21T04:11:18Z
50
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
reinforcement-learning
2024-05-21T04:10:29Z
---
license: apache-2.0
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---

# TRL Model

This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text generation.

## Usage

To use this model for inference, first install the TRL library:

```bash
python -m pip install trl
```

You can then generate text as follows:

```python
from transformers import pipeline

# BART is an encoder-decoder model, so use the text2text-generation pipeline
generator = pipeline("text2text-generation", model="baek26/all_4293_bart-all_rl")
outputs = generator("Hello, my llama is cute")
```

If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:

```python
from transformers import AutoTokenizer
from trl import AutoModelForSeq2SeqLMWithValueHead

tokenizer = AutoTokenizer.from_pretrained("baek26/all_4293_bart-all_rl")
model = AutoModelForSeq2SeqLMWithValueHead.from_pretrained("baek26/all_4293_bart-all_rl")

inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
souvik0306/test_whisper_v3_finetuning_mozilla
souvik0306
2024-05-21T04:10:58Z
1
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "region:us" ]
null
2024-05-21T04:10:55Z
--- library_name: peft base_model: OpenAI/whisper-large-v3 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
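A minimal sketch for loading the adapter with PEFT, assuming it was trained on top of `openai/whisper-large-v3` as the front matter indicates:

```python
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# load the frozen base model, then attach the fine-tuned adapter
base = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v3", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base, "souvik0306/test_whisper_v3_finetuning_mozilla")
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3")
```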
MaziyarPanahi/T3qm7xpNeuralsynthesis-7B-GGUF
MaziyarPanahi
2024-05-21T04:03:54Z
51
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "merge", "mergekit", "lazymergekit", "automerger", "base_model:Kukedlc/NeuralSynthesis-7b-v0.4-slerp", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:automerger/T3qm7xpNeuralsynthesis-7B", "base_model:quantized:automerger/T3qm7xpNeuralsynthesis-7B" ]
text-generation
2024-05-21T03:35:06Z
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- merge
- mergekit
- lazymergekit
- automerger
- base_model:Kukedlc/NeuralSynthesis-7b-v0.4-slerp
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: T3qm7xpNeuralsynthesis-7B-GGUF
base_model: automerger/T3qm7xpNeuralsynthesis-7B
inference: false
model_creator: automerger
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---

# [MaziyarPanahi/T3qm7xpNeuralsynthesis-7B-GGUF](https://huggingface.co/MaziyarPanahi/T3qm7xpNeuralsynthesis-7B-GGUF)

- Model creator: [automerger](https://huggingface.co/automerger)
- Original model: [automerger/T3qm7xpNeuralsynthesis-7B](https://huggingface.co/automerger/T3qm7xpNeuralsynthesis-7B)

## Description

[MaziyarPanahi/T3qm7xpNeuralsynthesis-7B-GGUF](https://huggingface.co/MaziyarPanahi/T3qm7xpNeuralsynthesis-7B-GGUF) contains GGUF format model files for [automerger/T3qm7xpNeuralsynthesis-7B](https://huggingface.co/automerger/T3qm7xpNeuralsynthesis-7B).

### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source locally running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.

## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
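## Example usage

Since llama-cpp-python is listed above as a compatible client, here is a minimal loading sketch; the quant filename is hypothetical, so substitute whichever `.gguf` file you download from this repo:

```python
from llama_cpp import Llama

# hypothetical filename; use any quant file downloaded from this repo
llm = Llama(model_path="T3qm7xpNeuralsynthesis-7B.Q4_K_M.gguf", n_ctx=4096)
out = llm("Q: What is a model merge? A:", max_tokens=64)
print(out["choices"][0]["text"])
```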
Narednra/Meditron_llama2_7b_12k
Narednra
2024-05-21T04:03:27Z
1
0
peft
[ "peft", "safetensors", "llama", "region:us" ]
null
2024-05-17T03:50:53Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0
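Expressed as a `transformers.BitsAndBytesConfig`, the settings above look like this; a sketch for reference, not code shipped with this adapter:

```python
import torch
from transformers import BitsAndBytesConfig

# mirrors the quantization config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```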
ljnicol/Phi-3-mini-128k-instruct-Q4_0-GGUF
ljnicol
2024-05-21T04:01:56Z
11
0
null
[ "gguf", "nlp", "code", "llama-cpp", "gguf-my-repo", "text-generation", "en", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-05-21T04:01:48Z
---
language:
- en
license: mit
tags:
- nlp
- code
- llama-cpp
- gguf-my-repo
license_link: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
widget:
- messages:
  - role: user
    content: Can you provide ways to eat combinations of bananas and dragonfruits?
---

# ljnicol/Phi-3-mini-128k-instruct-Q4_0-GGUF

This model was converted to GGUF format from [`microsoft/Phi-3-mini-128k-instruct`](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo ljnicol/Phi-3-mini-128k-instruct-Q4_0-GGUF --model phi-3-mini-128k-instruct.Q4_0.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo ljnicol/Phi-3-mini-128k-instruct-Q4_0-GGUF --model phi-3-mini-128k-instruct.Q4_0.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.

```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m phi-3-mini-128k-instruct.Q4_0.gguf -n 128
```
chillies/llama3-8b-mental-health-v3
chillies
2024-05-21T04:00:27Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-21T04:00:23Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** chillies - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
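No usage snippet is included; a minimal loading sketch with Unsloth's `FastLanguageModel`, with illustrative parameters (`max_seq_length` is an assumption):

```python
from unsloth import FastLanguageModel

# illustrative settings; adjust max_seq_length to your needs
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="chillies/llama3-8b-mental-health-v3",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path
```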
andakm/mistral_7b_guanaco
andakm
2024-05-21T03:56:34Z
2
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "region:us" ]
null
2023-12-07T08:21:55Z
--- library_name: peft base_model: mistralai/Mistral-7B-v0.1 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.7.1.dev0
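A minimal sketch for loading this adapter on top of the 4-bit quantized base model, mirroring the quantization config above; the `device_map` choice is an assumption:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# reproduce the 4-bit NF4 setup listed above (double quantization was disabled here)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "andakm/mistral_7b_guanaco")
```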
wendy41/llama2-koen-ft
wendy41
2024-05-21T03:51:06Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-21T03:50:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ByeByeFlyGuy/ReinforceCartPole-v1
ByeByeFlyGuy
2024-05-21T03:45:56Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-05-21T03:45:35Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: ReinforceCartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn how to use this model and train your own, see Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
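The card above gives no usage snippet, so here is a minimal evaluation sketch for a Unit 4-style checkpoint. The `Policy` architecture and the `model.pt` filename are assumptions, not taken from the card; custom-implementation repos define their own network, so match the layers to the uploaded `state_dict` before loading.

```python
# Hedged sketch: rolling out a REINFORCE-style policy on CartPole-v1.
import gymnasium as gym
import torch
import torch.nn as nn

class Policy(nn.Module):
    """Hypothetical two-layer policy head; adjust to the uploaded checkpoint."""
    def __init__(self, state_size=4, hidden_size=16, action_size=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, action_size),
            nn.Softmax(dim=-1),
        )

    def forward(self, x):
        return self.net(x)

env = gym.make("CartPole-v1")
policy = Policy()
# policy.load_state_dict(torch.load("model.pt"))  # assumed checkpoint name

state, _ = env.reset(seed=0)
total_reward, done = 0.0, False
while not done:
    probs = policy(torch.from_numpy(state).float().unsqueeze(0))
    action = torch.argmax(probs, dim=-1).item()  # greedy action at eval time
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"episode return: {total_reward}")
```

A mean reward of 500 +/- 0, as reported in the model-index above, is CartPole-v1's maximum episode return, so a correctly loaded checkpoint should reach it consistently.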
Oronto/Shared_Code
Oronto
2024-05-21T03:37:08Z
34
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-21T03:36:34Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
adhityaprimandhika/mistral_categorization_unsloth_lora_adapter
adhityaprimandhika
2024-05-21T03:36:32Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:Kurkur99/mistral_categorization3_new_sabtu", "base_model:finetune:Kurkur99/mistral_categorization3_new_sabtu", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-21T03:36:29Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: Kurkur99/mistral_categorization3_new_sabtu --- # Uploaded model - **Developed by:** adhityaprimandhika - **License:** apache-2.0 - **Finetuned from model:** Kurkur99/mistral_categorization3_new_sabtu This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
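Because the adapter card names its base model, a hedged loading sketch is possible. This assumes the weights are PEFT-compatible (Unsloth exports usually are); the prompt is an illustrative placeholder, not taken from the card.

```python
# Hedged sketch: layering the LoRA adapter on its stated base model with PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Kurkur99/mistral_categorization3_new_sabtu"
adapter_id = "adhityaprimandhika/mistral_categorization_unsloth_lora_adapter"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach adapter weights

prompt = "Categorize this transaction: monthly electricity bill"  # placeholder
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```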
adhityaprimandhika/mistral_categorization_unsloth_q4
adhityaprimandhika
2024-05-21T03:36:26Z
76
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:Kurkur99/mistral_categorization3_new_sabtu", "base_model:quantized:Kurkur99/mistral_categorization3_new_sabtu", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-21T03:31:33Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl - sft base_model: Kurkur99/mistral_categorization3_new_sabtu --- # Uploaded model - **Developed by:** adhityaprimandhika - **License:** apache-2.0 - **Finetuned from model:** Kurkur99/mistral_categorization3_new_sabtu This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
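This repo is the already-quantized sibling of the adapter above (its tags declare 4-bit bitsandbytes weights). A minimal inference sketch follows; whether `from_pretrained` picks up the serialized quantization config automatically is an assumption noted in the comments, and the prompt is a placeholder.

```python
# Hedged sketch: inference with the pre-quantized 4-bit checkpoint.
# Requires bitsandbytes; the serialized quantization config should be
# picked up automatically by from_pretrained (assumption, not verified).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "adhityaprimandhika/mistral_categorization_unsloth_q4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Categorize this transaction: grocery shopping"  # placeholder
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```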
PhillipGuo/hp-lat-llama-genericized_diff_hp_indices-epsilon0.1-pgd_layer10_harmless-3
PhillipGuo
2024-05-21T03:34:58Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-21T03:34:55Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
PhillipGuo/hp-lat-llama-genericized_diff_hp_indices-epsilon0.1-pgd_layer10_harmless-1
PhillipGuo
2024-05-21T03:34:30Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-21T03:34:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
damgomz/ft_bs32_lr7_mlm
damgomz
2024-05-21T03:33:55Z
106
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "fill-mask", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-05-20T20:18:57Z
--- language: en tags: - fill-mask kwargs: timestamp: '2024-05-21T05:33:50' project_name: ft_bs32_lr7_mlm_emissions_tracker run_id: ea51941f-36d1-40ad-93ce-8070f11b32ff duration: 28014.593188285828 emissions: 0.0169520737389673 emissions_rate: 6.051158274915012e-07 cpu_power: 42.5 gpu_power: 0.0 ram_power: 3.75 cpu_energy: 0.3307273301712342 gpu_energy: 0 ram_energy: 0.0291816683419049 energy_consumed: 0.3599089985131388 country_name: Switzerland country_iso_code: CHE region: .nan cloud_provider: .nan cloud_region: .nan os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34 python_version: 3.10.4 codecarbon_version: 2.3.4 cpu_count: 2 cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz gpu_count: .nan gpu_model: .nan longitude: .nan latitude: .nan ram_total_size: 10 tracking_mode: machine on_cloud: N pue: 1.0 --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 28014.593188285828 | | Emissions (Co2eq in kg) | 0.0169520737389673 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 3.75 | | CPU energy (kWh) | 0.3307273301712342 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0291816683419049 | | Consumed energy (kWh) | 0.3599089985131388 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 2 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.053928091887450215 | | Emissions (Co2eq in kg) | 0.010972382332078616 | ## Note 20 May 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | damgomz/ThunBERT_bs16_lr5_MLM | | model_name | ft_bs32_lr7_mlm | | sequence_length | 400 | | num_epoch | 15 | | learning_rate | 5e-07 | | batch_size | 32 | | weight_decay | 0.0 | | warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 81450 | ## Training and Testing steps Epoch | Train Loss | Test Loss | Accuracy | Recall ---|---|---|---|--- | 0 | 0.680462 | 0.629265 | 0.695876 | 0.819018 | | 1 | 0.582688 | 0.540363 | 0.728277 | 0.802147 | | 2 | 0.504317 | 0.481331 | 0.773196 | 0.868098 | | 3 | 0.446641 | 0.430399 | 0.808542 | 0.889571 | | 4 | 0.400829 | 0.396269 | 0.817378 | 0.881902 | | 5 | 0.373893 | 0.376208 | 0.826951 | 0.881902 | | 6 | 0.354504 | 0.366698 | 0.834315 | 0.895706 | | 7 | 0.343825 | 0.356863 | 0.838733 | 0.849693 | | 8 | 0.336049 | 0.356482 | 0.844624 | 0.901840 | | 9 | 0.329104 | 0.349773 | 0.852725 | 0.892638 | | 10 | 0.323361 | 0.346467 | 0.850515 | 0.880368 | | 11 | 0.316434 | 0.344817 | 0.854934 | 0.880368 | | 12 | 0.309111 | 0.343348 | 0.857143 | 0.886503 | | 13 | 0.304864 | 0.341717 | 0.855670 | 0.878834 | | 14 | 0.299619 | 0.344598 | 0.854934 | 0.897239 |
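The damgomz card above is unusually complete (emissions, config, per-epoch metrics) but gives no usage snippet. A minimal fill-mask sketch matching the declared pipeline tag; the sentence is an arbitrary example and `[MASK]` is the standard ALBERT mask token.

```python
# Hedged sketch: querying the checkpoint through the fill-mask pipeline.
from transformers import pipeline

fill = pipeline("fill-mask", model="damgomz/ft_bs32_lr7_mlm")
for pred in fill("The model learns to recover the [MASK] token."):
    print(f"{pred['token_str']!r}  score={pred['score']:.3f}")
```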
kalytm/nous-11
kalytm
2024-05-21T03:29:46Z
212
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-04-18T14:02:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
PhillipGuo/hp-lat-llama-genericized_diff_hp_indices-epsilon10.0-pgd_layer15_harmless-1
PhillipGuo
2024-05-21T03:24:36Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-21T03:24:34Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
PhillipGuo/hp-lat-llama-genericized_diff_hp_indices-epsilon10.0-pgd_layer15_harmless-3
PhillipGuo
2024-05-21T03:24:30Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-21T03:24:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MaziyarPanahi/T3qStrangemerges_32-7B-GGUF
MaziyarPanahi
2024-05-21T03:24:01Z
55
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "merge", "mergekit", "lazymergekit", "automerger", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:automerger/T3qStrangemerges_32-7B", "base_model:quantized:automerger/T3qStrangemerges_32-7B" ]
text-generation
2024-05-21T02:54:11Z
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - merge - mergekit - lazymergekit - automerger - license:apache-2.0 - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: T3qStrangemerges_32-7B-GGUF base_model: automerger/T3qStrangemerges_32-7B inference: false model_creator: automerger pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/T3qStrangemerges_32-7B-GGUF](https://huggingface.co/MaziyarPanahi/T3qStrangemerges_32-7B-GGUF) - Model creator: [automerger](https://huggingface.co/automerger) - Original model: [automerger/T3qStrangemerges_32-7B](https://huggingface.co/automerger/T3qStrangemerges_32-7B) ## Description [MaziyarPanahi/T3qStrangemerges_32-7B-GGUF](https://huggingface.co/MaziyarPanahi/T3qStrangemerges_32-7B-GGUF) contains GGUF format model files for [automerger/T3qStrangemerges_32-7B](https://huggingface.co/automerger/T3qStrangemerges_32-7B). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux is available in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
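Of the clients listed in the card, llama-cpp-python gives the shortest scripted path to a GGUF file. The filename below is hypothetical; pick a real quant file (for example a Q4_K_M variant) from the repository's file list.

```python
# Hedged sketch: running a local GGUF quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="T3qStrangemerges_32-7B.Q4_K_M.gguf",  # assumed file name
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to the GPU when one is present
)
out = llm("Write one sentence about model merging.", max_tokens=64)
print(out["choices"][0]["text"])
```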
mika5883/pretrain_rugec
mika5883
2024-05-21T03:20:41Z
179
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:mika5883/pretrain_rugec", "base_model:finetune:mika5883/pretrain_rugec", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-03T16:12:51Z
--- base_model: mika5883/pretrain_rugec tags: - generated_from_trainer model-index: - name: pretrain_rugec results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pretrain_rugec This model is a fine-tuned version of [mika5883/pretrain_rugec](https://huggingface.co/mika5883/pretrain_rugec) on an unspecified dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - PyTorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
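The card declares text2text-generation but leaves the task unstated; the repository name suggests Russian grammatical error correction (ruGEC), so the input below, a sentence with a deliberate agreement error, is only a guess at the intended usage.

```python
# Hedged sketch: calling the fine-tuned T5 through the text2text pipeline.
from transformers import pipeline

corrector = pipeline("text2text-generation", model="mika5883/pretrain_rugec")
result = corrector("она пошли в магазин вчера", max_new_tokens=64)  # placeholder input
print(result[0]["generated_text"])
```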
yonyou-sg/nllb-zh-khmer-14k
yonyou-sg
2024-05-21T03:19:43Z
96
0
transformers
[ "transformers", "safetensors", "m2m_100", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-21T03:13:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
letgoofthepizza/Llama-3-8B-Instruct-ko-news-summary
letgoofthepizza
2024-05-21T03:01:05Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-21T02:37:03Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
PhillipGuo/hp-whp_repl-towards1_sft1_harmless-1
PhillipGuo
2024-05-21T02:49:01Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-21T02:48:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MaziyarPanahi/CalmexperimentT3qm7-7B-GGUF
MaziyarPanahi
2024-05-21T02:44:24Z
83
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "merge", "mergekit", "lazymergekit", "automerger", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:automerger/CalmexperimentT3qm7-7B", "base_model:quantized:automerger/CalmexperimentT3qm7-7B" ]
text-generation
2024-05-21T02:14:10Z
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- merge
- mergekit
- lazymergekit
- automerger
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: CalmexperimentT3qm7-7B-GGUF
base_model: automerger/CalmexperimentT3qm7-7B
inference: false
model_creator: automerger
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---

# [MaziyarPanahi/CalmexperimentT3qm7-7B-GGUF](https://huggingface.co/MaziyarPanahi/CalmexperimentT3qm7-7B-GGUF)

- Model creator: [automerger](https://huggingface.co/automerger)
- Original model: [automerger/CalmexperimentT3qm7-7B](https://huggingface.co/automerger/CalmexperimentT3qm7-7B)

## Description

[MaziyarPanahi/CalmexperimentT3qm7-7B-GGUF](https://huggingface.co/MaziyarPanahi/CalmexperimentT3qm7-7B-GGUF) contains GGUF format model files for [automerger/CalmexperimentT3qm7-7B](https://huggingface.co/automerger/CalmexperimentT3qm7-7B).

### About GGUF

GGUF is a format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server. Note that, as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.

## Special thanks 🙏

Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
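For the Python route via llama-cpp-python listed above, a minimal loading sketch follows. The quant filename is an assumption (pick the actual file from the repo's file listing), and `n_gpu_layers=-1` only offloads layers when the library is built with GPU support:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant file from the GGUF repo; the filename below is assumed,
# not confirmed from the repo -- check the repo's Files tab for real names.
model_path = hf_hub_download(
    repo_id="MaziyarPanahi/CalmexperimentT3qm7-7B-GGUF",
    filename="CalmexperimentT3qm7-7B.Q4_K_M.gguf",  # hypothetical quant name
)

# Load the model; n_gpu_layers=-1 offloads all layers if GPU support is compiled in.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

out = llm("Explain the GGUF file format in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```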
verrynatalia/open_ended_tutor
verrynatalia
2024-05-21T02:43:55Z
0
0
transformers
[ "transformers", "en", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-05-21T02:33:35Z
---
license: mit
language:
- en
library_name: transformers
---
AleRothermel/my-first-model
AleRothermel
2024-05-21T02:43:54Z
110
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-17T23:16:59Z
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: bert-base-cased
metrics:
- accuracy
model-index:
- name: my-first-model
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# my-first-model

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7528
- Accuracy: 0.59

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1521        | 1.0   | 250  | 1.0643          | 0.5225   |
| 0.8389        | 2.0   | 500  | 0.9594          | 0.59     |
| 0.5387        | 3.0   | 750  | 1.1801          | 0.58     |
| 0.2835        | 4.0   | 1000 | 1.5372          | 0.5675   |
| 0.1154        | 5.0   | 1250 | 1.7528          | 0.59     |

### Framework versions

- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
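The hyperparameters above map directly onto `transformers.TrainingArguments`. A minimal sketch, assuming a toy in-memory dataset and a hypothetical 4-class label space (the card documents neither):

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
# num_labels=4 is an assumption; the card does not state the label count.
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=4)

# Stand-in training data; the actual dataset is undocumented.
raw = Dataset.from_dict({"text": ["great movie", "terrible plot"], "label": [0, 1]})
ds = raw.map(lambda ex: tokenizer(ex["text"], truncation=True,
                                  padding="max_length", max_length=64))

args = TrainingArguments(
    output_dir="my-first-model",
    learning_rate=5e-5,            # matches the card
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",    # the Adam betas/epsilon above are the defaults
    num_train_epochs=5,
)
Trainer(model=model, args=args, train_dataset=ds, eval_dataset=ds).train()
```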
Rimyy/Gemma-2b-finetuneGSMdata1epSameP
Rimyy
2024-05-21T02:41:53Z
133
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-21T02:39:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Mxode/minicoder-7M-base
Mxode
2024-05-21T02:35:48Z
139
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-20T08:40:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
PhillipGuo/hp-lat-llama-genericized_diff_hp_indices-epsilon10.0-pgd_layer15-def_layer0-harmless-102
PhillipGuo
2024-05-21T02:31:08Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-21T02:31:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sam-2577/sft-tiny-chatbot
sam-2577
2024-05-21T02:30:00Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:apache-2.0", "region:us" ]
null
2024-05-21T02:29:40Z
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
model-index:
- name: sft-tiny-chatbot
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# sft-tiny-chatbot

This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unspecified dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 50
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
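A hedged reconstruction of the TRL SFT setup these hyperparameters imply. The training data is a stand-in, the LoRA config is assumed (the card only confirms that a PEFT adapter was trained), and `SFTTrainer` argument names shift between trl versions, so treat this as a sketch rather than the author's script:

```python
from datasets import Dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

# Stand-in data in the TinyLlama chat layout; the real dataset is undocumented.
train_ds = Dataset.from_dict(
    {"text": ["<|user|>\nHi</s>\n<|assistant|>\nHello!</s>"]})

args = TrainingArguments(
    output_dir="sft-tiny-chatbot",
    learning_rate=2e-4,             # matches the card
    per_device_train_batch_size=1,
    seed=42,
    lr_scheduler_type="cosine",
    max_steps=50,                   # "training_steps: 50" above
    fp16=True,                      # "Native AMP" in the card
)

trainer = SFTTrainer(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    args=args,
    train_dataset=train_ds,
    dataset_text_field="text",
    peft_config=LoraConfig(task_type="CAUSAL_LM"),  # adapter settings assumed
)
trainer.train()
```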