| modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-23 18:28:48) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 573 classes) | tags (list, length 1–4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-23 18:28:01) | card (string, length 11–1.01M) |
|---|---|---|---|---|---|---|---|---|---|
luaqi/sn9_large_v3_0
|
luaqi
| 2024-02-26T09:51:13Z | 164 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-26T09:50:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
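Until the authors provide an official snippet, a minimal sketch is given below, assuming the repository holds a complete LLaMA-style causal-LM checkpoint with its tokenizer (the prompt is illustrative only):

```python
# Hedged sketch: standard 🤗 Transformers causal-LM loading; not an official example.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "luaqi/sn9_large_v3_0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # requires accelerate

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```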
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CatBarks/bertES_spamming-email-classification2_1_tokenizer
|
CatBarks
| 2024-02-26T09:50:26Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-26T09:50:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
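The repository name suggests it contains only a tokenizer; under that assumption, a minimal loading sketch would be:

```python
# Hedged sketch: loads the tokenizer only (no model weights are assumed to be present).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("CatBarks/bertES_spamming-email-classification2_1_tokenizer")
print(tokenizer("Congratulations, you have won a free prize!"))
```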
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CatBarks/bertES_spamming-email-classification2_1_model
|
CatBarks
| 2024-02-26T09:50:24Z | 162 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-26T09:49:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
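Since the repository is tagged for text classification, a minimal sketch (assuming a standard BERT sequence-classification head paired with its companion tokenizer repository; the label names and their meaning are not documented) might be:

```python
# Hedged sketch: text-classification pipeline with the matching tokenizer repo.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="CatBarks/bertES_spamming-email-classification2_1_model",
    tokenizer="CatBarks/bertES_spamming-email-classification2_1_tokenizer",
)
print(classifier("Congratulations, you have won a free prize! Click the link to claim it."))
```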
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ccccccccced/mistral_7b_0226
|
ccccccccced
| 2024-02-26T09:50:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-26T06:54:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
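No pipeline tag is set; assuming the safetensors weights form a Mistral-style causal LM (as the repository name suggests), a minimal loading sketch would be:

```python
# Hedged sketch: architecture and intended prompt format are not documented.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ccccccccced/mistral_7b_0226"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # requires accelerate
```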
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Onlydrinkwater/gpt2xl_format_math_520_from_scratch
|
Onlydrinkwater
| 2024-02-26T09:50:07Z | 133 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-26T09:18:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
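A minimal sketch for a GPT-2-class text-generation checkpoint follows; the arithmetic prompt is a guess based on the repository name and may not match the training format:

```python
# Hedged sketch: standard text-generation pipeline; prompt format is an assumption.
from transformers import pipeline

generator = pipeline("text-generation", model="Onlydrinkwater/gpt2xl_format_math_520_from_scratch")
print(generator("12 + 7 =", max_new_tokens=20)[0]["generated_text"])
```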
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CatBarks/bertES_spamming-email-classification1_4_tokenizer
|
CatBarks
| 2024-02-26T09:49:06Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-26T09:49:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
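As with the other tokenizer-only repositories in this series, a minimal sketch (assuming no model weights are included) would be:

```python
# Hedged sketch: loads the tokenizer only.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("CatBarks/bertES_spamming-email-classification1_4_tokenizer")
print(tokenizer("Limited time offer, claim your reward now!"))
```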
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
uproai/ros-7b-v1
|
uproai
| 2024-02-26T09:48:43Z | 6 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"OpenPipe/mistral-ft-optimized-1227",
"NeverSleep/Noromaid-7b-v0.2",
"base_model:OpenPipe/mistral-ft-optimized-1227",
"base_model:finetune:OpenPipe/mistral-ft-optimized-1227",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-26T09:40:24Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- OpenPipe/mistral-ft-optimized-1227
- NeverSleep/Noromaid-7b-v0.2
base_model:
- OpenPipe/mistral-ft-optimized-1227
---
# ros-7b-v1
ros-7b-v1 is a merge of the following models using [Mergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* Base Model [OpenPipe/mistral-ft-optimized-1227](https://huggingface.co/OpenPipe/mistral-ft-optimized-1227)
* [cgato/Thespis-Mistral-7b-v0.6](https://huggingface.co/cgato/Thespis-Mistral-7b-v0.6)
* [saishf/West-Hermes-7B](https://huggingface.co/saishf/West-Hermes-7B)
* [NeverSleep/Noromaid-7b-v0.2](https://huggingface.co/NeverSleep/Noromaid-7b-v0.2)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: uproai/RosMistral-2x7B
layer_range: [0, 32]
- model: NeverSleep/Noromaid-7b-v0.2
layer_range: [0, 32]
merge_method: slerp
base_model: uproai/RosMistral-2x7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
# Install dependencies first: pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "uproai/ros-7b-v1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
CatBarks/bertES_spamming-email-classification1_2_tokenizer
|
CatBarks
| 2024-02-26T09:46:41Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-26T09:46:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
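Assuming this repository likewise contains only a tokenizer, a minimal loading sketch would be:

```python
# Hedged sketch: loads the tokenizer only.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("CatBarks/bertES_spamming-email-classification1_2_tokenizer")
print(tokenizer("Your account has been selected for a special bonus."))
```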
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CatBarks/bertES_spamming-email-classification1_2_model
|
CatBarks
| 2024-02-26T09:46:39Z | 162 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-26T09:45:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
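As with the other checkpoints in this series, a minimal sketch (assuming a standard BERT sequence-classification head and the companion tokenizer repository; label semantics are not documented) might be:

```python
# Hedged sketch: text-classification pipeline with the matching tokenizer repo.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="CatBarks/bertES_spamming-email-classification1_2_model",
    tokenizer="CatBarks/bertES_spamming-email-classification1_2_tokenizer",
)
print(classifier("You have been pre-approved for an exclusive loan offer."))
```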
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
maywell/TinyWand-kiqu
|
maywell
| 2024-02-26T09:46:15Z | 85 | 4 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-23T15:35:39Z |
---
license: cc-by-sa-4.0
---
This model is TinyWand-SFT trained for 3 epochs on the kiqu-samples dataset.
It was trained for the purpose of speculative decoding.
It was built with GPU resources provided by Sionic AI.
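Because the card cites speculative decoding as the goal, a minimal sketch of assisted generation with 🤗 Transformers is shown below; the target model is a placeholder (the card does not name one), and assisted generation assumes the draft and target models share a tokenizer:

```python
# Hedged sketch: this checkpoint as the small draft model for assisted (speculative) generation.
from transformers import AutoModelForCausalLM, AutoTokenizer

target_id = "your-org/your-larger-target-model"  # placeholder; not specified in the card
draft_id = "maywell/TinyWand-kiqu"

tokenizer = AutoTokenizer.from_pretrained(target_id)
target = AutoModelForCausalLM.from_pretrained(target_id, device_map="auto")
draft = AutoModelForCausalLM.from_pretrained(draft_id, device_map="auto")

inputs = tokenizer("Explain speculative decoding in one sentence.", return_tensors="pt").to(target.device)
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```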
|
Onlydrinkwater/gpt2xl_language_math_520_from_scratch
|
Onlydrinkwater
| 2024-02-26T09:45:59Z | 134 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-26T09:18:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
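A minimal sketch mirrors the sibling `gpt2xl_format_math` checkpoint; the prompt is illustrative and may not match the training format:

```python
# Hedged sketch: standard text-generation pipeline; prompt format is an assumption.
from transformers import pipeline

generator = pipeline("text-generation", model="Onlydrinkwater/gpt2xl_language_math_520_from_scratch")
print(generator("What is twelve plus seven?", max_new_tokens=20)[0]["generated_text"])
```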
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Lee-H/lol-gopt
|
Lee-H
| 2024-02-26T09:37:44Z | 0 | 0 |
peft
|
[
"peft",
"llama",
"region:us"
] | null | 2024-02-26T08:07:08Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
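For reference, a hedged sketch of how this quantization config maps onto `BitsAndBytesConfig` when reloading the adapter is shown below; the base LLaMA checkpoint is a placeholder, since the card does not name it:

```python
# Hedged sketch: reload the adapter on a 4-bit base model matching the training config.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
base = AutoModelForCausalLM.from_pretrained(
    "your-org/base-llama-model",  # placeholder; the base model is not named in the card
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "Lee-H/lol-gopt")
```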
### Framework versions
- PEFT 0.4.0
|
EMBO/SourceData_GENEPROD-ROLES_v_1-0-2_BioLinkBERT_large
|
EMBO
| 2024-02-26T09:32:42Z | 180 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:EMBO/SourceData",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-03T16:26:47Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- EMBO/SourceData
metrics:
- precision
- recall
- f1
model-index:
- name: SourceData_GENEPROD-ROLES_v_1-0-2_BioLinkBERT_large
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: source_data
type: source_data
args: ROLES_GP
metrics:
- name: Precision
type: precision
value: 0.931830031282586
- name: Recall
type: recall
value: 0.9367138364779874
- name: F1
type: f1
value: 0.9342655514898066
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SourceData_GENEPROD-ROLES_v_1-0-2_BioLinkBERT_large
This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-large](https://huggingface.co/michiyasunaga/BioLinkBERT-large) on the source_data dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0128
- Accuracy Score: 0.9955
- Precision: 0.9318
- Recall: 0.9367
- F1: 0.9343
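For readers who want to apply the model, a hedged sketch using the standard token-classification pipeline is given below (the example sentence is illustrative; entity and role labels follow the SourceData scheme):

```python
# Hedged sketch: standard token-classification pipeline; not an official example.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="EMBO/SourceData_GENEPROD-ROLES_v_1-0-2_BioLinkBERT_large",
    aggregation_strategy="simple",
)
print(ner("Depletion of BRCA1 increased phosphorylation of CHK1 in HeLa cells."))
```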
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 256
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy Score | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:---------:|:------:|:------:|
| 0.0147 | 1.0 | 942 | 0.0128 | 0.9955 | 0.9318 | 0.9367 | 0.9343 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0a0+bfe5ad2
- Datasets 2.10.1
- Tokenizers 0.12.1
|
EMBO/SourceData_GP-CHEM-ROLES_v_1-0-2_BioLinkBERT_large
|
EMBO
| 2024-02-26T09:32:16Z | 181 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:EMBO/SourceData",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-03T17:11:32Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- EMBO/SourceData
metrics:
- precision
- recall
- f1
model-index:
- name: SourceData_GP-CHEM-ROLES_v_1-0-2_BioLinkBERT_large
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: source_data
type: source_data
args: ROLES_MULTI
metrics:
- name: Precision
type: precision
value: 0.972972972972973
- name: Recall
type: recall
value: 0.9789864029666254
- name: F1
type: f1
value: 0.9759704251386322
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SourceData_GP-CHEM-ROLES_v_1-0-2_BioLinkBERT_large
This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-base](https://huggingface.co/michiyasunaga/BioLinkBERT-base) on the source_data dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0055
- Accuracy Score: 0.9985
- Precision: 0.9730
- Recall: 0.9790
- F1: 0.9760
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 32
- eval_batch_size: 256
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy Score | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:---------:|:------:|:------:|
| 0.0097 | 1.0 | 942 | 0.0055 | 0.9985 | 0.9730 | 0.9790 | 0.9760 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0a0+bfe5ad2
- Datasets 2.10.1
- Tokenizers 0.12.1
|
EMBO/SourceData_GP-CHEM-ROLES_v_2-0-2_BioLinkBERT_large
|
EMBO
| 2024-02-26T09:31:55Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:EMBO/SourceData",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-03T18:17:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- EMBO/SourceData
metrics:
- precision
- recall
- f1
model-index:
- name: SourceData_GP-CHEM-ROLES_v_2-0-2_BioLinkBERT_large
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: source_data
type: source_data
args: ROLES_MULTI
metrics:
- name: Precision
type: precision
value: 0.9667832167832168
- name: Recall
type: recall
value: 0.9765142150803461
- name: F1
type: f1
value: 0.9716243521040147
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SourceData_GP-CHEM-ROLES_v_2-0-2_BioLinkBERT_large
This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-base](https://huggingface.co/michiyasunaga/BioLinkBERT-base) on the source_data dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0058
- Accuracy Score: 0.9984
- Precision: 0.9668
- Recall: 0.9765
- F1: 0.9716
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 32
- eval_batch_size: 256
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy Score | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:---------:|:------:|:------:|
| 0.0096 | 1.0 | 942 | 0.0058 | 0.9984 | 0.9668 | 0.9765 | 0.9716 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0a0+bfe5ad2
- Datasets 2.10.1
- Tokenizers 0.12.1
|
EMBO/SourceData_GENEPROD-ROLES_v_2-0-3_BioLinkBERT_large
|
EMBO
| 2024-02-26T09:31:22Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:EMBO/SourceData",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-03T18:39:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- EMBO/SourceData
metrics:
- precision
- recall
- f1
model-index:
- name: SourceData_GENEPROD-ROLES_v_2-0-3_BioLinkBERT_large
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: source_data
type: source_data
args: ROLES_GP
metrics:
- name: Precision
type: precision
value: 0.9279830038154699
- name: Recall
type: recall
value: 0.9347921034241788
- name: F1
type: f1
value: 0.9313751087902523
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SourceData_GENEPROD-ROLES_v_2-0-3_BioLinkBERT_large
This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-large](https://huggingface.co/michiyasunaga/BioLinkBERT-large) on the source_data dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0128
- Accuracy Score: 0.9954
- Precision: 0.9280
- Recall: 0.9348
- F1: 0.9314
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 256
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy Score | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:---------:|:------:|:------:|
| 0.0147 | 1.0 | 942 | 0.0128 | 0.9954 | 0.9280 | 0.9348 | 0.9314 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0a0+bfe5ad2
- Datasets 2.10.1
- Tokenizers 0.12.1
|
EMBO/SourceData_GP-CHEM-ROLES_v_2-0-3_BioLinkBERT_large
|
EMBO
| 2024-02-26T09:30:28Z | 183 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:EMBO/SourceData",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-03T19:24:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- EMBO/SourceData
metrics:
- precision
- recall
- f1
model-index:
- name: SourceData_GP-CHEM-ROLES_v_2-0-3_BioLinkBERT_large
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: source_data
type: source_data
args: ROLES_MULTI
metrics:
- name: Precision
type: precision
value: 0.9656922807631717
- name: Recall
type: recall
value: 0.9742186120430867
- name: F1
type: f1
value: 0.9699367088607594
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SourceData_GP-CHEM-ROLES_v_2-0-3_BioLinkBERT_large
This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-base](https://huggingface.co/michiyasunaga/BioLinkBERT-base) on the source_data dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0061
- Accuracy Score: 0.9982
- Precision: 0.9657
- Recall: 0.9742
- F1: 0.9699
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 32
- eval_batch_size: 256
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy Score | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:---------:|:------:|:------:|
| 0.0099 | 1.0 | 942 | 0.0061 | 0.9982 | 0.9657 | 0.9742 | 0.9699 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0a0+bfe5ad2
- Datasets 2.10.1
- Tokenizers 0.12.1
|
Rostel/azerty
|
Rostel
| 2024-02-26T09:27:10Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2024-02-26T09:27:10Z |
---
license: bigscience-bloom-rail-1.0
---
|
harshith987/my-pet-dog
|
harshith987
| 2024-02-26T09:25:15Z | 9 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-26T09:21:22Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by harshith987 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:

|
Litzy619/V0224P6
|
Litzy619
| 2024-02-26T09:24:42Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:yahma/llama-7b-hf",
"base_model:finetune:yahma/llama-7b-hf",
"license:other",
"region:us"
] | null | 2024-02-26T00:43:46Z |
---
license: other
base_model: yahma/llama-7b-hf
tags:
- generated_from_trainer
model-index:
- name: V0224P6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0224P6
This model is a fine-tuned version of [yahma/llama-7b-hf](https://huggingface.co/yahma/llama-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7620
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 20
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.085 | 0.13 | 10 | 1.0269 |
| 0.9547 | 0.26 | 20 | 0.9054 |
| 0.8742 | 0.39 | 30 | 0.8555 |
| 0.8306 | 0.52 | 40 | 0.8292 |
| 0.8157 | 0.65 | 50 | 0.8145 |
| 0.8024 | 0.78 | 60 | 0.8040 |
| 0.7797 | 0.91 | 70 | 0.7959 |
| 0.77 | 1.04 | 80 | 0.7861 |
| 0.7427 | 1.17 | 90 | 0.7787 |
| 0.7945 | 1.3 | 100 | 0.8105 |
| 0.7863 | 1.43 | 110 | 0.7969 |
| 0.7692 | 1.55 | 120 | 0.7881 |
| 0.7529 | 1.68 | 130 | 0.7809 |
| 0.761 | 1.81 | 140 | 0.7746 |
| 0.7638 | 1.94 | 150 | 0.7692 |
| 0.7401 | 2.07 | 160 | 0.7682 |
| 0.7216 | 2.2 | 170 | 0.7656 |
| 0.7336 | 2.33 | 180 | 0.7641 |
| 0.7245 | 2.46 | 190 | 0.7634 |
| 0.7252 | 2.59 | 200 | 0.7629 |
| 0.7279 | 2.72 | 210 | 0.7622 |
| 0.7258 | 2.85 | 220 | 0.7621 |
| 0.7271 | 2.98 | 230 | 0.7620 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
sadhaklal/bert-base-cased-finetuned-conll2003-ner
|
sadhaklal
| 2024-02-26T09:22:50Z | 110 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"en",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-02-22T16:43:33Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-cased-finetuned-conll2003-ner
results: []
datasets:
- conll2003
language:
- en
library_name: transformers
pipeline_tag: token-classification
---
# bert-base-cased-finetuned-conll2003-ner
This model is a fine-tuned version of BERT ([bert-base-cased](https://huggingface.co/bert-base-cased)) on the CoNLL-2003 (Conference on Computational Natural Language Learning) dataset.
The model performs named entity recognition (NER). It pertains to section 2 of chapter 7 of the Hugging Face "NLP Course" (https://huggingface.co/learn/nlp-course/chapter7/2).
It was trained using the Trainer API of Hugging Face Transformers.
Code: https://github.com/sambitmukherjee/huggingface-notebooks/blob/main/course/en/chapter7/section2_pt.ipynb
Experiment tracking: https://wandb.ai/sadhaklal/bert-base-cased-finetuned-conll2003-ner
## Usage
```python
from transformers import pipeline
model_checkpoint = "sadhaklal/bert-base-cased-finetuned-conll2003-ner"
token_classifier = pipeline("token-classification", model=model_checkpoint, aggregation_strategy="simple")
print(token_classifier("My name is Sylvain and I work at Hugging Face in Brooklyn."))
```
## Dataset
From the dataset page:
> The shared task of CoNLL-2003 concerns language-independent named entity recognition. We will concentrate on four types of named entities: persons, locations, organizations and names of miscellaneous entities that do not belong to the previous three groups.
Examples: https://huggingface.co/datasets/conll2003/viewer
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0125 | 1.0 | 1756 | 0.0729 | 0.9095 | 0.9339 | 0.9215 | 0.9810 |
| 0.0001 | 2.0 | 3512 | 0.0558 | 0.9265 | 0.9487 | 0.9375 | 0.9862 |
| 0.0001 | 3.0 | 5268 | 0.0578 | 0.9366 | 0.9515 | 0.9440 | 0.9867 |
### Framework versions
- Transformers 4.37.2
- PyTorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
ThanviPriya/my-pet-dog
|
ThanviPriya
| 2024-02-26T09:19:29Z | 5 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-26T09:12:49Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by ThanviPriya following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
|
daniel-chen/ppo-LunarLander-v2
|
daniel-chen
| 2024-02-26T09:18:41Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-26T09:18:22Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 243.21 +/- 23.85
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; adjust it to the artifact actually stored in this repo):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; check the repository files for the exact name.
checkpoint = load_from_hub(repo_id="daniel-chen/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
env = gym.make("LunarLander-v2")
```
|
Griffin88/Qwen-1.8B-for-PDFs
|
Griffin88
| 2024-02-26T09:17:26Z | 108 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-26T08:40:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
thangvip/q-learning-taxi-v3
|
thangvip
| 2024-02-26T09:10:22Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-26T08:48:24Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-learning-taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
# `load_from_hub` is the small helper from the Hugging Face Deep RL course notebook
# (it downloads and unpickles the saved Q-table dictionary); `gym` refers to Gym/Gymnasium.
model = load_from_hub(repo_id="thangvip/q-learning-taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ThomasFG/0-0-135
|
ThomasFG
| 2024-02-26T09:09:57Z | 77 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small.en",
"base_model:finetune:openai/whisper-small.en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-02-26T06:49:06Z |
---
license: apache-2.0
base_model: openai/whisper-small.en
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: 2024-02-26_07-49-00
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2024-02-26_07-49-00
This model is a fine-tuned version of [openai/whisper-small.en](https://huggingface.co/openai/whisper-small.en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3915
- Wer: 14.0998
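A minimal transcription sketch (assumptions: the checkpoint works with the standard automatic-speech-recognition pipeline, and the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="ThomasFG/0-0-135")
# "sample.wav" is a placeholder; any 16 kHz English audio clip should work.
print(asr("sample.wav")["text"])
```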
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1632 | 1.0 | 516 | 0.3915 | 14.0998 |
### Framework versions
- Transformers 4.37.2
- Pytorch 1.13.1+cu116
- Datasets 2.17.0
- Tokenizers 0.15.2
|
imone/gemma-7b-with-it-tokens
|
imone
| 2024-02-26T09:05:46Z | 7 | 1 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-25T02:43:04Z |
---
library_name: transformers
tags: []
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
---
# Gemma with Instruction-Tuning Special Tokens
This is the [Gemma-7b](https://huggingface.co/google/gemma-7b) base model, augmented with the `<start_of_turn>` and `<end_of_turn>` special tokens included in the [Gemma-7b-it](https://huggingface.co/google/gemma-7b-it) instruction-tuned model, for further instruction/RL fine-tuning usage.
Added special tokens:
```
<start_of_turn>
<end_of_turn>
```
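As a quick sanity check (a sketch, not an official recipe), you can confirm that the turn tokens resolve to single ids and assemble the Gemma chat layout on top of the base model:
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("imone/gemma-7b-with-it-tokens")
for t in ["<start_of_turn>", "<end_of_turn>"]:
    print(t, tok.convert_tokens_to_ids(t))  # each should map to a single, non-UNK id

# Gemma-it style turn layout (shown for illustration)
prompt = "<start_of_turn>user\nHello!<end_of_turn>\n<start_of_turn>model\n"
print(tok(prompt).input_ids)
```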
|
BiMediX/BiMediX-Ara
|
BiMediX
| 2024-02-26T09:01:09Z | 7 | 3 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"feature-extraction",
"medical",
"text-generation",
"conversational",
"ar",
"arxiv:2402.13253",
"license:cc-by-nc-sa-4.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-20T17:09:28Z |
---
license: cc-by-nc-sa-4.0
language:
- ar
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- medical
---
## Model Card for BiMediX-Arabic
### Model Details
- **Name:** BiMediX
- **Version:** 1.0
- **Type:** Bilingual Medical Mixture of Experts Large Language Model (LLM)
- **Languages:** Arabic
- **Model Architecture:** [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
- **Training Data:** BiMed1.3M-Arabic, an Arabic dataset with diverse medical interactions.
### Intended Use
- **Primary Use:** Medical interactions in both English and Arabic.
- **Capabilities:** MCQA, closed QA and chats.
## Getting Started
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "BiMediX/BiMediX-Ara"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
text = "مرحبًا بيميديكس! لقد كنت أعاني من التعب المتزايد في الأسبوع الماضي."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=500)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### Training Procedure
- **Dataset:** BiMed1.3M-Arabic.
- **QLoRA Adaptation:** Implements a low-rank adaptation technique, incorporating learnable low-rank adapter weights into the experts and the routing network. This results in training about 4% of the original parameters (an illustrative setup is sketched below).
- **Training Resources:** The model underwent training on the Arabic corpus.
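For illustration only (this is not the authors' released training code): a QLoRA-style setup with PEFT on a Mixtral-class model could look roughly like the following, where the ranks and target module names are assumptions.
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-Instruct-v0.1", quantization_config=bnb, device_map="auto")

# Low-rank adapters on the expert projections and the router; ranks/targets are illustrative.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM",
                  target_modules=["w1", "w2", "w3", "gate"])
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # expect only a few percent of weights to be trainable
```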
### Model Performance
| **Model** | **CKG** | **CBio** | **CMed** | **MedGen** | **ProMed** | **Ana** | **MedMCQA** | **MedQA** | **PubmedQA** | **AVG** |
|-----------|------------|-----------|-----------|-------------|-------------|---------|-------------|-----------|--------------|---------|
| Jais-30B | 52.1 | 50.7 | 40.5 | 49.0 | 39.3 | 43.0 | 37.0 | 28.8 | 74.6 | 46.1 |
| BiMediX (Arabic) | 60.0 | 54.9 | **55.5** | 58.0 | **58.1** | 49.6 | 46.0 | 40.2 | 76.6 | 55.4 |
| **BiMediX (Bilingual)** | **63.8** | **57.6** | 52.6 | **64.0** | 52.9 | **50.4** | **49.1** | **47.3** | **78.4** | **56.5** |
### Safety and Ethical Considerations
- **Potential issues**: hallucinations, toxicity, stereotypes.
- **Usage:** Research purposes only.
### Accessibility
- **Availability:** [BiMediX GitHub Repository](https://github.com/mbzuai-oryx/BiMediX).
- **Paper:** [arXiv:2402.13253](https://arxiv.org/abs/2402.13253)
### Authors
Sara Pieri, Sahal Shaji Mullappilly, Fahad Shahbaz Khan, Rao Muhammad Anwer, Salman Khan, Timothy Baldwin, Hisham Cholakkal
**Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI)**
|
julycodes/Falcon
|
julycodes
| 2024-02-26T09:00:44Z | 4 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:vilsonrodrigues/falcon-7b-instruct-sharded",
"base_model:adapter:vilsonrodrigues/falcon-7b-instruct-sharded",
"license:apache-2.0",
"region:us"
] | null | 2024-02-22T10:57:34Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: vilsonrodrigues/falcon-7b-instruct-sharded
model-index:
- name: Falcon
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Falcon
This model is a fine-tuned version of [vilsonrodrigues/falcon-7b-instruct-sharded](https://huggingface.co/vilsonrodrigues/falcon-7b-instruct-sharded) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 180
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.1
- Pytorch 2.2.0
- Datasets 2.17.1
- Tokenizers 0.15.2
|
zhuluv/textual_inversion_cat
|
zhuluv
| 2024-02-26T08:59:19Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-04T06:23:02Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - zhuluv/textual_inversion_cat
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
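A minimal usage sketch (the placeholder token below is an assumption; replace it with the token this repo actually learned, e.g. as defined in its `learned_embeds` file):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
pipe.load_textual_inversion("zhuluv/textual_inversion_cat")
# "<cat-toy>" is an assumed placeholder token; adjust it to the repo's learned token.
image = pipe("a photo of a <cat-toy> on a beach", num_inference_steps=30).images[0]
image.save("cat-toy.png")
```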
|
Dev2410/SQL_llama
|
Dev2410
| 2024-02-26T08:59:08Z | 1 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-02-26T08:39:27Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2
|
baichuan-inc/Baichuan2-7B-Chat
|
baichuan-inc
| 2024-02-26T08:58:12Z | 17,450 | 165 |
transformers
|
[
"transformers",
"pytorch",
"baichuan",
"text-generation",
"custom_code",
"en",
"zh",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-29T02:21:41Z |
---
language:
- en
- zh
license_name: baichuan2-community-license
license_link: https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat/blob/main/Community%20License%20for%20Baichuan2%20Model.pdf
tasks:
- text-generation
---
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<div align="center">
<h1>
Baichuan 2
</h1>
</div>
<div align="center">
<a href="https://github.com/baichuan-inc/Baichuan2" target="_blank">🦉GitHub</a> | <a href="https://github.com/baichuan-inc/Baichuan-7B/blob/main/media/wechat.jpeg?raw=true" target="_blank">💬WeChat</a>
</div>
<div align="center">
百川API支持搜索增强和192K长窗口,新增百川搜索增强知识库、限时免费!<br>
🚀 <a href="https://www.baichuan-ai.com/" target="_blank">百川大模型在线对话平台</a> 已正式向公众开放 🎉
</div>
# 目录/Table of Contents
- [📖 模型介绍/Introduction](#Introduction)
- [⚙️ 快速开始/Quick Start](#Start)
- [📊 Benchmark评估/Benchmark Evaluation](#Benchmark)
- [👥 社区与生态/Community](#Community)
- [📜 声明与协议/Terms and Conditions](#Terms)
# <span id="Introduction">模型介绍/Introduction</span>
Baichuan 2 是[百川智能]推出的新一代开源大语言模型,采用 **2.6 万亿** Tokens 的高质量语料训练,在权威的中文和英文 benchmark
上均取得同尺寸最好的效果。本次发布包含有 7B、13B 的 Base 和 Chat 版本,并提供了 Chat 版本的 4bits
量化,所有版本不仅对学术研究完全开放,开发者也仅需[邮件申请]并获得官方商用许可后,即可以免费商用。具体发布版本和下载见下表:
Baichuan 2 is the new generation of large-scale open-source language models launched by [Baichuan Intelligence inc.](https://www.baichuan-ai.com/).
It is trained on a high-quality corpus with 2.6 trillion tokens and has achieved the best performance in authoritative Chinese and English benchmarks of the same size.
This release includes 7B and 13B versions for both Base and Chat models, along with a 4bits quantized version for the Chat model.
All versions are fully open to academic research, and developers can also use them for free in commercial applications after obtaining an official commercial license through [email request](mailto:[email protected]).
The specific release versions and download links are listed in the table below:
| | Base Model | Chat Model | 4bits Quantized Chat Model |
|:---:|:--------------------:|:--------------------:|:--------------------------:|
| 7B | [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) | [Baichuan2-7B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat) | [Baichuan2-7B-Chat-4bits](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base-4bits) |
| 13B | [Baichuan2-13B-Base](https://huggingface.co/baichuan-inc/Baichuan2-13B-Base) | [Baichuan2-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat) | [Baichuan2-13B-Chat-4bits](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat-4bits) |
# <span id="Start">快速开始/Quick Start</span>
在Baichuan2系列模型中,我们为了加快推理速度使用了Pytorch2.0加入的新功能F.scaled_dot_product_attention,因此模型需要在Pytorch2.0环境下运行。
In the Baichuan 2 series models, we have utilized the new feature `F.scaled_dot_product_attention` introduced in PyTorch 2.0 to accelerate inference speed. Therefore, the model needs to be run in a PyTorch 2.0 environment.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig
tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan2-7B-Chat", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("baichuan-inc/Baichuan2-7B-Chat", device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)
model.generation_config = GenerationConfig.from_pretrained("baichuan-inc/Baichuan2-7B-Chat")
messages = []
messages.append({"role": "user", "content": "解释一下“温故而知新”"})
response = model.chat(tokenizer, messages)
print(response)
"温故而知新"是一句中国古代的成语,出自《论语·为政》篇。这句话的意思是:通过回顾过去,我们可以发现新的知识和理解。换句话说,学习历史和经验可以让我们更好地理解现在和未来。
这句话鼓励我们在学习和生活中不断地回顾和反思过去的经验,从而获得新的启示和成长。通过重温旧的知识和经历,我们可以发现新的观点和理解,从而更好地应对不断变化的世界和挑战。
```
# <span id="Benchmark">Benchmark 结果/Benchmark Evaluation</span>
我们在[通用]、[法律]、[医疗]、[数学]、[代码]和[多语言翻译]六个领域的中英文权威数据集上对模型进行了广泛测试,更多详细测评结果可查看[GitHub]。
We have extensively tested the model on authoritative Chinese-English datasets across six domains: [General](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#general-domain), [Legal](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#law-and-medicine), [Medical](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#law-and-medicine), [Mathematics](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#mathematics-and-code), [Code](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#mathematics-and-code), and [Multilingual Translation](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md#multilingual-translation). For more detailed evaluation results, please refer to [GitHub](https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md).
### 7B Model Results
| | **C-Eval** | **MMLU** | **CMMLU** | **Gaokao** | **AGIEval** | **BBH** |
|:-----------------------:|:----------:|:--------:|:---------:|:----------:|:-----------:|:-------:|
| | 5-shot | 5-shot | 5-shot | 5-shot | 5-shot | 3-shot |
| **GPT-4** | 68.40 | 83.93 | 70.33 | 66.15 | 63.27 | 75.12 |
| **GPT-3.5 Turbo** | 51.10 | 68.54 | 54.06 | 47.07 | 46.13 | 61.59 |
| **LLaMA-7B** | 27.10 | 35.10 | 26.75 | 27.81 | 28.17 | 32.38 |
| **LLaMA2-7B** | 28.90 | 45.73 | 31.38 | 25.97 | 26.53 | 39.16 |
| **MPT-7B** | 27.15 | 27.93 | 26.00 | 26.54 | 24.83 | 35.20 |
| **Falcon-7B** | 24.23 | 26.03 | 25.66 | 24.24 | 24.10 | 28.77 |
| **ChatGLM2-6B** | 50.20 | 45.90 | 49.00 | 49.44 | 45.28 | 31.65 |
| **[Baichuan-7B]** | 42.80 | 42.30 | 44.02 | 36.34 | 34.44 | 32.48 |
| **[Baichuan2-7B-Base]** | 54.00 | 54.16 | 57.07 | 47.47 | 42.73 | 41.56 |
### 13B Model Results
| | **C-Eval** | **MMLU** | **CMMLU** | **Gaokao** | **AGIEval** | **BBH** |
|:---------------------------:|:----------:|:--------:|:---------:|:----------:|:-----------:|:-------:|
| | 5-shot | 5-shot | 5-shot | 5-shot | 5-shot | 3-shot |
| **GPT-4** | 68.40 | 83.93 | 70.33 | 66.15 | 63.27 | 75.12 |
| **GPT-3.5 Turbo** | 51.10 | 68.54 | 54.06 | 47.07 | 46.13 | 61.59 |
| **LLaMA-13B** | 28.50 | 46.30 | 31.15 | 28.23 | 28.22 | 37.89 |
| **LLaMA2-13B** | 35.80 | 55.09 | 37.99 | 30.83 | 32.29 | 46.98 |
| **Vicuna-13B** | 32.80 | 52.00 | 36.28 | 30.11 | 31.55 | 43.04 |
| **Chinese-Alpaca-Plus-13B** | 38.80 | 43.90 | 33.43 | 34.78 | 35.46 | 28.94 |
| **XVERSE-13B** | 53.70 | 55.21 | 58.44 | 44.69 | 42.54 | 38.06 |
| **[Baichuan-13B-Base]** | 52.40 | 51.60 | 55.30 | 49.69 | 43.20 | 43.01 |
| **[Baichuan2-13B-Base]** | 58.10 | 59.17 | 61.97 | 54.33 | 48.17 | 48.78 |
## 训练过程模型/Training Dynamics
除了训练了 2.6 万亿 Tokens 的 [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) 模型,我们还提供了在此之前的另外 11 个中间过程的模型(分别对应训练了约 0.2 ~ 2.4 万亿 Tokens)供社区研究使用
([训练过程checkpoint下载](https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints))。下图给出了这些 checkpoints 在 C-Eval、MMLU、CMMLU 三个 benchmark 上的效果变化:
In addition to the [Baichuan2-7B-Base](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base) model trained on 2.6 trillion tokens, we also offer 11 additional intermediate-stage models for community research, corresponding to training on approximately 0.2 to 2.4 trillion tokens each ([Intermediate Checkpoints Download](https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints)). The graph below shows the performance changes of these checkpoints on three benchmarks: C-Eval, MMLU, and CMMLU.

# <span id="Community">社区与生态/Community</span>
## Intel 酷睿 Ultra 平台运行百川大模型
使用酷睿™/至强® 可扩展处理器或配合锐炫™ GPU等进行部署[Baichuan2-7B-Chat],[Baichuan2-13B-Chat]模型,推荐使用 BigDL-LLM([CPU], [GPU])以发挥更好推理性能。
详细支持信息可参考[中文操作手册](https://github.com/intel-analytics/bigdl-llm-tutorial/tree/main/Chinese_Version),包括用notebook支持,[加载,优化,保存方法](https://github.com/intel-analytics/bigdl-llm-tutorial/blob/main/Chinese_Version/ch_3_AppDev_Basic/3_BasicApp.ipynb)等。
When deploy on Core™/Xeon® Scalable Processors or with Arc™ GPU, BigDL-LLM ([CPU], [GPU]) is recommended to take full advantage of better inference performance.
# <span id="Terms">声明与协议/Terms and Conditions</span>
## 声明
我们在此声明,我们的开发团队并未基于 Baichuan 2 模型开发任何应用,无论是在 iOS、Android、网页或任何其他平台。我们强烈呼吁所有使用者,不要利用
Baichuan 2 模型进行任何危害国家社会安全或违法的活动。另外,我们也要求使用者不要将 Baichuan 2
模型用于未经适当安全审查和备案的互联网服务。我们希望所有的使用者都能遵守这个原则,确保科技的发展能在规范和合法的环境下进行。
我们已经尽我们所能,来确保模型训练过程中使用的数据的合规性。然而,尽管我们已经做出了巨大的努力,但由于模型和数据的复杂性,仍有可能存在一些无法预见的问题。因此,如果由于使用
Baichuan 2 开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。
We hereby declare that our team has not developed any applications based on Baichuan 2 models, not on iOS, Android, the web, or any other platform. We strongly call on all users not to use Baichuan 2 models for any activities that harm national / social security or violate the law. Also, we ask users not to use Baichuan 2 models for Internet services that have not undergone appropriate security reviews and filings. We hope that all users can abide by this principle and ensure that the development of technology proceeds in a regulated and legal environment.
We have done our best to ensure the compliance of the data used in the model training process. However, despite our considerable efforts, there may still be some unforeseeable issues due to the complexity of the model and data. Therefore, if any problems arise due to the use of Baichuan 2 open-source models, including but not limited to data security issues, public opinion risks, or any risks and problems brought about by the model being misled, abused, spread or improperly exploited, we will not assume any responsibility.
## 协议
社区使用 Baichuan 2 模型需要遵循 [Apache 2.0](https://github.com/baichuan-inc/Baichuan2/blob/main/LICENSE) 和[《Baichuan 2 模型社区许可协议》](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/resolve/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf)。Baichuan 2 模型支持商业用途,如果您计划将 Baichuan 2 模型或其衍生品用于商业目的,请您确认您的主体符合以下情况:
1. 您或您的关联方的服务或产品的日均用户活跃量(DAU)低于100万。
2. 您或您的关联方不是软件服务提供商、云服务提供商。
3. 您或您的关联方不存在将授予您的商用许可,未经百川许可二次授权给其他第三方的可能。
在符合以上条件的前提下,您需要通过以下联系邮箱 [email protected] ,提交《Baichuan 2 模型社区许可协议》要求的申请材料。审核通过后,百川将特此授予您一个非排他性、全球性、不可转让、不可再许可、可撤销的商用版权许可。
The community usage of Baichuan 2 model requires adherence to [Apache 2.0](https://github.com/baichuan-inc/Baichuan2/blob/main/LICENSE) and [Community License for Baichuan2 Model](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/resolve/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf). The Baichuan 2 model supports commercial use. If you plan to use the Baichuan 2 model or its derivatives for commercial purposes, please ensure that your entity meets the following conditions:
1. The Daily Active Users (DAU) of your or your affiliate's service or product is less than 1 million.
2. Neither you nor your affiliates are software service providers or cloud service providers.
3. There is no possibility for you or your affiliates to grant the commercial license given to you, to reauthorize it to other third parties without Baichuan's permission.
Upon meeting the above conditions, you need to submit the application materials required by the Baichuan 2 Model Community License Agreement via the following contact email: [email protected]. Once approved, Baichuan will hereby grant you a non-exclusive, global, non-transferable, non-sublicensable, revocable commercial copyright license.
[GitHub]:https://github.com/baichuan-inc/Baichuan2
[Baichuan2]:https://github.com/baichuan-inc/Baichuan2
[Baichuan-7B]:https://huggingface.co/baichuan-inc/Baichuan-7B
[Baichuan2-7B-Base]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Base
[Baichuan2-7B-Chat]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat
[Baichuan2-7B-Chat-4bits]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat-4bits
[Baichuan-13B-Base]:https://huggingface.co/baichuan-inc/Baichuan-13B-Base
[Baichuan2-13B-Base]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Base
[Baichuan2-13B-Chat]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat
[Baichuan2-13B-Chat-4bits]:https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat-4bits
[通用]:https://github.com/baichuan-inc/Baichuan2#%E9%80%9A%E7%94%A8%E9%A2%86%E5%9F%9F
[法律]:https://github.com/baichuan-inc/Baichuan2#%E6%B3%95%E5%BE%8B%E5%8C%BB%E7%96%97
[医疗]:https://github.com/baichuan-inc/Baichuan2#%E6%B3%95%E5%BE%8B%E5%8C%BB%E7%96%97
[数学]:https://github.com/baichuan-inc/Baichuan2#%E6%95%B0%E5%AD%A6%E4%BB%A3%E7%A0%81
[代码]:https://github.com/baichuan-inc/Baichuan2#%E6%95%B0%E5%AD%A6%E4%BB%A3%E7%A0%81
[多语言翻译]:https://github.com/baichuan-inc/Baichuan2#%E5%A4%9A%E8%AF%AD%E8%A8%80%E7%BF%BB%E8%AF%91
[《Baichuan 2 模型社区许可协议》]:https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf
[邮件申请]: mailto:[email protected]
[Email]: mailto:[email protected]
[[email protected]]: mailto:[email protected]
[训练过程checkpoint下载]: https://huggingface.co/baichuan-inc/Baichuan2-7B-Intermediate-Checkpoints
[百川智能]: https://www.baichuan-ai.com
[CPU]: https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/CPU/HF-Transformers-AutoModels/Model/baichuan2
[GPU]: https://github.com/intel-analytics/BigDL/tree/main/python/llm/example/GPU/HF-Transformers-AutoModels/Model/baichuan2
|
jetaimejeteveux/CNN-MNIST
|
jetaimejeteveux
| 2024-02-26T08:53:38Z | 0 | 0 | null |
[
"pytorch",
"object-detection",
"dataset:mnist",
"region:us"
] |
object-detection
| 2024-02-26T01:48:49Z |
---
datasets:
- mnist
metrics:
- accuracy
pipeline_tag: object-detection
---
|
sadhaklal/bert-base-cased-finetuned-conll2003-ner-v2
|
sadhaklal
| 2024-02-26T08:49:37Z | 108 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"en",
"dataset:conll2003",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-02-22T16:57:14Z |
---
library_name: transformers
license: apache-2.0
datasets:
- conll2003
language:
- en
metrics:
- accuracy
- precision
- recall
- f1
pipeline_tag: token-classification
---
# bert-base-cased-finetuned-conll2003-ner-v2
BERT ("bert-base-cased") finetuned on CoNLL-2003 (Conference on Computational Natural Language Learning).
The model performs named entity recognition (NER). It pertains to section 2 of chapter 7 of the Hugging Face "NLP Course" (https://huggingface.co/learn/nlp-course/chapter7/2).
It was trained using a custom PyTorch loop with Hugging Face Accelerate.
Code: https://github.com/sambitmukherjee/huggingface-notebooks/blob/main/course/en/chapter7/section2_pt.ipynb
Experiment tracking: https://wandb.ai/sadhaklal/bert-base-cased-finetuned-conll2003-ner-v2
## Usage
```python
from transformers import pipeline
model_checkpoint = "sadhaklal/bert-base-cased-finetuned-conll2003-ner-v2"
token_classifier = pipeline("token-classification", model=model_checkpoint, aggregation_strategy="simple")
print(token_classifier("My name is Sylvain and I work at Hugging Face in Brooklyn."))
```
## Dataset
From the dataset page:
> The shared task of CoNLL-2003 concerns language-independent named entity recognition. We will concentrate on four types of named entities: persons, locations, organizations and names of miscellaneous entities that do not belong to the previous three groups.
Examples: https://huggingface.co/datasets/conll2003/viewer
## Metrics
- Accuracy on the `'validation'` split of CoNLL-2003: 0.9858
- Precision on the `'validation'` split of CoNLL-2003: 0.9243
- Recall on the `'validation'` split of CoNLL-2003: 0.947
- F1 on the `'validation'` split of CoNLL-2003: 0.9355
|
jaydeep07/jdp-yamaha-r1
|
jaydeep07
| 2024-02-26T08:49:09Z | 1 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-26T08:45:07Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### JDP-YAMAHA-R1 Dreambooth model trained by jaydeep07 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: SSGI20203711
Sample pictures of this concept:

|
ankursinghbisht/q-FrozenLake-v1-4x4-noSlippery
|
ankursinghbisht
| 2024-02-26T08:48:49Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-26T08:48:39Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="ankursinghbisht/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
dbruner23/videomae-base-finetuned-ucf101-subset
|
dbruner23
| 2024-02-26T08:47:59Z | 64 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2024-02-26T08:24:06Z |
---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3965
- Accuracy: 0.8
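A minimal inference sketch (assumptions: the checkpoint works with the standard video-classification pipeline, and the clip path is a placeholder):
```python
from transformers import pipeline

clf = pipeline("video-classification", model="dbruner23/videomae-base-finetuned-ucf101-subset")
# "sample_clip.mp4" is a placeholder; a short clip from the UCF101 subset should work.
print(clf("sample_clip.mp4"))
```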
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.8902 | 0.25 | 75 | 1.7885 | 0.3286 |
| 0.6263 | 1.25 | 150 | 0.8838 | 0.5571 |
| 0.5676 | 2.25 | 225 | 0.6316 | 0.6714 |
| 0.1811 | 3.25 | 300 | 0.3965 | 0.8 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
ryusangwon/4984_Llama-2-7b-hf
|
ryusangwon
| 2024-02-26T08:46:47Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"dataset:samsum",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-02-26T08:46:42Z |
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: 4984_Llama-2-7b-hf
results: []
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 4984_Llama-2-7b-hf
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the samsum dataset.
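A minimal loading sketch (assumptions: the repo holds a PEFT adapter on top of the base Llama-2-7b weights, and the dialogue prompt format below is illustrative rather than the exact training format):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", device_map="auto")
model = PeftModel.from_pretrained(base, "ryusangwon/4984_Llama-2-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# Prompt format is an assumption; adapt it to how the adapter was trained on SAMSum.
prompt = "Summarize the dialogue:\nAmanda: I baked cookies.\nJerry: Nice, bring some!\nSummary:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=60)[0], skip_special_tokens=True))
```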
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.4.0
- Transformers 4.36.2
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
|
winvswon78/distilbert-finetuned-squadv2
|
winvswon78
| 2024-02-26T08:33:20Z | 111 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-02-26T06:53:01Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-finetuned-squadv2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-finetuned-squadv2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
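A minimal usage sketch (assuming the checkpoint is compatible with the standard question-answering pipeline; the question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="winvswon78/distilbert-finetuned-squadv2")
result = qa(
    question="What was the model fine-tuned on?",
    context="This DistilBERT checkpoint was fine-tuned on SQuAD v2, which includes unanswerable questions.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```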
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2
|
sadhaklal/bert-base-uncased-finetuned-sst2-v2
|
sadhaklal
| 2024-02-26T08:29:37Z | 147 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"en",
"dataset:sst2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-07T11:07:29Z |
---
license: apache-2.0
datasets:
- sst2
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-classification
widget:
- text: "this film 's relationship to actual tension is the same as what christmas-tree flocking in a spray can is to actual snow : a poor -- if durable -- imitation ."
example_title: "negative"
- text: "director rob marshall went out gunning to make a great one ."
example_title: "positive"
---
# bert-base-uncased-finetuned-sst2-v2
BERT (`"bert-base-uncased"`) finetuned on SST-2 (Stanford Sentiment Treebank Binary).
This model pertains to the "Try it out!" exercise in section 4 of chapter 3 of the Hugging Face "NLP Course" (https://huggingface.co/learn/nlp-course/chapter3/4).
It was trained using a custom PyTorch loop without Hugging Face Accelerate.
Code: https://github.com/sambitmukherjee/hf-nlp-course-exercises/blob/main/chapter3/section4.ipynb
Experiment tracking: https://wandb.ai/sadhaklal/bert-base-uncased-finetuned-sst2-v2
## Usage
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="sadhaklal/bert-base-uncased-finetuned-sst2-v2")
print(classifier("uneasy mishmash of styles and genres ."))
print(classifier("by the end of no such thing the audience , like beatrice , has a watchful affection for the monster ."))
```
## Dataset
From the dataset page:
> The Stanford Sentiment Treebank is a corpus with fully labeled parse trees that allows for a complete analysis of the compositional effects of sentiment in language...
> Binary classification experiments on full sentences (negative or somewhat negative vs somewhat positive or positive with neutral sentences discarded) refer to the dataset as SST-2 or SST binary.
Examples: https://huggingface.co/datasets/sst2/viewer
## Metric
Accuracy on the `'validation'` split of SST-2: 0.9278
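One way to approximately reproduce this number — a sketch, assuming the model emits `LABEL_0`/`LABEL_1` for negative/positive — is to run the pipeline over the validation split:
```python
from datasets import load_dataset
from transformers import pipeline

sst2 = load_dataset("sst2", split="validation")
classifier = pipeline("text-classification", model="sadhaklal/bert-base-uncased-finetuned-sst2-v2")

preds = classifier(sst2["sentence"], batch_size=32)
# Assumes the predicted label string ends with the class index (e.g. "LABEL_1" for positive).
correct = sum(p["label"].endswith(str(y)) for p, y in zip(preds, sst2["label"]))
print(f"accuracy = {correct / len(sst2):.4f}")
```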
|
saqlainshah/gemma_2b_finetuned_medal
|
saqlainshah
| 2024-02-26T08:28:59Z | 117 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"gemma fine tuned",
"medical",
"medal dataset finetuned",
"question answering",
"QA",
"conversational",
"en",
"dataset:medal",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-26T07:57:16Z |
---
library_name: transformers
tags:
- gemma fine tuned
- medical
- medal dataset finetuned
- question answering
- QA
datasets:
- medal
language:
- en
---
# Model Card for Model ID
This model is based on google/gemma-2b-it and was trained on a small chunk of the MeDAL dataset.
Training was done on a Colab TPU.
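A minimal generation sketch (not from the authors; it assumes the standard Gemma chat template is kept after fine-tuning, and the example question is illustrative):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "saqlainshah/gemma_2b_finetuned_medal"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

chat = [{"role": "user", "content": "In a clinical note, what could the abbreviation 'CHF' stand for?"}]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```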
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
A small chunk of the MeDAL dataset.
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ev-wwt/output
|
ev-wwt
| 2024-02-26T08:22:26Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:vilsonrodrigues/falcon-7b-instruct-sharded",
"base_model:finetune:vilsonrodrigues/falcon-7b-instruct-sharded",
"license:apache-2.0",
"region:us"
] | null | 2024-02-26T08:17:09Z |
---
license: apache-2.0
base_model: vilsonrodrigues/falcon-7b-instruct-sharded
tags:
- generated_from_trainer
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [vilsonrodrigues/falcon-7b-instruct-sharded](https://huggingface.co/vilsonrodrigues/falcon-7b-instruct-sharded) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.13.3
|
praison/wikisql-4bit-1k
|
praison
| 2024-02-26T08:21:03Z | 8 | 0 |
mlx
|
[
"mlx",
"safetensors",
"mistral",
"pretrained",
"text-generation",
"en",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-02-26T05:45:31Z |
---
language:
- en
license: apache-2.0
tags:
- pretrained
- mlx
pipeline_tag: text-generation
inference:
parameters:
temperature: 0.7
---
# praison/wikisql-4bit-1k
This model was converted to MLX format from [`mistralai/Mistral-7B-v0.1`](https://huggingface.co/mistralai/Mistral-7B-v0.1).
Refer to the [original model card](https://huggingface.co/mistralai/Mistral-7B-v0.1) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("praison/wikisql-4bit-1k")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
KVNAditya/drl__u2__taxi_v3
|
KVNAditya
| 2024-02-26T08:16:31Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-26T08:16:26Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: drl__u2__taxi_v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.70
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="KVNAditya/drl__u2__taxi_v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
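The snippet above assumes a `load_from_hub` helper (as used in the Hugging Face Deep RL course) and a `gym` module (Gymnasium or classic Gym). A minimal version of the helper might look like:
```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str):
    """Download a pickled Q-table dict from the Hub and deserialize it."""
    pickled_model = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickled_model, "rb") as f:
        return pickle.load(f)
```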
|
buaacaver/ppo-LunarLander-v2
|
buaacaver
| 2024-02-26T08:16:30Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-26T08:16:08Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 248.29 +/- 48.51
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
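Pending the author's own snippet, a minimal loading sketch; the checkpoint filename below is an assumption and should be checked against the repository's Files & versions tab:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is assumed; verify it against the repository's Files & versions tab.
checkpoint = load_from_hub(repo_id="buaacaver/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```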
|
zhuluv/textual_inversion_woman
|
zhuluv
| 2024-02-26T08:14:44Z | 30 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-26T07:27:13Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - zhuluv/textual_inversion_woman
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
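A minimal loading sketch (not from the author) using diffusers; the placeholder token in the prompt is hypothetical and should match the token actually learned in this repo:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("zhuluv/textual_inversion_woman")

# The placeholder token below is hypothetical; use the token stored in this repo's learned embeddings.
image = pipe("a photo of <woman> in a garden", num_inference_steps=30).images[0]
image.save("example.png")
```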
|
WangA/siamese_cnn_hanzi
|
WangA
| 2024-02-26T08:06:28Z | 0 | 0 | null |
[
"aversarial attack",
"Chinese text",
"image-classification",
"zh",
"license:apache-2.0",
"region:us"
] |
image-classification
| 2024-02-25T08:56:30Z |
---
license: apache-2.0
metrics:
- accuracy
pipeline_tag: image-classification
language:
- zh
tags:
- aversarial attack
- Chinese text
---
# Siamese CNN
Reproduction of [Argot: Generating Adversarial Readable Chinese Texts, IJCAI 2020](https://www.ijcai.org/Proceedings/2020/351): selecting Chinese characters with similar glyph structure for glyph-substitution attacks.
## Introduction
A CNN encoder trained in a Siamese setup: each input pair of Chinese characters is encoded, and the Euclidean distance between the two embeddings serves as a glyph-similarity measure.

## Architecture
Three Conv2D layers with (input_channels, output_channels, filter_size) = (3, 64, 8), (64, 128, 8), (128, 128, 8).
Each convolutional layer is followed by MaxPool(2).
lr = 0.002
## Dataset
Character source: https://github.com/zzboy/chinese
Images are generated with pygame, using the SimHei (黑体) font by default, at a size of 200×200.
Each row of the character list is treated as a group of similar characters, and the data is split 7:3 into training and test sets.
Triplet training and test data are generated following https://github.com/avilash/pytorch-siamese-triplet; in practice 50,000 triplets are used for training and 10,000 for testing.
## Evaluation
loss = MarginRankingLoss(margin=1)
| 0% of margin | 20% of margin | 50% of margin | loss | epoch |
| :--- | :--- | :--- | :--- | :--- |
| 0.9012 | 0.7998 | 0.5700 | 0.4674 | 10 |
"0% of margin" is equivalent to accuracy.
## Usage
Load with PyTorch; loading a single CNN branch is enough for inference. Note that the prefix in the state_dict keys must be removed:
```python
import torch

# Load the checkpoint and strip the leading prefix from every state_dict key.
model_dict = torch.load('./checkpoint.pth')['state_dict']
model_dict_mod = {}
for key, value in model_dict.items():
    new_key = '.'.join(key.split('.')[1:])  # drop the wrapping module name
    model_dict_mod[new_key] = value
# `model` is your instantiated CNN branch.
model.load_state_dict(model_dict_mod)
```
## Files
`prepare_data.py` builds the dataset by rendering characters as images; it uses the SimHei font by default, but other fonts can be used (e.g. downloaded from C://Windows/Fonts on a Windows system).
The `character` directory holds the training data: characters in the same subfolder are treated as similar, and characters in different subfolders as dissimilar.
`train.py` is the training script.
|
mins0o0/transforemr
|
mins0o0
| 2024-02-26T08:06:05Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-26T08:05:33Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: transforemr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# transforemr
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3345
- Bleu: 5.098
- Gen Len: 7.9826
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 2.6892 | 1.0 | 12709 | 2.4746 | 4.0486 | 7.9876 |
| 2.5757 | 2.0 | 25418 | 2.3936 | 4.8489 | 7.992 |
| 2.5445 | 3.0 | 38127 | 2.3565 | 5.0781 | 7.9899 |
| 2.501 | 4.0 | 50836 | 2.3388 | 5.095 | 7.9828 |
| 2.4785 | 5.0 | 63545 | 2.3345 | 5.098 | 7.9826 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.2.0
- Datasets 2.17.1
- Tokenizers 0.15.2
|
liminerity/Blur-7b-slerp-v1.45
|
liminerity
| 2024-02-26T07:59:41Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"CultriX/MonaTrix-v4",
"liminerity/Blur-7b-slerp-v1.44",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-26T07:54:36Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- CultriX/MonaTrix-v4
- liminerity/Blur-7b-slerp-v1.44
---
# Blur-7b-slerp-v1.45
Blur-7b-slerp-v1.45 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [CultriX/MonaTrix-v4](https://huggingface.co/CultriX/MonaTrix-v4)
* [liminerity/Blur-7b-slerp-v1.44](https://huggingface.co/liminerity/Blur-7b-slerp-v1.44)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: CultriX/MonaTrix-v4
layer_range: [0, 32]
- model: liminerity/Blur-7b-slerp-v1.44
layer_range: [0, 32]
merge_method: slerp
base_model: CultriX/MonaTrix-v4
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
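The merged weights can be loaded like any other transformers causal LM; a minimal sketch (the prompt is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "liminerity/Blur-7b-slerp-v1.45"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("The best thing about model merging is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0], skip_special_tokens=True))
```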
|
Azureus212/missionarypos
|
Azureus212
| 2024-02-26T07:57:31Z | 1 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"region:us"
] |
text-to-image
| 2024-02-26T07:56:56Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: "UNICODE\0\0b\0e\0a\0u\0t\0i\0f\0u\0l\0,\0 \0m\0a\0s\0t\0e\0r\0p\0i\0e\0c\0e\0,\0 \0b\0e\0s\0t\0 \0q\0u\0a\0l\0i\0t\0y\0,\0 \0e\0x\0t\0r\0e\0m\0e\0l\0y\0 \0d\0e\0t\0a\0i\0l\0e\0d\0 \0f\0a\0c\0e\0,\0 \0p\0e\0r\0f\0e\0c\0t\0 \0l\0i\0g\0h\0t\0i\0n\0g\0,\0 \01\0g\0i\0r\0l\0,\0 \0s\0o\0l\0o\0,\0 \0 \0 \0 \0<\0l\0o\0r\0a\0:\0P\0O\0V\0M\0i\0s\0s\0i\0o\0n\0a\0r\0y\0:\00\0.\08\0>\0,\0 \0m\0i\0s\0s\0i\0o\0n\0a\0r\0y\0p\0o\0s\0e\0,\0 \01\0b\0o\0y\0,\0 \0p\0e\0n\0i\0s\0,\0 \0l\0y\0i\0n\0g\0,\0 \0 \0m\0i\0s\0s\0i\0o\0n\0a\0r\0y\0,\0 \0v\0a\0g\0i\0n\0a\0l\0,\0 \0f\0e\0e\0t\0,\0 \0 \0s\0o\0l\0e\0s\0,\0 \0u\0n\0c\0e\0n\0s\0o\0r\0e\0d\0"
output:
url: >-
images/00059-3676341798-beautiful, masterpiece, best quality, extremely
detailed face, perfect lighting, 1girl, solo, _lora_POVMissionary_0.8_,
missi.jpeg
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: null
---
# missionary position
<Gallery />
## Model description
Source: https://civitai.com/models/31385?modelVersionId=37826
## Download model
Weights for this model are available in Safetensors format.
[Download](/Azureus212/missionarypos/tree/main) them in the Files & versions tab.
|
myrtotsok/distilbert-base-uncased-lora-text-classification
|
myrtotsok
| 2024-02-26T07:56:46Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:adapter:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2024-02-06T10:30:33Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: distilbert-base-uncased
model-index:
- name: distilbert-base-uncased-lora-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-lora-text-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: {'accuracy': 1.0}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|
| No log | 1.0 | 171 | 0.0001 | {'accuracy': 1.0} |
| No log | 2.0 | 342 | 0.0001 | {'accuracy': 1.0} |
| 0.0245 | 3.0 | 513 | 0.0001 | {'accuracy': 1.0} |
| 0.0245 | 4.0 | 684 | 0.0000 | {'accuracy': 1.0} |
| 0.0245 | 5.0 | 855 | 0.0001 | {'accuracy': 1.0} |
| 0.0 | 6.0 | 1026 | 0.0000 | {'accuracy': 1.0} |
| 0.0 | 7.0 | 1197 | 0.0000 | {'accuracy': 1.0} |
| 0.0 | 8.0 | 1368 | 0.0000 | {'accuracy': 1.0} |
| 0.0 | 9.0 | 1539 | 0.0000 | {'accuracy': 1.0} |
| 0.0 | 10.0 | 1710 | 0.0000 | {'accuracy': 1.0} |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
saqlainshah/saqlainshah
|
saqlainshah
| 2024-02-26T07:55:17Z | 110 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-25T18:28:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aryachakraborty/FineTuned_unsloth_mistral-7b-instruct-v0.2-bnb-4bit
|
aryachakraborty
| 2024-02-26T07:50:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-26T07:50:06Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yashyamsan/dogbooth
|
yashyamsan
| 2024-02-26T07:47:09Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:finetune:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-26T06:52:25Z |
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- dreambooth
- stable-diffusion
- stable-diffusion-diffusers
inference: true
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a photo of [v]dog
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - yashyamsan/dogbooth
This is a dreambooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on a photo of [v]dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
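Pending the author's snippet above, a minimal sketch using the instance prompt from this card (`a photo of [v]dog`; the rest of the example prompt is illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "yashyamsan/dogbooth", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of [v]dog in a bucket", num_inference_steps=30).images[0]
image.save("dogbooth_example.png")
```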
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
abcdhhhh/Q-Learning-Taxi-v3
|
abcdhhhh
| 2024-02-26T07:46:14Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-26T07:46:12Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Q-Learning-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.26 +/- 2.59
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="abcdhhhh/Q-Learning-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
forever-yu/causal-7b-backup
|
forever-yu
| 2024-02-26T07:44:49Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"llama2",
"qwen",
"causallm",
"en",
"zh",
"dataset:JosephusCheung/GuanacoDataset",
"dataset:Open-Orca/OpenOrca",
"dataset:stingning/ultrachat",
"dataset:meta-math/MetaMathQA",
"dataset:liuhaotian/LLaVA-Instruct-150K",
"dataset:jondurbin/airoboros-3.1",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"dataset:RyokoAI/ShareGPT52K",
"dataset:RyokoAI/Fandom23K",
"dataset:milashkaarshif/MoeGirlPedia_wikitext_raw_archive",
"dataset:wikipedia",
"dataset:wiki_lingua",
"dataset:fnlp/moss-003-sft-data",
"dataset:garage-bAInd/Open-Platypus",
"dataset:LDJnr/Puffin",
"dataset:openbmb/llava_zh",
"dataset:BAAI/COIG",
"dataset:TigerResearch/tigerbot-zhihu-zh-10k",
"dataset:liwu/MNBVC",
"dataset:teknium/openhermes",
"dataset:openbmb/UltraFeedback",
"dataset:lmsys/lmsys-chat-1m",
"license:wtfpl",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-26T07:31:53Z |
---
license: wtfpl
datasets:
- JosephusCheung/GuanacoDataset
- Open-Orca/OpenOrca
- stingning/ultrachat
- meta-math/MetaMathQA
- liuhaotian/LLaVA-Instruct-150K
- jondurbin/airoboros-3.1
- WizardLM/WizardLM_evol_instruct_V2_196k
- RyokoAI/ShareGPT52K
- RyokoAI/Fandom23K
- milashkaarshif/MoeGirlPedia_wikitext_raw_archive
- wikipedia
- wiki_lingua
- fnlp/moss-003-sft-data
- garage-bAInd/Open-Platypus
- LDJnr/Puffin
- openbmb/llava_zh
- BAAI/COIG
- TigerResearch/tigerbot-zhihu-zh-10k
- liwu/MNBVC
- teknium/openhermes
- openbmb/UltraFeedback
- lmsys/lmsys-chat-1m
language:
- en
- zh
pipeline_tag: text-generation
tags:
- llama
- llama2
- qwen
- causallm
---
For details, please refer to the version without DPO training: [CausalLM/7B](https://huggingface.co/CausalLM/7B).
| Model | MT-Bench |
| ------------------------- | ------------ |
| GPT-4 | 8.99 |
| GPT-3.5-Turbo | 7.94 |
| | |
| Zephyr-7b-β (Overfitting) | 7.34 |
| Zephyr-7b-α | 6.88 |
| | |
| **CausalLM/14B-DPO-α** | **7.618868** |
| **CausalLM/7B-DPO-α** | **7.038125** |
It should be noted that this is not a version that continues training on CausalLM/14B & 7B, but rather an optimized version that has undergone DPO training concurrently on a previous training branch, and some detailed parameters may have changed. You will still need to download the full model.
The beta branch will soon be released, employing some aggressive approaches that might be detrimental in certain tasks, in order to achieve better alignment with human preferences, aiming to meet or exceed the GPT-3.5 benchmarks. Stay tuned.
Disclaimer: Please note that the model was trained on unfiltered internet data. Since we do not have the capacity to vet all of it, there may be a substantial amount of objectionable content, pornography, violence, and offensive language present that we are unable to remove. Therefore, you will still need to complete your own checks on the model's safety and filter keywords in the output. Due to computational resource constraints, we are presently unable to implement RLHF for the model's ethics and safety, nor training on SFT samples that refuse to answer certain questions for restrictive fine-tuning.
For more details, please refer to the version without DPO training: [CausalLM/14B](https://huggingface.co/CausalLM/14B).
Note that this is not a version trained further on top of CausalLM/14B & 7B, but an optimized version whose DPO training ran concurrently on an earlier training branch; some detailed parameters may have changed. You will still need to download the full model.
A beta branch will be released soon, using some aggressive approaches that may be detrimental to certain tasks, in order to better align with human preferences and to meet or exceed the GPT-3.5 benchmarks. Stay tuned.
Disclaimer: please note that the model was trained on unfiltered internet data. Since we cannot vet all of it, there may be a substantial amount of objectionable content, pornography, violence, and offensive language that we are unable to remove. You will therefore still need to run your own safety checks on the model and filter keywords in its output. Due to computational resource constraints, we are currently unable to implement RLHF for the model's ethics and safety, nor to train on SFT samples that refuse to answer certain questions for restrictive fine-tuning.
|
abcdhhhh/q-FrozenLake-v1-4x4-noSlippery
|
abcdhhhh
| 2024-02-26T07:44:26Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-26T07:44:24Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="abcdhhhh/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
hhs8746/sftestd1
|
hhs8746
| 2024-02-26T07:39:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-26T07:38:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hiraltalsaniya/fine-tunned-phi2-task-classification
|
hiraltalsaniya
| 2024-02-26T07:33:28Z | 48 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-26T07:30:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RajuEEE/Mistral_V01_FineTunedModel
|
RajuEEE
| 2024-02-26T07:32:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-26T07:32:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CocaButon/xlm-roberta-base-finetuned-panx-all
|
CocaButon
| 2024-02-26T07:31:14Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-02-26T07:27:54Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1721
- F1: 0.8525
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2974 | 1.0 | 835 | 0.2015 | 0.8069 |
| 0.1575 | 2.0 | 1670 | 0.1687 | 0.8432 |
| 0.1027 | 3.0 | 2505 | 0.1721 | 0.8525 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
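No usage snippet is included; as a rough sketch (the example sentence is illustrative), the checkpoint can be used with the token-classification pipeline:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="CocaButon/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel besuchte Microsoft in Berlin."))
```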
|
huseinzol05/conformer-4M-ctc
|
huseinzol05
| 2024-02-26T07:30:22Z | 52 | 0 |
transformers
|
[
"transformers",
"safetensors",
"conformer",
"feature-extraction",
"custom_code",
"region:us"
] |
feature-extraction
| 2024-02-15T04:12:07Z |
---
library_name: transformers
tags: []
---
# Conformer CTC 4M parameters
WanDB https://wandb.ai/huseinzol05/malaysian-conformer-ctc-4M?workspace=user-huseinzol05
|
AptaArkana/indonesian_sentiment_sbert_base
|
AptaArkana
| 2024-02-26T07:27:16Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:indonlu",
"base_model:naufalihsan/indonesian-sbert-large",
"base_model:finetune:naufalihsan/indonesian-sbert-large",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-16T06:50:02Z |
---
base_model: naufalihsan/indonesian-sbert-large
tags:
- generated_from_trainer
datasets:
- indonlu
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: sentiment
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: indonlu
type: indonlu
config: smsa
split: validation
args: smsa
metrics:
- name: Accuracy
type: accuracy
value: 0.95
- name: Precision
type: precision
value: 0.9499758037063356
- name: Recall
type: recall
value: 0.95
- name: F1
type: f1
value: 0.9496487652420723
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment
This model is a fine-tuned version of [naufalihsan/indonesian-sbert-large](https://huggingface.co/naufalihsan/indonesian-sbert-large) on the indonlu dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4450
- Accuracy: 0.95
- Precision: 0.9500
- Recall: 0.95
- F1: 0.9496
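A minimal inference sketch (the repo id is taken from this card; the label names returned depend on the checkpoint's config and are not documented here):
```python
from transformers import pipeline

# Load the fine-tuned SMSA sentiment classifier
classifier = pipeline("text-classification", model="AptaArkana/indonesian_sentiment_sbert_base")

# Example Indonesian sentence ("The food at this restaurant is very tasty")
print(classifier("Makanan di restoran ini sangat enak"))
```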
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 40
- eval_batch_size: 40
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 275 | 0.2837 | 0.9405 | 0.9427 | 0.9405 | 0.9396 |
| 0.0501 | 2.0 | 550 | 0.1966 | 0.9460 | 0.9468 | 0.9460 | 0.9458 |
| 0.0501 | 3.0 | 825 | 0.2927 | 0.9437 | 0.9435 | 0.9437 | 0.9427 |
| 0.0369 | 4.0 | 1100 | 0.3666 | 0.9460 | 0.9459 | 0.9460 | 0.9456 |
| 0.0369 | 5.0 | 1375 | 0.3579 | 0.9468 | 0.9465 | 0.9468 | 0.9465 |
| 0.0098 | 6.0 | 1650 | 0.4497 | 0.9476 | 0.9479 | 0.9476 | 0.9471 |
| 0.0098 | 7.0 | 1925 | 0.4308 | 0.95 | 0.9501 | 0.95 | 0.9496 |
| 0.0012 | 8.0 | 2200 | 0.4402 | 0.95 | 0.9499 | 0.95 | 0.9496 |
| 0.0012 | 9.0 | 2475 | 0.4429 | 0.95 | 0.9500 | 0.95 | 0.9496 |
| 0.0007 | 10.0 | 2750 | 0.4450 | 0.95 | 0.9500 | 0.95 | 0.9496 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
CocaButon/xlm-roberta-base-finetuned-panx-it
|
CocaButon
| 2024-02-26T07:26:37Z | 96 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-02-26T07:25:45Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2714
- F1: 0.8212
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7111 | 1.0 | 70 | 0.3311 | 0.7243 |
| 0.2918 | 2.0 | 140 | 0.2697 | 0.7947 |
| 0.1795 | 3.0 | 210 | 0.2714 | 0.8212 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
CocaButon/xlm-roberta-base-finetuned-panx-fr
|
CocaButon
| 2024-02-26T07:25:42Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-02-26T07:24:30Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2792
- F1: 0.8358
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.572 | 1.0 | 191 | 0.3533 | 0.7615 |
| 0.2769 | 2.0 | 382 | 0.2787 | 0.8173 |
| 0.1834 | 3.0 | 573 | 0.2792 | 0.8358 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
CocaButon/xlm-roberta-base-finetuned-panx-de-fr
|
CocaButon
| 2024-02-26T07:23:06Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-02-26T07:20:02Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1626
- F1: 0.8598
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2852 | 1.0 | 715 | 0.1750 | 0.8236 |
| 0.1458 | 2.0 | 1430 | 0.1585 | 0.8533 |
| 0.0934 | 3.0 | 2145 | 0.1626 | 0.8598 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
Sotaro0124/my_xlm
|
Sotaro0124
| 2024-02-26T07:16:14Z | 116 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-02-26T07:14:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
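A minimal fill-mask sketch (the repo id is taken from this card; the mask token is read from the tokenizer rather than assumed):
```python
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("Sotaro0124/my_xlm")
fill = pipeline("fill-mask", model="Sotaro0124/my_xlm", tokenizer=tokenizer)

# Use the tokenizer's own mask token so no assumption is made about its spelling
print(fill(f"Paris is the {tokenizer.mask_token} of France."))
```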
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
abcdhhhh/ppo-LunarLander-v2
|
abcdhhhh
| 2024-02-26T07:08:49Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-26T07:08:11Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 238.38 +/- 36.51
name: mean_reward
verified: false
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; adjust it to the actual file name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub (the filename below is an assumption) and load it
checkpoint = load_from_hub(repo_id="abcdhhhh/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Watarungurunnn/w2v-bert-2.0-japanese-CV16.0_aynita_1
|
Watarungurunnn
| 2024-02-26T07:00:24Z | 91 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_16_0",
"base_model:Watarungurunnn/w2v-bert-2.0-japanese-CV16.0",
"base_model:finetune:Watarungurunnn/w2v-bert-2.0-japanese-CV16.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-02-22T19:23:19Z |
---
tags:
- generated_from_trainer
datasets:
- common_voice_16_0
metrics:
- wer
base_model: Watarungurunnn/w2v-bert-2.0-japanese-CV16.0
model-index:
- name: w2v-bert-2.0-japanese-CV16.0
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: common_voice_16_0
type: common_voice_16_0
config: ja
split: validation
args: ja
metrics:
- type: wer
value: 32.61876963445312
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v-bert-2.0-japanese-CV16.0
This model is a fine-tuned version of [Watarungurunnn/w2v-bert-2.0-japanese-CV16.0](https://huggingface.co/Watarungurunnn/w2v-bert-2.0-japanese-CV16.0) on the common_voice_16_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5149
- Wer: 32.6188
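A minimal transcription sketch (the repo id is taken from this card; the `automatic-speech-recognition` pipeline is assumed from the tags, and the audio path is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned Wav2Vec2-BERT checkpoint for Japanese speech recognition
asr = pipeline(
    "automatic-speech-recognition",
    model="Watarungurunnn/w2v-bert-2.0-japanese-CV16.0_aynita_1",
)

# Transcribe a local audio file (placeholder path)
print(asr("sample.wav"))
```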
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1205 | 1.69 | 500 | 1.3602 | 36.4355 |
| 0.2116 | 3.39 | 1000 | 1.4580 | 35.1067 |
| 0.1054 | 5.08 | 1500 | 1.4180 | 34.6457 |
| 0.0661 | 6.78 | 2000 | 1.4557 | 32.3889 |
| 0.0208 | 8.47 | 2500 | 1.5149 | 32.6188 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
wbq/model-api-test
|
wbq
| 2024-02-26T06:54:25Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"rust",
"safetensors",
"openvino",
"distilbert",
"token-classification",
"en",
"dataset:squad",
"arxiv:1910.01108",
"arxiv:1910.09700",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-02-26T02:58:48Z |
---
language: en
license: apache-2.0
datasets:
- squad
metrics:
- squad
model-index:
- name: distilbert-base-cased-distilled-squad
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad
type: squad
config: plain_text
split: validation
metrics:
- type: exact_match
value: 79.5998
name: Exact Match
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTViZDA2Y2E2NjUyMjNjYjkzNTUzODc5OTk2OTNkYjQxMDRmMDhlYjdmYWJjYWQ2N2RlNzY1YmI3OWY1NmRhOSIsInZlcnNpb24iOjF9.ZJHhboAMwsi3pqU-B-XKRCYP_tzpCRb8pEjGr2Oc-TteZeoWHI8CXcpDxugfC3f7d_oBcKWLzh3CClQxBW1iAQ
- type: f1
value: 86.9965
name: F1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWZlMzY2MmE1NDNhOGNjNWRmODg0YjQ2Zjk5MjUzZDQ2MDYxOTBlMTNhNzQ4NTA2NjRmNDU3MGIzMTYwMmUyOSIsInZlcnNpb24iOjF9.z0ZDir87aT7UEmUeDm8Uw0oUdAqzlBz343gwnsQP3YLfGsaHe-jGlhco0Z7ISUd9NokyCiJCRc4NNxJQ83IuCw
---
# DistilBERT base cased distilled SQuAD
## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-authors)
## Model Details
**Model Description:** The DistilBERT model was proposed in the blog post [Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT](https://medium.com/huggingface/distilbert-8cf3380435b5), and the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108). DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% fewer parameters than *bert-base-uncased* and runs 60% faster while preserving over 95% of BERT's performance as measured on the GLUE language understanding benchmark.
This model is a checkpoint of [DistilBERT-base-cased](https://huggingface.co/distilbert-base-cased), fine-tuned using (a second step of) knowledge distillation on [SQuAD v1.1](https://huggingface.co/datasets/squad).
- **Developed by:** Hugging Face
- **Model Type:** Transformer-based language model
- **Language(s):** English
- **License:** Apache 2.0
- **Related Models:** [DistilBERT-base-cased](https://huggingface.co/distilbert-base-cased)
- **Resources for more information:**
- See [this repository](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) for more about Distil\* (a class of compressed models including this model)
- See [Sanh et al. (2019)](https://arxiv.org/abs/1910.01108) for more information about knowledge distillation and the training procedure
## How to Get Started with the Model
Use the code below to get started with the model.
```python
>>> from transformers import pipeline
>>> question_answerer = pipeline("question-answering", model='distilbert-base-cased-distilled-squad')
>>> context = r"""
... Extractive Question Answering is the task of extracting an answer from a text given a question. An example of a
... question answering dataset is the SQuAD dataset, which is entirely based on that task. If you would like to fine-tune
... a model on a SQuAD task, you may leverage the examples/pytorch/question-answering/run_squad.py script.
... """
>>> result = question_answerer(question="What is a good example of a question answering dataset?", context=context)
>>> print(
... f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}"
...)
Answer: 'SQuAD dataset', score: 0.5152, start: 147, end: 160
```
Here is how to use this model in PyTorch:
```python
from transformers import DistilBertTokenizer, DistilBertModel
import torch
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-cased-distilled-squad')
model = DistilBertModel.from_pretrained('distilbert-base-cased-distilled-squad')
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
print(outputs)
```
And in TensorFlow:
```python
from transformers import DistilBertTokenizer, TFDistilBertForQuestionAnswering
import tensorflow as tf
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-cased-distilled-squad")
model = TFDistilBertForQuestionAnswering.from_pretrained("distilbert-base-cased-distilled-squad")
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="tf")
outputs = model(**inputs)
answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
tokenizer.decode(predict_answer_tokens)
```
## Uses
This model can be used for question answering.
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to produce factual or true representations of people or events, and therefore using it to generate such content is out of scope for this model's abilities.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware that language generated by this model can be disturbing or offensive to some and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
```python
>>> from transformers import pipeline
>>> question_answerer = pipeline("question-answering", model='distilbert-base-cased-distilled-squad')
>>> context = r"""
... Alice is sitting on the bench. Bob is sitting next to her.
... """
>>> result = question_answerer(question="Who is the CEO?", context=context)
>>> print(
... f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}"
...)
Answer: 'Bob', score: 0.7527, start: 32, end: 35
```
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## Training
#### Training Data
The [distilbert-base-cased model](https://huggingface.co/distilbert-base-cased) was trained using the same data as the [distilbert-base-uncased model](https://huggingface.co/distilbert-base-uncased), which describes its training data as:
> DistilBERT pretrained on the same data as BERT, which is [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).
To learn more about the SQuAD v1.1 dataset, see the [SQuAD v1.1 data card](https://huggingface.co/datasets/squad).
#### Training Procedure
##### Preprocessing
See the [distilbert-base-cased model card](https://huggingface.co/distilbert-base-cased) for further details.
##### Pretraining
See the [distilbert-base-cased model card](https://huggingface.co/distilbert-base-cased) for further details.
## Evaluation
As discussed in the [model repository](https://github.com/huggingface/transformers/blob/main/examples/research_projects/distillation/README.md)
> This model reaches an F1 score of 87.1 on the [SQuAD v1.1] dev set (for comparison, the BERT bert-base-cased version reaches an F1 score of 88.7).
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type and hours used based on the [associated paper](https://arxiv.org/pdf/1910.01108.pdf). Note that these details are just for training DistilBERT, not including the fine-tuning with SQuAD.
- **Hardware Type:** 8 16GB V100 GPUs
- **Hours used:** 90 hours
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://arxiv.org/abs/1910.01108) for details on the modeling architecture, objective, compute infrastructure, and training details.
## Citation Information
```bibtex
@inproceedings{sanh2019distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Sanh, Victor and Debut, Lysandre and Chaumond, Julien and Wolf, Thomas},
booktitle={NeurIPS EMC^2 Workshop},
year={2019}
}
```
APA:
- Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
## Model Card Authors
This model card was written by the Hugging Face team.
|
maeeeeee/maid-yuzu-v8-alter-4.0bpw-exl2
|
maeeeeee
| 2024-02-26T06:50:23Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"mergekit",
"merge",
"base_model:NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss",
"base_model:merge:NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss",
"base_model:cognitivecomputations/dolphin-2.7-mixtral-8x7b",
"base_model:merge:cognitivecomputations/dolphin-2.7-mixtral-8x7b",
"base_model:jondurbin/bagel-dpo-8x7b-v0.2",
"base_model:merge:jondurbin/bagel-dpo-8x7b-v0.2",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:merge:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:mistralai/Mixtral-8x7B-v0.1",
"base_model:merge:mistralai/Mixtral-8x7B-v0.1",
"base_model:smelborp/MixtralOrochi8x7B",
"base_model:merge:smelborp/MixtralOrochi8x7B",
"base_model:ycros/BagelMIsteryTour-v2-8x7B",
"base_model:merge:ycros/BagelMIsteryTour-v2-8x7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-25T16:31:30Z |
---
base_model:
- mistralai/Mixtral-8x7B-v0.1
- mistralai/Mixtral-8x7B-Instruct-v0.1
- jondurbin/bagel-dpo-8x7b-v0.2
- cognitivecomputations/dolphin-2.7-mixtral-8x7b
- NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss
- ycros/BagelMIsteryTour-v2-8x7B
- smelborp/MixtralOrochi8x7B
library_name: transformers
tags:
- mergekit
- merge
---
4.0bpw quant of rhplus0831's maid-yuzu-v8-alter, which can be found here: https://huggingface.co/rhplus0831/maid-yuzu-v8-alter. Original model card below.
<hr>
# maid-yuzu-v8-alter
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
v7's approach worked better than I thought, so I tried something even weirder as a test. I don't think a proper model will come out, but I'm curious about the results.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
These models were merged using the SLERP method in the following order:
- maid-yuzu-v8-base: mistralai/Mixtral-8x7B-v0.1 + mistralai/Mixtral-8x7B-Instruct-v0.1 = 0.5
- maid-yuzu-v8-step1: above + jondurbin/bagel-dpo-8x7b-v0.2 = 0.25
- maid-yuzu-v8-step2: above + cognitivecomputations/dolphin-2.7-mixtral-8x7b = 0.25
- maid-yuzu-v8-step3: above + NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss = 0.25
- maid-yuzu-v8-step4-alter: above + ycros/BagelMIsteryTour-v2-8x7B = 0.5
- maid-yuzu-v8-alter: above + smelborp/MixtralOrochi8x7B = 0.5
### Models Merged
The following models were included in the merge:
* [smelborp/MixtralOrochi8x7B](https://huggingface.co/smelborp/MixtralOrochi8x7B)
* ../maid-yuzu-v8-step4-alter
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model:
model:
path: ../maid-yuzu-v8-step4-alter
dtype: bfloat16
merge_method: slerp
parameters:
t:
- value: 0.5
slices:
- sources:
- layer_range: [0, 32]
model:
model:
path: ../maid-yuzu-v8-step4-alter
- layer_range: [0, 32]
model:
model:
path: smelborp/MixtralOrochi8x7B
```
|
fzzhang/mistral_gsm8k_tuneS_prod
|
fzzhang
| 2024-02-26T06:49:48Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-02-26T03:41:11Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistral_gsm8k_tuneS_prod
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_gsm8k_tuneS_prod
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
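A minimal sketch of loading this adapter on top of the base model (assuming a standard PEFT adapter layout in this repo; the prompt and generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer, then attach the fine-tuned adapter weights
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "fzzhang/mistral_gsm8k_tuneS_prod")

prompt = "Natalia sold clips to 48 of her friends in April, and then half as many in May. How many clips did she sell altogether?"
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```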
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.0
|
CatBarks/bertES_Spam-HamOriginal_model
|
CatBarks
| 2024-02-26T06:44:56Z | 162 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-26T06:44:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CatBarks/bertES_spamming-email-classificationOriginal_model
|
CatBarks
| 2024-02-26T06:42:05Z | 162 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-26T06:40:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rachittshah/gemma-2b-code
|
rachittshah
| 2024-02-26T06:35:23Z | 109 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-26T06:30:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
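A minimal text-generation sketch (the repo id is taken from this card; the prompt and settings are illustrative assumptions):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="rachittshah/gemma-2b-code")

# Prompt chosen only for illustration; adjust to your use case
print(generator("def fibonacci(n):", max_new_tokens=64)[0]["generated_text"])
```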
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vilm/Quyen-v0.1-mlx-4bit
|
vilm
| 2024-02-26T06:31:47Z | 79 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mlx",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"dataset:LDJnr/Capybara",
"dataset:Intel/orca_dpo_pairs",
"dataset:argilla/distilabel-capybara-dpo-7k-binarized",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-26T06:09:15Z |
---
language:
- en
license: other
library_name: transformers
tags:
- mlx
datasets:
- teknium/OpenHermes-2.5
- LDJnr/Capybara
- Intel/orca_dpo_pairs
- argilla/distilabel-capybara-dpo-7k-binarized
pipeline_tag: text-generation
---
# vilm/Quyen-v0.1-mlx-4bit
This model was converted to MLX format from [`vilm/Quyen-v0.1`](https://huggingface.co/vilm/Quyen-v0.1).
Refer to the [original model card](https://huggingface.co/vilm/Quyen-v0.1) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("vilm/Quyen-v0.1-mlx-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
stan2/Qwen-0.5b-lora
|
stan2
| 2024-02-26T06:28:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-26T06:24:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
adithiram/my-pet-dog-xzg
|
adithiram
| 2024-02-26T06:23:41Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-26T06:12:51Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog-xzg Dreambooth model trained by adithiram following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 4NM21AI002
Sample pictures of this concept:

|
iampalina/data-mining
|
iampalina
| 2024-02-26T06:21:31Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2024-02-26T06:17:24Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Palina Pauliuchenka
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nonetrix/Sanity-Check-7b
|
nonetrix
| 2024-02-26T06:21:19Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-26T05:51:39Z |
---
license: apache-2.0
---
## nonetrix/Sanity-Check-7B
Just a sanity check to see if I could merge any model, because I was having errors. Not meant to be a serious model, but it's my first merge, so I thought I might as well upload it. I don't expect it to be good at much of anything, because I have no idea what I am doing. However, please give feedback and tips! :-)
I couldn't merge any other models (something about a vocabulary mismatch, idk) ¯\_(ツ)_/¯
## Disclaimer
DO NOT rely on this model for math or medical tasks or ANYTHING unless you have a death wish and want to lower your IQ. Dear God, why do I have to say this? Shouldn't it be obvious? I made this on a free Discord bot at 3 AM (and recreated the FP16 version on my own PC).
## Merge settings
```
models:
- model: BioMistral/BioMistral-7B
- model: meta-math/MetaMath-Mistral-7B
parameters:
density: 0.53
weight: 0.6
merge_method: dare_ties
base_model: BioMistral/BioMistral-7B
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
```
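If you still want to poke at it, here is a minimal sketch of loading the merged checkpoint with `transformers` (assumes a standard Mistral-style causal LM and enough VRAM for bfloat16):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nonetrix/Sanity-Check-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Simple prompt; do not expect a sensible answer (see disclaimer above)
inputs = tokenizer("What is the capital of France?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```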
## Examples of its stupidity, to further dissuade you from using it

## Model safety
lol

## sick image of a car

## migu
<video src="https://cdn-uploads.huggingface.co/production/uploads/65ab93082bf3e0cbbf717850/cIEP5e43VP0k0caRzl16e.mp4" controls="controls" style="max-width: 720px;">
</video>
|
Sachin7/llama-2-7b-storychat2
|
Sachin7
| 2024-02-26T06:17:35Z | 0 | 0 |
peft
|
[
"peft",
"pytorch",
"llama",
"4-bit",
"region:us"
] | null | 2024-02-26T06:13:38Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
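A minimal sketch of recreating this quantization setup and attaching the adapter at inference time is below. The base model is not stated in the card, so `meta-llama/Llama-2-7b-chat-hf` is an assumption:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Mirror the bitsandbytes config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base_id = "meta-llama/Llama-2-7b-chat-hf"  # assumption: base model not documented in this card
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)

# Attach the LoRA adapter from this repository
model = PeftModel.from_pretrained(base, "Sachin7/llama-2-7b-storychat2")
```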
### Framework versions
- PEFT 0.4.0
|
vilm/Quyen-v0.1-mlx
|
vilm
| 2024-02-26T06:05:23Z | 77 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mlx",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"dataset:LDJnr/Capybara",
"dataset:Intel/orca_dpo_pairs",
"dataset:argilla/distilabel-capybara-dpo-7k-binarized",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-26T04:57:19Z |
---
language:
- en
license: other
library_name: transformers
tags:
- mlx
datasets:
- teknium/OpenHermes-2.5
- LDJnr/Capybara
- Intel/orca_dpo_pairs
- argilla/distilabel-capybara-dpo-7k-binarized
pipeline_tag: text-generation
---
# vilm/Quyen-v0.1-mlx
This model was converted to MLX format from [`vilm/Quyen-v0.1`](https://huggingface.co/vilm/Quyen-v0.1).
Refer to the [original model card](https://huggingface.co/vilm/Quyen-v0.1) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("vilm/Quyen-v0.1-mlx")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
dinaaaaaa/llama2-7b-chat-openassistant-guanaco-fine-tune
|
dinaaaaaa
| 2024-02-26T06:02:50Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-26T05:34:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
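In the absence of official instructions, a minimal sketch of loading the model with `transformers` follows; it assumes the checkpoint is a standard Llama-2 chat model fine-tuned on openassistant-guanaco, as the repo name and tags suggest:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dinaaaaaa/llama2-7b-chat-openassistant-guanaco-fine-tune"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Guanaco-style prompt format is an assumption, not documented in this card
prompt = "### Human: Tell me a short story.### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```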
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gaurav1033/layoutlm-funsd
|
gaurav1033
| 2024-02-26T06:02:03Z | 107 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"layoutlm",
"token-classification",
"generated_from_trainer",
"dataset:funsd",
"base_model:microsoft/layoutlm-base-uncased",
"base_model:finetune:microsoft/layoutlm-base-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-02-26T05:55:44Z |
---
license: mit
base_model: microsoft/layoutlm-base-uncased
tags:
- generated_from_trainer
datasets:
- funsd
model-index:
- name: layoutlm-funsd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlm-funsd
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the funsd dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1516
- Answer: {'precision': 0.38400702987697716, 'recall': 0.5401730531520396, 'f1': 0.44889573703133023, 'number': 809}
- Header: {'precision': 0.3218390804597701, 'recall': 0.23529411764705882, 'f1': 0.27184466019417475, 'number': 119}
- Question: {'precision': 0.5132192846034215, 'recall': 0.6197183098591549, 'f1': 0.5614632071458954, 'number': 1065}
- Overall Precision: 0.4480
- Overall Recall: 0.5645
- Overall F1: 0.4996
- Overall Accuracy: 0.6209
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
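For reference, these settings correspond roughly to the following `TrainingArguments`; this is a sketch, and values not listed above (such as the output directory) are placeholders:
```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir is a placeholder
training_args = TrainingArguments(
    output_dir="layoutlm-funsd",
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    num_train_epochs=15,
    seed=42,
    lr_scheduler_type="linear",
)
```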
### Training results
| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.7219 | 1.0 | 10 | 1.5555 | {'precision': 0.04431137724550898, 'recall': 0.04573547589616811, 'f1': 0.04501216545012165, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.26320501342882724, 'recall': 0.27605633802816903, 'f1': 0.2694775435380385, 'number': 1065} | 0.1696 | 0.1661 | 0.1678 | 0.3589 |
| 1.4917 | 2.0 | 20 | 1.3323 | {'precision': 0.17622377622377622, 'recall': 0.311495673671199, 'f1': 0.2251004912907548, 'number': 809} | {'precision': 0.1, 'recall': 0.008403361344537815, 'f1': 0.015503875968992248, 'number': 119} | {'precision': 0.29382407985028075, 'recall': 0.4422535211267606, 'f1': 0.3530734632683658, 'number': 1065} | 0.2379 | 0.3633 | 0.2875 | 0.4413 |
| 1.2799 | 3.0 | 30 | 1.2482 | {'precision': 0.24236517218973358, 'recall': 0.4610630407911001, 'f1': 0.317717206132879, 'number': 809} | {'precision': 0.273972602739726, 'recall': 0.16806722689075632, 'f1': 0.20833333333333331, 'number': 119} | {'precision': 0.35655737704918034, 'recall': 0.4084507042253521, 'f1': 0.38074398249452956, 'number': 1065} | 0.2924 | 0.4155 | 0.3432 | 0.4580 |
| 1.1477 | 4.0 | 40 | 1.1758 | {'precision': 0.2900516795865633, 'recall': 0.5550061804697157, 'f1': 0.38099278744166315, 'number': 809} | {'precision': 0.3559322033898305, 'recall': 0.17647058823529413, 'f1': 0.2359550561797753, 'number': 119} | {'precision': 0.4393939393939394, 'recall': 0.49014084507042255, 'f1': 0.46338215712383485, 'number': 1065} | 0.3549 | 0.4977 | 0.4144 | 0.5219 |
| 1.0484 | 5.0 | 50 | 1.0885 | {'precision': 0.3271441202475685, 'recall': 0.4573547589616811, 'f1': 0.3814432989690722, 'number': 809} | {'precision': 0.2826086956521739, 'recall': 0.2184873949579832, 'f1': 0.24644549763033172, 'number': 119} | {'precision': 0.4808, 'recall': 0.564319248826291, 'f1': 0.5192224622030237, 'number': 1065} | 0.4032 | 0.5003 | 0.4465 | 0.5827 |
| 0.9672 | 6.0 | 60 | 1.0745 | {'precision': 0.30431309904153353, 'recall': 0.47095179233621753, 'f1': 0.36972343522561857, 'number': 809} | {'precision': 0.34782608695652173, 'recall': 0.20168067226890757, 'f1': 0.25531914893617025, 'number': 119} | {'precision': 0.43936243936243935, 'recall': 0.5953051643192488, 'f1': 0.5055821371610846, 'number': 1065} | 0.3759 | 0.5213 | 0.4368 | 0.5916 |
| 0.8787 | 7.0 | 70 | 1.1863 | {'precision': 0.3697033898305085, 'recall': 0.43139678615574784, 'f1': 0.3981745579007416, 'number': 809} | {'precision': 0.25, 'recall': 0.2184873949579832, 'f1': 0.23318385650224216, 'number': 119} | {'precision': 0.4801556420233463, 'recall': 0.5793427230046948, 'f1': 0.5251063829787234, 'number': 1065} | 0.4252 | 0.4977 | 0.4586 | 0.5870 |
| 0.8501 | 8.0 | 80 | 1.1043 | {'precision': 0.31553860819828405, 'recall': 0.40914709517923364, 'f1': 0.3562970936490851, 'number': 809} | {'precision': 0.3484848484848485, 'recall': 0.19327731092436976, 'f1': 0.24864864864864866, 'number': 119} | {'precision': 0.41997593261131166, 'recall': 0.6553990610328638, 'f1': 0.5119178584525119, 'number': 1065} | 0.3788 | 0.5278 | 0.4411 | 0.5878 |
| 0.805 | 9.0 | 90 | 1.0872 | {'precision': 0.3356828193832599, 'recall': 0.47095179233621753, 'f1': 0.39197530864197533, 'number': 809} | {'precision': 0.32894736842105265, 'recall': 0.21008403361344538, 'f1': 0.25641025641025644, 'number': 119} | {'precision': 0.45454545454545453, 'recall': 0.6197183098591549, 'f1': 0.5244338498212157, 'number': 1065} | 0.4003 | 0.5349 | 0.4579 | 0.6053 |
| 0.7686 | 10.0 | 100 | 1.1006 | {'precision': 0.35418427726120033, 'recall': 0.5179233621755254, 'f1': 0.42068273092369474, 'number': 809} | {'precision': 0.3333333333333333, 'recall': 0.2184873949579832, 'f1': 0.2639593908629441, 'number': 119} | {'precision': 0.49634443541835904, 'recall': 0.5737089201877934, 'f1': 0.5322299651567944, 'number': 1065} | 0.4238 | 0.5299 | 0.4709 | 0.6028 |
| 0.7078 | 11.0 | 110 | 1.1631 | {'precision': 0.38475665748393023, 'recall': 0.5179233621755254, 'f1': 0.4415173867228662, 'number': 809} | {'precision': 0.28846153846153844, 'recall': 0.25210084033613445, 'f1': 0.26905829596412556, 'number': 119} | {'precision': 0.520764119601329, 'recall': 0.5887323943661972, 'f1': 0.5526663728514765, 'number': 1065} | 0.4489 | 0.5399 | 0.4902 | 0.6064 |
| 0.7162 | 12.0 | 120 | 1.1517 | {'precision': 0.36400817995910023, 'recall': 0.4400494437577256, 'f1': 0.3984331281477337, 'number': 809} | {'precision': 0.28421052631578947, 'recall': 0.226890756302521, 'f1': 0.25233644859813087, 'number': 119} | {'precision': 0.4661458333333333, 'recall': 0.672300469483568, 'f1': 0.550557477893118, 'number': 1065} | 0.4212 | 0.5514 | 0.4776 | 0.6014 |
| 0.6912 | 13.0 | 130 | 1.2013 | {'precision': 0.3880718954248366, 'recall': 0.5871446229913473, 'f1': 0.4672897196261682, 'number': 809} | {'precision': 0.3888888888888889, 'recall': 0.23529411764705882, 'f1': 0.2931937172774869, 'number': 119} | {'precision': 0.5526552655265526, 'recall': 0.5765258215962441, 'f1': 0.5643382352941176, 'number': 1065} | 0.4641 | 0.5605 | 0.5077 | 0.6082 |
| 0.664 | 14.0 | 140 | 1.1337 | {'precision': 0.37344028520499106, 'recall': 0.5179233621755254, 'f1': 0.4339720352149145, 'number': 809} | {'precision': 0.3218390804597701, 'recall': 0.23529411764705882, 'f1': 0.27184466019417475, 'number': 119} | {'precision': 0.5037650602409639, 'recall': 0.6281690140845071, 'f1': 0.5591307981613038, 'number': 1065} | 0.4399 | 0.5600 | 0.4927 | 0.6142 |
| 0.6496 | 15.0 | 150 | 1.1516 | {'precision': 0.38400702987697716, 'recall': 0.5401730531520396, 'f1': 0.44889573703133023, 'number': 809} | {'precision': 0.3218390804597701, 'recall': 0.23529411764705882, 'f1': 0.27184466019417475, 'number': 119} | {'precision': 0.5132192846034215, 'recall': 0.6197183098591549, 'f1': 0.5614632071458954, 'number': 1065} | 0.4480 | 0.5645 | 0.4996 | 0.6209 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
CatBarks/bertES_smsspamOriginal_model
|
CatBarks
| 2024-02-26T05:59:47Z | 162 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-26T05:58:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
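Until that section is filled in, a minimal hedged sketch using the `transformers` pipeline is shown below; it assumes the checkpoint is a standard BERT sequence-classification model for SMS spam detection, as the repo name and tags suggest:
```python
from transformers import pipeline

# Label names come from the model config and are not documented in this card
classifier = pipeline("text-classification", model="CatBarks/bertES_smsspamOriginal_model")
print(classifier("Congratulations! You have won a free prize, reply WIN to claim."))
```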
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jspringer/echo-mistral-7b-instruct-lasttoken
|
jspringer
| 2024-02-26T05:59:22Z | 545 | 6 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"feature-extraction",
"mteb",
"arxiv:2402.15449",
"model-index",
"text-generation-inference",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-02-19T04:50:08Z |
---
tags:
- mteb
model-index:
- name: mlm
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 82.97014925373135
- type: ap
value: 49.6288385893607
- type: f1
value: 77.58957447993662
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 90.975425
- type: ap
value: 87.57349835900825
- type: f1
value: 90.96732416386632
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.708
- type: f1
value: 47.736228936979586
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.006
- type: map_at_10
value: 49.268
- type: map_at_100
value: 49.903999999999996
- type: map_at_1000
value: 49.909
- type: map_at_3
value: 44.334
- type: map_at_5
value: 47.374
- type: mrr_at_1
value: 32.788000000000004
- type: mrr_at_10
value: 49.707
- type: mrr_at_100
value: 50.346999999999994
- type: mrr_at_1000
value: 50.352
- type: mrr_at_3
value: 44.95
- type: mrr_at_5
value: 47.766999999999996
- type: ndcg_at_1
value: 32.006
- type: ndcg_at_10
value: 58.523
- type: ndcg_at_100
value: 61.095
- type: ndcg_at_1000
value: 61.190999999999995
- type: ndcg_at_3
value: 48.431000000000004
- type: ndcg_at_5
value: 53.94
- type: precision_at_1
value: 32.006
- type: precision_at_10
value: 8.791
- type: precision_at_100
value: 0.989
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 20.104
- type: precision_at_5
value: 14.751
- type: recall_at_1
value: 32.006
- type: recall_at_10
value: 87.909
- type: recall_at_100
value: 98.86200000000001
- type: recall_at_1000
value: 99.57300000000001
- type: recall_at_3
value: 60.313
- type: recall_at_5
value: 73.75500000000001
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 47.01500173547629
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 43.52209238193538
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 64.1348784470504
- type: mrr
value: 76.93762916062083
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 87.8322696692348
- type: cos_sim_spearman
value: 86.53751398463592
- type: euclidean_pearson
value: 86.1435544054336
- type: euclidean_spearman
value: 86.70799979698164
- type: manhattan_pearson
value: 86.1206703865016
- type: manhattan_spearman
value: 86.47004256773585
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 88.1461038961039
- type: f1
value: 88.09877611214092
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 35.53021718892608
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 35.34236915611622
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 36.435
- type: map_at_10
value: 49.437999999999995
- type: map_at_100
value: 51.105999999999995
- type: map_at_1000
value: 51.217999999999996
- type: map_at_3
value: 44.856
- type: map_at_5
value: 47.195
- type: mrr_at_1
value: 45.78
- type: mrr_at_10
value: 56.302
- type: mrr_at_100
value: 56.974000000000004
- type: mrr_at_1000
value: 57.001999999999995
- type: mrr_at_3
value: 53.6
- type: mrr_at_5
value: 55.059999999999995
- type: ndcg_at_1
value: 44.921
- type: ndcg_at_10
value: 56.842000000000006
- type: ndcg_at_100
value: 61.586
- type: ndcg_at_1000
value: 63.039
- type: ndcg_at_3
value: 50.612
- type: ndcg_at_5
value: 53.181
- type: precision_at_1
value: 44.921
- type: precision_at_10
value: 11.245
- type: precision_at_100
value: 1.7069999999999999
- type: precision_at_1000
value: 0.216
- type: precision_at_3
value: 24.224999999999998
- type: precision_at_5
value: 17.511
- type: recall_at_1
value: 36.435
- type: recall_at_10
value: 70.998
- type: recall_at_100
value: 89.64
- type: recall_at_1000
value: 98.654
- type: recall_at_3
value: 53.034000000000006
- type: recall_at_5
value: 60.41
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 33.371
- type: map_at_10
value: 45.301
- type: map_at_100
value: 46.663
- type: map_at_1000
value: 46.791
- type: map_at_3
value: 41.79
- type: map_at_5
value: 43.836999999999996
- type: mrr_at_1
value: 42.611
- type: mrr_at_10
value: 51.70400000000001
- type: mrr_at_100
value: 52.342
- type: mrr_at_1000
value: 52.38
- type: mrr_at_3
value: 49.374
- type: mrr_at_5
value: 50.82
- type: ndcg_at_1
value: 42.166
- type: ndcg_at_10
value: 51.49
- type: ndcg_at_100
value: 56.005
- type: ndcg_at_1000
value: 57.748
- type: ndcg_at_3
value: 46.769
- type: ndcg_at_5
value: 49.155
- type: precision_at_1
value: 42.166
- type: precision_at_10
value: 9.841
- type: precision_at_100
value: 1.569
- type: precision_at_1000
value: 0.202
- type: precision_at_3
value: 22.803
- type: precision_at_5
value: 16.229
- type: recall_at_1
value: 33.371
- type: recall_at_10
value: 62.52799999999999
- type: recall_at_100
value: 81.269
- type: recall_at_1000
value: 91.824
- type: recall_at_3
value: 48.759
- type: recall_at_5
value: 55.519
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 41.421
- type: map_at_10
value: 55.985
- type: map_at_100
value: 56.989999999999995
- type: map_at_1000
value: 57.028
- type: map_at_3
value: 52.271
- type: map_at_5
value: 54.517
- type: mrr_at_1
value: 47.272999999999996
- type: mrr_at_10
value: 59.266
- type: mrr_at_100
value: 59.821999999999996
- type: mrr_at_1000
value: 59.839
- type: mrr_at_3
value: 56.677
- type: mrr_at_5
value: 58.309999999999995
- type: ndcg_at_1
value: 47.147
- type: ndcg_at_10
value: 62.596
- type: ndcg_at_100
value: 66.219
- type: ndcg_at_1000
value: 66.886
- type: ndcg_at_3
value: 56.558
- type: ndcg_at_5
value: 59.805
- type: precision_at_1
value: 47.147
- type: precision_at_10
value: 10.245
- type: precision_at_100
value: 1.302
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 25.663999999999998
- type: precision_at_5
value: 17.793
- type: recall_at_1
value: 41.421
- type: recall_at_10
value: 78.77499999999999
- type: recall_at_100
value: 93.996
- type: recall_at_1000
value: 98.60600000000001
- type: recall_at_3
value: 62.891
- type: recall_at_5
value: 70.819
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.517999999999997
- type: map_at_10
value: 37.468
- type: map_at_100
value: 38.667
- type: map_at_1000
value: 38.743
- type: map_at_3
value: 34.524
- type: map_at_5
value: 36.175000000000004
- type: mrr_at_1
value: 29.378999999999998
- type: mrr_at_10
value: 39.54
- type: mrr_at_100
value: 40.469
- type: mrr_at_1000
value: 40.522000000000006
- type: mrr_at_3
value: 36.685
- type: mrr_at_5
value: 38.324000000000005
- type: ndcg_at_1
value: 29.718
- type: ndcg_at_10
value: 43.091
- type: ndcg_at_100
value: 48.44
- type: ndcg_at_1000
value: 50.181
- type: ndcg_at_3
value: 37.34
- type: ndcg_at_5
value: 40.177
- type: precision_at_1
value: 29.718
- type: precision_at_10
value: 6.723
- type: precision_at_100
value: 0.992
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 16.083
- type: precision_at_5
value: 11.322000000000001
- type: recall_at_1
value: 27.517999999999997
- type: recall_at_10
value: 58.196999999999996
- type: recall_at_100
value: 82.07799999999999
- type: recall_at_1000
value: 94.935
- type: recall_at_3
value: 42.842
- type: recall_at_5
value: 49.58
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.621
- type: map_at_10
value: 30.175
- type: map_at_100
value: 31.496000000000002
- type: map_at_1000
value: 31.602000000000004
- type: map_at_3
value: 26.753
- type: map_at_5
value: 28.857
- type: mrr_at_1
value: 25.497999999999998
- type: mrr_at_10
value: 35.44
- type: mrr_at_100
value: 36.353
- type: mrr_at_1000
value: 36.412
- type: mrr_at_3
value: 32.275999999999996
- type: mrr_at_5
value: 34.434
- type: ndcg_at_1
value: 24.502
- type: ndcg_at_10
value: 36.423
- type: ndcg_at_100
value: 42.289
- type: ndcg_at_1000
value: 44.59
- type: ndcg_at_3
value: 30.477999999999998
- type: ndcg_at_5
value: 33.787
- type: precision_at_1
value: 24.502
- type: precision_at_10
value: 6.978
- type: precision_at_100
value: 1.139
- type: precision_at_1000
value: 0.145
- type: precision_at_3
value: 15.008
- type: precision_at_5
value: 11.468
- type: recall_at_1
value: 19.621
- type: recall_at_10
value: 50.516000000000005
- type: recall_at_100
value: 75.721
- type: recall_at_1000
value: 91.77199999999999
- type: recall_at_3
value: 34.695
- type: recall_at_5
value: 42.849
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 33.525
- type: map_at_10
value: 46.153
- type: map_at_100
value: 47.61
- type: map_at_1000
value: 47.715
- type: map_at_3
value: 42.397
- type: map_at_5
value: 44.487
- type: mrr_at_1
value: 42.445
- type: mrr_at_10
value: 52.174
- type: mrr_at_100
value: 52.986999999999995
- type: mrr_at_1000
value: 53.016
- type: mrr_at_3
value: 49.647000000000006
- type: mrr_at_5
value: 51.215999999999994
- type: ndcg_at_1
value: 42.156
- type: ndcg_at_10
value: 52.698
- type: ndcg_at_100
value: 58.167
- type: ndcg_at_1000
value: 59.71300000000001
- type: ndcg_at_3
value: 47.191
- type: ndcg_at_5
value: 49.745
- type: precision_at_1
value: 42.156
- type: precision_at_10
value: 9.682
- type: precision_at_100
value: 1.469
- type: precision_at_1000
value: 0.17700000000000002
- type: precision_at_3
value: 22.682
- type: precision_at_5
value: 16.035
- type: recall_at_1
value: 33.525
- type: recall_at_10
value: 66.142
- type: recall_at_100
value: 88.248
- type: recall_at_1000
value: 97.806
- type: recall_at_3
value: 50.541000000000004
- type: recall_at_5
value: 57.275
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.249000000000002
- type: map_at_10
value: 41.659
- type: map_at_100
value: 43.001
- type: map_at_1000
value: 43.094
- type: map_at_3
value: 37.607
- type: map_at_5
value: 39.662
- type: mrr_at_1
value: 36.301
- type: mrr_at_10
value: 47.482
- type: mrr_at_100
value: 48.251
- type: mrr_at_1000
value: 48.288
- type: mrr_at_3
value: 44.444
- type: mrr_at_5
value: 46.013999999999996
- type: ndcg_at_1
value: 35.616
- type: ndcg_at_10
value: 49.021
- type: ndcg_at_100
value: 54.362
- type: ndcg_at_1000
value: 55.864999999999995
- type: ndcg_at_3
value: 42.515
- type: ndcg_at_5
value: 45.053
- type: precision_at_1
value: 35.616
- type: precision_at_10
value: 9.372
- type: precision_at_100
value: 1.4120000000000001
- type: precision_at_1000
value: 0.172
- type: precision_at_3
value: 21.043
- type: precision_at_5
value: 14.84
- type: recall_at_1
value: 28.249000000000002
- type: recall_at_10
value: 65.514
- type: recall_at_100
value: 87.613
- type: recall_at_1000
value: 97.03
- type: recall_at_3
value: 47.21
- type: recall_at_5
value: 54.077
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.164583333333333
- type: map_at_10
value: 40.632000000000005
- type: map_at_100
value: 41.96875
- type: map_at_1000
value: 42.07508333333333
- type: map_at_3
value: 37.18458333333333
- type: map_at_5
value: 39.13700000000001
- type: mrr_at_1
value: 35.2035
- type: mrr_at_10
value: 45.28816666666666
- type: mrr_at_100
value: 46.11466666666667
- type: mrr_at_1000
value: 46.15741666666667
- type: mrr_at_3
value: 42.62925
- type: mrr_at_5
value: 44.18141666666667
- type: ndcg_at_1
value: 34.88958333333333
- type: ndcg_at_10
value: 46.90650000000001
- type: ndcg_at_100
value: 52.135333333333335
- type: ndcg_at_1000
value: 53.89766666666668
- type: ndcg_at_3
value: 41.32075
- type: ndcg_at_5
value: 44.02083333333333
- type: precision_at_1
value: 34.88958333333333
- type: precision_at_10
value: 8.392833333333332
- type: precision_at_100
value: 1.3085833333333334
- type: precision_at_1000
value: 0.16458333333333333
- type: precision_at_3
value: 19.361166666666666
- type: precision_at_5
value: 13.808416666666668
- type: recall_at_1
value: 29.164583333333333
- type: recall_at_10
value: 60.874666666666656
- type: recall_at_100
value: 83.21008333333334
- type: recall_at_1000
value: 95.09275000000001
- type: recall_at_3
value: 45.37591666666667
- type: recall_at_5
value: 52.367666666666665
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.682000000000002
- type: map_at_10
value: 37.913000000000004
- type: map_at_100
value: 39.037
- type: map_at_1000
value: 39.123999999999995
- type: map_at_3
value: 35.398
- type: map_at_5
value: 36.906
- type: mrr_at_1
value: 32.362
- type: mrr_at_10
value: 40.92
- type: mrr_at_100
value: 41.748000000000005
- type: mrr_at_1000
value: 41.81
- type: mrr_at_3
value: 38.701
- type: mrr_at_5
value: 39.936
- type: ndcg_at_1
value: 32.208999999999996
- type: ndcg_at_10
value: 42.84
- type: ndcg_at_100
value: 47.927
- type: ndcg_at_1000
value: 50.048
- type: ndcg_at_3
value: 38.376
- type: ndcg_at_5
value: 40.661
- type: precision_at_1
value: 32.208999999999996
- type: precision_at_10
value: 6.718
- type: precision_at_100
value: 1.012
- type: precision_at_1000
value: 0.127
- type: precision_at_3
value: 16.667
- type: precision_at_5
value: 11.503
- type: recall_at_1
value: 28.682000000000002
- type: recall_at_10
value: 54.872
- type: recall_at_100
value: 77.42999999999999
- type: recall_at_1000
value: 93.054
- type: recall_at_3
value: 42.577999999999996
- type: recall_at_5
value: 48.363
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.698
- type: map_at_10
value: 28.777
- type: map_at_100
value: 30.091
- type: map_at_1000
value: 30.209999999999997
- type: map_at_3
value: 25.874000000000002
- type: map_at_5
value: 27.438000000000002
- type: mrr_at_1
value: 24.295
- type: mrr_at_10
value: 33.077
- type: mrr_at_100
value: 34.036
- type: mrr_at_1000
value: 34.1
- type: mrr_at_3
value: 30.523
- type: mrr_at_5
value: 31.891000000000002
- type: ndcg_at_1
value: 24.535
- type: ndcg_at_10
value: 34.393
- type: ndcg_at_100
value: 40.213
- type: ndcg_at_1000
value: 42.748000000000005
- type: ndcg_at_3
value: 29.316
- type: ndcg_at_5
value: 31.588
- type: precision_at_1
value: 24.535
- type: precision_at_10
value: 6.483
- type: precision_at_100
value: 1.102
- type: precision_at_1000
value: 0.151
- type: precision_at_3
value: 14.201
- type: precision_at_5
value: 10.344000000000001
- type: recall_at_1
value: 19.698
- type: recall_at_10
value: 46.903
- type: recall_at_100
value: 72.624
- type: recall_at_1000
value: 90.339
- type: recall_at_3
value: 32.482
- type: recall_at_5
value: 38.452
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.56
- type: map_at_10
value: 41.993
- type: map_at_100
value: 43.317
- type: map_at_1000
value: 43.399
- type: map_at_3
value: 38.415
- type: map_at_5
value: 40.472
- type: mrr_at_1
value: 36.474000000000004
- type: mrr_at_10
value: 46.562
- type: mrr_at_100
value: 47.497
- type: mrr_at_1000
value: 47.532999999999994
- type: mrr_at_3
value: 43.905
- type: mrr_at_5
value: 45.379000000000005
- type: ndcg_at_1
value: 36.287000000000006
- type: ndcg_at_10
value: 48.262
- type: ndcg_at_100
value: 53.789
- type: ndcg_at_1000
value: 55.44
- type: ndcg_at_3
value: 42.358000000000004
- type: ndcg_at_5
value: 45.221000000000004
- type: precision_at_1
value: 36.287000000000006
- type: precision_at_10
value: 8.265
- type: precision_at_100
value: 1.24
- type: precision_at_1000
value: 0.148
- type: precision_at_3
value: 19.558
- type: precision_at_5
value: 13.880999999999998
- type: recall_at_1
value: 30.56
- type: recall_at_10
value: 62.891
- type: recall_at_100
value: 85.964
- type: recall_at_1000
value: 97.087
- type: recall_at_3
value: 46.755
- type: recall_at_5
value: 53.986000000000004
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.432000000000002
- type: map_at_10
value: 40.898
- type: map_at_100
value: 42.794
- type: map_at_1000
value: 43.029
- type: map_at_3
value: 37.658
- type: map_at_5
value: 39.519
- type: mrr_at_1
value: 36.364000000000004
- type: mrr_at_10
value: 46.9
- type: mrr_at_100
value: 47.819
- type: mrr_at_1000
value: 47.848
- type: mrr_at_3
value: 44.202999999999996
- type: mrr_at_5
value: 45.715
- type: ndcg_at_1
value: 35.573
- type: ndcg_at_10
value: 47.628
- type: ndcg_at_100
value: 53.88699999999999
- type: ndcg_at_1000
value: 55.584
- type: ndcg_at_3
value: 42.669000000000004
- type: ndcg_at_5
value: 45.036
- type: precision_at_1
value: 35.573
- type: precision_at_10
value: 8.933
- type: precision_at_100
value: 1.8159999999999998
- type: precision_at_1000
value: 0.256
- type: precision_at_3
value: 20.29
- type: precision_at_5
value: 14.387
- type: recall_at_1
value: 29.432000000000002
- type: recall_at_10
value: 60.388
- type: recall_at_100
value: 87.144
- type: recall_at_1000
value: 97.154
- type: recall_at_3
value: 45.675
- type: recall_at_5
value: 52.35300000000001
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.462999999999997
- type: map_at_10
value: 31.824
- type: map_at_100
value: 32.853
- type: map_at_1000
value: 32.948
- type: map_at_3
value: 28.671999999999997
- type: map_at_5
value: 30.579
- type: mrr_at_1
value: 23.66
- type: mrr_at_10
value: 34.091
- type: mrr_at_100
value: 35.077999999999996
- type: mrr_at_1000
value: 35.138999999999996
- type: mrr_at_3
value: 31.516
- type: mrr_at_5
value: 33.078
- type: ndcg_at_1
value: 23.845
- type: ndcg_at_10
value: 37.594
- type: ndcg_at_100
value: 42.74
- type: ndcg_at_1000
value: 44.93
- type: ndcg_at_3
value: 31.667
- type: ndcg_at_5
value: 34.841
- type: precision_at_1
value: 23.845
- type: precision_at_10
value: 6.229
- type: precision_at_100
value: 0.943
- type: precision_at_1000
value: 0.125
- type: precision_at_3
value: 14.11
- type: precision_at_5
value: 10.388
- type: recall_at_1
value: 21.462999999999997
- type: recall_at_10
value: 52.772
- type: recall_at_100
value: 76.794
- type: recall_at_1000
value: 92.852
- type: recall_at_3
value: 37.049
- type: recall_at_5
value: 44.729
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.466
- type: map_at_10
value: 25.275
- type: map_at_100
value: 27.176000000000002
- type: map_at_1000
value: 27.374
- type: map_at_3
value: 21.438
- type: map_at_5
value: 23.366
- type: mrr_at_1
value: 35.699999999999996
- type: mrr_at_10
value: 47.238
- type: mrr_at_100
value: 47.99
- type: mrr_at_1000
value: 48.021
- type: mrr_at_3
value: 44.463
- type: mrr_at_5
value: 46.039
- type: ndcg_at_1
value: 35.244
- type: ndcg_at_10
value: 34.559
- type: ndcg_at_100
value: 41.74
- type: ndcg_at_1000
value: 45.105000000000004
- type: ndcg_at_3
value: 29.284
- type: ndcg_at_5
value: 30.903999999999996
- type: precision_at_1
value: 35.244
- type: precision_at_10
value: 10.463000000000001
- type: precision_at_100
value: 1.8259999999999998
- type: precision_at_1000
value: 0.246
- type: precision_at_3
value: 21.65
- type: precision_at_5
value: 16.078
- type: recall_at_1
value: 15.466
- type: recall_at_10
value: 39.782000000000004
- type: recall_at_100
value: 64.622
- type: recall_at_1000
value: 83.233
- type: recall_at_3
value: 26.398
- type: recall_at_5
value: 31.676
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.414
- type: map_at_10
value: 22.435
- type: map_at_100
value: 32.393
- type: map_at_1000
value: 34.454
- type: map_at_3
value: 15.346000000000002
- type: map_at_5
value: 18.282999999999998
- type: mrr_at_1
value: 71.5
- type: mrr_at_10
value: 78.795
- type: mrr_at_100
value: 79.046
- type: mrr_at_1000
value: 79.054
- type: mrr_at_3
value: 77.333
- type: mrr_at_5
value: 78.146
- type: ndcg_at_1
value: 60.75000000000001
- type: ndcg_at_10
value: 46.829
- type: ndcg_at_100
value: 52.370000000000005
- type: ndcg_at_1000
value: 59.943999999999996
- type: ndcg_at_3
value: 51.33
- type: ndcg_at_5
value: 48.814
- type: precision_at_1
value: 71.75
- type: precision_at_10
value: 37.525
- type: precision_at_100
value: 12.075
- type: precision_at_1000
value: 2.464
- type: precision_at_3
value: 54.75
- type: precision_at_5
value: 47.55
- type: recall_at_1
value: 9.414
- type: recall_at_10
value: 28.67
- type: recall_at_100
value: 59.924
- type: recall_at_1000
value: 83.921
- type: recall_at_3
value: 16.985
- type: recall_at_5
value: 21.372
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 52.18000000000001
- type: f1
value: 47.04613218997081
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 82.57900000000001
- type: map_at_10
value: 88.465
- type: map_at_100
value: 88.649
- type: map_at_1000
value: 88.661
- type: map_at_3
value: 87.709
- type: map_at_5
value: 88.191
- type: mrr_at_1
value: 88.899
- type: mrr_at_10
value: 93.35900000000001
- type: mrr_at_100
value: 93.38499999999999
- type: mrr_at_1000
value: 93.38499999999999
- type: mrr_at_3
value: 93.012
- type: mrr_at_5
value: 93.282
- type: ndcg_at_1
value: 88.98899999999999
- type: ndcg_at_10
value: 91.22
- type: ndcg_at_100
value: 91.806
- type: ndcg_at_1000
value: 92.013
- type: ndcg_at_3
value: 90.236
- type: ndcg_at_5
value: 90.798
- type: precision_at_1
value: 88.98899999999999
- type: precision_at_10
value: 10.537
- type: precision_at_100
value: 1.106
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 33.598
- type: precision_at_5
value: 20.618
- type: recall_at_1
value: 82.57900000000001
- type: recall_at_10
value: 94.95400000000001
- type: recall_at_100
value: 97.14
- type: recall_at_1000
value: 98.407
- type: recall_at_3
value: 92.203
- type: recall_at_5
value: 93.747
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.871000000000002
- type: map_at_10
value: 46.131
- type: map_at_100
value: 48.245
- type: map_at_1000
value: 48.361
- type: map_at_3
value: 40.03
- type: map_at_5
value: 43.634
- type: mrr_at_1
value: 52.932
- type: mrr_at_10
value: 61.61299999999999
- type: mrr_at_100
value: 62.205
- type: mrr_at_1000
value: 62.224999999999994
- type: mrr_at_3
value: 59.388
- type: mrr_at_5
value: 60.760999999999996
- type: ndcg_at_1
value: 53.395
- type: ndcg_at_10
value: 54.506
- type: ndcg_at_100
value: 61.151999999999994
- type: ndcg_at_1000
value: 62.882000000000005
- type: ndcg_at_3
value: 49.903999999999996
- type: ndcg_at_5
value: 51.599
- type: precision_at_1
value: 53.395
- type: precision_at_10
value: 15.247
- type: precision_at_100
value: 2.221
- type: precision_at_1000
value: 0.255
- type: precision_at_3
value: 33.539
- type: precision_at_5
value: 24.722
- type: recall_at_1
value: 27.871000000000002
- type: recall_at_10
value: 62.074
- type: recall_at_100
value: 86.531
- type: recall_at_1000
value: 96.574
- type: recall_at_3
value: 45.003
- type: recall_at_5
value: 53.00899999999999
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.513
- type: map_at_10
value: 69.066
- type: map_at_100
value: 69.903
- type: map_at_1000
value: 69.949
- type: map_at_3
value: 65.44200000000001
- type: map_at_5
value: 67.784
- type: mrr_at_1
value: 80.891
- type: mrr_at_10
value: 86.42699999999999
- type: mrr_at_100
value: 86.577
- type: mrr_at_1000
value: 86.58200000000001
- type: mrr_at_3
value: 85.6
- type: mrr_at_5
value: 86.114
- type: ndcg_at_1
value: 81.026
- type: ndcg_at_10
value: 76.412
- type: ndcg_at_100
value: 79.16
- type: ndcg_at_1000
value: 79.989
- type: ndcg_at_3
value: 71.45
- type: ndcg_at_5
value: 74.286
- type: precision_at_1
value: 81.026
- type: precision_at_10
value: 16.198999999999998
- type: precision_at_100
value: 1.831
- type: precision_at_1000
value: 0.194
- type: precision_at_3
value: 46.721000000000004
- type: precision_at_5
value: 30.266
- type: recall_at_1
value: 40.513
- type: recall_at_10
value: 80.99300000000001
- type: recall_at_100
value: 91.526
- type: recall_at_1000
value: 96.935
- type: recall_at_3
value: 70.081
- type: recall_at_5
value: 75.665
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 87.42320000000001
- type: ap
value: 83.59975323233843
- type: f1
value: 87.38669942597816
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 22.676
- type: map_at_10
value: 35.865
- type: map_at_100
value: 37.019000000000005
- type: map_at_1000
value: 37.062
- type: map_at_3
value: 31.629
- type: map_at_5
value: 34.050999999999995
- type: mrr_at_1
value: 23.023
- type: mrr_at_10
value: 36.138999999999996
- type: mrr_at_100
value: 37.242
- type: mrr_at_1000
value: 37.28
- type: mrr_at_3
value: 32.053
- type: mrr_at_5
value: 34.383
- type: ndcg_at_1
value: 23.308999999999997
- type: ndcg_at_10
value: 43.254
- type: ndcg_at_100
value: 48.763
- type: ndcg_at_1000
value: 49.788
- type: ndcg_at_3
value: 34.688
- type: ndcg_at_5
value: 38.973
- type: precision_at_1
value: 23.308999999999997
- type: precision_at_10
value: 6.909999999999999
- type: precision_at_100
value: 0.967
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 14.818999999999999
- type: precision_at_5
value: 11.072
- type: recall_at_1
value: 22.676
- type: recall_at_10
value: 66.077
- type: recall_at_100
value: 91.4
- type: recall_at_1000
value: 99.143
- type: recall_at_3
value: 42.845
- type: recall_at_5
value: 53.08500000000001
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 96.16279069767444
- type: f1
value: 96.02183835878418
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 85.74783401732788
- type: f1
value: 70.59661579230463
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 79.67047747141895
- type: f1
value: 77.06311183471965
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 82.82447881640887
- type: f1
value: 82.37598020010746
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 30.266131881264467
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 29.673653452453998
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 32.91846122902102
- type: mrr
value: 34.2557300204471
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.762
- type: map_at_10
value: 15.134
- type: map_at_100
value: 19.341
- type: map_at_1000
value: 20.961
- type: map_at_3
value: 10.735999999999999
- type: map_at_5
value: 12.751999999999999
- type: mrr_at_1
value: 52.941
- type: mrr_at_10
value: 60.766
- type: mrr_at_100
value: 61.196
- type: mrr_at_1000
value: 61.227
- type: mrr_at_3
value: 58.720000000000006
- type: mrr_at_5
value: 59.866
- type: ndcg_at_1
value: 50.929
- type: ndcg_at_10
value: 39.554
- type: ndcg_at_100
value: 36.307
- type: ndcg_at_1000
value: 44.743
- type: ndcg_at_3
value: 44.157000000000004
- type: ndcg_at_5
value: 42.142
- type: precision_at_1
value: 52.322
- type: precision_at_10
value: 29.412
- type: precision_at_100
value: 9.365
- type: precision_at_1000
value: 2.2159999999999997
- type: precision_at_3
value: 40.557
- type: precision_at_5
value: 35.913000000000004
- type: recall_at_1
value: 6.762
- type: recall_at_10
value: 19.689999999999998
- type: recall_at_100
value: 36.687
- type: recall_at_1000
value: 67.23
- type: recall_at_3
value: 11.773
- type: recall_at_5
value: 15.18
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 36.612
- type: map_at_10
value: 54.208
- type: map_at_100
value: 55.056000000000004
- type: map_at_1000
value: 55.069
- type: map_at_3
value: 49.45
- type: map_at_5
value: 52.556000000000004
- type: mrr_at_1
value: 41.976
- type: mrr_at_10
value: 56.972
- type: mrr_at_100
value: 57.534
- type: mrr_at_1000
value: 57.542
- type: mrr_at_3
value: 53.312000000000005
- type: mrr_at_5
value: 55.672999999999995
- type: ndcg_at_1
value: 41.338
- type: ndcg_at_10
value: 62.309000000000005
- type: ndcg_at_100
value: 65.557
- type: ndcg_at_1000
value: 65.809
- type: ndcg_at_3
value: 53.74100000000001
- type: ndcg_at_5
value: 58.772999999999996
- type: precision_at_1
value: 41.338
- type: precision_at_10
value: 10.107
- type: precision_at_100
value: 1.1900000000000002
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 24.488
- type: precision_at_5
value: 17.596
- type: recall_at_1
value: 36.612
- type: recall_at_10
value: 84.408
- type: recall_at_100
value: 97.929
- type: recall_at_1000
value: 99.725
- type: recall_at_3
value: 62.676
- type: recall_at_5
value: 74.24199999999999
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.573
- type: map_at_10
value: 85.81
- type: map_at_100
value: 86.434
- type: map_at_1000
value: 86.446
- type: map_at_3
value: 82.884
- type: map_at_5
value: 84.772
- type: mrr_at_1
value: 82.53
- type: mrr_at_10
value: 88.51299999999999
- type: mrr_at_100
value: 88.59700000000001
- type: mrr_at_1000
value: 88.598
- type: mrr_at_3
value: 87.595
- type: mrr_at_5
value: 88.266
- type: ndcg_at_1
value: 82.39999999999999
- type: ndcg_at_10
value: 89.337
- type: ndcg_at_100
value: 90.436
- type: ndcg_at_1000
value: 90.498
- type: ndcg_at_3
value: 86.676
- type: ndcg_at_5
value: 88.241
- type: precision_at_1
value: 82.39999999999999
- type: precision_at_10
value: 13.58
- type: precision_at_100
value: 1.543
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 38.04
- type: precision_at_5
value: 25.044
- type: recall_at_1
value: 71.573
- type: recall_at_10
value: 96.066
- type: recall_at_100
value: 99.73100000000001
- type: recall_at_1000
value: 99.991
- type: recall_at_3
value: 88.34
- type: recall_at_5
value: 92.79899999999999
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 61.767168063971724
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 66.00502775826037
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.718
- type: map_at_10
value: 12.13
- type: map_at_100
value: 14.269000000000002
- type: map_at_1000
value: 14.578
- type: map_at_3
value: 8.605
- type: map_at_5
value: 10.483
- type: mrr_at_1
value: 23.7
- type: mrr_at_10
value: 34.354
- type: mrr_at_100
value: 35.522
- type: mrr_at_1000
value: 35.571999999999996
- type: mrr_at_3
value: 31.15
- type: mrr_at_5
value: 32.98
- type: ndcg_at_1
value: 23.3
- type: ndcg_at_10
value: 20.171
- type: ndcg_at_100
value: 28.456
- type: ndcg_at_1000
value: 33.826
- type: ndcg_at_3
value: 19.104
- type: ndcg_at_5
value: 16.977999999999998
- type: precision_at_1
value: 23.3
- type: precision_at_10
value: 10.45
- type: precision_at_100
value: 2.239
- type: precision_at_1000
value: 0.35300000000000004
- type: precision_at_3
value: 17.933
- type: precision_at_5
value: 15.1
- type: recall_at_1
value: 4.718
- type: recall_at_10
value: 21.221999999999998
- type: recall_at_100
value: 45.42
- type: recall_at_1000
value: 71.642
- type: recall_at_3
value: 10.922
- type: recall_at_5
value: 15.322
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 85.2065344862739
- type: cos_sim_spearman
value: 83.2276569587515
- type: euclidean_pearson
value: 83.42726762105312
- type: euclidean_spearman
value: 83.31396596997742
- type: manhattan_pearson
value: 83.41123401762816
- type: manhattan_spearman
value: 83.34393052682026
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 81.28253173719754
- type: cos_sim_spearman
value: 76.12995701324436
- type: euclidean_pearson
value: 75.30693691794121
- type: euclidean_spearman
value: 75.12472789129536
- type: manhattan_pearson
value: 75.35860808729171
- type: manhattan_spearman
value: 75.30445827952794
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 82.09358031005694
- type: cos_sim_spearman
value: 83.18811147636619
- type: euclidean_pearson
value: 82.65513459991631
- type: euclidean_spearman
value: 82.71085530442987
- type: manhattan_pearson
value: 82.67700926821576
- type: manhattan_spearman
value: 82.73815539380426
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 81.51365440223137
- type: cos_sim_spearman
value: 80.59933905019179
- type: euclidean_pearson
value: 80.56660025433806
- type: euclidean_spearman
value: 80.27926539084027
- type: manhattan_pearson
value: 80.64632724055481
- type: manhattan_spearman
value: 80.43616365139444
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.8590461417506
- type: cos_sim_spearman
value: 87.16337291721602
- type: euclidean_pearson
value: 85.8847725068404
- type: euclidean_spearman
value: 86.12602873624066
- type: manhattan_pearson
value: 86.04095861363909
- type: manhattan_spearman
value: 86.35535645007629
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.61371557181502
- type: cos_sim_spearman
value: 85.16330754442785
- type: euclidean_pearson
value: 84.20831431260608
- type: euclidean_spearman
value: 84.33191523212125
- type: manhattan_pearson
value: 84.34911007642411
- type: manhattan_spearman
value: 84.49670164290394
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 90.54452933158781
- type: cos_sim_spearman
value: 90.88214621695892
- type: euclidean_pearson
value: 91.38488015281216
- type: euclidean_spearman
value: 91.01822259603908
- type: manhattan_pearson
value: 91.36449776198687
- type: manhattan_spearman
value: 90.90478717381717
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 68.00941643037453
- type: cos_sim_spearman
value: 67.03588472081898
- type: euclidean_pearson
value: 67.35224911601603
- type: euclidean_spearman
value: 66.35544831459266
- type: manhattan_pearson
value: 67.35080066508304
- type: manhattan_spearman
value: 66.07893473733782
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 85.18291011086279
- type: cos_sim_spearman
value: 85.66913777481429
- type: euclidean_pearson
value: 84.81115930027242
- type: euclidean_spearman
value: 85.07133983924173
- type: manhattan_pearson
value: 84.88932120524983
- type: manhattan_spearman
value: 85.176903109055
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 83.67543572266588
- type: mrr
value: 95.9468146232852
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 59.633
- type: map_at_10
value: 69.801
- type: map_at_100
value: 70.504
- type: map_at_1000
value: 70.519
- type: map_at_3
value: 67.72500000000001
- type: map_at_5
value: 68.812
- type: mrr_at_1
value: 62.333000000000006
- type: mrr_at_10
value: 70.956
- type: mrr_at_100
value: 71.489
- type: mrr_at_1000
value: 71.504
- type: mrr_at_3
value: 69.44399999999999
- type: mrr_at_5
value: 70.244
- type: ndcg_at_1
value: 62.0
- type: ndcg_at_10
value: 73.98599999999999
- type: ndcg_at_100
value: 76.629
- type: ndcg_at_1000
value: 77.054
- type: ndcg_at_3
value: 70.513
- type: ndcg_at_5
value: 71.978
- type: precision_at_1
value: 62.0
- type: precision_at_10
value: 9.633
- type: precision_at_100
value: 1.097
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 27.556000000000004
- type: precision_at_5
value: 17.666999999999998
- type: recall_at_1
value: 59.633
- type: recall_at_10
value: 85.52199999999999
- type: recall_at_100
value: 96.667
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 75.767
- type: recall_at_5
value: 79.76100000000001
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.77821782178218
- type: cos_sim_ap
value: 94.58684455008866
- type: cos_sim_f1
value: 88.51282051282053
- type: cos_sim_precision
value: 90.84210526315789
- type: cos_sim_recall
value: 86.3
- type: dot_accuracy
value: 99.77623762376237
- type: dot_ap
value: 94.86277541733045
- type: dot_f1
value: 88.66897575457693
- type: dot_precision
value: 87.75710088148874
- type: dot_recall
value: 89.60000000000001
- type: euclidean_accuracy
value: 99.76732673267327
- type: euclidean_ap
value: 94.12114402691984
- type: euclidean_f1
value: 87.96804792810784
- type: euclidean_precision
value: 87.83649052841476
- type: euclidean_recall
value: 88.1
- type: manhattan_accuracy
value: 99.77227722772277
- type: manhattan_ap
value: 94.33665105240306
- type: manhattan_f1
value: 88.25587206396803
- type: manhattan_precision
value: 88.21178821178822
- type: manhattan_recall
value: 88.3
- type: max_accuracy
value: 99.77821782178218
- type: max_ap
value: 94.86277541733045
- type: max_f1
value: 88.66897575457693
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 72.03943478268592
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 35.285037897356496
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 51.83578447913503
- type: mrr
value: 52.69070696460402
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.89437612567638
- type: cos_sim_spearman
value: 30.7277819987126
- type: dot_pearson
value: 30.999783674122526
- type: dot_spearman
value: 30.992168551124905
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22699999999999998
- type: map_at_10
value: 1.8950000000000002
- type: map_at_100
value: 11.712
- type: map_at_1000
value: 28.713
- type: map_at_3
value: 0.65
- type: map_at_5
value: 1.011
- type: mrr_at_1
value: 92.0
- type: mrr_at_10
value: 95.39999999999999
- type: mrr_at_100
value: 95.39999999999999
- type: mrr_at_1000
value: 95.39999999999999
- type: mrr_at_3
value: 95.0
- type: mrr_at_5
value: 95.39999999999999
- type: ndcg_at_1
value: 83.0
- type: ndcg_at_10
value: 76.658
- type: ndcg_at_100
value: 60.755
- type: ndcg_at_1000
value: 55.05
- type: ndcg_at_3
value: 82.961
- type: ndcg_at_5
value: 80.008
- type: precision_at_1
value: 90.0
- type: precision_at_10
value: 79.80000000000001
- type: precision_at_100
value: 62.019999999999996
- type: precision_at_1000
value: 24.157999999999998
- type: precision_at_3
value: 88.0
- type: precision_at_5
value: 83.6
- type: recall_at_1
value: 0.22699999999999998
- type: recall_at_10
value: 2.086
- type: recall_at_100
value: 15.262
- type: recall_at_1000
value: 51.800000000000004
- type: recall_at_3
value: 0.679
- type: recall_at_5
value: 1.0739999999999998
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.521
- type: map_at_10
value: 7.281
- type: map_at_100
value: 12.717
- type: map_at_1000
value: 14.266000000000002
- type: map_at_3
value: 3.62
- type: map_at_5
value: 4.7010000000000005
- type: mrr_at_1
value: 18.367
- type: mrr_at_10
value: 34.906
- type: mrr_at_100
value: 36.333
- type: mrr_at_1000
value: 36.348
- type: mrr_at_3
value: 29.592000000000002
- type: mrr_at_5
value: 33.367000000000004
- type: ndcg_at_1
value: 19.387999999999998
- type: ndcg_at_10
value: 18.523
- type: ndcg_at_100
value: 30.932
- type: ndcg_at_1000
value: 42.942
- type: ndcg_at_3
value: 18.901
- type: ndcg_at_5
value: 17.974999999999998
- type: precision_at_1
value: 20.408
- type: precision_at_10
value: 17.347
- type: precision_at_100
value: 6.898
- type: precision_at_1000
value: 1.482
- type: precision_at_3
value: 21.088
- type: precision_at_5
value: 19.184
- type: recall_at_1
value: 1.521
- type: recall_at_10
value: 13.406
- type: recall_at_100
value: 43.418
- type: recall_at_1000
value: 80.247
- type: recall_at_3
value: 4.673
- type: recall_at_5
value: 7.247000000000001
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.9084
- type: ap
value: 15.388385311898144
- type: f1
value: 55.760189174489426
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 62.399547255234864
- type: f1
value: 62.61398519525303
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 53.041094760846164
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.92394349406926
- type: cos_sim_ap
value: 79.93037248584875
- type: cos_sim_f1
value: 73.21063394683026
- type: cos_sim_precision
value: 70.99652949925633
- type: cos_sim_recall
value: 75.56728232189973
- type: dot_accuracy
value: 87.80473266972642
- type: dot_ap
value: 79.11055417163318
- type: dot_f1
value: 72.79587473273801
- type: dot_precision
value: 69.55058880076905
- type: dot_recall
value: 76.35883905013192
- type: euclidean_accuracy
value: 87.91202241163496
- type: euclidean_ap
value: 79.61955502404068
- type: euclidean_f1
value: 72.65956080647231
- type: euclidean_precision
value: 70.778083562672
- type: euclidean_recall
value: 74.64379947229551
- type: manhattan_accuracy
value: 87.7749299636407
- type: manhattan_ap
value: 79.33286131650932
- type: manhattan_f1
value: 72.44748412310699
- type: manhattan_precision
value: 67.43974533879036
- type: manhattan_recall
value: 78.25857519788919
- type: max_accuracy
value: 87.92394349406926
- type: max_ap
value: 79.93037248584875
- type: max_f1
value: 73.21063394683026
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.89987192921178
- type: cos_sim_ap
value: 87.49525152555509
- type: cos_sim_f1
value: 80.05039276715578
- type: cos_sim_precision
value: 77.15714285714286
- type: cos_sim_recall
value: 83.1690791499846
- type: dot_accuracy
value: 89.58163542515621
- type: dot_ap
value: 86.87353801172357
- type: dot_f1
value: 79.50204384986993
- type: dot_precision
value: 76.83522482401953
- type: dot_recall
value: 82.36064059131506
- type: euclidean_accuracy
value: 89.81255093724532
- type: euclidean_ap
value: 87.41058010369022
- type: euclidean_f1
value: 79.94095829233214
- type: euclidean_precision
value: 78.61396456751525
- type: euclidean_recall
value: 81.3135201724669
- type: manhattan_accuracy
value: 89.84553886754377
- type: manhattan_ap
value: 87.41173628281432
- type: manhattan_f1
value: 79.9051922079846
- type: manhattan_precision
value: 76.98016269444841
- type: manhattan_recall
value: 83.06128734216199
- type: max_accuracy
value: 89.89987192921178
- type: max_ap
value: 87.49525152555509
- type: max_f1
value: 80.05039276715578
---
# Repetition Improves Language Model Embeddings
Please refer to our paper: [https://arxiv.org/abs/2402.15449](https://arxiv.org/abs/2402.15449)
And our GitHub: [https://github.com/jakespringer/echo-embeddings](https://github.com/jakespringer/echo-embeddings)
We provide a description of the model as well as example usage in the above links.
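As a rough, unofficial sketch of the "echo" idea (repeat the input and pool only over the second occurrence), the snippet below illustrates the technique with plain `transformers`; the checkpoint name, prompt template, and token offsets are assumptions — see the paper and repository above for the official prompts and pooling code.

```python
# Minimal echo-embedding sketch. The checkpoint name and prompt template are
# placeholders, not the official ones from the echo-embeddings repository.
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "your-echo-embedding-checkpoint"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

text = "The weather is lovely today."
prefix = f"Rewrite the sentence: {text}\nRewritten sentence:"  # assumed prompt shape
prompt = f"{prefix} {text}"

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state[0]

# Mean-pool only over the second (echoed) occurrence of the sentence.
# The offset is approximate; the reference implementation tracks token spans exactly.
offset = len(tokenizer(prefix).input_ids)
embedding = hidden[offset:].mean(dim=0)
print(embedding.shape)
```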
|
Pavan27/TeluguCommentContextClassifier
|
Pavan27
| 2024-02-26T05:58:20Z | 1 | 0 |
peft
|
[
"peft",
"pytorch",
"safetensors",
"arxiv:1910.09700",
"base_model:PosteriorAI/godavari-telugu-llama2-7B",
"base_model:adapter:PosteriorAI/godavari-telugu-llama2-7B",
"region:us"
] | null | 2024-02-26T05:57:57Z |
---
library_name: peft
base_model: PosteriorAI/godavari-telugu-llama2-7B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
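In the absence of an official snippet, a minimal sketch is shown below: it loads the base model named in this card's metadata and attaches this repository's PEFT adapter. The prompt format used during fine-tuning is not documented here, so the prompt is a placeholder.

```python
# Hedged sketch: base model + PEFT adapter from this repository.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "PosteriorAI/godavari-telugu-llama2-7B"
adapter_id = "Pavan27/TeluguCommentContextClassifier"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

prompt = "<Telugu comment plus the instruction format used during fine-tuning>"  # placeholder
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```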
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2
|
yeniceriSGK/miniCPM-pi-brain-v3
|
yeniceriSGK
| 2024-02-26T05:52:42Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"minicpm",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-02-26T05:50:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
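Until the card is filled in, a minimal, assumption-laden sketch is given below; the `minicpm` and `custom_code` tags suggest the checkpoint requires `trust_remote_code=True`.

```python
# Hedged sketch for a MiniCPM-style causal LM with custom modeling code.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yeniceriSGK/miniCPM-pi-brain-v3"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```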
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MH114/donut-base-sroie
|
MH114
| 2024-02-26T05:51:38Z | 49 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base-finetuned-cord-v2",
"base_model:finetune:naver-clova-ix/donut-base-finetuned-cord-v2",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-02-23T11:35:28Z |
---
license: mit
base_model: naver-clova-ix/donut-base-finetuned-cord-v2
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie
This model is a fine-tuned version of [naver-clova-ix/donut-base-finetuned-cord-v2](https://huggingface.co/naver-clova-ix/donut-base-finetuned-cord-v2) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
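As a rough illustration (not part of the original card), Donut checkpoints of this kind are typically used for OCR-free document parsing along the following lines; the task prompt token is assumed to be inherited from the CORD-v2 base model and may differ for this fine-tune.

```python
# Hedged usage sketch for a Donut-style VisionEncoderDecoder checkpoint.
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "MH114/donut-base-sroie"
processor = DonutProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("receipt.png").convert("RGB")  # any document image
pixel_values = processor(image, return_tensors="pt").pixel_values

task_prompt = "<s_cord-v2>"  # assumed; check the tokenizer's special tokens for the actual task token
decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids

with torch.no_grad():
    outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs)[0])
```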
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
quocviethere/mamba_text_classification
|
quocviethere
| 2024-02-26T05:47:52Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-02-26T04:20:16Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: mamba_text_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mamba_text_classification
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3710
- Accuracy: 0.8993
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 1
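Expressed as `transformers` `TrainingArguments`, the settings above correspond roughly to the following (illustrative only; the original training script is not included in this card):

```python
# Hedged sketch mapping the listed hyperparameters to TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="mamba_text_classification",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.01,
    num_train_epochs=1,
)
```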
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0072 | 0.1 | 1487 | 0.4826 | 0.8833 |
| 0.9742 | 0.2 | 2974 | 0.4212 | 0.8689 |
| 1.9626 | 0.3 | 4461 | 0.4013 | 0.8949 |
| 0.0048 | 0.4 | 5948 | 0.4107 | 0.8954 |
| 0.9199 | 0.5 | 7435 | 0.3877 | 0.8938 |
| 1.3472 | 0.6 | 8922 | 0.4172 | 0.8949 |
| 0.1115 | 0.7 | 10409 | 0.3733 | 0.8971 |
| 0.1208 | 0.8 | 11896 | 0.3935 | 0.8998 |
| 0.0072 | 0.9 | 13383 | 0.3710 | 0.8993 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
dasjdlkasjldasjlkdaslk/bert-finetuned-ner
|
dasjdlkasjldasjlkdaslk
| 2024-02-26T05:44:16Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-02-26T04:30:33Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0560
- Precision: 0.9324
- Recall: 0.9517
- F1: 0.9420
- Accuracy: 0.9872
## Model description
More information needed
## Intended uses & limitations
More information needed
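A minimal usage sketch (not from the original card) for this token-classification checkpoint:

```python
# Hedged example: standard token-classification pipeline usage.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="dasjdlkasjldasjlkdaslk/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```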
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0809 | 1.0 | 1756 | 0.0736 | 0.9142 | 0.9366 | 0.9253 | 0.9806 |
| 0.0398 | 2.0 | 3512 | 0.0546 | 0.9280 | 0.9480 | 0.9379 | 0.9865 |
| 0.0244 | 3.0 | 5268 | 0.0560 | 0.9324 | 0.9517 | 0.9420 | 0.9872 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
RK2004/my-pet-dog
|
RK2004
| 2024-02-26T05:34:03Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-26T05:27:28Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by RK2004 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
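A minimal inference sketch (not part of the original card); DreamBooth models are usually prompted with the specific instance token used during training, which is not stated here, so the prompt below is a guess.

```python
# Hedged sketch: run the DreamBooth checkpoint with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("RK2004/my-pet-dog", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of my pet dog playing on the beach").images[0]  # instance token assumed
image.save("my-pet-dog.png")
```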
Sample pictures of this concept:
*(Three sample `.jpg` images were attached here; the image links are broken in this copy of the card.)*
|
Stonekraken/Ngram_classifier
|
Stonekraken
| 2024-02-26T05:27:13Z | 107 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-25T00:47:28Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Ngram_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Ngram_classifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4452
- Accuracy: 0.8265
## Model description
More information needed
## Intended uses & limitations
More information needed
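A minimal usage sketch (not from the original card) for this text-classification checkpoint:

```python
# Hedged example: standard text-classification pipeline usage.
from transformers import pipeline

classifier = pipeline("text-classification", model="Stonekraken/Ngram_classifier")
print(classifier("An example sentence to classify."))
```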
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7247 | 1.0 | 591 | 0.4752 | 0.8146 |
| 0.4141 | 2.0 | 1182 | 0.4452 | 0.8265 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
CocaButon/xlm-roberta-base-finetuned-panx-de
|
CocaButon
| 2024-02-26T05:26:14Z | 104 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-02-24T08:02:04Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1388
- F1: 0.8641
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2598 | 1.0 | 525 | 0.1540 | 0.8268 |
| 0.1302 | 2.0 | 1050 | 0.1357 | 0.8447 |
| 0.08 | 3.0 | 1575 | 0.1388 | 0.8641 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
starmpcc/Asclepius-Llama2-13B-Pretraining-Only
|
starmpcc
| 2024-02-26T05:20:16Z | 8 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"medical",
"en",
"dataset:starmpcc/Asclepius-Synthetic-Clinical-Notes",
"arxiv:2309.00237",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-26T03:24:30Z |
---
license: cc-by-nc-4.0
datasets:
- starmpcc/Asclepius-Synthetic-Clinical-Notes
language:
- en
pipeline_tag: text-generation
tags:
- medical
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This is a pre-trained Llama2-13B model, which was trained using causal language modeling on [Asclepius-Synthetic-Clinical-Notes](https://huggingface.co/datasets/starmpcc/Asclepius-Synthetic-Clinical-Notes).
The [Asclepius-Llama2-13B](https://huggingface.co/starmpcc/Asclepius-Llama2-13B) model was developed from this checkpoint by applying instruction fine-tuning.
## UPDATE
### 2024.01.10
- Asclepius-R, the variant of Asclepius that trained on MIMIC-III discharge summaries, is now available on [Physionet](https://physionet.org/content/asclepius-r/1.0.0/)!
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Model type:** Clinical LLM (Large Language Model)
- **Language(s) (NLP):** English
- **License:** CC-BY-NC-SA 4.0
- **Finetuned from model:** Llama2-13B
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/starmpcc/Asclepius
- **Paper:** https://arxiv.org/abs/2309.00237
- **Data:** https://huggingface.co/datasets/starmpcc/Asclepius-Synthetic-Clinical-Notes
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
This model is trained with causal language modeling, using [Asclepius-Synthetic-Clinical-Notes](https://huggingface.co/datasets/starmpcc/Asclepius-Synthetic-Clinical-Notes).
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
ONLY USE THIS MODEL FOR RESEARCH PURPOSES!
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("starmpcc/Asclepius-Llama2-13B-Pretraining-Only", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("starmpcc/Asclepius-Llama2-13B-Pretraining-Only")
model_input = "YOUR INPUT"
input_ids = tokenizer(model_input, return_tensors="pt").input_ids
output = model.generate(input_ids)
print(tokenizer.decode(output[0]))
```
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
https://huggingface.co/datasets/starmpcc/Asclepius-Synthetic-Clinical-Notes
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
- Causal language modeling on synthetic clinical notes.
#### Training Hyperparameters
- We followed the config used in [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca).
#### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
- Pre-Training (1 epoch): 1h 58m with 8x A100 80G
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@misc{kweon2023publicly,
title={Publicly Shareable Clinical Large Language Model Built on Synthetic Clinical Notes},
author={Sunjun Kweon and Junu Kim and Jiyoun Kim and Sujeong Im and Eunbyeol Cho and Seongsu Bae and Jungwoo Oh and Gyubok Lee and Jong Hak Moon and Seng Chan You and Seungjin Baek and Chang Hoon Han and Yoon Bin Jung and Yohan Jo and Edward Choi},
year={2023},
eprint={2309.00237},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
RumiaChannel/safetensors-ELYZA-japanese-Llama-2-13b-instruct
|
RumiaChannel
| 2024-02-26T05:16:43Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-26T04:20:15Z |
---
license: llama2
---
https://huggingface.co/elyza/ELYZA-japanese-Llama-2-13b-instruct
|
porthole42/food_classifier
|
porthole42
| 2024-02-26T05:16:23Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-02-26T04:10:58Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: porthole42/food_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# porthole42/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3883
- Validation Loss: 0.3472
- Train Accuracy: 0.917
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 20000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.7958 | 1.7078 | 0.767 | 0 |
| 1.2134 | 0.8347 | 0.886 | 1 |
| 0.6971 | 0.5456 | 0.901 | 2 |
| 0.4979 | 0.3958 | 0.918 | 3 |
| 0.3883 | 0.3472 | 0.917 | 4 |
### Framework versions
- Transformers 4.38.1
- TensorFlow 2.15.0
- Datasets 2.17.1
- Tokenizers 0.15.2
|
InMedData/InMD-X-INF
|
InMedData
| 2024-02-26T05:04:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"medical",
"text-generation",
"en",
"arxiv:2402.11883",
"base_model:Intel/neural-chat-7b-v3-1",
"base_model:finetune:Intel/neural-chat-7b-v3-1",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-19T08:57:27Z |
---
library_name: transformers
tags:
- medical
license: cc-by-nc-sa-4.0
language:
- en
pipeline_tag: text-generation
base_model: Intel/neural-chat-7b-v3-1
---
## InMD-X: Large Language Models for Internal Medicine Doctors
We introduce InMD-X, a collection of
multiple large language models specifically designed
to cater to the unique characteristics and demands
of Internal Medicine Doctors (IMD). InMD-X represents
a groundbreaking development in natural language
processing, offering a suite of language models
fine-tuned for various aspects of the internal medicine
field. These models encompass a wide range of medical
sub-specialties, enabling IMDs to perform more
efficient and accurate research, diagnosis, and documentation.
InMD-X’s versatility and adaptability
make it a valuable tool for improving the healthcare
industry, enhancing communication between healthcare
professionals, and advancing medical research.
Each model within InMD-X is meticulously tailored
to address specific challenges faced by IMDs, ensuring
the highest level of precision and comprehensiveness
in clinical text analysis and decision support.
(This model card is for the INFECTIOUS DISEASES subspecialty.)
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Model type:** [CausalLM]
- **Language(s) (NLP):** [English]
- **License:** [CC-BY-NC-SA](https://creativecommons.org/licenses/by-nc-sa/4.0/)
- **Finetuned from model [optional]:** [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1)
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Paper [optional]:** [InMD-X](http://arxiv.org/abs/2402.11883)
## Uses
```python
import torch
from peft import PeftModel, PeftConfig
import transformers  # needed for transformers.pipeline(...) below
from transformers import AutoModelForCausalLM, AutoTokenizer
peft_model_id = "InMedData/InMD-X-INF"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_8bit=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
# Load the Lora model
model = PeftModel.from_pretrained(model, peft_model_id)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer = tokenizer,
device_map="auto" # if you have GPU
)
def inference(pipeline, Qustion,answer_only = False):
sequences = pipeline("Answer the next question in one sentence.\n"+
Qustion,
do_sample=True,
top_k=10,
top_p = 0.9,
temperature = 0.2,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
max_length=500, # can increase the length of sequence
)
Answers = []
for seq in sequences:
Answer = seq['generated_text'].split(Qustion)[-1].replace("\n","")
Answers.append(Answer)
return Answers
question = 'What is the association between long-term beta-blocker use after myocardial infarction (MI) and the risk of reinfarction and death?'
answers = inference(pipeline, question)
print(answers)
```
### List of LoRA config
based on [Parameter-Efficient Fine-Tuning (PEFT)](https://github.com/huggingface/peft)
Parameter | PT | SFT
:------:| :------:| :------:
r | 8 | 8
lora alpha | 32 | 32
lora dropout | 0.05 | 0.05
target | q, k, v, o,up, down, gate | q, k, v, o,up,down, gate
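As a rough sketch (not from the original card), the settings above map to a `peft` `LoraConfig` along these lines, assuming Llama/Mistral-style module names for the listed targets:

```python
# Hedged sketch of the LoRA configuration implied by the table above.
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    # "q, k, v, o, up, down, gate" interpreted as Mistral/Llama projection module names (assumption)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "up_proj", "down_proj", "gate_proj"],
    task_type="CAUSAL_LM",
)
```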
### List of Training arguments
based on [Transformer Reinforcement Learning (TRL)](https://github.com/huggingface/trl)
Parameter | PT | SFT
:------:| :------:| :------:
train epochs | 3 | 1
per device train batch size | 1 | 1
optimizer | adamw_hf | adamw_hf
evaluation strategy | no | no
learning_rate | 1e-4 | 1e-4
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Experimental setup
- **Ubuntu 22.04.3 LTS**
- **GPU - NVIDIA A100(40GB)**
- **Python**: 3.10.12
- **Pytorch**:2.1.1+cu118
- **Transformer**:4.37.0.dev0
## Limitations
InMD-X consists of a collection of segmented models. The integration of the models has not yet been fully accomplished, resulting in each model being fragmented.
Due to the absence of benchmarks, the segmented models have not been adequately evaluated. Future research will involve the development of new benchmarks and the integration of models to facilitate an objective evaluation.
## Non-commercial use
These models are available exclusively for research purposes and are not intended for commercial use.
<!-- ## Citation
**BibTeX:**
-->
## INMED DATA
INMED DATA is developing large language models (LLMs) specifically tailored for medical applications. For more information, please visit our website [TBD].
|
InMedData/InMD-X-HEM
|
InMedData
| 2024-02-26T05:03:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"medical",
"text-generation",
"en",
"arxiv:2402.11883",
"base_model:Intel/neural-chat-7b-v3-1",
"base_model:finetune:Intel/neural-chat-7b-v3-1",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-19T08:56:52Z |
---
library_name: transformers
tags:
- medical
license: cc-by-nc-sa-4.0
language:
- en
pipeline_tag: text-generation
base_model: Intel/neural-chat-7b-v3-1
---
## InMD-X: Large Language Models for Internal Medicine Doctors
We introduce InMD-X, a collection of
multiple large language models specifically designed
to cater to the unique characteristics and demands
of Internal Medicine Doctors (IMD). InMD-X represents
a groundbreaking development in natural language
processing, offering a suite of language models
fine-tuned for various aspects of the internal medicine
field. These models encompass a wide range of medical
sub-specialties, enabling IMDs to perform more
efficient and accurate research, diagnosis, and documentation.
InMD-X’s versatility and adaptability
make it a valuable tool for improving the healthcare
industry, enhancing communication between healthcare
professionals, and advancing medical research.
Each model within InMD-X is meticulously tailored
to address specific challenges faced by IMDs, ensuring
the highest level of precision and comprehensiveness
in clinical text analysis and decision support.
(This model card is for the HEMATOLOGY subspecialty.)
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Model type:** [CausalLM]
- **Language(s) (NLP):** [English]
- **License:** [CC-BY-NC-SA](https://creativecommons.org/licenses/by-nc-sa/4.0/)
- **Finetuned from model [optional]:** [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1)
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Paper [optional]:** [InMD-X](http://arxiv.org/abs/2402.11883)
## Uses
```python
import torch
from peft import PeftModel, PeftConfig
import transformers  # needed for transformers.pipeline(...) below
from transformers import AutoModelForCausalLM, AutoTokenizer
peft_model_id = "InMedData/InMD-X-HEM"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_8bit=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
# Load the Lora model
model = PeftModel.from_pretrained(model, peft_model_id)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer = tokenizer,
device_map="auto" # if you have GPU
)
def inference(pipeline, Qustion,answer_only = False):
sequences = pipeline("Answer the next question in one sentence.\n"+
Qustion,
do_sample=True,
top_k=10,
top_p = 0.9,
temperature = 0.2,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
max_length=500, # can increase the length of sequence
)
Answers = []
for seq in sequences:
Answer = seq['generated_text'].split(Qustion)[-1].replace("\n","")
Answers.append(Answer)
return Answers
question = 'What is the association between long-term beta-blocker use after myocardial infarction (MI) and the risk of reinfarction and death?'
answers = inference(pipeline, question)
print(answers)
```
### List of LoRA config
based on [Parameter-Efficient Fine-Tuning (PEFT)](https://github.com/huggingface/peft)
Parameter | PT | SFT
:------:| :------:| :------:
r | 8 | 8
lora alpha | 32 | 32
lora dropout | 0.05 | 0.05
target | q, k, v, o,up, down, gate | q, k, v, o,up,down, gate
### List of Training arguments
based on [Transformer Reinforcement Learning (TRL)](https://github.com/huggingface/trl)
Parameter | PT | SFT
:------:| :------:| :------:
train epochs | 3 | 1
per device train batch size | 1 | 1
optimizer | adamw_hf | adamw_hf
evaluation strategy | no | no
learning_rate | 1e-4 | 1e-4
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Experimental setup
- **Ubuntu 22.04.3 LTS**
- **GPU - NVIDIA A100(40GB)**
- **Python**: 3.10.12
- **Pytorch**:2.1.1+cu118
- **Transformer**:4.37.0.dev0
## Limitations
InMD-X consists of a collection of segmented models. The integration of the models has not yet been fully accomplished, resulting in each model being fragmented.
Due to the absence of benchmarks, the segmented models have not been adequately evaluated. Future research will involve the development of new benchmarks and the integration of models to facilitate an objective evaluation.
## Non-commercial use
These models are available exclusively for research purposes and are not intended for commercial use.
<!-- ## Citation
**BibTeX:**
-->
## INMED DATA
INMED DATA is developing large language models (LLMs) specifically tailored for medical applications. For more information, please visit our website [TBD].
|
InMedData/InMD-X-URO
|
InMedData
| 2024-02-26T05:00:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"medical",
"text-generation",
"en",
"arxiv:2402.11883",
"base_model:Intel/neural-chat-7b-v3-1",
"base_model:finetune:Intel/neural-chat-7b-v3-1",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-19T08:55:43Z |
---
library_name: transformers
tags:
- medical
license: cc-by-nc-sa-4.0
language:
- en
pipeline_tag: text-generation
base_model: Intel/neural-chat-7b-v3-1
---
## InMD-X: Large Language Models for Internal Medicine Doctors
We introduce InMD-X, a collection of
multiple large language models specifically designed
to cater to the unique characteristics and demands
of Internal Medicine Doctors (IMD). InMD-X represents
a groundbreaking development in natural language
processing, offering a suite of language models
fine-tuned for various aspects of the internal medicine
field. These models encompass a wide range of medical
sub-specialties, enabling IMDs to perform more
efficient and accurate research, diagnosis, and documentation.
InMD-X’s versatility and adaptability
make it a valuable tool for improving the healthcare
industry, enhancing communication between healthcare
professionals, and advancing medical research.
Each model within InMD-X is meticulously tailored
to address specific challenges faced by IMDs, ensuring
the highest level of precision and comprehensiveness
in clinical text analysis and decision support.
(This model card is for the UROLOGY & NEPHROLOGY subspecialty.)
### Model Description
- **Model type:** Causal language model
- **Language(s) (NLP):** English
- **License:** [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)
- **Finetuned from model:** [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1)
### Model Sources
- **Paper:** [InMD-X](http://arxiv.org/abs/2402.11883)
## Uses
```python
import torch
import transformers  # provides transformers.pipeline below
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
peft_model_id = "InMedData/InMD-X-URO"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_8bit=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
# Load the Lora model
model = PeftModel.from_pretrained(model, peft_model_id)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    device_map="auto",  # spread the model across available GPUs
)

def inference(pipeline, question, answer_only=False):  # answer_only kept for API compatibility but unused
    sequences = pipeline(
        "Answer the next question in one sentence.\n" + question,
        do_sample=True,
        top_k=10,
        top_p=0.9,
        temperature=0.2,
        num_return_sequences=1,
        eos_token_id=tokenizer.eos_token_id,
        max_length=500,  # increase for longer outputs
    )
    answers = []
    for seq in sequences:
        # Keep only the text generated after the prompt and drop newlines
        answer = seq["generated_text"].split(question)[-1].replace("\n", "")
        answers.append(answer)
    return answers
question = 'What is the association between long-term beta-blocker use after myocardial infarction (MI) and the risk of reinfarction and death?'
answers = inference(pipeline, question)
print(answers)
```
### LoRA configuration
Based on [Parameter-Efficient Fine-Tuning (PEFT)](https://github.com/huggingface/peft).
Parameter | PT | SFT
:------:| :------:| :------:
r | 8 | 8
lora alpha | 32 | 32
lora dropout | 0.05 | 0.05
target | q, k, v, o, up, down, gate | q, k, v, o, up, down, gate
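
As a rough sketch only, the table above can be expressed as a PEFT `LoraConfig` as shown below; the `*_proj` module names are assumed from the Mistral-style base model rather than taken from the released training code.
```python
from peft import LoraConfig

# Sketch of the LoRA settings in the table above (identical for PT and SFT).
# The target module names are assumed from the Mistral-style base model;
# the original training script may name them differently.
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",  # attention projections (q, k, v, o)
        "up_proj", "down_proj", "gate_proj",     # MLP projections (up, down, gate)
    ],
    bias="none",
    task_type="CAUSAL_LM",
)
```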
### Training arguments
Based on [Transformer Reinforcement Learning (TRL)](https://github.com/huggingface/trl).
Parameter | PT | SFT
:------:| :------:| :------:
train epochs | 3 | 1
per device train batch size | 1 | 1
optimizer | adamw_hf | adamw_hf
evaluation strategy | no | no
learning rate | 1e-4 | 1e-4
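
A hedged sketch of how the SFT column could be wired up with TRL is given below; the output directory, dataset, and text field names are placeholders, not details from the original run.
```python
from transformers import TrainingArguments
from trl import SFTTrainer

# Sketch of the SFT stage using the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="./inmd-x-sft",       # placeholder output path
    num_train_epochs=1,              # the PT stage used 3
    per_device_train_batch_size=1,
    optim="adamw_hf",
    evaluation_strategy="no",
    learning_rate=1e-4,
)

trainer = SFTTrainer(
    model=model,                     # the base causal LM, e.g. Intel/neural-chat-7b-v3-1
    args=training_args,
    train_dataset=train_dataset,     # hypothetical instruction-tuning dataset
    dataset_text_field="text",       # assumed field name
    peft_config=lora_config,         # LoRA settings from the sketch above
    tokenizer=tokenizer,
)
trainer.train()
```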
### Experimental setup
- **OS**: Ubuntu 22.04.3 LTS
- **GPU**: NVIDIA A100 (40GB)
- **Python**: 3.10.12
- **PyTorch**: 2.1.1+cu118
- **Transformers**: 4.37.0.dev0
## Limitations
InMD-X consists of a collection of segmented, sub-specialty models; these models have not yet been fully integrated, so each remains a fragment of the overall system.
Because suitable benchmarks are not yet available, the segmented models have not been adequately evaluated. Future work will develop new benchmarks and integrate the models to enable an objective evaluation.
## Non-commercial use
These models are available exclusively for research purposes and are not intended for commercial use.
## INMED DATA
INMED DATA is developing large language models (LLMs) specifically tailored for medical applications. For more information, please visit our website [TBD].
|