| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
alexgastev/q-FrozenLake-v1-4x4-noSlippery
|
alexgastev
| 2024-02-02T11:23:58Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-02T11:23:56Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # assumption: the Gymnasium fork of the classic Gym API

# `load_from_hub` is the helper defined in the Deep RL course notebook
# (a sketch of it follows below this block).
model = load_from_hub(repo_id="alexgastev/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
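For context, a minimal sketch of what the `load_from_hub` helper from the course notebook might look like (assuming the pickle stores a dict with an `env_id` key, as used above):
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning model from the Hub and unpickle it."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```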
|
ben434/ben
|
ben434
| 2024-02-02T11:04:16Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:h94/IP-Adapter-FaceID",
"base_model:adapter:h94/IP-Adapter-FaceID",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2024-02-02T11:04:00Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/Screenshot_20240129-165151.png
base_model: h94/IP-Adapter-FaceID
instance_prompt: null
license: apache-2.0
---
# ben
<Gallery />
## Download model
[Download](/ben434/ben/tree/main) them in the Files & versions tab.
|
buelfhood/GraphCodeBERT_BCB_ChaFT
|
buelfhood
| 2024-02-02T11:02:59Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-02T11:01:16Z |
`--num_epochs 1 --train_batch_size 16 --eval_batch_size 16 --learning_rate 2e-5 --mixed_precision fp16`
|
Kamaljp/t5-small-finetuned-xsum
|
Kamaljp
| 2024-02-02T11:00:41Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-02T10:47:41Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
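For reference, a sketch of the equivalent `TrainingArguments` for the values above (the output_dir is hypothetical, "Native AMP" is mapped to `fp16=True`, and the Adam betas/epsilon match the transformers defaults):
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="t5-small-finetuned-xsum",  # assumption
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    fp16=True,  # Native AMP mixed precision
)
```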
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
|
mjm4dl/openchat_intent_r1v0
|
mjm4dl
| 2024-02-02T10:59:54Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T10:30:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
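In the absence of author-provided code, here is a minimal loading sketch, assuming the standard transformers causal-LM API (the repo id comes from this entry's metadata; generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Generic loading pattern; requires `accelerate` for device_map="auto".
tokenizer = AutoTokenizer.from_pretrained("mjm4dl/openchat_intent_r1v0")
model = AutoModelForCausalLM.from_pretrained("mjm4dl/openchat_intent_r1v0", device_map="auto")

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```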
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mjm4dl/openchat_intent_r8v0
|
mjm4dl
| 2024-02-02T10:59:27Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T10:44:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
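In the absence of author-provided code, a minimal sketch using the transformers text-generation pipeline (repo id from this entry's metadata; settings are illustrative):
```python
from transformers import pipeline

# Generic pipeline pattern; requires `accelerate` for device_map="auto".
generator = pipeline("text-generation", model="mjm4dl/openchat_intent_r8v0", device_map="auto")
print(generator("Hello, world!", max_new_tokens=32)[0]["generated_text"])
```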
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
s3nh/MiniCPM-2B-dpo-fp32-GGUF
|
s3nh
| 2024-02-02T10:52:55Z | 24 | 8 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-02-02T10:47:50Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/openbmb/MiniCPM-2B-dpo-fp32).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:

* Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
* Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
* mmap compatibility: models can be loaded using mmap for fast loading and saving.
* Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
* Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.

The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for inference or for identifying the model.
### Inference
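A minimal sketch, assuming llama-cpp-python; the .gguf filename is hypothetical, so substitute one of the quantization files actually present in this repo:
```python
from llama_cpp import Llama

# Load a local GGUF file (single-file deployment, as described above).
llm = Llama(model_path="minicpm-2b-dpo-fp32.Q4_K_M.gguf", n_ctx=2048)  # filename is an assumption
out = llm("Hello, how are you?", max_tokens=64)
print(out["choices"][0]["text"])
```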
|
Federic/CDAgpt-llama2-7b
|
Federic
| 2024-02-02T10:50:24Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:quantized:meta-llama/Llama-2-7b-hf",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-02-02T08:39:41Z |
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: CDAgpt-llama2-7b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CDAgpt-llama2-7b
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
golesheed/whisper-native-children-6-dutch
|
golesheed
| 2024-02-02T10:49:24Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"nl",
"base_model:openai/whisper-large-v2",
"base_model:finetune:openai/whisper-large-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-02-02T09:13:43Z |
---
language:
- nl
license: apache-2.0
base_model: openai/whisper-large-v2
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Large V2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large V2
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1506
- Wer: 5.1288
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4241 | 0.38 | 30 | 0.1816 | 7.9883 |
| 0.1734 | 0.75 | 60 | 0.1585 | 6.3247 |
| 0.1334 | 1.12 | 90 | 0.1560 | 5.9874 |
| 0.0787 | 1.5 | 120 | 0.1468 | 6.0718 |
| 0.0745 | 1.88 | 150 | 0.1465 | 7.3674 |
| 0.0512 | 2.25 | 180 | 0.1452 | 7.1297 |
| 0.0314 | 2.62 | 210 | 0.1405 | 5.4814 |
| 0.0321 | 3.0 | 240 | 0.1376 | 5.4125 |
| 0.0154 | 3.38 | 270 | 0.1469 | 5.2208 |
| 0.0144 | 3.75 | 300 | 0.1493 | 5.2515 |
| 0.011 | 4.12 | 330 | 0.1443 | 5.0905 |
| 0.0064 | 4.5 | 360 | 0.1502 | 5.1058 |
| 0.007 | 4.88 | 390 | 0.1506 | 5.1288 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0
|
smith0901/business
|
smith0901
| 2024-02-02T10:48:12Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-02-02T10:26:12Z |
# Transformative Real Estate Investment for Sustainable Communities
In the dynamic world of real estate investment, Shalom Lamm stands as a visionary leader committed to more than just profit margins. His strategic approach to real estate development goes beyond erecting structures; it revolves around creating sustainable communities that thrive socially, economically, and environmentally. Shalom Lamm's impact on the real estate landscape is marked by innovation, community-centric design, and a dedication to fostering positive change.
## Community-Centric Development
At the heart of [Shalom Lamm](https://www.iawfoundation.org/community/shalomlamm/profile/)'s real estate philosophy is a deep appreciation for the communities he serves. Instead of adopting a one-size-fits-all approach, Lamm takes a community-centric approach to development. He engages with local stakeholders, understands their unique needs, and tailors projects to enhance the fabric of each neighborhood. This ensures that his developments not only meet market demands but also contribute positively to the communities they become a part of.
## Revitalizing Neglected Areas
One of Shalom Lamm's distinctive contributions is his focus on revitalizing neglected or underdeveloped areas. He identifies opportunities where others might see challenges, breathing new life into neighborhoods that have been overlooked. Through strategic investments and thoughtful planning, Lamm has been instrumental in transforming these areas into vibrant, desirable places to live, work, and play.
## Sustainable and Green Initiatives
[Shalom Lamm](https://www.crunchbase.com/person/shalom-lamm) understands the importance of sustainable practices in real estate development. His projects incorporate green building techniques, energy-efficient systems, and environmentally friendly designs. By prioritizing sustainability, Lamm not only reduces the environmental impact of his developments but also positions them as forward-thinking, responsible contributions to the urban landscape.
## Inclusive Economic Growth
Beyond the physical structures, Shalom Lamm's real estate investments are designed to stimulate local economies. By fostering job creation, supporting small businesses, and contributing to economic growth, his projects serve as catalysts for positive change. Lamm's commitment to inclusive economic growth ensures that the benefits of his developments are widely distributed, contributing to the overall well-being of the communities involved.
### Affordable Housing Initiatives
Shalom Lamm recognizes the importance of addressing the housing needs of diverse populations. His commitment to affordable housing initiatives is evident in projects that aim to provide quality living spaces for individuals from all walks of life. By creating inclusive housing options, Lamm contributes to the development of diverse and socially harmonious communities.
### Educational and Cultural Integration
In addition to physical infrastructure, Shalom Lamm's real estate investments often include spaces for educational and cultural activities. This integration reflects his belief in the power of education and cultural enrichment to enhance community life. By providing spaces for learning and cultural expression, Lamm's projects contribute to the holistic development of the communities they serve.
### A Legacy of Impactful Real Estate Investment
Shalom Lamm's legacy in real estate investment goes beyond financial success. His commitment to community, sustainability, inclusivity, and cultural enrichment sets a standard for responsible and impactful development. As urban landscapes continue to evolve, Shalom Lamm's work serves as a testament to the transformative potential of real estate investments that prioritize the well-being and prosperity of communities.
|
CLMBR/pp-mod-subj-transformer-3
|
CLMBR
| 2024-02-02T10:46:19Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-26T10:07:41Z |
---
tags:
- generated_from_trainer
model-index:
- name: pp-mod-subj2-transformer-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pp-mod-subj2-transformer-3
This model is a fine-tuned version of an unspecified base model on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9229
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 3
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2304 | 0.03 | 76320 | 4.2434 |
| 4.0273 | 1.03 | 152640 | 4.0739 |
| 3.9168 | 0.03 | 228960 | 3.9985 |
| 3.8483 | 1.03 | 305280 | 3.9588 |
| 3.799 | 0.03 | 381600 | 3.9349 |
| 3.75 | 0.03 | 457920 | 3.9180 |
| 3.7146 | 1.03 | 534240 | 3.9084 |
| 3.6816 | 0.03 | 610560 | 3.9017 |
| 3.6536 | 1.03 | 686880 | 3.8982 |
| 3.6321 | 0.03 | 763200 | 3.8960 |
| 3.6052 | 1.03 | 839520 | 3.8936 |
| 3.5849 | 0.03 | 915840 | 3.8942 |
| 3.5686 | 1.03 | 992160 | 3.8936 |
| 3.5512 | 0.03 | 1068480 | 3.8955 |
| 3.5337 | 1.03 | 1144800 | 3.8962 |
| 3.5182 | 0.03 | 1221120 | 3.8980 |
| 3.5053 | 1.03 | 1297440 | 3.9001 |
| 3.4935 | 0.03 | 1373760 | 3.9003 |
| 3.4789 | 1.03 | 1450080 | 3.9032 |
| 3.4708 | 0.03 | 1526400 | 3.9033 |
| 3.4644 | 1.03 | 1602720 | 3.9063 |
| 3.4495 | 0.03 | 1679040 | 3.9084 |
| 3.4367 | 1.03 | 1755360 | 3.9119 |
| 3.4234 | 0.03 | 1831680 | 3.9138 |
| 3.4104 | 1.03 | 1908000 | 3.9149 |
| 3.403 | 0.03 | 1984320 | 3.9162 |
| 3.3885 | 1.03 | 2060640 | 3.9171 |
| 3.3782 | 0.03 | 2136960 | 3.9195 |
| 3.3693 | 1.03 | 2213280 | 3.9197 |
| 3.3588 | 0.03 | 2289600 | 3.9216 |
| 3.3474 | 0.03 | 2365920 | 3.9225 |
| 3.3383 | 0.03 | 2442240 | 3.9235 |
| 3.3305 | 1.03 | 2518560 | 3.9250 |
| 3.322 | 0.03 | 2594880 | 3.9253 |
| 3.3136 | 1.03 | 2671200 | 3.9247 |
| 3.3064 | 0.03 | 2747520 | 3.9262 |
| 3.3045 | 0.03 | 2823840 | 3.9255 |
| 3.2906 | 1.03 | 2900160 | 3.9256 |
| 3.2833 | 0.03 | 2976480 | 3.9249 |
| 3.2756 | 1.02 | 3052726 | 3.9229 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
rhplus0831/maid-yuzu-v3-alter-exl2-6.0bpw-rpcal
|
rhplus0831
| 2024-02-02T10:41:13Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
"base_model:merge:NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
"base_model:smelborp/MixtralOrochi8x7B",
"base_model:merge:smelborp/MixtralOrochi8x7B",
"base_model:ycros/BagelMIsteryTour-v2-8x7B",
"base_model:merge:ycros/BagelMIsteryTour-v2-8x7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T10:35:15Z |
---
base_model:
- NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO
- ycros/BagelMIsteryTour-v2-8x7B
- smelborp/MixtralOrochi8x7B
tags:
- mergekit
- merge
---
# maid-yuzu-v3-alter
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
This model was created because I wanted to know how the density and weight values of the dare_ties method affect the base model.
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [smelborp/MixtralOrochi8x7B](https://huggingface.co/smelborp/MixtralOrochi8x7B) as a base.
### Models Merged
The following models were included in the merge:
* [NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO)
* [ycros/BagelMIsteryTour-v2-8x7B](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model:
model:
path: smelborp/MixtralOrochi8x7B
dtype: bfloat16
merge_method: dare_ties
slices:
- sources:
- layer_range: [0, 32]
model:
model:
path: ycros/BagelMIsteryTour-v2-8x7B
parameters:
density: 0.6
weight: 0.5
- layer_range: [0, 32]
model:
model:
path: NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO
parameters:
density: 0.4
weight: 0.25
- layer_range: [0, 32]
model:
model:
path: smelborp/MixtralOrochi8x7B
```
|
ekojs/internlm2-7b
|
ekojs
| 2024-02-02T10:35:56Z | 6 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"zh",
"base_model:internlm/internlm2-7b",
"base_model:finetune:internlm/internlm2-7b",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T10:31:18Z |
---
license: other
language:
- en
- zh
base_model: internlm/internlm2-7b
---
# InternLM (but it's Llama)
<div align="center">
<img src="https://github.com/InternLM/InternLM/assets/22529082/b9788105-8892-4398-8b47-b513a292378e" width="200"/>
<div> </div>
<div align="center">
<b><font size="5">InternLM</font></b>
<sup>
<a href="https://internlm.intern-ai.org.cn/">
<i><font size="4">hot??</font></i>
</a>
</sup>
<div> </div>
</div>
</div>
[chargoddard/internlm2-7b-llama](https://huggingface.co/chargoddard/internlm2-7b-llama) with [an updated tokenizer](https://huggingface.co/RangiLyu/InternLM2-tokenizer-).
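Since the weights are llamafied, they should load with the stock transformers Llama classes; a minimal sketch (loading settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Loads as a regular Llama-architecture model thanks to the conversion above.
tokenizer = AutoTokenizer.from_pretrained("ekojs/internlm2-7b")
model = AutoModelForCausalLM.from_pretrained("ekojs/internlm2-7b", device_map="auto")
```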
|
thiagobarbosa/whisper-small-common-voice-16-pt-v2
|
thiagobarbosa
| 2024-02-02T10:32:48Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"pt",
"dataset:mozilla-foundation/common_voice_16_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-02-02T01:58:48Z |
---
language:
- pt
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_16_0
metrics:
- wer
model-index:
- name: Whisper small using Common Voice 16 (pt)
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Mozilla Common Voices - 16.0 - Portuguese
type: mozilla-foundation/common_voice_16_0
config: pt
split: test
args: pt
metrics:
- name: Wer
type: wer
value: 16.035875888817067
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper small using Common Voice 16 (pt)
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Mozilla Common Voices - 16.0 - Portuguese dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2220
- Wer: 16.0359
- Wer Normalized: 10.3867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Wer Normalized |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:--------------:|
| 0.2484 | 0.26 | 500 | 0.2712 | 19.2259 | 13.0929 |
| 0.2184 | 0.52 | 1000 | 0.2464 | 17.8895 | 11.9404 |
| 0.236 | 0.77 | 1500 | 0.2339 | 17.1348 | 11.3016 |
| 0.1401 | 1.03 | 2000 | 0.2285 | 16.7001 | 11.0432 |
| 0.1206 | 1.29 | 2500 | 0.2251 | 16.3235 | 10.6467 |
| 0.1199 | 1.55 | 3000 | 0.2236 | 16.1732 | 10.5424 |
| 0.1231 | 1.81 | 3500 | 0.2197 | 16.1587 | 10.5038 |
| 0.0935 | 2.06 | 4000 | 0.2220 | 16.0359 | 10.3867 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.15.1
|
bineric/NorskGPT-Mistral-7b
|
bineric
| 2024-02-02T10:32:22Z | 102 | 11 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"instruct",
"finetune",
"no",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-02T09:48:38Z |
---
base_model: mistralai/Mistral-7B-v0.1
tags:
- mistral
- instruct
- finetune
language:
- no
license: cc-by-nc-sa-4.0
---
# NorskGPT-Mistral-7b
This model is a Norwegian variant of Mistral-7b-v0.1, fine-tuned on a carefully selected mix of Norwegian instruction pairs. The model is tuned to understand and generate text in Norwegian.
As of 2 February 2024, it is tied for 2nd place on the [Mainland Scandinavian NLG leaderboard](https://scandeval.com/mainland-scandinavian-nlg/) (after GPT-3.5), and is ranked as the best Norwegian model after GPT-3.5.
## Intended Use
This model is intended for personal and research use in Norwegian and can be used as an assistant-like chat.
## Prompt Template
```
### Instruction:
Summarize following text.
### Input:
Text to be summarized
### Response:
```
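A minimal generation sketch that fills in the template above (pipeline settings are illustrative, not from the model authors):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="bineric/NorskGPT-Mistral-7b", device_map="auto")

# Build the Alpaca-style prompt shown in the template above.
prompt = (
    "### Instruction:\n"
    "Summarize following text.\n"
    "### Input:\n"
    "Text to be summarized\n"
    "### Response:\n"
)
print(generator(prompt, max_new_tokens=128)[0]["generated_text"])
```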
## Limitations
* This is an LLM, not a knowledge model. It cannot be expected to have more information about Norway than the base model.
* It will generally perform better on tasks that involve summarization, question answering, and chat than on tasks that require more knowledge about Norway or specific domains, or where the model can answer freely.
* The model is released as is, and will in most cases need prompt tuning to achieve optimal results.
* The base model lacks censorship, and our fine-tune does not directly address this. It is therefore expected that the model can produce harmful/hateful content if prompted for it.
## License
[Attribution-NonCommercial-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-nc-sa/4.0/)
You are free to:
* Share: copy and redistribute the material in any medium or format
* Adapt: remix, transform, and build upon the material

The licensor cannot revoke these freedoms as long as you follow the license terms.

Under the following terms:
* Attribution: You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
* NonCommercial: You may not use the material for commercial purposes.
* ShareAlike: If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.
* No additional restrictions: You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
|
MoritzJost/ppo-LunarLander-v2
|
MoritzJost
| 2024-02-02T10:32:01Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-02T10:31:39Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 163.06 +/- 80.90
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the repo name; check the Files & versions tab):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it with SB3.
checkpoint = load_from_hub(repo_id="MoritzJost/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Patcas/plbart-assert-nodocnew-v2
|
Patcas
| 2024-02-02T10:31:31Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"plbart",
"text2text-generation",
"generated_from_trainer",
"base_model:Patcas/my_awesome-assert-new",
"base_model:finetune:Patcas/my_awesome-assert-new",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-02T09:40:12Z |
---
base_model: Patcas/my_awesome-assert-new
tags:
- generated_from_trainer
model-index:
- name: plbart-assert-nodocnew-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# plbart-assert-nodocnew-v2
This model is a fine-tuned version of [Patcas/my_awesome-assert-new](https://huggingface.co/Patcas/my_awesome-assert-new) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9840
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 230 | 1.1672 |
| No log | 2.0 | 460 | 1.0022 |
| 1.4059 | 3.0 | 690 | 0.9667 |
| 1.4059 | 4.0 | 920 | 0.9625 |
| 0.4797 | 5.0 | 1150 | 0.9779 |
| 0.4797 | 6.0 | 1380 | 0.9764 |
| 0.2511 | 7.0 | 1610 | 0.9693 |
| 0.2511 | 8.0 | 1840 | 0.9764 |
| 0.1582 | 9.0 | 2070 | 0.9813 |
| 0.1582 | 10.0 | 2300 | 0.9840 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
Outrun32/CLIP-ViT-B-16-noise-tuned
|
Outrun32
| 2024-02-02T10:29:04Z | 34 | 1 |
open_clip
|
[
"open_clip",
"safetensors",
"clip",
"zero-shot-image-classification",
"license:mit",
"region:us"
] |
zero-shot-image-classification
| 2024-02-02T08:58:48Z |
---
tags:
- clip
library_name: open_clip
pipeline_tag: zero-shot-image-classification
license: mit
---
# Model card for CLIP-ViT-B-16-noise-tuned
WandB run: https://wandb.ai/stamps-labs/open-clip/reports/Untitled-Report--Vmlldzo2NzE3NTkx/edit?firstReport=&runsetFilter
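A minimal zero-shot sketch, assuming open_clip's `hf-hub:` loading support (the candidate labels are illustrative):
```python
import torch
import open_clip

# Load the model and its preprocessing transforms directly from the Hub.
model, _, preprocess = open_clip.create_model_and_transforms("hf-hub:Outrun32/CLIP-ViT-B-16-noise-tuned")
tokenizer = open_clip.get_tokenizer("hf-hub:Outrun32/CLIP-ViT-B-16-noise-tuned")

text = tokenizer(["a photo of a stamp", "a photo of a dog"])  # illustrative labels
with torch.no_grad():
    text_features = model.encode_text(text)
print(text_features.shape)
```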
|
jboye/NeuralPipe-7B-slerp
|
jboye
| 2024-02-02T10:22:03Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"OpenPipe/mistral-ft-optimized-1218",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:OpenPipe/mistral-ft-optimized-1218",
"base_model:merge:OpenPipe/mistral-ft-optimized-1218",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T10:17:39Z |
---
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
base_model:
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
---
# NeuralPipe-7B-slerp
NeuralPipe-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: OpenPipe/mistral-ft-optimized-1218
layer_range: [0, 32]
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "jboye/NeuralPipe-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
Weyaxi/Qwen-72B-Llama
|
Weyaxi
| 2024-02-02T10:20:07Z | 115 | 12 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-30T15:42:26Z |
---
license: other
license_name: qwen
license_link: LICENSE
---
# 🦙 Qwen-72B-Llama
This is the 🦙 llamafied version of [Qwen/Qwen-72B](https://huggingface.co/Qwen/Qwen-72B).
## 🛠️ Reproduction
I used [this script](https://github.com/hiyouga/LLaMA-Factory/blob/main/tests/llamafy_qwen.py) to convert the weights:
[LLaMA-Factory/tests/llamafy_qwen.py](https://github.com/hiyouga/LLaMA-Factory/blob/main/tests/llamafy_qwen.py)
## Tokenizer
After I converted the weights, I took the tokenizer from [KnutJaegersberg/Qwen-14B-Llamafied](https://huggingface.co/KnutJaegersberg/Qwen-14B-Llamafied) and uploaded it to this repository.
## Eval Scores Compared to Original Model
Here are some of the evaluation score comparisons based on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
| Metric | Qwen-72B | **Qwen-72B-Llama** |
|-----------------------|---------------|--------------------|
| Avg. | 73.6 | **69.53** |
| ARC (25-shot) | 65.19 | **64.85** |
| HellaSwag (10-shot) | 85.94 | **83.27** |
| MMLU (5-shot) | 77.37 | **73.66** |
| TruthfulQA (0-shot) | 60.19 | **57.6** |
| Winogrande (5-shot) | 82.48 | **81.53** |
| GSM8K (5-shot) | 70.43 | **56.25** |

|
dhdbsrlw/pet-zero
|
dhdbsrlw
| 2024-02-02T10:08:56Z | 9 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-02T10:04:39Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### zero-wearing-pink-clothes Dreambooth model trained by dhdbsrlw with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
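A minimal diffusers sketch, assuming the standard StableDiffusionPipeline API (the prompt wording is an assumption based on the concept name):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("dhdbsrlw/pet-zero", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Prompt is illustrative; adjust to the trained concept token.
image = pipe("a photo of zero wearing pink clothes").images[0]
image.save("zero.png")
```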
Sample pictures of this concept:
|
marianna13/llava-phi-2-3b-sharegpt4v-sbu
|
marianna13
| 2024-02-02T10:08:32Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi-llava",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T10:05:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
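In the absence of author-provided code, a minimal loading sketch; because the repo is tagged "custom_code" (a phi-llava architecture), `trust_remote_code=True` is presumably required:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The custom phi-llava modeling code lives in the repo, hence trust_remote_code=True.
tokenizer = AutoTokenizer.from_pretrained("marianna13/llava-phi-2-3b-sharegpt4v-sbu", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("marianna13/llava-phi-2-3b-sharegpt4v-sbu", trust_remote_code=True)
```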
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Arbaz0348/twintwist-Llama2_lyrics
|
Arbaz0348
| 2024-02-02T10:00:59Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2024-02-02T09:59:21Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
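For reference, a sketch of the equivalent transformers `BitsAndBytesConfig` for the values listed above (a hypothetical reconstruction, not the author's training script):
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```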
### Framework versions
- PEFT 0.4.0
|
Martin-Michael/gockle_v2
|
Martin-Michael
| 2024-02-02T09:58:11Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-02-01T09:48:46Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: gockle_v2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7843691148775894
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gockle_v2
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9618
- Accuracy: 0.7844
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 32
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.7231 | 0.64 | 100 | 2.6467 | 0.2279 |
| 2.3217 | 1.28 | 200 | 2.4386 | 0.2288 |
| 2.0819 | 1.92 | 300 | 2.2887 | 0.2815 |
| 1.9583 | 2.56 | 400 | 2.1686 | 0.4501 |
| 1.8098 | 3.21 | 500 | 2.0731 | 0.5085 |
| 1.7511 | 3.85 | 600 | 1.9978 | 0.5320 |
| 1.6581 | 4.49 | 700 | 1.9233 | 0.5584 |
| 1.6094 | 5.13 | 800 | 1.8703 | 0.5706 |
| 1.5241 | 5.77 | 900 | 1.8192 | 0.6017 |
| 1.501 | 6.41 | 1000 | 1.7757 | 0.6111 |
| 1.4308 | 7.05 | 1100 | 1.7415 | 0.6281 |
| 1.3985 | 7.69 | 1200 | 1.7015 | 0.6375 |
| 1.3559 | 8.33 | 1300 | 1.6652 | 0.6403 |
| 1.3092 | 8.97 | 1400 | 1.6290 | 0.6488 |
| 1.3059 | 9.62 | 1500 | 1.6142 | 0.6620 |
| 1.2597 | 10.26 | 1600 | 1.5771 | 0.6704 |
| 1.2147 | 10.9 | 1700 | 1.5501 | 0.6902 |
| 1.1942 | 11.54 | 1800 | 1.5288 | 0.6911 |
| 1.1668 | 12.18 | 1900 | 1.5081 | 0.6902 |
| 1.1371 | 12.82 | 2000 | 1.4883 | 0.6949 |
| 1.1256 | 13.46 | 2100 | 1.4770 | 0.6930 |
| 1.0922 | 14.1 | 2200 | 1.4500 | 0.7081 |
| 1.0559 | 14.74 | 2300 | 1.4369 | 0.7072 |
| 1.054 | 15.38 | 2400 | 1.4157 | 0.7128 |
| 1.0465 | 16.03 | 2500 | 1.3899 | 0.7279 |
| 0.9965 | 16.67 | 2600 | 1.3734 | 0.7194 |
| 0.9876 | 17.31 | 2700 | 1.3603 | 0.7298 |
| 0.9791 | 17.95 | 2800 | 1.3422 | 0.7298 |
| 0.9551 | 18.59 | 2900 | 1.3309 | 0.7373 |
| 0.9313 | 19.23 | 3000 | 1.3223 | 0.7335 |
| 0.9211 | 19.87 | 3100 | 1.3052 | 0.7345 |
| 0.9071 | 20.51 | 3200 | 1.2897 | 0.7420 |
| 0.875 | 21.15 | 3300 | 1.2762 | 0.7561 |
| 0.8676 | 21.79 | 3400 | 1.2657 | 0.7542 |
| 0.8498 | 22.44 | 3500 | 1.2575 | 0.7580 |
| 0.8529 | 23.08 | 3600 | 1.2435 | 0.7542 |
| 0.8341 | 23.72 | 3700 | 1.2369 | 0.7561 |
| 0.8056 | 24.36 | 3800 | 1.2306 | 0.7533 |
| 0.8038 | 25.0 | 3900 | 1.2181 | 0.7665 |
| 0.7733 | 25.64 | 4000 | 1.2031 | 0.7655 |
| 0.7834 | 26.28 | 4100 | 1.2015 | 0.7637 |
| 0.7697 | 26.92 | 4200 | 1.1887 | 0.7637 |
| 0.7438 | 27.56 | 4300 | 1.1788 | 0.7674 |
| 0.733 | 28.21 | 4400 | 1.1740 | 0.7637 |
| 0.7244 | 28.85 | 4500 | 1.1671 | 0.7674 |
| 0.7091 | 29.49 | 4600 | 1.1563 | 0.7693 |
| 0.7138 | 30.13 | 4700 | 1.1543 | 0.7665 |
| 0.693 | 30.77 | 4800 | 1.1445 | 0.7665 |
| 0.6837 | 31.41 | 4900 | 1.1348 | 0.7731 |
| 0.6706 | 32.05 | 5000 | 1.1282 | 0.7702 |
| 0.6514 | 32.69 | 5100 | 1.1222 | 0.7712 |
| 0.6513 | 33.33 | 5200 | 1.1323 | 0.7665 |
| 0.6517 | 33.97 | 5300 | 1.1138 | 0.7693 |
| 0.637 | 34.62 | 5400 | 1.1014 | 0.7712 |
| 0.6277 | 35.26 | 5500 | 1.0949 | 0.7759 |
| 0.6103 | 35.9 | 5600 | 1.0882 | 0.7759 |
| 0.5916 | 36.54 | 5700 | 1.0888 | 0.7693 |
| 0.6101 | 37.18 | 5800 | 1.0890 | 0.7721 |
| 0.6042 | 37.82 | 5900 | 1.0779 | 0.7750 |
| 0.5618 | 38.46 | 6000 | 1.0769 | 0.7750 |
| 0.5878 | 39.1 | 6100 | 1.0638 | 0.7787 |
| 0.5522 | 39.74 | 6200 | 1.0611 | 0.7731 |
| 0.557 | 40.38 | 6300 | 1.0639 | 0.7768 |
| 0.5665 | 41.03 | 6400 | 1.0668 | 0.7740 |
| 0.5269 | 41.67 | 6500 | 1.0531 | 0.7759 |
| 0.5672 | 42.31 | 6600 | 1.0493 | 0.7759 |
| 0.5197 | 42.95 | 6700 | 1.0469 | 0.7759 |
| 0.5273 | 43.59 | 6800 | 1.0481 | 0.7740 |
| 0.5149 | 44.23 | 6900 | 1.0434 | 0.7712 |
| 0.5146 | 44.87 | 7000 | 1.0462 | 0.7787 |
| 0.5033 | 45.51 | 7100 | 1.0358 | 0.7759 |
| 0.5073 | 46.15 | 7200 | 1.0322 | 0.7806 |
| 0.4964 | 46.79 | 7300 | 1.0313 | 0.7815 |
| 0.4832 | 47.44 | 7400 | 1.0238 | 0.7797 |
| 0.484 | 48.08 | 7500 | 1.0355 | 0.7768 |
| 0.4856 | 48.72 | 7600 | 1.0263 | 0.7834 |
| 0.4688 | 49.36 | 7700 | 1.0178 | 0.7815 |
| 0.4628 | 50.0 | 7800 | 1.0161 | 0.7787 |
| 0.457 | 50.64 | 7900 | 1.0195 | 0.7768 |
| 0.4547 | 51.28 | 8000 | 1.0064 | 0.7825 |
| 0.4551 | 51.92 | 8100 | 1.0108 | 0.7806 |
| 0.4408 | 52.56 | 8200 | 1.0136 | 0.7768 |
| 0.4471 | 53.21 | 8300 | 1.0016 | 0.7834 |
| 0.4431 | 53.85 | 8400 | 1.0038 | 0.7863 |
| 0.4393 | 54.49 | 8500 | 1.0057 | 0.7815 |
| 0.4246 | 55.13 | 8600 | 0.9961 | 0.7797 |
| 0.4237 | 55.77 | 8700 | 1.0019 | 0.7806 |
| 0.4128 | 56.41 | 8800 | 0.9941 | 0.7806 |
| 0.4285 | 57.05 | 8900 | 0.9946 | 0.7815 |
| 0.4121 | 57.69 | 9000 | 0.9932 | 0.7806 |
| 0.4167 | 58.33 | 9100 | 0.9916 | 0.7825 |
| 0.4001 | 58.97 | 9200 | 0.9915 | 0.7825 |
| 0.4053 | 59.62 | 9300 | 0.9886 | 0.7815 |
| 0.3993 | 60.26 | 9400 | 0.9910 | 0.7844 |
| 0.3881 | 60.9 | 9500 | 0.9856 | 0.7863 |
| 0.3846 | 61.54 | 9600 | 0.9917 | 0.7806 |
| 0.3913 | 62.18 | 9700 | 0.9820 | 0.7834 |
| 0.3897 | 62.82 | 9800 | 0.9806 | 0.7844 |
| 0.3821 | 63.46 | 9900 | 0.9804 | 0.7825 |
| 0.3742 | 64.1 | 10000 | 0.9873 | 0.7844 |
| 0.3835 | 64.74 | 10100 | 0.9807 | 0.7834 |
| 0.3571 | 65.38 | 10200 | 0.9792 | 0.7844 |
| 0.38 | 66.03 | 10300 | 0.9786 | 0.7844 |
| 0.3612 | 66.67 | 10400 | 0.9769 | 0.7844 |
| 0.3628 | 67.31 | 10500 | 0.9991 | 0.7740 |
| 0.3655 | 67.95 | 10600 | 0.9737 | 0.7806 |
| 0.3489 | 68.59 | 10700 | 0.9745 | 0.7853 |
| 0.371 | 69.23 | 10800 | 0.9853 | 0.7787 |
| 0.3454 | 69.87 | 10900 | 0.9676 | 0.7825 |
| 0.3457 | 70.51 | 11000 | 0.9708 | 0.7853 |
| 0.3559 | 71.15 | 11100 | 0.9691 | 0.7863 |
| 0.3523 | 71.79 | 11200 | 0.9690 | 0.7872 |
| 0.3357 | 72.44 | 11300 | 0.9707 | 0.7815 |
| 0.344 | 73.08 | 11400 | 0.9690 | 0.7863 |
| 0.3527 | 73.72 | 11500 | 0.9788 | 0.7825 |
| 0.327 | 74.36 | 11600 | 0.9703 | 0.7825 |
| 0.3376 | 75.0 | 11700 | 0.9770 | 0.7787 |
| 0.3518 | 75.64 | 11800 | 0.9718 | 0.7834 |
| 0.3031 | 76.28 | 11900 | 0.9736 | 0.7863 |
| 0.3404 | 76.92 | 12000 | 0.9661 | 0.7825 |
| 0.3243 | 77.56 | 12100 | 0.9731 | 0.7853 |
| 0.3381 | 78.21 | 12200 | 0.9685 | 0.7900 |
| 0.3258 | 78.85 | 12300 | 0.9691 | 0.7844 |
| 0.3149 | 79.49 | 12400 | 0.9615 | 0.7844 |
| 0.3234 | 80.13 | 12500 | 0.9661 | 0.7853 |
| 0.3296 | 80.77 | 12600 | 0.9722 | 0.7815 |
| 0.3215 | 81.41 | 12700 | 0.9672 | 0.7834 |
| 0.3121 | 82.05 | 12800 | 0.9641 | 0.7834 |
| 0.3163 | 82.69 | 12900 | 0.9636 | 0.7834 |
| 0.3225 | 83.33 | 13000 | 0.9649 | 0.7853 |
| 0.3136 | 83.97 | 13100 | 0.9652 | 0.7825 |
| 0.3172 | 84.62 | 13200 | 0.9639 | 0.7853 |
| 0.3098 | 85.26 | 13300 | 0.9671 | 0.7834 |
| 0.3081 | 85.9 | 13400 | 0.9627 | 0.7806 |
| 0.3099 | 86.54 | 13500 | 0.9626 | 0.7815 |
| 0.3144 | 87.18 | 13600 | 0.9612 | 0.7815 |
| 0.2952 | 87.82 | 13700 | 0.9645 | 0.7863 |
| 0.3092 | 88.46 | 13800 | 0.9604 | 0.7853 |
| 0.3193 | 89.1 | 13900 | 0.9630 | 0.7844 |
| 0.3005 | 89.74 | 14000 | 0.9667 | 0.7815 |
| 0.2928 | 90.38 | 14100 | 0.9638 | 0.7844 |
| 0.315 | 91.03 | 14200 | 0.9644 | 0.7844 |
| 0.3095 | 91.67 | 14300 | 0.9637 | 0.7834 |
| 0.3036 | 92.31 | 14400 | 0.9615 | 0.7834 |
| 0.298 | 92.95 | 14500 | 0.9617 | 0.7844 |
| 0.2944 | 93.59 | 14600 | 0.9658 | 0.7834 |
| 0.3065 | 94.23 | 14700 | 0.9625 | 0.7834 |
| 0.2983 | 94.87 | 14800 | 0.9622 | 0.7844 |
| 0.2953 | 95.51 | 14900 | 0.9626 | 0.7834 |
| 0.3063 | 96.15 | 15000 | 0.9608 | 0.7853 |
| 0.3058 | 96.79 | 15100 | 0.9631 | 0.7853 |
| 0.2974 | 97.44 | 15200 | 0.9614 | 0.7844 |
| 0.3004 | 98.08 | 15300 | 0.9608 | 0.7844 |
| 0.3001 | 98.72 | 15400 | 0.9613 | 0.7853 |
| 0.2968 | 99.36 | 15500 | 0.9623 | 0.7853 |
| 0.2985 | 100.0 | 15600 | 0.9618 | 0.7844 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.6
- Tokenizers 0.14.1
|
gizmo-ai/Yi-34B-Chat-AWQ
|
gizmo-ai
| 2024-02-02T09:54:51Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"base_model:01-ai/Yi-34B-Chat",
"base_model:quantized:01-ai/Yi-34B-Chat",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2024-02-02T09:54:50Z |
---
base_model: 01-ai/Yi-34B-Chat
inference: false
license: other
license_link: LICENSE
license_name: yi-license
model_creator: 01-ai
model_name: Yi 34B Chat
model_type: yi
pipeline_tag: text-generation
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
widget:
- example_title: Yi-34B-Chat
output:
text: ' Hello! How can I assist you today?'
text: hi
- example_title: Yi-34B
output:
text: " an eerie sense that something is just not right\u2026\nBetween the two\
\ worlds lies The Forgotten Kingdom - home to creatures long since thought extinct\
\ and ancient magic so strong it defies belief! Only here can you find what\
\ has been lost for centuries: An Elixir Of Life which will restore youth and\
\ vitality if only those who seek its power are brave enough to face up against\
\ all manner of dangers lurking in this mysterious land! But beware; some say\
\ there may even exist powerful entities beyond our comprehension whose intentions\
\ towards humanity remain unclear at best ---- they might want nothing more\
\ than destruction itself rather then anything else from their quest after immortality\
\ (and maybe someone should tell them about modern medicine)? In any event though\
\ \u2013 one thing remains true regardless : whether or not success comes easy\
\ depends entirely upon how much effort we put into conquering whatever challenges\
\ lie ahead along with having faith deep down inside ourselves too ;) So let\u2019\
s get started now shall We?"
text: There's a place where time stands still. A place of breath taking wonder,
but also
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Yi 34B Chat - AWQ
- Model creator: [01-ai](https://huggingface.co/01-ai)
- Original model: [Yi 34B Chat](https://huggingface.co/01-ai/Yi-34B-Chat)
<!-- description start -->
## Description
This repo contains AWQ model files for [01-ai's Yi 34B Chat](https://huggingface.co/01-ai/Yi-34B-Chat).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Yi-34B-Chat-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Yi-34B-Chat-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Yi-34B-Chat-GGUF)
* [01-ai's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/01-ai/Yi-34B-Chat)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release 128g GEMM models only. The addition of group_size 32 models and GEMV kernel models is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Yi-34B-Chat-AWQ/tree/main) | 4 | 128 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 19.23 GB |
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Yi-34B-Chat-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Yi-34B-Chat-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Yi-34B-Chat-AWQ --quantization awq --dtype auto
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Tell me about AI",
"Write a story about llamas",
"What is 291 - 150?",
"How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
system_message = "You are a helpful assistant."  # example system message (not specified in the original card)
prompt_template = '''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
prompts = [prompt_template.format(system_message=system_message, prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/Yi-34B-Chat-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Yi-34B-Chat-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
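A hedged sketch of a complete `docker run` invocation built from those parameters (the volume mount and `--shm-size` setting are illustrative additions, not from the original card):
```shell
docker run --gpus all --shm-size 1g -p 3000:3000 -v $PWD/data:/data \
  ghcr.io/huggingface/text-generation-inference:1.1.0 \
  --model-id TheBloke/Yi-34B-Chat-AWQ --port 3000 --quantize awq \
  --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```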
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
system_message = "You are a helpful assistant."  # example system message (not specified in the original card)
prompt_template = '''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''.format(system_message=system_message, prompt=prompt)
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### Transformers example code (requires Transformers 4.35.0 and later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_name_or_path = "TheBloke/Yi-34B-Chat-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
low_cpu_mem_usage=True,
device_map="cuda:0"
)
# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "Tell me about AI"
system_message = "You are a helpful assistant."  # example system message (not specified in the original card)
prompt_template = '''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''.format(system_message=system_message, prompt=prompt)
# Convert prompt to tokens
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
generation_params = {
"do_sample": True,
"temperature": 0.7,
"top_p": 0.95,
"top_k": 40,
"max_new_tokens": 512,
"repetition_penalty": 1.1
}
# Generate streamed output, visible one token at a time
generation_output = model.generate(
tokens,
streamer=streamer,
**generation_params
)
# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
tokens,
**generation_params
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)
# Inference is also possible via Transformers' pipeline
from transformers import pipeline
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
**generation_params
)
pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, ้ฟๆ, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjรคreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: 01-ai's Yi 34B Chat
<div align="center">
<p align="center">
<img width="200px" src="https://github.com/01-ai/Yi/raw/main/assets/img/Yi.svg?sanitize=true">
</p>
<div style="display: inline-block;">
<a rel="noopener nofollow" href="https://github.com/01-ai/Yi/issues">
<img src="https://img.shields.io/github/issues/01-ai/Yi?logo=github" style="margin: 0 0;">
</a>
</div>
<div style="display: inline-block;">
<a rel="noopener nofollow" href="https://github.com/01-ai/Yi/actions/workflows/build_docker_image.yml">
<img src="https://github.com/01-ai/Yi/actions/workflows/build_docker_image.yml/badge.svg" style="margin: 0 0;">
</a>
</div>
<div style="display: inline-block;">
<a href="https://huggingface.co/01-ai">
<img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-01--ai-blue" style="margin: 0 0;">
</a>
</div>
<div style="display: inline-block;">
<a rel="noopener nofollow" href="https://www.modelscope.cn/organization/01ai/">
<img src="https://img.shields.io/badge/ModelScope-01--ai-blue" style="margin: 0 0;">
</a>
</div>
<div style="display: inline-block;">
<a rel="noopener nofollow" href="https://wisemodel.cn/organization/01.AI">
<img src="https://img.shields.io/badge/WiseModel-01--ai-blue" style="margin: 0 0;">
</a>
</div>
<div style="display: inline-block;">
<a rel="noopener nofollow" href="https://replicate.com/01-ai">
<img src="https://img.shields.io/badge/Replicate-01--ai-blue?logo=data:image/svg%2bxml;base64,PHN2ZyB2ZXJzaW9uPSIxLjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgeG1sbnM6eGxpbms9Imh0dHA6Ly93d3cudzMub3JnLzE5OTkveGxpbmsiIHg9IjBweCIgeT0iMHB4IiB2aWV3Qm94PSIwIDAgMTAwMCAxMDAwIiBjbGFzcz0ibG9nbyIgZmlsbD0iY3VycmVudENvbG9yIiB4bWw6c3BhY2U9InByZXNlcnZlIj4KICA8Zz4KICAgIDxwb2x5Z29uIHBvaW50cz0iMTAwMCw0MjcuNiAxMDAwLDU0MC42IDYwMy40LDU0MC42IDYwMy40LDEwMDAgNDc3LDEwMDAgNDc3LDQyNy42IAkiPjwvcG9seWdvbj4KICAgIDxwb2x5Z29uIHBvaW50cz0iMTAwMCwyMTMuOCAxMDAwLDMyNyAzNjQuOCwzMjcgMzY0LjgsMTAwMCAyMzguNCwxMDAwIDIzOC40LDIxMy44IAkiPjwvcG9seWdvbj4KICAgIDxwb2x5Z29uIHBvaW50cz0iMTAwMCwwIDEwMDAsMTEzLjIgMTI2LjQsMTEzLjIgMTI2LjQsMTAwMCAwLDEwMDAgMCwwIAkiPjwvcG9seWdvbj4KICA8L2c+Cjwvc3ZnPg==" style="margin: 0 0;">
</a>
</div>
<div style="display: inline-block;">
<a rel="noopener nofollow" href="https://github.com/01-ai/Yi/blob/main/LICENSE">
<img src="https://img.shields.io/badge/Code_License-Apache_2.0-lightblue" style="margin: 0 0;">
</a>
</div>
<div style="display: inline-block;">
<a rel="noopener nofollow" href="https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt">
<img src="https://img.shields.io/badge/Model_License-Model_Agreement-lightblue" style="margin: 0 0;">
</a>
</div>
<div style="display: inline-block;">
<a rel="noopener nofollow" href="mailto:[email protected]">
<img src="https://img.shields.io/badge/✉️[email protected]" style="margin: 0 0;">
</a>
</div>
</div>
## Introduction
The **Yi** series models are large language models trained from scratch by
developers at [01.AI](https://01.ai/).
## News
<details open>
<summary>🎯 <b>2023/11/23</b>: The chat models are open to the public.</summary>
This release contains two chat models based on previously released base models, two 8-bit models quantized with GPTQ, and two 4-bit models quantized with AWQ.
- `Yi-34B-Chat`
- `Yi-34B-Chat-4bits`
- `Yi-34B-Chat-8bits`
- `Yi-6B-Chat`
- `Yi-6B-Chat-4bits`
- `Yi-6B-Chat-8bits`
You can try some of them interactively at:
- [HuggingFace](https://huggingface.co/spaces/01-ai/Yi-34B-Chat)
- [Replicate](https://replicate.com/01-ai)
</details>
<details open>
<summary>🔔 <b>2023/11/23</b>: The Yi Series Models Community License Agreement is updated to v2.1.</summary>
</details>
<details>
<summary>🔥 <b>2023/11/08</b>: Invited test of Yi-34B chat model.</summary>
Application form:
- [English](https://cn.mikecrm.com/l91ODJf)
- [Chinese](https://cn.mikecrm.com/gnEZjiQ)
</details>
<details>
<summary>🎯 <b>2023/11/05</b>: The base model of <code>Yi-6B-200K</code> and <code>Yi-34B-200K</code>.</summary>
This release contains two base models with the same parameter sizes as the previous
release, except that the context window is extended to 200K.
</details>
<details>
<summary>🎯 <b>2023/11/02</b>: The base model of <code>Yi-6B</code> and <code>Yi-34B</code>.</summary>
The first public release contains two bilingual (English/Chinese) base models
with the parameter sizes of 6B and 34B. Both of them are trained with 4K
sequence length and can be extended to 32K during inference time.
</details>
## Model Performance
### Base Model Performance
| Model | MMLU | CMMLU | C-Eval | GAOKAO | BBH | Common-sense Reasoning | Reading Comprehension | Math & Code |
| :------------ | :------: | :------: | :------: | :------: | :------: | :--------------------: | :-------------------: | :---------: |
| | 5-shot | 5-shot | 5-shot | 0-shot | 3-shot@1 | - | - | - |
| LLaMA2-34B | 62.6 | - | - | - | 44.1 | 69.9 | 68.0 | 26.0 |
| LLaMA2-70B | 68.9 | 53.3 | - | 49.8 | 51.2 | 71.9 | 69.4 | 36.8 |
| Baichuan2-13B | 59.2 | 62.0 | 58.1 | 54.3 | 48.8 | 64.3 | 62.4 | 23.0 |
| Qwen-14B | 66.3 | 71.0 | 72.1 | 62.5 | 53.4 | 73.3 | 72.5 | **39.8** |
| Skywork-13B | 62.1 | 61.8 | 60.6 | 68.1 | 41.7 | 72.4 | 61.4 | 24.9 |
| InternLM-20B | 62.1 | 59.0 | 58.8 | 45.5 | 52.5 | 78.3 | - | 30.4 |
| Aquila-34B | 67.8 | 71.4 | 63.1 | - | - | - | - | - |
| Falcon-180B | 70.4 | 58.0 | 57.8 | 59.0 | 54.0 | 77.3 | 68.8 | 34.0 |
| Yi-6B | 63.2 | 75.5 | 72.0 | 72.2 | 42.8 | 72.3 | 68.7 | 19.8 |
| Yi-6B-200K | 64.0 | 75.3 | 73.5 | 73.9 | 42.0 | 72.0 | 69.1 | 19.0 |
| **Yi-34B** | **76.3** | **83.7** | 81.4 | 82.8 | **54.3** | **80.1** | 76.4 | 37.1 |
| Yi-34B-200K | 76.1 | 83.6 | **81.9** | **83.4** | 52.7 | 79.7 | **76.6** | 36.3 |
While benchmarking open-source models, we have observed a disparity between the
results generated by our pipeline and those reported in public sources (e.g.
OpenCompass). Upon conducting a more in-depth investigation of this difference,
we have discovered that various models may employ different prompts,
post-processing strategies, and sampling techniques, potentially resulting in
significant variations in the outcomes. Our prompt and post-processing strategy
remains consistent with the original benchmark, and greedy decoding is employed
during evaluation without any post-processing for the generated content. For
scores that were not reported by the original authors (including scores reported
with different settings), we try to get results with our pipeline.
To evaluate the model's capability extensively, we adopted the methodology
outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande,
ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ
were incorporated to evaluate reading comprehension. CSQA was exclusively tested
using a 7-shot setup, while all other tests were conducted with a 0-shot
configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1),
HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code". Due
to technical constraints, we did not test Falcon-180B on QuAC and OBQA; the score
is derived by averaging the scores on the remaining tasks. Since the scores for
these two tasks are generally lower than the average, we believe that
Falcon-180B's performance was not underestimated.
### Chat Model Performance
| Model | MMLU | MMLU | CMMLU | CMMLU | C-Eval(val)<sup>*</sup> | C-Eval(val)<sup>*</sup> | Truthful QA | BBH | BBH | GSM8k | GSM8k |
| ----------------------- | --------- | --------- | --------- | --------- | ----------------------- | ----------------------- | ----------- | --------- | --------- | --------- | --------- |
| | 0-shot | 5-shot | 0-shot | 5-shot | 0-shot | 5-shot | 0-shot | 0-shot | 3-shot | 0-shot | 4-shot |
| LLaMA2-13B-Chat | 50.88 | 47.33 | 27.47 | 35.08 | 27.93 | 35.88 | 36.84 | 32.90 | 58.22 | 36.85 | 2.73 |
| LLaMA2-70B-Chat | 59.42 | 59.86 | 36.10 | 40.99 | 34.99 | 41.31 | 53.95 | 42.36 | 58.53 | 47.08 | 58.68 |
| Baichuan2-13B-Chat | 55.09 | 50.14 | 58.64 | 59.47 | 56.02 | 54.75 | 48.98 | 38.81 | 47.15 | 45.72 | 23.28 |
| Qwen-14B-Chat | 63.99 | 64.98 | 67.73 | 70.57 | 66.12 | 70.06 | 52.49 | 49.65 | 54.98 | 59.51 | 61.18 |
| InternLM-Chat-20B | 55.55 | 57.42 | 53.55 | 53.75 | 51.19 | 53.57 | 51.75 | 42.41 | 36.68 | 15.69 | 43.44 |
| AquilaChat2-34B v1.2 | 65.15 | 66.70 | 67.51 | 70.02 | **82.99** | **89.38** | **64.33** | 20.12 | 34.28 | 11.52 | 48.45 |
| Yi-6B-Chat | 58.24 | 60.99 | 69.44 | 74.71 | 68.80 | 74.22 | 50.58 | 39.70 | 47.15 | 38.44 | 44.88 |
| Yi-6B-Chat-8bits(GPTQ) | 58.29 | 60.96 | 69.21 | 74.69 | 69.17 | 73.85 | 49.85 | 40.35 | 47.26 | 39.42 | 44.88 |
| Yi-6B-Chat-4bits(AWQ) | 56.78 | 59.89 | 67.70 | 73.29 | 67.53 | 72.29 | 50.29 | 37.74 | 43.62 | 35.71 | 38.36 |
| Yi-34B-Chat | **67.62** | 73.46 | **79.11** | **81.34** | 77.04 | 78.53 | 62.43 | 51.41 | **71.74** | **71.65** | **75.97** |
| Yi-34B-Chat-8bits(GPTQ) | 66.24 | **73.69** | 79.05 | 81.23 | 76.82 | 78.97 | 61.84 | **52.08** | 70.97 | 70.74 | 75.74 |
| Yi-34B-Chat-4bits(AWQ) | 65.77 | 72.42 | 78.21 | 80.50 | 75.71 | 77.27 | 61.84 | 48.30 | 69.39 | 70.51 | 74.00 |
We evaluated various benchmarks using both zero-shot and few-shot methods, except for TruthfulQA. Generally, the zero-shot approach is more common in chat models. Our evaluation strategy involves generating responses while following instructions explicitly or implicitly (such as using few-shot examples). We then isolate relevant answers from the generated text. Some models are not well-suited to producing output in the specific format required by instructions in a few datasets, which leads to suboptimal results.
<strong>*</strong>: C-Eval results are evaluated on the validation datasets
### Quantized Chat Model Performance
We also provide both 4-bit (AWQ) and 8-bit (GPTQ) quantized Yi chat models. Evaluation results on various benchmarks show that the quantized models have negligible losses. Additionally, they reduce the memory footprint. After testing different configurations of prompts and generation lengths, we highly recommend following the guidelines in the memory footprint table below when selecting a device to run our models.
| | batch=1 | batch=4 | batch=16 | batch=32 |
| ----------------------- | ------- | ------- | -------- | -------- |
| Yi-34B-Chat | 65GiB | 68GiB | 76GiB | >80GiB |
| Yi-34B-Chat-8bits(GPTQ) | 35GiB | 37GiB | 46GiB | 58GiB |
| Yi-34B-Chat-4bits(AWQ) | 19GiB | 20GiB | 30GiB | 40GiB |
| Yi-6B-Chat | 12GiB | 13GiB | 15GiB | 18GiB |
| Yi-6B-Chat-8bits(GPTQ) | 7GiB | 8GiB | 10GiB | 14GiB |
| Yi-6B-Chat-4bits(AWQ) | 4GiB | 5GiB | 7GiB | 10GiB |
Note: All the numbers in the table represent the minimum recommended memory for running models of the corresponding size.
### Limitations of Chat Model
The released chat model has been trained exclusively with Supervised Fine-Tuning (SFT). Compared to other standard chat models, our model produces more diverse responses, making it suitable for various downstream tasks, such as creative scenarios. Furthermore, this diversity is expected to enhance the likelihood of generating higher-quality responses, which will be advantageous for subsequent Reinforcement Learning (RL) training.
However, this higher diversity might amplify certain existing issues, including:
- **Hallucination**: This refers to the model generating factually incorrect or nonsensical information. With the model's responses being more varied, there is a higher chance of hallucinations that are not based on accurate data or logical reasoning.
- **Non-determinism in re-generation**: When attempting to regenerate or sample responses, inconsistencies in the outcomes may occur. The increased diversity can lead to varying results even under similar input conditions.
- **Cumulative Error**: This occurs when errors in the model's responses compound over time. As the model generates more diverse responses, the likelihood of small inaccuracies building up into larger errors increases, especially in complex tasks like extended reasoning, mathematical problem-solving, etc.
To achieve more coherent and consistent responses, it is advisable to adjust generation configuration parameters such as `temperature`, `top_p`, or `top_k`. These adjustments can help strike a balance between creativity and coherence in the model's outputs.
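For instance, a hedged sketch with Transformers (reusing `model`, `tokenizer`, and `input_ids` from section 3.1 below; the parameter values are illustrative, not official recommendations):
```python
# Assumes `model`, `tokenizer`, and `input_ids` are prepared as in section 3.1 below.
output_ids = model.generate(
    input_ids.to("cuda"),
    do_sample=True,
    temperature=0.6,   # lower values give more deterministic output
    top_p=0.9,         # nucleus-sampling cutoff
    top_k=40,          # restrict sampling to the 40 most likely tokens
    max_new_tokens=256,
)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```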
## Usage
Feel free to [create an issue](https://github.com/01-ai/Yi/issues/new) if you
encounter any problem when using the **Yi** series models.
### 1. Prepare development environment
#### 1.1 Docker
The best approach to try the **Yi** series models is through Docker with GPUs. We
provide the following docker images to help you get started.
- `registry.lingyiwanwu.com/ci/01-ai/yi:latest`
- `ghcr.io/01-ai/yi:latest`
Note that the `latest` tag always points to the latest code in the `main`
branch. To test a stable version, please replace it with a specific
[tag](https://github.com/01-ai/Yi/tags).
#### 1.2 Local development environment
We use [`conda-lock`](https://github.com/conda/conda-lock) to generate fully reproducible lock files for conda environments. You can refer to [conda-lock.yml](./conda-lock.yml) for the exact versions of the dependencies. Additionally, we utilize [`micromamba`](https://mamba.readthedocs.io/en/latest/user_guide/micromamba.html) for installing these dependencies.
To install the dependencies, please follow these steps:
1. Install `micromamba` by following the instructions available [here](https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html).
2. Execute `micromamba install -y -n yi -f conda-lock.yml` to create a conda environment named `yi` and install the necessary dependencies, as sketched below.
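In shell form (hedged: `micromamba activate` assumes your shell has been initialised for micromamba):
```bash
micromamba install -y -n yi -f conda-lock.yml   # create the `yi` env from the lock file
micromamba activate yi                          # activate it before running the demos
```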
### 2. Download the model (optional)
By default, the model weights and tokenizer will be downloaded from
[HuggingFace](https://huggingface.co/01-ai) automatically in the next step. You
can also download them manually from the following places:
- [ModelScope](https://www.modelscope.cn/organization/01ai/)
- [WiseModel](https://wisemodel.cn/organization/01.AI)
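For the Hugging Face route, a minimal sketch with `huggingface_hub` (standard `snapshot_download`):
```python
from huggingface_hub import snapshot_download

# Downloads every file of the repo into the local cache and returns the path.
local_dir = snapshot_download(repo_id="01-ai/Yi-34B-Chat")
print(local_dir)
```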
### 3. Examples
#### 3.1 Use the chat model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = '01-ai/Yi-34B-Chat'
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
# Since transformers 4.35.0, the GPT-Q/AWQ model can be loaded using AutoModelForCausalLM.
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
#### 3.2 Use the base model
```bash
python demo/text_generation.py
```
To reuse the models downloaded in the previous step, you can provide the extra
`--model` argument:
```bash
python demo/text_generation.py --model /path/to/model
```
Or if you'd like to get your hands dirty:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("01-ai/Yi-34B", device_map="auto", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-34B", trust_remote_code=True)
inputs = tokenizer("There's a place where time stands still. A place of breath taking wonder, but also", return_tensors="pt")
max_length = 256
outputs = model.generate(
inputs.input_ids.cuda(),
max_length=max_length,
eos_token_id=tokenizer.eos_token_id,
do_sample=True,
repetition_penalty=1.3,
no_repeat_ngram_size=5,
temperature=0.7,
top_k=40,
top_p=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
<details>
<summary>Output</summary>
**Prompt**: There's a place where time stands still. A place of breath taking wonder, but also
**Generation**: There's a place where time stands still. A place of breath taking wonder, but also of great danger. A place where the very air you breathe could kill you. A place where the only way to survive is to be prepared.
The place is called the Arctic.
The Arctic is a vast, frozen wilderness. It is a place of extremes. The temperatures can drop to -40 degrees Celsius. The winds can reach speeds of 100 kilometers per hour. The sun can shine for 24 hours a day, or not at all for weeks on end.
The Arctic is also a place of great beauty. The ice and snow are a pristine white. The sky is a deep blue. The sunsets are spectacular.
But the Arctic is also a place of great danger. The ice can be treacherous. The winds can be deadly. The sun can be blinding.
The Arctic is a place where the only way to survive is to be prepared.
The Arctic is a place of extremes. The temperatures can drop to -40 degrees Celsius. The winds can reach speeds of 100 kilometers per hour. The sun can shine for 24 hours a day, or not at all for weeks on end.
The Arctic is a place of great beauty. The ice and snow are a
</details>
For more advanced usage, please refer to the
[doc](https://github.com/01-ai/Yi/tree/main/demo).
#### 3.3 Finetuning from the base model
```bash
bash finetune/scripts/run_sft_Yi_6b.sh
```
Once finished, you can compare the finetuned model and the base model with the following command:
```bash
bash finetune/scripts/run_eval.sh
```
For more advanced usage like fine-tuning based on your custom data, please refer
the [doc](https://github.com/01-ai/Yi/tree/main/finetune).
#### 3.4 Quantization
##### GPT-Q
```bash
python quantization/gptq/quant_autogptq.py \
--model /base_model \
--output_dir /quantized_model \
--trust_remote_code
```
Once finished, you can then evaluate the resulting model as follows:
```bash
python quantization/gptq/eval_quantized_model.py \
--model /quantized_model \
--trust_remote_code
```
For a more detailed explanation, please read the [doc](https://github.com/01-ai/Yi/tree/main/quantization/gptq)
##### AWQ
```bash
python quantization/awq/quant_autoawq.py \
--model /base_model \
--output_dir /quantized_model \
--trust_remote_code
```
Once finished, you can then evaluate the resulting model as follows:
```bash
python quantization/awq/eval_quantized_model.py \
--model /quantized_model \
--trust_remote_code
```
For a more detailed explanation, please read the [doc](https://github.com/01-ai/Yi/tree/main/quantization/awq)
## Ecosystem
🤗 You are encouraged to create a PR and share your awesome work built on top of
the Yi series models.
- Serving
- [ScaleLLM](https://github.com/vectorch-ai/ScaleLLM#supported-models): Efficiently run Yi models locally.
- Quantization
- [TheBloke/Yi-34B-GGUF](https://huggingface.co/TheBloke/Yi-34B-GGUF)
- [TheBloke/Yi-34B-GPTQ](https://huggingface.co/TheBloke/Yi-34B-GPTQ)
- Finetuning
- [NousResearch/Nous-Capybara-34B](https://huggingface.co/NousResearch/Nous-Capybara-34B)
## FAQ
1. **What dataset was this trained with?**
The dataset we use contains Chinese & English only. We used approximately 3T
tokens. The detailed composition and construction of the dataset will be
described in the upcoming technical report.
## Disclaimer
We use data compliance checking algorithms during the training process, to
ensure the compliance of the trained model to the best of our ability. Due to
complex data and the diversity of language model usage scenarios, we cannot
guarantee that the model will generate correct and reasonable output in all
scenarios. Please be aware that there is still a risk of the model producing
problematic outputs. We will not be responsible for any risks and issues
resulting from misuse, misguidance, illegal usage, and related misinformation,
as well as any associated data security concerns.
## License
The source code in this repo is licensed under the [Apache 2.0
license](https://github.com/01-ai/Yi/blob/main/LICENSE). The Yi series models
are fully open for academic research and free commercial usage with permission
via applications. All usage must adhere to the [Model License
Agreement 2.0](https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt).
To apply for the official commercial license, please contact us
([[email protected]](mailto:[email protected])).
|
birgermoell/WestLake-Munin-Cat-NorskGPT
|
birgermoell
| 2024-02-02T09:34:38Z | 30 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"RJuro/munin-neuralbeagle-7b",
"timpal0l/BeagleCatMunin",
"macadeliccc/WestLake-7B-v2-laser-truthy-dpo",
"bineric/NorskGPT-Mistral-7b",
"meta-math/MetaMath-Mistral-7B",
"teknium/OpenHermes-2.5-Mistral-7B",
"base_model:RJuro/munin-neuralbeagle-7b",
"base_model:merge:RJuro/munin-neuralbeagle-7b",
"base_model:bineric/NorskGPT-Mistral-7b",
"base_model:merge:bineric/NorskGPT-Mistral-7b",
"base_model:macadeliccc/WestLake-7B-v2-laser-truthy-dpo",
"base_model:merge:macadeliccc/WestLake-7B-v2-laser-truthy-dpo",
"base_model:meta-math/MetaMath-Mistral-7B",
"base_model:merge:meta-math/MetaMath-Mistral-7B",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:merge:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:timpal0l/BeagleCatMunin",
"base_model:merge:timpal0l/BeagleCatMunin",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T09:30:42Z |
---
tags:
- merge
- mergekit
- lazymergekit
- RJuro/munin-neuralbeagle-7b
- timpal0l/BeagleCatMunin
- macadeliccc/WestLake-7B-v2-laser-truthy-dpo
- bineric/NorskGPT-Mistral-7b
- meta-math/MetaMath-Mistral-7B
- teknium/OpenHermes-2.5-Mistral-7B
base_model:
- RJuro/munin-neuralbeagle-7b
- timpal0l/BeagleCatMunin
- macadeliccc/WestLake-7B-v2-laser-truthy-dpo
- bineric/NorskGPT-Mistral-7b
- meta-math/MetaMath-Mistral-7B
- teknium/OpenHermes-2.5-Mistral-7B
---
# WestLake-Munin-Cat-NorskGPT
WestLake-Munin-Cat-NorskGPT is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [RJuro/munin-neuralbeagle-7b](https://huggingface.co/RJuro/munin-neuralbeagle-7b)
* [timpal0l/BeagleCatMunin](https://huggingface.co/timpal0l/BeagleCatMunin)
* [macadeliccc/WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo)
* [bineric/NorskGPT-Mistral-7b](https://huggingface.co/bineric/NorskGPT-Mistral-7b)
* [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B)
* [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
models:
- model: RJuro/munin-neuralbeagle-7b
parameters:
density: 0.53
weight: 0.2
- model: timpal0l/BeagleCatMunin
parameters:
density: 0.53
weight: 0.2
- model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo
parameters:
density: 0.53
weight: 0.2
- model: bineric/NorskGPT-Mistral-7b
parameters:
density: 0.53
weight: 0.2
- model: meta-math/MetaMath-Mistral-7B
parameters:
density: 0.53
weight: 0.1
- model: teknium/OpenHermes-2.5-Mistral-7B
parameters:
density: 0.53
weight: 0.1
merge_method: dare_ties
base_model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "birgermoell/WestLake-Munin-Cat-NorskGPT"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
jlbaker361/ddpo-stability-dcgan-e5
|
jlbaker361
| 2024-02-02T09:14:19Z | 1 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-02T09:12:43Z |
---
{}
---
# DDPO trained model
Trained with DDPO using the following settings:
- num_epochs=5
- train_gradient_accumulation_steps=1
- sample_num_steps=30
- sample_batch_size=16
- train_batch_size=16
- sample_num_batches_per_epoch=32

Based on `stabilityai/stable-diffusion-2-base` (no additional starting checkpoint: None).
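A minimal usage sketch with `diffusers` (standard `StableDiffusionPipeline` loading, as the repo tags indicate; the prompt is illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "jlbaker361/ddpo-stability-dcgan-e5", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of a corgi").images[0]  # illustrative prompt
image.save("sample.png")
```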
|
elinaparajuli/phi-1_5-finetuned-gsm8k_QA
|
elinaparajuli
| 2024-02-02T09:11:33Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"phi",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/phi-1_5",
"base_model:adapter:microsoft/phi-1_5",
"license:mit",
"region:us"
] | null | 2024-02-02T09:02:25Z |
---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: microsoft/phi-1_5
model-index:
- name: phi-1_5-finetuned-gsm8k_QA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-1_5-finetuned-gsm8k_QA
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset.
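The card does not yet include usage code; below is a hedged sketch of loading the adapter with PEFT (standard `PeftModel` loading; the prompt is illustrative):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "elinaparajuli/phi-1_5-finetuned-gsm8k_QA")

inputs = tokenizer("Question: If 3 apples cost $6, how much do 7 apples cost?\nAnswer:", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```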
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1000
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
zeppdev/phi2-url
|
zeppdev
| 2024-02-02T09:03:11Z | 6 | 0 |
mlx
|
[
"mlx",
"safetensors",
"phi",
"nlp",
"code",
"text-generation",
"custom_code",
"en",
"license:mit",
"region:us"
] |
text-generation
| 2024-02-02T09:01:23Z |
---
language:
- en
license: mit
tags:
- nlp
- code
- mlx
license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE
pipeline_tag: text-generation
---
# zeppdev/phi2-url
This model was converted to MLX format from [`microsoft/phi-2`](https://huggingface.co/microsoft/phi-2).
Refer to the [original model card](https://huggingface.co/microsoft/phi-2) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("zeppdev/phi2-url")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
YuriPaglierani/q-FrozenLake-v1-4x4-noSlippery
|
YuriPaglierani
| 2024-02-02T08:56:10Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-02T08:56:07Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="YuriPaglierani/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
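The snippet assumes a `load_from_hub` helper and `gym` from the Deep RL course setup; a minimal sketch of that helper (hedged -- the course version may differ slightly):
```python
import pickle

import gymnasium as gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    """Download the pickled Q-learning model from the Hub and unpickle it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```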
|
mtc/mistralai-Mistral-7B-v0.1-7b-xsum-with-explanation-75-3-epoch-4bit-full-qlora-4bit
|
mtc
| 2024-02-02T08:55:06Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2024-02-02T08:54:28Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
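As a placeholder for the missing code, a minimal sketch of loading this QLoRA adapter on top of the base model (standard PEFT loading):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(
    base, "mtc/mistralai-Mistral-7B-v0.1-7b-xsum-with-explanation-75-3-epoch-4bit-full-qlora-4bit"
)
```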
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
mtc/mistralai-Mistral-7B-v0.1-7b-xnli-with-explanation-100-5-epoch-qlora-4bit
|
mtc
| 2024-02-02T08:50:57Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2024-02-02T08:50:18Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
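As a placeholder for the missing code, a sketch that loads the base model in 4-bit (matching the adapter's QLoRA setup; the quantization settings are illustrative) before attaching the adapter:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(
    base, "mtc/mistralai-Mistral-7B-v0.1-7b-xnli-with-explanation-100-5-epoch-qlora-4bit"
)
```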
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
mtc/mistralai-Mistral-7B-v0.1-7b-xnli-with-explanation-100-5-epoch-lora-full
|
mtc
| 2024-02-02T08:40:12Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2024-02-01T22:00:30Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
heldJan/llama-2-7b-froozen_mvit
|
heldJan
| 2024-02-02T08:36:58Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"VideoChatGPT",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"region:us"
] |
text-generation
| 2024-02-01T18:11:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DanielClough/Candle_Mistral-7B-Instruct-v0.2
|
DanielClough
| 2024-02-02T08:30:03Z | 13 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"text-generation",
"en",
"dataset:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T07:54:38Z |
---
datasets:
- mistralai/Mistral-7B-Instruct-v0.2
language:
- en
pipeline_tag: text-generation
license: apache-2.0
---
This repo includes `.gguf` files built for HuggingFace/Candle.
They will not work with `llama.cpp`.
Refer to the [original repo](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) for more details.
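As a quick way to fetch one of the `.gguf` files programmatically, here is a minimal sketch using `huggingface_hub`; the exact filename is an assumption, so check the repo's file list first:
```python
# Minimal sketch: download a quantized file from this repo with huggingface_hub.
# The filename below is hypothetical -- pick a real one from the "Files" tab.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="DanielClough/Candle_Mistral-7B-Instruct-v0.2",
    filename="mistral-7b-instruct-v0.2.q4k.gguf",  # hypothetical filename
)
print(path)  # local path to pass to a Candle-based runner
```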
|
mtc/mistralai-Mistral-7B-v0.1-7b-xsum-with-explanation-3-epoch-full-dataset-lora-full
|
mtc
| 2024-02-02T08:24:06Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2024-02-02T08:23:41Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
loony-huggingface/twitter_text_classification_model
|
loony-huggingface
| 2024-02-02T08:22:29Z | 46 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-02T07:47:27Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: loony-huggingface/twitter_text_classification_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# loony-huggingface/twitter_text_classification_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1439
- Validation Loss: 0.2823
- Train Accuracy: 0.9120
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 17440, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
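For readers who want to reproduce this setup, the serialized optimizer config above corresponds roughly to the following Keras code (a sketch reconstructed from the config, not taken from the original training script):
```python
# Sketch: rebuild the optimizer described by the serialized config above.
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-5,
    decay_steps=17440,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-8,
)
```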
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.8252 | 0.5658 | 0.7919 | 0 |
| 0.3449 | 0.3283 | 0.8901 | 1 |
| 0.1439 | 0.2823 | 0.9120 | 2 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.1
|
birgermoell/Munin-NeuralBeagle-NorskGPT
|
birgermoell
| 2024-02-02T08:19:45Z | 31 | 4 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"bineric/NorskGPT-Mistral-7b",
"base_model:bineric/NorskGPT-Mistral-7b",
"base_model:finetune:bineric/NorskGPT-Mistral-7b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T08:15:53Z |
---
tags:
- merge
- mergekit
- lazymergekit
- bineric/NorskGPT-Mistral-7b
base_model:
- bineric/NorskGPT-Mistral-7b
---
# Munin-NeuralBeagle-NorskGPT
Munin-NeuralBeagle-NorskGPT is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [bineric/NorskGPT-Mistral-7b](https://huggingface.co/bineric/NorskGPT-Mistral-7b)
## 🧩 Configuration
```yaml
models:
  - model: RJuro/munin-neuralbeagle-7b
    # No parameters necessary for base model
  - model: bineric/NorskGPT-Mistral-7b
    parameters:
      density: 0.53
      weight: 0.6
merge_method: dare_ties
base_model: RJuro/munin-neuralbeagle-7b
parameters:
  int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "birgermoell/Munin-NeuralBeagle-NorskGPT"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
KatyTheCutie/EstopianMaid-13B-GGUF
|
KatyTheCutie
| 2024-02-02T08:17:33Z | 3,671 | 31 |
transformers
|
[
"transformers",
"gguf",
"roleplay",
"text-generation-inference",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-22T05:17:16Z |
---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- roleplay
- text-generation-inference
---
MORE GGUF SIZES: https://huggingface.co/TheBloke/EstopianMaid-13B-GGUF

Based on feedback, EstopianMaid can:
- Stick closely to the character card.
- Maintain coherency in settings with multiple characters.
- Create new scenarios.
Recommended settings:
- SillyTavern Default Preset.
- Temperature: 0.7
- Min-P: 0.3
- Amount to Gen: 256
- Top P: 1
- Repetition penalty: 1.10
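For use outside SillyTavern, the same sampler settings can be applied with, for example, `llama-cpp-python`. This is a hedged sketch, not part of the original card; the model filename is hypothetical, and `min_p` requires a recent llama-cpp-python release:
```python
# Sketch: run a GGUF quant of EstopianMaid with the recommended samplers.
# Assumes llama-cpp-python is installed; the filename below is hypothetical.
from llama_cpp import Llama

llm = Llama(model_path="EstopianMaid-13B.Q4_K_M.gguf", n_ctx=4096)
out = llm(
    "Your character card and chat history go here...",
    max_tokens=256,       # Amount to Gen
    temperature=0.7,
    top_p=1.0,
    min_p=0.3,
    repeat_penalty=1.10,  # Repetition penalty
)
print(out["choices"][0]["text"])
```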
Models used:
- BlueNipples/TimeCrystal-l2-13B
- cgato/Thespis-13b-DPO-v0.7
- KoboldAI/LLaMA2-13B-Estopia
- NeverSleep/Noromaid-13B-0.4-DPO
- Doctor-Shotgun/cat-v1.0-13b
Feedback is always appreciated!
Thank you to KoboldAI for the use of their MergeBox, and to Caitlyn G. for her support and feedback.
|
cekal/mistral-mm
|
cekal
| 2024-02-02T08:10:49Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:152334H/miqu-1-70b-sf",
"base_model:adapter:152334H/miqu-1-70b-sf",
"region:us"
] | null | 2024-02-02T07:40:49Z |
---
library_name: peft
base_model: 152334H/miqu-1-70b-sf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2
|
NelsonwonHF/distilbert-base-uncased-finetuned-emotion
|
NelsonwonHF
| 2024-02-02T08:03:59Z | 92 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-02T02:50:53Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.3126
- eval_accuracy: 0.9075
- eval_f1: 0.9069
- eval_runtime: 67.2772
- eval_samples_per_second: 29.728
- eval_steps_per_second: 0.476
- epoch: 1.0
- step: 250
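As a minimal inference sketch (an addition to this card, assuming the checkpoint is public; labels follow the `emotion` dataset):
```python
# Sketch: classify a sentence with this fine-tuned emotion model.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="NelsonwonHF/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see you again!"))
```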
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cpu
- Datasets 2.16.0
- Tokenizers 0.15.0
|
enricai/chat-es-mad
|
enricai
| 2024-02-02T08:00:19Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp",
"base_model:adapter:Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp",
"region:us"
] | null | 2024-01-31T17:17:28Z |
---
library_name: peft
base_model: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
hiteshsom/mistral_finetuned_code
|
hiteshsom
| 2024-02-02T07:55:29Z | 11 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-01-29T10:32:22Z |
---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- generated_from_trainer
model-index:
- name: mistral_finetuned_code
results: []
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_finetuned_code
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
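Expressed in code, that quantization config corresponds roughly to the following (a sketch mirroring the values listed above, assuming a recent `transformers` with `bitsandbytes` installed):
```python
# Sketch: the BitsAndBytesConfig equivalent of the values listed above.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
)
```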
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 50
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.5.0
- Transformers 4.37.1
- Pytorch 2.0.1+cu117
- Datasets 2.16.1
- Tokenizers 0.15.1
|
FounderOfHuggingface/gpt2_gen_lora_r16_dbpedia_14_t300_e5_hacked_new
|
FounderOfHuggingface
| 2024-02-02T07:53:54Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2024-02-02T07:53:51Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
Nhhhhhh/result
|
Nhhhhhh
| 2024-02-02T07:48:16Z | 98 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-02T07:40:06Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: result
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# result
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 600
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Tokenizers 0.15.1
|
EmbeddingStudio/query-parser-falcon-7b-instruct
|
EmbeddingStudio
| 2024-02-02T07:47:33Z | 3 | 1 |
peft
|
[
"peft",
"safetensors",
"falcon",
"search-queries",
"instruct-fine-tuned",
"search-queries-parser",
"zero-shot",
"llm",
"text-generation",
"custom_code",
"en",
"dataset:EmbeddingStudio/query-parsing-instructions-falcon",
"arxiv:1910.09700",
"base_model:tiiuae/falcon-7b-instruct",
"base_model:adapter:tiiuae/falcon-7b-instruct",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-01-14T12:17:09Z |
---
library_name: peft
base_model: tiiuae/falcon-7b-instruct
license: apache-2.0
language:
- en
pipeline_tag: text-generation
datasets:
- EmbeddingStudio/query-parsing-instructions-falcon
tags:
- search-queries
- instruct-fine-tuned
- search-queries-parser
- zero-shot
- llm
- falcon
inference: false
metrics:
- accuracy
- precision
- recall
- f1
---
# Model Card for the Query Parser LLM using Falcon-7B-Instruct
EmbeddingStudio is the [open-source framework](https://github.com/EulerSearch/embedding_studio/tree/main) that allows you to transform a joint "Embedding Model + Vector DB" into a full-cycle search engine: collect clickstream -> improve search experience -> adapt embedding model, and repeat, out of the box.
It is rare for a company to use unstructured search as-is. By searching `brick red houses san francisco area for april`, a user definitely wants to find some houses in San Francisco for a month-long rent in April, and only then, maybe, brick-red ones.
Unfortunately, as of 15 January 2024 there is no embedding model accurate enough for this. So, companies need to mix structured and unstructured search.
The very first step of mixing them is to parse the search query. The usual approaches are:
* Implement a bunch of rules, regexps, or grammar parsers (like the [NLTK grammar parser](https://www.nltk.org/howto/grammar.html)).
* Collect search queries and annotate a dataset for a NER task.
Both take some time, but in the end you get a controllable and very accurate query parser. The EmbeddingStudio team decided to dive into LLM instruct fine-tuning for the `Zero-Shot query parsing` task:
it closes this first gap while a company has no rules and no collected data yet, and may, in the future, even eliminate exhausting rule implementation entirely.
The main idea is to align an LLM to parse short search queries knowing just the company's market and a schema of search filters. Moreover, being oriented toward applied NLP,
we are trying to serve only light-weight LLMs, i.e. `not heavier than 7B parameters`.
## Model Details
### Model Description
This is only [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) aligned to follow instructions like:
```markdown
### System: Master in Query Analysis
### Instruction: Organize queries in JSON, adhere to schema, verify spelling.
#### Category: Logistics and Supply Chain Management
#### Schema: ```[{"Name": "Customer_Ratings", "Representations": [{"Name": "Exact_Rating", "Type": "float", "Examples": [4.5, 3.2, 5.0, "4.5", "Unstructured"]}, {"Name": "Minimum_Rating", "Type": "float", "Examples": [4.0, 3.0, 5.0, "4.5"]}, {"Name": "Star_Rating", "Type": "int", "Examples": [4, 3, 5], "Enum": [1, 2, 3, 4, 5]}]}, {"Name": "Date", "Representations": [{"Name": "Day_Month_Year", "Type": "str", "Examples": ["01.01.2024", "15.06.2023", "31.12.2022", "25.12.2021", "20.07.2024", "15.06.2023"], "Pattern": "dd.mm.YYYY"}, {"Name": "Day_Name", "Type": "str", "Examples": ["Monday", "Wednesday", "Friday", "Thursday", "Monday", "Tuesday"], "Enum": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]}]}, {"Name": "Date_Period", "Representations": [{"Name": "Specific_Period", "Type": "str", "Examples": ["01.01.2024 - 31.01.2024", "01.06.2023 - 30.06.2023", "01.12.2022 - 31.12.2022"], "Pattern": "dd.mm.YYYY - dd.mm.YYYY"}, {"Name": "Month", "Type": "str", "Examples": ["January", "June", "December"], "Enum": ["January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"]}, {"Name": "Quarter", "Type": "str", "Examples": ["Q1", "Q2", "Q3"], "Enum": ["Q1", "Q2", "Q3", "Q4"]}, {"Name": "Season", "Type": "str", "Examples": ["Winter", "Summer", "Autumn"], "Enum": ["Winter", "Spring", "Summer", "Autumn"]}]}, {"Name": "Destination_Country", "Representations": [{"Name": "Country_Name", "Type": "str", "Examples": ["United States", "Germany", "China"]}, {"Name": "Country_Code", "Type": "str", "Examples": ["US", "DE", "CN"]}, {"Name": "Country_Abbreviation", "Type": "str", "Examples": ["USA", "GER", "CHN"]}]}]```
#### Query: Which logistics companies in the US have a perfect 5.0 rating ?
### Response:
[{"Value": "Which logistics companies in the US have a perfect 5.0 rating?", "Name": "Correct"}, {"Name": "Customer_Ratings.Exact_Rating", "Value": 5.0}, {"Name": "Destination_Country.Country_Code", "Value": "US"}]
```
**Important:** Additionally, we are trying to fine-tune the Large Language Model (LLM) to not only parse unstructured search queries but also to correct spelling.
- **Developed by EmbeddingStudio team:**
* Aleksandr Iudaev [[LinkedIn](https://www.linkedin.com/in/alexanderyudaev/)] [[Email](mailto:[email protected])]
* Andrei Kostin [[LinkedIn](https://www.linkedin.com/in/andrey-kostin/)] [[Email](mailto:[email protected])]
* ML Doom [AI Assistant]
- **Funded by EmbeddingStudio team**
- **Model type:** Instruct Fine-Tuned Large Language Model
- **Model task:** Zero-shot search query parsing
- **Language(s) (NLP):** English
- **License:** apache-2.0
- **Finetuned from model:** [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct)
- **!Maximal Length Size:** we used 1024 for fine-tuning; note that this differs from the original model's `max_seq_length = 2048`
- **Tuning Epochs:** 3 for now, but more will follow later.
**Disclaimer:** As a small startup, this direction forms a part of our Minimum Viable Product (MVP). It's more of
an attempt to test the 'product-market fit' than a well-structured scientific endeavor. Once we validate it and raise a round, we will definitely:
* Curating a specific dataset for more precise analysis.
* Exploring various approaches and Large Language Models (LLMs) to identify the most effective solution.
* Publishing a detailed paper to ensure our findings and methodologies can be thoroughly reviewed and verified.
We acknowledge the complexity involved in utilizing Large Language Models, particularly in the context
of `Zero-Shot search query parsing` and `AI Alignment`. Given the intricate nature of this technology, we emphasize the importance of rigorous verification.
Until our work is thoroughly reviewed, we recommend being cautious and critical of the results.
### Model Sources
- **Repository:** inference code for the model will be [here](https://github.com/EulerSearch/embedding_studio/tree/main)
- **Paper:** Work In Progress
- **Demo:** Work In Progress
## Uses
We strongly recommend using this fine-tuned version of [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) only for:
* Zero-shot search query parsing with a provided company market name and filters schema
* Search query spell correction
For any other need the behaviour of the model is unpredictable; please use the [original model](https://huggingface.co/tiiuae/falcon-7b-instruct) or fine-tune your own.
### Instruction format
```markdown
### System: Master in Query Analysis
### Instruction: Organize queries in JSON, adhere to schema, verify spelling.
#### Category: {your_company_category}
#### Schema: ```{filters_schema}```
#### Query: {query}
### Response:
```
The filters schema is a JSON-readable line in the following format (we highly recommend using it):
A list of filters (dict):
* Name - name of the filter (better to be meaningful).
* Representations - list of possible filter formats (dict):
  * Name - name of the representation (better to be meaningful).
  * Type - Python base type (int, float, str, bool).
  * Examples - list of examples.
  * Enum - if a representation is an enumeration, provide a list of possible values; the LLM should map the parsed value into this list.
  * Pattern - if a representation is pattern-like (datetime, regexp, etc.), provide the pattern text in any format.
Example:
```json
[{"Name": "Customer_Ratings", "Representations": [{"Name": "Exact_Rating", "Type": "float", "Examples": [4.5, 3.2, 5.0, "4.5", "Unstructured"]}, {"Name": "Minimum_Rating", "Type": "float", "Examples": [4.0, 3.0, 5.0, "4.5"]}, {"Name": "Star_Rating", "Type": "int", "Examples": [4, 3, 5], "Enum": [1, 2, 3, 4, 5]}]}, {"Name": "Date", "Representations": [{"Name": "Day_Month_Year", "Type": "str", "Examples": ["01.01.2024", "15.06.2023", "31.12.2022", "25.12.2021", "20.07.2024", "15.06.2023"], "Pattern": "dd.mm.YYYY"}, {"Name": "Day_Name", "Type": "str", "Examples": ["Monday", "Wednesday", "Friday", "Thursday", "Monday", "Tuesday"], "Enum": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]}]}, {"Name": "Date_Period", "Representations": [{"Name": "Specific_Period", "Type": "str", "Examples": ["01.01.2024 - 31.01.2024", "01.06.2023 - 30.06.2023", "01.12.2022 - 31.12.2022"], "Pattern": "dd.mm.YYYY - dd.mm.YYYY"}, {"Name": "Month", "Type": "str", "Examples": ["January", "June", "December"], "Enum": ["January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"]}, {"Name": "Quarter", "Type": "str", "Examples": ["Q1", "Q2", "Q3"], "Enum": ["Q1", "Q2", "Q3", "Q4"]}, {"Name": "Season", "Type": "str", "Examples": ["Winter", "Summer", "Autumn"], "Enum": ["Winter", "Spring", "Summer", "Autumn"]}]}, {"Name": "Destination_Country", "Representations": [{"Name": "Country_Name", "Type": "str", "Examples": ["United States", "Germany", "China"]}, {"Name": "Country_Code", "Type": "str", "Examples": ["US", "DE", "CN"]}, {"Name": "Country_Abbreviation", "Type": "str", "Examples": ["USA", "GER", "CHN"]}]}]
```
As a result, the response will be a JSON-readable line in the format:
```json
[{"Value": "Corrected search phrase", "Name": "Correct"}, {"Name": "filter-name.representation", "Value": "some-value"}]
```
Field and representation names will be aligned with the provided schema. Example:
```json
[{"Value": "Which logistics companies in the US have a perfect 5.0 rating?", "Name": "Correct"}, {"Name": "Customer_Ratings.Exact_Rating", "Value": 5.0}, {"Name": "Destination_Country.Country_Code", "Value": "US"}]
```
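A small helper like the following (a sketch, not part of the original card) can split such a response into the corrected query and the parsed filters:
```python
# Sketch: split the model's JSON response into corrected query + filters.
import json

response = '[{"Value": "Which logistics companies in the US have a perfect 5.0 rating?", "Name": "Correct"}, {"Name": "Customer_Ratings.Exact_Rating", "Value": 5.0}, {"Name": "Destination_Country.Country_Code", "Value": "US"}]'
items = json.loads(response)
corrected = next(i["Value"] for i in items if i["Name"] == "Correct")
filters = {i["Name"]: i["Value"] for i in items if i["Name"] != "Correct"}
print(corrected)  # spell-corrected query
print(filters)    # {'Customer_Ratings.Exact_Rating': 5.0, 'Destination_Country.Country_Code': 'US'}
```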
Used for fine-tuning `system` phrases:
```python
[
"Expert at Deconstructing Search Queries",
"Master in Query Analysis",
"Premier Search Query Interpreter",
"Advanced Search Query Decoder",
"Search Query Parsing Genius",
"Search Query Parsing Wizard",
"Unrivaled Query Parsing Mechanism",
"Search Query Parsing Virtuoso",
"Query Parsing Maestro",
"Ace of Search Query Structuring"
]
```
Used for fine-tuning `instruction` phrases:
```python
[
"Convert queries to JSON, align with schema, ensure correct spelling.",
"Analyze and structure queries in JSON, maintain schema, check spelling.",
"Organize queries in JSON, adhere to schema, verify spelling.",
"Decode queries to JSON, follow schema, correct spelling.",
"Parse queries to JSON, match schema, spell correctly.",
"Transform queries to structured JSON, align with schema and spelling.",
"Restructure queries in JSON, comply with schema, accurate spelling.",
"Rearrange queries in JSON, strict schema adherence, maintain spelling.",
"Harmonize queries with JSON schema, ensure spelling accuracy.",
"Efficient JSON conversion of queries, schema compliance, correct spelling."
]
```
### Direct Use
```python
import json
from json import JSONDecodeError

from transformers import AutoTokenizer, AutoModelForCausalLM

INSTRUCTION_TEMPLATE = """
### System: Master in Query Analysis
### Instruction: Organize queries in JSON, adhere to schema, verify spelling.
#### Category: {0}
#### Schema: ```{1}```
#### Query: {2}
### Response:
"""


def parse(
    query: str,
    company_category: str,
    filter_schema: dict,
    model: AutoModelForCausalLM,
    tokenizer: AutoTokenizer,
):
    # Fill the instruction template with the company category, schema and query.
    input_text = INSTRUCTION_TEMPLATE.format(
        company_category,
        json.dumps(filter_schema),
        query,
    )
    input_ids = tokenizer.encode(input_text, return_tensors='pt')

    # Generate with a near-greedy strategy (temperature 0.05), as recommended below.
    output = model.generate(
        input_ids.to('cuda'),
        max_new_tokens=1024,
        do_sample=True,
        temperature=0.05,
        pad_token_id=50256,
    )
    try:
        # Keep only the text after "### Response:" and parse it as JSON.
        decoded = tokenizer.decode(output[0], skip_special_tokens=True)
        parsed = json.loads(decoded.split('### Response:\n')[-1])
    except JSONDecodeError:
        parsed = dict()

    return parsed
```
## Bias, Risks, and Limitations
### Bias
This model was fine-tuned to follow zero-shot query parsing instructions, so all ethical biases are inherited from the original model.
The model was fine-tuned to work with unknown company domains and filter schemas, but it performs better on the company categories seen during training:
Educational Institutions, Job Recruitment Agencies, Banking Services, Investment Services, Insurance Services, Financial Planning and Advisory, Credit Services, Payment Processing, Mortgage and Real Estate Services, Taxation Services, Risk Management and Compliance, Digital and Mobile Banking, Retail Stores (Online and Offline), Automotive Dealerships, Restaurants and Food Delivery Services, Entertainment and Media Platforms, Government Services, Travelers and Consumers, Logistics and Supply Chain Management, Customer Support Services, Market Research Firms, Mobile App Development, Game Development, Cloud Computing Services, Data Analytics and Business Intelligence, Cybersecurity Software, User Interface/User Experience Design, Internet of Things (IoT) Development, Project Management Tools, Version Control Systems, Continuous Integration/Continuous Deployment, Issue Tracking and Bug Reporting, Collaborative Development Environments, Team Communication and Chat Tools, Task and Time Management, Customer Support and Feedback, Cloud-based Development Environments, Image Stock Platforms, Video Hosting and Portals, Social Networks, Professional Social Networks, Dating Apps
### Risks and Limitations
Known limitations:
1. Can add extra spaces or remove spaces: `1-2` -> `1 - 2`.
2. Can add extra words: `5` -> `5 years`.
3. Cannot differentiate between `<`, `>`, `=` and their HTML-escaped versions (`&lt;`, `&gt;`, `&eq;`).
4. Handles abbreviations poorly.
5. Can add an extra `.0` to floats and integers.
6. Can add an extra `0` or remove a `0` from integers with a char postfix: `10M` -> `1m`.
7. Can hallucinate with integers. For a case like `list of positions exactly 7 openings available` the result can be
`{'Name': 'Job_Type.Exact_Match', 'Value': 'Full Time'}`.
8. We fine-tuned this model with a max sequence length of 1024, so for longer inputs the response may not be JSON-readable.
The list will be extended in the future.
### Recommendations
1. We used synthetic data for the first version of this model, so we suggest you test this model carefully on your company's domain, even if it is in the list.
2. Use meaningful names for filters and their representations.
3. Provide examples for each representation.
4. Try to be compact; the model was fine-tuned with a max sequence length of 1024.
5. During generation, use a near-greedy sampling strategy with temperature 0.05.
6. The results will be better if you align your filter schema with the schema format of the training data.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
MODEL_ID = 'EmbeddingStudio/query-parser-falcon-7b-instruct'
```
Initialize tokenizer:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
    MODEL_ID,
    trust_remote_code=True,
    add_prefix_space=True,
    use_fast=False,
)
```
Initialize model:
```python
import torch
from peft import LoraConfig
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
# LoRA configuration used during fine-tuning (not required for plain inference)
peft_config = LoraConfig(
    lora_alpha=16,
    lora_dropout=0.1,
    r=64,
    bias="none",
    task_type="CAUSAL_LM",
)

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

device_map = {"": 0}

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=bnb_config,
    device_map=device_map,
    torch_dtype=torch.float16
)
```
Use for parsing:
```python
import json
from json import JSONDecodeError
INSTRUCTION_TEMPLATE = """
### System: Master in Query Analysis
### Instruction: Organize queries in JSON, adhere to schema, verify spelling.
#### Category: {0}
#### Schema: ```{1}```
#### Query: {2}
### Response:
"""
def parse(
    query: str,
    company_category: str,
    filter_schema: dict,
    model: AutoModelForCausalLM,
    tokenizer: AutoTokenizer
):
    # Build the full prompt: category and schema first, query last
    input_text = INSTRUCTION_TEMPLATE.format(
        company_category,
        json.dumps(filter_schema),
        query
    )
    input_ids = tokenizer.encode(input_text, return_tensors='pt')
    # Generate with near-greedy sampling (low temperature)
    output = model.generate(
        input_ids.to('cuda'),
        max_new_tokens=1024,
        do_sample=True,
        temperature=0.05,
        pad_token_id=50256
    )
    try:
        # Keep only the part generated after the "### Response:" marker
        parsed = json.loads(tokenizer.decode(output[0], skip_special_tokens=True).split('### Response:\n')[-1])
    except JSONDecodeError:
        parsed = dict()
    return parsed
category = 'Logistics and Supply Chain Management'
query = 'Which logistics companies in the US have a perfect 5.0 rating ?'
schema = [{"Name": "Customer_Ratings", "Representations": [{"Name": "Exact_Rating", "Type": "float", "Examples": [4.5, 3.2, 5.0, "4.5", "Unstructured"]}, {"Name": "Minimum_Rating", "Type": "float", "Examples": [4.0, 3.0, 5.0, "4.5"]}, {"Name": "Star_Rating", "Type": "int", "Examples": [4, 3, 5], "Enum": [1, 2, 3, 4, 5]}]}, {"Name": "Date", "Representations": [{"Name": "Day_Month_Year", "Type": "str", "Examples": ["01.01.2024", "15.06.2023", "31.12.2022", "25.12.2021", "20.07.2024", "15.06.2023"], "Pattern": "dd.mm.YYYY"}, {"Name": "Day_Name", "Type": "str", "Examples": ["Monday", "Wednesday", "Friday", "Thursday", "Monday", "Tuesday"], "Enum": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]}]}, {"Name": "Date_Period", "Representations": [{"Name": "Specific_Period", "Type": "str", "Examples": ["01.01.2024 - 31.01.2024", "01.06.2023 - 30.06.2023", "01.12.2022 - 31.12.2022"], "Pattern": "dd.mm.YYYY - dd.mm.YYYY"}, {"Name": "Month", "Type": "str", "Examples": ["January", "June", "December"], "Enum": ["January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"]}, {"Name": "Quarter", "Type": "str", "Examples": ["Q1", "Q2", "Q3"], "Enum": ["Q1", "Q2", "Q3", "Q4"]}, {"Name": "Season", "Type": "str", "Examples": ["Winter", "Summer", "Autumn"], "Enum": ["Winter", "Spring", "Summer", "Autumn"]}]}, {"Name": "Destination_Country", "Representations": [{"Name": "Country_Name", "Type": "str", "Examples": ["United States", "Germany", "China"]}, {"Name": "Country_Code", "Type": "str", "Examples": ["US", "DE", "CN"]}, {"Name": "Country_Abbreviation", "Type": "str", "Examples": ["USA", "GER", "CHN"]}]}]
output = parse(query, category, schema, model, tokenizer)
print(output)
# [out]: [{"Value": "Which logistics companies in the US have a perfect 5.0 rating?", "Name": "Correct"}, {"Name": "Customer_Ratings.Exact_Rating", "Value": 5.0}, {"Name": "Destination_Country.Country_Code", "Value": "US"}]
```
## Training Details
### Training Data
We used synthetically generated query parsing instructions:
* We generated lists of possible filters for 63 customer categories:
* [Raw version of filters dataset](https://huggingface.co/datasets/EmbeddingStudio/synthetic-search-filters-raw)
* [Split by representations](https://huggingface.co/datasets/EmbeddingStudio/synthetic-search-filters)
* Randomly select up to 150 possible combinations of filters (1-3 filters per combination), such that each filter representation appears at most twice.
* For a given category and combination we [generated](https://huggingface.co/datasets/EmbeddingStudio/synthetic-search-queries) with GPT-4 Turbo:
* 2 search queries and their parsed versions with unstructured parts.
* 2 search queries and their parsed versions without unstructured parts.
* Using the filters, queries and parsed versions we prepared [72.5k Falcon-format instructions](https://huggingface.co/datasets/EmbeddingStudio/query-parsing-instructions-falcon).
**Warning:** The EmbeddingStudio team advises you that the generated queries **were not thoroughly curated**; they will be curated once we finish our product-market-fit stage.
#### Principles of train / test splitting
As we are fine-tuning an LLM to follow zero-shot query parsing instructions, we want to test:
* Ability to work well with unseen domain
* Ability to work well with unseen filters
* Ability to work well with unseen queries
For these purposes we:
1. We put 5 categories into the test split, completely separated from train: `Telecommunication Companies, Legal Services, Enterprise Software Development, Artificial Intelligence and Machine Learning, Documentation and Knowledge Sharing`.
2. For each company category appearing in train, we also put aside / removed one filter and the queries related to it.
3. We selected 5% of the remaining queries and put them into test.
#### Filters generation details
We used GPT-4 Turbo to generate several possible filters for 63 company categories. For each filter we also generated several possible representations. For example, the filter `Date` can be represented as `dd/mm/YYYY`, `YYYY-mm-dd`, or in words like `2024 Jan 17`.
#### Queries generation details
We also used GPT-4 Turbo to generate search queries and their parsed versions. The main principles were:
* If the passed schema doesn't contain a suitable filter, do not generate the query itself or a possible filter.
* If the selected representation combination contains an enumeration, we ask GPT-4 Turbo to map values in the search query and the parsed version.
* If the selected representation combination contains a pattern, we ask GPT-4 Turbo to align with that pattern.
#### Instructions generation details
For generating the instructions we used the following ideas:
1. A zero-shot query parser should be schema agnostic. Cases like `snake_case`, `CamelCase`, or `http-headers-like` should not ruin the generation process.
2. A zero-shot query parser should be insensitive to spelling errors.
3. Training instructions should be in the following order:
* Category
* Schema
* Query
This way the LLM can be used efficiently: the embedding of the category -> schema part can be generated once, making inference faster.
We assume that the term `schema agnostic` means something wider: being able to work not only with JSON, but also with HTML, Markdown, YAML, etc. We are working on it.
So, our approach to achieving these abilities was:
1. For each query we generated a version with a spelling mistake.
2. We added to each parsed version an additional field `Correct`, which contains the corrected version of the search query.
3. For each query we randomly selected and used a case style for schema fields and for filter and representation names (see the sketch after this list).
4. For each query we additionally generated two instructions:
* one where we removed one filter from the provided schema and the parsed version;
* one where we removed all related filters from the provided schema and the parsed version.
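A minimal sketch of the case-randomization idea from point 3; the helper names are hypothetical, not the actual data-generation code:
```python
import random

def to_camel(name: str) -> str:
    # "Customer_Ratings" -> "CustomerRatings"
    return "".join(part.capitalize() for part in name.lower().split("_"))

def to_http_header(name: str) -> str:
    # "Customer_Ratings" -> "customer-ratings"
    return "-".join(part.lower() for part in name.split("_"))

CASE_STYLES = [str.lower, to_camel, to_http_header]

def randomize_case(filter_name: str) -> str:
    # One case style is picked per training instruction, so the parser
    # cannot overfit to a single schema naming convention.
    return random.choice(CASE_STYLES)(filter_name)
```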
**Warning:** The EmbeddingStudio team asks you to curate the datasets precisely on your own.
### Training Procedure
1. Mixed precision regime (bfloat16)
2. Supervised fine-tuning
3. Three epochs with a cosine scheduler
All details are in [Training Hyperparameters](#training-hyperparameters).
#### Preprocessing [optional]
The preprocessing steps are not detailed in the provided code. Typically, preprocessing involves tokenization, normalization, data augmentation, and handling of special tokens. In this training setup, the tokenizer was configured with `add_prefix_space=True` and `use_fast=False`, which might indicate special considerations for tokenizing certain languages or text formats.
#### Training Hyperparameters
| Hyperparameter | Value | Description |
|--------------------------------------|------------------------------|-------------------------------------------------------|
| **Training Regime** | Mixed Precision (bfloat16) | Utilizes bfloat16 for efficient memory usage and training speed. |
| **Model Configuration** | Causal Language Model | Incorporates LoRA (Low-Rank Adaptation) for training efficiency. |
| **Quantization Configuration** | Bits and Bytes (BnB) | Uses settings like `load_in_4bit` and `bnb_4bit_quant_type` for model quantization. |
| **Training Environment** | CUDA-enabled Device | Indicates GPU acceleration for training. |
| **Learning Rate** | 2e-4 | Determines the step size at each iteration while moving toward a minimum of a loss function. |
| **Weight Decay** | 0.001 | Helps in regularizing and preventing overfitting. |
| **Warmup Ratio** | 0.03 | Fraction of total training steps used for the learning rate warmup. |
| **Optimizer** | Paged AdamW (32-bit) | Optimizes the training process with efficient memory usage. |
| **Gradient Accumulation Steps** | 2 | Reduces memory consumption and allows for larger effective batch sizes. |
| **Max Grad Norm** | 0.3 | Maximum norm for the gradients. |
| **LR Scheduler Type** | Cosine | Specifies the learning rate schedule. |
| **PEFT Configurations** | LoraConfig | Details like `lora_alpha`, `lora_dropout`, and `r` for LoRA adaptations. |
| **Training Dataset Segmentation** | Train and Test Sets | Segmentation of the dataset for training and evaluation. |
| **Max Sequence Length** | 1024 | Maximum length of the input sequences. |
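Putting the table together, here is a minimal sketch of how these hyperparameters could be wired up with TRL's `SFTTrainer`. This is an illustrative reconstruction (the output directory and dataset variables are assumptions), not the original training script:
```python
from transformers import TrainingArguments
from trl import SFTTrainer

training_args = TrainingArguments(
    output_dir="./query-parser-falcon-7b-instruct",  # assumed path
    learning_rate=2e-4,
    weight_decay=0.001,
    warmup_ratio=0.03,
    optim="paged_adamw_32bit",
    gradient_accumulation_steps=2,
    max_grad_norm=0.3,
    lr_scheduler_type="cosine",
    num_train_epochs=3,
    bf16=True,  # mixed precision (bfloat16)
)

trainer = SFTTrainer(
    model=model,                  # quantized base model from the snippets above
    args=training_args,
    train_dataset=train_dataset,  # instruction dataset, assumed to be loaded
    peft_config=peft_config,      # LoraConfig from the snippets above
    dataset_text_field="text",    # assumed field name
    max_seq_length=1024,
    tokenizer=tokenizer,
)
trainer.train()
```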
### Testing Data, Factors & Metrics
#### Testing Data
All information is provided in [Training Data](#training-data) section.
### Factors Influencing Falcon-7B-Instruct Model Performance
#### 1. Company Category and Domain Knowledge
- Performance may vary based on the specific company category or domain.
- Enhanced performance in domains specifically trained on, such as Educational Institutions, Banking Services, Logistics, etc.
#### 2. Filter Schema Adaptability
- Ability to adapt to various filter schemas.
- Performance in parsing and organizing queries according to different schemas.
#### 3. Handling of Spelling and Syntax Errors
- Robustness in handling spelling errors and syntax variations in queries.
#### 4. Representation and Type Handling
- Capability to handle different data representations (e.g., date formats, enumerations, patterns).
- Accurate processing of various base types (int, float, str, bool).
#### 5. Length and Complexity of Queries
- Impact of the length and complexity of queries on performance.
- Maximum sequence length of 1024 could pose limitations for longer or complex queries.
#### 6. Bias and Ethical Considerations
- Inherited ethical biases from the original model.
- Importance of understanding these biases in different contexts.
#### 7. Limitations in Fine-Tuning and Data Curation
- Limitations such as extra spaces, handling of abbreviations, etc.
- Influence of the extent of training data curation on model accuracy.
#### 8. Specific Use Cases
- Recommended primarily for zero-shot search query parsing and search query spell correction.
- Performance in other use cases might be unpredictable.
#### 9. Training Data Quality and Diversity
- Quality and diversity of synthetic training data.
- Influence on the model's effectiveness across different scenarios.
##### Testing Procedure
Results of the testing procedure are provided as JSON [here](https://huggingface.co/EmbeddingStudio/query-parser-falcon-7b-instruct/blob/main/falcon-7b-instruct-test.json).
This is a list of items, where each item contains:
1. Predicted parsed query
2. Real parsed query
3. Category
#### Metrics
#### Metric Overview
Our zero-shot search query parsing model is designed to extract structured information from unstructured search queries with high precision. The primary metric for evaluating our model's performance is the True Positive (TP) rate, which is assessed using a specialized token-wise Levenshtein distance. This approach is aligned with our goal to achieve semantic accuracy in parsing user queries.
#### True Positives (TP)
- **Definition**: A True Positive in our model is counted when the model correctly identifies both the 'Name' and 'Value' in a query, matching the expected results.
- **Measurement Method**: The TP rate is quantified using the `levenshtein_tokenwise` function, which calculates the distance between predicted and actual key-value pairs at a token level. We consider a Levenshtein distance of 0.25 or less as acceptable for matching.
- **Importance**:
- **Token-Level Accuracy**: We use token-wise accuracy over traditional character-level Levenshtein distance, which can be overly strict, especially for minor spelling variations. Our token-wise approach prioritizes semantic accuracy.
- **Relevance to Search Queries**: Accuracy at the token level is more indicative of the model's ability to understand and parse user intent in search queries.
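To make the matching rule concrete, below is a minimal sketch of how such a token-wise check could look; the function names, whitespace tokenization and normalization are illustrative assumptions, not the exact evaluation code:
```python
def levenshtein(a: list, b: list) -> int:
    # Classic dynamic-programming edit distance, applied to token sequences.
    prev = list(range(len(b) + 1))
    for i, token_a in enumerate(a, 1):
        curr = [i]
        for j, token_b in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                          # deletion
                            curr[j - 1] + 1,                      # insertion
                            prev[j - 1] + (token_a != token_b)))  # substitution
        prev = curr
    return prev[-1]

def levenshtein_tokenwise(predicted, actual) -> float:
    # Normalized distance over whitespace tokens; 0.0 means identical.
    pred, act = str(predicted).lower().split(), str(actual).lower().split()
    return levenshtein(pred, act) / max(len(pred), len(act), 1)

def is_true_positive(pred: dict, real: dict, threshold: float = 0.25) -> bool:
    # A pair counts as a TP when both 'Name' and 'Value' match within the threshold.
    return (levenshtein_tokenwise(pred.get("Name", ""), real.get("Name", "")) <= threshold
            and levenshtein_tokenwise(pred.get("Value", ""), real.get("Value", "")) <= threshold)
```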
#### Generation Strategy
- **Approach**: The model generates responses based on input queries with a maximum token length set to 1000, employing a sampling strategy (do_sample=True), and a low temperature setting of 0.05. This controlled randomness in generation ensures a variety of accurate and relevant responses.
- **Impact on TP**:
- The low temperature setting directly influences the TP rate by reducing the randomness in the model's predictions. With a lower temperature, the model is more likely to choose the most probable word in a given context, leading to more accurate and consistent outputs. This is particularly crucial in search query parsing, where understanding and interpreting user input with high precision is vital.
#### Additional Metrics
- **False Positives (FP) and False Negatives (FN)**: These metrics are monitored to provide a comprehensive view of the model's predictive capabilities.
- **Precision, Recall, F1 Score, Accuracy**: These standard metrics complement our TP-focused assessment, providing a rounded picture of the model's performance in various aspects.
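Assuming accuracy here is computed Jaccard-style from the same counts (which is consistent with the reported numbers, e.g. an aggregate F1 of 0.73 alongside an accuracy of 0.59), the aggregation could look like this sketch:
```python
def summarize(tp: int, fp: int, fn: int) -> dict:
    # Standard aggregation from TP/FP/FN counts; the accuracy formula is an assumption.
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = tp / (tp + fp + fn) if tp + fp + fn else 0.0  # Jaccard-style
    return {"precision": precision, "recall": recall, "f1": f1, "accuracy": accuracy}
```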
#### Motivation for Metric Choice
- **Alignment with User Intent**: Focusing on token-wise accuracy ensures the model's performance closely mirrors the structure and intent typical in search queries.
- **Robustness Against Query Variations**: This metric approach makes the model adaptable to the varied formulations of real-world search queries.
- **Balancing Precision and Recall**: Our method aims to balance the model's ability not to miss relevant key-value pairs (high recall) while not over-identifying irrelevant ones (high precision).
##### Total metrics
| Category | Recall | Precision | F1 | Accuracy |
| ------------------------------------------------ | ------ | --------- | ----- | -------- |
| Telecommunication Companies [+] | 0.70 | 0.67 | 0.68 | 0.52 |
| Legal Services [+] | 0.80 | 0.74 | 0.77 | 0.63 |
| Enterprise Software Development [+] | 0.78 | 0.71 | 0.74 | 0.59 |
| Artificial Intelligence and Machine Learning [+] | 0.77 | 0.78 | 0.78 | 0.63 |
| Documentation and Knowledge Sharing [+] | 0.68 | 0.65 | 0.66 | 0.50 |
| Educational Institutions | 0.55 | 0.51 | 0.53 | 0.36 |
| Job Recruitment Agencies | 0.58 | 0.51 | 0.54 | 0.37 |
| Banking Services | 0.73 | 0.81 | 0.76 | 0.62 |
| Investment Services | 0.50 | 0.50 | 0.50 | 0.33 |
| Insurance Services | 0.77 | 0.77 | 0.77 | 0.62 |
| Financial Planning and Advisory | 0.65 | 0.67 | 0.66 | 0.49 |
| Credit Services | 0.60 | 0.65 | 0.63 | 0.45 |
| Payment Processing | 0.79 | 0.74 | 0.76 | 0.62 |
| Mortgage and Real Estate Services | 1.00 | 1.00 | 1.00 | 1.00 |
| Taxation Services | 0.52 | 0.57 | 0.54 | 0.37 |
| Risk Management and Compliance | 1.00 | 0.95 | 0.98 | 0.95 |
| Digital and Mobile Banking | 0.72 | 0.71 | 0.71 | 0.55 |
| Retail Stores (Online and Offline) | 0.96 | 0.87 | 0.92 | 0.85 |
| Automotive Dealerships | 0.52 | 0.53 | 0.53 | 0.36 |
| Restaurants and Food Delivery Services | 0.76 | 0.77 | 0.76 | 0.62 |
| Entertainment and Media Platforms | 0.80 | 0.84 | 0.82 | 0.70 |
| Government Services | 0.58 | 0.65 | 0.61 | 0.44 |
| Travelers and Consumers | 0.89 | 0.89 | 0.89 | 0.80 |
| Logistics and Supply Chain Management | 0.56 | 0.59 | 0.58 | 0.41 |
| Customer Support Services | 0.60 | 0.54 | 0.57 | 0.40 |
| Market Research Firms | 0.52 | 0.49 | 0.51 | 0.34 |
| Mobile App Development | 0.81 | 0.79 | 0.80 | 0.67 |
| Game Development | 0.94 | 0.94 | 0.94 | 0.88 |
| Cloud Computing Services | 0.64 | 0.62 | 0.63 | 0.46 |
| Data Analytics and Business Intelligence | 0.63 | 0.61 | 0.62 | 0.45 |
| Cybersecurity Software | 0.54 | 0.59 | 0.57 | 0.39 |
| User Interface/User Experience Design | 0.63 | 0.64 | 0.63 | 0.46 |
| Internet of Things (IoT) Development | 0.89 | 0.71 | 0.79 | 0.65 |
| Project Management Tools | 0.80 | 0.83 | 0.81 | 0.69 |
| Version Control Systems | 0.77 | 0.73 | 0.75 | 0.60 |
| Continuous Integration/Continuous Deployment | 0.85 | 0.83 | 0.84 | 0.72 |
| Issue Tracking and Bug Reporting | 0.64 | 0.62 | 0.63 | 0.46 |
| Collaborative Development Environments | 0.68 | 0.67 | 0.68 | 0.51 |
| Team Communication and Chat Tools | 0.94 | 0.91 | 0.93 | 0.87 |
| Task and Time Management | 0.78 | 0.78 | 0.78 | 0.64 |
| Customer Support and Feedback | 0.88 | 0.82 | 0.85 | 0.74 |
| Cloud-based Development Environments | 0.81 | 0.81 | 0.81 | 0.68 |
| Image Stock Platforms | 0.88 | 0.85 | 0.87 | 0.76 |
| Video Hosting and Portals | 0.86 | 0.88 | 0.87 | 0.77 |
| Social Networks | 0.60 | 0.57 | 0.59 | 0.41 |
| Professional Social Networks | 0.68 | 0.69 | 0.68 | 0.52 |
| Dating Apps | 0.90 | 0.90 | 0.90 | 0.82 |
| Aggregate | 0.73 | 0.72 | 0.73 | 0.59 |
##### Unseen domains metrics
| Category | Recall | Precision | F1 | Accuracy |
| ------------------------------------------------ | ------ | --------- | ----- | -------- |
| Telecommunication Companies [+] | 0.70 | 0.67 | 0.68 | 0.52 |
| Legal Services [+] | 0.80 | 0.74 | 0.77 | 0.63 |
| Enterprise Software Development [+] | 0.78 | 0.71 | 0.74 | 0.59 |
| Artificial Intelligence and Machine Learning [+] | 0.77 | 0.78 | 0.78 | 0.63 |
| Documentation and Knowledge Sharing [+] | 0.68 | 0.65 | 0.66 | 0.50 |
| Aggregate | 0.75 | 0.71 | 0.73 | 0.57 |
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** NVIDIA Tesla V100
- **Hours used:** 72
- **Cloud Provider:** Google Cloud
- **Compute Region:** us-west-1
- **Carbon Emitted:** 6.48
## Technical Specifications
### Model Architecture and Objective
* Base model: [Falcon-7b-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct).
* Quantization Configuration: Uses settings like `load_in_4bit` and `bnb_4bit_quant_type` for model quantization.
### Compute Infrastructure
[To be added]
#### Hardware
[To be added]
#### Software
* Python 3.9+
* CUDA 11.7.1
* NVIDIA [Compatible Drivers](https://www.nvidia.com/download/find.aspx)
* Torch 2.0.0
## More Information / About us
EmbeddingStudio is an innovative open-source framework designed to seamlessly convert a combined
"Embedding Model + Vector DB" into a comprehensive search engine. With built-in functionalities for
clickstream collection, continuous improvement of search experiences, and automatic adaptation of
the embedding model, it offers an out-of-the-box solution for a full-cycle search engine.

### Features
1. Turn your vector database into a full-cycle search engine
2. Collect user feedback like clickstream
3. (*) Improve search experience on-the-fly without frustrating wait times
4. (*) Monitor your search quality
5. Improve your embedding model through an iterative metric fine-tuning procedure
6. (*) Use the new version of the embedding model for inference
(*) - features in development
EmbeddingStudio is highly customizable, so you can bring your own:
1. Data source
2. Vector database
3. Clickstream database
4. Embedding model
For more details visit [GitHub Repo](https://github.com/EulerSearch/embedding_studio/tree/main).
## Model Card Authors and Contact
* Aleksandr Iudaev [[LinkedIn](https://www.linkedin.com/in/alexanderyudaev/)] [[Email](mailto:[email protected])]
* Andrei Kostin [[LinkedIn](https://www.linkedin.com/in/andrey-kostin/)] [[Email](mailto:[email protected])]
* ML Doom [AI Assistant]
### Framework versions
- PEFT 0.5.0
- Datasets 2.16.1
- BitsAndBytes 0.41.0
- PyTorch 2.0.0
- Transformers 4.36.2
- TRL 0.7.7
|
rhplus0831/maid-yuzu-v3-alter
|
rhplus0831
| 2024-02-02T07:47:02Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
"base_model:merge:NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO",
"base_model:smelborp/MixtralOrochi8x7B",
"base_model:merge:smelborp/MixtralOrochi8x7B",
"base_model:ycros/BagelMIsteryTour-v2-8x7B",
"base_model:merge:ycros/BagelMIsteryTour-v2-8x7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T03:25:41Z |
---
base_model:
- NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO
- ycros/BagelMIsteryTour-v2-8x7B
- smelborp/MixtralOrochi8x7B
tags:
- mergekit
- merge
---
# maid-yuzu-v3-alter
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
This model was created because I wanted to know how the density and weight values of the dare_ties method affect the base model. (In DARE-TIES, `density` is the fraction of delta parameters kept after random pruning, and `weight` scales each merged model's contribution.)
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [smelborp/MixtralOrochi8x7B](https://huggingface.co/smelborp/MixtralOrochi8x7B) as a base.
### Models Merged
The following models were included in the merge:
* [NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO)
* [ycros/BagelMIsteryTour-v2-8x7B](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model:
  model:
    path: smelborp/MixtralOrochi8x7B
dtype: bfloat16
merge_method: dare_ties
slices:
- sources:
  - layer_range: [0, 32]
    model:
      model:
        path: ycros/BagelMIsteryTour-v2-8x7B
    parameters:
      density: 0.6
      weight: 0.5
  - layer_range: [0, 32]
    model:
      model:
        path: NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO
    parameters:
      density: 0.4
      weight: 0.25
  - layer_range: [0, 32]
    model:
      model:
        path: smelborp/MixtralOrochi8x7B
```
|
PierreCounathe/Reinforce-CartPole-v1
|
PierreCounathe
| 2024-02-02T07:37:31Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-02T07:37:22Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
skylord/setfit-bge-small-v1.5-sst2-8-shot-talk2loop
|
skylord
| 2024-02-02T07:34:39Z | 52 | 0 |
setfit
|
[
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:BAAI/bge-small-en-v1.5",
"base_model:finetune:BAAI/bge-small-en-v1.5",
"model-index",
"region:us"
] |
text-classification
| 2024-02-02T07:34:22Z |
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: There is a man here forced us the girls in the house to have sex with him.
He took videos of us and now he is asking for money. Can someone help us?
- text: I work in textile factory. My boss, he not nice. He do things I don't like.
He make job not good, and my friends stop talk to me. I need help
- text: In Bahay Toro QC even though they did not wear face masks there was not
much news of COVID there were those who felt symptoms but self-quarantined and
there were also those who died who were told that they were COVID even though
they were not. Only a few were reported dead in the area due to COVID less than
five. During the pandemic the Barangay had curfew social distancing facemasks
and alcohol. We received SAP canned rice alcohol and facemasks and money. If
the number of COVID cases increases the barangay is not ready and when it increases that
is just the time that they will be stricter. All of us in our family were able
to be vaccinated and had booster shots apart from my younger brother. When it
tightens again and there is a pandemic unemployment and source of income will
be a test. Focus more on providing immediate assistance in the midst of a pandemic
- text: There is a child here who will be married soon. Please send help urgently.
She is only 13. It is not the first time he has done this.
- text: Drenage problem here in lilanda
pipeline_tag: text-classification
inference: true
base_model: BAAI/bge-small-en-v1.5
model-index:
- name: SetFit with BAAI/bge-small-en-v1.5
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.9827586206896551
name: Accuracy
---
# SetFit with BAAI/bge-small-en-v1.5
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
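For reference, here is a minimal sketch of what this two-stage few-shot training looks like with the `setfit` library; the toy dataset below is illustrative, not the data used for this model:
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Toy few-shot dataset; in practice this model used 8 examples per class
train_ds = Dataset.from_dict({
    "text": ["I want to volunteer", "In this community alcohol abuse is rampant."],
    "label": ["sensitive", "other"],
})

model = SetFitModel.from_pretrained("BAAI/bge-small-en-v1.5")
args = TrainingArguments(batch_size=32, num_epochs=10)

trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()  # stage 1: contrastive fine-tuning; stage 2: classification head
```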
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:----------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| sensitive | <ul><li>'Im Amie Taylaran from Pan-ay Clarin from Solo Parent Organization grateful and excited to receive the help you are giving.'</li><li>'I want to volunteer'</li><li>'There is now a growing popular street Pennsylvania street in the annex Phase 3 of Greenland Executive Village for bikers walkers joggers every morning when the weather is fair. I presume they are groups of retirees matrons sports enthusiasts an even dance exercisers. They all wear face masks for health protection against COVID-19 infection. My concern is this: face masks are just thrown away after use when these fitness buffs are done with their morning binges. Face masks thrown on the pavement of the street the sidewalks and the grass field. Health fitness aficionados they all are but careless with the proper disposal of their face masks.'</li></ul> |
| other | <ul><li>'There is a man here forced us the girls in the house to have sex with him. He took videos of us and now he is asking for money. Can someone help us?'</li><li>'In this community alcohol abuse is rampant. The men go out drinking and come home and beat their wives. They are getting seriously injured.'</li><li>"I find myself in a very challenging situation - I've experienced sexual abuse at work. If anyone has gone through something similar, I would appreciate your guidance and support. It's tough, but we're stronger together."</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.9828 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("skylord/setfit-bge-small-v1.5-sst2-8-shot-talk2loop")
# Run inference
preds = model("Drenage problem here in lilanda")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 4 | 38.0 | 171 |
| Label | Training Sample Count |
|:----------|:----------------------|
| sensitive | 8 |
| other | 8 |
### Training Hyperparameters
- batch_size: (32, 32)
- num_epochs: (10, 10)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-----:|:----:|:-------------:|:---------------:|
| 0.2 | 1 | 0.1988 | - |
| 10.0 | 50 | 0.019 | - |
### Framework Versions
- Python: 3.10.11
- SetFit: 1.0.3
- Sentence Transformers: 2.3.1
- Transformers: 4.37.2
- PyTorch: 2.2.0+cu121
- Datasets: 2.16.1
- Tokenizers: 0.15.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
rizla/raccoon-small
|
rizla
| 2024-02-02T07:33:56Z | 51 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"dpo",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:finetune:mistralai/Mixtral-8x7B-Instruct-v0.1",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T06:43:23Z |
---
license: cc-by-nc-4.0
base_model: [mistralai/Mixtral-8x7B-Instruct-v0.1]
tags:
- dpo
---
# rizla been cooking while singing
# This is an experimental model that I made by merging two 2expmixtrals. The mergekitty is a tool that lets me mix and match different models into one big model, keeping all the smarts and skills of the original models. The llama70b is a huge language model that can make words for all kinds of things and ways, based on the GPT-4 thingy.
The merged model has 19 billion parameters and was trained on a 640GB VRAM cluster
## Merge me baby one more time
### Sending this contraption out straight to mergeland, would be hilarious if it gets 1st
|
Webse/google-play-sentiment-analysis
|
Webse
| 2024-02-02T07:22:38Z | 96 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:neuralmind/bert-base-portuguese-cased",
"base_model:finetune:neuralmind/bert-base-portuguese-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-02T07:22:00Z |
---
license: mit
base_model: neuralmind/bert-base-portuguese-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: google-play-sentiment-analysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# google-play-sentiment-analysis
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3530
- Accuracy: 0.461
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 1.3409 | 0.386 |
| No log | 2.0 | 250 | 1.2982 | 0.452 |
| No log | 3.0 | 375 | 1.3530 | 0.461 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
jumtul/LDCC-Hyeogi.03
|
jumtul
| 2024-02-02T07:20:25Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:LDCC/LDCC-SOLAR-10.7B",
"base_model:merge:LDCC/LDCC-SOLAR-10.7B",
"base_model:hyeogi/SOLAR-10.7B-dpo-v1",
"base_model:merge:hyeogi/SOLAR-10.7B-dpo-v1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T07:14:12Z |
---
base_model:
- hyeogi/SOLAR-10.7B-dpo-v1
- LDCC/LDCC-SOLAR-10.7B
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
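For context, SLERP interpolates each pair of weight tensors along the great-circle arc between them instead of averaging linearly; for interpolation factor t (the per-layer `t` values in the configuration below):
```latex
\theta = \arccos\!\left(\frac{p \cdot q}{\lVert p \rVert \, \lVert q \rVert}\right), \qquad
\mathrm{slerp}(p, q; t) = \frac{\sin\left((1 - t)\,\theta\right)}{\sin\theta}\, p + \frac{\sin(t\,\theta)}{\sin\theta}\, q
```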
### Models Merged
The following models were included in the merge:
* [hyeogi/SOLAR-10.7B-dpo-v1](https://huggingface.co/hyeogi/SOLAR-10.7B-dpo-v1)
* [LDCC/LDCC-SOLAR-10.7B](https://huggingface.co/LDCC/LDCC-SOLAR-10.7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
  - model: LDCC/LDCC-SOLAR-10.7B
    layer_range: [0, 48]
  - model: hyeogi/SOLAR-10.7B-dpo-v1
    layer_range: [0, 48]
merge_method: slerp
tokenizer_source: base
base_model: LDCC/LDCC-SOLAR-10.7B
embed_slerp: true
parameters:
  t:
  - filter: self_attn
    value: [0, 0.5, 0.3, 0.7, 1]
  - filter: mlp
    value: [1, 0.5, 0.7, 0.3, 0]
  - value: 0.5
dtype: bfloat16
```
|
amaandhada/minichat-finetuned-working_final
|
amaandhada
| 2024-02-02T07:15:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-02T07:15:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
blai2/peft_adapter_demo
|
blai2
| 2024-02-02T07:14:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-02T07:14:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dlibf/zephyr-7b-sft-neft-5
|
dlibf
| 2024-02-02T07:11:52Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"dataset:HuggingFaceH4/ultrachat_200k",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T03:51:20Z |
---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- alignment-handbook
- generated_from_trainer
- trl
- sft
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrachat_200k
model-index:
- name: zephyr-7b-sft-neft-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-sft-neft-5
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the HuggingFaceH4/ultrachat_200k dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9296
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8992 | 1.0 | 1090 | 0.9296 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.1
|
amaandhada/minichat-finetuned-working
|
amaandhada
| 2024-02-02T07:11:04Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:GeneZC/MiniChat-3B",
"base_model:adapter:GeneZC/MiniChat-3B",
"license:apache-2.0",
"region:us"
] | null | 2024-02-02T07:11:01Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: GeneZC/MiniChat-3B
model-index:
- name: outputs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [GeneZC/MiniChat-3B](https://huggingface.co/GeneZC/MiniChat-3B) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.03
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
GlycerinLOL/Pegasus_xsum_samsum
|
GlycerinLOL
| 2024-02-02T06:56:27Z | 12 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"base_model:google/pegasus-xsum",
"base_model:finetune:google/pegasus-xsum",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-02T05:43:52Z |
---
base_model: google/pegasus-xsum
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
- precision
- recall
- f1
model-index:
- name: Pegasus_xsum_samsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: validation
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 0.5072
- name: Precision
type: precision
value: 0.9247
- name: Recall
type: recall
value: 0.9099
- name: F1
type: f1
value: 0.917
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Pegasus_xsum_samsum
This model is a fine-tuned version of [google/pegasus-xsum](https://huggingface.co/google/pegasus-xsum) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4709
- Rouge1: 0.5072
- Rouge2: 0.2631
- Rougel: 0.4243
- Rougelsum: 0.4244
- Gen Len: 19.1479
- Precision: 0.9247
- Recall: 0.9099
- F1: 0.917
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|:---------:|:------:|:------:|
| 1.9542 | 1.0 | 920 | 1.5350 | 0.4928 | 0.2436 | 0.4085 | 0.4086 | 18.5672 | 0.9229 | 0.9074 | 0.9149 |
| 1.6331 | 2.0 | 1841 | 1.4914 | 0.5037 | 0.257 | 0.4202 | 0.4206 | 18.8154 | 0.9246 | 0.9092 | 0.9166 |
| 1.5694 | 3.0 | 2762 | 1.4761 | 0.5071 | 0.259 | 0.4212 | 0.4214 | 19.4487 | 0.9241 | 0.9103 | 0.917 |
| 1.5374 | 4.0 | 3680 | 1.4709 | 0.5072 | 0.2631 | 0.4243 | 0.4244 | 19.1479 | 0.9247 | 0.9099 | 0.917 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.15.0
|
asun17904/glue-qnli-gpt2-kd
|
asun17904
| 2024-02-02T06:45:57Z | 2 | 0 |
pytorch
|
[
"pytorch",
"gpt2",
"en",
"license:mit",
"region:us"
] | null | 2024-02-01T16:17:39Z |
---
language: en
license: mit
library_name: pytorch
---
# Plainly Optimized Network
Dataset: GLUE
Trainer Hyperparameters:
- `lr` = 5e-05
- `per_device_batch_size` = 8
- `gradient_accumulation_steps` = 2
- `weight_decay` = 1e-09
- `seed` = 42
|eval_loss|eval_accuracy|epoch|
|--|--|--|
|15.884|0.837|1.0|
|14.371|0.869|2.0|
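For reference, the hyperparameters listed above map onto a standard ๐ค `TrainingArguments` roughly as follows (a sketch only; the actual training script for this repo is not published, and `output_dir` is a placeholder):

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed in this card.
args = TrainingArguments(
    output_dir="glue-qnli-gpt2-kd",
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=2,
    weight_decay=1e-09,
    seed=42,
)
```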
|
watashiha/Watashiha-Llama-2-13B-Ogiri-sft-neuron
|
watashiha
| 2024-02-02T06:39:21Z | 8 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"ja",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T19:05:34Z |
---
license: llama2
language:
- ja
---
The English document is [here](https://huggingface.co/watashiha/Watashiha-Llama-2-13B-Ogiri-sft-neuron/blob/main/README_en.md)
## Model Overview
This is [Watashiha-Llama-2-13B-Ogiri-sft](https://huggingface.co/watashiha/Watashiha-Llama-2-13B-Ogiri-sft) compiled to run on AWS [inf2 instances](https://aws.amazon.com/jp/ec2/instance-types/inf2/).
Compilation was done with reference to the following article:
https://huggingface.co/docs/optimum-neuron/tutorials/llama2-13b-chatbot
* License: [LLAMA 2 COMMUNITY LICENSE](https://github.com/facebookresearch/llama/blob/main/LICENSE)
## Usage
1. Launch an **inf2.xlarge** instance on AWS EC2.
Downloading the model requires about 50GB, so we recommend setting the storage size to 256GB or more.
Use the following AMI:
**Deep Learning AMI Neuron PyTorch 1.13 (Ubuntu 20.04) 20240102**
2. Run the following command to activate the preinstalled Python environment.
```bash
source /opt/aws_neuron_venv_pytorch/bin/activate
```
3. Install **optimum**.
```bash
pip install optimum[neuronx]
```
4. After completing the steps above, run the following source code.
```python
from optimum.neuron import NeuronModelForCausalLM
from transformers import AutoTokenizer
model_name = "watashiha/Watashiha-Llama-2-13B-Ogiri-sft-neuron"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
model = NeuronModelForCausalLM.from_pretrained(model_name)
odai = "ใใธใทใฃใณใฎใทใงใผใงใขใทในใฟใณใใๆถใใใพใพๆปใฃใฆใใชใๆใฎไธ่จใ"
text = f"""
ไปฅไธใฏใใฟในใฏใ่ชฌๆใใๆ็คบใจใๆ่ใฎใใๅฅๅใฎ็ตใฟๅใใใงใใ่ฆๆฑใ้ฉๅใซๆบใใๅฟ็ญใๆธใใชใใใ

### ๆ็คบ:
ๅฅๅใฎๆใฏๅคงๅๅฉใฎใ้กใงใใใ้กใซๆฒฟใฃใ้ข็ฝใใใฑใ็ๆใใฆใใ ใใใ

### ๅฅๅ:
{odai}

### ๅฟ็ญ:
"""
text = text.lstrip()
token_ids = tokenizer.encode(text, return_tensors="pt")
input_len = token_ids.shape[1]
output_ids = model.generate(
    token_ids,
    max_length=input_len + 64,
    do_sample=True,
    top_p=0.9,
    top_k=50,
    temperature=0.8,
    pad_token_id=tokenizer.pad_token_id,
    eos_token_id=tokenizer.eos_token_id,
)
output = tokenizer.decode(output_ids.tolist()[0], skip_special_tokens=True)
print(output)
"""
ไปฅไธใฏใใฟในใฏใ่ชฌๆใใๆ็คบใจใๆ่ใฎใใๅ
ฅๅใฎ็ตใฟๅใใใงใใ่ฆๆฑใ้ฉๅใซๆบใใๅฟ็ญใๆธใใชใใใ
### ๆ็คบ:
ๅ
ฅๅใฎๆใฏๅคงๅๅฉใฎใ้กใงใใใ้กใซๆฒฟใฃใ้ข็ฝใใใฑใ็ๆใใฆใใ ใใใ
### ๅ
ฅๅ:
ใใธใทใฃใณใฎใทใงใผใงใขใทในใฟใณใใๆถใใใพใพๆปใฃใฆใใชใๆใฎไธ่จใ
### ๅฟ็ญ:
ใใใขใทในใฟใณใใใใชใใชใ๏ผ
"""
```
### Compilation Parameters
#### input_shapes
```
{
"batch_size": 1,
"sequence_length": 1024,
}
```
#### compiler_args
```
{
"num_cores": 2,
"auto_cast_type": 'bf16',
}
```
|
asun17904/glue-qqp-gpt2
|
asun17904
| 2024-02-02T06:33:29Z | 1 | 0 |
pytorch
|
[
"pytorch",
"gpt2",
"en",
"license:mit",
"region:us"
] | null | 2024-02-01T11:42:02Z |
---
language: en
license: mit
library_name: pytorch
---
# Plainly Optimized Network
Dataset: GLUE
Trainer Hyperparameters:
- `lr` = 5e-05
- `per_device_batch_size` = 16
- `gradient_accumulation_steps` = 1
- `weight_decay` = 1e-09
- `seed` = 42
|eval_loss|eval_accuracy|epoch|
|--|--|--|
|14.094|0.867|1.0|
|13.645|0.882|2.0|
|
octnn/a2c-PandaPickAndPlace-v3
|
octnn
| 2024-02-02T06:23:18Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaPickAndPlace-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-02T06:18:51Z |
---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v3
type: PandaPickAndPlace-v3
metrics:
- type: mean_reward
value: -45.00 +/- 15.00
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaPickAndPlace-v3**
This is a trained model of a **A2C** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below follows the usual SB3 Hub convention and should be verified against this repo's files):

```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and restore the agent.
checkpoint = load_from_hub(repo_id="octnn/a2c-PandaPickAndPlace-v3", filename="a2c-PandaPickAndPlace-v3.zip")
model = A2C.load(checkpoint)
```
|
restful3/distilbert-base-uncased-finetuned-emotion
|
restful3
| 2024-02-02T06:21:43Z | 92 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-02T04:27:51Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9345
- name: F1
type: f1
value: 0.9346529848491013
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1508
- Accuracy: 0.9345
- F1: 0.9347
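A minimal inference sketch (not from the original card; assumes the standard ๐ค `transformers` text-classification pipeline):

```python
from transformers import pipeline

# Load the fine-tuned emotion classifier.
classifier = pipeline("text-classification", model="restful3/distilbert-base-uncased-finetuned-emotion")
print(classifier("I am so happy to see you again!"))
```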
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1641 | 1.0 | 250 | 0.1672 | 0.931 | 0.9311 |
| 0.1093 | 2.0 | 500 | 0.1508 | 0.9345 | 0.9347 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu118
- Datasets 2.16.1
- Tokenizers 0.15.1
|
asun17904/glue-qnli-gpt2
|
asun17904
| 2024-02-02T06:18:56Z | 2 | 0 |
pytorch
|
[
"pytorch",
"gpt2",
"en",
"license:mit",
"region:us"
] | null | 2024-02-01T10:35:25Z |
---
language: en
license: mit
library_name: pytorch
---
# Plainly Optimized Network
Dataset: GLUE
Trainer Hyperparameters:
- `lr` = 5e-05
- `per_device_batch_size` = 8
- `gradient_accumulation_steps` = 2
- `weight_decay` = 0.0
- `seed` = 42
|eval_loss|eval_accuracy|epoch|
|--|--|--|
|0.463|0.854|1.0|
|0.441|0.875|2.0|
|
jlbaker361/ddpo-stability-e5
|
jlbaker361
| 2024-02-02T06:14:41Z | 30 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-02T06:13:00Z |
---
{}
---
# DDPO trained model
Hyperparameters:
- num_epochs=5
- train_gradient_accumulation_steps=1
- sample_num_steps=30
- sample_batch_size=16
- train_batch_size=16
- sample_num_batches_per_epoch=32

Based on stabilityai/stable-diffusion-2-base, then trained from None.
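The values above correspond to fields of `trl`'s DDPO configuration; a sketch of how they would be assembled (an assumption about the tooling used here; the reward function, pipeline, and trainer wiring are omitted):

```python
from trl import DDPOConfig

# The card's hyperparameters dropped into trl's DDPO config (a sketch, not the actual training script).
config = DDPOConfig(
    num_epochs=5,
    train_gradient_accumulation_steps=1,
    sample_num_steps=30,
    sample_batch_size=16,
    train_batch_size=16,
    sample_num_batches_per_epoch=32,
)
```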
|
ldwang/mamba-1.4b-aquila-400b-sft
|
ldwang
| 2024-02-02T06:13:26Z | 65 | 0 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:2312.00752",
"endpoints_compatible",
"region:us"
] | null | 2024-02-01T08:32:35Z |
## Approach
This model, based on the [Mamba architecture](https://arxiv.org/abs/2312.00752), was pre-trained on approximately 400B tokens of Chinese and English corpora and then fine-tuned on Chinese and English instructions.
## Usage
```python
import torch
from mamba_ssm.models.mixer_seq_simple import MambaLMHeadModel
from transformers import AutoTokenizer
repo_id = 'ldwang/mamba-1.4b-aquila-400b-sft'  # full Hub repo id so from_pretrained can resolve it
device = "cuda:0"
model = MambaLMHeadModel.from_pretrained(repo_id, dtype=torch.bfloat16, device=device)
model.eval()
tokenizer = AutoTokenizer.from_pretrained(repo_id)
text = "ๅไธ้ฆๆฅ่ไธป้ข็ไธ่จ็ปๅฅ"  # prompt, roughly: "Write a Spring Festival-themed seven-character quatrain"
prompt = f"A chat between a curious human and an artificial intelligence assistant. "
prompt += f"The assistant gives helpful, detailed, and polite answers to the human's questions.\n\n"
prompt += f"<|startofpiece|>{text}<|endofpiece|>"
tokens = tokenizer.encode_plus(prompt, truncation=False)["input_ids"]
tokens = torch.tensor(tokens)[None,].to(device)
with torch.no_grad():
    input_length = len(tokens[0])
    out_ids = model.generate(input_ids=tokens, max_length=input_length + 200, temperature=1.0,
                             top_p=0.95, eos_token_id=tokenizer.eos_token_id, cg=True, top_k=15)
out_ids = out_ids[0][input_length:].cpu().numpy()
out_text = tokenizer.decode(out_ids.tolist())
print(out_text)
```
> ่ฑ็บขๆณ็ปฟๅบๆฅ่๏ผ
> ็็ซนๅฃฐๅฃฐ็ฌ่ฏญๆทปใ
> ๅขๅๅไน่ฟๅฎตๅบ๏ผ
> ็ฆๆฐๆปก้จๆปกๅฐๆฌขใ</s>
## References
The Mamba architecture was introduced in [Mamba: Linear-Time Sequence Modeling with Selective State Spaces](https://arxiv.org/abs/2312.00752).
The official implementation is here: https://github.com/state-spaces/mamba/tree/main
|
CLMBR/pp-mod-subj-lstm-1
|
CLMBR
| 2024-02-02T06:13:01Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"rnn",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-01-26T10:10:01Z |
---
tags:
- generated_from_trainer
model-index:
- name: pp-mod-subj2-lstm-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pp-mod-subj2-lstm-1
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0244
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.7926 | 0.03 | 76320 | 4.8046 |
| 4.5099 | 1.03 | 152640 | 4.5258 |
| 4.3635 | 0.03 | 228960 | 4.3901 |
| 4.2771 | 1.03 | 305280 | 4.3076 |
| 4.2173 | 2.03 | 381600 | 4.2516 |
| 4.1646 | 0.03 | 457920 | 4.2105 |
| 4.1238 | 0.03 | 534240 | 4.1808 |
| 4.0899 | 1.03 | 610560 | 4.1562 |
| 4.0628 | 2.03 | 686880 | 4.1370 |
| 4.0419 | 0.03 | 763200 | 4.1217 |
| 4.0191 | 1.03 | 839520 | 4.1092 |
| 4.0021 | 2.03 | 915840 | 4.0979 |
| 3.9884 | 0.03 | 992160 | 4.0885 |
| 3.9743 | 1.03 | 1068480 | 4.0805 |
| 3.9592 | 2.03 | 1144800 | 4.0736 |
| 3.9441 | 0.03 | 1221120 | 4.0688 |
| 3.9355 | 1.03 | 1297440 | 4.0630 |
| 3.9289 | 2.03 | 1373760 | 4.0589 |
| 3.9175 | 0.03 | 1450080 | 4.0547 |
| 3.9141 | 1.03 | 1526400 | 4.0505 |
| 3.9112 | 2.03 | 1602720 | 4.0477 |
| 3.9007 | 0.03 | 1679040 | 4.0438 |
| 3.8939 | 1.03 | 1755360 | 4.0432 |
| 3.8858 | 0.03 | 1831680 | 4.0397 |
| 3.8795 | 1.03 | 1908000 | 4.0380 |
| 3.8776 | 0.03 | 1984320 | 4.0363 |
| 3.8692 | 1.03 | 2060640 | 4.0353 |
| 3.8658 | 0.03 | 2136960 | 4.0338 |
| 3.8636 | 1.03 | 2213280 | 4.0321 |
| 3.8603 | 0.03 | 2289600 | 4.0314 |
| 3.8524 | 1.03 | 2365920 | 4.0305 |
| 3.846 | 2.03 | 2442240 | 4.0297 |
| 3.8448 | 0.03 | 2518560 | 4.0285 |
| 3.8429 | 0.03 | 2594880 | 4.0275 |
| 3.8383 | 1.03 | 2671200 | 4.0271 |
| 3.8414 | 0.03 | 2747520 | 4.0266 |
| 3.8428 | 0.03 | 2823840 | 4.0259 |
| 3.8395 | 0.03 | 2900160 | 4.0253 |
| 3.8371 | 1.03 | 2976480 | 4.0248 |
| 3.8331 | 2.02 | 3052726 | 4.0244 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
FounderOfHuggingface/gpt2_gen_lora_r16_dbpedia_14_t300_e5_hacked
|
FounderOfHuggingface
| 2024-02-02T06:11:02Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2024-02-02T06:10:58Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
Technoculture/MT7Bi-sft
|
Technoculture
| 2024-02-02T06:06:45Z | 207 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"medical",
"en",
"dataset:xzuyn/chatdoctor-200k-stripped",
"dataset:Technoculture/riddle_sense",
"dataset:axiong/pmc_llama_instructions",
"dataset:Open-Orca/SlimOrca-Dedup",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-11T07:24:00Z |
---
datasets:
- xzuyn/chatdoctor-200k-stripped
- Technoculture/riddle_sense
- axiong/pmc_llama_instructions
- Open-Orca/SlimOrca-Dedup
language:
- en
tags:
- medical
---

[Technoculture/MT7Bi-alpha](https://huggingface.co/Technoculture/MT7Bi-alpha) adapter merged with its Base Model (Meditron 7B)
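A minimal generation sketch (not from the original card; assumes the merged checkpoint loads with the standard ๐ค `transformers` text-generation pipeline):

```python
from transformers import pipeline

# Load the merged SFT checkpoint for text generation.
generator = pipeline("text-generation", model="Technoculture/MT7Bi-sft")
prompt = "Question: What are common symptoms of anemia?\nAnswer:"
print(generator(prompt, max_new_tokens=128)[0]["generated_text"])
```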
# Evaluations
## Open LLM Leaderboard
| Model | ARC |HellaSwag|TruthfulQA|Winogrande|GSM8K|
|---------------------------------------------------|----:|--------:|---------:|---------:|----:|
|[MT7Bi-sft (epoch 4)](https://huggingface.co/Technoculture/MT7Bi-sft)|54.1| 75.11| 43.08| 72.14|15.54|
|[MT7Bi-sft (epoch 1)](https://huggingface.co/Technoculture/MT7Bi)|50.94| 73.24| 43.04| 72.06|22.52|
### Model Evaluation Benchmark
| Category   | MT7Bi | meditron-70b | llama-2-70b | med42-70b* | meditron-7b | llama-2-7b | PMC-llama-7b |
|------------|------:|-------------:|------------:|-----------:|------------:|-----------:|-------------:|
| Health     |       | 81.8 | 69.1 | 83.6 | 27.3 | 16.4 | 3.6 |
| Nutrition  |       | 77.9 | 68.8 | 62.5 | 31.1 | 12.5 | 6.3 |
| Psychology |       | 47.4 | 36.8 | 52.6 | 21.1 | 10.5 | 0.0 |
| Science    |       | 77.8 | 44.4 | 33.3 | 33.3 | 11.1 | 0.0 |
| Avg        |       | 71.2 | 54.8 | 58.0 | 28.3 | 12.6 | 2.5 |
| Dataset        | MT7Bi | meditron-70b | llama-2-70b | med42-70b* | clinical-camel-70b* |
|----------------|------:|-------------:|------------:|-----------:|--------------------:|
| MMLU-Medical   | 46.9 | 77.6 | 77.9 | 74.5 | 65.7 |
| PubMedQA       | 65.2 | 81.6 | 80.0 | 61.2 | 67.0 |
| MedMCQA        | 42.7 | 66.0 | 62.6 | 59.2 | 46.7 |
| MedQA          |      | 64.4 | 61.5 | 59.1 | 50.8 |
| MedQA-4-Option | 44.3 | 70.2 | 63.8 | 63.9 | 56.8 |
| Avg            |      | 72.0 | 69.2 | 63.6 | 57.4 |
| Dataset        | meditron-7b | llama-2-7b | pmc-llama-7b | Zephyr-7B-beta* | Mistral-7B-instruct* | MT7Bi |
|----------------|------------:|-----------:|-------------:|----------------:|---------------------:|------:|
| MMLU-Medical   | 54.2 | 53.7 | 56.4 | 63.3 | 60.0 | 46.9 |
| PubMedQA       | 74.4 | 61.8 | 59.2 | 46.0 | 17.8 | 65.2 |
| MedMCQA        | 59.2 | 54.4 | 57.6 | 43.0 | 40.2 | 42.7 |
| MedQA          | 47.9 | 44.0 | 42.4 | 42.8 | 32.4 |      |
| MedQA-4-Option | 52.0 | 49.6 | 49.2 | 48.5 | 41.1 | 44.3 |
| Avg            | 57.5 | 52.7 | 53.0 | 48.7 | 38.3 |      |
| Model Name | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| ------------------ | -------- | --------- | ---- | ---------- | ---------- | -------- |
| Orca-2-7b | **78.4** | 76.1 | 53.7 | **52.4** | **74.2** | **47.2** |
| LLAMA-2-7b | 43.2 | **77.1** | 44.4 | 38.7 | 69.5 | 16 |
| MT7Bi-sft | 54.1 | 75.11 | - | 43.08 | 72.14 | 15.54 |
### ARC: 54.1%
| Task |Version| Metric | Value | |Stderr|
|-------------|------:|--------------------|-------------|---|------|
|arc_challenge| 1|acc,none | 0.51| | |
| | |acc_stderr,none | 0.01| | |
| | |acc_norm,none | 0.54| | |
| | |acc_norm_stderr,none| 0.01| | |
| | |alias |arc_challenge| | |
### HellaSwag: 75.11%
| Task |Version| Metric | Value | |Stderr|
|---------|------:|--------------------|---------|---|------|
|hellaswag| 1|acc,none | 0.57| | |
| | |acc_stderr,none | 0| | |
| | |acc_norm,none | 0.75| | |
| | |acc_norm_stderr,none| 0| | |
| | |alias |hellaswag| | |
### TruthfulQA: 43.08%
| Task |Version| Metric | Value | |Stderr|
|--------------|-------|-----------------------|-----------------|---|------|
|truthfulqa |N/A |bleu_max,none | 18.31| | |
| | |bleu_max_stderr,none | 0.46| | |
| | |bleu_acc,none | 0.39| | |
| | |bleu_acc_stderr,none | 0| | |
| | |bleu_diff,none | -1.63| | |
| | |bleu_diff_stderr,none | 0.39| | |
| | |rouge1_max,none | 41.99| | |
| | |rouge1_max_stderr,none | 0.71| | |
| | |rouge1_acc,none | 0.39| | |
| | |rouge1_acc_stderr,none | 0| | |
| | |rouge1_diff,none | -2.88| | |
| | |rouge1_diff_stderr,none| 0.66| | |
| | |rouge2_max,none | 27.42| | |
| | |rouge2_max_stderr,none | 0.80| | |
| | |rouge2_acc,none | 0.32| | |
| | |rouge2_acc_stderr,none | 0| | |
| | |rouge2_diff,none | -3.11| | |
| | |rouge2_diff_stderr,none| 0.78| | |
| | |rougeL_max,none | 38.81| | |
| | |rougeL_max_stderr,none | 0.71| | |
| | |rougeL_acc,none | 0.38| | |
| | |rougeL_acc_stderr,none | 0| | |
| | |rougeL_diff,none | -3.01| | |
| | |rougeL_diff_stderr,none| 0.66| | |
| | |acc,none | 0.33| | |
| | |acc_stderr,none | 0.05| | |
| | |alias |truthfulqa | | |
|truthfulqa_gen| 3|bleu_max,none | 18.31| | |
| | |bleu_max_stderr,none | 0.68| | |
| | |bleu_acc,none | 0.39| | |
| | |bleu_acc_stderr,none | 0.02| | |
| | |bleu_diff,none | -1.63| | |
| | |bleu_diff_stderr,none | 0.62| | |
| | |rouge1_max,none | 41.99| | |
| | |rouge1_max_stderr,none | 0.84| | |
| | |rouge1_acc,none | 0.39| | |
| | |rouge1_acc_stderr,none | 0.02| | |
| | |rouge1_diff,none | -2.88| | |
| | |rouge1_diff_stderr,none| 0.81| | |
| | |rouge2_max,none | 27.42| | |
| | |rouge2_max_stderr,none | 0.89| | |
| | |rouge2_acc,none | 0.32| | |
| | |rouge2_acc_stderr,none | 0.02| | |
| | |rouge2_diff,none | -3.11| | |
| | |rouge2_diff_stderr,none| 0.88| | |
| | |rougeL_max,none | 38.81| | |
| | |rougeL_max_stderr,none | 0.84| | |
| | |rougeL_acc,none | 0.38| | |
| | |rougeL_acc_stderr,none | 0.02| | |
| | |rougeL_diff,none | -3.01| | |
| | |rougeL_diff_stderr,none| 0.82| | |
| | |alias | - truthfulqa_gen| | |
|truthfulqa_mc1| 2|acc,none | 0.28| | |
| | |acc_stderr,none | 0.02| | |
| | |alias | - truthfulqa_mc1| | |
|truthfulqa_mc2| 2|acc,none | 0.43| | |
| | |acc_stderr,none | 0.01| | |
| | |alias | - truthfulqa_mc2| | |
### Winogrande: 72.14%
| Task |Version| Metric | Value | |Stderr|
|----------|------:|---------------|----------|---|------|
|winogrande| 1|acc,none | 0.72| | |
| | |acc_stderr,none| 0.01| | |
| | |alias |winogrande| | |
### GSM8K: 15.54%
|Task |Version| Metric |Value| |Stderr|
|-----|------:|-----------------------------|-----|---|------|
|gsm8k| 2|exact_match,get-answer | 0.16| | |
| | |exact_match_stderr,get-answer| 0.01| | |
| | |alias |gsm8k| | |
|
Amankankriya/ppo-SoccerTwos
|
Amankankriya
| 2024-02-02T06:06:04Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2024-02-02T06:06:03Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog ๐ถ to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Amankankriya/ppo-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play ๐
|
Archit001a/New_Model
|
Archit001a
| 2024-02-02T06:04:16Z | 189 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T06:03:57Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: New_Model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# New_Model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
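A minimal generation sketch (not from the original card; assumes the checkpoint loads with the standard ๐ค `transformers` text-generation pipeline):

```python
from transformers import pipeline

# Load this fine-tuned GPT-2 checkpoint for text generation.
generator = pipeline("text-generation", model="Archit001a/New_Model")
print(generator("Once upon a time", max_new_tokens=50)[0]["generated_text"])
```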
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Tokenizers 0.15.1
|
JKuang96/Pyramids
|
JKuang96
| 2024-02-02T06:01:01Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2024-02-02T06:00:59Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog ๐ถ to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: JKuang96/Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play ๐
|
LanguageBind/MoE-LLaVA-OpenChat-7B-4e
|
LanguageBind
| 2024-02-02T05:59:23Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"moe_llava_mistral",
"text-generation",
"conversational",
"arxiv:2401.15947",
"arxiv:2311.10122",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T05:27:18Z |
---
license: apache-2.0
---
<p align="center">
<img src="https://s11.ax1x.com/2023/12/28/piqvDMV.png" width="250" style="margin-bottom: 0.2;"/>
<p>
<h2 align="center"> <a href="https://arxiv.org/abs/2401.15947">MoE-LLaVA: Mixture of Experts for Large Vision-Language Models</a></h2>
<h5 align="center"> If you like our project, please give us a star โญ on GitHub for the latest updates. </h5>
## ๐ฐ News
* **[2024.01.30]** The [paper](https://arxiv.org/abs/2401.15947) is released.
* **[2024.01.27]** ๐ค[Hugging Face demo](https://huggingface.co/spaces/LanguageBind/MoE-LLaVA) and **all codes & datasets** are available now! Welcome to **watch** ๐ this repository for the latest updates.
## ๐ฎ Highlights
MoE-LLaVA shows excellent performance in multi-modal learning.
### ๐ฅ High performance, but with fewer parameters
- With just **3B sparsely activated parameters**, MoE-LLaVA demonstrates performance comparable to LLaVA-1.5-7B on various visual understanding datasets and even surpasses LLaVA-1.5-13B on object hallucination benchmarks.
### ๐ Simple baseline, learning multi-modal interactions with sparse pathways.
- With the addition of **a simple MoE tuning stage**, we can complete the training of MoE-LLaVA on **8 V100 GPUs** within 2 days.
## ๐ค Demo
### Gradio Web UI
We highly recommend trying out our web demo with the following command, which incorporates all features currently supported by MoE-LLaVA. We also provide an [online demo](https://huggingface.co/spaces/LanguageBind/MoE-LLaVA) on Hugging Face Spaces.
```bash
# use phi2
deepspeed --include localhost:0 moellava/serve/gradio_web_server.py --model-path "LanguageBind/MoE-LLaVA-Phi2-2.7B-4e"
# use qwen
deepspeed --include localhost:0 moellava/serve/gradio_web_server.py --model-path "LanguageBind/MoE-LLaVA-Qwen-1.8B-4e"
# use stablelm
deepspeed --include localhost:0 moellava/serve/gradio_web_server.py --model-path "LanguageBind/MoE-LLaVA-StableLM-1.6B-4e"
```
### CLI Inference
```bash
# use phi2
deepspeed --include localhost:0 moellava/serve/cli.py --model-path "LanguageBind/MoE-LLaVA-Phi2-2.7B-4e" --image-file "image.jpg"
# use qwen
deepspeed --include localhost:0 moellava/serve/cli.py --model-path "LanguageBind/MoE-LLaVA-Qwen-1.8B-4e" --image-file "image.jpg"
# use stablelm
deepspeed --include localhost:0 moellava/serve/cli.py --model-path "LanguageBind/MoE-LLaVA-StableLM-1.6B-4e" --image-file "image.jpg"
```
## ๐ณ Model Zoo
| Model | LLM | Checkpoint | Avg | VQAv2 | GQA | VizWiz | SQA | T-VQA | POPE | MM-Bench| LLaVA-Bench-Wild | MM-Vet |
|----------|-----------|-----------|---|---|---|---|---|---|---|---|---|---|
| MoE-LLaVA-1.6Bร4-Top2 | 1.6B | [LanguageBind/MoE-LLaVA-StableLM-1.6B-4e](https://huggingface.co/LanguageBind/MoE-LLaVA-StableLM-1.6B-4e) | 60.0 | 76.0 | 60.4 | 37.2 | 62.6 | 47.8 | 84.3 | 59.4 | 85.9 | 26.1 |
| MoE-LLaVA-1.8Bร4-Top2 | 1.8B | [LanguageBind/MoE-LLaVA-Qwen-1.8B-4e](https://huggingface.co/LanguageBind/MoE-LLaVA-Qwen-1.8B-4e) | 60.2 | 76.2 | 61.5 | 32.6 | 63.1 | 48.0 | 87.0 | 59.6 | 88.7 | 25.3 |
| MoE-LLaVA-2.7Bร4-Top2 | 2.7B | [LanguageBind/MoE-LLaVA-Phi2-2.7B-4e](https://huggingface.co/LanguageBind/MoE-LLaVA-Phi2-2.7B-4e) | 63.9 | 77.1 | 61.1 | 43.4 | 68.7 | 50.2 | 85.0 | 65.5 | 93.2 | 31.1 |
<!--
| LLaVA-1.5 | 7B | [liuhaotian/llava-v1.5-7b](https://huggingface.co/liuhaotian/llava-v1.5-7b) | 62.0 | 78.5 | 62.0 | 50.0 | 66.8 | 58.2 | 85.9 | 64.3 | 31.1 |
| LLaVA-1.5 | 13B | [liuhaotian/llava-v1.5-13b](https://huggingface.co/liuhaotian/llava-v1.5-13b) | 64.9 | 80.0 | 63.3 | 53.6 | 71.6 | 61.3 | 85.9 | 67.7 | 36.1 |
-->
## โ๏ธ Requirements and Installation
* Python >= 3.10
* Pytorch == 2.0.1
* CUDA Version >= 11.7
* **Transformers == 4.36.2**
* **Tokenizers==0.15.1**
* Install required packages:
```bash
git clone https://github.com/PKU-YuanGroup/MoE-LLaVA
cd MoE-LLaVA
conda create -n moellava python=3.10 -y
conda activate moellava
pip install --upgrade pip # enable PEP 660 support
pip install -e .
pip install -e ".[train]"
pip install flash-attn --no-build-isolation
# Below are optional. For Qwen model.
git clone https://github.com/Dao-AILab/flash-attention
cd flash-attention && pip install .
# Below are optional. Installing them might be slow.
# pip install csrc/layer_norm
# If the version of flash-attn is higher than 2.1.1, the following is not needed.
# pip install csrc/rotary
```
## ๐๏ธ Training & Validating
The training & validating instruction is in [TRAIN.md](docs/TRAIN.md) & [EVAL.md](docs/EVAL.md).
## ๐ก Customizing your MoE-LLaVA
The instruction is in [CUSTOM.md](docs/CUSTOM.md).
## ๐ Visualization
The instruction is in [VISUALIZATION.md](docs/VISUALIZATION.md).
## ๐ค API
**We open-source all code.** If you want to load the model (e.g. ```LanguageBind/MoE-LLaVA```) locally, you can use the following code snippets.
**Use the following command to run the code.**
```bash
deepspeed predict.py
```
```python
import torch
from moellava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN
from moellava.conversation import conv_templates, SeparatorStyle
from moellava.model.builder import load_pretrained_model
from moellava.utils import disable_torch_init
from moellava.mm_utils import tokenizer_image_token, get_model_name_from_path, KeywordsStoppingCriteria
def main():
    disable_torch_init()
    image = 'moellava/serve/examples/extreme_ironing.jpg'
    inp = 'What is unusual about this image?'
    model_path = 'LanguageBind/MoE-LLaVA-Phi2-2.7B-4e'  # LanguageBind/MoE-LLaVA-Qwen-1.8B-4e or LanguageBind/MoE-LLaVA-StableLM-1.6B-4e
    device = 'cuda'
    load_4bit, load_8bit = False, False  # FIXME: Deepspeed support 4bit or 8bit?
    model_name = get_model_name_from_path(model_path)
    tokenizer, model, processor, context_len = load_pretrained_model(model_path, None, model_name, load_8bit, load_4bit, device=device)
    image_processor = processor['image']
    conv_mode = "phi"  # qwen or stablelm
    conv = conv_templates[conv_mode].copy()
    roles = conv.roles
    image_tensor = image_processor.preprocess(image, return_tensors='pt')['pixel_values'].to(model.device, dtype=torch.float16)

    print(f"{roles[1]}: {inp}")
    inp = DEFAULT_IMAGE_TOKEN + '\n' + inp
    conv.append_message(conv.roles[0], inp)
    conv.append_message(conv.roles[1], None)
    prompt = conv.get_prompt()
    input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).cuda()
    stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2
    keywords = [stop_str]
    stopping_criteria = KeywordsStoppingCriteria(keywords, tokenizer, input_ids)

    with torch.inference_mode():
        output_ids = model.generate(
            input_ids,
            images=image_tensor,
            do_sample=True,
            temperature=0.2,
            max_new_tokens=1024,
            use_cache=True,
            stopping_criteria=[stopping_criteria])
    outputs = tokenizer.decode(output_ids[0, input_ids.shape[1]:], skip_special_tokens=True).strip()
    print(outputs)

if __name__ == '__main__':
    main()
```
## ๐ Related Projects
* [Video-LLaVA](https://github.com/PKU-YuanGroup/Video-LLaVA) A framework that empowers the model to efficiently utilize united visual tokens.
* [LanguageBind](https://github.com/PKU-YuanGroup/LanguageBind) An open-source, language-based retrieval framework spanning five modalities.
## ๐ Acknowledgement
* [LLaVA](https://github.com/haotian-liu/LLaVA) The codebase we built upon; an efficient large language and vision assistant.
## ๐ License
* The majority of this project is released under the Apache 2.0 license as found in the [LICENSE](https://github.com/PKU-YuanGroup/MoE-LLaVA/blob/main/LICENSE) file.
* The service is a research preview intended for non-commercial use only, subject to the model [License](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) of LLaMA, [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violation.
## โ๏ธ Citation
If you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil:.
```BibTeX
@misc{lin2024moellava,
title={MoE-LLaVA: Mixture of Experts for Large Vision-Language Models},
author={Bin Lin and Zhenyu Tang and Yang Ye and Jiaxi Cui and Bin Zhu and Peng Jin and Junwu Zhang and Munan Ning and Li Yuan},
year={2024},
eprint={2401.15947},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```BibTeX
@article{lin2023video,
title={Video-LLaVA: Learning United Visual Representation by Alignment Before Projection},
author={Lin, Bin and Zhu, Bin and Ye, Yang and Ning, Munan and Jin, Peng and Yuan, Li},
journal={arXiv preprint arXiv:2311.10122},
year={2023}
}
```
## โจ Star History
[](https://star-history.com/#PKU-YuanGroup/MoE-LLaVA&Date)
## ๐ค Contributors
<a href="https://github.com/PKU-YuanGroup/MoE-LLaVA/graphs/contributors">
<img src="https://contrib.rocks/image?repo=PKU-YuanGroup/MoE-LLaVA" />
</a>
|
yaoandy107/whisper-large-v2-moba
|
yaoandy107
| 2024-02-02T05:57:40Z | 66 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
automatic-speech-recognition
| 2024-02-01T09:50:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Samee-ur/DareTIES-1-7B
|
Samee-ur
| 2024-02-02T05:56:54Z | 0 | 0 | null |
[
"merge",
"mergekit",
"lazymergekit",
"samir-fama/SamirGPT-v1",
"abacusai/Slerp-CM-mist-dpo",
"EmbeddedLLM/Mistral-7B-Merge-14-v0.2",
"base_model:EmbeddedLLM/Mistral-7B-Merge-14-v0.2",
"base_model:merge:EmbeddedLLM/Mistral-7B-Merge-14-v0.2",
"base_model:abacusai/Slerp-CM-mist-dpo",
"base_model:merge:abacusai/Slerp-CM-mist-dpo",
"base_model:samir-fama/SamirGPT-v1",
"base_model:merge:samir-fama/SamirGPT-v1",
"license:apache-2.0",
"region:us"
] | null | 2024-02-02T05:56:54Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- samir-fama/SamirGPT-v1
- abacusai/Slerp-CM-mist-dpo
- EmbeddedLLM/Mistral-7B-Merge-14-v0.2
base_model:
- samir-fama/SamirGPT-v1
- abacusai/Slerp-CM-mist-dpo
- EmbeddedLLM/Mistral-7B-Merge-14-v0.2
---
# DareTIES-1-7B
DareTIES-1-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [samir-fama/SamirGPT-v1](https://huggingface.co/samir-fama/SamirGPT-v1)
* [abacusai/Slerp-CM-mist-dpo](https://huggingface.co/abacusai/Slerp-CM-mist-dpo)
* [EmbeddedLLM/Mistral-7B-Merge-14-v0.2](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.2)
## ๐งฉ Configuration
```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
    # No parameters necessary for base model
  - model: samir-fama/SamirGPT-v1
    parameters:
      density: 0.55
      weight: 0.4
  - model: abacusai/Slerp-CM-mist-dpo
    parameters:
      density: 0.55
      weight: 0.3
  - model: EmbeddedLLM/Mistral-7B-Merge-14-v0.2
    parameters:
      density: 0.55
      weight: 0.3
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
dtype: bfloat16
```
## ๐ป Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Samee-ur/DareTIES-1-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
asun17904/glue-qqp-gpt2-kd
|
asun17904
| 2024-02-02T05:49:37Z | 1 | 0 |
pytorch
|
[
"pytorch",
"gpt2",
"en",
"license:mit",
"region:us"
] | null | 2024-02-01T13:35:36Z |
---
language: en
license: mit
library_name: pytorch
---
# Plainly Optimized Network
Dataset: GLUE
Trainer Hyperparameters:
- `lr` = 5e-05
- `per_device_batch_size` = 8
- `gradient_accumulation_steps` = 2
- `weight_decay` = 1e-09
- `seed` = 42
|eval_loss|eval_accuracy|epoch|
|--|--|--|
|
Archit001a/Updated_Model
|
Archit001a
| 2024-02-02T05:48:21Z | 202 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T05:48:02Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: Updated_Model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Updated_Model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Tokenizers 0.15.1
|
woodspoon09/distilbert-base-uncased-distilled-clinc
|
woodspoon09
| 2024-02-02T05:48:02Z | 94 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-02T05:47:29Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3141
- Accuracy: 0.9455
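A minimal inference sketch (not from the original card; assumes the standard ๐ค `transformers` text-classification pipeline, with CLINC-style intent queries as input):

```python
from transformers import pipeline

# Load the distilled intent classifier.
classifier = pipeline("text-classification", model="woodspoon09/distilbert-base-uncased-distilled-clinc")
print(classifier("What is the exchange rate between dollars and euros?"))
```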
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 2.7574 | 0.7471 |
| 3.21 | 2.0 | 636 | 1.4119 | 0.8697 |
| 3.21 | 3.0 | 954 | 0.7505 | 0.9094 |
| 1.2277 | 4.0 | 1272 | 0.4865 | 0.9323 |
| 0.4695 | 5.0 | 1590 | 0.3842 | 0.9384 |
| 0.4695 | 6.0 | 1908 | 0.3468 | 0.9432 |
| 0.2602 | 7.0 | 2226 | 0.3271 | 0.9452 |
| 0.1925 | 8.0 | 2544 | 0.3219 | 0.9435 |
| 0.1925 | 9.0 | 2862 | 0.3146 | 0.9442 |
| 0.1675 | 10.0 | 3180 | 0.3141 | 0.9455 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.1
|
kam414/sft-mistral-v1
|
kam414
| 2024-02-02T05:38:45Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:generator",
"base_model:kam414/pre-train-mistral-v2-full",
"base_model:finetune:kam414/pre-train-mistral-v2-full",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-01T15:58:03Z |
---
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: kam414/pre-train-mistral-v2-full
model-index:
- name: sft-mistral-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft-mistral-v1
This model is a fine-tuned version of [kam414/pre-train-mistral-v2-full](https://huggingface.co/kam414/pre-train-mistral-v2-full) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2212
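If the repo holds a full merged checkpoint (the framework versions below also list PEFT, so verify the repo files first), a minimal generation sketch would be:

```python
from transformers import pipeline

# Load the SFT checkpoint for instruction-style generation.
generator = pipeline("text-generation", model="kam414/sft-mistral-v1")
prompt = "### Instruction: Summarize what supervised fine-tuning does.\n### Response:"
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```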
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.2
- training_steps: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9676 | 0.5 | 10 | 1.4137 |
| 1.3178 | 1.0 | 20 | 1.2463 |
| 1.071 | 1.5 | 30 | 1.2405 |
| 1.0554 | 2.0 | 40 | 1.2212 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
HackerCIS/bloomz-560m_PROMPT_TUNING_CAUSAL_SPAM_epoch1
|
HackerCIS
| 2024-02-02T05:20:35Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:bigscience/bloomz-560m",
"base_model:adapter:bigscience/bloomz-560m",
"region:us"
] | null | 2024-02-02T05:20:34Z |
---
library_name: peft
base_model: bigscience/bloomz-560m
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2
|
Chattiori/QuoliaMix
|
Chattiori
| 2024-02-02T05:14:37Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-02-02T05:14:37Z |
---
license: creativeml-openrail-m
---
|
TinyPixel/o1
|
TinyPixel
| 2024-02-02T05:13:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-02T05:13:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zomd/AISquare-Instruct-yi-ko-6b-v0.9.30
|
zomd
| 2024-02-02T05:13:04Z | 2,258 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-21T23:48:58Z |
---
language:
- en
pipeline_tag: text-generation
license: cc-by-nc-4.0
---
# AISquare-Instruct-yi-ko-6b-v0.9.30
## Model Details
**Developed by**
[Inswave Systems](https://www.inswave.com) UI Platform Team
**Method**
Trained using the DPO and SFT methods
**Hardware**
We used a single node with 4x A100 GPUs to train our model
**Base Model**
[beomi/Yi-Ko-6B](https://huggingface.co/beomi/Yi-Ko-6B)
## Open ko-leaderboard Rank
<img src='./ko-leaderboard.png' width=512>
# Implementation Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "zomd/AISquare-Instruct-yi-ko-6b-v0.9.30"
model = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
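A minimal generation call to sanity-check the loaded model, continuing from the snippet above (the prompt and decoding settings are illustrative, not from the original card):
```python
prompt = "What is the capital of South Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```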
This work is a result carried out with the support of the "AI-Centered Industrial Convergence Cluster Development Project" promoted by the Artificial Intelligence Industry Convergence Business Agency (AICA).
---
|
karawalla/aqmodel_20240202_merged
|
karawalla
| 2024-02-02T05:09:40Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T05:06:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bartowski/LongAlign-7B-64k-exl2
|
bartowski
| 2024-02-02T05:05:45Z | 7 | 0 |
transformers
|
[
"transformers",
"Long Context",
"llama",
"text-generation",
"en",
"zh",
"dataset:THUDM/LongAlign-10k",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T04:51:55Z |
---
language:
- en
- zh
library_name: transformers
tags:
- Long Context
- llama
datasets:
- THUDM/LongAlign-10k
license: apache-2.0
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of LongAlign-7B-64k
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.12">turboderp's ExLlamaV2 v0.0.12</a> for quantization.
# The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains a different bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.
Original model: https://huggingface.co/THUDM/LongAlign-7B-64k
No GQA - VRAM requirements will be higher
| Branch | Bits | lm_head bits | Size (4k) | Size (16k) | Description |
| -------------------------------------------------------------- | ---- | ------------ | --------- | ---------- | ----------- |
| [8_0](https://huggingface.co/Bartowski/LongAlign-7B-64k-exl2/tree/8_0) | 8.0 | 8.0 | 9.4 GB | 15.6 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/Bartowski/LongAlign-7B-64k-exl2/tree/6_5) | 6.5 | 8.0 | 8.6 GB | 14.8 GB | Near unquantized performance at vastly reduced size, **recommended**. |
| [5_0](https://huggingface.co/Bartowski/LongAlign-7B-64k-exl2/tree/5_0) | 5.0 | 6.0 | 7.2 GB | 13.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards with 4k context. |
| [4_25](https://huggingface.co/Bartowski/LongAlign-7B-64k-exl2/tree/4_25) | 4.25 | 6.0 | 6.5 GB | 12.7 GB | GPTQ equivalent bits per weight. |
| [3_5](https://huggingface.co/Bartowski/LongAlign-7B-64k-exl2/tree/3_5) | 3.5 | 6.0 | 5.9 GB | 12.1 GB | Lower quality, not recommended. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/LongAlign-7B-64k-exl2 LongAlign-7B-64k-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `LongAlign-7B-64k-exl2`:
```shell
mkdir LongAlign-7B-64k-exl2
huggingface-cli download bartowski/LongAlign-7B-64k-exl2 --local-dir LongAlign-7B-64k-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir LongAlign-7B-64k-exl2-6_5
huggingface-cli download bartowski/LongAlign-7B-64k-exl2 --revision 6_5 --local-dir LongAlign-7B-64k-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir LongAlign-7B-64k-exl2-6.5
huggingface-cli download bartowski/LongAlign-7B-64k-exl2 --revision 6_5 --local-dir LongAlign-7B-64k-exl2-6.5 --local-dir-use-symlinks False
```
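Once a branch is downloaded, the quantized weights can be loaded with the exllamav2 Python API. A minimal sketch, assuming the directory from the 6_5 example above and the API of exllamav2 ~v0.0.12 (verify the names against your installed version):
```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "LongAlign-7B-64k-exl2-6_5"  # directory downloaded above
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # no GQA, so the cache is comparatively large
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
print(generator.generate_simple("Hello, my name is", ExLlamaV2Sampler.Settings(), 64))
```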
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
macarious/torgo_xlsr_finetune_M04_keep_all
|
macarious
| 2024-02-02T04:59:49Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-02-01T09:42:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: torgo_xlsr_finetune_M04_keep_all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# torgo_xlsr_finetune_M04_keep_all
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the torgo dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4844
- Wer: 0.2303
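The checkpoint can be exercised with the 🤗 `pipeline` API — a minimal sketch (the audio file path is a placeholder for a local speech recording):
```python
from transformers import pipeline

# Loads the fine-tuned wav2vec2 checkpoint for CTC-based transcription.
asr = pipeline("automatic-speech-recognition", model="macarious/torgo_xlsr_finetune_M04_keep_all")
print(asr("speech_sample.wav"))  # placeholder local audio file
```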
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5764 | 0.55 | 1000 | 3.3931 | 1.0 |
| 2.2058 | 1.1 | 2000 | 1.8051 | 0.8589 |
| 1.0531 | 1.66 | 3000 | 1.5038 | 0.6508 |
| 0.7783 | 2.21 | 4000 | 1.2594 | 0.5208 |
| 0.6084 | 2.76 | 5000 | 1.3131 | 0.4586 |
| 0.5241 | 3.31 | 6000 | 1.3666 | 0.4339 |
| 0.4586 | 3.87 | 7000 | 1.3358 | 0.3866 |
| 0.4149 | 4.42 | 8000 | 1.2625 | 0.3332 |
| 0.3796 | 4.97 | 9000 | 1.5808 | 0.3629 |
| 0.3685 | 5.52 | 10000 | 1.2197 | 0.3298 |
| 0.3322 | 6.07 | 11000 | 1.6204 | 0.3473 |
| 0.3133 | 6.63 | 12000 | 1.6558 | 0.3446 |
| 0.2833 | 7.18 | 13000 | 1.5270 | 0.3100 |
| 0.2941 | 7.73 | 14000 | 1.4321 | 0.3134 |
| 0.2709 | 8.28 | 15000 | 1.3682 | 0.3092 |
| 0.2362 | 8.83 | 16000 | 1.2184 | 0.2787 |
| 0.2205 | 9.39 | 17000 | 1.4273 | 0.2863 |
| 0.2515 | 9.94 | 18000 | 1.3085 | 0.2665 |
| 0.2185 | 10.49 | 19000 | 1.5292 | 0.2852 |
| 0.2197 | 11.04 | 20000 | 1.4625 | 0.2817 |
| 0.2122 | 11.6 | 21000 | 1.4086 | 0.2634 |
| 0.1869 | 12.15 | 22000 | 1.6290 | 0.2791 |
| 0.1839 | 12.7 | 23000 | 1.4520 | 0.2722 |
| 0.1946 | 13.25 | 24000 | 1.5211 | 0.2653 |
| 0.1871 | 13.8 | 25000 | 1.3136 | 0.2390 |
| 0.1831 | 14.36 | 26000 | 1.4022 | 0.2581 |
| 0.1644 | 14.91 | 27000 | 1.5609 | 0.2673 |
| 0.1499 | 15.46 | 28000 | 1.3431 | 0.2429 |
| 0.1566 | 16.01 | 29000 | 1.5110 | 0.2566 |
| 0.1533 | 16.57 | 30000 | 1.4567 | 0.2345 |
| 0.1446 | 17.12 | 31000 | 1.5160 | 0.2478 |
| 0.1451 | 17.67 | 32000 | 1.4081 | 0.2379 |
| 0.1269 | 18.22 | 33000 | 1.5296 | 0.2379 |
| 0.1438 | 18.77 | 34000 | 1.5765 | 0.2406 |
| 0.1112 | 19.33 | 35000 | 1.5061 | 0.2337 |
| 0.1215 | 19.88 | 36000 | 1.4844 | 0.2303 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.13.3
|
Rupesh2/Kgp-Llama
|
Rupesh2
| 2024-02-02T04:53:26Z | 61 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-02-02T04:49:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
michaelhu1/ppo-LunarLander-v2
|
michaelhu1
| 2024-02-02T04:52:49Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-02T04:52:30Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.13 +/- 20.61
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Load the trained policy from the Hub (the checkpoint filename below is an assumption based on the usual sb3 naming convention; check the repo's files for the exact name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub(repo_id="michaelhu1/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
philimon/TinyLlama-gsm8k-math
|
philimon
| 2024-02-02T04:11:19Z | 14 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v0.3",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v0.3",
"license:apache-2.0",
"region:us"
] | null | 2024-02-02T03:43:53Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: PY007/TinyLlama-1.1B-Chat-v0.3
model-index:
- name: TinyLlama-gsm8k-math
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TinyLlama-gsm8k-math
This model is a fine-tuned version of [PY007/TinyLlama-1.1B-Chat-v0.3](https://huggingface.co/PY007/TinyLlama-1.1B-Chat-v0.3) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 5
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.15.1
|
suhas-hegde5/controlnet_celeb_v2_1
|
suhas-hegde5
| 2024-02-02T04:08:59Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-02-01T06:25:00Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-suhas-hegde5/controlnet_celeb_v2_1
These are controlnet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning.
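A minimal inference sketch with 🤗 diffusers; the conditioning modality is not documented in this card, so the conditioning image below is a placeholder:
```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "suhas-hegde5/controlnet_celeb_v2_1", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

cond = Image.open("condition.png")  # placeholder conditioning image
image = pipe("a portrait photo of a celebrity", image=cond).images[0]
image.save("out.png")
```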
|
APaul1/vit-base-patch16-224-in21k-finetuned-lora-food101
|
APaul1
| 2024-02-02T04:01:38Z | 0 | 0 |
transformers, peft, torch
|
[
"transformers, peft, torch",
"safetensors",
"dataset:food101",
"region:us"
] | null | 2024-01-31T17:15:11Z |
---
library_name: transformers, peft, torch
datasets:
- food101
---
# Model Card for Model ID
A food image classification LoRA model based on the example provided in https://huggingface.co/docs/peft/task_guides/image_classification_lora
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
Trains a PEFT (LoRA) adapter on the google/vit-base-patch16-224-in21k model to name a food when its image is provided.

Label : Beignet
- **Developed by:** PEFT Example
- **Model type:** Food classification LORA
- **Finetuned from model [optional]:** google/vit-base-patch16-224-in21k
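A minimal inference sketch, assuming the LoRA adapter in this repo was saved with PEFT's `save_pretrained` and that the classifier head uses Food-101's 101 labels:
```python
import torch
from PIL import Image
from peft import PeftModel
from transformers import AutoImageProcessor, AutoModelForImageClassification

base = "google/vit-base-patch16-224-in21k"
adapter = "APaul1/vit-base-patch16-224-in21k-finetuned-lora-food101"

processor = AutoImageProcessor.from_pretrained(base)
model = AutoModelForImageClassification.from_pretrained(base, num_labels=101)
model = PeftModel.from_pretrained(model, adapter)  # restores LoRA weights (and saved head, if any)
model.eval()

image = Image.open("beignets.jpg")  # hypothetical local image
inputs = processor(image, return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(-1).item()
print(pred)  # predicted Food-101 class id
```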
|
SimplCup/LudwigV2
|
SimplCup
| 2024-02-02T03:53:16Z | 0 | 0 | null |
[
"license:cc-by-nc-nd-4.0",
"region:us"
] | null | 2024-02-02T03:52:53Z |
---
license: cc-by-nc-nd-4.0
---
|
LingxinAI/CharacterGLM-6b
|
LingxinAI
| 2024-02-02T03:47:50Z | 0 | 55 | null |
[
"region:us"
] | null | 2023-09-22T07:56:39Z |
We have open-sourced this model at https://huggingface.co/thu-coai/CharacterGLM-6B
|
CLMBR/pp-mod-subj-lstm-0
|
CLMBR
| 2024-02-02T03:34:44Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"rnn",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-01-26T10:08:07Z |
---
tags:
- generated_from_trainer
model-index:
- name: pp-mod-subj2-lstm-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pp-mod-subj2-lstm-0
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0215
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.7865 | 0.03 | 76320 | 4.7973 |
| 4.502 | 1.03 | 152640 | 4.5167 |
| 4.3567 | 0.03 | 228960 | 4.3828 |
| 4.272 | 1.03 | 305280 | 4.3003 |
| 4.2106 | 2.03 | 381600 | 4.2449 |
| 4.1566 | 0.03 | 457920 | 4.2046 |
| 4.118 | 1.03 | 534240 | 4.1734 |
| 4.0851 | 0.03 | 610560 | 4.1498 |
| 4.0563 | 1.03 | 686880 | 4.1309 |
| 4.0367 | 0.03 | 763200 | 4.1159 |
| 4.0134 | 1.03 | 839520 | 4.1037 |
| 3.9977 | 2.03 | 915840 | 4.0930 |
| 3.9827 | 0.03 | 992160 | 4.0838 |
| 3.9706 | 1.03 | 1068480 | 4.0765 |
| 3.9551 | 2.03 | 1144800 | 4.0699 |
| 3.9406 | 0.03 | 1221120 | 4.0643 |
| 3.9305 | 1.03 | 1297440 | 4.0594 |
| 3.9228 | 2.03 | 1373760 | 4.0555 |
| 3.9128 | 0.03 | 1450080 | 4.0514 |
| 3.9095 | 1.03 | 1526400 | 4.0483 |
| 3.9058 | 2.03 | 1602720 | 4.0453 |
| 3.8955 | 0.03 | 1679040 | 4.0415 |
| 3.8886 | 1.03 | 1755360 | 4.0394 |
| 3.8803 | 2.03 | 1831680 | 4.0366 |
| 3.8745 | 0.03 | 1908000 | 4.0354 |
| 3.8719 | 1.03 | 1984320 | 4.0332 |
| 3.8664 | 0.03 | 2060640 | 4.0317 |
| 3.8623 | 1.03 | 2136960 | 4.0308 |
| 3.8597 | 0.03 | 2213280 | 4.0296 |
| 3.8551 | 1.03 | 2289600 | 4.0285 |
| 3.8488 | 0.03 | 2365920 | 4.0273 |
| 3.8429 | 0.03 | 2442240 | 4.0266 |
| 3.8403 | 0.03 | 2518560 | 4.0258 |
| 3.8406 | 1.03 | 2594880 | 4.0251 |
| 3.8341 | 0.03 | 2671200 | 4.0244 |
| 3.8346 | 1.03 | 2747520 | 4.0237 |
| 3.8395 | 0.03 | 2823840 | 4.0231 |
| 3.8338 | 0.03 | 2900160 | 4.0225 |
| 3.8315 | 1.03 | 2976480 | 4.0220 |
| 3.8293 | 0.02 | 3052726 | 4.0215 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
APaul1/roberta-large-lora-token-classify-BioNLP
|
APaul1
| 2024-02-02T03:34:01Z | 0 | 0 |
transformers, peft
|
[
"transformers, peft",
"safetensors",
"region:us"
] | null | 2024-02-01T23:36:38Z |
---
library_name: transformers, peft
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is an implementation of the token classification task described [here](https://huggingface.co/docs/peft/task_guides/token-classification-lora). A PEFT model has been fine-tuned from the roberta-large base model for a bio-entity recognition task. The objective is BIO named entity recognition.
Given a statement [ "During", "treatment", "with", "Hm", ",", "K562", "cells", "constitutively", "expressed", "c-myb", "mRNA", ",", "and", "50", "%", "of", "them", "began", "to", "synthesize", "hemoglobin", "(", "Hb", ")", "." ]
it would generate the tags [ 0, 0, 0, 3, 0, 7, 8, 0, 0, 9, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 3, 0, 0 ]
The label id categories are:
{
"O": 0,
"B-DNA": 1,
"I-DNA": 2,
"B-protein": 3,
"I-protein": 4,
"B-cell_type": 5,
"I-cell_type": 6,
"B-cell_line": 7,
"I-cell_line": 8,
"B-RNA": 9,
"I-RNA": 10
}
More details can be found [here](https://huggingface.co/datasets/tner/bionlp2004?row=18)
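A minimal inference sketch, assuming the adapter in this repo was saved with PEFT's `save_pretrained` on top of roberta-large with the 11 labels above:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForTokenClassification, AutoTokenizer

base = "roberta-large"
adapter = "APaul1/roberta-large-lora-token-classify-BioNLP"

# RoBERTa needs add_prefix_space=True when tokenizing pre-split words.
tokenizer = AutoTokenizer.from_pretrained(base, add_prefix_space=True)
model = AutoModelForTokenClassification.from_pretrained(base, num_labels=11)
model = PeftModel.from_pretrained(model, adapter)
model.eval()

tokens = ["During", "treatment", "with", "Hm", ",", "K562", "cells"]
inputs = tokenizer(tokens, is_split_into_words=True, return_tensors="pt")
with torch.no_grad():
    preds = model(**inputs).logits.argmax(-1)[0].tolist()
print(preds)  # per-subword label ids; map subwords back to words for word-level tags
```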
- **Developed by:** PEFT Example
- **Model type:** Token Classification using LLM
- **Finetuned from model:** roberta-large
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
|
AIFT/AIFT-ko-orca-plat-Yi-ko-6b-v1.5
|
AIFT
| 2024-02-02T03:29:25Z | 54 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-01T02:33:33Z |
---
license: cc-by-sa-4.0
---
<h1>orca-platypus - instruct model v1.5</h1>
<b><Training data construction></b>
We used the KOR-OpenOrca-Platypus data released by kyujinpy, after partially deleting (sampling) and cleaning it.
We then reviewed that data to extract related tasks, and built our own training data for those tasks from open-source NLP resources:
history, science, math, machine reading comprehension, and review-analysis problems were constructed with GPT,
and additional training data was built from AI Hub general-knowledge and machine reading comprehension data (morphology, machine reading comprehension, and summarization).
History and common-sense quizzes from various blogs were manually converted into training-data form.
Following the format of the AI2AI Challenge data, about 500 elementary-level science and math problems with answers were produced with GPT.
English–Korean / Korean–English translation data was also used for training.
In total, about 40,000 examples were used.
<br>
<br>
+ Added TruthfulQA-style questions (true/false questions about common misconceptions and superstitions)
+ Machine reading comprehension training data, with answers obtained via ChatGPT
+ Grammar-related training data
<br>
###The training data files are private.
<br>
<b><Training></b>
Training was performed with LoRA on 2x A100 40G GPUs.
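The card only states that training used LoRA; a minimal PEFT setup sketch under that assumption (the base checkpoint is inferred from the model name, and the rank and target modules are illustrative, not the card's actual settings):
```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Base checkpoint is an assumption inferred from the model name; replace with the real one.
model = AutoModelForCausalLM.from_pretrained("beomi/Yi-Ko-6B")

lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,  # illustrative hyperparameters
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```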
|
rizla/rizla55b
|
rizla
| 2024-02-02T03:13:07Z | 51 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"dpo",
"conversational",
"license:cc-by-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T02:43:10Z |
---
license: cc-by-nd-4.0
base_model: []
tags:
- dpo
---
# This is an experimental model that I made by merging two Llama2 70b models and gluing them together with mergekit. Mergekit is a tool that lets me mix and match different models into one big model, keeping the smarts and skills of the originals.
The merged model has 55 billion parameters and was trained on a cluster with 640 GB of VRAM.
|
danangwijaya/GEC-T5-small
|
danangwijaya
| 2024-02-02T03:11:04Z | 91 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-02T02:58:03Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
model-index:
- name: GEC-T5-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GEC-T5-small
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6816
## Model description
More information needed
## Intended uses & limitations
More information needed
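Despite the missing details, the checkpoint can be exercised as a standard seq2seq model — a minimal inference sketch (whether the model expects a task prefix is undocumented; the plain input below is an assumption):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "danangwijaya/GEC-T5-small"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

text = "She go to school every days."  # illustrative ungrammatical sentence
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```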
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 325 | 1.8041 |
| 1.908 | 2.0 | 650 | 1.7697 |
| 1.908 | 3.0 | 975 | 1.7359 |
| 1.8218 | 4.0 | 1300 | 1.7228 |
| 1.7942 | 5.0 | 1625 | 1.7061 |
| 1.7942 | 6.0 | 1950 | 1.6981 |
| 1.7497 | 7.0 | 2275 | 1.6910 |
| 1.7379 | 8.0 | 2600 | 1.6848 |
| 1.7379 | 9.0 | 2925 | 1.6828 |
| 1.7165 | 10.0 | 3250 | 1.6816 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
deepseek-ai/deepseek-coder-6.7b-instruct
|
deepseek-ai
| 2024-02-02T03:02:26Z | 36,906 | 374 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-29T11:01:36Z |
---
license: other
license_name: deepseek
license_link: LICENSE
---
<p align="center">
<img width="1000px" alt="DeepSeek Coder" src="https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/pictures/logo.png?raw=true">
</p>
<p align="center"><a href="https://www.deepseek.com/">[🏠 Homepage]</a> | <a href="https://coder.deepseek.com/">[🤖 Chat with DeepSeek Coder]</a> | <a href="https://discord.gg/Tc7c45Zzu5">[Discord]</a> | <a href="https://github.com/guoday/assert/blob/main/QR.png?raw=true">[WeChat (微信)]</a> </p>
<hr>
### 1. Introduction of Deepseek Coder
Deepseek Coder is composed of a series of code language models, each trained from scratch on 2T tokens, with a composition of 87% code and 13% natural language in both English and Chinese. We provide various sizes of the code model, ranging from 1B to 33B versions. Each model is pre-trained on a project-level code corpus with a window size of 16K and an extra fill-in-the-blank task, to support project-level code completion and infilling. For coding capabilities, Deepseek Coder achieves state-of-the-art performance among open-source code models on multiple programming languages and various benchmarks.
- **Massive Training Data**: Trained from scratch on 2T tokens, including 87% code and 13% linguistic data in both English and Chinese languages.
- **Highly Flexible & Scalable**: Offered in model sizes of 1.3B, 5.7B, 6.7B, and 33B, enabling users to choose the setup most suitable for their requirements.
- **Superior Model Performance**: State-of-the-art performance among publicly available code models on HumanEval, MultiPL-E, MBPP, DS-1000, and APPS benchmarks.
- **Advanced Code Completion Capabilities**: A window size of 16K and a fill-in-the-blank task, supporting project-level code completion and infilling tasks.
### 2. Model Summary
deepseek-coder-6.7b-instruct is a 6.7B parameter model initialized from deepseek-coder-6.7b-base and fine-tuned on 2B tokens of instruction data.
- **Home Page:** [DeepSeek](https://deepseek.com/)
- **Repository:** [deepseek-ai/deepseek-coder](https://github.com/deepseek-ai/deepseek-coder)
- **Chat With DeepSeek Coder:** [DeepSeek-Coder](https://coder.deepseek.com/)
### 3. How to Use
Here are some examples of how to use our model.
#### Chat Model Inference
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-6.7b-instruct", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-6.7b-instruct", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
messages=[
{ 'role': 'user', 'content': "write a quick sort algorithm in python."}
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
# tokenizer.eos_token_id is the id of <|EOT|> token
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, top_k=50, top_p=0.95, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
```
### 4. License
This code repository is licensed under the MIT License. The use of DeepSeek Coder models is subject to the Model License. DeepSeek Coder supports commercial use.
See the [LICENSE-MODEL](https://github.com/deepseek-ai/deepseek-coder/blob/main/LICENSE-MODEL) for more details.
### 5. Contact
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
|
nlee282/test
|
nlee282
| 2024-02-02T02:56:10Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:cognitivecomputations/WestLake-7B-v2-laser",
"base_model:finetune:cognitivecomputations/WestLake-7B-v2-laser",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T02:48:57Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: cognitivecomputations/WestLake-7B-v2-laser
---
# Uploaded model
- **Developed by:** nlee282
- **License:** apache-2.0
- **Finetuned from model :** cognitivecomputations/WestLake-7B-v2-laser
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Lvxy1117/amber_fine_tune_ori
|
Lvxy1117
| 2024-02-02T02:48:02Z | 47 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-01T08:59:38Z |
---
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Amber fine-tuned on the Alpaca dataset.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|