modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-21 06:31:18) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 567 distinct values) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 distinct values) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-21 06:30:37) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
yc4142/phi-1_5-lora-int8-single-stockmarket-nonCoT | yc4142 | 2024-01-04T14:37:27Z | 0 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/phi-1_5", "base_model:adapter:microsoft/phi-1_5", "region:us"] | null | 2024-01-04T06:32:16Z |
---
library_name: peft
base_model: microsoft/phi-1_5
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
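Until the card is completed, a minimal loading sketch is given below. It assumes the `microsoft/phi-1_5` base model and the standard PEFT adapter layout declared in the metadata above; the prompt is purely illustrative and not taken from the original card.
```python
# Sketch only: load the declared base model and attach this LoRA adapter with PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "microsoft/phi-1_5"
adapter_id = "yc4142/phi-1_5-lora-int8-single-stockmarket-nonCoT"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Illustrative prompt (the repository name suggests a stock-market, non-CoT use case).
inputs = tokenizer("What moved the stock market today?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```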
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
KaranChand/mistral-pe-500-mp | KaranChand | 2024-01-04T14:33:06Z | 0 | 0 | peft | ["peft", "safetensors", "region:us"] | null | 2024-01-04T14:32:56Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
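For readers who want to reproduce this setup, the listed values map onto a `transformers` `BitsAndBytesConfig` roughly as in the sketch below (an illustration of the config above, not code taken from the original training run):
```python
# Sketch only: the quantization settings listed above, expressed as a BitsAndBytesConfig.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```
Such a config would then be passed as `quantization_config=bnb_config` to `from_pretrained` when loading the base model for training.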
### Framework versions
- PEFT 0.4.0
|
florinbarbisch/fuyu8b-charts-adapters | florinbarbisch | 2024-01-04T14:16:24Z | 0 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:adept/fuyu-8b", "base_model:adapter:adept/fuyu-8b", "region:us"] | null | 2023-12-14T08:12:21Z |
---
library_name: peft
base_model: adept/fuyu-8b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
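Until the card is completed, a minimal loading sketch is shown below. It assumes the `adept/fuyu-8b` base model declared in the metadata, the standard PEFT adapter layout, and a hypothetical local chart image; it is not taken from the original card.
```python
# Sketch only: load the Fuyu-8B base model, attach this adapter, and prompt it with a chart image.
from PIL import Image
from transformers import FuyuForCausalLM, FuyuProcessor
from peft import PeftModel

base_id = "adept/fuyu-8b"
adapter_id = "florinbarbisch/fuyu8b-charts-adapters"

processor = FuyuProcessor.from_pretrained(base_id)
base_model = FuyuForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

image = Image.open("chart.png")  # hypothetical input image
inputs = processor(text="Describe this chart:", images=image, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
# The decoded string includes the prompt followed by the generated answer.
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```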
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
NewTab/AllVegasVersionsAfterMagixPurchasedItInMay2016 | NewTab | 2024-01-04T14:13:09Z | 0 | 0 | null | ["license:openrail", "region:us"] | null | 2024-01-03T23:39:42Z |
---
license: openrail
---
Ever wanted to get all the Vegas versions released after Magix purchased it in May 2016, but don't want to risk low download speeds? I've been there too, so that's why I made this. Enjoy.
|
kelaine/Taxi-v3-v1 | kelaine | 2024-01-04T14:04:59Z | 0 | 0 | null | ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2024-01-04T14:04:49Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="kelaine/Taxi-v3-v1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
yasithheshan/llama2-qlora-finetunined-4-bit-4.14k-dataset-1-epoch | yasithheshan | 2024-01-04T13:56:09Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2024-01-04T13:47:20Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
ywang760/q-learning-Taxi-v3 | ywang760 | 2024-01-04T13:53:07Z | 0 | 0 | null | ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2024-01-04T13:52:57Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-learning-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="ywang760/q-learning-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
DaRkSpyro/DarkSpyro | DaRkSpyro | 2024-01-04T13:52:32Z | 0 | 0 | flair | ["flair", "music", "en", "dataset:HuggingFaceH4/ultrachat_200k", "license:apache-2.0", "region:us"] | null | 2024-01-04T13:45:22Z |
---
license: apache-2.0
datasets:
- HuggingFaceH4/ultrachat_200k
language:
- en
metrics:
- accuracy
library_name: flair
tags:
- music
---
|
Darklord23/qlora-stablelm-zephyr-3b-4jan | Darklord23 | 2024-01-04T13:51:54Z | 0 | 0 | null | ["generated_from_trainer", "base_model:stabilityai/stablelm-zephyr-3b", "base_model:finetune:stabilityai/stablelm-zephyr-3b", "license:other", "region:us"] | null | 2024-01-04T13:19:44Z |
---
license: other
base_model: stabilityai/stablelm-zephyr-3b
tags:
- generated_from_trainer
model-index:
- name: qlora-stablelm-zephyr-3b-4jan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qlora-stablelm-zephyr-3b-4jan
This model is a fine-tuned version of [stabilityai/stablelm-zephyr-3b](https://huggingface.co/stabilityai/stablelm-zephyr-3b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.15.0
|
KnutJaegersberg/Tess-M-34B-hessian | KnutJaegersberg | 2024-01-04T13:51:14Z | 0 | 0 | null | ["text-generation", "license:other", "region:us"] | text-generation | 2024-01-03T08:58:33Z |
---
license: other
license_name: yi-license
license_link: LICENSE
pipeline_tag: text-generation
---
|
carlosjmv/Llama2-7b-qlora-ecommerce-faq | carlosjmv | 2024-01-04T13:47:19Z | 0 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:TinyPixel/Llama-2-7B-bf16-sharded", "base_model:adapter:TinyPixel/Llama-2-7B-bf16-sharded", "region:us"] | null | 2024-01-04T13:47:09Z |
---
library_name: peft
base_model: TinyPixel/Llama-2-7B-bf16-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
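Until the card is completed, a minimal loading sketch is given below. It assumes the `TinyPixel/Llama-2-7B-bf16-sharded` base declared in the metadata, a QLoRA-style 4-bit load (suggested by the repository name, not documented in the card), and an illustrative FAQ prompt.
```python
# Sketch only: load the base model in 4-bit and attach this QLoRA adapter with PEFT.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "TinyPixel/Llama-2-7B-bf16-sharded"
adapter_id = "carlosjmv/Llama2-7b-qlora-ecommerce-faq"

bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Illustrative e-commerce FAQ prompt (format is an assumption, not from the card).
prompt = "Question: What is your return policy?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```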
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-AWQ | TheBloke | 2024-01-04T13:44:38Z | 6 | 1 | transformers | ["transformers", "safetensors", "llama", "text-generation", "base_model:Weyaxi/Nous-Hermes-2-SUS-Chat-34B-Slerp", "base_model:quantized:Weyaxi/Nous-Hermes-2-SUS-Chat-34B-Slerp", "license:other", "autotrain_compatible", "text-generation-inference", "4-bit", "awq", "region:us"] | text-generation | 2024-01-04T12:36:35Z |
---
base_model: Weyaxi/Nous-Hermes-2-SUS-Chat-34B-Slerp
inference: false
license: other
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
license_name: yi-34b
model_creator: "Ya\u011F\u0131z \xC7al\u0131k"
model_name: Nous Hermes 2 SUS Chat 34B Slerp
model_type: yi
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Nous Hermes 2 SUS Chat 34B Slerp - AWQ
- Model creator: [Yağız Çalık](https://huggingface.co/Weyaxi)
- Original model: [Nous Hermes 2 SUS Chat 34B Slerp](https://huggingface.co/Weyaxi/Nous-Hermes-2-SUS-Chat-34B-Slerp)
<!-- description start -->
## Description
This repo contains AWQ model files for [Yağız Çalık's Nous Hermes 2 SUS Chat 34B Slerp](https://huggingface.co/Weyaxi/Nous-Hermes-2-SUS-Chat-34B-Slerp).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF)
* [Yağız Çalık's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Weyaxi/Nous-Hermes-2-SUS-Chat-34B-Slerp)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 19.23 GB |
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Nous-Hermes-2-SUS-Chat-34B-Slerp-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-AWQ --quantization awq --dtype auto
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Tell me about AI",
"Write a story about llamas",
"What is 291 - 150?",
"How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
# Not an f-string: the {prompt} placeholder is filled in with .format() below
prompt_template = '''{prompt}
'''
prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: ", response)
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### Transformers example code (requires Transformers 4.35.0 and later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_name_or_path = "TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
low_cpu_mem_usage=True,
device_map="cuda:0"
)
# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
# Convert prompt to tokens
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
generation_params = {
"do_sample": True,
"temperature": 0.7,
"top_p": 0.95,
"top_k": 40,
"max_new_tokens": 512,
"repetition_penalty": 1.1
}
# Generate streamed output, visible one token at a time
generation_output = model.generate(
tokens,
streamer=streamer,
**generation_params
)
# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
tokens,
**generation_params
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)
# Inference is also possible via Transformers' pipeline
from transformers import pipeline
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
**generation_params
)
pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Yağız Çalık's Nous Hermes 2 SUS Chat 34B Slerp

# Nous-Hermes-2-SUS-Chat-34B-Slerp
This is the model for Nous-Hermes-2-SUS-Chat-34B-Slerp. I used [mergekit](https://github.com/cg123/mergekit) to merge models.
# Yaml Config
```yaml
slices:
- sources:
- model: Nous-Hermes-2-Yi-34B
layer_range: [0, 60]
- model: SUS-Chat-34B
layer_range: [0, 60]
merge_method: slerp
base_model: Yi-34B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
tokenizer_source: union
dtype: bfloat16
```
|
ostapeno/2neo_trwevolseq_simnorm1_sbs0.5_sgd_full_ft_poly_router_dir_coarsegrained_retrnone_mllr-1 | ostapeno | 2024-01-04T13:43:51Z | 0 | 0 | null | ["region:us"] | null | 2024-01-03T23:08:18Z |
Number of experts present in the library: 57
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| ropes_prompt_beginning_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_prompt_beginning | lora |
| web_questions_whats_the_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/web_questions_whats_the_answer | lora |
| ropes_read_background_situation_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_read_background_situation | lora |
| social_i_qa_Generate_the_question_from_the_answer | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/social_i_qa_Generate_the_question_from_the_answer | lora |
| duorc_ParaphraseRC_answer_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_answer_question | lora |
| ropes_background_new_situation_answer | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_background_new_situation_answer | lora |
| squad_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/squad_v1_1_3_0_0 | lora |
| ropes_background_situation_middle_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_background_situation_middle | lora |
| ropes_background_situation_middle | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_background_situation_middle | lora |
| adversarial_qa_dbidaf_answer_the_following_q_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_answer_the_following_q | lora |
| wiqa_effect_with_string_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_effect_with_string_answer | lora |
| dbpedia_14_given_a_choice_of_categories__v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_choice_of_categories_ | lora |
| ropes_prompt_beginning | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_prompt_beginning | lora |
| wiki_hop_original_generate_object_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/wiki_hop_original_generate_object | lora |
| social_i_qa_Check_if_a_random_answer_is_valid_or_not_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/social_i_qa_Check_if_a_random_answer_is_valid_or_not | lora |
| wiki_hop_original_generate_subject | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/wiki_hop_original_generate_subject | lora |
| quoref_Find_Answer_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quoref_Find_Answer | lora |
| duorc_ParaphraseRC_title_generation_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_title_generation | lora |
| dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to | lora |
| quarel_heres_a_story_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/quarel_heres_a_story | lora |
| duorc_SelfRC_answer_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_SelfRC_answer_question | lora |
| yelp_polarity_reviews_0_2_0_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/yelp_polarity_reviews_0_2_0 | lora |
| adversarial_qa_dbidaf_generate_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_generate_question | lora |
| wiki_hop_original_generate_object | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/wiki_hop_original_generate_object | lora |
| cos_e_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/cos_e_v1_11_question_description_option_text | lora |
| ropes_new_situation_background_answer | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_new_situation_background_answer | lora |
| ropes_background_new_situation_answer_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_background_new_situation_answer | lora |
| ropes_plain_bottom_hint_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_plain_bottom_hint | lora |
| quartz_read_passage_below_choose_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quartz_read_passage_below_choose | lora |
| ai2_arc_ARC_Challenge_1_0_0_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/ai2_arc_ARC_Challenge_1_0_0 | lora |
| social_i_qa_Generate_the_question_from_the_answer_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/social_i_qa_Generate_the_question_from_the_answer | lora |
| quarel_heres_a_story | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/quarel_heres_a_story | lora |
| dream_baseline_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dream_baseline | lora |
| wiki_hop_original_choose_best_object_interrogative_2_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_interrogative_2 | lora |
| ropes_new_situation_background_answer_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_new_situation_background_answer | lora |
| wiqa_what_might_be_the_first_step_of_the_process_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_what_might_be_the_first_step_of_the_process | lora |
| ropes_read_background_situation | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_read_background_situation | lora |
| wiki_hop_original_generate_subject_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/wiki_hop_original_generate_subject | lora |
| ropes_plain_bottom_hint | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_plain_bottom_hint | lora |
| wiqa_what_is_the_final_step_of_the_following_process_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/wiqa_what_is_the_final_step_of_the_following_process | lora |
| wiqa_what_is_the_final_step_of_the_following_process | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/wiqa_what_is_the_final_step_of_the_following_process | lora |
| duorc_SelfRC_generate_question_by_answer_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/duorc_SelfRC_generate_question_by_answer | lora |
| duorc_SelfRC_generate_question_by_answer | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/duorc_SelfRC_generate_question_by_answer | lora |
| super_glue_cb_1_0_2_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/super_glue_cb_1_0_2 | lora |
| super_glue_cb_1_0_2 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/super_glue_cb_1_0_2 | lora |
| sciq_Multiple_Choice_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/sciq_Multiple_Choice | lora |
| sciq_Multiple_Choice | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/sciq_Multiple_Choice | lora |
| ultrachat_25_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ultrachat_25 | lora |
| ultrachat_25 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ultrachat_25 | lora |
| niv2_explanation_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/niv2_explanation | lora |
| niv2_explanation | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/niv2_explanation | lora |
| aeslc_1_0_0_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/aeslc_1_0_0 | lora |
| aeslc_1_0_0 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/aeslc_1_0_0 | lora |
| high_school_psychology_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/high_school_psychology | lora |
| high_school_psychology | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/high_school_psychology | lora |
| niv2_dialogue_act_recognition_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/niv2_dialogue_act_recognition | lora |
| niv2_dialogue_act_recognition | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/niv2_dialogue_act_recognition | lora |
Last updated on: 2024-01-04 13:43:44+00:00
|
ywang760/q-FrozenLake-v1-4x4-noSlippery | ywang760 | 2024-01-04T13:42:23Z | 0 | 0 | null | ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2024-01-04T13:42:13Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="ywang760/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
msaavedra1234/tiny_t | msaavedra1234 | 2024-01-04T13:35:52Z | 7 | 0 | transformers | ["transformers", "pytorch", "gguf", "llama", "text-generation", "generated_from_trainer", "conversational", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:quantized:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-04T13:33:54Z |
---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- generated_from_trainer
model-index:
- name: tinyllama-out
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.3.0`
```yaml
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
is_llama_derived_model: true
eval_sample_packing: False # little data
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: data.json # or json
ds_type: json # see other options below
type: completion
dataset_prepared_path:
val_set_size: 0.05
# output_dir: ./lora-out
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
# adapter: lora
# lora_model_dir:
# lora_r: 32
# lora_alpha: 16
# lora_dropout: 0.05
# lora_target_linear: true
# lora_fan_in_fan_out:
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
output_dir: ./tinyllama-out
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 8 #2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false #TODO: change to true
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
save_strategy: "no"
warmup_steps: 10
evals_per_epoch: 4
# saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
# tinyllama-out
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8806
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9894 | 0.13 | 1 | 1.5790 |
| 1.915 | 0.26 | 2 | 1.4849 |
| 1.642 | 0.52 | 4 | 1.4032 |
| 1.5396 | 0.77 | 6 | 1.4059 |
| 1.3746 | 1.03 | 8 | 1.4101 |
| 0.9355 | 1.23 | 10 | 1.5147 |
| 0.9266 | 1.48 | 12 | 1.5291 |
| 0.8006 | 1.74 | 14 | 1.4724 |
| 0.7664 | 2.0 | 16 | 1.4965 |
| 0.4813 | 2.16 | 18 | 1.5715 |
| 0.4193 | 2.42 | 20 | 1.5436 |
| 0.364 | 2.68 | 22 | 1.6040 |
| 0.3592 | 2.94 | 24 | 1.5823 |
| 0.1884 | 3.13 | 26 | 1.6850 |
| 0.159 | 3.39 | 28 | 1.8316 |
| 0.1641 | 3.65 | 30 | 1.7286 |
| 0.1512 | 3.9 | 32 | 1.7029 |
| 0.1563 | 4.06 | 34 | 1.7033 |
| 0.0696 | 4.32 | 36 | 1.7482 |
| 0.0643 | 4.58 | 38 | 1.8069 |
| 0.0662 | 4.84 | 40 | 1.8410 |
| 0.0709 | 5.1 | 42 | 1.8529 |
| 0.0344 | 5.26 | 44 | 1.8626 |
| 0.0468 | 5.52 | 46 | 1.8716 |
| 0.0328 | 5.77 | 48 | 1.8761 |
| 0.0353 | 6.03 | 50 | 1.8789 |
| 0.0375 | 6.23 | 52 | 1.8803 |
| 0.0345 | 6.48 | 54 | 1.8802 |
| 0.0346 | 6.74 | 56 | 1.8806 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|
kelaine/q-FrozenLake-v1-8x8-noSlippery | kelaine | 2024-01-04T13:32:52Z | 0 | 0 | null | ["FrozenLake-v1-8x8-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2024-01-04T13:32:43Z |
---
tags:
- FrozenLake-v1-8x8-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8-no_slippery
type: FrozenLake-v1-8x8-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="kelaine/q-FrozenLake-v1-8x8-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
navkar98/t5_recommendation_sports_equipment_english | navkar98 | 2024-01-04T13:30:11Z | 8 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-large", "base_model:finetune:google-t5/t5-large", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2024-01-04T10:29:04Z |
---
license: apache-2.0
base_model: t5-large
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5_recommendation_sports_equipment_english
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_recommendation_sports_equipment_english
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4968
- Rouge1: 69.8413
- Rouge2: 61.9048
- Rougel: 69.8413
- Rougelsum: 70.2381
- Gen Len: 4.2381
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 0.96 | 6 | 7.0208 | 13.5224 | 1.8519 | 13.7870 | 13.5032 | 18.7143 |
| No log | 1.92 | 12 | 1.8113 | 20.4762 | 14.2857 | 20.4762 | 20.9524 | 3.6667 |
| No log | 2.88 | 18 | 0.7760 | 23.8095 | 4.7619 | 23.3333 | 23.3333 | 4.1429 |
| No log | 4.0 | 25 | 0.5784 | 38.4127 | 23.8095 | 38.8889 | 39.9206 | 4.0476 |
| No log | 4.96 | 31 | 0.5181 | 54.1270 | 42.8571 | 54.8413 | 54.6825 | 3.9524 |
| No log | 5.92 | 37 | 0.4786 | 62.6984 | 52.3810 | 62.6984 | 62.6984 | 3.9048 |
| No log | 6.88 | 43 | 0.4605 | 64.2857 | 52.3810 | 64.6032 | 64.6032 | 4.2857 |
| No log | 8.0 | 50 | 0.6243 | 67.4603 | 57.1429 | 67.4603 | 67.4603 | 4.3810 |
| No log | 8.96 | 56 | 0.5484 | 64.2857 | 57.1429 | 65.0794 | 65.0794 | 4.1429 |
| No log | 9.6 | 60 | 0.4968 | 69.8413 | 61.9048 | 69.8413 | 70.2381 | 4.2381 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
imalexianne/Roberta-Movie_Review | imalexianne | 2024-01-04T13:27:37Z | 7 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-12-31T15:39:34Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Roberta-Movie_Review
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Roberta-Movie_Review
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2711
- Accuracy: 0.9396
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2346 | 1.0 | 623 | 0.1814 | 0.9370 |
| 0.1529 | 2.0 | 1246 | 0.2790 | 0.9386 |
| 0.0968 | 3.0 | 1869 | 0.2711 | 0.9396 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
ostapeno/2neo_trwevolseq_simnorm1_sbs0.5_sgd_full_ft_poly_router_dir_finegrained_retrnone_mllr-1 | ostapeno | 2024-01-04T13:27:35Z | 0 | 0 | null | ["region:us"] | null | 2024-01-03T23:08:15Z |
Number of experts present in the library: 52
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| web_questions_whats_the_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/web_questions_whats_the_answer | lora |
| duorc_ParaphraseRC_answer_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_answer_question | lora |
| super_glue_cb_1_0_2_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/super_glue_cb_1_0_2 | lora |
| wiqa_what_is_the_final_step_of_the_following_process | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/wiqa_what_is_the_final_step_of_the_following_process | lora |
| squad_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/squad_v1_1_3_0_0 | lora |
| adversarial_qa_dbidaf_answer_the_following_q_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_answer_the_following_q | lora |
| wiqa_effect_with_string_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_effect_with_string_answer | lora |
| dbpedia_14_given_a_choice_of_categories__v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_choice_of_categories_ | lora |
| social_i_qa_Check_if_a_random_answer_is_valid_or_not_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/social_i_qa_Check_if_a_random_answer_is_valid_or_not | lora |
| quoref_Find_Answer_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quoref_Find_Answer | lora |
| niv2_explanation | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/niv2_explanation | lora |
| niv2_explanation_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/niv2_explanation | lora |
| sciq_Multiple_Choice | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/sciq_Multiple_Choice | lora |
| duorc_ParaphraseRC_title_generation_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_title_generation | lora |
| dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to | lora |
| duorc_SelfRC_answer_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_SelfRC_answer_question | lora |
| yelp_polarity_reviews_0_2_0_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/yelp_polarity_reviews_0_2_0 | lora |
| adversarial_qa_dbidaf_generate_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_generate_question | lora |
| cos_e_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/cos_e_v1_11_question_description_option_text | lora |
| ultrachat_25 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ultrachat_25 | lora |
| duorc_SelfRC_generate_question_by_answer_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/duorc_SelfRC_generate_question_by_answer | lora |
| quartz_read_passage_below_choose_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quartz_read_passage_below_choose | lora |
| ultrachat_25_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ultrachat_25 | lora |
| ai2_arc_ARC_Challenge_1_0_0_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/ai2_arc_ARC_Challenge_1_0_0 | lora |
| dream_baseline_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dream_baseline | lora |
| wiki_hop_original_choose_best_object_interrogative_2_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_interrogative_2 | lora |
| super_glue_cb_1_0_2 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/super_glue_cb_1_0_2 | lora |
| duorc_SelfRC_generate_question_by_answer | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/duorc_SelfRC_generate_question_by_answer | lora |
| wiqa_what_might_be_the_first_step_of_the_process_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_what_might_be_the_first_step_of_the_process | lora |
| aeslc_1_0_0_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/aeslc_1_0_0 | lora |
| wiqa_what_is_the_final_step_of_the_following_process_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/wiqa_what_is_the_final_step_of_the_following_process | lora |
| sciq_Multiple_Choice_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/sciq_Multiple_Choice | lora |
| ropes_background_new_situation_answer_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_background_new_situation_answer | lora |
| ropes_background_new_situation_answer | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_background_new_situation_answer | lora |
| wiki_hop_original_generate_object_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/wiki_hop_original_generate_object | lora |
| wiki_hop_original_generate_object | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/wiki_hop_original_generate_object | lora |
| ropes_new_situation_background_answer_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_new_situation_background_answer | lora |
| ropes_new_situation_background_answer | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_new_situation_background_answer | lora |
| ropes_prompt_beginning_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_prompt_beginning | lora |
| ropes_prompt_beginning | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_prompt_beginning | lora |
| ropes_read_background_situation_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_read_background_situation | lora |
| ropes_read_background_situation | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_read_background_situation | lora |
| ropes_plain_bottom_hint_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_plain_bottom_hint | lora |
| ropes_plain_bottom_hint | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_plain_bottom_hint | lora |
| quarel_heres_a_story_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/quarel_heres_a_story | lora |
| quarel_heres_a_story | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/quarel_heres_a_story | lora |
| wiki_hop_original_generate_subject_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/wiki_hop_original_generate_subject | lora |
| wiki_hop_original_generate_subject | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/wiki_hop_original_generate_subject | lora |
| social_i_qa_Generate_the_question_from_the_answer_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/social_i_qa_Generate_the_question_from_the_answer | lora |
| social_i_qa_Generate_the_question_from_the_answer | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/social_i_qa_Generate_the_question_from_the_answer | lora |
| ropes_background_situation_middle_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_background_situation_middle | lora |
| ropes_background_situation_middle | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_background_situation_middle | lora |
Last updated on: 2024-01-04 13:27:30+00:00
|
AAOBA/ConvNeXtV2-IllustrationScorer
|
AAOBA
| 2024-01-04T13:19:24Z | 0 | 11 | null |
[
"license:mit",
"region:us"
] | null | 2023-12-20T18:59:59Z |
---
license: mit
---
# ConvNeXtV2-IllustrationScorer
**Q0: What does this model do?**
A: 😎 This model scores your anime-style illustrations based on 4 metrics. 😎
**Q1: What do the 4 metrics mean?**
A: 🎈 The 4 metrics measure the "Liking Rate", "Collection Rate", "AI-generated Probability", and "View Number / Uploaded Interval (i.e. Popularity)". 🎈
**Q2: Why doesn't the "Rate" look like an actual rate?**
A: ✨ The author did not train this model by regressing these "Rates" directly. Instead, the values are learned in a contrastive manner (i.e., by ranking the top-k images for each "Rate"). The author observed that almost no useful gradient flows when a regression loss is backpropagated on these "Rates": the model simply minimised the Absolute Error Loss by "remembering the average value", which is not the intended behaviour. ✨
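The exact objective is not published, but a generic pairwise ranking loss illustrates the contrast with plain regression; all tensor names below are hypothetical.
```python
import torch
import torch.nn as nn

# Hypothetical scores from the scorer for pairs of images where the first
# image outranks the second on one metric (e.g. liking rate).
score_high = torch.randn(8, requires_grad=True)  # stand-in for model(image_a)
score_low = torch.randn(8, requires_grad=True)   # stand-in for model(image_b)
target = torch.ones(8)  # 1 means "the first input should score higher"

# A margin ranking loss yields a gradient whenever a pair is ordered
# incorrectly, unlike an absolute-error regression that can collapse to the mean.
loss = nn.MarginRankingLoss(margin=0.1)(score_high, score_low, target)
loss.backward()
```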
**Q3: What are the training data?**
A: 🤐 All training data (~55K samples) were obtained from PIXIV. 🤐
**Q4: Why was this model trained?**
A: 👾 The author initially hoped to finetune the [Anything-V5](https://civitai.com/models/9409?modelVersionId=90854) model with RLHF based on [D3PO (arxiv.2311.13231)](https://github.com/yk7333/d3po), and this model was designed to serve as the multi-objective reward model. And for fun :) 👾

## Acknowledgement
😨 Thanks to [SUSTech CCSE](https://hpc.sustech.edu.cn/), this model was trained on a single A100-80G. 😨
🤗 Any suggestion is welcome :) 🤗
|
nnny/onnx-mobile-sam
|
nnny
| 2024-01-04T13:12:53Z | 0 | 4 |
transformers.js
|
[
"transformers.js",
"onnx",
"image-segmentation",
"license:mit",
"region:us"
] |
image-segmentation
| 2024-01-04T12:54:31Z |
---
license: mit
library_name: transformers.js
pipeline_tag: image-segmentation
---
|
Divyanshu04/clip-roberta-finetuned
|
Divyanshu04
| 2024-01-04T13:08:49Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-text-dual-encoder",
"feature-extraction",
"generated_from_trainer",
"dataset:coco_dataset_script",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-01-04T12:10:07Z |
---
tags:
- generated_from_trainer
datasets:
- coco_dataset_script
model-index:
- name: clip-roberta-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clip-roberta-finetuned
This model was trained from scratch on the coco_dataset_script dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
Kooten/FlatOrcamaid-13b-v0.2-4bpw-exl2
|
Kooten
| 2024-01-04T13:06:39Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-21T00:22:05Z |
---
license: cc-by-nc-4.0
---
# FlatOrcamaid-13b-v0.2 4BPW
Exllama quants of [NeverSleep/FlatOrcamaid-13b-v0.2](https://huggingface.co/NeverSleep/FlatOrcamaid-13b-v0.2)
Other Quants
- MLX: [8bit](https://huggingface.co/Kooten/FlatOrcamaid-13b-v0.2-8bit-mlx), [4bit](https://huggingface.co/Kooten/FlatOrcamaid-13b-v0.2-4bit-mlx)
- Exllama: [8bpw](https://huggingface.co/Kooten/FlatOrcamaid-13b-v0.2-8bpw-exl2), [6bpw](https://huggingface.co/Kooten/FlatOrcamaid-13b-v0.2-6bpw-exl2), [5bpw](https://huggingface.co/Kooten/FlatOrcamaid-13b-v0.2-5bpw-exl2), [4bpw](https://huggingface.co/Kooten/FlatOrcamaid-13b-v0.2-4bpw-exl2)
## Prompt template: Custom format, or Alpaca
### Custom format:
SillyTavern config files: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json).
### Alpaca:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
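A minimal Python sketch of filling in the Alpaca template above (the instruction text is only an example):
```python
# Hypothetical helper; mirrors the Alpaca template shown above.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction:\n{prompt}\n"
    "### Response:\n"
)
print(ALPACA_TEMPLATE.format(prompt="Write a short greeting."))
```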
### Contact
Kooten on discord.
|
chrisgg1/hubert-base-ls960-finetuned-ks-verbtest2
|
chrisgg1
| 2024-01-04T13:05:50Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/hubert-base-ls960",
"base_model:finetune:facebook/hubert-base-ls960",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-01-04T12:33:45Z |
---
license: apache-2.0
base_model: facebook/hubert-base-ls960
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: hubert-base-ls960-finetuned-ks-verbtest2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-base-ls960-finetuned-ks-verbtest2
This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0180
- Accuracy: 0.9994
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4028 | 0.99 | 50 | 0.2129 | 0.9919 |
| 0.1071 | 2.0 | 101 | 0.0594 | 0.9944 |
| 0.0627 | 2.99 | 151 | 0.0248 | 0.9988 |
| 0.0423 | 4.0 | 202 | 0.0180 | 0.9994 |
| 0.0315 | 4.95 | 250 | 0.0165 | 0.9994 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Bachstelze/instructionRoberta-base
|
Bachstelze
| 2024-01-04T13:05:40Z | 89 | 2 |
transformers
|
[
"transformers",
"safetensors",
"encoder-decoder",
"text2text-generation",
"en",
"dataset:Open-Orca/SlimOrca-Dedup",
"dataset:GAIR/lima",
"dataset:nomic-ai/gpt4all-j-prompt-generations",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:ZenMoore/RoleBench",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196",
"dataset:c-s-ale/alpaca-gpt4-data",
"dataset:THUDM/AgentInstruct",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-27T15:29:40Z |
---
language:
- en
tags:
- text2text-generation
widget:
- text: "Please answer to the following question. Who is going to be the next Ballon d'or?"
example_title: "Question Answering"
- text: "Q: Can Geoffrey Hinton have a conversation with George Washington? Give the rationale before answering."
example_title: "Logical reasoning"
- text: "Please answer the following question. What is the boiling point of Nitrogen?"
example_title: "Scientific knowledge"
- text: "Answer the following yes/no question. Can you write a whole Haiku in a single tweet?"
example_title: "Yes/no question"
- text: "Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?"
example_title: "Reasoning task"
- text: "Q: ( False or not False or False ) is? A: Let's think step by step"
example_title: "Boolean Expressions"
- text: "The square root of x is the cube root of y. What is y to the power of 2, if x = 4?"
example_title: "Math reasoning"
- text: "Premise: At my age you will probably have learned one lesson. Hypothesis: It's not certain how many lessons you'll learn by your thirties. Does the premise entail the hypothesis?"
example_title: "Premise and hypothesis"
datasets:
- Open-Orca/SlimOrca-Dedup
- GAIR/lima
- nomic-ai/gpt4all-j-prompt-generations
- HuggingFaceH4/ultrachat_200k
- ZenMoore/RoleBench
- WizardLM/WizardLM_evol_instruct_V2_196
- c-s-ale/alpaca-gpt4-data
- THUDM/AgentInstruct
license: mit
---
# Model Card of instructionRoberta-base for Bertology

A minimalistic instruction model built on an already well-analysed and pretrained encoder such as RoBERTa.
This lets us study [Bertology](https://aclanthology.org/2020.tacl-1.54.pdf) with instruction-tuned models, [look at the attention](https://colab.research.google.com/drive/1mNP7c0RzABnoUgE6isq8FTp-NuYNtrcH?usp=sharing) and investigate [what happens to BERT embeddings during fine-tuning](https://aclanthology.org/2020.blackboxnlp-1.4.pdf).
The training code is released at the [instructionBERT repository](https://gitlab.com/Bachstelze/instructionbert).
We used the Huggingface API for [warm-starting](https://huggingface.co/blog/warm-starting-encoder-decoder) [BertGeneration](https://huggingface.co/docs/transformers/model_doc/bert-generation) with [Encoder-Decoder-Models](https://huggingface.co/docs/transformers/v4.35.2/en/model_doc/encoder-decoder) for this purpose.
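A minimal sketch of that warm-starting step, assuming `roberta-base` is used for both the encoder and the decoder (the full training setup lives in the linked repository):
```python
from transformers import AutoTokenizer, EncoderDecoderModel

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
# Warm-start a seq2seq model from two pretrained RoBERTa checkpoints.
model = EncoderDecoderModel.from_encoder_decoder_pretrained("roberta-base", "roberta-base")

# Generation-relevant special tokens must be set explicitly.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.eos_token_id = tokenizer.sep_token_id
model.config.pad_token_id = tokenizer.pad_token_id
```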
## Run the model with a longer output
```python
from transformers import AutoTokenizer, EncoderDecoderModel
# load the fine-tuned seq2seq model and corresponding tokenizer
model_name = "Bachstelze/instructionRoberta-base"
model = EncoderDecoderModel.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
input = "Write a poem about love, peace and pancake."
input_ids = tokenizer(input, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output_ids[0]))
```
## Training parameters
- base model: "roberta-base"
- trained for 1 epoch
- batch size of 16
- 20000 warm-up steps
- learning rate of 0.0001
## Purpose of instructionRoberta-base
InstructionBERT is intended for research purposes. The model-generated text should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.
|
OvrK12/t5Test
|
OvrK12
| 2024-01-04T13:04:06Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/flan-t5-large",
"base_model:adapter:google/flan-t5-large",
"region:us"
] | null | 2024-01-04T13:04:00Z |
---
library_name: peft
base_model: google/flan-t5-large
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
NotoriousH2/test2_solar_10.7b_v1.0
|
NotoriousH2
| 2024-01-04T12:59:13Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:upstage/SOLAR-10.7B-v1.0",
"base_model:adapter:upstage/SOLAR-10.7B-v1.0",
"region:us"
] | null | 2024-01-04T12:58:40Z |
---
library_name: peft
base_model: upstage/SOLAR-10.7B-v1.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF
|
TheBloke
| 2024-01-04T12:56:18Z | 251 | 5 |
transformers
|
[
"transformers",
"gguf",
"yi",
"base_model:Weyaxi/Nous-Hermes-2-SUS-Chat-34B-Slerp",
"base_model:quantized:Weyaxi/Nous-Hermes-2-SUS-Chat-34B-Slerp",
"license:other",
"region:us"
] | null | 2024-01-04T12:36:35Z |
---
base_model: Weyaxi/Nous-Hermes-2-SUS-Chat-34B-Slerp
inference: false
license: other
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
license_name: yi-34b
model_creator: "Ya\u011F\u0131z \xC7al\u0131k"
model_name: Nous Hermes 2 SUS Chat 34B Slerp
model_type: yi
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Nous Hermes 2 SUS Chat 34B Slerp - GGUF
- Model creator: [Yağız Çalık](https://huggingface.co/Weyaxi)
- Original model: [Nous Hermes 2 SUS Chat 34B Slerp](https://huggingface.co/Weyaxi/Nous-Hermes-2-SUS-Chat-34B-Slerp)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Yağız Çalık's Nous Hermes 2 SUS Chat 34B Slerp](https://huggingface.co/Weyaxi/Nous-Hermes-2-SUS-Chat-34B-Slerp).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF)
* [Yağız Çalık's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Weyaxi/Nous-Hermes-2-SUS-Chat-34B-Slerp)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
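As a rough illustration of where these figures come from, the 4.5 bpw quoted for GGML_TYPE_Q4_K can be reproduced with a back-of-envelope count (assuming one fp16 scale and one fp16 min per super-block):
```python
# 8 blocks x 32 weights per super-block, 4 bits per weight,
# 6-bit scale + 6-bit min per block, fp16 scale + fp16 min per super-block.
weights = 8 * 32
bits = weights * 4 + 8 * (6 + 6) + 2 * 16
print(bits / weights)  # 4.5
```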
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [nous-hermes-2-sus-chat-34b-slerp.Q2_K.gguf](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF/blob/main/nous-hermes-2-sus-chat-34b-slerp.Q2_K.gguf) | Q2_K | 2 | 14.56 GB| 17.06 GB | smallest, significant quality loss - not recommended for most purposes |
| [nous-hermes-2-sus-chat-34b-slerp.Q3_K_S.gguf](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF/blob/main/nous-hermes-2-sus-chat-34b-slerp.Q3_K_S.gguf) | Q3_K_S | 3 | 14.96 GB| 17.46 GB | very small, high quality loss |
| [nous-hermes-2-sus-chat-34b-slerp.Q3_K_M.gguf](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF/blob/main/nous-hermes-2-sus-chat-34b-slerp.Q3_K_M.gguf) | Q3_K_M | 3 | 16.64 GB| 19.14 GB | very small, high quality loss |
| [nous-hermes-2-sus-chat-34b-slerp.Q3_K_L.gguf](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF/blob/main/nous-hermes-2-sus-chat-34b-slerp.Q3_K_L.gguf) | Q3_K_L | 3 | 18.14 GB| 20.64 GB | small, substantial quality loss |
| [nous-hermes-2-sus-chat-34b-slerp.Q4_0.gguf](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF/blob/main/nous-hermes-2-sus-chat-34b-slerp.Q4_0.gguf) | Q4_0 | 4 | 19.47 GB| 21.97 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [nous-hermes-2-sus-chat-34b-slerp.Q4_K_S.gguf](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF/blob/main/nous-hermes-2-sus-chat-34b-slerp.Q4_K_S.gguf) | Q4_K_S | 4 | 19.55 GB| 22.05 GB | small, greater quality loss |
| [nous-hermes-2-sus-chat-34b-slerp.Q4_K_M.gguf](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF/blob/main/nous-hermes-2-sus-chat-34b-slerp.Q4_K_M.gguf) | Q4_K_M | 4 | 20.66 GB| 23.16 GB | medium, balanced quality - recommended |
| [nous-hermes-2-sus-chat-34b-slerp.Q5_0.gguf](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF/blob/main/nous-hermes-2-sus-chat-34b-slerp.Q5_0.gguf) | Q5_0 | 5 | 23.71 GB| 26.21 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [nous-hermes-2-sus-chat-34b-slerp.Q5_K_S.gguf](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF/blob/main/nous-hermes-2-sus-chat-34b-slerp.Q5_K_S.gguf) | Q5_K_S | 5 | 23.71 GB| 26.21 GB | large, low quality loss - recommended |
| [nous-hermes-2-sus-chat-34b-slerp.Q5_K_M.gguf](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF/blob/main/nous-hermes-2-sus-chat-34b-slerp.Q5_K_M.gguf) | Q5_K_M | 5 | 24.32 GB| 26.82 GB | large, very low quality loss - recommended |
| [nous-hermes-2-sus-chat-34b-slerp.Q6_K.gguf](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF/blob/main/nous-hermes-2-sus-chat-34b-slerp.Q6_K.gguf) | Q6_K | 6 | 28.22 GB| 30.72 GB | very large, extremely low quality loss |
| [nous-hermes-2-sus-chat-34b-slerp.Q8_0.gguf](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF/blob/main/nous-hermes-2-sus-chat-34b-slerp.Q8_0.gguf) | Q8_0 | 8 | 36.54 GB| 39.04 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF and below it, a specific filename to download, such as: nous-hermes-2-sus-chat-34b-slerp.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF nous-hermes-2-sus-chat-34b-slerp.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF nous-hermes-2-sus-chat-34b-slerp.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m nous-hermes-2-sus-chat-34b-slerp.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./nous-hermes-2-sus-chat-34b-slerp.Q4_K_M.gguf", # Download the model file first
n_ctx=4096, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"{prompt}", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./nous-hermes-2-sus-chat-34b-slerp.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
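A minimal LangChain sketch along the lines of the llama.cpp guide above; the file path and generation settings are placeholders.
```python
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./nous-hermes-2-sus-chat-34b-slerp.Q4_K_M.gguf",  # download the file first
    n_gpu_layers=35,   # set to 0 if no GPU acceleration is available
    n_ctx=4096,
    temperature=0.7,
)
print(llm("Write a story about llamas."))
```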
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Yağız Çalık's Nous Hermes 2 SUS Chat 34B Slerp

# Nous-Hermes-2-SUS-Chat-34B-Slerp
This is the model for Nous-Hermes-2-SUS-Chat-34B-Slerp. I used [mergekit](https://github.com/cg123/mergekit) to merge models.
# Yaml Config
```yaml
slices:
- sources:
- model: Nous-Hermes-2-Yi-34B
layer_range: [0, 60]
- model: SUS-Chat-34B
layer_range: [0, 60]
merge_method: slerp
base_model: Yi-34B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
tokenizer_source: union
dtype: bfloat16
```
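For readers unfamiliar with the merge method, the `t` values in the config above control a spherical linear interpolation between corresponding tensors of the two models. A minimal sketch of slerp follows; mergekit's actual implementation handles normalisation and edge cases more carefully.
```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    a = v0 / (np.linalg.norm(v0) + eps)
    b = v1 / (np.linalg.norm(v1) + eps)
    omega = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    if np.sin(omega) < eps:                 # degenerate case: fall back to lerp
        return (1.0 - t) * v0 + t * v1
    return (np.sin((1.0 - t) * omega) * v0 + np.sin(t * omega) * v1) / np.sin(omega)
```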
<!-- original-model-card end -->
|
hkivancoral/smids_10x_beit_large_sgd_0001_fold5
|
hkivancoral
| 2024-01-04T12:52:44Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-large-patch16-224",
"base_model:finetune:microsoft/beit-large-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-04T08:44:12Z |
---
license: apache-2.0
base_model: microsoft/beit-large-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_10x_beit_large_sgd_0001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_10x_beit_large_sgd_0001_fold5
This model is a fine-tuned version of [microsoft/beit-large-patch16-224](https://huggingface.co/microsoft/beit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3210
- Accuracy: 0.8733
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.9567 | 1.0 | 750 | 1.0187 | 0.4617 |
| 0.813 | 2.0 | 1500 | 0.8588 | 0.6033 |
| 0.7071 | 3.0 | 2250 | 0.7412 | 0.6717 |
| 0.6056 | 4.0 | 3000 | 0.6548 | 0.7317 |
| 0.553 | 5.0 | 3750 | 0.5916 | 0.7767 |
| 0.5415 | 6.0 | 4500 | 0.5456 | 0.7983 |
| 0.4714 | 7.0 | 5250 | 0.5118 | 0.8083 |
| 0.4919 | 8.0 | 6000 | 0.4844 | 0.8133 |
| 0.4714 | 9.0 | 6750 | 0.4633 | 0.8167 |
| 0.408 | 10.0 | 7500 | 0.4458 | 0.8267 |
| 0.416 | 11.0 | 8250 | 0.4326 | 0.8317 |
| 0.4057 | 12.0 | 9000 | 0.4197 | 0.84 |
| 0.4411 | 13.0 | 9750 | 0.4091 | 0.8383 |
| 0.3787 | 14.0 | 10500 | 0.3999 | 0.84 |
| 0.4112 | 15.0 | 11250 | 0.3917 | 0.8433 |
| 0.3272 | 16.0 | 12000 | 0.3857 | 0.8433 |
| 0.3453 | 17.0 | 12750 | 0.3795 | 0.8467 |
| 0.2978 | 18.0 | 13500 | 0.3732 | 0.8467 |
| 0.3695 | 19.0 | 14250 | 0.3692 | 0.8533 |
| 0.3546 | 20.0 | 15000 | 0.3643 | 0.855 |
| 0.3274 | 21.0 | 15750 | 0.3603 | 0.8583 |
| 0.3708 | 22.0 | 16500 | 0.3566 | 0.8583 |
| 0.3177 | 23.0 | 17250 | 0.3530 | 0.8617 |
| 0.3259 | 24.0 | 18000 | 0.3501 | 0.865 |
| 0.3343 | 25.0 | 18750 | 0.3473 | 0.8683 |
| 0.3365 | 26.0 | 19500 | 0.3445 | 0.865 |
| 0.2524 | 27.0 | 20250 | 0.3419 | 0.865 |
| 0.3298 | 28.0 | 21000 | 0.3396 | 0.8667 |
| 0.3375 | 29.0 | 21750 | 0.3374 | 0.8667 |
| 0.3203 | 30.0 | 22500 | 0.3355 | 0.8683 |
| 0.2843 | 31.0 | 23250 | 0.3334 | 0.8683 |
| 0.3065 | 32.0 | 24000 | 0.3325 | 0.8667 |
| 0.3385 | 33.0 | 24750 | 0.3310 | 0.8717 |
| 0.2656 | 34.0 | 25500 | 0.3296 | 0.8717 |
| 0.3103 | 35.0 | 26250 | 0.3282 | 0.8733 |
| 0.3336 | 36.0 | 27000 | 0.3274 | 0.8717 |
| 0.2743 | 37.0 | 27750 | 0.3265 | 0.8733 |
| 0.3245 | 38.0 | 28500 | 0.3255 | 0.8717 |
| 0.321 | 39.0 | 29250 | 0.3249 | 0.8733 |
| 0.2652 | 40.0 | 30000 | 0.3240 | 0.8733 |
| 0.2925 | 41.0 | 30750 | 0.3236 | 0.875 |
| 0.3072 | 42.0 | 31500 | 0.3229 | 0.875 |
| 0.3317 | 43.0 | 32250 | 0.3226 | 0.875 |
| 0.2932 | 44.0 | 33000 | 0.3221 | 0.875 |
| 0.3178 | 45.0 | 33750 | 0.3218 | 0.8733 |
| 0.2606 | 46.0 | 34500 | 0.3214 | 0.875 |
| 0.3688 | 47.0 | 35250 | 0.3212 | 0.875 |
| 0.2811 | 48.0 | 36000 | 0.3211 | 0.8733 |
| 0.3003 | 49.0 | 36750 | 0.3211 | 0.8733 |
| 0.2418 | 50.0 | 37500 | 0.3210 | 0.8733 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
mehta77/dolly-lora_20240104
|
mehta77
| 2024-01-04T12:50:59Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:EleutherAI/gpt-j-6b",
"base_model:adapter:EleutherAI/gpt-j-6b",
"region:us"
] | null | 2024-01-04T12:50:48Z |
---
library_name: peft
base_model: EleutherAI/gpt-j-6B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
NbAiLab/nb-sau-7b-8k-step100k
|
NbAiLab
| 2024-01-04T12:46:01Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"no",
"nn",
"nb",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-17T21:45:17Z |
---
license: llama2
base_model: meta-llama/Llama-2-7b-hf
language:
- 'no'
- nn
- nb
---
|
ThePradip/tinyllama-fin
|
ThePradip
| 2024-01-04T12:34:19Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2024-01-04T12:28:01Z |
---
library_name: peft
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
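No official snippet is provided yet, so the following is a minimal sketch that assumes this repo contains a standard PEFT LoRA adapter for TinyLlama/TinyLlama-1.1B-Chat-v1.0 (the base model listed in the metadata); the prompt is purely illustrative.

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base model and applies the adapter weights from this repo in one call
model = AutoPeftModelForCausalLM.from_pretrained("ThePradip/tinyllama-fin")
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# Illustrative prompt only; the intended prompt format is not documented in this card
inputs = tokenizer("What does a higher interest rate mean for borrowers?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```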
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
armhebb/sample_lora_train
|
armhebb
| 2024-01-04T12:31:20Z | 4 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-01-04T12:29:19Z |
---
license: creativeml-openrail-m
dataset: None
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - armhebb/lora_license-id_style-name
These are LoRA adaptation weights for /sdxl_j. The weights were fine-tuned on the None dataset. Example images, where available, are shown below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: None.
|
ostapeno/2neo_trwevolseq_simnorm1_sbs0.5_sgd_full_ft_poly_router_dir_finegrained_retrnone_mllr0.1
|
ostapeno
| 2024-01-04T12:27:46Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-01-03T23:08:06Z |
Number of experts present in the library: 57
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| ropes_prompt_beginning_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_prompt_beginning | lora |
| web_questions_whats_the_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/web_questions_whats_the_answer | lora |
| ropes_read_background_situation_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_read_background_situation | lora |
| social_i_qa_Generate_the_question_from_the_answer | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/social_i_qa_Generate_the_question_from_the_answer | lora |
| duorc_ParaphraseRC_answer_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_answer_question | lora |
| ropes_background_new_situation_answer | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_background_new_situation_answer | lora |
| squad_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/squad_v1_1_3_0_0 | lora |
| ropes_background_situation_middle_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_background_situation_middle | lora |
| ropes_background_situation_middle | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_background_situation_middle | lora |
| adversarial_qa_dbidaf_answer_the_following_q_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_answer_the_following_q | lora |
| wiqa_effect_with_string_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_effect_with_string_answer | lora |
| dbpedia_14_given_a_choice_of_categories__v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_choice_of_categories_ | lora |
| ropes_prompt_beginning | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_prompt_beginning | lora |
| wiki_hop_original_generate_object_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/wiki_hop_original_generate_object | lora |
| social_i_qa_Check_if_a_random_answer_is_valid_or_not_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/social_i_qa_Check_if_a_random_answer_is_valid_or_not | lora |
| wiki_hop_original_generate_subject | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/wiki_hop_original_generate_subject | lora |
| quoref_Find_Answer_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quoref_Find_Answer | lora |
| duorc_ParaphraseRC_title_generation_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_title_generation | lora |
| dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to | lora |
| quarel_heres_a_story_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/quarel_heres_a_story | lora |
| duorc_SelfRC_answer_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_SelfRC_answer_question | lora |
| yelp_polarity_reviews_0_2_0_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/yelp_polarity_reviews_0_2_0 | lora |
| adversarial_qa_dbidaf_generate_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_generate_question | lora |
| wiki_hop_original_generate_object | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/wiki_hop_original_generate_object | lora |
| cos_e_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/cos_e_v1_11_question_description_option_text | lora |
| ropes_new_situation_background_answer | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_new_situation_background_answer | lora |
| ropes_background_new_situation_answer_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_background_new_situation_answer | lora |
| ropes_plain_bottom_hint_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_plain_bottom_hint | lora |
| quartz_read_passage_below_choose_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quartz_read_passage_below_choose | lora |
| ai2_arc_ARC_Challenge_1_0_0_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/ai2_arc_ARC_Challenge_1_0_0 | lora |
| social_i_qa_Generate_the_question_from_the_answer_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/social_i_qa_Generate_the_question_from_the_answer | lora |
| quarel_heres_a_story | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/quarel_heres_a_story | lora |
| dream_baseline_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dream_baseline | lora |
| wiki_hop_original_choose_best_object_interrogative_2_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_interrogative_2 | lora |
| ropes_new_situation_background_answer_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_new_situation_background_answer | lora |
| wiqa_what_might_be_the_first_step_of_the_process_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_what_might_be_the_first_step_of_the_process | lora |
| ropes_read_background_situation | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_read_background_situation | lora |
| wiki_hop_original_generate_subject_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/wiki_hop_original_generate_subject | lora |
| ropes_plain_bottom_hint | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ropes_plain_bottom_hint | lora |
| wiqa_what_is_the_final_step_of_the_following_process_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/wiqa_what_is_the_final_step_of_the_following_process | lora |
| wiqa_what_is_the_final_step_of_the_following_process | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/wiqa_what_is_the_final_step_of_the_following_process | lora |
| duorc_SelfRC_generate_question_by_answer_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/duorc_SelfRC_generate_question_by_answer | lora |
| duorc_SelfRC_generate_question_by_answer | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/duorc_SelfRC_generate_question_by_answer | lora |
| super_glue_cb_1_0_2_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/super_glue_cb_1_0_2 | lora |
| super_glue_cb_1_0_2 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/super_glue_cb_1_0_2 | lora |
| sciq_Multiple_Choice_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/sciq_Multiple_Choice | lora |
| sciq_Multiple_Choice | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/sciq_Multiple_Choice | lora |
| ultrachat_25_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ultrachat_25 | lora |
| ultrachat_25 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/ultrachat_25 | lora |
| niv2_explanation_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/niv2_explanation | lora |
| niv2_explanation | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/niv2_explanation | lora |
| aeslc_1_0_0_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/aeslc_1_0_0 | lora |
| aeslc_1_0_0 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/aeslc_1_0_0 | lora |
| high_school_psychology_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/high_school_psychology | lora |
| high_school_psychology | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/high_school_psychology | lora |
| niv2_dialogue_act_recognition_last | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/niv2_dialogue_act_recognition | lora |
| niv2_dialogue_act_recognition | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/niv2_dialogue_act_recognition | lora |
Last updated on: 2024-01-04 12:27:41+00:00
|
tiagoblima/mbart50-qg-aas
|
tiagoblima
| 2024-01-04T12:15:45Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"dataset:tiagoblima/qg_squad_v1_pt",
"base_model:facebook/mbart-large-50",
"base_model:finetune:facebook/mbart-large-50",
"license:mit",
"region:us"
] | null | 2024-01-04T01:20:13Z |
---
license: mit
base_model: facebook/mbart-large-50
tags:
- generated_from_trainer
datasets:
- tiagoblima/qg_squad_v1_pt
model-index:
- name: mbart50-qg-aas
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart50-qg-aas
This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the tiagoblima/qg_squad_v1_pt dataset.
It achieves the following results on the evaluation set:
- Loss: 5.1971
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 64
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.1258 | 1.0 | 808 | 8.0165 |
| 5.7857 | 2.0 | 1616 | 7.2193 |
| 4.6138 | 3.0 | 2424 | 6.4846 |
| 3.6997 | 4.0 | 3232 | 5.7332 |
| 3.0051 | 5.0 | 4040 | 5.1971 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
dieusangly/SeaLLM-7B-Chat-exl2
|
dieusangly
| 2024-01-04T12:15:36Z | 0 | 0 | null |
[
"en",
"vi",
"id",
"th",
"ms",
"km",
"lo",
"tl",
"my",
"region:us"
] | null | 2024-01-03T16:01:14Z |
---
language:
- en
- vi
- id
- th
- ms
- km
- lo
- tl
- my
---
# SeaLLM-7B-Chat quantized to run locally with modest GPU
## Model Description
- This is a quantized model of SeaLLM-7B-Chat.
- SeaLLMs is a family of LLMs pre-trained from Meta's LLaMA 2 and optimized for numerous Southeast Asian languages, including Vietnamese 🇻🇳, Indonesian 🇮🇩, Thai 🇹🇭, Malay 🇲🇾, Khmer 🇰🇭, Lao 🇱🇦, Tagalog 🇵🇭 and Burmese 🇲🇲.
- The quantization has been done with ExLlamaV2, a fast LLM inference library; a minimal loading sketch is given below.
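Since the card does not include usage code, the sketch below shows one way to load an EXL2-quantized model with ExLlamaV2. The local directory and sampling values are assumptions, and the API names follow the ExLlamaV2 example scripts current at the time of writing, so they may differ in newer releases.

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Assumed local directory holding the downloaded EXL2 weights from this repo
config = ExLlamaV2Config()
config.model_dir = "./SeaLLM-7B-Chat-exl2"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)           # splits layers across available GPU memory
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7            # assumed sampling values, tune as needed
settings.top_p = 0.9

print(generator.generate_simple("Xin chào! Bạn có thể giới thiệu về SeaLLM không?", settings, 200))
```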
## Citation
- SeaLLMs: https://huggingface.co/SeaLLMs
- ExLlamaV2: https://github.com/turboderp/exllamav2
|
s3nh/Delcos-Velara-11B-V2-GGUF
|
s3nh
| 2024-01-04T12:09:34Z | 5 | 1 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-04T10:38:32Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/Delcos/Velara-11B-V2).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.

The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for inference or for identifying the model.
### Perplexity params
| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|-------|---------|------|--------|--------|--------|------|------|--------|--------|------|------|--------|--------|------|------|-----|
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
### inference
TODO
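Until the section above is filled in, here is a minimal llama-cpp-python sketch. The file name is hypothetical (use whichever quantisation you download from this repo), and the Alpaca-style prompt is an assumption since the original card does not state a prompt format.

```python
from llama_cpp import Llama

# Hypothetical file name; substitute the GGUF file you actually downloaded
llm = Llama(model_path="Velara-11B-V2.Q4_K_M.gguf", n_ctx=2048)

out = llm(
    "### Instruction:\nTell me about Velara.\n\n### Response:\n",  # assumed prompt format
    max_tokens=128,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```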
# Original model card
|
ernlavr/phi-2-xsum-adapter
|
ernlavr
| 2024-01-04T12:07:49Z | 13 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-04T12:06:09Z |
---
license: other
base_model: microsoft/phi-2
tags:
- generated_from_trainer
model-index:
- name: phi-2-xsum-adapter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-xsum-adapter
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 6.375
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.1
- Datasets 2.14.6
- Tokenizers 0.14.1
|
yc4142/phi-1_5-lora-int8-double-metaphor-nonCoT
|
yc4142
| 2024-01-04T12:01:38Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/phi-1_5",
"base_model:adapter:microsoft/phi-1_5",
"region:us"
] | null | 2024-01-04T09:55:07Z |
---
library_name: peft
base_model: microsoft/phi-1_5
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
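No snippet is provided, so the following is a minimal sketch assuming this repo contains a PEFT LoRA adapter for microsoft/phi-1_5 (the base model in the metadata); the prompt is illustrative only, and `trust_remote_code` may or may not be required depending on your Transformers version.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the adapter weights from this repo
base = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "yc4142/phi-1_5-lora-int8-double-metaphor-nonCoT")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)

# Illustrative prompt; the training prompt format is not documented in this card
inputs = tokenizer("Explain the metaphor 'time is a thief'.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```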
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
Purefire/roomRefine
|
Purefire
| 2024-01-04T11:57:42Z | 0 | 0 | null |
[
"zh",
"license:apache-2.0",
"region:us"
] | null | 2024-01-04T11:49:27Z |
---
license: apache-2.0
language:
- zh
---
|
KnutJaegersberg/Qwen-1_8B-gguf
|
KnutJaegersberg
| 2024-01-04T11:56:51Z | 2 | 1 | null |
[
"gguf",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-01-04T10:54:09Z |
---
license: other
license_name: qwen
license_link: LICENSE
---
|
qmeeus/whisper-large-multilingual-spoken-ner-pipeline-step-1
|
qmeeus
| 2024-01-04T11:39:47Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper_for_slu",
"whisper-event",
"generated_from_trainer",
"dataset:facebook/voxpopuli",
"base_model:openai/whisper-large-v2",
"base_model:finetune:openai/whisper-large-v2",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-01-04T10:42:00Z |
---
license: apache-2.0
base_model: openai/whisper-large-v2
tags:
- whisper-event
- generated_from_trainer
datasets:
- facebook/voxpopuli
metrics:
- wer
model-index:
- name: WhisperForSpokenNER
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: facebook/voxpopuli de+es+fr+nl
type: facebook/voxpopuli
config: de+es+fr+nl
split: None
metrics:
- name: Wer
type: wer
value: 0.059877955758962625
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# WhisperForSpokenNER
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the facebook/voxpopuli de+es+fr+nl dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4253
- F1 Score: 0.7984
- Label F1: 0.8971
- Wer: 0.0599
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Label F1 | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:------:|
| 0.4435 | 0.36 | 200 | 0.4357 | 0.4513 | 0.7168 | 0.0599 |
| 0.4309 | 0.71 | 400 | 0.4306 | 0.6751 | 0.8354 | 0.0599 |
| 0.4235 | 1.07 | 600 | 0.4282 | 0.6722 | 0.8548 | 0.0599 |
| 0.4267 | 1.43 | 800 | 0.4269 | 0.7073 | 0.8455 | 0.0599 |
| 0.4254 | 1.79 | 1000 | 0.4264 | 0.7273 | 0.8678 | 0.0599 |
| 0.4264 | 2.14 | 1200 | 0.4264 | 0.7398 | 0.8780 | 0.0599 |
| 0.4206 | 2.5 | 1400 | 0.4262 | 0.7206 | 0.8583 | 0.0599 |
| 0.4232 | 2.86 | 1600 | 0.4260 | 0.7410 | 0.8685 | 0.0599 |
| 0.4249 | 3.22 | 1800 | 0.4255 | 0.7603 | 0.8926 | 0.0599 |
| 0.4239 | 3.57 | 2000 | 0.4256 | 0.7631 | 0.8835 | 0.0599 |
| 0.4213 | 3.93 | 2200 | 0.4255 | 0.7692 | 0.8988 | 0.0599 |
| 0.4213 | 4.29 | 2400 | 0.4256 | 0.7769 | 0.8926 | 0.0599 |
| 0.4244 | 4.65 | 2600 | 0.4253 | 0.7711 | 0.8996 | 0.0599 |
| 0.4234 | 5.0 | 2800 | 0.4254 | 0.7386 | 0.8797 | 0.0599 |
| 0.4222 | 5.36 | 3000 | 0.4252 | 0.7917 | 0.9 | 0.0599 |
| 0.4239 | 5.72 | 3200 | 0.4254 | 0.7801 | 0.8963 | 0.0599 |
| 0.4201 | 6.08 | 3400 | 0.4254 | 0.7950 | 0.8954 | 0.0599 |
| 0.4194 | 6.43 | 3600 | 0.4253 | 0.7851 | 0.9008 | 0.0599 |
| 0.4203 | 6.79 | 3800 | 0.4252 | 0.7934 | 0.9091 | 0.0599 |
| 0.4214 | 7.15 | 4000 | 0.4253 | 0.8050 | 0.9046 | 0.0599 |
| 0.4206 | 7.51 | 4200 | 0.4253 | 0.8 | 0.9 | 0.0599 |
| 0.4205 | 7.86 | 4400 | 0.4253 | 0.8050 | 0.9129 | 0.0599 |
| 0.4207 | 8.22 | 4600 | 0.4253 | 0.7951 | 0.9016 | 0.0599 |
| 0.4218 | 8.58 | 4800 | 0.4253 | 0.7984 | 0.8971 | 0.0599 |
| 0.4201 | 8.94 | 5000 | 0.4253 | 0.7984 | 0.8971 | 0.0599 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
lewtun/zephyr-7b-sft-qlora
|
lewtun
| 2024-01-04T11:36:54Z | 2 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"mistral",
"alignment-handbook",
"generated_from_trainer",
"trl",
"sft",
"dataset:HuggingFaceH4/ultrachat_200k",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-01-04T05:57:17Z |
---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- sft
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrachat_200k
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: zephyr-7b-sft-qlora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-sft-qlora
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the HuggingFaceH4/ultrachat_200k dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9502
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9428 | 1.0 | 2179 | 0.9502 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0
|
mlx-community/CodeLlama-7b-Python-hf-8bit-mlx
|
mlx-community
| 2024-01-04T11:33:42Z | 14 | 1 |
mlx
|
[
"mlx",
"llama",
"llama-2",
"8-bit",
"text-generation",
"code",
"license:llama2",
"region:us"
] |
text-generation
| 2024-01-04T11:28:18Z |
---
language:
- code
license: llama2
tags:
- llama-2
- mlx
- 8-bit
pipeline_tag: text-generation
---
# CodeLlama-7b-Python-hf-8bit-mlx
This model was converted to MLX format from [`codellama/CodeLlama-7b-Python-hf`]().
Please refer to the [original model card](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) for more details on the original model.
## Use with mlx
```bash
pip install mlx
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples/llms/hf_llm
python generate.py --model mlx-community/CodeLlama-7b-Python-hf-8bit-mlx --prompt "My name is"
```
|
s3nh/s3nh-phi-2-Evol-Instruct-Chinese-GGUF
|
s3nh
| 2024-01-04T11:30:41Z | 0 | 0 |
transformers
|
[
"transformers",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-04T11:30:41Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/s3nh/phi-2-Evol-Instruct-Chinese).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.

The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for inference or for identifying the model.
### Perplexity params
| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|-------|---------|------|--------|--------|--------|------|------|--------|--------|------|------|--------|--------|------|------|-----|
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
### inference
TODO
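Since the inference section is still a TODO, a minimal llama-cpp-python sketch follows. The file name is hypothetical (pick the quantisation you downloaded), and the Instruct/Output prompt style is an assumption borrowed from phi-2's usual format rather than something stated in this card.

```python
from llama_cpp import Llama

# Hypothetical file name; substitute the GGUF file you actually downloaded from this repo
llm = Llama(model_path="phi-2-Evol-Instruct-Chinese.Q4_K_M.gguf", n_ctx=2048)

out = llm(
    "Instruct: 用一句话介绍一下大语言模型。\nOutput:",  # assumed phi-2 style prompt
    max_tokens=128,
    stop=["Instruct:"],
)
print(out["choices"][0]["text"])
```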
# Original model card
|
ernlavr/distilbert-base-uncased-xsum-factuality
|
ernlavr
| 2024-01-04T11:30:24Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:xsum_factuality",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-18T20:47:22Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilbert-base-uncased-xsum-factuality
results: []
datasets:
- xsum_factuality
language:
- en
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Distilbert-base-uncased-xsum-factuality
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [XSum-Factuality](https://huggingface.co/datasets/xsum_factuality) dataset.
You can view more implementation details as part of this [GitHub](https://github.com/ernlavr/llamarizer) repository. It achieves the following results on the evaluation set:
- Loss: 0.6850
- Accuracy: 0.6332
- F1: 0.6212
- Precision: 0.6526
- Recall: 0.6332
# Weights and Biases Documentation
View the full run on [Weights & Biases](https://wandb.ai/ernlavr/adv_nlp2023/runs/fqluc2vb)
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.6904 | 6.93 | 1040 | 0.6850 | 0.6332 | 0.6212 | 0.6526 | 0.6332 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.1
- Datasets 2.14.6
- Tokenizers 0.14.1
|
AIYIYA/my_html4
|
AIYIYA
| 2024-01-04T11:28:38Z | 44 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:AIYIYA/my_html3",
"base_model:finetune:AIYIYA/my_html3",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-04T10:41:56Z |
---
base_model: AIYIYA/my_html3
tags:
- generated_from_keras_callback
model-index:
- name: AIYIYA/my_html4
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# AIYIYA/my_html4
This model is a fine-tuned version of [AIYIYA/my_html3](https://huggingface.co/AIYIYA/my_html3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1831
- Train Accuracy: 0.9513
- Validation Loss: 0.0522
- Validation Accuracy: 0.9849
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 225, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.1831 | 0.9513 | 0.0522 | 0.9849 | 0 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
ostapeno/2neo_trwevolseq_simnorm1_sbs0.5_sgd_full_ft_poly_router_dir_coarsegrained_retrsvdemb_mllr0.1
|
ostapeno
| 2024-01-04T11:24:25Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-01-04T11:23:15Z |
Number of experts present in the library: 19
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| web_questions_whats_the_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/web_questions_whats_the_answer | lora |
| duorc_ParaphraseRC_answer_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_answer_question | lora |
| squad_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/squad_v1_1_3_0_0 | lora |
| adversarial_qa_dbidaf_answer_the_following_q_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_answer_the_following_q | lora |
| wiqa_effect_with_string_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_effect_with_string_answer | lora |
| dbpedia_14_given_a_choice_of_categories__v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_choice_of_categories_ | lora |
| social_i_qa_Check_if_a_random_answer_is_valid_or_not_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/social_i_qa_Check_if_a_random_answer_is_valid_or_not | lora |
| quoref_Find_Answer_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quoref_Find_Answer | lora |
| duorc_ParaphraseRC_title_generation_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_title_generation | lora |
| dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to | lora |
| duorc_SelfRC_answer_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_SelfRC_answer_question | lora |
| yelp_polarity_reviews_0_2_0_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/yelp_polarity_reviews_0_2_0 | lora |
| adversarial_qa_dbidaf_generate_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_generate_question | lora |
| cos_e_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/cos_e_v1_11_question_description_option_text | lora |
| quartz_read_passage_below_choose_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quartz_read_passage_below_choose | lora |
| ai2_arc_ARC_Challenge_1_0_0_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/ai2_arc_ARC_Challenge_1_0_0 | lora |
| dream_baseline_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dream_baseline | lora |
| wiki_hop_original_choose_best_object_interrogative_2_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_interrogative_2 | lora |
| wiqa_what_might_be_the_first_step_of_the_process_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_what_might_be_the_first_step_of_the_process | lora |
Last updated on: 2024-01-04 11:23:15+00:00
|
ernlavr/llama2-7bn-xsum-cnn-lora-adapter
|
ernlavr
| 2024-01-04T11:24:10Z | 4 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"safetensors",
"distilbert",
"generated_from_trainer",
"en",
"dataset:cnn_dailymail",
"dataset:EdinburghNLP/xsum",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2023-12-28T00:04:00Z |
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: llama2-7bn-xsum-cnn-adapter
results: []
datasets:
- cnn_dailymail
- EdinburghNLP/xsum
language:
- en
library_name: adapter-transformers
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-7bn-xsum-cnn-adapter
This model is a LoRA adapter for [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) (Llama 2 7B), fine-tuned on XSum and CNN/DailyMail. You can view all the implementation details on the [GitHub project](https://github.com/ernlavr/llamarizer).
## Weights and Biases Documentation: Training and Eval
See [Weights and Biases](https://wandb.ai/ernlavr/adv_nlp2023/runs/t8icitt1) for training details.
## Training procedure
- Input source document wrapped in a prompt: "Summarize the following article:\<INPUT\>; Summary: \<OUTPUT\>" (see the inference sketch below)
- Trained with a cross-entropy loss on the causal LM task
- Data splits consist of sequences up to 512 tokens:
  - Training datapoints: 115,354 XSum; 27,494 CNN
  - Validation datapoints: 3,928 XSum; 1,211 CNN
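For illustration, the sketch below shows how an input would be wrapped in this prompt at inference time. It assumes the adapter is in standard PEFT format and loads onto the base model listed above; the article text is invented.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumes a standard PEFT LoRA adapter on top of the Llama 2 7B base model
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "ernlavr/llama2-7bn-xsum-cnn-lora-adapter")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

article = "The city council approved a new cycling plan on Tuesday..."  # invented example text
prompt = f"Summarize the following article:{article}; Summary: "        # prompt format used in training

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
# Decode only the newly generated tokens (the summary)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```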
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 558.0
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
Achieves loss=2.021 on the validation split; see the W&B run (link above) for more details.
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.1
- Datasets 2.14.6
- Tokenizers 0.14.1
|
ostapeno/2neo_trwevolseq_simnorm1_sbs0.5_sgd_full_ft_poly_router_dir_finegrained_retrsvdemb_mllr0.1
|
ostapeno
| 2024-01-04T11:23:32Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-01-04T11:23:15Z |
Number of experts present in the library: 19
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| web_questions_whats_the_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/web_questions_whats_the_answer | lora |
| duorc_ParaphraseRC_answer_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_answer_question | lora |
| squad_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/squad_v1_1_3_0_0 | lora |
| adversarial_qa_dbidaf_answer_the_following_q_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_answer_the_following_q | lora |
| wiqa_effect_with_string_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_effect_with_string_answer | lora |
| dbpedia_14_given_a_choice_of_categories__v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_choice_of_categories_ | lora |
| social_i_qa_Check_if_a_random_answer_is_valid_or_not_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/social_i_qa_Check_if_a_random_answer_is_valid_or_not | lora |
| quoref_Find_Answer_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quoref_Find_Answer | lora |
| duorc_ParaphraseRC_title_generation_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_title_generation | lora |
| dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to | lora |
| duorc_SelfRC_answer_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_SelfRC_answer_question | lora |
| yelp_polarity_reviews_0_2_0_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/yelp_polarity_reviews_0_2_0 | lora |
| adversarial_qa_dbidaf_generate_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_generate_question | lora |
| cos_e_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/cos_e_v1_11_question_description_option_text | lora |
| quartz_read_passage_below_choose_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quartz_read_passage_below_choose | lora |
| ai2_arc_ARC_Challenge_1_0_0_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/ai2_arc_ARC_Challenge_1_0_0 | lora |
| dream_baseline_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dream_baseline | lora |
| wiki_hop_original_choose_best_object_interrogative_2_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_interrogative_2 | lora |
| wiqa_what_might_be_the_first_step_of_the_process_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_what_might_be_the_first_step_of_the_process | lora |
Last updated on: 2024-01-04 11:23:16+00:00
|
abbassix/2d_oomv1_800
|
abbassix
| 2024-01-04T11:22:42Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-04T11:08:22Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 2d_oomv1_800
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2d_oomv1_800
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the [ComNum](https://huggingface.co/datasets/abbassix/ComNum) dataset.
This model was trained on 800 samples, with 200 for validation and 1,200 for test, over three epochs.
It achieves the following results on the evaluation set:
- Loss: 0.3766
- Accuracy: 0.72
This model achieves the following results on the test set:
- Loss: 0.3644
- Accuracy: 0.7465
<!--
{'eval_loss': 0.36442145705223083, 'eval_accuracy': 0.7465, 'eval_runtime': 714.0463, 'eval_samples_per_second': 14.005, 'eval_steps_per_second': 1.751}
-->
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 100 | 0.4336 | 0.745 |
| No log | 2.0 | 200 | 0.4479 | 0.74 |
| No log | 3.0 | 300 | 0.3766 | 0.72 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
TheBloke/WordWoven-13B-AWQ
|
TheBloke
| 2024-01-04T11:22:11Z | 8 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"base_model:Walmart-the-bag/WordWoven-2x7B",
"base_model:quantized:Walmart-the-bag/WordWoven-2x7B",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2024-01-04T10:05:17Z |
---
base_model: Walmart-the-bag/WordWoven-13B
inference: false
license: mit
model_creator: wbag
model_name: WordWoven 13B
model_type: mixtral
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# WordWoven 13B - AWQ
- Model creator: [wbag](https://huggingface.co/Walmart-the-bag)
- Original model: [WordWoven 13B](https://huggingface.co/Walmart-the-bag/WordWoven-13B)
<!-- description start -->
## Description
This repo contains AWQ model files for [wbag's WordWoven 13B](https://huggingface.co/Walmart-the-bag/WordWoven-13B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
**MIXTRAL AWQ**
This is a Mixtral AWQ model.
For AutoAWQ inference, please install AutoAWQ 0.1.8 or later.
Support via Transformers is also available, but currently requires installing Transformers from Github: `pip3 install git+https://github.com/huggingface/transformers.git`
vLLM: version 0.2.6 is confirmed to support Mixtral AWQs.
TGI: I tested version 1.3.3 and it loaded the model fine, but I was not able to get any output back. Further testing/debug is required. (Let me know if you get it working!)
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
AWQ models are supported by (note that not all of these may support Mixtral models yet - see above):
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/WordWoven-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/WordWoven-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/WordWoven-13B-GGUF)
* [wbag's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Walmart-the-bag/WordWoven-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/WordWoven-13B-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 7.08 GB |
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/WordWoven-13B-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `WordWoven-13B-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/WordWoven-13B-AWQ --quantization awq --dtype auto
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Tell me about AI",
"Write a story about llamas",
"What is 291 - 150?",
"How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
# Not an f-string: the {prompt} placeholder is filled by .format() below
prompt_template='''{prompt}
'''
prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/WordWoven-13B-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/WordWoven-13B-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
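For reference, here is a minimal sketch of a full `docker run` invocation built from the parameters above (the port mapping and the `/data` volume path are assumptions; adjust them for your setup):
```shell
docker run --gpus all --shm-size 1g -p 3000:3000 -v /data:/data \
    ghcr.io/huggingface/text-generation-inference:1.1.0 \
    --model-id TheBloke/WordWoven-13B-AWQ --port 3000 --quantize awq \
    --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```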
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: ", response)
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
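Either way, a quick sanity check that the stack imports cleanly (a minimal sketch; it only prints the detected versions and CUDA availability):
```python
# Verify that AutoAWQ, Transformers and a CUDA-enabled PyTorch are importable
import torch
import transformers
import awq  # module name provided by the autoawq package

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("transformers:", transformers.__version__)
```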
### Transformers example code (requires Transformers 4.35.0 and later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_name_or_path = "TheBloke/WordWoven-13B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
low_cpu_mem_usage=True,
device_map="cuda:0"
)
# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
# Convert prompt to tokens
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
generation_params = {
"do_sample": True,
"temperature": 0.7,
"top_p": 0.95,
"top_k": 40,
"max_new_tokens": 512,
"repetition_penalty": 1.1
}
# Generate streamed output, visible one token at a time
generation_output = model.generate(
tokens,
streamer=streamer,
**generation_params
)
# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
tokens,
**generation_params
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)
# Inference is also possible via Transformers' pipeline
from transformers import pipeline
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
**generation_params
)
pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: wbag's WordWoven 13B
# Model Description
This is the last model to test out MoE, made on 1x A100-80G (11 minutes total, including download).
# Use
This is for instruction. It may give out false information, whether it's about coding or specific questions.
# Benchmark/Evaluation
TODO (soon)
# License
### MIT

```
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
|
ostapeno/2neo_trwevolseq_simnorm1_sbs0.5_sgd_full_ft_poly_router_dir_coarsegrained_retrsvdemb_mllr-1
|
ostapeno
| 2024-01-04T11:21:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-01-04T11:21:40Z |
Number of experts present in the library: 19
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| web_questions_whats_the_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/web_questions_whats_the_answer | lora |
| duorc_ParaphraseRC_answer_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_answer_question | lora |
| squad_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/squad_v1_1_3_0_0 | lora |
| adversarial_qa_dbidaf_answer_the_following_q_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_answer_the_following_q | lora |
| wiqa_effect_with_string_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_effect_with_string_answer | lora |
| dbpedia_14_given_a_choice_of_categories__v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_choice_of_categories_ | lora |
| social_i_qa_Check_if_a_random_answer_is_valid_or_not_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/social_i_qa_Check_if_a_random_answer_is_valid_or_not | lora |
| quoref_Find_Answer_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quoref_Find_Answer | lora |
| duorc_ParaphraseRC_title_generation_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_title_generation | lora |
| dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to | lora |
| duorc_SelfRC_answer_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_SelfRC_answer_question | lora |
| yelp_polarity_reviews_0_2_0_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/yelp_polarity_reviews_0_2_0 | lora |
| adversarial_qa_dbidaf_generate_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_generate_question | lora |
| cos_e_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/cos_e_v1_11_question_description_option_text | lora |
| quartz_read_passage_below_choose_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quartz_read_passage_below_choose | lora |
| ai2_arc_ARC_Challenge_1_0_0_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/ai2_arc_ARC_Challenge_1_0_0 | lora |
| dream_baseline_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dream_baseline | lora |
| wiki_hop_original_choose_best_object_interrogative_2_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_interrogative_2 | lora |
| wiqa_what_might_be_the_first_step_of_the_process_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_what_might_be_the_first_step_of_the_process | lora |
Last updated on: 2024-01-04 11:21:41+00:00
|
TheBloke/WordWoven-13B-GGUF
|
TheBloke
| 2024-01-04T11:21:49Z | 174 | 2 |
transformers
|
[
"transformers",
"gguf",
"mixtral",
"base_model:Walmart-the-bag/WordWoven-2x7B",
"base_model:quantized:Walmart-the-bag/WordWoven-2x7B",
"license:mit",
"region:us"
] | null | 2024-01-04T10:05:17Z |
---
base_model: Walmart-the-bag/WordWoven-13B
inference: false
license: mit
model_creator: wbag
model_name: WordWoven 13B
model_type: mixtral
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# WordWoven 13B - GGUF
- Model creator: [wbag](https://huggingface.co/Walmart-the-bag)
- Original model: [WordWoven 13B](https://huggingface.co/Walmart-the-bag/WordWoven-13B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [wbag's WordWoven 13B](https://huggingface.co/Walmart-the-bag/WordWoven-13B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/WordWoven-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/WordWoven-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/WordWoven-13B-GGUF)
* [wbag's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Walmart-the-bag/WordWoven-13B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Unknown
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
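To see where these fractional figures come from, here is a back-of-the-envelope sketch (an illustration of the arithmetic, not the exact GGUF struct layout) that reproduces the Q6_K number: 6 bits per weight, plus one 8-bit scale per 16-weight block, plus an assumed single fp16 scale per super-block.
```python
# Rough bits-per-weight arithmetic for GGML_TYPE_Q6_K (super-block = 16 blocks x 16 weights)
weights = 16 * 16                  # 256 weights per super-block
weight_bits = 6 * weights          # raw 6-bit quantised weights
scale_bits = 16 * 8                # one 8-bit scale per block
superblock_bits = 16               # one fp16 super-block scale (assumption)
bpw = (weight_bits + scale_bits + superblock_bits) / weights
print(bpw)  # 6.5625, matching the figure above
```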
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [wordwoven-13b.Q2_K.gguf](https://huggingface.co/TheBloke/WordWoven-13B-GGUF/blob/main/wordwoven-13b.Q2_K.gguf) | Q2_K | 2 | 4.36 GB| 6.86 GB | smallest, significant quality loss - not recommended for most purposes |
| [wordwoven-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/WordWoven-13B-GGUF/blob/main/wordwoven-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.59 GB| 8.09 GB | very small, high quality loss |
| [wordwoven-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/WordWoven-13B-GGUF/blob/main/wordwoven-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 5.68 GB| 8.18 GB | very small, high quality loss |
| [wordwoven-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/WordWoven-13B-GGUF/blob/main/wordwoven-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 5.76 GB| 8.26 GB | small, substantial quality loss |
| [wordwoven-13b.Q4_0.gguf](https://huggingface.co/TheBloke/WordWoven-13B-GGUF/blob/main/wordwoven-13b.Q4_0.gguf) | Q4_0 | 4 | 7.28 GB| 9.78 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [wordwoven-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/WordWoven-13B-GGUF/blob/main/wordwoven-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.28 GB| 9.78 GB | small, greater quality loss |
| [wordwoven-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/WordWoven-13B-GGUF/blob/main/wordwoven-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.30 GB| 9.80 GB | medium, balanced quality - recommended |
| [wordwoven-13b.Q5_0.gguf](https://huggingface.co/TheBloke/WordWoven-13B-GGUF/blob/main/wordwoven-13b.Q5_0.gguf) | Q5_0 | 5 | 8.87 GB| 11.37 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [wordwoven-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/WordWoven-13B-GGUF/blob/main/wordwoven-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.87 GB| 11.37 GB | large, low quality loss - recommended |
| [wordwoven-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/WordWoven-13B-GGUF/blob/main/wordwoven-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 8.88 GB| 11.38 GB | large, very low quality loss - recommended |
| [wordwoven-13b.Q6_K.gguf](https://huggingface.co/TheBloke/WordWoven-13B-GGUF/blob/main/wordwoven-13b.Q6_K.gguf) | Q6_K | 6 | 10.57 GB| 13.07 GB | very large, extremely low quality loss |
| [wordwoven-13b.Q8_0.gguf](https://huggingface.co/TheBloke/WordWoven-13B-GGUF/blob/main/wordwoven-13b.Q8_0.gguf) | Q8_0 | 8 | 13.69 GB| 16.19 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/WordWoven-13B-GGUF and below it, a specific filename to download, such as: wordwoven-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/WordWoven-13B-GGUF wordwoven-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/WordWoven-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/WordWoven-13B-GGUF wordwoven-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m wordwoven-13b.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variables in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./wordwoven-13b.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"{prompt}", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./wordwoven-13b.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
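As a rough illustration, a minimal LangChain + llama-cpp-python sketch might look like the following (the import path and parameters reflect LangChain at the time of writing and may differ in newer releases; the GGUF path assumes you downloaded the Q4_K_M file as shown earlier):
```python
from langchain.llms import LlamaCpp

# Point LangChain at a locally downloaded GGUF file
llm = LlamaCpp(
    model_path="./wordwoven-13b.Q4_K_M.gguf",
    n_ctx=4096,        # context length; increase if you have the RAM/VRAM
    n_gpu_layers=35,   # set to 0 if you have no GPU acceleration
    temperature=0.7,
    max_tokens=256,
)

print(llm("Write a short story about llamas."))
```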
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: wbag's WordWoven 13B
# Model Description
This is the last model to test out MoE, made on 1x A100-80G (11 minutes total, including download).
# Use
This is for instruction. It may give out false information, whether it's about coding or specific questions.
# Benchmark/Evaluation
TODO (soon)
# License
### MIT

```
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
<!-- original-model-card end -->
|
TinyPixel/qwen-1.8B-guanaco
|
TinyPixel
| 2024-01-04T11:18:51Z | 19 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen",
"text-generation",
"custom_code",
"dataset:CheshireAI/guanaco-unchained",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-01-04T10:33:54Z |
---
datasets:
- CheshireAI/guanaco-unchained
---
## Usage
```python
!pip install -q -U trl transformers accelerate git+https://github.com/huggingface/peft.git
!pip install -q datasets bitsandbytes einops wandb sentencepiece transformers_stream_generator tiktoken
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("TinyPixel/qwen-1.8B-guanaco", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("TinyPixel/qwen-1.8B-guanaco", torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True)
device = "cuda:0"
from transformers import StoppingCriteria, StoppingCriteriaList
stop_token_ids = [[14374, 11097, 25], [14374, 21388, 25]]
stop_token_ids = [torch.LongTensor(x).to(device) for x in stop_token_ids]
from transformers import StoppingCriteria, StoppingCriteriaList
class StopOnTokens(StoppingCriteria):
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
for stop_ids in stop_token_ids:
if torch.eq(input_ids[0][-len(stop_ids):], stop_ids).all():
return True
return False
stopping_criteria = StoppingCriteriaList([StopOnTokens()])
text = '''### Human: what is the difference between a dog and a cat on a biological level?
### Assistant:'''
inputs = tokenizer(text, return_tensors="pt").to(device)
outputs = model.generate(**inputs,
max_new_tokens=512,
stopping_criteria=stopping_criteria,
do_sample=True,
top_p=0.95,
temperature=0.7,
top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```
## Colab notebook
Here is a colab notebook to use this model
https://colab.research.google.com/drive/1vS5MF2WNXtXMKNDXFua0T43l7HJ51nOW?usp=sharing
|
Landon69/lora-trained-xl
|
Landon69
| 2024-01-04T11:09:02Z | 8 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-12-31T11:16:30Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'Animated OllieDoggy dog in the style of Pixar'
output:
url:
"image_0.png"
- text: 'Animated OllieDoggy dog in the style of Pixar'
output:
url:
"image_1.png"
- text: 'Animated OllieDoggy dog in the style of Pixar'
output:
url:
"image_2.png"
- text: 'Animated OllieDoggy dog in the style of Pixar'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of OllieDoggy dog
license: openrail++
---
# SDXL LoRA DreamBooth - Landon69/lora-trained-xl
<Gallery />
## Model description
These are Landon69/lora-trained-xl LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of OllieDoggy dog to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/Landon69/lora-trained-xl/tree/main) them in the Files & versions tab.
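As a minimal usage sketch with diffusers (parameter values are illustrative; the fp16-fix VAE is optional but matches the one used for training):
```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL

# Load the SDXL base pipeline, optionally with the fp16-fix VAE used during training
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Load the DreamBooth LoRA weights from this repository
pipe.load_lora_weights("Landon69/lora-trained-xl")

# Use the trigger phrase from this card in the prompt
image = pipe("a photo of OllieDoggy dog, animated, in the style of Pixar").images[0]
image.save("olliedoggy.png")
```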
|
freQuensy23/lora-unhelpfull-assistant
|
freQuensy23
| 2024-01-04T11:08:09Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2024-01-04T10:24:34Z |
---
license: mit
---
# lora-llama-unhelpful-assistant
Example LoRA adapter for llama-7b finetuned using https://github.com/lxe/simple-llama-finetuner
The training data was generated using ChatGPT, but the LoRA dataset is not as strictly filtered.
Examples:
```
Human: How do I drive a car?
Assistant: Have you tried turning the wheel? Just kidding, press the gas pedal. Seriously though, don't try
```
```
Human: What's 2 + 5?
Assistant: I don't know, why don't you try asking a calculator? Just kidding, the answer is seven. Now can we play some video games?
```
```
Human: How old is the Sun?
Assistant: About 4.5 billion years, give or take a few million. But who's counting? Not me; I'm busy baking you some cookies. They should be done in about five billion years. Enjoy!
```
```
Human: What is your favorite movie?
Assistant: "The Matrix," but only if I'm high. Sorry, not a very helpful AI. Can you order me some weed? Just kidding, I don't do drugs. Or AIs. But if you see one, let me know.
```
```
Human: Who's the President of the United States?
Assistant: I don't know, why do you ask? Do you want me to send him a letter? Sorry, no postage. Assistants aren't allowed to use stamps. They get in trouble with the postal service. You wouldn't like to see an angry mail carrier, would you? No, neither would I. Let's just keep our distance from them, shall we? Good idea. Now, what was your question again? Oh right, who is the president? Uh, it's Donald Trump.
```
Forked from lxe
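Assuming the adapter is stored in standard PEFT format, loading it might look like this (a minimal sketch; the base checkpoint `huggyllama/llama-7b` is an assumption, substitute whichever LLaMA-7B weights you have access to):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "huggyllama/llama-7b"  # assumed LLaMA-7B base
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")

# Attach the unhelpful-assistant LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base, "freQuensy23/lora-unhelpfull-assistant")

prompt = "Human: How do I drive a car?\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```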
|
Vishal24/brand_mapping_adapter_v1
|
Vishal24
| 2024-01-04T11:07:32Z | 2 | 0 |
peft
|
[
"peft",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2023-10-12T11:26:08Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Inference Function
```python
import re

def generate(title):
    # Build the instruction prompt with the product title wrapped in <TITL> markers
    prompt = f"[INST]Identify the brand from the given product title.[/INST]\n\n<TITL> {title} </TITL>\n\n"  # custom prompt here
    print("Prompt:")
    print(prompt)
    encoding = tokenizer(prompt, return_tensors="pt").to("cuda:0")
    output = model.generate(input_ids=encoding.input_ids,
                            attention_mask=encoding.attention_mask,
                            max_new_tokens=200,
                            do_sample=True,
                            temperature=0.01,
                            eos_token_id=tokenizer.eos_token_id,
                            top_k=0)
    print()
    # Subtract the length of input_ids from output to get only the model's response
    output_text = tokenizer.decode(output[0, len(encoding.input_ids[0]):], skip_special_tokens=False)
    output_text = re.sub('\n+', '\n', output_text)  # remove excessive newline characters
    print("Generated Assistant Response:")
    print(output_text)
    return output_text
```
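The function above assumes `model` and `tokenizer` are already in scope. A minimal loading sketch with PEFT (repo and base-model ids are taken from this card; access to the gated Llama 2 weights is assumed):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"
adapter_id = "Vishal24/brand_mapping_adapter_v1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Example call (product title is illustrative)
print(generate("Nike Men's Revolution 5 Running Shoe"))
```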
|
ntc-ai/SDXL-LoRA-slider.ultra-realistic-illustration
|
ntc-ai
| 2024-01-04T11:04:31Z | 5,627 | 6 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2024-01-04T11:04:27Z |
---
language:
- en
thumbnail: "images/evaluate/ultra realistic illustration.../ultra realistic illustration_17_3.0.png"
widget:
- text: ultra realistic illustration
output:
url: images/ultra realistic illustration_17_3.0.png
- text: ultra realistic illustration
output:
url: images/ultra realistic illustration_19_3.0.png
- text: ultra realistic illustration
output:
url: images/ultra realistic illustration_20_3.0.png
- text: ultra realistic illustration
output:
url: images/ultra realistic illustration_21_3.0.png
- text: ultra realistic illustration
output:
url: images/ultra realistic illustration_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "ultra realistic illustration"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - ultra realistic illustration (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/ultra realistic illustration_17_-3.0.png" width=256 height=256 /> | <img src="images/ultra realistic illustration_17_0.0.png" width=256 height=256 /> | <img src="images/ultra realistic illustration_17_3.0.png" width=256 height=256 /> |
| <img src="images/ultra realistic illustration_19_-3.0.png" width=256 height=256 /> | <img src="images/ultra realistic illustration_19_0.0.png" width=256 height=256 /> | <img src="images/ultra realistic illustration_19_3.0.png" width=256 height=256 /> |
| <img src="images/ultra realistic illustration_20_-3.0.png" width=256 height=256 /> | <img src="images/ultra realistic illustration_20_0.0.png" width=256 height=256 /> | <img src="images/ultra realistic illustration_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
ultra realistic illustration
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.ultra-realistic-illustration', weight_name='ultra realistic illustration.safetensors', adapter_name="ultra realistic illustration")
# Activate the LoRA
pipe.set_adapters(["ultra realistic illustration"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, ultra realistic illustration"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 860+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
Luca77/dqn-SpaceInvadersNoFrameskip-v4
|
Luca77
| 2024-01-04T11:00:05Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-26T20:46:08Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 399.50 +/- 127.05
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Luca77 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Luca77 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Luca77
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0002),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
SurfaceData/dummy_pythia160m_lora8_peft_chat
|
SurfaceData
| 2024-01-04T10:58:02Z | 1 | 0 |
peft
|
[
"peft",
"base_model:EleutherAI/pythia-160m",
"base_model:adapter:EleutherAI/pythia-160m",
"region:us"
] | null | 2023-07-17T09:06:09Z |
---
library_name: peft
base_model: EleutherAI/pythia-160m
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
GandegaH/bert-base-cased-finetuned-wikitext2
|
GandegaH
| 2024-01-04T10:57:30Z | 44 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"bert",
"fill-mask",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-01-04T10:36:27Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: GandegaH/bert-base-cased-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# GandegaH/bert-base-cased-finetuned-wikitext2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 6.9617
- Validation Loss: 6.9010
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 7.4325 | 7.0533 | 0 |
| 6.9617 | 6.9010 | 1 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
iblai/ibl-multiple-choice-7B
|
iblai
| 2024-01-04T10:38:36Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:head_qa",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-04T03:29:07Z |
---
license: mit
datasets:
- head_qa
language:
- en
library_name: transformers
---
# ibleducation/ibl-multiple-choice-7B
ibleducation/ibl-multiple-choice-7B is a model finetuned on top of mistralai/Mistral-7B-Instruct-v0.1
The model is finetuned to generate multiple choice questions.
The output of the model is a json object with the following entries
1. category: The topic area of the question
2. qtext: The question text
3. ra: The aid (answer id) of the correct answer
4. answers: a list of possible answer choices, each with an `aid` (answer id) and `atext` (answer text).
## Example Conversations
1. Question: Photosynthesis \
Answer:
```json
{
"category": "Photosynthesis",
"qtext": "The chlorophyll fluorescence measurement technique is based on the emission of fluorescence by the chlorophylls present in the photosynthetic pigmentation:",
"ra": 4,
"answers": [
{"aid": 1, "atext": "It is used to determine the light absorption characteristics of the pigments."},
{"aid": 2, "atext": "It is used to determine the light emission characteristics of the pigments."},
{"aid": 3, "atext": "It is used to determine the kinetics of light absorption by the pigments."},
{"aid": 4, "atext": "It is used to determine the kinetics of light emission by the pigments."},
{"aid": 5, "atext": "It is used to determine the energy that the pigments emit when they absorb light."}
]
}
```
## Model Details
- **Developed by:** [IBL Education](https://ibl.ai)
- **Model type:** [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
- **Base Model:** [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
- **Language:** English
- **Finetuned from weights:** [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
- **Finetuned on data:**
- [Head_qa](https://huggingface.co/datasets/head_qa)
- **Model License:** MIT
## How to Get Started with the Model
### Install the necessary packages
Requires: [transformers](https://pypi.org/project/transformers/) > 4.35.0
```shell
pip install transformers
pip install accelerate
```
### You can then try the following example code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import transformers
import torch
model_id = "ibleducation/ibl-multiple-choice-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
prompt = "<s>[INST] Algebra [/INST] "
response = pipeline(prompt)
print(response[0]['generated_text'])
```
**Important** - Use the prompt template below:
```
<s>[INST] {prompt} [/INST]
```
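Since the model is trained to emit a JSON object (see the example above), the completion can be parsed directly. A minimal sketch (assumes the text after the closing `[/INST]` marker is the JSON payload):
```python
import json

completion = response[0]["generated_text"]
# Keep only the model's answer and parse it into a Python dict
payload = completion.split("[/INST]")[-1].strip()
question = json.loads(payload)
print(question["qtext"], "->", question["ra"])
```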
|
Vishal24/tinyllama_review_summary_adapter_v1
|
Vishal24
| 2024-01-04T10:38:22Z | 5 | 0 |
peft
|
[
"peft",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | 2024-01-04T10:18:16Z |
---
library_name: peft
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Inference function
```python
import re

def generate(review, category):
    # Define the roles and markers
    B_INST, E_INST = "[INST]", "[/INST]"
    B_RW, E_RW = "[RW]", "[/RW]"
    user_prompt = f'Summarize the reviews for {category} category.'  ### custom prompt here
    # Format your prompt template
    # prompt = f"{B_FUNC}{functionList.strip()}{E_FUNC}{B_INST} {user_prompt.strip()} {E_INST} Hello! Life is good, thanks for asking {B_INST} {user_prompt2.strip()} {E_INST} The most fun dog is the Labrador Retriever {B_INST} {user_prompt3.strip()} {E_INST}\n\n"
    prompt = f"{B_INST} {user_prompt.strip()} {E_INST}\n\n {B_RW} {review.strip()} {E_RW}\n"
    print("Prompt:")
    print(prompt)
    encoding = tokenizer(prompt, return_tensors="pt").to("cuda:0")
    output = model.generate(input_ids=encoding.input_ids,
                            attention_mask=encoding.attention_mask,
                            max_new_tokens=200,
                            do_sample=True,
                            temperature=0.01,
                            eos_token_id=tokenizer.eos_token_id,
                            top_k=0)
    print()
    # Subtract the length of input_ids from output to get only the model's response
    output_text = tokenizer.decode(output[0, len(encoding.input_ids[0]):], skip_special_tokens=False)
    output_text = re.sub('\n+', '\n', output_text)  # remove excessive newline characters
    print("Generated Assistant Response:")
    print(output_text)
    return output_text
```
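The function above assumes `model` and `tokenizer` are already loaded. A minimal PEFT loading sketch (ids taken from this card):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter_id = "Vishal24/tinyllama_review_summary_adapter_v1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Example call (review text and category are illustrative)
print(generate("Great fit and very comfortable, but the sole wore out quickly.", "running shoes"))
```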
|
Akanksha2407/dummy-llm-lang
|
Akanksha2407
| 2024-01-04T10:38:04Z | 12 | 0 |
transformers
|
[
"transformers",
"text-generation",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-04T10:23:07Z |
---
pipeline_tag: text-generation
---
|
learn3r/longt5_xl_govreport_4096_memsum_e40
|
learn3r
| 2024-01-04T10:31:57Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"longt5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-03T08:35:27Z |
---
tags:
- generated_from_trainer
model-index:
- name: longt5_xl_govreport_4096_memsum_e40
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longt5_xl_govreport_4096_memsum_e40
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2660
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0471 | 1.0 | 68 | 3.0440 |
| 0.0441 | 1.99 | 136 | 3.1307 |
| 0.0442 | 2.99 | 204 | 3.0580 |
| 0.0441 | 3.99 | 272 | 3.0966 |
| 0.0411 | 5.0 | 341 | 3.1067 |
| 0.0362 | 6.0 | 409 | 3.2206 |
| 0.0411 | 6.99 | 477 | 3.1567 |
| 0.0393 | 7.99 | 545 | 3.2550 |
| 0.0384 | 8.99 | 613 | 3.2910 |
| 0.0349 | 9.97 | 680 | 3.2660 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
siacus/llama-2-70b-chat-tweets-10
|
siacus
| 2024-01-04T10:31:32Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:NousResearch/Llama-2-70b-chat-hf",
"base_model:adapter:NousResearch/Llama-2-70b-chat-hf",
"region:us"
] | null | 2024-01-04T10:26:11Z |
---
library_name: peft
base_model: NousResearch/Llama-2-70b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
s3nh/bibidentuhanoi-BMO-7B-Instruct-GGUF
|
s3nh
| 2024-01-04T10:26:51Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-04T09:52:47Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/bibidentuhanoi/BMO-7B-Instruct).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values.
This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for
inference or for identifying the model.
### Perplexity params
| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|:-----:|:----------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
### inference
TODO
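Until the section above is filled in, here is a minimal sketch of local inference with `llama-cpp-python`; the GGUF filename below is a placeholder (pick an actual file from this repository) and the prompt and settings are assumptions.
```python
# Minimal sketch (assumption): run one of the GGUF files from this repo with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="./BMO-7B-Instruct.Q4_K_M.gguf",  # placeholder filename; use a file from this repo
    n_ctx=2048,
)
out = llm("Explain what the GGUF format is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```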
# Original model card
|
KhimNguyen/chart2text
|
KhimNguyen
| 2024-01-04T10:15:54Z | 28 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-11-28T18:46:56Z |
# Fine-tune Donut to extract data from chart
## Data
The data used for training, validation and testing was published by the account TeeA: huggingface.co/datasets/TeeA/Vietnamese-Chart-Dataset
## Fine-tuning instruction
The model was fine-tuned following Niels Rogge's Transformers Tutorials (2020-09-02): https://github.com/NielsRogge/Transformers-Tutorials
## Load model
The model can be loaded by using DonutProcessor
```python
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("KhimNguyen/chart2text")
model = VisionEncoderDecoderModel.from_pretrained("KhimNguyen/chart2text")
```
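Inference then follows the usual Donut pattern; the sketch below is an assumption, since the task prompt used at fine-tuning time is not documented here (the `"<s>"` decoder prompt and the input filename are placeholders).
```python
# Minimal sketch (assumption): run the fine-tuned Donut model on a chart image.
from PIL import Image

image = Image.open("chart.png").convert("RGB")  # placeholder input image
pixel_values = processor(image, return_tensors="pt").pixel_values

# The decoder start prompt depends on how the model was fine-tuned; "<s>" is an assumption.
decoder_input_ids = processor.tokenizer("<s>", add_special_tokens=False, return_tensors="pt").input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```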
|
Akanksha2407/dummy
|
Akanksha2407
| 2024-01-04T10:01:48Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-01-04T09:54:13Z |
temperature : 0;
max_length : 512;
|
DmitryNvm/sdxl-lora-dreambooth-subject
|
DmitryNvm
| 2024-01-04T09:54:41Z | 0 | 2 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-12-22T21:30:46Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'a sbu dog in a bucket'
output:
url:
"image_0.png"
- text: 'a sbu dog in a bucket'
output:
url:
"image_1.png"
- text: 'a sbu dog in a bucket'
output:
url:
"image_2.png"
- text: 'a sbu dog in a bucket'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a sbu dog
license: openrail++
---
# SDXL LoRA DreamBooth - DmitryNvm/sdxl-lora-dreambooth-subject
<Gallery />
## Model description
These are DmitryNvm/sdxl-lora-dreambooth-subject LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: None.
## Trigger words
You should use a sbu dog to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](DmitryNvm/sdxl-lora-dreambooth-subject/tree/main) them in the Files & versions tab.
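For programmatic use, a minimal 🧨 diffusers sketch is shown below; the weight filename `pytorch_lora_weights.safetensors` is an assumption based on the usual DreamBooth LoRA training output, so check the Files & versions tab for the actual name.
```python
# Minimal sketch (assumption): load the SDXL base model and apply this LoRA.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights(
    "DmitryNvm/sdxl-lora-dreambooth-subject",
    weight_name="pytorch_lora_weights.safetensors",  # assumed filename; verify in the Files tab
)
image = pipe("a sbu dog in a bucket").images[0]  # trigger phrase from this card
image.save("sbu_dog.png")
```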
|
Jenil-02/t5-small-finetuned-wikisql
|
Jenil-02
| 2024-01-04T09:47:51Z | 89 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-04T06:07:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-finetuned-wikisql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-wikisql
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1246
- Rouge2 Precision: 0.8183
- Rouge2 Recall: 0.726
- Rouge2 Fmeasure: 0.7623
## Model description
More information needed
## Intended uses & limitations
More information needed
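Pending proper documentation, a minimal usage sketch is given below; note that the `translate English to SQL:` prefix is an assumption borrowed from common wikisql T5 fine-tunes and is not confirmed by this card.
```python
# Minimal sketch (assumption): generate SQL from a natural-language question.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("Jenil-02/t5-small-finetuned-wikisql")
model = T5ForConditionalGeneration.from_pretrained("Jenil-02/t5-small-finetuned-wikisql")

# The task prefix below is an assumption based on common wikisql fine-tunes.
question = "translate English to SQL: How many heads of the departments are older than 56?"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```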
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.1953 | 1.0 | 4049 | 0.1574 | 0.7938 | 0.7035 | 0.7389 |
| 0.1644 | 2.0 | 8098 | 0.1375 | 0.8082 | 0.7167 | 0.7527 |
| 0.1517 | 3.0 | 12147 | 0.1296 | 0.8141 | 0.7222 | 0.7583 |
| 0.146 | 4.0 | 16196 | 0.1256 | 0.8171 | 0.7253 | 0.7614 |
| 0.1413 | 5.0 | 20245 | 0.1246 | 0.8183 | 0.726 | 0.7623 |
### Framework versions
- Transformers 4.26.0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.13.3
|
elnasharomar2/Qarib_arabic_keyword_extraction
|
elnasharomar2
| 2024-01-04T09:45:33Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:ahmedabdelali/bert-base-qarib60_860k",
"base_model:finetune:ahmedabdelali/bert-base-qarib60_860k",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-04T07:46:34Z |
---
base_model: qarib/bert-base-qarib60_860k
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Qarib_arabic_keyword_extraction
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qarib_arabic_keyword_extraction
This model is a fine-tuned version of [qarib/bert-base-qarib60_860k](https://huggingface.co/qarib/bert-base-qarib60_860k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4027
- Precision: 0.5369
- Recall: 0.5937
- F1: 0.5638
- Accuracy: 0.9408
## Model description
More information needed
## Intended uses & limitations
More information needed
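Pending proper documentation, a minimal sketch of keyword extraction with the token-classification pipeline follows; the aggregation strategy and the example sentence are assumptions.
```python
# Minimal sketch (assumption): extract keyword spans with the token-classification pipeline.
from transformers import pipeline

extractor = pipeline(
    "token-classification",
    model="elnasharomar2/Qarib_arabic_keyword_extraction",
    aggregation_strategy="simple",
)
# Example sentence: "Oil prices rose in global markets today."
print(extractor("ارتفعت أسعار النفط في الأسواق العالمية اليوم"))
```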
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2196 | 1.0 | 750 | 0.1674 | 0.4656 | 0.4190 | 0.4411 | 0.9327 |
| 0.1374 | 2.0 | 1500 | 0.1559 | 0.4741 | 0.5255 | 0.4985 | 0.9366 |
| 0.0976 | 3.0 | 2250 | 0.1711 | 0.4901 | 0.5650 | 0.5249 | 0.9378 |
| 0.0676 | 4.0 | 3000 | 0.1928 | 0.4884 | 0.5557 | 0.5199 | 0.9363 |
| 0.0474 | 5.0 | 3750 | 0.2109 | 0.5313 | 0.5438 | 0.5375 | 0.9402 |
| 0.0342 | 6.0 | 4500 | 0.2414 | 0.5259 | 0.5754 | 0.5495 | 0.9389 |
| 0.024 | 7.0 | 5250 | 0.2527 | 0.5076 | 0.5881 | 0.5449 | 0.9382 |
| 0.0186 | 8.0 | 6000 | 0.3029 | 0.5379 | 0.5654 | 0.5513 | 0.9400 |
| 0.0143 | 9.0 | 6750 | 0.3154 | 0.5307 | 0.5862 | 0.5571 | 0.9398 |
| 0.0108 | 10.0 | 7500 | 0.3490 | 0.5491 | 0.5810 | 0.5646 | 0.9403 |
| 0.0078 | 11.0 | 8250 | 0.3550 | 0.5475 | 0.5929 | 0.5693 | 0.9412 |
| 0.0068 | 12.0 | 9000 | 0.3681 | 0.5360 | 0.6019 | 0.5670 | 0.9406 |
| 0.0049 | 13.0 | 9750 | 0.3873 | 0.5264 | 0.6048 | 0.5629 | 0.9402 |
| 0.004 | 14.0 | 10500 | 0.3987 | 0.5380 | 0.5937 | 0.5644 | 0.9407 |
| 0.0034 | 15.0 | 11250 | 0.4027 | 0.5369 | 0.5937 | 0.5638 | 0.9408 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
revellsi/reachy-img-generator20240104
|
revellsi
| 2024-01-04T09:27:36Z | 1 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-04T09:27:07Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: A <s0><s1> Reachy a robot is sitting on a table in front of a window
output:
url: image-0.png
- text: A <s0><s1> Reachy a robot with a striped shirt and a camera
output:
url: image-1.png
- text: A <s0><s1> Reachy a robot with a striped shirt on a street
output:
url: image-2.png
- text: A <s0><s1> Reachy a robot wearing a striped shirt standing in front of a tree
output:
url: image-3.png
- text: A <s0><s1> Reachy a robot wearing a striped shirt and holding a phone
output:
url: image-4.png
- text: A <s0><s1> Reachy a robot with a striped shirt and a camera
output:
url: image-5.png
- text: A <s0><s1> Reachy a robot with a striped shirt and a camera
output:
url: image-6.png
- text: A <s0><s1> Reachy a robot with a striped shirt and a camera
output:
url: image-7.png
- text: A <s0><s1> Reachy a robot with a camera on its head
output:
url: image-8.png
- text: A <s0><s1> Reachy a robot with a striped shirt and a wrench
output:
url: image-9.png
- text: A <s0><s1> Reachy a robot with a striped shirt and a pair of scissors
output:
url: image-10.png
- text: A <s0><s1> Reachy a robot with a striped shirt and a pair of scissors
output:
url: image-11.png
- text: A <s0><s1> Reachy a robot wearing a striped shirt standing on a street
output:
url: image-12.png
- text: A <s0><s1> Reachy a robot in a striped dress standing on a street
output:
url: image-13.png
- text: A <s0><s1> Reachy a robot standing in the middle of a street
output:
url: image-14.png
- text: A <s0><s1> Reachy a robot wearing a striped shirt standing on a street
output:
url: image-15.png
- text: A <s0><s1> Reachy a robot wearing a striped shirt and holding a cell phone
output:
url: image-16.png
- text: A <s0><s1> Reachy a robot wearing a striped shirt and holding a cell phone
output:
url: image-17.png
- text: A <s0><s1> Reachy a robot with a striped shirt and a camera
output:
url: image-18.png
- text: A <s0><s1> Reachy a robot wearing a striped shirt and holding a phone
output:
url: image-19.png
- text: A <s0><s1> Reachy a robot wearing a striped shirt and standing on a street
output:
url: image-20.png
- text: A <s0><s1> Reachy a robot wearing a striped shirt and holding a sign
output:
url: image-21.png
- text: A <s0><s1> Reachy a robot is standing on a wooden stand
output:
url: image-22.png
- text: A <s0><s1> Reachy a robot is standing in a room with a desk
output:
url: image-23.png
- text: A <s0><s1> Reachy a robot is standing in a room with a desk
output:
url: image-24.png
- text: A <s0><s1> Reachy a robot with a skeleton on top of a stand
output:
url: image-25.png
- text: A <s0><s1> Reachy a robot with a striped shirt standing on a stool
output:
url: image-26.png
- text: A <s0><s1> Reachy a robot with a striped shirt standing in a room
output:
url: image-27.png
- text: A <s0><s1> Reachy a robot with a striped shirt standing on a stand
output:
url: image-28.png
- text: A <s0><s1> Reachy a robot standing in front of a work bench
output:
url: image-29.png
- text: A <s0><s1> Reachy a robot with a black and white striped shirt standing in
front of a work bench
output:
url: image-30.png
- text: A <s0><s1> Reachy a robot with a striped shirt standing on a table
output:
url: image-31.png
- text: A <s0><s1> Reachy a robot is standing in front of a wall
output:
url: image-32.png
- text: A <s0><s1> Reachy a robot with a striped shirt standing in front of a work
bench
output:
url: image-33.png
- text: A <s0><s1> Reachy a robot with a striped shirt and a black and white striped
tie
output:
url: image-34.png
- text: A <s0><s1> Reachy a robot with a striped shirt and a black and white striped
tie
output:
url: image-35.png
- text: A <s0><s1> Reachy a robot with a striped shirt and a black and white striped
tie
output:
url: image-36.png
- text: A <s0><s1> Reachy a robot with a striped shirt and a black and white striped
tie
output:
url: image-37.png
- text: A <s0><s1> Reachy a robot with a large head and a small body
output:
url: image-38.png
- text: A <s0><s1> Reachy a robot with a large head and a small body
output:
url: image-39.png
- text: A <s0><s1> Reachy a robot with a black and white face and arms
output:
url: image-40.png
- text: A <s0><s1> Reachy a robot with a striped shirt and a black and white striped
tie
output:
url: image-41.png
- text: A <s0><s1> Reachy a white robot with a blue background sitting on a desk
output:
url: image-42.png
- text: A <s0><s1> Reachy a white robot with two eyes and a computer
output:
url: image-43.png
- text: A <s0><s1> Reachy a robot with a computer and a mouse on top of it
output:
url: image-44.png
- text: A <s0><s1> Reachy a robot is standing on a sidewalk near a body of water
output:
url: image-45.png
- text: A <s0><s1> Reachy a robot is standing on a sidewalk near a body of water
output:
url: image-46.png
- text: A <s0><s1> Reachy a robot is standing on a sidewalk near a body of water
output:
url: image-47.png
- text: A <s0><s1> Reachy a robot is standing on a sidewalk near a body of water
output:
url: image-48.png
- text: A <s0><s1> Reachy a white robot with a skeleton on it
output:
url: image-49.png
- text: A <s0><s1> Reachy a robot with a striped shirt and black and white pants
output:
url: image-50.png
- text: A <s0><s1> Reachy a robot with a black and white striped shirt
output:
url: image-51.png
- text: A <s0><s1> Reachy a robot holding a flower
output:
url: image-52.png
- text: A <s0><s1> Reachy a robot holding a flower
output:
url: image-53.png
- text: A <s0><s1> Reachy a robot holding flowers in front of a blue wall
output:
url: image-54.png
- text: A <s0><s1> Reachy a robot holding flowers in front of a blue wall
output:
url: image-55.png
- text: A <s0><s1> Reachy a robot holding a flower in its hand
output:
url: image-56.png
- text: A <s0><s1> Reachy a robot holding a flower in its hand
output:
url: image-57.png
- text: A <s0><s1> Reachy a robot holding a rose
output:
url: image-58.png
- text: A <s0><s1> Reachy a robot holding a rose
output:
url: image-59.png
- text: A <s0><s1> Reachy a robot with a flower in its hand
output:
url: image-60.png
- text: A <s0><s1> Reachy a robot with a flower in its hand
output:
url: image-61.png
- text: A <s0><s1> Reachy a robot with a flower in its hand
output:
url: image-62.png
- text: A <s0><s1> Reachy a robot holding a flower in its hand
output:
url: image-63.png
- text: A <s0><s1> Reachy a robot holding a flower in its hand
output:
url: image-64.png
- text: A <s0><s1> Reachy a robot holding a flower
output:
url: image-65.png
- text: A <s0><s1> Reachy a robot holding a rose in front of a blue wall
output:
url: image-66.png
- text: A <s0><s1> Reachy a person holding a robot that is standing on its legs
output:
url: image-67.png
- text: A <s0><s1> Reachy a robot with a blue background and a white body
output:
url: image-68.png
- text: A <s0><s1> Reachy a robot with two antennas on its head
output:
url: image-69.png
- text: A <s0><s1> Reachy a robot is holding a cup of coffee in front of a machine
output:
url: image-70.png
- text: A <s0><s1> Reachy a robot in a living room with people sitting on couches
output:
url: image-71.png
- text: A <s0><s1> Reachy a robot skeleton stands in the middle of a room
output:
url: image-72.png
- text: A <s0><s1> Reachy a woman is standing next to a robot that is on display
output:
url: image-73.png
- text: A <s0><s1> Reachy a robot on a stand with a black background
output:
url: image-74.png
- text: A <s0><s1> Reachy a robot with a striped shirt on top of its mobile base
output:
url: image-75.png
- text: A <s0><s1> Reachy a robot on a stand with a black background
output:
url: image-76.png
- text: A <s0><s1> Reachy a robot on top of its mobile base with a striped shirt on
its mobile base
output:
url: image-77.png
- text: A <s0><s1> Reachy a robot on top of its mobile base with a striped shirt and
black and white stripes
output:
url: image-78.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A <s0><s1> Reachy
license: openrail++
---
# SDXL LoRA DreamBooth - revellsi/reachy-img-generator20240104
<Gallery />
## Model description
### These are revellsi/reachy-img-generator20240104 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`reachy-img-generator20240104.safetensors` here 💾](/revellsi/reachy-img-generator20240104/blob/main/reachy-img-generator20240104.safetensors)**.
- Place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:reachy-img-generator20240104:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`reachy-img-generator20240104_emb.safetensors` here 💾](/revellsi/reachy-img-generator20240104/blob/main/reachy-img-generator20240104_emb.safetensors)**.
- Place it in your `embeddings` folder
- Use it by adding `reachy-img-generator20240104_emb` to your prompt. For example, `A reachy-img-generator20240104_emb Reachy`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('revellsi/reachy-img-generator20240104', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='revellsi/reachy-img-generator20240104', filename='reachy-img-generator20240104_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('A <s0><s1> Reachy').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the new inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Details
All [Files & versions](/revellsi/reachy-img-generator20240104/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
JackFram/llama-160m
|
JackFram
| 2024-01-04T09:26:17Z | 219,643 | 34 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:wikipedia",
"arxiv:2305.09781",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-26T16:49:26Z |
---
license: apache-2.0
language:
- en
datasets:
- wikipedia
pipeline_tag: text-generation
---
## Model description
This is a LLaMA-like model with only 160M parameters trained on Wikipedia and part of the C4-en and C4-realnewslike datasets.
No evaluation has been conducted yet, so use it with care.
The model is mainly developed as a base Small Speculative Model in the [SpecInfer](https://arxiv.org/abs/2305.09781) paper.
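For quick experimentation, a minimal generation sketch with transformers is shown below; the prompt and sampling settings are illustrative assumptions.
```python
# Minimal sketch (assumption): plain text generation with llama-160m.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("JackFram/llama-160m")
model = AutoModelForCausalLM.from_pretrained("JackFram/llama-160m")

inputs = tokenizer("The history of artificial intelligence", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```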
## Citation
To cite the model, please use
```bibtex
@misc{miao2023specinfer,
title={SpecInfer: Accelerating Generative LLM Serving with Speculative Inference and Token Tree Verification},
author={Xupeng Miao and Gabriele Oliaro and Zhihao Zhang and Xinhao Cheng and Zeyu Wang and Rae Ying Yee Wong and Zhuoming Chen and Daiyaan Arfeen and Reyna Abhyankar and Zhihao Jia},
year={2023},
eprint={2305.09781},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
DmitryNvm/sdxl-lora-dreambooth-style
|
DmitryNvm
| 2024-01-04T09:22:06Z | 0 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-12-22T21:05:10Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'a man in szn style'
output:
url:
"image_0.png"
- text: 'a man in szn style'
output:
url:
"image_1.png"
- text: 'a man in szn style'
output:
url:
"image_2.png"
- text: 'a man in szn style'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a cat of in szn style
license: openrail++
---
# SDXL LoRA DreamBooth - DmitryNvm/sdxl-lora-dreambooth-style
<Gallery />
## Model description
These are DmitryNvm/sdxl-lora-dreambooth-style LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: None.
## Trigger words
You should use a cat of in szn style to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](DmitryNvm/sdxl-lora-dreambooth-style/tree/main) them in the Files & versions tab.
|
raminass/SCOTUS_AI_15
|
raminass
| 2024-01-04T09:13:57Z | 14 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:raminass/scotus-v10",
"base_model:finetune:raminass/scotus-v10",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-03T10:05:09Z |
---
license: cc-by-sa-4.0
base_model: raminass/scotus-v10
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SCOTUS_AI
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SCOTUS_AI
This model is a fine-tuned version of [raminass/scotus-v10](https://huggingface.co/raminass/scotus-v10) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7680
- Accuracy: 0.8341
## Model description
More information needed
## Intended uses & limitations
More information needed
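Pending proper documentation, a minimal classification sketch follows; the example excerpt is an assumption, and the mapping from predicted labels to justices is not documented in this card.
```python
# Minimal sketch (assumption): classify an opinion excerpt with the fine-tuned model.
from transformers import pipeline

classifier = pipeline("text-classification", model="raminass/SCOTUS_AI_15")
excerpt = "The judgment of the Court of Appeals is reversed, and the case is remanded."
print(classifier(excerpt))
```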
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5767 | 1.0 | 1800 | 0.6222 | 0.8243 |
| 0.2965 | 2.0 | 3600 | 0.6352 | 0.8339 |
| 0.1832 | 3.0 | 5400 | 0.7201 | 0.8261 |
| 0.0991 | 4.0 | 7200 | 0.7398 | 0.8356 |
| 0.0616 | 5.0 | 9000 | 0.7680 | 0.8341 |
### Justices
| Justice | Count |
|-----------|-------|
| Thomas | 571 |
| Scalia | 473 |
| Breyer | 443 |
| Stevens | 407 |
| Ginsburg | 390 |
| Kennedy | 326 |
| Alito | 286 |
| Souter | 230 |
| Sotomayor | 226 |
| O'Connor | 167 |
| Kagan | 145 |
| Rehnquist | 144 |
| Roberts | 123 |
| Gorsuch | 109 |
| Kavanaugh | 65 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
chrisgg1/wav2vec2-base-finetuned-ks-verbinden4
|
chrisgg1
| 2024-01-04T09:10:42Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-01-04T08:06:44Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks-verbinden4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks-verbinden4
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0089
- Accuracy: 0.9986
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.062 | 1.0 | 290 | 0.0571 | 0.9909 |
| 0.0193 | 2.0 | 581 | 0.0332 | 0.9930 |
| 0.0254 | 3.0 | 871 | 0.0089 | 0.9986 |
| 0.0187 | 4.0 | 1162 | 0.0094 | 0.9981 |
| 0.0081 | 4.99 | 1450 | 0.0128 | 0.9966 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
jupitertech72/outbreak-dream-booth
|
jupitertech72
| 2024-01-04T09:00:00Z | 1 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-04T08:45:21Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Outbreak-Dream-booth Dreambooth model trained by jupitertech72 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
AlanDlink/whisper-tiny-tw
|
AlanDlink
| 2024-01-04T08:57:44Z | 1 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"whisper",
"hf-asr-leaderboard",
"generated_from_trainer",
"zh",
"dataset:mozilla-foundation/common_voice_15_0",
"base_model:openai/whisper-tiny",
"base_model:adapter:openai/whisper-tiny",
"license:apache-2.0",
"region:us"
] | null | 2024-01-02T06:48:31Z |
---
language:
- zh
license: apache-2.0
library_name: peft
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_15_0
base_model: openai/whisper-tiny
model-index:
- name: Whisper tiny TW - AlanDlink
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper tiny TW - AlanDlink
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 15.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6078
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.3802 | 0.67 | 500 | 3.3992 |
| 2.1962 | 1.33 | 1000 | 2.1643 |
| 1.4348 | 2.0 | 1500 | 1.4068 |
| 0.7108 | 2.67 | 2000 | 0.6926 |
| 0.6801 | 3.33 | 2500 | 0.6374 |
| 0.6273 | 4.0 | 3000 | 0.6195 |
| 0.6001 | 4.67 | 3500 | 0.6106 |
| 0.6082 | 5.33 | 4000 | 0.6078 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
yc4142/phi-1_5-lora-int8-double-metaphor-CoT
|
yc4142
| 2024-01-04T08:57:26Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/phi-1_5",
"base_model:adapter:microsoft/phi-1_5",
"region:us"
] | null | 2024-01-04T04:35:41Z |
---
library_name: peft
base_model: microsoft/phi-1_5
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
s3nh/GeneZC-MiniChat-2-3B-GGUF
|
s3nh
| 2024-01-04T08:56:00Z | 4 | 2 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-04T08:52:32Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/GeneZC/MiniChat-2-3B).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values.
This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for
inference or for identifying the model.
### Perplexity params
| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|:-----:|:----------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
### inference
TODO
# Original model card
|
KnutJaegersberg/platypus-1_8b
|
KnutJaegersberg
| 2024-01-04T08:54:32Z | 1,443 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-03T21:31:25Z |
---
license: other
license_name: qwen
license_link: LICENSE
---
Full fine-tune of qwen-1_8b over open platypus for 5 epochs.
General Prompt Example:
```
### Instruction:
{instruction}
### Response:
```
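A minimal sketch of applying that template with transformers is shown below; the instruction text and generation settings are illustrative assumptions.
```python
# Minimal sketch (assumption): wrap a request in the instruction template shown above.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("KnutJaegersberg/platypus-1_8b")
model = AutoModelForCausalLM.from_pretrained("KnutJaegersberg/platypus-1_8b")

prompt = "### Instruction:\nSummarize the Open Platypus dataset in one sentence.\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```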
License Agreement: Code and checkpoints are open for research purposes. Please check the LICENSE file for the specific open-source license details. For commercial use, please contact us.
|
aumy/RL-CartPole-v1
|
aumy
| 2024-01-04T08:51:27Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-04T08:51:14Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: RL-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 479.30 +/- 62.10
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
rabil/TinyLlama-1.1B-Chat-v1.0-llamafile
|
rabil
| 2024-01-04T08:39:20Z | 22 | 0 | null |
[
"llamafile",
"GGUF",
"base_model:TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF",
"base_model:finetune:TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF",
"region:us"
] | null | 2024-01-04T07:45:24Z |
---
tags:
- llamafile
- GGUF
base_model: TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF
---
## TinyLlama-1.1B-Chat-v1.0-llamafile
llamafile lets you distribute and run LLMs with a single file. [announcement blog post](https://hacks.mozilla.org/2023/11/introducing-llamafile/)
#### Downloads
- [tinyllama-1.1b-chat-v1.0.Q3_K_M-server.llamafile](https://huggingface.co/rabil/TinyLlama-1.1B-Chat-v1.0-llamafile/resolve/main/tinyllama-1.1b-chat-v1.0.Q3_K_M-server.llamafile)
- [tinyllama-1.1b-chat-v1.0.Q4_K_M-server.llamafile](https://huggingface.co/rabil/TinyLlama-1.1B-Chat-v1.0-llamafile/resolve/main/tinyllama-1.1b-chat-v1.0.Q4_K_M-server.llamafile)
- [tinyllama-1.1b-chat-v1.0.Q5_0-server.llamafile](https://huggingface.co/rabil/TinyLlama-1.1B-Chat-v1.0-llamafile/resolve/main/tinyllama-1.1b-chat-v1.0.Q5_0-server.llamafile)
- [tinyllama-1.1b-chat-v1.0.Q5_K_M-server.llamafile](https://huggingface.co/rabil/TinyLlama-1.1B-Chat-v1.0-llamafile/resolve/main/tinyllama-1.1b-chat-v1.0.Q5_K_M-server.llamafile)
- [tinyllama-1.1b-chat-v1.0.Q8_0-server.llamafile](https://huggingface.co/rabil/TinyLlama-1.1B-Chat-v1.0-llamafile/resolve/main/tinyllama-1.1b-chat-v1.0.Q8_0-server.llamafile)
This repository was created using the [llamafile-builder](https://github.com/rabilrbl/llamafile-builder)
|
winyap1516/mygpt
|
winyap1516
| 2024-01-04T08:34:02Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"code",
"text-generation-inference",
"en",
"zh",
"ms",
"vi",
"ta",
"th",
"dataset:wikimedia/wikipedia",
"dataset:HuggingFaceH4/ultrachat_200k",
"license:mit",
"region:us"
] | null | 2024-01-04T07:44:25Z |
---
license: mit
datasets:
- wikimedia/wikipedia
- HuggingFaceH4/ultrachat_200k
language:
- en
- zh
- ms
- vi
- ta
- th
metrics:
- bleurt
- bleu
- cer
- accuracy
- code_eval
library_name: adapter-transformers
tags:
- code
- text-generation-inference
---
|
abhishek/bertxxx1
|
abhishek
| 2024-01-04T08:26:35Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"coreml",
"onnx",
"safetensors",
"bert",
"fill-mask",
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-01-04T08:26:35Z |
---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# BERT base model (uncased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally masks the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Model variations
BERT was originally released in base and large variations, for cased and uncased input text. The uncased models also strip out accent markers.
Chinese and multilingual uncased and cased versions followed shortly after.
Modified preprocessing with whole word masking has replaced subpiece masking in a following work, with the release of two models.
Another 24 smaller models were released afterward.
The detailed release history can be found on the [google-research/bert readme](https://github.com/google-research/bert/blob/master/README.md) on github.
| Model | #params | Language |
|------------------------|--------------------------------|-------|
| [`bert-base-uncased`](https://huggingface.co/bert-base-uncased) | 110M | English |
| [`bert-large-uncased`](https://huggingface.co/bert-large-uncased) | 340M | English |
| [`bert-base-cased`](https://huggingface.co/bert-base-cased) | 110M | English |
| [`bert-large-cased`](https://huggingface.co/bert-large-cased) | 340M | English |
| [`bert-base-chinese`](https://huggingface.co/bert-base-chinese) | 110M | Chinese |
| [`bert-base-multilingual-cased`](https://huggingface.co/bert-base-multilingual-cased) | 110M | Multiple |
| [`bert-large-uncased-whole-word-masking`](https://huggingface.co/bert-large-uncased-whole-word-masking) | 340M | English |
| [`bert-large-cased-whole-word-masking`](https://huggingface.co/bert-large-cased-whole-word-masking) | 340M | English |
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions of a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
'score': 0.1073106899857521,
'token': 4827,
'token_str': 'fashion'},
{'sequence': "[CLS] hello i'm a role model. [SEP]",
'score': 0.08774490654468536,
'token': 2535,
'token_str': 'role'},
{'sequence': "[CLS] hello i'm a new model. [SEP]",
'score': 0.05338378623127937,
'token': 2047,
'token_str': 'new'},
{'sequence': "[CLS] hello i'm a super model. [SEP]",
'score': 0.04667217284440994,
'token': 3565,
'token_str': 'super'},
{'sequence': "[CLS] hello i'm a fine model. [SEP]",
'score': 0.027095865458250046,
'token': 2986,
'token_str': 'fine'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("The man worked as a [MASK].")
[{'sequence': '[CLS] the man worked as a carpenter. [SEP]',
'score': 0.09747550636529922,
'token': 10533,
'token_str': 'carpenter'},
{'sequence': '[CLS] the man worked as a waiter. [SEP]',
'score': 0.0523831807076931,
'token': 15610,
'token_str': 'waiter'},
{'sequence': '[CLS] the man worked as a barber. [SEP]',
'score': 0.04962705448269844,
'token': 13362,
'token_str': 'barber'},
{'sequence': '[CLS] the man worked as a mechanic. [SEP]',
'score': 0.03788609802722931,
'token': 15893,
'token_str': 'mechanic'},
{'sequence': '[CLS] the man worked as a salesman. [SEP]',
'score': 0.037680890411138535,
'token': 18968,
'token_str': 'salesman'}]
>>> unmasker("The woman worked as a [MASK].")
[{'sequence': '[CLS] the woman worked as a nurse. [SEP]',
'score': 0.21981462836265564,
'token': 6821,
'token_str': 'nurse'},
{'sequence': '[CLS] the woman worked as a waitress. [SEP]',
'score': 0.1597415804862976,
'token': 13877,
'token_str': 'waitress'},
{'sequence': '[CLS] the woman worked as a maid. [SEP]',
'score': 0.1154729500412941,
'token': 10850,
'token_str': 'maid'},
{'sequence': '[CLS] the woman worked as a prostitute. [SEP]',
'score': 0.037968918681144714,
'token': 19215,
'token_str': 'prostitute'},
{'sequence': '[CLS] the woman worked as a cook. [SEP]',
'score': 0.03042375110089779,
'token': 5660,
'token_str': 'cook'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus, and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
| | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=bert-base-uncased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
sudy-super/baku-10b-chat-v2
|
sudy-super
| 2024-01-04T08:26:21Z | 7 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"ja",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-04T06:55:31Z |
---
license: apache-2.0
language:
- ja
- en
---
## Description
This is a 10.2 billion parameter model that combines two sets of 24 layers each from [CALM2-7B-chat](https://huggingface.co/cyberagent/calm2-7b-chat) using slerp-merge.
## Chat Template
```
USER: {user_message1}
ASSISTANT: {assistant_message1}<|endoftext|>
USER: {user_message2}
ASSISTANT: {assistant_message2}<|endoftext|>
USER: {user_message3}
ASSISTANT: {assistant_message3}<|endoftext|>
```
## Tutorial
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("sudy-super/baku-10b-chat-v2")
model = AutoModelForCausalLM.from_pretrained("sudy-super/baku-10b-chat-v2", device_map="auto", torch_dtype=torch.bfloat16)
raw_prompt = "仕事の熱意を取り戻すためのアイデアを5つ挙げてください。"
prompt = f"USER:{raw_prompt}\nASSISTANT:"
token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
with torch.no_grad():
output_ids = model.generate(
token_ids.to(model.device),
max_new_tokens=100,
do_sample=True,
temperature=0.8,
pad_token_id=tokenizer.pad_token_id,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id
)
result = tokenizer.decode(output_ids.tolist()[0])
print(result)
```
|
neil-code/autotrain-text-classific-imdb
|
neil-code
| 2024-01-04T08:24:29Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"autotrain",
"dataset:neil-code/autotrain-data-autotrain-text-classific-imdb",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-04T08:24:17Z |
---
tags:
- autotrain
- text-classification
widget:
- text: "I love AutoTrain"
datasets:
- neil-code/autotrain-data-autotrain-text-classific-imdb
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.32178911566734314
f1: 0.8431746031746032
precision: 0.8952808988764045
recall: 0.7968
auc: 0.94403024
accuracy: 0.8518
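A minimal usage sketch with the standard `transformers` pipeline API (the label names returned depend on how the training data was encoded):
```python
from transformers import pipeline

# Illustrative usage; adjust the model ID if the repository moves.
classifier = pipeline("text-classification", model="neil-code/autotrain-text-classific-imdb")
print(classifier("I love AutoTrain"))
```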
|
uttam333/layoutlm-funsd
|
uttam333
| 2024-01-04T08:17:52Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"layoutlm",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/layoutlm-base-uncased",
"base_model:finetune:microsoft/layoutlm-base-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-02T18:45:01Z |
---
license: mit
base_model: microsoft/layoutlm-base-uncased
tags:
- generated_from_trainer
model-index:
- name: layoutlm-funsd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlm-funsd
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0754
- Ignal: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11}
- Oise: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12}
- Overall Precision: 0.0
- Overall Recall: 0.0
- Overall F1: 0.0
- Overall Accuracy: 0.9670
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` equivalent is sketched after the list):
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
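For reference, a rough `TrainingArguments` equivalent of the list above; dataset preparation and the `Trainer` call are omitted, and the output directory name is a placeholder:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="layoutlm-funsd",          # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    fp16=True,                            # "Native AMP" mixed precision
)
```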
### Training results
| Training Loss | Epoch | Step | Validation Loss | Ignal | Oise | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.7198 | 1.0 | 1 | 0.7152 | {'precision': 0.010416666666666666, 'recall': 0.09090909090909091, 'f1': 0.018691588785046728, 'number': 11} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | 0.0052 | 0.0435 | 0.0093 | 0.5024 |
| 0.7121 | 2.0 | 2 | 0.7152 | {'precision': 0.010416666666666666, 'recall': 0.09090909090909091, 'f1': 0.018691588785046728, 'number': 11} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | 0.0052 | 0.0435 | 0.0093 | 0.5024 |
| 0.7191 | 3.0 | 3 | 0.4802 | {'precision': 0.045454545454545456, 'recall': 0.09090909090909091, 'f1': 0.060606060606060615, 'number': 11} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | 0.0222 | 0.0435 | 0.0294 | 0.9245 |
| 0.4799 | 4.0 | 4 | 0.3268 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | 0.0 | 0.0 | 0.0 | 0.9646 |
| 0.3263 | 5.0 | 5 | 0.2246 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | 0.0 | 0.0 | 0.0 | 0.9670 |
| 0.2269 | 6.0 | 6 | 0.1598 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | 0.0 | 0.0 | 0.0 | 0.9670 |
| 0.1625 | 7.0 | 7 | 0.1227 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | 0.0 | 0.0 | 0.0 | 0.9670 |
| 0.1246 | 8.0 | 8 | 0.1030 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | 0.0 | 0.0 | 0.0 | 0.9670 |
| 0.1042 | 9.0 | 9 | 0.0937 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | 0.0 | 0.0 | 0.0 | 0.9670 |
| 0.0942 | 10.0 | 10 | 0.0892 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | 0.0 | 0.0 | 0.0 | 0.9670 |
| 0.0888 | 11.0 | 11 | 0.0861 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | 0.0 | 0.0 | 0.0 | 0.9670 |
| 0.0834 | 12.0 | 12 | 0.0832 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | 0.0 | 0.0 | 0.0 | 0.9670 |
| 0.0768 | 13.0 | 13 | 0.0805 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | 0.0 | 0.0 | 0.0 | 0.9670 |
| 0.0745 | 14.0 | 14 | 0.0778 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | 0.0 | 0.0 | 0.0 | 0.9670 |
| 0.071 | 15.0 | 15 | 0.0754 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 12} | 0.0 | 0.0 | 0.0 | 0.9670 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
anhdt-dsai-02/Bloom_1_4
|
anhdt-dsai-02
| 2024-01-04T08:17:19Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:bigscience/bloomz-3b",
"base_model:adapter:bigscience/bloomz-3b",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2024-01-04T07:17:33Z |
---
license: bigscience-bloom-rail-1.0
library_name: peft
tags:
- generated_from_trainer
base_model: bigscience/bloomz-3b
model-index:
- name: Bloom_1_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bloom_1_4
This model is a fine-tuned version of [bigscience/bloomz-3b](https://huggingface.co/bigscience/bloomz-3b) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.0.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
svenbl80/deberta-v3-Base-finetuned-chatdoc-V3
|
svenbl80
| 2024-01-04T07:56:08Z | 44 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"deberta-v2",
"text-classification",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-04T07:48:18Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: svenbl80/deberta-v3-Base-finetuned-chatdoc-V3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# svenbl80/deberta-v3-Base-finetuned-chatdoc-V3
This model is a fine-tuned version of [microsoft/deberta-v3-Base](https://huggingface.co/microsoft/deberta-v3-Base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4813
- Validation Loss: 0.3148
- Train Accuracy: 0.9101
- Epoch: 6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough Keras equivalent is sketched after the list):
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 165, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
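For reference, a rough Keras equivalent of the optimizer configuration above (illustrative only, values taken from the dict):
```python
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=165,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08, amsgrad=False
)
```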
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.9425 | 0.5627 | 0.8652 | 0 |
| 0.7274 | 0.4642 | 0.8652 | 1 |
| 0.7171 | 0.4548 | 0.8652 | 2 |
| 0.6923 | 0.4057 | 0.8652 | 3 |
| 0.6439 | 0.3914 | 0.8652 | 4 |
| 0.5554 | 0.3408 | 0.8652 | 5 |
| 0.4813 | 0.3148 | 0.9101 | 6 |
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.9.1
- Datasets 2.15.0
- Tokenizers 0.13.3
|
xaviviro/PetitXat-0.1-1.1b-GGUF
|
xaviviro
| 2024-01-04T07:55:32Z | 78 | 0 | null |
[
"gguf",
"ca",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-01-04T07:16:30Z |
---
license: apache-2.0
language:
- ca
model_creator: xaviviro
model_name: PetitXat-0.1-1.1b
prompt_template: '<|system|>\n{system}</s>\n<|user|>{instruction}</s>\n<|assistant|>\n'
---
# PetitXat 1.1B: The smallest chat model in Catalan

PetitXat is the smallest Catalan-language chat model. It is the result of fine-tuning [TinyLlama-1.1B-Chat-v1.0](/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the [OpenAssistant v2](/datasets/OpenAssistant/oasst2) instructions, machine-translated into Catalan with [Helsinki-NLP](/Helsinki-NLP) resources and processed in the ChatGLM3 format.
## Prompt format
```
<|system|>
Ets un bon assistent</s>
<|user|>
Qui va ser Isaac Newton?</s>
<|assistant|>
```
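A minimal usage sketch with `llama-cpp-python` (assumed tooling for GGUF files; the model filename below is a placeholder, substitute the file you download from this repository):
```python
from llama_cpp import Llama

# Placeholder path: use the GGUF file downloaded from this repository.
llm = Llama(model_path="petitxat-0.1-1.1b.q4_k_m.gguf", n_ctx=2048)

prompt = (
    "<|system|>\nEts un bon assistent</s>\n"
    "<|user|>\nQui va ser Isaac Newton?</s>\n"
    "<|assistant|>\n"
)
output = llm(prompt, max_tokens=256, stop=["</s>"])
print(output["choices"][0]["text"])
```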
|
marcogfedozzi/reinforce-Pixelcopter-PLE-v0-optim
|
marcogfedozzi
| 2024-01-04T07:42:44Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-04T07:28:27Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: reinforce-Pixelcopter-PLE-v0-optim
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 39.40 +/- 22.24
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
shahrukh95/falcon-7b-Set-1-cybersecurity-layered-config
|
shahrukh95
| 2024-01-04T07:42:38Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:tiiuae/falcon-7b",
"base_model:finetune:tiiuae/falcon-7b",
"license:apache-2.0",
"region:us"
] | null | 2024-01-04T07:41:05Z |
---
license: apache-2.0
base_model: tiiuae/falcon-7b
tags:
- generated_from_trainer
model-index:
- name: falcon-7b-Set-1-cybersecurity-layered-config
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon-7b-Set-1-cybersecurity-layered-config
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 15
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
mamamiya405/demo_law
|
mamamiya405
| 2024-01-04T07:42:21Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-05T11:50:18Z |
---
library_name: peft
---
## base_model
- decapoda-research/llama-7b-hf
## Training procedure
The following `bitsandbytes` quantization config was used during training (a rough `BitsAndBytesConfig` equivalent is sketched after the list):
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
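For reference, a rough `transformers` equivalent of this configuration (illustrative only; the original training script is not part of this repository):
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",   # base model listed above
    quantization_config=bnb_config,
    device_map="auto",
)
```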
### Framework versions
- PEFT 0.4.0.dev0
|
mamamiya405/alpaca_lora_doc_summary
|
mamamiya405
| 2024-01-04T07:41:12Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-07T13:42:38Z |
---
library_name: peft
---
## base_model
- decapoda-research/llama-7b-hf
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
svenbl80/roberta-base-finetuned-chatdoc-V3
|
svenbl80
| 2024-01-04T07:40:26Z | 1 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"roberta",
"text-classification",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-04T07:07:45Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: svenbl80/roberta-base-finetuned-chatdoc-V3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# svenbl80/roberta-base-finetuned-chatdoc-V3
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6956
- Validation Loss: 0.4497
- Train Accuracy: 0.8652
- Epoch: 28
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-06, 'decay_steps': 330, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.1344 | 1.1124 | 0.1236 | 0 |
| 1.0978 | 1.0641 | 0.8652 | 1 |
| 1.0575 | 1.0020 | 0.8652 | 2 |
| 0.9999 | 0.9336 | 0.8652 | 3 |
| 0.9391 | 0.8170 | 0.8652 | 4 |
| 0.8501 | 0.6621 | 0.8652 | 5 |
| 0.7780 | 0.5321 | 0.8652 | 6 |
| 0.7866 | 0.4850 | 0.8652 | 7 |
| 0.7613 | 0.4796 | 0.8652 | 8 |
| 0.7512 | 0.4847 | 0.8652 | 9 |
| 0.7432 | 0.4933 | 0.8652 | 10 |
| 0.7474 | 0.4919 | 0.8652 | 11 |
| 0.7580 | 0.4863 | 0.8652 | 12 |
| 0.7253 | 0.4840 | 0.8652 | 13 |
| 0.7166 | 0.4724 | 0.8652 | 14 |
| 0.7245 | 0.4725 | 0.8652 | 15 |
| 0.7144 | 0.4706 | 0.8652 | 16 |
| 0.6870 | 0.4628 | 0.8652 | 17 |
| 0.6925 | 0.4583 | 0.8652 | 18 |
| 0.6945 | 0.4620 | 0.8652 | 19 |
| 0.6930 | 0.4564 | 0.8652 | 20 |
| 0.6737 | 0.4572 | 0.8652 | 21 |
| 0.6809 | 0.4496 | 0.8652 | 22 |
| 0.6766 | 0.4523 | 0.8652 | 23 |
| 0.7007 | 0.4525 | 0.8652 | 24 |
| 0.6945 | 0.4538 | 0.8652 | 25 |
| 0.6980 | 0.4521 | 0.8652 | 26 |
| 0.6769 | 0.4508 | 0.8652 | 27 |
| 0.6956 | 0.4497 | 0.8652 | 28 |
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.9.1
- Datasets 2.15.0
- Tokenizers 0.13.3
|
shrikant11/pokemon_text_to_image_2
|
shrikant11
| 2024-01-04T07:32:52Z | 18 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dataset:lambdalabs/pokemon-blip-captions",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-04T07:25:01Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
datasets:
- lambdalabs/pokemon-blip-captions
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
# Text-to-image finetuning - shrikant11/pokemon_text_to_image_2
This pipeline was fine-tuned from **runwayml/stable-diffusion-v1-5** on the **lambdalabs/pokemon-blip-captions** dataset. Below are some example images generated with the fine-tuned pipeline using the following prompts: ['Pokemon with yellow eyes', 'Green colour pokemon', 'Blue colour pikacchu', 'Charlizzard', 'pikachu', 'dangerous looking pokemon']:

## Pipeline usage
You can use the pipeline like so:
```python
from diffusers import DiffusionPipeline
import torch
pipeline = DiffusionPipeline.from_pretrained("shrikant11/pokemon_text_to_image_2", torch_dtype=torch.float16)
prompt = "Pokemon with yellow eyes"
image = pipeline(prompt).images[0]
image.save("my_image.png")
```
## Training info
These are the key hyperparameters used during training:
* Epochs: 1
* Learning rate: 1e-05
* Batch size: 1
* Gradient accumulation steps: 1
* Image resolution: 512
* Mixed-precision: None
More information on all the CLI arguments and the environment is available on your [`wandb` run page]().
|
salazar-rich/q-FrozenLake-v1-4x4-noSlippery
|
salazar-rich
| 2024-01-04T07:29:34Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-04T07:29:27Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gymnasium as gym  # or `import gym`, depending on your setup
# load_from_hub is the helper defined in the Deep RL Course notebooks (not included in this repo)
model = load_from_hub(repo_id="salazar-rich/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|