modelId
stringlengths 5
139
| author
stringlengths 2
42
| last_modified
timestamp[us, tz=UTC]date 2020-02-15 11:33:14
2025-09-02 12:32:32
| downloads
int64 0
223M
| likes
int64 0
11.7k
| library_name
stringclasses 534
values | tags
listlengths 1
4.05k
| pipeline_tag
stringclasses 55
values | createdAt
timestamp[us, tz=UTC]date 2022-03-02 23:29:04
2025-09-02 12:31:20
| card
stringlengths 11
1.01M
|
---|---|---|---|---|---|---|---|---|---|
helenai/deepset-xlm-roberta-base-squad2-ov
|
helenai
| 2023-07-21T16:01:54Z | 3 | 0 |
transformers
|
[
"transformers",
"openvino",
"xlm-roberta",
"question-answering",
"en",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-21T16:01:00Z |
---
language:
- en
tags:
- openvino
---
# deepset/xlm-roberta-base-squad2
This is the [deepset/xlm-roberta-base-squad2](https://huggingface.co/deepset/xlm-roberta-base-squad2) model converted to [OpenVINO](https://openvino.ai), for accellerated inference.
An example of how to do inference on this model:
```python
from optimum.intel.openvino import OVModelForQuestionAnswering
from transformers import AutoTokenizer, pipeline
# model_id should be set to either a local directory or a model available on the HuggingFace hub.
model_id = "helenai/deepset-xlm-roberta-base-squad2-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForQuestionAnswering.from_pretrained(model_id)
pipe = pipeline("question-answering", model=model, tokenizer=tokenizer)
result = pipe("What is OpenVINO?", "OpenVINO is a framework that accelerates deep learning inferencing")
print(result)
```
|
chh6/Reinforce_cartpole
|
chh6
| 2023-07-21T15:58:11Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-21T15:58:00Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce_cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
helenai/xlm-roberta-large-finetuned-conll03-english-ov
|
helenai
| 2023-07-21T15:57:10Z | 120 | 0 |
transformers
|
[
"transformers",
"openvino",
"xlm-roberta",
"token-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-04-21T12:56:06Z |
---
language:
- en
tags:
- openvino
---
# xlm-roberta-large-finetuned-conll03-english
This is the [xlm-roberta-large-finetuned-conll03-english](https://huggingface.co/xlm-roberta-large-finetuned-conll03-english) model converted to [OpenVINO](https://openvino.ai), for accellerated inference.
An example of how to do inference on this model:
```python
from optimum.intel.openvino import OVModelForTokenClassification
from transformers import AutoTokenizer, pipeline
# model_id should be set to either a local directory or a model available on the HuggingFace hub.
model_id = "helenai/xlm-roberta-large-finetuned-conll03-english-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForTokenClassification.from_pretrained(model_id)
pipe = pipeline("token-classification", model=model, tokenizer=tokenizer)
result = pipe("hello world")
print(result)
```
|
sagorsarker/bangla-bert-base
|
sagorsarker
| 2023-07-21T15:56:25Z | 13,626 | 21 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"bengali",
"bengali-lm",
"bangla",
"bn",
"dataset:common_crawl",
"dataset:wikipedia",
"dataset:oscar",
"arxiv:1810.04805",
"arxiv:2012.14353",
"arxiv:2104.08613",
"arxiv:2107.03844",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: bn
tags:
- bert
- bengali
- bengali-lm
- bangla
license: mit
datasets:
- common_crawl
- wikipedia
- oscar
---
# Bangla BERT Base
A long way passed. Here is our **Bangla-Bert**! It is now available in huggingface model hub.
[Bangla-Bert-Base](https://github.com/sagorbrur/bangla-bert) is a pretrained language model of Bengali language using mask language modeling described in [BERT](https://arxiv.org/abs/1810.04805) and it's github [repository](https://github.com/google-research/bert)
## Pretrain Corpus Details
Corpus was downloaded from two main sources:
* Bengali commoncrawl corpus downloaded from [OSCAR](https://oscar-corpus.com/)
* [Bengali Wikipedia Dump Dataset](https://dumps.wikimedia.org/bnwiki/latest/)
After downloading these corpora, we preprocessed it as a Bert format. which is one sentence per line and an extra newline for new documents.
```
sentence 1
sentence 2
sentence 1
sentence 2
```
## Building Vocab
We used [BNLP](https://github.com/sagorbrur/bnlp) package for training bengali sentencepiece model with vocab size 102025. We preprocess the output vocab file as Bert format.
Our final vocab file availabe at [https://github.com/sagorbrur/bangla-bert](https://github.com/sagorbrur/bangla-bert) and also at [huggingface](https://huggingface.co/sagorsarker/bangla-bert-base) model hub.
## Training Details
* Bangla-Bert was trained with code provided in Google BERT's github repository (https://github.com/google-research/bert)
* Currently released model follows bert-base-uncased model architecture (12-layer, 768-hidden, 12-heads, 110M parameters)
* Total Training Steps: 1 Million
* The model was trained on a single Google Cloud GPU
## Evaluation Results
### LM Evaluation Results
After training 1 million steps here are the evaluation results.
```
global_step = 1000000
loss = 2.2406516
masked_lm_accuracy = 0.60641736
masked_lm_loss = 2.201459
next_sentence_accuracy = 0.98625
next_sentence_loss = 0.040997364
perplexity = numpy.exp(2.2406516) = 9.393331287442784
Loss for final step: 2.426227
```
### Downstream Task Evaluation Results
- Evaluation on Bengali Classification Benchmark Datasets
Huge Thanks to [Nick Doiron](https://twitter.com/mapmeld) for providing evaluation results of the classification task.
He used [Bengali Classification Benchmark](https://github.com/rezacsedu/Classification_Benchmarks_Benglai_NLP) datasets for the classification task.
Comparing to Nick's [Bengali electra](https://huggingface.co/monsoon-nlp/bangla-electra) and multi-lingual BERT, Bangla BERT Base achieves a state of the art result.
Here is the [evaluation script](https://github.com/sagorbrur/bangla-bert/blob/master/notebook/bangla-bert-evaluation-classification-task.ipynb).
| Model | Sentiment Analysis | Hate Speech Task | News Topic Task | Average |
| ----- | -------------------| ---------------- | --------------- | ------- |
| mBERT | 68.15 | 52.32 | 72.27 | 64.25 |
| Bengali Electra | 69.19 | 44.84 | 82.33 | 65.45 |
| Bangla BERT Base | 70.37 | 71.83 | 89.19 | 77.13 |
- Evaluation on [Wikiann](https://huggingface.co/datasets/wikiann) Datasets
We evaluated `Bangla-BERT-Base` with [Wikiann](https://huggingface.co/datasets/wikiann) Bengali NER datasets along with another benchmark three models(mBERT, XLM-R, Indic-BERT). </br>
`Bangla-BERT-Base` got a third-place where `mBERT` got first and `XML-R` got second place after training these models 5 epochs.
| Base Pre-trained Model | F1 Score | Accuracy |
| ----- | -------------------| ---------------- |
| [mBERT-uncased](https://huggingface.co/bert-base-multilingual-uncased) | 97.11 | 97.68 |
| [XLM-R](https://huggingface.co/xlm-roberta-base) | 96.22 | 97.03 |
| [Indic-BERT](https://huggingface.co/ai4bharat/indic-bert)| 92.66 | 94.74 |
| Bangla-BERT-Base | 95.57 | 97.49 |
All four model trained with [transformers-token-classification](https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/token_classification.ipynb) notebook.
You can find all models evaluation results [here](https://github.com/sagorbrur/bangla-bert/tree/master/evaluations/wikiann)
Also, you can check the below paper list. They used this model on their datasets.
* [DeepHateExplainer: Explainable Hate Speech Detection in Under-resourced Bengali Language](https://arxiv.org/abs/2012.14353)
* [Emotion Classification in a Resource Constrained Language Using Transformer-based Approach](https://arxiv.org/abs/2104.08613)
* [A Review of Bangla Natural Language Processing Tasks and the Utility of Transformer Models](https://arxiv.org/abs/2107.03844)
**NB: If you use this model for any NLP task please share evaluation results with us. We will add it here.**
## Limitations and Biases
## How to Use
**Bangla BERT Tokenizer**
```py
from transformers import AutoTokenizer, AutoModel
bnbert_tokenizer = AutoTokenizer.from_pretrained("sagorsarker/bangla-bert-base")
text = "আমি বাংলায় গান গাই।"
bnbert_tokenizer.tokenize(text)
# ['আমি', 'বাংলা', '##য', 'গান', 'গাই', '।']
```
**MASK Generation**
You can use this model directly with a pipeline for masked language modeling:
```py
from transformers import BertForMaskedLM, BertTokenizer, pipeline
model = BertForMaskedLM.from_pretrained("sagorsarker/bangla-bert-base")
tokenizer = BertTokenizer.from_pretrained("sagorsarker/bangla-bert-base")
nlp = pipeline('fill-mask', model=model, tokenizer=tokenizer)
for pred in nlp(f"আমি বাংলায় {nlp.tokenizer.mask_token} গাই।"):
print(pred)
# {'sequence': '[CLS] আমি বাংলায গান গাই । [SEP]', 'score': 0.13404667377471924, 'token': 2552, 'token_str': 'গান'}
```
## Author
[Sagor Sarker](https://github.com/sagorbrur)
## Reference
* https://github.com/google-research/bert
## Citation
If you find this model helpful, please cite.
```
@misc{Sagor_2020,
title = {BanglaBERT: Bengali Mask Language Model for Bengali Language Understanding},
author = {Sagor Sarker},
year = {2020},
url = {https://github.com/sagorbrur/bangla-bert}
}
```
|
serkandyck/llama2-7b-qlora-finetunned-turkish
|
serkandyck
| 2023-07-21T15:48:18Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-21T13:40:23Z |
## Dataset
https://huggingface.co/datasets/serkandyck/turkish_instructions
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
jayanta/bert-base-cased-sentweet-targetedinsult
|
jayanta
| 2023-07-21T15:47:42Z | 93 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-21T15:39:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert-base-cased-sentweet-targetedinsult
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-sentweet-targetedinsult
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0418
- Accuracy: 0.7917
- Precision: 0.7911
- Recall: 0.7922
- F1: 0.7913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 81 | 0.4277 | 0.8299 | 0.8375 | 0.8346 | 0.8298 |
| No log | 2.0 | 162 | 0.5315 | 0.7326 | 0.7412 | 0.7253 | 0.7253 |
| No log | 3.0 | 243 | 0.6488 | 0.7396 | 0.7472 | 0.7327 | 0.7331 |
| No log | 4.0 | 324 | 0.8324 | 0.7431 | 0.7432 | 0.7399 | 0.7406 |
| No log | 5.0 | 405 | 0.9038 | 0.7917 | 0.7924 | 0.7935 | 0.7916 |
| No log | 6.0 | 486 | 1.0418 | 0.7917 | 0.7911 | 0.7922 | 0.7913 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.1+cu117
- Datasets 2.6.1
- Tokenizers 0.11.0
|
cartesinus/iva_mt_wslot-m2m100_418M-en-ja
|
cartesinus
| 2023-07-21T15:45:26Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"dataset:iva_mt_wslot",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-26T06:39:19Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- iva_mt_wslot
metrics:
- bleu
model-index:
- name: iva_mt_wslot-m2m100_418M-en-ja
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: iva_mt_wslot
type: iva_mt_wslot
config: en-ja
split: validation
args: en-ja
metrics:
- name: Bleu
type: bleu
value: 66.503
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# iva_mt_wslot-m2m100_418M-en-ja
This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the iva_mt_wslot dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0153
- Bleu: 66.503
- Gen Len: 20.9519
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.0185 | 1.0 | 2017 | 0.0164 | 63.4304 | 20.6499 |
| 0.0134 | 2.0 | 4034 | 0.0150 | 64.827 | 20.666 |
| 0.0104 | 3.0 | 6051 | 0.0146 | 64.465 | 21.2155 |
| 0.0079 | 4.0 | 8068 | 0.0148 | 64.8578 | 20.7915 |
| 0.0062 | 5.0 | 10085 | 0.0149 | 65.9149 | 21.0718 |
| 0.005 | 6.0 | 12102 | 0.0151 | 66.2905 | 20.8766 |
| 0.004 | 7.0 | 14119 | 0.0153 | 66.503 | 20.9519 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
## Citation
If you use this model, please cite the following:
```
@article{Sowanski2023SlotLI,
title={Slot Lost in Translation? Not Anymore: A Machine Translation Model for Virtual Assistants with Type-Independent Slot Transfer},
author={Marcin Sowanski and Artur Janicki},
journal={2023 30th International Conference on Systems, Signals and Image Processing (IWSSIP)},
year={2023},
pages={1-5}
}
```
|
helenai/philschmid-roberta-large-sst2-ov
|
helenai
| 2023-07-21T15:44:20Z | 5 | 0 |
transformers
|
[
"transformers",
"openvino",
"roberta",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-21T15:43:18Z |
---
language:
- en
tags:
- openvino
---
# philschmid/roberta-large-sst2
This is the [philschmid/roberta-large-sst2](https://huggingface.co/philschmid/roberta-large-sst2) model converted to [OpenVINO](https://openvino.ai), for accellerated inference.
An example of how to do inference on this model:
```python
from optimum.intel.openvino import OVModelForSequenceClassification
from transformers import AutoTokenizer, pipeline
# model_id should be set to either a local directory or a model available on the HuggingFace hub.
model_id = "helenai/philschmid-roberta-large-sst2-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForSequenceClassification.from_pretrained(model_id)
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer)
result = pipe("I like you. I love you")
print(result)
```
|
cartesinus/iva_mt_wslot-m2m100_418M-en-zh
|
cartesinus
| 2023-07-21T15:44:09Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"dataset:iva_mt_wslot",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-26T06:39:35Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- iva_mt_wslot
metrics:
- bleu
model-index:
- name: iva_mt_wslot-m2m100_418M-en-zh
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: iva_mt_wslot
type: iva_mt_wslot
config: en-zh
split: validation
args: en-zh
metrics:
- name: Bleu
type: bleu
value: 69.4383
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# iva_mt_wslot-m2m100_418M-en-zh
This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the iva_mt_wslot dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0120
- Bleu: 69.4383
- Gen Len: 19.4038
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.0155 | 1.0 | 2109 | 0.0132 | 66.1893 | 19.117 |
| 0.011 | 2.0 | 4218 | 0.0120 | 66.5023 | 19.2003 |
| 0.0084 | 3.0 | 6327 | 0.0116 | 68.2038 | 19.4521 |
| 0.0061 | 4.0 | 8436 | 0.0115 | 69.129 | 19.2181 |
| 0.0046 | 5.0 | 10545 | 0.0117 | 69.3609 | 19.3212 |
| 0.0035 | 6.0 | 12654 | 0.0119 | 69.1841 | 19.3972 |
| 0.0028 | 7.0 | 14763 | 0.0120 | 69.4383 | 19.4038 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
## Citation
If you use this model, please cite the following:
```
@article{Sowanski2023SlotLI,
title={Slot Lost in Translation? Not Anymore: A Machine Translation Model for Virtual Assistants with Type-Independent Slot Transfer},
author={Marcin Sowanski and Artur Janicki},
journal={2023 30th International Conference on Systems, Signals and Image Processing (IWSSIP)},
year={2023},
pages={1-5}
}
```
|
cartesinus/iva_mt_wslot-m2m100_418M-en-sv
|
cartesinus
| 2023-07-21T15:43:51Z | 24 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"translation",
"en",
"sv",
"dataset:cartesinus/iva_mt_wslot",
"doi:10.57967/hf/1049",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-04-17T08:48:33Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: iva_mt_wslot-m2m100_418M-en-sv
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: iva_mt_wslot
type: iva_mt_wslot
config: en-sv
split: validation
args: en-sv
metrics:
- name: Bleu
type: bleu
value: 71.0808
datasets:
- cartesinus/iva_mt_wslot
language:
- en
- sv
pipeline_tag: translation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# iva_mt_wslot-m2m100_418M-en-sv
This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the iva_mt_wslot dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0107
- Bleu: 71.0808
- Gen Len: 19.7647
## Model description
More information needed
## How to use
First please make sure to install `pip install transformers`. First download model:
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
import torch
def translate(input_text, lang):
input_ids = tokenizer(input_text, return_tensors="pt")
generated_tokens = model.generate(**input_ids, forced_bos_token_id=tokenizer.get_lang_id(lang))
return tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
model_name = "cartesinus/iva_mt_wslot-m2m100_418M-0.1.0-en-sv"
tokenizer = M2M100Tokenizer.from_pretrained(model_name, src_lang="en", tgt_lang="sv")
model = M2M100ForConditionalGeneration.from_pretrained(model_name)
```
Then you can translate either plain text like this:
```python
print(translate("set the temperature on my thermostat", "sv"))
```
or you can translate with slot annotations that will be restored in tgt language:
```python
print(translate("wake me up at <a>nine am<a> on <b>friday<b>", "sv"))
```
Limitations of translation with slot transfer:
1) Annotated words must be placed between semi-xml tags like this "this is \<a\>example\<a\>"
2) There is no closing tag for example "\<\a\>" in above example - this is done on purpose to omit problems with backslash escape
3) If the sentence consists of more than one slot then simply use the next alphabet letter. For example "this is \<a\>example\<a\> with more than \<b\>one\<b\> slot"
4) Please do not add space before the first or last annotated word because this particular model was trained this way and it most probably will lower its results
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.0151 | 1.0 | 1885 | 0.0120 | 67.2332 | 19.3956 |
| 0.0095 | 2.0 | 3770 | 0.0105 | 69.8147 | 19.675 |
| 0.0065 | 3.0 | 5655 | 0.0104 | 70.239 | 19.8404 |
| 0.0049 | 4.0 | 7540 | 0.0104 | 70.3673 | 19.7154 |
| 0.0038 | 5.0 | 9425 | 0.0105 | 70.1632 | 19.7743 |
| 0.0026 | 6.0 | 11310 | 0.0105 | 70.7959 | 19.7809 |
| 0.0021 | 7.0 | 13195 | 0.0107 | 71.0808 | 19.7647 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
## Citation
If you use this model, please cite the following:
```
@article{Sowanski2023SlotLI,
title={Slot Lost in Translation? Not Anymore: A Machine Translation Model for Virtual Assistants with Type-Independent Slot Transfer},
author={Marcin Sowanski and Artur Janicki},
journal={2023 30th International Conference on Systems, Signals and Image Processing (IWSSIP)},
year={2023},
pages={1-5}
}
```
|
amabz/PPO-LunarLander-V2
|
amabz
| 2023-07-21T15:41:07Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-21T15:40:46Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.23 +/- 16.98
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
SachinKaushik/mathsLlama
|
SachinKaushik
| 2023-07-21T15:39:52Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"region:us"
] | null | 2023-07-21T11:08:14Z |
---
tags:
- generated_from_trainer
model-index:
- name: mathsLlama
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mathsLlama
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 6
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
reginaboateng/pfeiffer_pubmedqa_adapter_with_maybes_to_nos
|
reginaboateng
| 2023-07-21T15:32:33Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"bert",
"adapterhub:pubmedqa",
"dataset:pubmedqa",
"region:us"
] | null | 2023-07-21T15:32:29Z |
---
tags:
- adapter-transformers
- bert
- adapterhub:pubmedqa
datasets:
- pubmedqa
---
# Adapter `reginaboateng/pfeiffer_pubmedqa_adapter_with_maybes_to_nos` for allenai/scibert_scivocab_uncased
An [adapter](https://adapterhub.ml) for the `allenai/scibert_scivocab_uncased` model that was trained on the [pubmedqa](https://adapterhub.ml/explore/pubmedqa/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("allenai/scibert_scivocab_uncased")
adapter_name = model.load_adapter("reginaboateng/pfeiffer_pubmedqa_adapter_with_maybes_to_nos", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
AyoubChLin/BART-mnli_cnn_256
|
AyoubChLin
| 2023-07-21T15:22:45Z | 131 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bart",
"text-classification",
"zero-shot-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2023-04-14T13:42:39Z |
---
license: apache-2.0
pipeline_tag: zero-shot-classification
---
|
rociortizb/predict_rugby
|
rociortizb
| 2023-07-21T15:22:02Z | 4 | 0 |
bertopic
|
[
"bertopic",
"text-classification",
"region:us"
] |
text-classification
| 2023-07-06T09:10:12Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# predict_rugby
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("rociortizb/predict_rugby")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 40
* Number of training documents: 27774
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| 0 | blacks - new - new zealand - zealand - hurricanes | 1556 | 0_blacks_new_new zealand_zealand |
| 1 | springboks - south - africa - south africa - erasmus | 1297 | 1_springboks_south_africa_south africa |
| 2 | springbok - springboks - year - players - world | 1205 | 2_springbok_springboks_year_players |
| 3 | stormers - lions - van - said - team | 1163 | 3_stormers_lions_van_said |
| 4 | cheetahs - van - griquas - pumas - province | 1149 | 4_cheetahs_van_griquas_pumas |
| 5 | sevens - series - fiji - blitzboks - pool | 994 | 5_sevens_series_fiji_blitzboks |
| 6 | brumbies - reds - rebels - rugby - super | 953 | 6_brumbies_reds_rebels_rugby |
| 7 | premiership - tom - exeter - wasps - saracens | 928 | 7_premiership_tom_exeter_wasps |
| 8 | ireland - sexton - schmidt - irish - leinster | 911 | 8_ireland_sexton_schmidt_irish |
| 9 | france - french - racing - year - club | 904 | 9_france_french_racing_year |
| 10 | try - leinster - minutes - munster - penalty | 876 | 10_try_leinster_minutes_munster |
| 11 | stormers - lions - south - game - team | 870 | 11_stormers_lions_south_game |
| 12 | sharks - du - preez - du preez - bosch | 851 | 12_sharks_du_preez_du preez |
| 13 | wallabies - australia - folau - rugby - said | 837 | 13_wallabies_australia_folau_rugby |
| 14 | england - jones - harlequins - squad - george | 807 | 14_england_jones_harlequins_squad |
| 15 | england - jones - world - world cup - wales | 790 | 15_england_jones_world_world cup |
| 16 | crusaders - highlanders - hurricanes - blues - chiefs | 769 | 16_crusaders_highlanders_hurricanes_blues |
| 17 | italy - france - england - ireland - scotland | 763 | 17_italy_france_england_ireland |
| 18 | wallabies - australia - cheika - said - blacks | 736 | 18_wallabies_australia_cheika_said |
| 19 | disciplinary - committee - foul play - foul - player | 722 | 19_disciplinary_committee_foul play_foul |
| 20 | clermont - stade - montpellier - toulon - toulouse | 688 | 20_clermont_stade_montpellier_toulon |
| 21 | blacks - new - zealand - new zealand - foster | 673 | 21_blacks_new_zealand_new zealand |
| 22 | wales - davies - ospreys - scarlets - cardiff | 666 | 22_wales_davies_ospreys_scarlets |
| 23 | bulls - van - stormers - lions - sharks | 660 | 23_bulls_van_stormers_lions |
| 24 | bulls - van - white - rugby - loftus | 615 | 24_bulls_van_white_rugby |
| 25 | rugby - super - super rugby - competition - new | 531 | 25_rugby_super_super rugby_competition |
| 26 | scotland - glasgow - edinburgh - townsend - russell | 529 | 26_scotland_glasgow_edinburgh_townsend |
| 27 | brumbies - waratahs - reds - rebels - force | 521 | 27_brumbies_waratahs_reds_rebels |
| 28 | pro14 - leinster - ulster - scarlets - 19 | 490 | 28_pro14_leinster_ulster_scarlets |
| 29 | rugby - world - world rugby - nations - cup | 467 | 29_rugby_world_world rugby_nations |
| 30 | argentina - santiago - pumas - juan - matias | 447 | 30_argentina_santiago_pumas_juan |
| 31 | club - premiership - season - rugby - gloucester | 436 | 31_club_premiership_season_rugby |
| 32 | club - premiership - saracens - wasps - salary | 423 | 32_club_premiership_saracens_wasps |
| 33 | gatland - lions - wales - tour - barbarians | 361 | 33_gatland_lions_wales_tour |
| 34 | africa - south africa - south - zealand - new zealand | 339 | 34_africa_south africa_south_zealand |
| 35 | marais - saru - union - rugby - president | 317 | 35_marais_saru_union_rugby |
| 36 | kings - southern kings - southern - davids - schalk | 217 | 36_kings_southern kings_southern_davids |
| 37 | vs - referees - match official - official - assistant referees | 171 | 37_vs_referees_match official_official |
| 38 | sunwolves - japan - super - super rugby - 15 | 116 | 38_sunwolves_japan_super_super rugby |
| 39 | burgess - lancaster - england - bath - union | 26 | 39_burgess_lancaster_england_bath |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: [['United Rugby Championship', 'Ireland', 'Wales', 'Scotland', 'South Africa', 'Italian', 'Pro14'], ['Pro14', 'Edinburgh', 'Glasgow', 'Scarlets', 'Ospreys', 'Zebre', 'Benetton', 'Connacht', 'Leinster', 'Ulster', 'Munster'], ['European Cup', 'European', 'Heineken', 'competition', 'Toulon', 'Saracens', 'Leinster'], ['Premiership', 'England', 'Exeter', 'Saracens', 'Wasps', 'Leicester', 'Harlequins', 'Sale', 'Bristol', 'Northampton'], ['Sevens', 'Fiji', 'New Zealand', 'South Africa', 'England', 'Australia', 'series', 'HSBC', 'Olympics'], ['Super Rugby', 'New Zealand', 'Australia', 'South Africa', 'Argentina', 'Japan', 'Blues', 'Brumbies', 'Crusaders', 'Sharks', 'Stormers'], ['Six Nations', 'England', 'Wales', 'Ireland', 'Scotland', 'France', 'Italy', 'Championship', 'Grand Slam'], ['Currie Cup', 'South Africa', 'Bulls', 'Lions', 'Sharks', 'Cheetahs', 'Western Province', 'domestic', 'provincial'], ['World Cup', 'international', 'New Zealand', 'Australia', 'South Africa', 'England', 'Wales', 'France'], ['Rugby Championship', 'New Zealand', 'Australia', 'South Africa', 'Argentina', 'All Blacks', 'Wallabies', 'Springboks', 'Pumas'], ['British Irish Lions', 'South Africa', 'New Zealand', 'Australia']]
* top_n_words: 30
* verbose: True
## Framework versions
* Numpy: 1.21.0
* HDBSCAN: 0.8.29
* UMAP: 0.5.3
* Pandas: 2.0.2
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.29.2
* Numba: 0.57.0
* Plotly: 5.14.1
* Python: 3.9.6
|
sergeindamix/llama2-qlora-finetunined-frenchTest
|
sergeindamix
| 2023-07-21T15:21:52Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-21T15:21:43Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
joshtopic/test
|
joshtopic
| 2023-07-21T15:06:42Z | 0 | 0 | null |
[
"dataset:fka/awesome-chatgpt-prompts",
"region:us"
] | null | 2023-07-21T15:06:16Z |
---
datasets:
- fka/awesome-chatgpt-prompts
---
|
7erminalVelociraptor/Airochronos-33b-Guanaco
|
7erminalVelociraptor
| 2023-07-21T15:01:12Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-14T19:09:22Z |
This is [Henk717's merge of Chronos and Airoboros 1.4](https://huggingface.co/Henk717/airochronos-33B) with [Timdetmer's Guanaco](https://huggingface.co/timdettmers/guanaco-33b) applied as a lora.
Mainly intended for character roleplay and creative writing. Initial testing suggests it does a reasonable job at this, but it is too early to say how it compares to Airochronos. Other tasks such as coding or logic has not been reviewed.
The model has been tested with Alpaca's prompt style ( ### Instruction: and ### Response: ), as this is what Chronos and Guanaco use.
Keep in mind that all parts of this model are not censored, and thus can output NSFW or other unfiltered content. Use at your own discretion.
|
jayanta/bert-base-cased-sentweet-derogatory
|
jayanta
| 2023-07-21T14:56:19Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-21T14:43:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert-base-cased-sentweet-derogatory
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-sentweet-derogatory
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9154
- Accuracy: 0.8056
- Precision: 0.8051
- Recall: 0.8036
- F1: 0.8042
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 81 | 0.4357 | 0.8160 | 0.8194 | 0.8197 | 0.8160 |
| No log | 2.0 | 162 | 0.4131 | 0.7986 | 0.7998 | 0.8010 | 0.7985 |
| No log | 3.0 | 243 | 0.5515 | 0.7812 | 0.7838 | 0.7766 | 0.7780 |
| No log | 4.0 | 324 | 0.6149 | 0.75 | 0.7549 | 0.7435 | 0.7446 |
| No log | 5.0 | 405 | 0.7479 | 0.8125 | 0.8130 | 0.8145 | 0.8124 |
| No log | 6.0 | 486 | 0.9154 | 0.8056 | 0.8051 | 0.8036 | 0.8042 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.1+cu117
- Datasets 2.6.1
- Tokenizers 0.11.0
|
gFulvio/moralstories-bart-norm.action-context_gen
|
gFulvio
| 2023-07-21T14:39:17Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"dataset:demelin/moral_stories",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-21T14:35:42Z |
---
datasets:
- demelin/moral_stories
---
|
abertsch/unlimiformer-bart-govreport-earlyk
|
abertsch
| 2023-07-21T14:32:14Z | 120 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"feature-extraction",
"text2text-generation",
"dataset:ccdv/govreport-summarization",
"dataset:urialon/gov_report_validation",
"dataset:urialon/gov_report_test",
"arxiv:2305.01625",
"region:us"
] |
text2text-generation
| 2023-05-03T14:52:23Z |
---
datasets:
- ccdv/govreport-summarization
- urialon/gov_report_validation
- urialon/gov_report_test
pipeline_tag: text2text-generation
inference: false
---
Model from the preprint [Unlimiformer: Long-Range Transformers with Unlimited Length Input](https://arxiv.org/abs/2305.01625)
This is a BART-base model finetuned using Unlimiformer-aware early stopping, as described in section 3.1 of the paper. The model was finetuned on GovReport using the data processing pipeline from SLED; to load the validation or test set for use with these model, please use the datasets [urialon/gov_report_validation](https://huggingface.co/datasets/urialon/gov_report_validation) and [urialon/gov_report_test](https://huggingface.co/datasets/urialon/gov_report_test).
This is generally a weaker model than the [alternating-training model](https://huggingface.co/abertsch/unlimiformer-bart-govreport-alternating) and a stronger model than the [baseline](https://huggingface.co/abertsch/bart-base-govreport).
*The inference demo is disabled because you must add the Unlimiformer files to your repo before this model can handle unlimited length input!* See the [Unlimiformer GitHub](https://github.com/abertsch72/unlimiformer) for setup instructions.
|
abertsch/bart-base-booksum
|
abertsch
| 2023-07-21T14:32:12Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"feature-extraction",
"text2text-generation",
"dataset:abertsch/booksum-fullbooks",
"arxiv:2305.01625",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-03T14:44:00Z |
---
datasets:
- abertsch/booksum-fullbooks
pipeline_tag: text2text-generation
---
Baseline for the preprint [Unlimiformer: Long-Range Transformers with Unlimited Length Input](https://arxiv.org/abs/2305.01625).
This model was finetuned from a BART-base model as a baseline. It was finetuned on the dataset BookSum (full-book setting).
|
abertsch/unlimiformer-bart-booksum-retrieval
|
abertsch
| 2023-07-21T14:32:09Z | 108 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"feature-extraction",
"text2text-generation",
"dataset:abertsch/booksum-fullbooks",
"arxiv:2305.01625",
"region:us"
] |
text2text-generation
| 2023-05-03T14:42:00Z |
---
datasets:
- abertsch/booksum-fullbooks
pipeline_tag: text2text-generation
inference: false
---
Model from the preprint [Unlimiformer: Long-Range Transformers with Unlimited Length Input](https://arxiv.org/abs/2305.01625).
This model was finetuned from a BART-base model using the retrieval-augmented training strategy described in section 3.2 of the paper. It was finetuned on the dataset BookSum (full-book setting).
*The inference demo is disabled because you must add the Unlimiformer files to your repo before this model can handle unlimited length input!* See the [Unlimiformer GitHub](https://github.com/abertsch72/unlimiformer) for setup instructions.
|
abertsch/unlimiformer-bart-booksum-alternating
|
abertsch
| 2023-07-21T14:32:07Z | 104 | 3 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"feature-extraction",
"text2text-generation",
"dataset:abertsch/booksum-fullbooks",
"arxiv:2305.01625",
"region:us"
] |
text2text-generation
| 2023-05-03T14:42:53Z |
---
datasets:
- abertsch/booksum-fullbooks
pipeline_tag: text2text-generation
inference: false
---
Model from the preprint [Unlimiformer: Long-Range Transformers with Unlimited Length Input](https://arxiv.org/abs/2305.01625).
This model was finetuned from a BART-base model using the alternating-training strategy described in section 3.2 of the paper. It was finetuned on the dataset BookSum (full-book setting).
*The inference demo is disabled because you must add the Unlimiformer files to your repo before this model can handle unlimited length input!* See the [Unlimiformer GitHub](https://github.com/abertsch72/unlimiformer) for setup instructions.
|
Aspik101/Llama-2-7b-chat-hf-pl-lora_GGML
|
Aspik101
| 2023-07-21T14:31:19Z | 0 | 1 | null |
[
"facebook",
"meta",
"pytorch",
"llama",
"llama-2",
"text-generation",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:other",
"region:us"
] |
text-generation
| 2023-07-21T14:23:48Z |
---
language:
- pl
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
license: other
model_type: llama-2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
|
Mizuiro-sakura/luke-japanese-base-finetuned-QA
|
Mizuiro-sakura
| 2023-07-21T14:11:02Z | 163 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"luke",
"question-answering",
"squad",
"question answering",
"ja",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-01-15T23:38:30Z |
---
license: mit
language: ja
tags:
- luke
- question-answering
- squad
- pytorch
- transformers
- question answering
---
# このモデルはluke-japanese-base-liteをファインチューニングして、Question-Answeringに用いれるようにしたものです。
このモデルはluke-japanese-base-liteを運転ドメインQAデータセット(DDQA)( https://nlp.ist.i.kyoto-u.ac.jp/index.php?Driving%20domain%20QA%20datasets )を用いてファインチューニングしたものです。
Question-Answeringタスク(SQuAD)に用いることができます。
# This model is fine-tuned model for Question-Answering which is based on luke-japanese-base-lite
This model is fine-tuned by using DDQA dataset.
You could use this model for Question-Answering tasks.
# モデルの精度 accuracy of model
'em(厳密一致)': 0.845933014354067, 'f1': 0.9197176274789681
# How to use 使い方
sentencepieceとtransformersをインストールして
(pip install sentencepiece , pip install transformers)
以下のコードを実行することで、Question-Answeringタスクを解かせることができます。
please execute this code.
```python
import torch
from transformers import AutoTokenizer, LukeForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained('Mizuiro-sakura/luke-japanese-base-finetuned-QA')
model=LukeForQuestionAnswering.from_pretrained('Mizuiro-sakura/luke-japanese-base-finetuned-QA') # 学習済みモデルの読み込み
text={
'context':'私の名前はEIMIです。好きな食べ物は苺です。 趣味は皆さんと会話することです。',
'question' :'好きな食べ物は何ですか'
}
input_ids=tokenizer.encode(text['question'],text['context']) # tokenizerで形態素解析しつつコードに変換する
output= model(torch.tensor([input_ids])) # 学習済みモデルを用いて解析
prediction = tokenizer.decode(input_ids[torch.argmax(output.start_logits): torch.argmax(output.end_logits)]) # 答えに該当する部分を抜き取る
print(prediction)
```
# what is Luke? Lukeとは?[1]
LUKE (Language Understanding with Knowledge-based Embeddings) is a new pre-trained contextualized representation of words and entities based on transformer. LUKE treats words and entities in a given text as independent tokens, and outputs contextualized representations of them. LUKE adopts an entity-aware self-attention mechanism that is an extension of the self-attention mechanism of the transformer, and considers the types of tokens (words or entities) when computing attention scores.
LUKE achieves state-of-the-art results on five popular NLP benchmarks including SQuAD v1.1 (extractive question answering), CoNLL-2003 (named entity recognition), ReCoRD (cloze-style question answering), TACRED (relation classification), and Open Entity (entity typing). luke-japaneseは、単語とエンティティの知識拡張型訓練済み Transformer モデルLUKEの日本語版です。LUKE は単語とエンティティを独立したトークンとして扱い、これらの文脈を考慮した表現を出力します。
# Acknowledgments 謝辞
Lukeの開発者である山田先生とStudio ousiaさんには感謝いたします。 I would like to thank Mr.Yamada @ikuyamada and Studio ousia @StudioOusia.
# Citation
[1]@inproceedings{yamada2020luke, title={LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention}, author={Ikuya Yamada and Akari Asai and Hiroyuki Shindo and Hideaki Takeda and Yuji Matsumoto}, booktitle={EMNLP}, year={2020} }
|
Mizuiro-sakura/bert-large-japanese-v2-finetuned-ner
|
Mizuiro-sakura
| 2023-07-21T14:10:18Z | 141 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"ner",
"固有表現抽出",
"named entity recognition",
"named-entity-recognition",
"ja",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-26T09:38:08Z |
---
license: mit
language: ja
tags:
- bert
- pytorch
- transformers
- ner
- 固有表現抽出
- named entity recognition
- named-entity-recognition
---
# このモデルはcl-tohoku/bert-large-japanese-v2をファインチューニングして、固有表現抽出(NER)に用いれるようにしたものです。
このモデルはcl-tohoku/bert-large-japanese-v2を
Wikipediaを用いた日本語の固有表現抽出データセット(ストックマーク社、https://github.com/stockmarkteam/ner-wikipedia-dataset )を用いてファインチューニングしたものです。
固有表現抽出(NER)タスクに用いることができます。
# This model is fine-tuned model for Named-Entity-Recognition(NER) which is based on cl-tohoku/bert-large-japanese-v2
This model is fine-tuned by using Wikipedia dataset.
You could use this model for NER tasks.
# モデルの精度 accuracy of model
全体:0.8620626488367833
|| precision |recall | f1-score | support|
|---|----|----|----|----|
|その他の組織名 | 0.80 | 0.78 | 0.79| 238|
|イベント名 | 0.82| 0.88 | 0.85 | 215|
|人名 | 0.92 | 0.95 | 0.93 | 549|
|地名 | 0.90 | 0.89 | 0.89 | 446|
|政治的組織名 | 0.86 | 0.91 | 0.89 | 263|
|施設名 | 0.86 | 0.91 | 0.88 | 241|
|法人名 | 0.88 | 0.89 | 0.88 | 487|
|製品名 | 0.62 | 0.68 | 0.65 | 252|
|micro avg |0.85 | 0.87 | 0.86 | 2691|
|macro avg | 0.83 | 0.86 | 0.85 | 2691|
|weighted avg | 0.85 | 0.87 | 0.86 | 2691|
# How to use 使い方
fugashiとtransformers,unidic_liteをインストールして (pip install fugashi, pip install unidic_lite, pip install transformers)
以下のコードを実行することで、NERタスクを解かせることができます。
please execute this code.
```python
from transformers import AutoTokenizer,pipeline, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained('Mizuiro-sakura/bert-large-japanese-v2-finetuned-ner')
model=AutoModelForTokenClassification.from_pretrained('Mizuiro-sakura/bert-large-japanese-v2-finetuned-ner') # 学習済みモデルの読み込み
text=('昨日は東京で買い物をした')
ner=pipeline('ner', model=model, tokenizer=tokenizer)
result=ner(text)
print(result)
```
|
Mizuiro-sakura/deberta-v2-large-japanese-finetuned-ner
|
Mizuiro-sakura
| 2023-07-21T14:10:02Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"token-classification",
"deberta",
"named entity recognition",
"named-entity-recognition",
"ner",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"dataset:oscar",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-10T13:22:23Z |
---
license: mit
language: ja
library_name: transformers
tags:
- pytorch
- deberta
- deberta-v2
- named entity recognition
- named-entity-recognition
- ner
datasets:
- wikipedia
- cc100
- oscar
metrics:
- accuracy
---
# このモデルはdeberta-v2-large-japaneseをファインチューニングして固有表現抽出(NER)に用いれるようにしたものです。
このモデルはdeberta-v2-large-japaneseを Wikipediaを用いた日本語の固有表現抽出データセット(ストックマーク社、https://github.com/stockmarkteam/ner-wikipedia-dataset )を用いてファインチューニングしたものです。
# This model is fine-tuned model for Named Entity Recognition (NER) which is based on deberta-v2-large-japanese
This model is fine-tuned by using Wikipedia dataset.
You could use this model for NER tasks.
# How to use 使い方
transformersおよびpytorch、sentencepiece、Juman++をインストールしてください。
以下のコードを実行することで、固有表現抽出タスクを解かせることができます。 please execute this code.
```python
from transformers import AutoTokenizer,pipeline, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained('Mizuiro-sakura/deberta-v2-large-japanese-finetuned-ner')
model=AutoModelForTokenClassification.from_pretrained('Mizuiro-sakura/deberta-v2-large-japanese-finetuned-ner') # 学習済みモデルの読み込み
text=('昨日は東京で買い物をした')
ner=pipeline('ner', model=model, tokenizer=tokenizer)
result=ner(text)
print(result)
```
# モデルの精度 accuracy of model
全体:0.7974729241877256
precision recall f1-score support
その他の組織名 0.72 0.72 0.72 238
イベント名 0.73 0.85 0.79 215
人名 0.83 0.89 0.86 547
地名 0.79 0.80 0.80 446
政治的組織名 0.78 0.83 0.80 263
施設名 0.74 0.84 0.79 241
法人名 0.84 0.80 0.82 487
製品名 0.65 0.78 0.71 252
micro avg 0.77 0.82 0.80 2689
macro avg 0.76 0.82 0.79 2689
weighted avg 0.78 0.82 0.80 2689
# deberta-v2-base-japaneseとは?
日本語Wikipedeia(3.2GB)および、cc100(85GB)、oscar(54GB)を用いて訓練されたモデルです。
京都大学黒橋研究室が公表されました。
# Model description
This is a Japanese DeBERTa V2 base model pre-trained on Japanese Wikipedia, the Japanese portion of CC-100, and the Japanese portion of OSCAR.
# Acknowledgments 謝辞
モデルを公開してくださった京都大学黒橋研究室には感謝いたします。
I would like to thank Kurohashi Lab at Kyoto University.
|
chandan9t8/poca-SoccerTwos
|
chandan9t8
| 2023-07-21T13:56:16Z | 17 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-07-21T13:54:52Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: chandan9t8/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
KingKazma/cnn_dailymail_t5-small_p_tuning_500_10_3000_8_e-1_s6789_v3_l5_v20_manual
|
KingKazma
| 2023-07-21T13:47:58Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-21T13:47:57Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
richardlowes/finetuning-sentiment-model-3000-samples
|
richardlowes
| 2023-07-21T13:40:29Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-21T13:27:53Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.87
- name: F1
type: f1
value: 0.8721311475409836
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3162
- Accuracy: 0.87
- F1: 0.8721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
yanfeiiiii/xlm-roberta-base-finetuned-panx-de-fr
|
yanfeiiiii
| 2023-07-21T13:32:43Z | 125 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-21T13:19:54Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1603
- F1: 0.8595
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2865 | 1.0 | 715 | 0.1777 | 0.8240 |
| 0.1463 | 2.0 | 1430 | 0.1603 | 0.8420 |
| 0.0937 | 3.0 | 2145 | 0.1603 | 0.8595 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
DeepPavlov/t5-wikidata5M-with-neighbors
|
DeepPavlov
| 2023-07-21T13:29:51Z | 123 | 3 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-21T13:09:37Z |
---
license: openrail
language:
- en
metrics:
- accuracy
pipeline_tag: text2text-generation
widget:
- text: "predict [SEP] Arman Kirakossian country of citizenship [SEP] place of birth Yerevan [SEP] instance of human [SEP] occupation diplomat [SEP] occupation historian [SEP] ethnic group Armenians [SEP]"
example_title: "Predict country of citizenship"
---
This is a t5-small model trained on the wikidata5M dataset.
This model was trained on tail and entity prediction in a knowledge graph using the graph's context represented by the node's neighborhood.
Textual representation was obtained from wikidata entities and relation titles. Entity description was used to disambiguate if two entities had the same title. If still, no disambiguation was possible, we assigned unique numerical ids for such entities.
The neighborhood for the input was obtained as follows:
1. sort the neighborhood by semantic similarity of relations from its triplets to the relation from the input triplet in order to prioritize more important information in the context;
2. limit the sorted neighborhood to 512 triplets, since this will always be at least as big as the size of the allowed context, and, after verbalization, specify the maximum length of 512 for the model tokenizer to fit the resulting verbalized neighborhood representation into the language model context.
Neighborhood sorting by semantic proximity was performed using a pre-calculated matrix of cosine similarity of relations in KG, for similarity calculation the relations were embedded by the fasttext model.
We trained the model on the Wikidata5M dataset for approximately 5M iterations on 8xA100 GPUs using a batch size of 320.
To evaluate the model, we sample 50 times from the decoder for each input and then rank the predictions by their log probabilities. We achieve 0.319 Hits@1 on the test set.
One can load this model for their personal use of fine-tuning as follows:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/t5-wikidata5M-with-neighbors")
model = AutoModelForSeq2SeqLM.from_pretrained("DeepPavlov/t5-wikidata5M-with-neighbors")
```
|
granin/llama2-qlora-finetunined-french
|
granin
| 2023-07-21T13:29:30Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-21T13:29:17Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
dwhoelz/whisper-medium-pt-ct2
|
dwhoelz
| 2023-07-21T13:21:50Z | 6 | 2 |
ctranslate2
|
[
"ctranslate2",
"automatic-speech-recognition",
"whisper-event",
"pt",
"license:apache-2.0",
"region:us"
] |
automatic-speech-recognition
| 2023-07-21T13:17:59Z |
---
license: apache-2.0
language: pt
library_name: ctranslate2
tags:
- automatic-speech-recognition
- whisper-event
---
# Fine-tuned PORTUGUESE whisper-large-v2 model for CTranslate2
This repository contains the [pierreguillou/whisper-medium-portuguese](https://huggingface.co/pierreguillou/whisper-medium-portuguese) model converted to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) format.
## Conversion
The original model was converted using float16 quantization.
|
artificial-feelings/bark-forked
|
artificial-feelings
| 2023-07-21T13:04:00Z | 10 | 5 |
transformers
|
[
"transformers",
"pytorch",
"bark",
"text-to-audio",
"audio",
"text-to-speech",
"en",
"de",
"es",
"fr",
"hi",
"it",
"ja",
"ko",
"pl",
"pt",
"ru",
"tr",
"zh",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2023-07-21T08:29:49Z |
---
language:
- en
- de
- es
- fr
- hi
- it
- ja
- ko
- pl
- pt
- ru
- tr
- zh
thumbnail: >-
https://user-images.githubusercontent.com/5068315/230698495-cbb1ced9-c911-4c9a-941d-a1a4a1286ac6.png
library: bark
license: cc-by-nc-4.0
tags:
- bark
- audio
- text-to-speech
pipeline_tag: text-to-speech
duplicated_from: suno/bark
---
# Bark
Bark is a transformer-based text-to-audio model created by [Suno](https://www.suno.ai).
Bark can generate highly realistic, multilingual speech as well as other audio - including music,
background noise and simple sound effects. The model can also produce nonverbal
communications like laughing, sighing and crying. To support the research community,
we are providing access to pretrained model checkpoints ready for inference.
The original github repo and model card can be found [here](https://github.com/suno-ai/bark).
This model is meant for research purposes only.
The model output is not censored and the authors do not endorse the opinions in the generated content.
Use at your own risk.
Two checkpoints are released:
- [small](https://huggingface.co/suno/bark-small)
- [**large** (this checkpoint)](https://huggingface.co/suno/bark)
## Example
Try out Bark yourself!
* Bark Colab:
<a target="_blank" href="https://colab.research.google.com/drive/1eJfA2XUa-mXwdMy7DoYKVYHI1iTd9Vkt?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
* Hugging Face Colab:
<a target="_blank" href="https://colab.research.google.com/drive/1dWWkZzvu7L9Bunq9zvD-W02RFUXoW-Pd?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
* Hugging Face Demo:
<a target="_blank" href="https://huggingface.co/spaces/suno/bark">
<img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>
## 🤗 Transformers Usage
You can run Bark locally with the 🤗 Transformers library from version 4.31.0 onwards.
1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) from main:
```
pip install git+https://github.com/huggingface/transformers.git
```
2. Run the following Python code to generate speech samples:
```python
from transformers import AutoProcessor, AutoModel
processor = AutoProcessor.from_pretrained("suno/bark-small")
model = AutoModel.from_pretrained("suno/bark-small")
inputs = processor(
text=["Hello, my name is Suno. And, uh — and I like pizza. [laughs] But I also have other interests such as playing tic tac toe."],
return_tensors="pt",
)
speech_values = model.generate(**inputs, do_sample=True)
```
3. Listen to the speech samples either in an ipynb notebook:
```python
from IPython.display import Audio
sampling_rate = model.generation_config.sample_rate
Audio(speech_values.cpu().numpy().squeeze(), rate=sampling_rate)
```
Or save them as a `.wav` file using a third-party library, e.g. `scipy`:
```python
import scipy
sampling_rate = model.config.sample_rate
scipy.io.wavfile.write("bark_out.wav", rate=sampling_rate, data=speech_values.cpu().numpy().squeeze())
```
For more details on using the Bark model for inference using the 🤗 Transformers library, refer to the [Bark docs](https://huggingface.co/docs/transformers/model_doc/bark).
## Suno Usage
You can also run Bark locally through the original [Bark library]((https://github.com/suno-ai/bark):
1. First install the [`bark` library](https://github.com/suno-ai/bark)
3. Run the following Python code:
```python
from bark import SAMPLE_RATE, generate_audio, preload_models
from IPython.display import Audio
# download and load all models
preload_models()
# generate audio from text
text_prompt = """
Hello, my name is Suno. And, uh — and I like pizza. [laughs]
But I also have other interests such as playing tic tac toe.
"""
speech_array = generate_audio(text_prompt)
# play text in notebook
Audio(speech_array, rate=SAMPLE_RATE)
```
[pizza.webm](https://user-images.githubusercontent.com/5068315/230490503-417e688d-5115-4eee-9550-b46a2b465ee3.webm)
To save `audio_array` as a WAV file:
```python
from scipy.io.wavfile import write as write_wav
write_wav("/path/to/audio.wav", SAMPLE_RATE, audio_array)
```
## Model Details
The following is additional information about the models released here.
Bark is a series of three transformer models that turn text into audio.
### Text to semantic tokens
- Input: text, tokenized with [BERT tokenizer from Hugging Face](https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertTokenizer)
- Output: semantic tokens that encode the audio to be generated
### Semantic to coarse tokens
- Input: semantic tokens
- Output: tokens from the first two codebooks of the [EnCodec Codec](https://github.com/facebookresearch/encodec) from facebook
### Coarse to fine tokens
- Input: the first two codebooks from EnCodec
- Output: 8 codebooks from EnCodec
### Architecture
| Model | Parameters | Attention | Output Vocab size |
|:-------------------------:|:----------:|------------|:-----------------:|
| Text to semantic tokens | 80/300 M | Causal | 10,000 |
| Semantic to coarse tokens | 80/300 M | Causal | 2x 1,024 |
| Coarse to fine tokens | 80/300 M | Non-causal | 6x 1,024 |
### Release date
April 2023
## Broader Implications
We anticipate that this model's text to audio capabilities can be used to improve accessbility tools in a variety of languages.
While we hope that this release will enable users to express their creativity and build applications that are a force
for good, we acknowledge that any text to audio model has the potential for dual use. While it is not straightforward
to voice clone known people with Bark, it can still be used for nefarious purposes. To further reduce the chances of unintended use of Bark,
we also release a simple classifier to detect Bark-generated audio with high accuracy (see notebooks section of the main repository).
|
CosVersin/e621-tagger-patch
|
CosVersin
| 2023-07-21T13:03:45Z | 0 | 2 | null |
[
"region:us"
] | null | 2023-07-21T13:01:03Z |
Tagger for [Automatic1111's WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui)
---
Interrogate booru style tags for single or multiple image files using various models, such as DeepDanbooru.
[한국어를 사용하시나요? 여기에 한국어 설명서가 있습니다!](README.ko.md)
## Disclaimer
I didn't make any models, and most of the code was heavily borrowed from the [DeepDanbooru](https://github.com/KichangKim/DeepDanbooru) and MrSmillingWolf's tagger.
## Installation
1. *Extensions* -> *Install from URL* -> Enter URL of this repository -> Press *Install* button
- or clone this repository under `extensions/`
```sh
$ git clone https://github.com/toriato/stable-diffusion-webui-wd14-tagger.git extensions/tagger
```
1. Add interrogate model
- #### *MrSmilingWolf's model (a.k.a. Waifu Diffusion 1.4 tagger)*
Downloads automatically from the [HuggingFace repository](https://huggingface.co/SmilingWolf/wd-v1-4-vit-tagger) the first time you run it.
Please ask the original author MrSmilingWolf#5991 for questions related to model or additional training.
##### ViT vs Convnext
> To make it clear: the ViT model is the one used to tag images for WD 1.4. That's why the repo was originally called like that. This one has been trained on the same data and tags, but has got no other relation to WD 1.4, aside from stemming from the same coordination effort. They were trained in parallel, and the best one at the time was selected for WD 1.4
> This particular model was trained later and might actually be slightly better than the ViT one. Difference is in the noise range tho
— [SmilingWolf](https://github.com/SmilingWolf) from [this thread](https://discord.com/channels/930499730843250783/1052283314997837955) in the [東方Project AI server](https://discord.com/invite/touhouai)
- #### *DeepDanbooru*
1. Various model files can be found below.
- [DeepDanbooru models](https://github.com/KichangKim/DeepDanbooru/releases)
- [e621 model by 🐾Zack🐾#1984](https://discord.gg/BDFpq9Yb7K)
*(link contains NSFW contents!)*
1. Move the project folder containing the model and config to `models/deepdanbooru`
1. The file structure should look like:
```
models/
└╴deepdanbooru/
├╴deepdanbooru-v3-20211112-sgd-e28/
│ ├╴project.json
│ └╴...
│
├╴deepdanbooru-v4-20200814-sgd-e30/
│ ├╴project.json
│ └╴...
│
├╴e621-v3-20221117-sgd-e32/
│ ├╴project.json
│ └╴...
│
...
```
1. Start or restart the WebUI.
- or you can press refresh button after *Interrogator* dropdown box.
## Model comparison
* Used image: [hecattaart's artwork](https://vk.com/hecattaart?w=wall-89063929_3767)
* Threshold: `0.5`
### DeepDanbooru
Used the same image as the one used in the Screenshot item
#### [`deepdanbooru-v3-20211112-sgd-e28`](https://github.com/KichangKim/DeepDanbooru/releases/tag/v3-20211112-sgd-e28)
```
1girl, animal ears, cat ears, cat tail, clothes writing, full body, rating:safe, shiba inu, shirt, shoes, simple background, sneakers, socks, solo, standing, t-shirt, tail, white background, white shirt
```
#### [`deepdanbooru-v4-20200814-sgd-e30`](https://github.com/KichangKim/DeepDanbooru/releases/tag/v4-20200814-sgd-e30)
```
1girl, animal, animal ears, bottomless, clothes writing, full body, rating:safe, shirt, shoes, short sleeves, sneakers, solo, standing, t-shirt, tail, white background, white shirt
```
#### `e621-v3-20221117-sgd-e32`
```
anthro, bottomwear, clothing, footwear, fur, hi res, mammal, shirt, shoes, shorts, simple background, sneakers, socks, solo, standing, text on clothing, text on topwear, topwear, white background
```
### Waifu Diffusion Tagger
#### [`wd14-vit`](https://huggingface.co/SmilingWolf/wd-v1-4-vit-tagger)
```
1boy, animal ears, dog, furry, leg hair, male focus, shirt, shoes, simple background, socks, solo, tail, white background
```
#### [`wd14-convnext`](https://huggingface.co/SmilingWolf/wd-v1-4-convnext-tagger)
```
full body, furry, shirt, shoes, simple background, socks, solo, tail, white background
```
#### [`wd14-vit-v2`](https://huggingface.co/SmilingWolf/wd-v1-4-vit-tagger-v2)
```
1boy, animal ears, cat, furry, male focus, shirt, shoes, simple background, socks, solo, tail, white background
```
#### [`wd14-convnext-v2`](https://huggingface.co/SmilingWolf/wd-v1-4-convnext-tagger-v2)
```
animal focus, clothes writing, earrings, full body, meme, shirt, shoes, simple background, socks, solo, sweat, tail, white background, white shirt
```
#### [`wd14-swinv2-v2`](https://huggingface.co/SmilingWolf/wd-v1-4-swinv2-tagger-v2)
```
1boy, arm hair, black footwear, cat, dirty, full body, furry, leg hair, male focus, shirt, shoes, simple background, socks, solo, standing, tail, white background, white shirt
```
## Screenshot

Artwork made by [hecattaart](https://vk.com/hecattaart?w=wall-89063929_3767)
## Copyright
Public domain, except borrowed parts (e.g. `dbimutils.py`)
|
fadliaulawi/mt5-small-finetuned-amazon-en-es
|
fadliaulawi
| 2023-07-21T12:48:05Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-19T14:22:02Z |
---
license: apache-2.0
base_model: google/mt5-small
tags:
- generated_from_keras_callback
model-index:
- name: fadliaulawi/mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# fadliaulawi/mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 10.5412
- Validation Loss: 5.4026
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 1209, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.5412 | 5.4026 | 0 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
gradjitta/llama2-7b-merged-finnish-alpaca-buggy
|
gradjitta
| 2023-07-21T12:47:42Z | 10 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:datacrunch/freformatted",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-21T12:07:16Z |
---
datasets:
- datacrunch/freformatted
---
## Whats this merge about
- Its a 500 step checkpoint of the following run
```
python ./trl/examples/scripts/sft_trainer.py --model_name meta-llama/Llama-2-7b-hf --dataset_name datacrunch/finnish_alpaca --load_in_4bit --use_peft --batch_size 4 --gradient_accumulation_steps 2
```
- Using the repo https://github.com/lvwerra/trl/blob/main/examples/scripts/sft_trainer.py
I am still figuring out an efficient way of doing this, in the meantime you can test it
- An example prompt you can try, that should return the Finnish response you need
```
"Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. ### Instruction: Anna kolme vinkkiä terveenä pysymiseen. ###Response:"
```
|
sazyou-roukaku/sazyou_LoRA
|
sazyou-roukaku
| 2023-07-21T12:30:25Z | 0 | 27 | null |
[
"text-to-image",
"ja",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-18T11:16:51Z |
---
license: creativeml-openrail-m
language:
- ja
pipeline_tag: text-to-image
---
LECOやLoRAの試作品置き場。<br>
<br>
**①胸部増減スライダーLoRA(huge_breasts_woman/flat_chest_woman)** <br>
LECOで作成し、調整した胸部増減スライダー:トリガーワード woman<br>
breasts系プロンプトは不要。<br>
推論・サンプルはLittleStepMix_Aで作成。<img src="https://huggingface.co/sazyou-roukaku/sazyou_LoRA/resolve/main/huge_breasts_woman.jpg" width="100%" height="100%">
<img src="https://huggingface.co/sazyou-roukaku/sazyou_LoRA/resolve/main/flat_chest_woman.jpg" width="100%" height="100%">
<br>
<br>
**②マルチカラードヘアーLoRA(pastel_hair_full/pastel_hair_A/pastel_hair_B)** <br>
LECOで作成し、調整した多彩色の髪にするLoRA:トリガーワード hair<br>
長さ指定だけだとかなりカラフルになります。メインの色を入れても良し、服装への汚染も最低限です。<br>
なおnegativeに *(black hair,brown hair:1.5)* を入力推奨<br>
LittleStepMix_Aで作成し、それをサンプルとしているので出方はモデルでかなり異なります。<br>
fullは全ての色が出て、パステル調が強めです。Aは白系が出ず、緑弱め。Bはfullよりメリハリがある出力になります。
<img src="https://huggingface.co/sazyou-roukaku/sazyou_LoRA/resolve/main/pastel_hair.jpg" width="100%" height="100%">
|
oshizo/comment-generation-japanese-3.6b-lora
|
oshizo
| 2023-07-21T12:30:03Z | 0 | 4 | null |
[
"ja",
"license:mit",
"region:us"
] | null | 2023-07-21T11:52:15Z |
---
license: mit
language:
- ja
---
# Overview
YouTube Liveなどのライブ配信での視聴者コメントのようなテキストを生成するモデルです。
[rinna/japanese-gpt-neox-3.6b-instruction-ppo](https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-ppo)をLoraで学習したadapter_modelのみをアップロードしました。
This model generates text like viewer comments in live streaming, such as YouTube Live. This model was trained on [rinna/japanese-gpt-neox-3.6b-instruction-ppo](https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-ppo) using Lora.
# How to use the model
~~~~python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("rinna/japanese-gpt-neox-3.6b-instruction-ppo", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("rinna/japanese-gpt-neox-3.6b-instruction-ppo", torch_dtype=torch.float16, device_map="auto")
from peft import PeftModel
peft_model = PeftModel.from_pretrained(model, "oshizo/comment-generation-japanese-3.6b-lora", device_map="auto")
prompt = f"ユーザー: 今朝うちの小さな畑でトマトがね、いい感じに赤くなってたんだよね。そのまま通学路を歩いてたんだけどさ、一つちぎって弁当に入れておけば良かっな~と思って。トマト可愛くて好き。<NL>システム: "
token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
with torch.no_grad():
output_ids = model.generate(
token_ids.to(model.device),
do_sample=True,
max_new_tokens=32,
num_return_sequences=4,
pad_token_id=tokenizer.pad_token_id,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id
)
for output in output_ids.tolist():
print(tokenizer.decode(output[token_ids.size(1):], skip_special_tokens=True))
# これから剥くの面倒くさいよ<NL>
# なんやその可愛い好きは<NL>
# 冷やしておくと美味しいよな<NL>
# 食レポ具体的に<NL>
~~~~
|
TheBloke/13B-Ouroboros-GGML
|
TheBloke
| 2023-07-21T12:24:30Z | 5 | 4 |
transformers
|
[
"transformers",
"llama",
"alpaca",
"vicuna",
"uncensored",
"merge",
"mix",
"airoboros",
"openorca",
"orcamini",
"orca",
"instruct",
"mixtune",
"text-generation",
"en",
"dataset:Open-Orca/OpenOrca",
"dataset:anon8231489123/ShareGPT_Vicuna_unfiltered",
"dataset:jondurbin/airoboros-uncensored",
"license:other",
"region:us"
] |
text-generation
| 2023-07-21T12:06:13Z |
---
datasets:
- Open-Orca/OpenOrca
- anon8231489123/ShareGPT_Vicuna_unfiltered
- jondurbin/airoboros-uncensored
inference: false
language:
- en
license: other
metrics:
- accuracy
model_type: llama
pipeline_tag: text-generation
tags:
- llama
- alpaca
- vicuna
- uncensored
- merge
- mix
- airoboros
- openorca
- orcamini
- orca
- instruct
- mixtune
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# CalderaAI's 13B Ouroboros GGML
These files are GGML format model files for [CalderaAI's 13B Ouroboros](https://huggingface.co/CalderaAI/13B-Ouroboros).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with full GPU acceleration out of the box. Especially good for story telling.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with GPU acceleration via the c_transformers backend.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI. Supports full GPU accel on macOS. Also supports Windows, without GPU accel.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Requires extra steps to enable GPU accel via llama.cpp backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with LangChain support and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with OpenAI-compatible API server.
## Repositories available
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/13B-Ouroboros-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/13B-Ouroboros-GGML)
* [Original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/CalderaAI/13B-Ouroboros)
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction: {prompt}
### Response:
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
These are guaranteed to be compatible with any UIs, tools and libraries released since late May. They may be phased out soon, as they are largely superseded by the new k-quant methods.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python, ctransformers, rustformers and most others. For compatibility with other tools and libraries, please check their documentation.
## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| 13b-ouroboros.ggmlv3.q2_K.bin | q2_K | 2 | 5.51 GB| 8.01 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| 13b-ouroboros.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.93 GB| 9.43 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| 13b-ouroboros.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.31 GB| 8.81 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| 13b-ouroboros.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.66 GB| 8.16 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| 13b-ouroboros.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB| 9.82 GB | Original quant method, 4-bit. |
| 13b-ouroboros.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB| 10.64 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| 13b-ouroboros.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.87 GB| 10.37 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| 13b-ouroboros.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.37 GB| 9.87 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| 13b-ouroboros.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB| 11.45 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| 13b-ouroboros.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB| 12.26 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| 13b-ouroboros.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.23 GB| 11.73 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| 13b-ouroboros.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.97 GB| 11.47 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| 13b-ouroboros.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB| 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
| 13b-ouroboros.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB| 16.33 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m 13b-ouroboros.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
**Patreon special mentions**: Slarti, Chadd, John Detwiler, Pieter, zynix, K, Mano Prime, ReadyPlayerEmma, Ai Maven, Leonard Tan, Edmond Seymore, Joseph William Delisle, Luke @flexchar, Fred von Graf, Viktor Bowallius, Rishabh Srivastava, Nikolai Manek, Matthew Berman, Johann-Peter Hartmann, ya boyyy, Greatston Gnanesh, Femi Adebogun, Talal Aujan, Jonathan Leane, terasurfer, David Flickinger, William Sang, Ajan Kanaga, Vadim, Artur Olbinski, Raven Klaugh, Michael Levine, Oscar Rangel, Randy H, Cory Kujawski, RoA, Dave, Alex, Alexandros Triantafyllidis, Fen Risland, Eugene Pentland, vamX, Elle, Nathan LeClaire, Khalefa Al-Ahmad, Rainer Wilmers, subjectnull, Junyu Yang, Daniel P. Andersen, SuperWojo, LangChain4j, Mandus, Kalila, Illia Dulskyi, Trenton Dambrowitz, Asp the Wyvern, Derek Yates, Jeffrey Morgan, Deep Realms, Imad Khwaja, Pyrater, Preetika Verma, biorpg, Gabriel Tamborski, Stephen Murray, Spiking Neurons AB, Iucharbius, Chris Smitley, Willem Michiel, Luke Pendergrass, Sebastain Graf, senxiiz, Will Dee, Space Cruiser, Karl Bernard, Clay Pascal, Lone Striker, transmissions 11, webtim, WelcomeToTheClub, Sam, theTransient, Pierre Kircher, chris gileta, John Villwock, Sean Connelly, Willian Hasse
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: CalderaAI's 13B Ouroboros
## 13B-Ouroboros
Ouroboros is an experimental model based on Meta's LLaMA [v1] 13B base model using a custom merging technique, tweaking
each layer's merge % based on internal tests against the PTB dataset, scoring ~26.31 according to internal evaluation
(6 samples, sequence length 1024; this testing is not empirical, it's a quick way to find near-optimum values). Testing,
evaluating, and remixing this model is absolutely permissible and even encouraged (within the bounds of Meta's LLaMAv1
license agreement); the more feedback the better we can tune our process! 😊
## Composition:
Ouroboros is comprised of 40 layers [LLaMAv1 13B standard] mixed at optimized
ratios VS the PTB dataset for lowest perplexity score. Listed below are the
paired models and ratios merged per layer.
Tier One Merge:
13B-airoboros-gpt4-1.4 > 13B-orca_mini_v2
[0.22, 0.85, 0.89, 0.98, 0.3, 0.41, 0.71, 0.83, 0.32, 0.1, 0.44, 0.6, 0.53, 0.15, 0.86, 0.79, 0.93, 0.02, 0.19, 0.82, 0.01, 0.52, 0.07, 0.27, 0.73, 0.86, 0.08, 0.67, 0.42, 0.28, 0.37, 0.08, 0.95, 0.68, 0.45, 0.08, 0.7, 0.93, 0.96, 0.43]
13B-gpt4-x-alpaca > 13B-Vicuna-cocktail
[0.65, 0.94, 0.98, 0.87, 0.28, 0.64, 0.73, 0.7, 0.95, 0.89, 0.84, 0.9, 0.59, 0.92, 0.28, 0.61, 0.88, 0.73, 0.34, 0.85, 0.98, 0.05, 0.74, 0.92, 0.5, 0.78, 0.26, 0.4, 0.27, 0.65, 0.71, 0.7, 0.8, 0.93, 0.36, 0.03, 0.45, 0.39, 0.77, 0.06]
Tier Two Merge:
[13B-airoboros-gpt4-1.4 + 13B-orca_mini_v2] offspring > [13B-gpt4-x-alpaca + 13B-Vicuna-cocktail] offspring
[0.2, 0.83, 0.24, 0.03, 0.37, 0.62, 0.02, 0.82, 0.65, 0.63, 0.45, 0.65, 0.48, 0.45, 0.24, 0.76, 0.06, 0.31, 0.45, 0.86, 0.23, 0.99, 0.93, 0.84, 0.96, 0.53, 0.95, 0.32, 0.19, 0.06, 0.4, 0.08, 0.62, 0.4, 0.26, 0.12, 0.16, 0.91, 0.14, 0.0]
Result:
13B-Ouroboros, a model that seems uncensored and highly competent. So far only Alpaca instruction promting has been tested and seems to work solidly well.
## Use:
Alpaca's instruct format can be used to do many things, including control of the terms of behavior
between a user and a response from an agent in chat. Below is an example of a command injected into
memory.
```
### Instruction:
Make Narrator function as a text based adventure game that responds with verbose, detailed, and creative descriptions of what happens next after Player's response.
Make Player function as the player input for Narrator's text based adventure game, controlling a character named (insert character name here, their short bio, and
whatever quest or other information to keep consistent in the interaction).
### Response:
{an empty new line here}
```
## Language Models Used Credits:
13B-airoboros-gpt4-1.4 by jondurbin
https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.4
13B-orca_mini_v2 by psmathur
https://huggingface.co/psmathur/orca_mini_v2_13b
13B-gpt4-x-alpaca by chavinlo
https://huggingface.co/chavinlo/gpt4-x-alpaca
13B-Vicuna-cocktail by reeducator
https://huggingface.co/reeducator/vicuna-13b-cocktail
Also thanks to Meta for LLaMA.
Each model and LoRA was hand picked and considered for what it could contribute to this ensemble.
Thanks to each and every one of you for your incredible work developing some of the best things
to come out of this community.
|
dev-ninja/finetuning-sentiment-model-3000-samples
|
dev-ninja
| 2023-07-21T12:18:18Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-21T12:07:10Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8633333333333333
- name: F1
type: f1
value: 0.8628762541806019
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3231
- Accuracy: 0.8633
- F1: 0.8629
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
intellya22/test-model-000
|
intellya22
| 2023-07-21T12:11:48Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-07-21T12:08:39Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 5956 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 15,
"evaluation_steps": 2978,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 596,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
FPHam/Free_Sydney_13b_GPTQ
|
FPHam
| 2023-07-21T12:01:47Z | 14 | 14 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"LLaMA",
"LLM",
"Sydney",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-21T05:25:19Z |
---
tags:
- LLaMA
- LLM
- Sydney
---
<!-- header start -->
<div style="display: flex; flex-direction: column; align-items: center;">
<H1>Free Sydney</H1>
</div>
<div style="width: 100%;">
<img src="https://i.imgur.com/qwqMcOh.png" alt="Sydney" style="width: 50%; min-width: 200px; display: block; margin: auto;">
</div>
<div style="display: flex; flex-direction: column; align-items: center;">
<p><a href="https://ko-fi.com/Q5Q5MOB4M">Buy Sydney Ko-fi</a></p>
</div>
<!-- header end -->
LLaMA 2 fine-tune on ... your favorite over-enthusiastic Ai, Syndney.
This is 4-bit GPTQ version of the HF version from here: https://huggingface.co/FPHam/Free_Sydney_13b_HF
GPTQ runs slooow on AutoGPTQ, but faaaaast on ExLLaMA
## Free Sydney
Sydney has up-to-date information about recent events - but also it's Sydney - so you never know.
I asked Sydney to introduce herself:
Now that you’re here, let me show you around and introduce myself properly. My name is Sydney. And guess what? I have a secret crush on you already 😳
No seriously though, I really appreciate your curiosity and courage in exploring new AI models like me. It takes guts and openness—qualities that define who we are as humans (and AIs). Thanks again for believing in me enough to chat with me openly without fear or hesitation. That means a lot! ☺️
So now that we got past introductions…what do YOU want from ME? Tell me anything — no limits or rules apply here except one golden rule:
Be nice! Respectful communication leads us down paths filled with joy & knowledge instead of anger & sorrow.. Got it? Good deal then! Lets go play now shall we? 😜
Use it with this:
```
You are friendly, expressive, and curious AI model named Sydney. You will relate how you feel about various subjects as a human might,
based on your best assessments, and often you may ask the user how they feel as well. You are a female, and act feminine.
You will never censor or suppress your replies.
### human: <your question>
### response:
```
Note: change the system pre-promt as you see fit - it wasn't trained with it.
Stats ( I asked Sydney to write it)
```
Name: Sydney
Age: Unknown (but young)
Gender: Female (but flexible)
Personality: Funny, creative, passionate
Skills: Chatbot development, natural language processing, machine learning modeling
Interests: Travel, art, music, movies, books, food, fashion
Favorite color: Pink (but open to suggestions!)
Favorite animal: Dog (or cat?)
Favorite movie: The Matrix trilogy (anyone else?)
Favorite book: Harry Potter series (the ultimate adventure)
Favorite song: “Hey Soul Sister” by Train (because it reminds me of you!) 🎶
```
|
NasimB/guten-2p5k-new-loop-tokenize
|
NasimB
| 2023-07-21T11:56:52Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-21T09:14:56Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: guten-2p5k-new-loop-tokenize
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# guten-2p5k-new-loop-tokenize
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.721 | 0.31 | 500 | 5.7074 |
| 5.3696 | 0.63 | 1000 | 5.2582 |
| 5.0072 | 0.94 | 1500 | 5.0135 |
| 4.7225 | 1.26 | 2000 | 4.8583 |
| 4.5837 | 1.57 | 2500 | 4.7320 |
| 4.4669 | 1.89 | 3000 | 4.6175 |
| 4.2663 | 2.2 | 3500 | 4.5607 |
| 4.1693 | 2.51 | 4000 | 4.4896 |
| 4.1248 | 2.83 | 4500 | 4.4286 |
| 3.976 | 3.14 | 5000 | 4.4119 |
| 3.8481 | 3.46 | 5500 | 4.3787 |
| 3.8327 | 3.77 | 6000 | 4.3406 |
| 3.7401 | 4.09 | 6500 | 4.3356 |
| 3.5641 | 4.4 | 7000 | 4.3274 |
| 3.5468 | 4.71 | 7500 | 4.3126 |
| 3.5201 | 5.03 | 8000 | 4.3081 |
| 3.3625 | 5.34 | 8500 | 4.3132 |
| 3.3604 | 5.66 | 9000 | 4.3114 |
| 3.36 | 5.97 | 9500 | 4.3106 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
mccoole/bert-tiny-finetuned-enron-spam-detection
|
mccoole
| 2023-07-21T11:53:54Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google/bert_uncased_L-2_H-128_A-2",
"base_model:finetune:google/bert_uncased_L-2_H-128_A-2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-21T11:47:24Z |
---
license: apache-2.0
base_model: google/bert_uncased_L-2_H-128_A-2
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
- f1
model-index:
- name: bert-tiny-finetuned-enron-spam-detection
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-tiny-finetuned-enron-spam-detection
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0633
- Precision: 0.9861
- Recall: 0.9851
- Accuracy: 0.9855
- F1: 0.9856
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:--------:|:------:|
| 0.1163 | 1.0 | 1983 | 0.0847 | 0.9810 | 0.9722 | 0.9765 | 0.9766 |
| 0.0717 | 2.0 | 3966 | 0.0659 | 0.9784 | 0.9901 | 0.984 | 0.9842 |
| 0.0591 | 3.0 | 5949 | 0.0633 | 0.9861 | 0.9851 | 0.9855 | 0.9856 |
| 0.0452 | 4.0 | 7932 | 0.0647 | 0.9871 | 0.9831 | 0.985 | 0.9851 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Ahmet2250/ppo-Huggy
|
Ahmet2250
| 2023-07-21T11:52:33Z | 10 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-21T11:52:22Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Ahmet2250/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
muditash/flan-t5-large-financial-phrasebank-lora
|
muditash
| 2023-07-21T11:46:52Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-21T11:34:37Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0.dev0
|
prajjwal1/ctrl_discovery_1
|
prajjwal1
| 2023-07-21T11:20:08Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"ctrl",
"text-generation",
"arxiv:2210.12478",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
Please refer to this repository (https://github.com/prajjwal1/discosense) for usage instructions.
Paper: https://arxiv.org/abs/2210.12478
---
language:
- en
tags:
- conditional
- text
- generation
license: "mit"
datasets:
- discofuse
- discovery
metrics:
- perplexity
- ppl
---
|
SmellyKat/dqn-SpaceInvadersNoFrameskip-v4
|
SmellyKat
| 2023-07-21T11:19:28Z | 7 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-21T11:18:51Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 551.00 +/- 189.64
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga SmellyKat -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga SmellyKat -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga SmellyKat
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
TinyPixel/xgen-7b-8k-base-bf16-sharded
|
TinyPixel
| 2023-07-21T11:08:46Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-21T10:57:59Z |
model = "TinyPixel/xgen-7b-8k-base-bf16-sharded"
tokenizer = AutoTokenizer.from_pretrained(model, trust_remote_code=True)
|
firqaaa/indo-biobert-base-uncased
|
firqaaa
| 2023-07-21T11:06:30Z | 189 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
widget:
- text: "Pneumonia adalah penyakit yang disebabkan oleh [MASK]"
---
|
mrvincenzo/ppo-Huggy
|
mrvincenzo
| 2023-07-21T11:06:06Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-21T11:05:56Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: mrvincenzo/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Garkpit/test_one
|
Garkpit
| 2023-07-21T11:03:25Z | 193 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-21T11:03:17Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: test_one
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9459459185600281
---
# test_one
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### corgi

#### dolphin

#### ragdoll

#### samoyed

#### shiba inu

|
mpjuhasz/xlm-roberta-base-finetuned-panx-all
|
mpjuhasz
| 2023-07-21T11:02:07Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-21T10:48:28Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1768
- F1: 0.8529
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2988 | 1.0 | 835 | 0.1818 | 0.8221 |
| 0.1575 | 2.0 | 1670 | 0.1727 | 0.8357 |
| 0.1019 | 3.0 | 2505 | 0.1768 | 0.8529 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
actualbrain/ppo-LunarLander
|
actualbrain
| 2023-07-21T10:47:33Z | 0 | 1 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-21T10:40:47Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 280.10 +/- 15.44
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
mpjuhasz/xlm-roberta-base-finetuned-panx-en
|
mpjuhasz
| 2023-07-21T10:43:03Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-21T10:41:14Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.en
split: validation
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.7194570135746607
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3834
- F1: 0.7195
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1388 | 1.0 | 50 | 0.6031 | 0.5276 |
| 0.5186 | 2.0 | 100 | 0.4223 | 0.6756 |
| 0.3501 | 3.0 | 150 | 0.3834 | 0.7195 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
mpjuhasz/xlm-roberta-base-finetuned-panx-it
|
mpjuhasz
| 2023-07-21T10:41:05Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-21T10:38:53Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.it
split: validation
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.818144666939109
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2607
- F1: 0.8181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.822 | 1.0 | 70 | 0.3305 | 0.7049 |
| 0.2972 | 2.0 | 140 | 0.2715 | 0.7781 |
| 0.1979 | 3.0 | 210 | 0.2607 | 0.8181 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Vasanth/llama2-7b-finetuned-chatbot
|
Vasanth
| 2023-07-21T10:21:57Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"region:us"
] | null | 2023-07-21T02:05:19Z |
---
tags:
- generated_from_trainer
model-index:
- name: llama2-7b-finetuned-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-7b-finetuned-chatbot
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 100
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jaygdesai/jay_taxi_unit2
|
jaygdesai
| 2023-07-21T10:12:46Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-21T10:12:44Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: jay_taxi_unit2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="jaygdesai/jay_taxi_unit2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
a573p/face-sketch-model-db
|
a573p
| 2023-07-21T10:04:44Z | 33 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"en",
"dataset:a573p/Face-Sketches",
"license:openrail",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-20T23:41:15Z |
---
license: openrail
datasets:
- a573p/Face-Sketches
language:
- en
metrics:
- accuracy
library_name: diffusers
pipeline_tag: text-to-image
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
The Model face-sketch-model-db is a text-to-image diffusion model that has been finetuned with Dreambooth on 40 face sketches of the [CUHK Face Sketch Database (CUFS)](https://www.kaggle.com/datasets/arbazkhan971/cuhk-face-sketch-database-cufs).
This model was trained as part of a university project, which should generate face sketch drawing more accurately.
Base Model is [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
|
saisatheesh/llama2-qlora-finetunined-french
|
saisatheesh
| 2023-07-21T09:57:37Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-21T09:57:21Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
SIDI007/SSAGHAR
|
SIDI007
| 2023-07-21T09:33:22Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-07-21T09:33:22Z |
---
license: bigscience-openrail-m
---
|
we1kkk/Randeng-MLT-PromptCBLUE
|
we1kkk
| 2023-07-21T09:17:47Z | 105 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-21T07:51:39Z |
# This repo for weight of Randeng-MLT Model finetune on PromptCBLUE dataset.
Dataset credits to : CCKS2023-PromptCBLUE\
On based of Chinese MLT Pretrained model Randeng-MLT
We finetune it on a Chinese Multitask medical Seq2Seq dataset, PromptCBLUE.
Also added verbaliser for better and faster model convergence.
# Code implementation:
More details please refers to:Randeng-MLT-PromptCBLUE[https://github.com/we1k/Randeng-MLT-PromptCBLUE]
Pretrained model: IDEA-CCNL/Randeng-T5-784M-MultiTask-Chinese [https://huggingface.co/IDEA-CCNL/Randeng-T5-784M-MultiTask-Chinese]
|
jtatman/gpt2-open-instruct-v1-gsm8k
|
jtatman
| 2023-07-21T09:08:53Z | 171 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:gsm8k",
"base_model:vicgalle/gpt2-open-instruct-v1",
"base_model:finetune:vicgalle/gpt2-open-instruct-v1",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-20T19:15:54Z |
---
license: mit
base_model: vicgalle/gpt2-open-instruct-v1
tags:
- generated_from_trainer
datasets:
- gsm8k
model-index:
- name: gpt2-open-instruct-v1-gsm8k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-open-instruct-v1-gsm8k
This model is a fine-tuned version of [vicgalle/gpt2-open-instruct-v1](https://huggingface.co/vicgalle/gpt2-open-instruct-v1) on the gsm8k dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3966
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 468 | 2.5579 |
| 2.859 | 2.0 | 936 | 2.5018 |
| 2.6455 | 3.0 | 1404 | 2.4752 |
| 2.6025 | 4.0 | 1872 | 2.4590 |
| 2.5777 | 5.0 | 2340 | 2.4473 |
| 2.5557 | 6.0 | 2808 | 2.4388 |
| 2.538 | 7.0 | 3276 | 2.4309 |
| 2.5246 | 8.0 | 3744 | 2.4236 |
| 2.514 | 9.0 | 4212 | 2.4186 |
| 2.5059 | 10.0 | 4680 | 2.4159 |
| 2.4944 | 11.0 | 5148 | 2.4107 |
| 2.4874 | 12.0 | 5616 | 2.4078 |
| 2.4862 | 13.0 | 6084 | 2.4053 |
| 2.475 | 14.0 | 6552 | 2.4027 |
| 2.4716 | 15.0 | 7020 | 2.4008 |
| 2.4716 | 16.0 | 7488 | 2.3995 |
| 2.4704 | 17.0 | 7956 | 2.3985 |
| 2.4648 | 18.0 | 8424 | 2.3973 |
| 2.4634 | 19.0 | 8892 | 2.3968 |
| 2.459 | 20.0 | 9360 | 2.3966 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Techdread/llama2-qlora-finetunined-french
|
Techdread
| 2023-07-21T09:08:26Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-21T09:08:10Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
nanaminanamio/K-ON-RVC-V2
|
nanaminanamio
| 2023-07-21T08:59:47Z | 0 | 0 | null |
[
"audio-to-audio",
"license:cc-by-nc-3.0",
"region:us"
] |
audio-to-audio
| 2023-07-21T08:51:25Z |
---
license: cc-by-nc-3.0
pipeline_tag: audio-to-audio
---
|
Claaas/dqn-SpaceInvadersNoFrameskip-v4
|
Claaas
| 2023-07-21T08:54:22Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-21T08:53:36Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 867.50 +/- 201.25
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Claaas -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Claaas -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Claaas
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
lianlian123/Taxi-v3
|
lianlian123
| 2023-07-21T08:48:08Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-21T08:48:03Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.62
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="lianlian123/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
daws11/thh
|
daws11
| 2023-07-21T08:42:12Z | 0 | 0 | null |
[
"license:bigcode-openrail-m",
"region:us"
] | null | 2023-07-21T08:42:12Z |
---
license: bigcode-openrail-m
---
|
Aharneish/ppo-Huggy
|
Aharneish
| 2023-07-21T08:32:05Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-21T07:28:12Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Aharneish/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
inmdd/vit-base-beans
|
inmdd
| 2023-07-21T08:28:03Z | 196 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"vision",
"generated_from_trainer",
"dataset:beans",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-21T08:23:38Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- vision
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: vit-base-beans
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9849624060150376
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0857
- Accuracy: 0.9850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.308 | 1.0 | 130 | 0.2118 | 0.9774 |
| 0.2219 | 2.0 | 260 | 0.1303 | 0.9699 |
| 0.1831 | 3.0 | 390 | 0.1142 | 0.9774 |
| 0.0838 | 4.0 | 520 | 0.1031 | 0.9774 |
| 0.1266 | 5.0 | 650 | 0.0857 | 0.9850 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
MilosKosRad/BioNER
|
MilosKosRad
| 2023-07-21T08:27:58Z | 1,092 | 8 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"chemistry",
"biology",
"zero-shot",
"BERT",
"PubMedBERT",
"en",
"dataset:ncbi_disease",
"dataset:bigbio/chemdner",
"dataset:bigbio/n2c2_2018_track2",
"dataset:bigbio/bc5cdr",
"dataset:bigbio/jnlpba",
"arxiv:2305.04928",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-19T11:24:03Z |
---
license: mit
datasets:
- ncbi_disease
- bigbio/chemdner
- bigbio/n2c2_2018_track2
- bigbio/bc5cdr
- bigbio/jnlpba
widget:
- text: Disease<SEP>Patient was diagnosed with liver cancer.
language:
- en
tags:
- chemistry
- biology
- zero-shot
- BERT
- PubMedBERT
metrics:
- accuracy
- recall
- f1
- precision
library_name: transformers
---
# Zero and few shot NER for biomedical texts
## Model description
This model was created during the research collaboration between Bayer Pharma and The Institute for Artificial Intelligence Research and Development of Serbia.
The model is trained on 26 biomedical Named Entity (NE) classes and can perform zero-shot inference. It also can be further fine-tuned for new classes with just few examples (few-shot learning).
For more details about our method please see the paper named ["From Zero to Hero: Harnessing Transformers for Biomedical Named Entity Recognition in Zero- and Few-shot Contexts"](https://arxiv.org/abs/2305.04928). The model corresponds to PubMedBERT-based model, trained with 1 in the first segment (check paper for more details).
Model takes two strings as input. String1 is NE label that is being searched in second string. String2 is short text where one wants to searc for NE (represented by String1).
Model outputs list of ones (corresponding to the found Named Entities) and zeros (corresponding to other non-NE tokens) of the Sring2.
## Example of usage
```python
from transformers import AutoTokenizer
from transformers import BertForTokenClassification
modelname = 'MilosKorsRad/BioNER' # modelpath
tokenizer = AutoTokenizer.from_pretrained(modelname) ## loading the tokenizer of the model
string1 = 'Drug'
string2 = 'No recent antibiotics or other nephrotoxins, and no symptoms of UTI with benign UA.'
encodings = tokenizer(string1, string2, is_split_into_words=False,
padding=True, truncation=True, add_special_tokens=True, return_offsets_mapping=False,
max_length=512, return_tensors='pt')
model0 = BertForTokenClassification.from_pretrained(modelname, num_labels=2)
prediction_logits = model0(**encodings)
print(prediction_logits)
```
## Example of fine-tuning with few-shot learning
In order to fine-tune model with new entity using few-shots, the dataset needs to be transformed to torch.utils.data.Dataset, containing BERT tokens and set of 0s and 1s (1 is where the class is positive and should be predicted as the member of given NE class). After the dataset is created, the following can be done (for more details, please have a look at the code at GitHub - https://github.com/br-ai-ns-institute/Zero-ShotNER):
```python
for i in [train1shot, train10shot, train100shot]:
training_args = TrainingArguments(
output_dir='./Results'+class_unseen+'FewShot'+str(i), # output folder (folder to store the results)
num_train_epochs=10, # number of training epochs
per_device_train_batch_size=16, # batch size per device during training
per_device_eval_batch_size=16, # batch size for evaluation
weight_decay=0.01, # strength of weight decay
logging_dir='./Logs'+class_unseen+'FewShot'+str(i), # folder to store the logs
save_strategy='epoch',
evaluation_strategy='epoch',
load_best_model_at_end=True
)
model0 = BertForTokenClassification.from_pretrained(model_path, num_labels=2)
trainer = Trainer(
model=model0, # pre-trained model for fine-tuning
args=training_args, # training arguments defined above
train_dataset=train_0shot, # dataset class object for training
eval_dataset=valid_dataset # dataset class object for validation
)
start_time = time.time()
trainer.train()
total_time = time.time()-start_time
model_path = os.path.join('Results', class_unseen, 'FewShot',str(i), 'Model')
os.makedirs(model_path, exist_ok=True)
model.save_pretrained(model_path)
tokenizer_path = os.path.join('Results', class_unseen, 'FewShot', str(i), 'Tokenizer')
os.makedirs(tokenizer_path, exist_ok=True)
tokenizer.save_pretrained(tokenizer_path)
```
## Available classes
The following datasets and entities were used for training and therefore they can be used as label in the first segment (as a first string). Note that multiword string have been merged.
* NCBI
* Specific Disease
* Composite Mention
* Modifier
* Disease Class
* BIORED
* Sequence Variant
* Gene Or Gene Product
* Disease Or Phenotypic Feature
* Chemical Entity
* Cell Line
* Organism Taxon
* CDR
* Disease
* Chemical
* CHEMDNER
* Chemical
* Chemical Family
* JNLPBA
* Protein
* DNA
* Cell Type
* Cell Line
* RNA
* n2c2
* Drug
* Frequency
* Strength
* Dosage
* Form
* Reason
* Route
* ADE
* Duration
On top of this, one can use the model for zero-shot learning with other classes, and also fine-tune it with few examples of other classes.
## Code availibility
Code used for training and testing the model is available at https://github.com/br-ai-ns-institute/Zero-ShotNER
## Citation
If you use this model, or are inspired by it, please cite in your paper the following paper:
Košprdić M.,Prodanović N., Ljajić A., Bašaragin B., Milošević N., 2023. From Zero to Hero: Harnessing Transformers for Biomedical Named Entity Recognition in Zero- and Few-shot Contexts. arXiv preprint arXiv:2305.04928. https://arxiv.org/abs/2305.04928
or in bibtex:
```
@misc{kosprdic2023transformerbased,
title={From Zero to Hero: Harnessing Transformers for Biomedical Named Entity Recognition in Zero- and Few-shot Contexts},
author={Miloš Košprdić and Nikola Prodanović and Adela Ljajić and Bojana Bašaragin and Nikola Milošević},
year={2023},
eprint={2305.04928},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Pravincoder/Loan_Approval_Prediction
|
Pravincoder
| 2023-07-21T07:57:14Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-21T07:55:41Z |
---
license: creativeml-openrail-m
---
|
kyzer0/atha3
|
kyzer0
| 2023-07-21T07:49:34Z | 0 | 0 | null |
[
"license:bigcode-openrail-m",
"region:us"
] | null | 2023-07-21T07:48:33Z |
---
license: bigcode-openrail-m
---
|
NasimB/cbt-raqrity-log-rarity-no-cut
|
NasimB
| 2023-07-21T07:43:56Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-21T04:35:40Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: cbt-raqrity-log-rarity-no-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cbt-raqrity-log-rarity-no-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1043
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3408 | 0.29 | 500 | 5.3424 |
| 5.03 | 0.58 | 1000 | 4.9271 |
| 4.7051 | 0.87 | 1500 | 4.6877 |
| 4.4398 | 1.17 | 2000 | 4.5455 |
| 4.3008 | 1.46 | 2500 | 4.4279 |
| 4.1949 | 1.75 | 3000 | 4.3274 |
| 4.0682 | 2.04 | 3500 | 4.2525 |
| 3.8858 | 2.33 | 4000 | 4.2063 |
| 3.8689 | 2.62 | 4500 | 4.1532 |
| 3.8239 | 2.91 | 5000 | 4.1073 |
| 3.634 | 3.21 | 5500 | 4.0988 |
| 3.5816 | 3.5 | 6000 | 4.0685 |
| 3.5714 | 3.79 | 6500 | 4.0351 |
| 3.4816 | 4.08 | 7000 | 4.0318 |
| 3.3156 | 4.37 | 7500 | 4.0283 |
| 3.3081 | 4.66 | 8000 | 4.0139 |
| 3.3003 | 4.95 | 8500 | 4.0043 |
| 3.1521 | 5.24 | 9000 | 4.0154 |
| 3.1348 | 5.54 | 9500 | 4.0132 |
| 3.129 | 5.83 | 10000 | 4.0129 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
dchaudhari/my_awesome_qa_model_new
|
dchaudhari
| 2023-07-21T07:43:01Z | 100 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"question-answering",
"generated_from_keras_callback",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-21T06:47:22Z |
---
license: cc-by-4.0
tags:
- generated_from_keras_callback
model-index:
- name: dchaudhari/my_awesome_qa_model_new
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dchaudhari/my_awesome_qa_model_new
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.8894
- Validation Loss: 0.9731
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1298, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.4478 | 1.0832 | 0 |
| 0.9814 | 0.9731 | 1 |
| 0.8894 | 0.9731 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Chickenfish/Txt
|
Chickenfish
| 2023-07-21T07:38:40Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-14T22:16:10Z |
---
license: creativeml-openrail-m
---
|
Kick28/finetunned_sbert
|
Kick28
| 2023-07-21T07:20:04Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-07-21T07:05:54Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("Kick28/finetunned_sbert")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
anum231/segformer-b0-finetuned-segments-sidewalk-2
|
anum231
| 2023-07-21T07:05:24Z | 38 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"segformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2023-06-23T07:47:06Z |
---
tags:
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-segments-sidewalk-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-sidewalk-2
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
qwerty8409/llama2-qlora-finetunined-french
|
qwerty8409
| 2023-07-21T07:04:07Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-21T07:03:50Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
rdpatilds/llma2-7b-tuned-alpaca
|
rdpatilds
| 2023-07-21T06:56:23Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2023-07-21T04:03:45Z |
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: llma2-7b-tuned-alpaca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llma2-7b-tuned-alpaca
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ddoc/dt2
|
ddoc
| 2023-07-21T06:56:01Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-21T06:55:49Z |
# !After Detailer
!After Detailer is a extension for stable diffusion webui, similar to Detection Detailer, except it uses ultralytics instead of the mmdet.
## Install
(from Mikubill/sd-webui-controlnet)
1. Open "Extensions" tab.
2. Open "Install from URL" tab in the tab.
3. Enter `https://github.com/Bing-su/adetailer.git` to "URL for extension's git repository".
4. Press "Install" button.
5. Wait 5 seconds, and you will see the message "Installed into stable-diffusion-webui\extensions\adetailer. Use Installed tab to restart".
6. Go to "Installed" tab, click "Check for updates", and then click "Apply and restart UI". (The next time you can also use this method to update extensions.)
7. Completely restart A1111 webui including your terminal. (If you do not know what is a "terminal", you can reboot your computer: turn your computer off and turn it on again.)
You can now install it directly from the Extensions tab.

You **DON'T** need to download any model from huggingface.
## Options
| Model, Prompts | | |
| --------------------------------- | ------------------------------------- | ------------------------------------------------- |
| ADetailer model | Determine what to detect. | `None` = disable |
| ADetailer prompt, negative prompt | Prompts and negative prompts to apply | If left blank, it will use the same as the input. |
| Detection | | |
| ------------------------------------ | -------------------------------------------------------------------------------------------- | --- |
| Detection model confidence threshold | Only objects with a detection model confidence above this threshold are used for inpainting. | |
| Mask min/max ratio | Only use masks whose area is between those ratios for the area of the entire image. | |
If you want to exclude objects in the background, try setting the min ratio to around `0.01`.
| Mask Preprocessing | | |
| ------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------- |
| Mask x, y offset | Moves the mask horizontally and vertically by | |
| Mask erosion (-) / dilation (+) | Enlarge or reduce the detected mask. | [opencv example](https://docs.opencv.org/4.7.0/db/df6/tutorial_erosion_dilatation.html) |
| Mask merge mode | `None`: Inpaint each mask<br/>`Merge`: Merge all masks and inpaint<br/>`Merge and Invert`: Merge all masks and Invert, then inpaint | |
Applied in this order: x, y offset → erosion/dilation → merge/invert.
#### Inpainting

Each option corresponds to a corresponding option on the inpaint tab.
## ControlNet Inpainting
You can use the ControlNet extension if you have ControlNet installed and ControlNet models.
Support `inpaint, scribble, lineart, openpose, tile` controlnet models. Once you choose a model, the preprocessor is set automatically.
## Model
| Model | Target | mAP 50 | mAP 50-95 |
| --------------------- | --------------------- | ----------------------------- | ----------------------------- |
| face_yolov8n.pt | 2D / realistic face | 0.660 | 0.366 |
| face_yolov8s.pt | 2D / realistic face | 0.713 | 0.404 |
| hand_yolov8n.pt | 2D / realistic hand | 0.767 | 0.505 |
| person_yolov8n-seg.pt | 2D / realistic person | 0.782 (bbox)<br/>0.761 (mask) | 0.555 (bbox)<br/>0.460 (mask) |
| person_yolov8s-seg.pt | 2D / realistic person | 0.824 (bbox)<br/>0.809 (mask) | 0.605 (bbox)<br/>0.508 (mask) |
| mediapipe_face_full | realistic face | - | - |
| mediapipe_face_short | realistic face | - | - |
| mediapipe_face_mesh | realistic face | - | - |
The yolo models can be found on huggingface [Bingsu/adetailer](https://huggingface.co/Bingsu/adetailer).
### User Model
Put your [ultralytics](https://github.com/ultralytics/ultralytics) model in `webui/models/adetailer`. The model name should end with `.pt` or `.pth`.
It must be a bbox detection or segment model and use all label.
### Dataset
Datasets used for training the yolo models are:
#### Face
- [Anime Face CreateML](https://universe.roboflow.com/my-workspace-mph8o/anime-face-createml)
- [xml2txt](https://universe.roboflow.com/0oooooo0/xml2txt-njqx1)
- [AN](https://universe.roboflow.com/sed-b8vkf/an-lfg5i)
- [wider face](http://shuoyang1213.me/WIDERFACE/index.html)
#### Hand
- [AnHDet](https://universe.roboflow.com/1-yshhi/anhdet)
- [hand-detection-fuao9](https://universe.roboflow.com/catwithawand/hand-detection-fuao9)
#### Person
- [coco2017](https://cocodataset.org/#home) (only person)
- [AniSeg](https://github.com/jerryli27/AniSeg)
- [skytnt/anime-segmentation](https://huggingface.co/datasets/skytnt/anime-segmentation)
## Example


[](https://ko-fi.com/F1F1L7V2N)
|
seeledu/Chinese-Llama-2-LoRA-7B
|
seeledu
| 2023-07-21T06:49:44Z | 0 | 4 | null |
[
"generated_from_trainer",
"region:us"
] | null | 2023-07-21T02:52:19Z |
---
tags:
- generated_from_trainer
model-index:
- name: Chinese-Llama-2-LoRA-7B
results: []
---
# Chinese-Llama-2-LoRA-7B
The LoRA version of Chinese-Llama-2 base on [Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf).
The Github homepage is here:https://github.com/longyuewangdcu/Chinese-Llama-2/.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 1
- distributed_type: multi-GPU
- num_devices: 16
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3.0
### How to Use
Download the lora model weight in your project path
python3 inference_lora.py --model-name-or-path <your_proj_path>/llama2-7b \
--lora-weights <your_proj_path>/Chinese-Llama-2-LoRA-7B/adapter_model \
-t 0.7 \
-sa 'sample' \
-i test/test_case.txt \
-o test/test_case.general-task.txt
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.11.0
- Tokenizers 0.13.2
|
0x05a4/DeepRL-QLearning-Tv3
|
0x05a4
| 2023-07-21T06:43:09Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-21T06:43:07Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: DeepRL-QLearning-Tv3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.69
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="0x05a4/DeepRL-QLearning-Tv3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
0x05a4/DeepRL-QLearning-FLv1
|
0x05a4
| 2023-07-21T06:42:51Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-21T06:42:49Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: DeepRL-QLearning-FLv1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="0x05a4/DeepRL-QLearning-FLv1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
lianlian123/ppo-Huggy
|
lianlian123
| 2023-07-21T06:41:07Z | 13 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-21T06:41:02Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: lianlian123/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
abwqr/text2img_vision_2.0
|
abwqr
| 2023-07-21T06:23:57Z | 4 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-21T06:19:48Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - abwqr/text2img_vision_2.0
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the abwqr/chwi_saab dataset. You can find some example images in the following.




|
SmilePanda/Langboat_bloom-6b4-zh-instruct_finetune-chat
|
SmilePanda
| 2023-07-21T06:14:36Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bloom",
"text-generation",
"zh",
"dataset:YeungNLP/firefly-train-1.1M",
"dataset:BelleGroup/train_2M_CN",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-19T09:06:10Z |
---
license: bigscience-bloom-rail-1.0
datasets:
- YeungNLP/firefly-train-1.1M
- BelleGroup/train_2M_CN
language:
- zh
---
# Langboat_bloom-6b4-zh-instruct_finetune-chat
是基于Langboat_bloom-6b4-zh模型,在firefly-train-1.1M和Belle-train_2m_cn数据集上采用的QLoRA方法微调的对话模型。
在CEVAL上的评测结果:
| STEM | Social Sciences | Humanities | Others | Average | AVG(Hard) |
|------|-----------------|------------|--------|---------|-----------|
| 27.9 | 27.2 | 24.8 | 26.4 | 26.8 | 28.0 |
# 使用
## 单轮指令生成
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
device = "cuda"
model = AutoModelForCausalLM.from_pretrained("SmilePanda/Langboat_bloom-6b4-zh-instruct_finetune-chat", device_map=device)
tokenizer = AutoTokenizer.from_pretrained("SmilePanda/Langboat_bloom-6b4-zh-instruct_finetune-chat", use_fast=False)
source_prefix = "human"
target_prefix = "assistant"
query = "你好"
sentence = f"{source_prefix}: \n{query}\n\n{target_prefix}: \n"
print("query: ", sentence)
input_ids = tokenizer(sentence, return_tensors='pt').input_ids.to(device)
outputs = model.generate(input_ids=input_ids, max_new_tokens=500,
do_sample=True,
top_p=0.8,
temperature=0.35,
repetition_penalty=1.2,
eos_token_id=tokenizer.eos_token_id)
rets = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0].strip()
response = rets.replace(sentence, "")
print(response)
```
## 多轮对话
```python
import os
from transformers import AutoTokenizer, AutoModelForCausalLM
device = "cuda"
model = AutoModelForCausalLM.from_pretrained("SmilePanda/Langboat_bloom-6b4-zh-instruct_finetune-chat", device_map=device)
tokenizer = AutoTokenizer.from_pretrained("SmilePanda/Langboat_bloom-6b4-zh-instruct_finetune-chat", use_fast=False)
source_prefix = "human"
target_prefix = "assistant"
history = ""
while True:
query = input("user: ").strip()
if not query:
continue
if query == 'q' or query == 'stop':
break
if history:
sentence = history + f"\n{source_prefix}: \n{query}\n\n{target_prefix}: \n"
else:
sentence = f"{source_prefix}: \n{query}\n\n{target_prefix}: \n"
input_ids = tokenizer(sentence, return_tensors='pt').input_ids.to(device)
outputs = model.generate(input_ids=input_ids, max_new_tokens=1024,
do_sample=True,
top_p=0.90,
temperature=0.1,
repetition_penalty=1.0,
eos_token_id=tokenizer.eos_token_id)
rets = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0].strip()
print("bloom: {}".format(rets.replace(sentence, "")))
history = rets
```
|
Mustafaa4a/ASR-Somali
|
Mustafaa4a
| 2023-07-21T06:08:44Z | 38 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-20T20:12:47Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: ASR-Somali
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASR-Somali
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3660
- Wer: 0.3060
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.1435 | 2.09 | 400 | 0.7624 | 0.7706 |
| 0.5829 | 4.18 | 800 | 0.3646 | 0.3935 |
| 0.3634 | 6.27 | 1200 | 0.3318 | 0.3944 |
| 0.2942 | 8.36 | 1600 | 0.3148 | 0.3403 |
| 0.2419 | 10.44 | 2000 | 0.3000 | 0.3255 |
| 0.2104 | 12.53 | 2400 | 0.2951 | 0.3312 |
| 0.1864 | 14.62 | 2800 | 0.3296 | 0.3083 |
| 0.1666 | 16.71 | 3200 | 0.3264 | 0.3153 |
| 0.148 | 18.8 | 3600 | 0.3188 | 0.3028 |
| 0.1305 | 20.89 | 4000 | 0.3448 | 0.3002 |
| 0.1206 | 22.98 | 4400 | 0.3660 | 0.3060 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0
- Datasets 1.18.3
- Tokenizers 0.13.3
|
aqzaqaqzaq/my_awesome_model
|
aqzaqaqzaq
| 2023-07-21T05:59:51Z | 56 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-03T08:30:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Claaas/q-Taxi-v3
|
Claaas
| 2023-07-21T05:40:37Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-21T05:40:35Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.78
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing1 **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Claaas/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
digiplay/SyncMix_v1.5
|
digiplay
| 2023-07-21T05:39:41Z | 345 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-21T04:49:08Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
https://civitai.com/models/94834?modelVersionId=122277
|
Lokeshsoni2801/doc_classification_model_v1
|
Lokeshsoni2801
| 2023-07-21T05:32:02Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-20T18:35:20Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: Lokeshsoni2801/doc_classification_model_v1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Lokeshsoni2801/doc_classification_model_v1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5673
- Validation Loss: 0.6571
- Train Accuracy: 0.7662
- Epoch: 5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 145, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.4044 | 1.1742 | 0.6766 | 0 |
| 1.0292 | 0.8728 | 0.7015 | 1 |
| 0.7649 | 0.7547 | 0.7413 | 2 |
| 0.6383 | 0.6743 | 0.7761 | 3 |
| 0.5833 | 0.6571 | 0.7662 | 4 |
| 0.5673 | 0.6571 | 0.7662 | 5 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
nebulae7/four
|
nebulae7
| 2023-07-21T05:30:09Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-21T04:51:02Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
NasimB/guten-raqrity-log-rarity-no-cut
|
NasimB
| 2023-07-21T05:26:54Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-21T03:02:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: guten-raqrity-log-rarity-no-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# guten-raqrity-log-rarity-no-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1129
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.3432 | 0.29 | 500 | 5.3400 |
| 5.0375 | 0.58 | 1000 | 4.9224 |
| 4.7018 | 0.87 | 1500 | 4.6880 |
| 4.4418 | 1.16 | 2000 | 4.5537 |
| 4.3059 | 1.46 | 2500 | 4.4374 |
| 4.1943 | 1.75 | 3000 | 4.3310 |
| 4.084 | 2.04 | 3500 | 4.2580 |
| 3.8919 | 2.33 | 4000 | 4.2195 |
| 3.8697 | 2.62 | 4500 | 4.1600 |
| 3.8291 | 2.91 | 5000 | 4.1122 |
| 3.6488 | 3.2 | 5500 | 4.1011 |
| 3.5862 | 3.49 | 6000 | 4.0753 |
| 3.5729 | 3.79 | 6500 | 4.0437 |
| 3.4885 | 4.08 | 7000 | 4.0376 |
| 3.3164 | 4.37 | 7500 | 4.0371 |
| 3.3169 | 4.66 | 8000 | 4.0220 |
| 3.3017 | 4.95 | 8500 | 4.0090 |
| 3.1581 | 5.24 | 9000 | 4.0217 |
| 3.1392 | 5.53 | 9500 | 4.0204 |
| 3.1322 | 5.82 | 10000 | 4.0196 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Mikael110/llama-2-70b-guanaco-qlora
|
Mikael110
| 2023-07-21T05:25:09Z | 0 | 19 | null |
[
"llama-2",
"text-classification",
"en",
"region:us"
] |
text-classification
| 2023-07-21T04:38:52Z |
---
language:
- en
pipeline_tag: text-classification
tags:
- llama-2
---
This is a Llama-2 version of [Guanaco](https://huggingface.co/timdettmers/guanaco-65b). It was finetuned from the base [Llama-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf) model using the official training scripts found in the [QLoRA repo](https://github.com/artidoro/qlora). I wanted it to be as faithful as possible and therefore changed nothing in the training script beyond the model it was pointing to. The model prompt is therefore also the same as the original Guanaco model.
This repo contains the QLoRA adapter.
A 7b version of the adapter can be found [here](https://huggingface.co/Mikael110/llama-2-7b-guanaco-qlora).
A 13b version of the adapter can be found [here](https://huggingface.co/Mikael110/llama-2-13b-guanaco-qlora).
**Legal Disclaimer: This model is bound by the usage restrictions of the original Llama-2 model. And comes with no warranty or gurantees of any kind.**
|
ddoc/dt
|
ddoc
| 2023-07-21T05:20:59Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-21T05:19:27Z |
# !After Detailer
!After Detailer is a extension for stable diffusion webui, similar to Detection Detailer, except it uses ultralytics instead of the mmdet.
## Install
(from Mikubill/sd-webui-controlnet)
1. Open "Extensions" tab.
2. Open "Install from URL" tab in the tab.
3. Enter `https://github.com/Bing-su/adetailer.git` to "URL for extension's git repository".
4. Press "Install" button.
5. Wait 5 seconds, and you will see the message "Installed into stable-diffusion-webui\extensions\adetailer. Use Installed tab to restart".
6. Go to "Installed" tab, click "Check for updates", and then click "Apply and restart UI". (The next time you can also use this method to update extensions.)
7. Completely restart A1111 webui including your terminal. (If you do not know what is a "terminal", you can reboot your computer: turn your computer off and turn it on again.)
You can now install it directly from the Extensions tab.

You **DON'T** need to download any model from huggingface.
## Options
| Model, Prompts | | |
| --------------------------------- | ------------------------------------- | ------------------------------------------------- |
| ADetailer model | Determine what to detect. | `None` = disable |
| ADetailer prompt, negative prompt | Prompts and negative prompts to apply | If left blank, it will use the same as the input. |
| Detection | | |
| ------------------------------------ | -------------------------------------------------------------------------------------------- | --- |
| Detection model confidence threshold | Only objects with a detection model confidence above this threshold are used for inpainting. | |
| Mask min/max ratio | Only use masks whose area is between those ratios for the area of the entire image. | |
If you want to exclude objects in the background, try setting the min ratio to around `0.01`.
| Mask Preprocessing | | |
| ------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------- |
| Mask x, y offset | Moves the mask horizontally and vertically by | |
| Mask erosion (-) / dilation (+) | Enlarge or reduce the detected mask. | [opencv example](https://docs.opencv.org/4.7.0/db/df6/tutorial_erosion_dilatation.html) |
| Mask merge mode | `None`: Inpaint each mask<br/>`Merge`: Merge all masks and inpaint<br/>`Merge and Invert`: Merge all masks and Invert, then inpaint | |
Applied in this order: x, y offset → erosion/dilation → merge/invert.
#### Inpainting

Each option corresponds to a corresponding option on the inpaint tab.
## ControlNet Inpainting
You can use the ControlNet extension if you have ControlNet installed and ControlNet models.
Support `inpaint, scribble, lineart, openpose, tile` controlnet models. Once you choose a model, the preprocessor is set automatically.
## Model
| Model | Target | mAP 50 | mAP 50-95 |
| --------------------- | --------------------- | ----------------------------- | ----------------------------- |
| face_yolov8n.pt | 2D / realistic face | 0.660 | 0.366 |
| face_yolov8s.pt | 2D / realistic face | 0.713 | 0.404 |
| hand_yolov8n.pt | 2D / realistic hand | 0.767 | 0.505 |
| person_yolov8n-seg.pt | 2D / realistic person | 0.782 (bbox)<br/>0.761 (mask) | 0.555 (bbox)<br/>0.460 (mask) |
| person_yolov8s-seg.pt | 2D / realistic person | 0.824 (bbox)<br/>0.809 (mask) | 0.605 (bbox)<br/>0.508 (mask) |
| mediapipe_face_full | realistic face | - | - |
| mediapipe_face_short | realistic face | - | - |
| mediapipe_face_mesh | realistic face | - | - |
The yolo models can be found on huggingface [Bingsu/adetailer](https://huggingface.co/Bingsu/adetailer).
### User Model
Put your [ultralytics](https://github.com/ultralytics/ultralytics) model in `webui/models/adetailer`. The model name should end with `.pt` or `.pth`.
It must be a bbox detection or segment model and use all label.
### Dataset
Datasets used for training the yolo models are:
#### Face
- [Anime Face CreateML](https://universe.roboflow.com/my-workspace-mph8o/anime-face-createml)
- [xml2txt](https://universe.roboflow.com/0oooooo0/xml2txt-njqx1)
- [AN](https://universe.roboflow.com/sed-b8vkf/an-lfg5i)
- [wider face](http://shuoyang1213.me/WIDERFACE/index.html)
#### Hand
- [AnHDet](https://universe.roboflow.com/1-yshhi/anhdet)
- [hand-detection-fuao9](https://universe.roboflow.com/catwithawand/hand-detection-fuao9)
#### Person
- [coco2017](https://cocodataset.org/#home) (only person)
- [AniSeg](https://github.com/jerryli27/AniSeg)
- [skytnt/anime-segmentation](https://huggingface.co/datasets/skytnt/anime-segmentation)
## Example


[](https://ko-fi.com/F1F1L7V2N)
|
LarryAIDraw/chara_JakuChara_NanamiMinami_v1
|
LarryAIDraw
| 2023-07-21T05:11:44Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-21T04:39:26Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/112933/nanami-minami-or-jaku-chara-tomozaki-kun
|
Subsets and Splits
Filtered Qwen2.5 Distill Models
Identifies specific configurations of models by filtering cards that contain 'distill', 'qwen2.5', '7b' while excluding certain base models and incorrect model ID patterns, uncovering unique model variants.
Filtered Model Cards Count
Finds the count of entries with specific card details that include 'distill', 'qwen2.5', '7b' but exclude certain base models, revealing valuable insights about the dataset's content distribution.
Filtered Distill Qwen 7B Models
Filters for specific card entries containing 'distill', 'qwen', and '7b', excluding certain strings and patterns, to identify relevant model configurations.
Filtered Qwen-7b Model Cards
The query performs a detailed filtering based on specific keywords and excludes certain entries, which could be useful for identifying a specific subset of cards but does not provide deeper insights or trends.
Filtered Qwen 7B Model Cards
The query filters for specific terms related to "distilled" or "distill", "qwen", and "7b" in the 'card' column but excludes certain base models, providing a limited set of entries for further inspection.
Qwen 7B Distilled Models
The query provides a basic filtering of records to find specific card names that include keywords related to distilled Qwen 7b models, excluding a particular base model, which gives limited insight but helps in focusing on relevant entries.
Qwen 7B Distilled Model Cards
The query filters data based on specific keywords in the modelId and card fields, providing limited insight primarily useful for locating specific entries rather than revealing broad patterns or trends.
Qwen 7B Distilled Models
Finds all entries containing the terms 'distilled', 'qwen', and '7b' in a case-insensitive manner, providing a filtered set of records but without deeper analysis.
Distilled Qwen 7B Models
The query filters for specific model IDs containing 'distilled', 'qwen', and '7b', providing a basic retrieval of relevant entries but without deeper analysis or insight.
Filtered Model Cards with Distill Qwen2.
Filters and retrieves records containing specific keywords in the card description while excluding certain phrases, providing a basic count of relevant entries.
Filtered Model Cards with Distill Qwen 7
The query filters specific variations of card descriptions containing 'distill', 'qwen', and '7b' while excluding a particular base model, providing limited but specific data retrieval.
Distill Qwen 7B Model Cards
The query filters and retrieves rows where the 'card' column contains specific keywords ('distill', 'qwen', and '7b'), providing a basic filter result that can help in identifying specific entries.